KR20140073231A - Image display apparatus, and method for operating the same - Google Patents


Info

Publication number
KR20140073231A
Authority
KR
South Korea
Prior art keywords
image
display
viewpoint
unit
displayed
Prior art date
Application number
KR1020120141227A
Other languages
Korean (ko)
Inventor
조재영
Original Assignee
엘지전자 주식회사 (LG Electronics Inc.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 엘지전자 주식회사 (LG Electronics Inc.)
Priority to KR1020120141227A priority Critical patent/KR20140073231A/en
Publication of KR20140073231A publication Critical patent/KR20140073231A/en

Abstract

A video display apparatus according to an embodiment of the present invention includes a display unit for displaying a plurality of viewpoint images; a lens unit disposed on a front surface of the display unit and separating the plurality of viewpoint images according to directions; and a control unit for computing a specific region of the display unit corresponding to a dead zone and controlling the display unit so that a first viewpoint image and a second viewpoint image different from the first viewpoint image are alternately displayed at a predetermined period in the specific region.

Description

[0001] The present invention relates to an image display apparatus and a method of operating the same.

BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to an image display apparatus and an operation method thereof, and more particularly, to an image display apparatus and an operation method thereof that can improve user convenience when displaying a stereoscopic image by a non-eyeglass system.

A video display device is a device having a function of displaying an image that a user can view. The user can view the broadcast through the video display device. A video display device displays a broadcast selected by a user among broadcast signals transmitted from a broadcast station on a display. Currently, broadcasting is changing from analog broadcasting to digital broadcasting around the world.

Digital broadcasting refers to broadcasting in which digital video and audio signals are transmitted. Compared to analog broadcasting, digital broadcasting is strong against external noise and has a small data loss, is advantageous for error correction, has a high resolution, and provides a clear screen. Also, unlike analog broadcasting, digital broadcasting is capable of bidirectional service.

An object of the present invention is to provide an image display apparatus and an operation method thereof that enable a viewer to easily recognize a dead zone in stereoscopic image display by the non-eyeglass system, thereby improving user convenience.

According to an aspect of the present invention, there is provided an image display apparatus including: a display for displaying a plurality of viewpoint images; a lens unit disposed on a front surface of the display and separating the plurality of viewpoint images according to directions; and a control unit for controlling the display so that a first viewpoint image and a second viewpoint image different from the first viewpoint image are alternately displayed at a predetermined period in a specific region of the display corresponding to a dead zone.

According to another aspect of the present invention, there is provided an image display apparatus including: a display for displaying a plurality of viewpoint images; a lens unit disposed on a front surface of the display; and a controller for controlling the plurality of viewpoint images recognized in a dead zone to be varied so that the same viewpoint image is displayed.

According to another aspect of the present invention, there is provided a method of operating an image display apparatus, including: displaying a plurality of viewpoint images on a display; computing a specific area of the display, corresponding to a dead zone, on which a first viewpoint image is displayed; and alternately displaying, in the computed area, the first viewpoint image and a second viewpoint image different from the first viewpoint image at a predetermined period.
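The operating method above can be illustrated with a small sketch. Everything here is an assumption made for illustration (the number of views, the zone geometry, the alternation period, and all names such as `frame_to_show` are hypothetical), not the patent's actual implementation:

```python
# Illustrative sketch of the claimed method: given the viewer's lateral
# position, decide whether it falls in a dead zone between viewpoint zones,
# and if so alternate two different viewpoint images at a fixed period.
# All constants and names below are assumptions for illustration only.

NUM_VIEWS = 4          # assumed number of viewpoint images
SWEET_SPOT_WIDTH = 65  # assumed width (mm) of one viewpoint zone

def viewpoint_at(viewer_x_mm):
    """Index of the viewpoint image a viewer at x sees (zones repeat)."""
    return int(viewer_x_mm // SWEET_SPOT_WIDTH) % NUM_VIEWS

def in_dead_zone(viewer_x_mm):
    """A dead zone is assumed where adjacent viewpoint zones meet."""
    offset = viewer_x_mm % SWEET_SPOT_WIDTH
    return offset < 5 or offset > SWEET_SPOT_WIDTH - 5

def frame_to_show(viewer_x_mm, frame_no, period=30):
    """Outside the dead zone, show the normal viewpoint image; inside it,
    alternate the first and a second viewpoint image every `period` frames
    so the viewer notices the flicker and moves."""
    first = viewpoint_at(viewer_x_mm)
    if not in_dead_zone(viewer_x_mm):
        return first
    second = (first + 1) % NUM_VIEWS
    return first if (frame_no // period) % 2 == 0 else second
```

A viewer standing squarely in a zone sees one stable viewpoint image; a viewer at a zone boundary sees the image toggle, which is the cue the invention relies on.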

According to the embodiment of the present invention, it is possible to easily recognize that the viewer is located in the dead zone, and the usability of the user of the video display device can be improved.

FIG. 1 is a view showing the appearance of an image display apparatus according to an embodiment of the present invention.
FIG. 2 is a diagram showing the lens unit of the image display apparatus of FIG. 1 separated from the display.
FIG. 3 is an internal block diagram of an image display apparatus according to an embodiment of the present invention.
FIG. 4 is an internal block diagram of the control unit of FIG. 3.
FIG. 5 is a diagram showing a control method of the remote control apparatus of FIG. 3.
FIG. 6 is an internal block diagram of the remote control device of FIG. 3.
FIG. 7 is a view for explaining how images are formed by the left-eye image and the right-eye image.
FIG. 8 is a view for explaining the depth of a 3D image according to the interval between the left-eye image and the right-eye image.
FIG. 9 is a diagram referred to in explaining the principle of a stereoscopic image display apparatus of the non-eyeglass system.
FIGS. 10 to 14 are views referred to in explaining the principle of an image display apparatus including a plurality of viewpoint images.
FIG. 15 is a flowchart illustrating an operation method of an image display apparatus according to an exemplary embodiment.
FIGS. 16 to 21 are diagrams referred to in explaining the operation method of FIG. 15.
FIG. 22 is a flowchart illustrating an operation method of an image display apparatus according to an embodiment.
FIGS. 23 and 24 are diagrams referred to in explaining the operation method of FIG. 22.

Hereinafter, the present invention will be described in detail with reference to the drawings.

The suffixes "module" and "part" for components used in the following description are given merely for convenience of description and do not in themselves have any special significance or role. Accordingly, the terms "module" and "part" may be used interchangeably.

FIG. 1 is a view showing the appearance of an image display apparatus according to an embodiment of the present invention, and FIG. 2 is a diagram showing the lens unit of the image display apparatus of FIG. 1 separated from the display.

Referring to the drawings, an image display apparatus according to an embodiment of the present invention is an image display apparatus capable of displaying a stereoscopic image, that is, a 3D image. In the embodiment of the present invention, an image display apparatus capable of 3D-image display in a non-eyeglass system is exemplified.

To this end, the image display apparatus 100 includes a display 180 and a lens unit 195.

The display 180 may display an input image and, in particular, may display a plurality of viewpoint images according to an embodiment of the present invention. Specifically, the subpixels constituting the plurality of viewpoint images can be arranged and displayed in a predetermined pattern.
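The idea of arranging subpixels of several viewpoint images in a predetermined pattern can be sketched as follows. The slanted assignment used here is a common choice for lenticular displays but is purely an assumption; the text does not specify the actual pattern, and all names are illustrative:

```python
# Hypothetical sketch: interleave subpixels from NUM_VIEWS viewpoint images
# into one display row, using a simple per-row slant of one subpixel.
# The real pattern depends on the lens design and is not given in the text.

NUM_VIEWS = 4

def view_for_subpixel(row, col):
    """Assign each subpixel (R, G, B columns counted separately) to one of
    NUM_VIEWS viewpoint images, slanting by one subpixel per row."""
    return (col + row) % NUM_VIEWS

def compose_row(row, view_rows):
    """Build one display row by picking each subpixel from the viewpoint
    image it belongs to. view_rows[v] is row `row` of viewpoint image v."""
    width = len(view_rows[0])
    return [view_rows[view_for_subpixel(row, c)][c] for c in range(width)]
```

Feeding four constant-valued viewpoint images through `compose_row` makes the interleave pattern directly visible in the output list.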

The lens unit 195 may be spaced apart from the display 180 and disposed toward the user. FIG. 2 illustrates the separation between the display 180 and the lens unit 195.

The lens unit 195 may employ a lenticular system using a lenticular lens, a parallax system using a slit array, a system using a microlens array, or the like. In the embodiment of the present invention, the lenticular system will be mainly described.

3 is an internal block diagram of an image display apparatus according to an embodiment of the present invention.

Referring to FIG. 3, an image display apparatus 100 according to an exemplary embodiment of the present invention includes a broadcast receiving unit 105, an external device interface unit 130, a storage unit 140, a user input interface unit 150, a photographing unit 155, a sensor unit (not shown), a controller 170, a display 180, an audio output unit 185, a power supply unit 190, and a lens unit 195.

The broadcast receiving unit 105 may include a tuner unit 110, a demodulation unit 120, and a network interface unit 135. Of course, as necessary, the broadcast receiving unit 105 may be designed to include the tuner unit 110 and the demodulation unit 120 without the network interface unit 135, or conversely to include the network interface unit 135 without the tuner unit 110 and the demodulation unit 120.

The tuner unit 110 selects an RF broadcast signal corresponding to a channel selected by the user, or the RF broadcast signals of all pre-stored channels, from among the RF (Radio Frequency) broadcast signals received through an antenna. The selected RF broadcast signal is converted into an intermediate frequency signal, a baseband video signal, or an audio signal.

For example, if the selected RF broadcast signal is a digital broadcast signal, it is converted into a digital IF signal (DIF). If the selected RF broadcast signal is an analog broadcast signal, it is converted into an analog baseband image or voice signal (CVBS / SIF). That is, the tuner unit 110 can process a digital broadcast signal or an analog broadcast signal. The analog baseband video or audio signal (CVBS / SIF) output from the tuner unit 110 can be directly input to the controller 170.

The tuner unit 110 may receive an RF broadcast signal of a single carrier according to an Advanced Television System Committee (ATSC) scheme or an RF broadcast signal of a plurality of carriers according to a DVB (Digital Video Broadcasting) scheme.

Meanwhile, the tuner unit 110 may sequentially select the RF broadcast signals of all broadcast channels stored through a channel memory function from among the RF broadcast signals received through the antenna, and convert them into intermediate frequency signals or baseband video or audio signals.

On the other hand, the tuner unit 110 can include a plurality of tuners in order to receive broadcast signals of a plurality of channels. Alternatively, a single tuner that simultaneously receives broadcast signals of a plurality of channels is also possible.

The demodulator 120 receives the digital IF signal DIF converted by the tuner 110 and performs a demodulation operation.

The demodulation unit 120 may perform demodulation and channel decoding, and then output a stream signal TS. At this time, the stream signal may be a signal in which a video signal, a voice signal, or a data signal is multiplexed.

The stream signal output from the demodulation unit 120 may be input to the controller 170. The control unit 170 performs demultiplexing, video / audio signal processing, and the like, and then outputs an image to the display 180 and outputs audio to the audio output unit 185.

The external device interface unit 130 can transmit or receive data to or from a connected external device 190. To this end, the external device interface unit 130 may include an A/V input/output unit (not shown) or a wireless communication unit (not shown).

The external device interface unit 130 can be connected to an external device such as a DVD (Digital Versatile Disk) player, a Blu-ray player, a game device, a camera, a camcorder, or a computer (notebook computer), and may perform input/output operations with the external device.

The A / V input / output unit can receive video and audio signals from an external device. Meanwhile, the wireless communication unit can perform short-range wireless communication with other electronic devices.

The network interface unit 135 provides an interface for connecting the video display device 100 to a wired / wireless network including the Internet network. For example, the network interface unit 135 can receive, via the network, content or data provided by the Internet or a content provider or a network operator.

The storage unit 140 may store a program for each signal processing and control in the control unit 170 or may store the processed video, audio, or data signals.

In addition, the storage unit 140 may perform a function of temporarily storing video, audio, or data signals input to the external device interface unit 130. The storage unit 140 may also store information on predetermined broadcast channels through a channel memory function such as a channel map.

Although the storage unit 140 of FIG. 3 is separately provided from the controller 170, the scope of the present invention is not limited thereto. The storage unit 140 may be included in the controller 170.

The user input interface unit 150 transmits a signal input by the user to the control unit 170 or a signal from the control unit 170 to the user.

For example, the user input interface unit 150 may receive and process control signals such as power key, channel key, volume key, and setting value inputs from the remote control apparatus 200; may transmit, to the control unit 170, a signal from a sensor unit (not shown) that senses a user's gesture; or may transmit a signal from the control unit 170 to the sensor unit (not shown).

The control unit 170 may demultiplex an input stream, received through the tuner unit 110 and the demodulation unit 120 or through the external device interface unit 130, or process the demultiplexed signals, to generate and output signals for video or audio output.

The video signal processed by the controller 170 may be input to the display 180 and displayed as an image corresponding to the video signal. Also, the image signal processed by the controller 170 may be input to the external output device through the external device interface unit 130.

The audio signal processed by the control unit 170 may be output as sound through the audio output unit 185. The audio signal processed by the controller 170 may also be input to an external output device through the external device interface unit 130.

Although not shown in FIG. 3, the controller 170 may include a demultiplexer, an image processor, and the like. This will be described later with reference to FIG. 4.

In addition, the control unit 170 can control the overall operation of the video display device 100. For example, the control unit 170 may control the tuner unit 110 to tune to the RF broadcast corresponding to the channel selected by the user or a previously stored channel.

In addition, the controller 170 may control the image display apparatus 100 according to a user command or an internal program input through the user input interface unit 150.

Meanwhile, the control unit 170 may control the display 180 to display an image. At this time, the image displayed on the display 180 may be a still image or a moving image, and may be a 3D image.

Meanwhile, the controller 170 may generate a 3D object for a predetermined object among the images displayed on the display 180, and display the 3D object. For example, the object may be at least one of a connected web screen (newspaper, magazine, etc.), EPG (Electronic Program Guide), various menus, widgets, icons, still images, moving images, and text.

Such a 3D object may be processed to have a different depth from the image displayed on the display 180. Preferably, the 3D object may be processed to appear protruding relative to the image displayed on the display 180.

On the other hand, the control unit 170 can recognize the position of the user based on the image photographed by the photographing unit 155. For example, the distance (z-axis coordinate) between the user and the image display apparatus 100 can be determined. In addition, the x-axis and y-axis coordinates on the display 180 corresponding to the user's position can be determined.
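One way such position recognition could work is a pinhole-camera estimate: the z-distance from the apparent face size, and x/y offsets by back-projecting the face's pixel position. This is a minimal sketch under assumed camera parameters (`FOCAL_PX`, `FACE_WIDTH_MM`, the resolution, and the function name are all illustrative assumptions, not the patent's method):

```python
# Hypothetical sketch: estimate the viewer's position from a captured image.
# z from apparent face width (pinhole model), x/y from the pixel offset of
# the face centre. All constants are assumed values for illustration.

FOCAL_PX = 1000        # assumed camera focal length, in pixels
FACE_WIDTH_MM = 150    # assumed real-world face width
IMG_W, IMG_H = 1920, 1080

def viewer_position(face_x_px, face_y_px, face_w_px):
    """Return (x_mm, y_mm, z_mm) of the viewer relative to the camera axis."""
    z_mm = FOCAL_PX * FACE_WIDTH_MM / face_w_px
    # Back-project the pixel offset from the image centre to millimetres.
    x_mm = (face_x_px - IMG_W / 2) * z_mm / FOCAL_PX
    y_mm = (face_y_px - IMG_H / 2) * z_mm / FOCAL_PX
    return x_mm, y_mm, z_mm
```

A face detected at the image centre with the assumed reference width yields a viewer on the camera axis at the reference distance.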

Although not shown in the drawing, a channel browsing processing unit for generating a thumbnail image corresponding to a channel signal or an external input signal may further be provided. The channel browsing processing unit may receive the stream signal (TS) output from the demodulation unit 120 or the stream signal output from the external device interface unit 130, and extract an image from the input stream signal to generate a thumbnail image. The generated thumbnail image may be input to the controller 170 as it is or after being encoded. The control unit 170 may display a thumbnail list having a plurality of thumbnail images on the display 180 using the input thumbnail images.

At this time, the thumbnail list may be displayed in a simple view mode, shown in a partial area while a predetermined image is displayed on the display 180, or in a full view mode, shown over most of the display 180. The thumbnail images in the thumbnail list can be updated sequentially.

The display 180 converts the video signal, data signal, and OSD signal processed by the control unit 170, or the video signal, data signal, control signal, and the like received from the external device interface unit 130, to generate a drive signal.

The display 180 can be a PDP, an LCD, an OLED, a flexible display, or the like, and is also capable of a 3D display.

As described above, the display 180 according to the embodiment of the present invention is a non-eyeglass 3D image display that does not require separate glasses. For this, the lenticular lens unit 195 is provided.

The power supply unit 190 supplies power throughout the video display device 100. Accordingly, each module or unit in the video display device 100 can operate.

Also, the display 180 may be configured to include a 2D image area and a 3D image area. In this case, the power supply unit 190 may supply a first power and a second power, different from each other, to the lens unit 195. The first power and the second power may be supplied under the control of the controller 170.

The lens unit 195 varies the traveling direction of the light according to the applied power source.

For example, the first power may be applied to a first region of the lens unit corresponding to the 2D image region of the display 180, so that light is emitted in the same direction as the light emitted from the 2D image region of the display 180. Accordingly, the user perceives the displayed 2D image as a 2D image.

As another example, the second power may be applied to a second region of the lens unit corresponding to the 3D image region of the display 180, so that the light emitted from the 3D image region of the display 180 is scattered and scattered light is generated. As a result, a 3D effect is produced, and the user perceives the displayed 3D image as a stereoscopic image without wearing separate glasses.

On the other hand, the lens unit 195 can be disposed toward the user, spaced apart from the display 180. In particular, the lens unit 195 may be disposed parallel to the display 180, inclined at a predetermined angle, or concave or convex with respect to it. Meanwhile, the lens unit 195 can be arranged in the form of a sheet. Accordingly, the lens unit 195 according to the embodiment of the present invention may be called a lens sheet.

 Meanwhile, the display 180 may be configured as a touch screen and used as an input device in addition to the output device.

The audio output unit 185 receives the audio signal processed by the control unit 170 and outputs it as sound.

The photographing unit 155 photographs the user. The photographing unit 155 may be implemented with a single camera, but the present invention is not limited thereto, and it may be implemented with a plurality of cameras. Meanwhile, the photographing unit 155 may be embedded in the image display device 100 above the display 180, or may be disposed separately. The image information photographed by the photographing unit 155 may be input to the control unit 170.

The control unit 170 can sense the user's gesture based on the images photographed by the photographing unit 155, the signals sensed by the sensor unit (not shown), or a combination thereof.

The remote control apparatus 200 transmits user input to the user input interface unit 150. To this end, the remote control apparatus 200 can use Bluetooth, RF (radio frequency) communication, infrared (IR) communication, UWB (Ultra Wideband), ZigBee, or the like. Also, the remote control apparatus 200 can receive the video, audio, or data signal output from the user input interface unit 150, and display or output it on the remote control apparatus 200.

Meanwhile, the video display device 100 may be a digital broadcast receiver capable of receiving a fixed or mobile digital broadcast.

Meanwhile, the video display device described in this specification may be a TV set, a monitor, a mobile phone, a smart phone, a notebook computer, a digital broadcasting terminal, a PDA (personal digital assistant), a portable multimedia player (PMP), or the like.

Meanwhile, a block diagram of the image display apparatus 100 shown in FIG. 3 is a block diagram for an embodiment of the present invention. Each component of the block diagram may be integrated, added, or omitted according to the specifications of the image display apparatus 100 actually implemented. That is, two or more constituent elements may be combined into one constituent element, or one constituent element may be constituted by two or more constituent elements, if necessary. In addition, the functions performed in each block are intended to illustrate the embodiments of the present invention, and the specific operations and apparatuses do not limit the scope of the present invention.

Unlike FIG. 3, the video display apparatus 100 may not include the tuner unit 110 and the demodulation unit 120 shown in FIG. 3, and may instead receive and play back video content through the network interface unit 135 or the external device interface unit 130.

On the other hand, the image display apparatus 100 is an example of a video signal processing apparatus that performs signal processing on an image stored in the apparatus or an input image. Other examples of the video signal processing apparatus include a set-top box excluding the display 180 and the audio output unit 185 shown in FIG. 3, a DVD player, a Blu-ray player, a game machine, a computer, and the like.

FIG. 4 is an internal block diagram of the control unit of FIG. 3.

Referring to FIG. 4, the control unit 170 may include a demultiplexing unit 310, an image processing unit 320, a processor 330, an OSD generating unit 340, a mixer 345, a frame rate conversion unit 350, and a formatter 360. In addition, it may include an audio processing unit (not shown) and a data processing unit (not shown).

The demultiplexer 310 demultiplexes an input stream. For example, when an MPEG-2 TS is input, it can be demultiplexed into video, audio, and data signals. The stream signal input to the demultiplexer 310 may be a stream signal output from the tuner unit 110, the demodulation unit 120, or the external device interface unit 130.
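The demultiplexing step can be sketched in simplified form: an MPEG-2 transport stream consists of 188-byte packets, each starting with a 0x47 sync byte and carrying a 13-bit PID that identifies its elementary stream. This sketch hard-codes a PID-to-stream mapping for illustration; in a real stream those assignments come from the PAT/PMT tables, and adaptation fields are ignored here:

```python
# Simplified sketch of TS demultiplexing: route each 188-byte packet to a
# video, audio, or data stream by its PID. The PID_MAP values are assumed;
# real streams signal them in PAT/PMT tables.

TS_PACKET = 188
PID_MAP = {0x100: "video", 0x101: "audio", 0x102: "data"}  # assumed PIDs

def demux(ts_bytes):
    streams = {"video": b"", "audio": b"", "data": b""}
    for i in range(0, len(ts_bytes) - TS_PACKET + 1, TS_PACKET):
        pkt = ts_bytes[i:i + TS_PACKET]
        if pkt[0] != 0x47:               # every TS packet starts with 0x47
            continue
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]   # 13-bit packet identifier
        kind = PID_MAP.get(pid)
        if kind:
            streams[kind] += pkt[4:]     # payload (no adaptation field assumed)
    return streams
```

Packets whose PID is not in the map (or whose sync byte is wrong) are simply dropped, mirroring how a demultiplexer discards streams it is not asked to extract.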

The image processing unit 320 may perform image processing on the demultiplexed video signal. To this end, the image processing unit 320 may include a video decoder 225 and a scaler 235.

The video decoder 225 decodes the demultiplexed video signal and the scaler 235 performs scaling so that the resolution of the decoded video signal can be output from the display 180.

The video decoder 225 may include a decoder of various standards.

On the other hand, the image signal decoded by the image processing unit 320 can be divided into a case where there is only a 2D image signal, a case where a 2D image signal and a 3D image signal are mixed, and a case where there is only a 3D image signal.

For example, an external video signal input from the external device 190 or a broadcast video signal received from the tuner unit 110 may contain only a 2D video signal, a mixture of 2D and 3D video signals, or only a 3D video signal. Accordingly, the controller 170, in particular the image processing unit 320, may process these signals and output a 2D video signal, a mixed signal of 2D and 3D video signals, or a 3D video signal.

Meanwhile, the image signal decoded by the image processing unit 320 may be a 3D image signal in various formats. For example, a 3D image signal composed of a color image and a depth image, or a 3D image signal composed of a plurality of view image signals. The plurality of viewpoint image signals may include, for example, a left eye image signal and a right eye image signal.

Here, the format of the 3D video signal may be a side-by-side format in which the left-eye image signal (L) and the right-eye image signal (R) are arranged left and right, a top-down format in which they are arranged up and down, an interlaced format in which the left-eye and right-eye image signals are mixed line by line, a checker box format in which the left-eye and right-eye image signals are mixed box by box, and the like.
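The formats listed above can be made concrete with a frame represented as a 2-D list of pixels. This is a minimal sketch (function names are illustrative, and the even-L/odd-R row convention for the interlaced case is an assumption):

```python
# Sketch of unpacking three of the listed 3D frame formats into the
# left-eye (L) and right-eye (R) images.

def split_side_by_side(frame):
    """L occupies the left half of each row, R the right half."""
    w = len(frame[0]) // 2
    return [row[:w] for row in frame], [row[w:] for row in frame]

def split_top_down(frame):
    """L occupies the top half of the frame, R the bottom half."""
    h = len(frame) // 2
    return frame[:h], frame[h:]

def split_interlaced(frame):
    """L and R mixed line by line (even rows L, odd rows R assumed)."""
    return frame[0::2], frame[1::2]
```

The checker box format would be unpacked analogously, alternating per pixel rather than per line.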

The processor 330 may control the overall operation of the image display apparatus 100 or of the control unit 170. For example, the processor 330 may control the tuner unit 110 to tune to the RF broadcast corresponding to the channel selected by the user or a previously stored channel.

In addition, the processor 330 may control the image display apparatus 100 according to a user command input through the user input interface unit 150 or an internal program.

In addition, the processor 330 may perform data transfer control with the network interface unit 135 or the external device interface unit 130.

The processor 330 may control operations of the demultiplexing unit 310, the image processing unit 320, the OSD generating unit 340, and the like in the controller 170.

The OSD generation unit 340 generates an OSD signal according to a user input or by itself. For example, based on a user input signal, a signal for displaying various information in a graphic or text form on the screen of the display 180 can be generated. The generated OSD signal may include various data such as a user interface screen of the video display device 100, various menu screens, a widget, and an icon. In addition, the generated OSD signal may include a 2D object or a 3D object.

The OSD generating unit 340 can generate a pointer that can be displayed on the display, based on a pointing signal input from the remote control device 200. In particular, such a pointer may be generated by a pointing signal processing unit (not shown), and the OSD generating unit 340 may include such a pointing signal processing unit. Of course, the pointing signal processing unit (not shown) may instead be provided separately from the OSD generating unit 340.

The mixer 345 may mix the OSD signal generated by the OSD generator 340 and the decoded video signal processed by the image processor 320. At this time, the OSD signal and the decoded video signal may include at least one of a 2D signal and a 3D signal. The mixed video signal is supplied to a frame rate converter 350.

A frame rate converter (FRC) 350 can convert the frame rate of an input image. Alternatively, the frame rate converter 350 can output the image without frame rate conversion.

The formatter 360 can arrange the frame rate converted 3D image.

The formatter 360 receives the mixed signal, i.e., the OSD signal and the decoded video signal, from the mixer 345, and separates the 2D video signal and the 3D video signal.

In this specification, a 3D video signal means a signal of a 3D object. Examples of the 3D object include a picture-in-picture (PIP) image (still image or moving picture), an EPG indicating broadcast program information, icons, text, objects in an image, a person, a background, and a web screen (newspaper, magazine, etc.).

On the other hand, the formatter 360 can change the format of the 3D video signal. For example, when a 3D image is input in any of the various formats described above, it can be changed to a multi-view image. In particular, the multi-view images may be arranged to be repeated. Thereby, a glasses-free 3D image can be displayed.

Meanwhile, the formatter 360 may convert a 2D video signal into a 3D video signal. For example, according to a 3D image generation algorithm, an edge or a selectable object may be detected in the 2D video signal, and the object according to the detected edge or the selectable object may be separated to generate a 3D video signal. At this time, the generated 3D video signal may be a multi-view image signal as described above.
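The 2D-to-3D idea can be illustrated with a toy one-row example: detect edges, treat edge regions as nearer, and synthesize a second-eye view by shifting pixels by a disparity derived from that depth. The patent names no specific algorithm, so everything below (the gradient edge detector, the shift rule, the thresholds) is an illustrative assumption:

```python
# Toy sketch of 2D-to-3D conversion on a single grayscale row: a crude
# edge detector plus a disparity shift at strong edges. Illustrative only;
# not the algorithm used by the formatter 360.

def edge_strength(row):
    """Absolute horizontal gradient as a crude edge detector."""
    return [abs(row[i + 1] - row[i]) for i in range(len(row) - 1)] + [0]

def synthesize_right(row, max_disparity=2, threshold=10):
    """Copy the pixel just past each strong edge `max_disparity` positions
    to the left, a crude stand-in for the disparity of a near object."""
    edges = edge_strength(row)
    out = list(row)
    for i, e in enumerate(edges):
        j = i + 1 - max_disparity
        if e > threshold and j >= 0 and i + 1 < len(row):
            out[j] = row[i + 1]
    return out
```

Applied per row, the original row serves as the left-eye image and the shifted row as the right-eye image; a multi-view signal would repeat this with several disparity values.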

Although not shown in the drawing, it is also possible that a 3D processor (not shown) for 3-dimensional effect signal processing is further disposed after the formatter 360. The 3D processor (not shown) can process the brightness, tint, and color of the image signal to improve the 3D effect.

Meanwhile, the audio processing unit (not shown) in the control unit 170 can perform the audio processing of the demultiplexed audio signal. To this end, the audio processing unit (not shown) may include various decoders.

In addition, the audio processing unit (not shown) in the control unit 170 can process bass, treble, volume control, and the like.

The data processing unit (not shown) in the control unit 170 can perform data processing of the demultiplexed data signal. For example, if the demultiplexed data signal is an encoded data signal, it can be decoded. The encoded data signal may be EPG (Electronic Program Guide) information including broadcast information such as the start time and end time of the broadcast programs broadcast on each channel.

FIG. 4 shows that the signals from the OSD generating unit 340 and the image processing unit 320 are mixed in the mixer 345 and then 3D-processed in the formatter 360; however, the mixer may instead be located after the formatter. That is, the output of the image processing unit 320 may be 3D-processed by the formatter 360, the OSD generating unit 340 may perform 3D processing together with OSD generation, and the processed 3D signals may then be mixed by the mixer 345.

Meanwhile, the block diagram of the controller 170 shown in FIG. 4 is a block diagram for an embodiment of the present invention. Each component of the block diagram can be integrated, added, or omitted according to the specifications of the control unit 170 actually implemented.

In particular, the frame rate converter 350 and the formatter 360 may not be provided in the controller 170, but may each be provided separately.

FIG. 5 is a diagram showing a control method of the remote control apparatus of FIG. 3.

FIG. 5(a) illustrates that the pointer 205 corresponding to the remote control device 200 is displayed on the display 180.

The user can move or rotate the remote control device 200 up and down, left and right (FIG. 5(b)), and back and forth (FIG. 5(c)). The pointer 205 displayed on the display 180 of the video display device corresponds to the movement of the remote control device 200. Since the pointer 205 moves according to movement in 3D space, as shown in the figure, the remote control device 200 can be called a space remote controller.

FIG. 5(b) illustrates that when the user moves the remote control apparatus 200 to the left, the pointer 205 displayed on the display 180 of the image display apparatus also moves to the left correspondingly.

Information on the motion of the remote control device 200, sensed through a sensor of the remote control device 200, is transmitted to the image display device. The image display device can calculate the coordinates of the pointer 205 from the information on the motion of the remote control device 200, and can display the pointer 205 so as to correspond to the calculated coordinates.
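The coordinate calculation described above can be sketched as accumulating the motion deltas reported by the remote, scaling them to pixels, and clamping to the screen. The sensitivity, screen size, and class name are assumed for illustration; the patent does not specify the mapping:

```python
# Hypothetical sketch: turn motion deltas from the remote control's sensors
# into clamped pointer coordinates on the display. Constants are assumed.

SCREEN_W, SCREEN_H = 1920, 1080
SENSITIVITY = 20  # pixels per unit of reported motion (assumed)

class PointerTracker:
    def __init__(self):
        self.x, self.y = SCREEN_W // 2, SCREEN_H // 2  # start at centre

    def update(self, dx, dy):
        """dx, dy: motion deltas received from the remote control device.
        Returns the new pointer coordinates, clamped to the screen."""
        self.x = min(max(self.x + dx * SENSITIVITY, 0), SCREEN_W - 1)
        self.y = min(max(self.y + dy * SENSITIVITY, 0), SCREEN_H - 1)
        return self.x, self.y
```

Clamping keeps the pointer on screen however far the remote is swept, which matches how such space remotes behave at the display edges.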

FIG. 5(c) illustrates a case in which the user moves the remote control device 200 away from the display 180 while pressing a specific button on the remote control device 200. Thereby, the selected area in the display 180 corresponding to the pointer 205 can be zoomed in and enlarged. Conversely, when the user moves the remote control device 200 closer to the display 180, the selected area corresponding to the pointer 205 may be zoomed out. Alternatively, the mapping may be reversed: the selected area is zoomed out when the remote control device 200 moves away from the display 180 and zoomed in when it approaches.

On the other hand, while the specific button on the remote control device 200 is pressed, recognition of up, down, left, and right movement may be excluded. That is, when the remote control device 200 moves away from or approaches the display 180, the up, down, left, and right movements are not recognized, and only the forward and backward movements are recognized. When the specific button is not pressed, only the pointer 205 moves, in accordance with the up, down, left, and right movement of the remote control device 200.

On the other hand, the moving speed and moving direction of the pointer 205 may correspond to the moving speed and moving direction of the remote control device 200.

FIG. 6 is an internal block diagram of the remote control device of FIG. 3.

The remote control device 200 includes a wireless communication unit 420, a user input unit 430, a sensor unit 440, an output unit 450, a power supply unit 460, a storage unit 470, and a control unit 480.

The wireless communication unit 420 transmits and receives signals to and from any one of the video display devices according to the embodiments of the present invention described above. Among these, the video display device 100 will be described as an example.

In this embodiment, the remote control apparatus 200 may include an RF module 421 capable of transmitting and receiving signals with the image display apparatus 100 according to the RF communication standard. In addition, the remote control apparatus 200 may include an IR module 423 capable of transmitting and receiving signals to and from the image display apparatus 100 according to the IR communication standard.

In the present embodiment, the remote control device 200 transmits a signal containing information on the motion and the like of the remote control device 200 to the image display device 100 through the RF module 421.

Also, the remote control device 200 can receive the signal transmitted by the video display device 100 through the RF module 421. In addition, the remote control device 200 can transmit commands regarding power on/off, channel change, volume change, and the like to the video display device 100 through the IR module 423 as necessary.

The user input unit 430 may include a keypad, a button, a touchpad, or a touch screen. The user can input a command related to the image display apparatus 100 to the remote control apparatus 200 by operating the user input unit 430. When the user input unit 430 includes a hard key button, the user can input such a command through a push operation of the hard key button. When the user input unit 430 has a touch screen, the user can touch a soft key of the touch screen to input the command. In addition, the user input unit 430 may include various other input means operable by the user, such as a scroll key or a jog key, and these examples do not limit the scope of the present invention.

The sensor unit 440 may include a gyro sensor 441 or an acceleration sensor 443. The gyro sensor 441 can sense information about the motion of the remote control device 200.

For example, the gyro sensor 441 can sense information about the operation of the remote control device 200 based on the x, y, and z axes. The acceleration sensor 443 can sense information on the moving speed and the like of the remote control device 200. On the other hand, a distance measuring sensor can be further provided, whereby the distance to the display 180 can be sensed.

The output unit 450 may output an image or an audio signal corresponding to the operation of the user input unit 430 or to a signal transmitted from the image display apparatus 100. Through the output unit 450, the user can recognize whether the user input unit 430 has been operated or whether the image display apparatus 100 has been controlled.

For example, the output unit 450 may include an LED module 451 that lights up when the user input unit 430 is operated or when a signal is transmitted to or received from the video display device 100 through the wireless communication unit 420, a vibration module 453 for generating vibration, an audio output module 455 for outputting sound, or a display module 457 for outputting an image.

The power supply unit 460 supplies power to the remote control device 200. The power supply unit 460 can reduce power waste by interrupting the power supply when the remote controller 200 is not moving for a predetermined period of time. The power supply unit 460 may resume power supply when a predetermined key provided in the remote control device 200 is operated.

The storage unit 470 may store various programs, application data, and the like necessary for the control or operation of the remote control device 200. If the remote control device 200 wirelessly transmits and receives signals with the image display device 100 through the RF module 421, the two devices exchange signals through a predetermined frequency band. The control unit 480 of the remote control device 200 may store, in the storage unit 470, information on the frequency band through which signals can be wirelessly exchanged with the paired video display device 100, and may later refer to that information.

The control unit 480 controls all matters related to the control of the remote control device 200. The control unit 480 transmits, through the wireless communication unit 420 to the image display apparatus 100, a signal corresponding to a predetermined key operation of the user input unit 430 or a signal corresponding to the motion of the remote control device 200 sensed by the sensor unit 440.

The user input interface unit 150 of the image display apparatus 100 may include a wireless communication unit 411 capable of wirelessly transmitting and receiving signals to and from the remote control apparatus 200, and a coordinate value calculation unit 415 capable of calculating the coordinate value of the pointer corresponding to the operation of the remote control apparatus 200.

The user input interface unit 150 can wirelessly transmit and receive signals to and from the remote control device 200 through the RF module 412. It can also receive, through the IR module 413, a signal transmitted by the remote control device 200 according to the IR communication standard.

The coordinate value calculator 415 corrects hand shake or error in the signal corresponding to the operation of the remote controller 200 received via the wireless communication unit 411, and can calculate the coordinate value (x, y) of the pointer 205 to be displayed on the display 180.
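The hand-shake correction and coordinate calculation described above can be sketched as follows. The class name, smoothing constant, and screen dimensions are illustrative assumptions, not details taken from the patent; a simple exponential moving average stands in for whatever filtering a real coordinate value calculator would use.

```python
# Illustrative sketch of a coordinate value calculator: integrate motion
# deltas from the remote and damp hand tremor with an exponential moving
# average. All names and constants here are assumptions for illustration.

class PointerCoordinateCalculator:
    def __init__(self, width=1920, height=1080, alpha=0.3):
        self.width, self.height = width, height
        self.alpha = alpha                    # smoothing factor in (0, 1]
        self.x, self.y = width / 2, height / 2  # start at screen centre

    def update(self, dx, dy):
        """Apply a raw motion delta (in pixels) and return the smoothed,
        screen-clamped pointer coordinate (x, y)."""
        # Scaling the delta by alpha damps high-frequency shake.
        self.x += self.alpha * dx
        self.y += self.alpha * dy
        # Keep the pointer inside the display bounds.
        self.x = min(max(self.x, 0), self.width - 1)
        self.y = min(max(self.y, 0), self.height - 1)
        return self.x, self.y
```

A smaller `alpha` gives steadier but less responsive pointer motion; tuning that trade-off is left to the implementation.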

The signal transmitted by the remote controller 200 and input to the image display apparatus 100 through the user input interface unit 150 is transmitted to the controller 170 of the image display apparatus 100. The controller 170 can determine information on the operation and key manipulation of the remote control apparatus 200 from the transmitted signal, and control the image display apparatus 100 accordingly.

As another example, the remote control device 200 may itself calculate the pointer coordinate value corresponding to its operation and output it to the user input interface unit 150 of the video display device 100. In this case, the user input interface unit 150 of the image display apparatus 100 can transmit the received pointer coordinate values to the control unit 170 without an additional hand-shake or error correction process.

As another example, the coordinate value calculating unit 415 may be provided in the control unit 170 instead of the user input interface unit 150, unlike the drawing.

FIG. 7 is a view for explaining how images are formed by a left eye image and a right eye image, and FIG. 8 is a view for explaining depths of a 3D image according to an interval between a left eye image and a right eye image.

First, referring to FIG. 7, a plurality of images or a plurality of objects 515, 525, 535, 545 are illustrated.

First, the first object 515 consists of a first left eye image 511 (L) based on the first left eye image signal and a first right eye image 513 (R) based on the first right eye image signal, and the interval between the first left eye image 511 (L) and the first right eye image 513 (R) on the display 180 is exemplified as d1. The user recognizes that an image is formed at the intersection of an extension line connecting the left eye 501 and the first left eye image 511 and an extension line connecting the right eye 503 and the first right eye image 513. Accordingly, the user recognizes the first object 515 as being located behind the display 180.

Next, the second object 525 consists of a second left eye image 521 (L) and a second right eye image 523 (R) that overlap each other when displayed on the display 180, so the interval between them is 0. Accordingly, the user recognizes the second object 525 as being located on the display 180.

Next, the third object 535 consists of a third left eye image 531 (L) and a third right eye image 533 (R), and the fourth object 545 consists of a fourth left eye image 541 (L) and a fourth right eye image 543 (R); the intervals between the respective image pairs are d3 and d4.

According to the method described above, the user recognizes the third object 535 and the fourth object 545 at the respective positions where the images are formed, which in the drawing are in front of the display 180.

At this time, the fourth object 545 is recognized as projecting in front of the third object 535, that is, as protruding further. This is because the interval d4 between the fourth left eye image 541 (L) and the fourth right eye image 543 (R) is larger than the interval d3 between the third left eye image 531 (L) and the third right eye image 533 (R).

Meanwhile, in the embodiments of the present invention, the distance between the display 180 and the objects 515, 525, 535, and 545 as perceived by the user is expressed as a depth. The depth is assumed to have a negative value (-) when an object is perceived as positioned behind the display 180, and a positive value (+) when it is perceived as positioned in front of the display 180. That is, the greater the degree of protrusion toward the user, the greater the depth.

Referring to FIG. 8, the interval a between the left eye image 601 and the right eye image 602 in FIG. 8(a) is smaller than the interval b between the left eye image 601 and the right eye image 602 in FIG. 8(b); accordingly, the depth a' of the 3D object in FIG. 8(a) is smaller than the depth b' of the 3D object in FIG. 8(b).

In this way, when a 3D image is composed of a left eye image and a right eye image, the position at which the image is perceived to be formed varies with the interval between the two. Accordingly, by adjusting the display interval of the left eye image and the right eye image, the depth of a 3D image or 3D object composed of them can be adjusted.
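The geometry of FIGS. 7 and 8 can be captured in a small worked example. The function below derives the perceived depth by intersecting the two eye-to-image rays (similar triangles); the sign convention, the default eye separation, and the viewing distance are assumptions for illustration only.

```python
def perceived_depth(d, e=65.0, D=2000.0):
    """Signed depth of the fused point relative to the screen, in mm.

    d: on-screen separation x_right - x_left of the image pair
       (negative = crossed disparity), in mm
    e: interocular distance (default 65 mm, an assumed typical value)
    D: viewing distance from the screen (default 2 m, assumed)

    Intersecting the ray from the left eye through the left image with the
    ray from the right eye through the right image (similar triangles)
    gives z = -D*d/(e - d). A positive result means the object appears in
    front of the screen; negative means behind. Valid for d < e.
    """
    return -D * d / (e - d)
```

Consistent with FIG. 8, a larger crossed interval yields a larger protrusion: `perceived_depth(-20)` is greater than `perceived_depth(-10)`, while zero disparity places the object exactly on the screen.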

FIG. 9 is a diagram referred to in explaining the principle of a stereoscopic image display apparatus of a non-eyeglass system.

As described above, stereoscopic image display apparatuses of the non-eyeglass system include the lenticular system, the parallax system, and systems using a microlens array. Hereinafter, the lenticular method and the parallax method are described in detail. The viewpoint images are exemplified as a left-eye viewpoint image and a right-eye viewpoint image, but this is for convenience of description and the invention is not limited thereto.

FIG. 9(a) shows a lenticular system using a lenticular lens. Referring to FIG. 9(a), blocks 720 (L) constituting a left eye view image and blocks 710 (R) constituting a right eye view image may be alternately arranged on the display 180. Each block may include a plurality of pixels, or just one pixel; hereinafter, the case where each block consists of one pixel is mainly described.

In the lenticular method, the lens unit 195 includes a lenticular lens 195a disposed on the front surface of the display 180, which can change the traveling direction of the light emitted from the pixels. For example, light emitted from the pixels 720 (L) constituting the left eye view image is redirected toward the left eye 702 of the viewer, and light emitted from the pixels 710 (R) constituting the right eye view image is redirected toward the right eye 701 of the viewer.

Accordingly, in the left eye 702, the light emitted from the pixels 720 (L) constituting the left eye view image merges so that the left eye view image is seen, and in the right eye 701, the light emitted from the pixels 710 (R) constituting the right eye view image merges so that the right eye view image is seen; thus the viewer perceives a stereoscopic image without wearing glasses.

FIG. 9(b) shows a parallax system using a slit array. Referring to FIG. 9(b), as in FIG. 9(a), pixels 720 (L) constituting a left eye view image and pixels 710 (R) constituting a right eye view image may be alternately displayed on the display 180. In the parallax system, a slit array 195b is disposed in the lens unit 195 and functions as a barrier so that light emitted from the pixels can travel only in a certain direction. Accordingly, as with the lenticular method, the left eye 702 sees the left eye view image and the right eye 701 sees the right eye view image, and the viewer perceives a stereoscopic image without wearing glasses.

FIGS. 10 to 14 are diagrams for explaining the principle of an image display apparatus using a plurality of viewpoint images.

FIG. 10 shows an image display apparatus 100 including three viewpoint regions 821, 822, and 823. The pixels constituting the three viewpoint images recognized in the respective viewpoint regions can be rearranged and displayed on the display 180 as shown in FIG. 10. Here, rearranging a pixel means changing the value displayed at a given pixel position, not changing the physical position of the pixel.

The three viewpoint images may be images of the object 910 taken in different directions as shown in FIG.

The first pixel 811 displayed on the display 180 may be composed of a first subpixel 801, a second subpixel 802, and a third subpixel 803, and each of the subpixels 801, 802, and 803 may represent any one of red, green, and blue.

FIG. 10 shows one pattern displayed by rearranging the pixels constituting the three view-point images. However, the present invention is not limited thereto, and may be rearranged and displayed in various patterns according to the lens unit 195.

In FIG. 10, the subpixels marked with the numeral 1 are subpixels constituting the first viewpoint image, the subpixels marked with the numeral 2 are subpixels constituting the second viewpoint image, and the subpixels marked with the numeral 3 may be subpixels constituting the third viewpoint image.

Accordingly, in the first viewpoint region 821, the subpixels marked with the numeral 1 combine to display the first viewpoint image; in the second viewpoint region 822, the subpixels marked with the numeral 2 combine to display the second viewpoint image; and in the third viewpoint region 823, the third viewpoint image may be displayed by the combination of the subpixels marked with the numeral 3.
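The rearrangement of FIG. 10 can be sketched as a column-cyclic interleave of subpixels. As noted above, the actual pattern depends on the lens unit, so this particular mapping is only an illustration, and the data layout (rows of flat subpixel lists) is an assumption.

```python
def interleave_views(views):
    """Build a display buffer from n viewpoint images by assigning each
    subpixel column c to view (c mod n).

    `views` is a list of n images, each a list of rows, each row a flat
    list of subpixel values (R, G, B, R, G, B, ...). This reproduces the
    simple column-cyclic pattern of FIG. 10; real panels may use other
    patterns depending on the lens unit.
    """
    n = len(views)
    height = len(views[0])
    width = len(views[0][0])  # number of subpixel columns
    return [[views[c % n][r][c] for c in range(width)]
            for r in range(height)]
```

Since each view contributes only every n-th subpixel column, this directly exhibits the 1/n horizontal resolution loss discussed next.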

That is, the first viewpoint image 901, the second viewpoint image 902, and the third viewpoint image 903 shown in FIG. 11 represent the images displayed along the respective viewpoint directions. The first viewpoint image 901 may be an image taken in the first viewpoint direction, the second viewpoint image 902 in the second viewpoint direction, and the third viewpoint image 903 in the third viewpoint direction.

Therefore, when the left eye 922 of the viewer is located in the third viewpoint region 823 and the right eye 921 is located in the second viewpoint region 822 as shown in FIG. 12(a), the left eye 922 sees the third viewpoint image 903 and the right eye 921 sees the second viewpoint image 902. Accordingly, as shown in FIG. 12(b), by the principle described with reference to FIG. 7, the object 910 is perceived as located in front of the display 180, so the viewer can perceive a stereoscopic image (3D image) without wearing glasses. Likewise, a stereoscopic image can be perceived when the left eye 922 is located in the second viewpoint region 822 and the right eye 921 in the first viewpoint region 821.

On the other hand, as shown in FIG. 10, when the pixels of the plural viewpoint images are rearranged only in the horizontal direction, the horizontal resolution is reduced to 1/n (n being the number of viewpoint images) compared with the 2D image. For example, the horizontal resolution of the stereoscopic image (3D image) of FIG. 10 is reduced to 1/3 of that of the 2D image, while the vertical resolution remains the same as that of the multi-view images 901, 902, and 903 before rearrangement.

When the number of direction-specific viewpoint images is large (the reason for increasing this number is described later with reference to FIG. 14), reducing only the horizontal resolution while the vertical resolution is preserved poses a problem: the quality of the displayed image may be severely degraded.

To mitigate this, as shown in FIG. 13, the lens unit 195 may be disposed on the front surface of the display 180 inclined at a predetermined angle α to the longitudinal axis 185 of the display, and the subpixels constituting the multi-view images can then be rearranged and displayed in various patterns matching the lens unit 195. FIG. 13 shows an image display apparatus with 25 direction-specific viewpoint images according to an embodiment of the present invention. In this case, the lens unit 195 may be a lenticular lens or a slit array.

In FIG. 13, for example, the red subpixels constituting the sixth viewpoint image appear every 5 pixels in both the horizontal and vertical directions, so the horizontal and vertical resolutions of the stereoscopic image (3D image) are each reduced to 1/5 of those of the direction-specific multi-view images before rearrangement. The resolution loss is thus balanced across both axes, compared with the conventional method in which only the horizontal resolution is reduced, to 1/25.
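Slanted-lens layouts of this kind are often modelled as a linear subpixel-to-view map (in the style of van Berkel's formula). The patent does not give the exact mapping, so the step values below are assumptions, chosen so that 25 views tile a 5-by-5 block of subpixels and the view index changes along both axes.

```python
def view_index(row, col, n_views=25, col_step=5, row_step=1):
    """Illustrative linear subpixel-to-view map for a slanted lens:
    the view index advances by col_step per subpixel column and by
    row_step per row, modulo n_views. In practice the steps are fixed
    by the lens pitch and slant angle; the defaults here are assumed
    for illustration and are not taken from the patent.
    """
    return (col * col_step + row * row_step) % n_views
```

With these steps, a 5x5 block of subpixels covers all 25 views exactly once, so the loss of resolution is spread over rows and columns instead of falling entirely on the horizontal axis, matching the balancing idea described above.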

FIG. 14 is a diagram for explaining the sweet zone and the dead zone appearing in front of the video display device.

When a stereoscopic image is viewed using the image display apparatus 100 as described above, the stereoscopic effect can be felt by a plurality of viewers without special stereoscopic glasses, but only within a certain area. The area in which a viewer can view an optimal image can be defined by an optimal viewing distance (OVD) and a sweet zone 1020. First, the optimal viewing distance D can be determined by the distance between the left and right eyes, the pitch of the lens unit, and the focal length of the lens. The sweet zone 1020 is an area in which a plurality of viewpoint regions are positioned sequentially, so that the viewer can stably feel the three-dimensional effect. As shown in FIG. 14, when the viewer is located in the sweet zone 1020, the 12th to 14th viewpoint images are recognized in the right eye 1001 and the 17th to 19th viewpoint images in the left eye 1002, so the direction-specific viewpoint images are recognized sequentially across the left eye 1002 and the right eye 1001. Therefore, as described with reference to FIG. 12, the stereoscopic effect can be felt from the left eye image and the right eye image.
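The passage above names the quantities that determine the optimal viewing distance without stating the relation. A common first-order approximation, given here purely as an assumption, requires that adjacent view pixels behind a lens project to points one eye-separation apart:

```python
def optimal_viewing_distance(eye_separation, lens_focal_length,
                             subpixel_pitch):
    """Simplified pinhole/thin-lens relation for the optimal viewing
    distance of a lenticular display:

        OVD ~= eye_separation * focal_length / subpixel_pitch

    This is a first-order approximation stated as an assumption; the
    patent only names the three quantities involved. All inputs and the
    result are in the same length unit (mm below).
    """
    return eye_separation * lens_focal_length / subpixel_pitch

# Illustrative (assumed) numbers: 65 mm eye separation, 2 mm focal
# length, 0.06 mm subpixel pitch give an OVD of roughly 2.17 m.
```

Note how the relation captures the qualitative dependencies in the text: a finer subpixel pitch or a longer focal length pushes the optimal viewing distance further from the screen.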

On the other hand, when the viewer leaves the sweet zone 1020 and is located in the dead zone 1015, for example with the 1st to 3rd viewpoint images recognized in the left eye 1003 and the 23rd to 25th viewpoint images in the right eye 1004, the direction-specific viewpoint images are not recognized sequentially across the left eye 1003 and the right eye 1004, an inversion of the left eye image and the right eye image occurs, and the stereoscopic effect is not felt properly. In addition, when the 1st viewpoint image and the 25th viewpoint image are recognized at once in the left eye 1003 or the right eye 1004, dizziness may be felt.

Therefore, when the viewer is located in the dead zone 1015, the viewer must be informed that he is located in the dead zone.

Accordingly, in the embodiments of the present invention, a viewpoint image recognized in the dead zone is displayed in a varied manner, thereby providing a video display device in which a viewer can easily recognize the dead zone. Hereinafter, this video display device is described in detail.

FIG. 15 is a flowchart illustrating an operation method of an image display apparatus according to an embodiment, and FIGS. 16 to 21 are views referred to in explaining the operation method of FIG. 15.

Hereinafter, for convenience of explanation, it is assumed that the number of viewpoint images and the number of corresponding viewpoint regions are each 25.

In addition, the 1st to 25th viewpoint images described below are the direction-specific viewpoint images numbered sequentially, and it is taken as an example that, by the principle described with reference to FIG. 7, the viewer feels the stereoscopic effect when the number of the viewpoint image recognized in the right eye is lower than that of the viewpoint image recognized in the left eye.

Referring to FIG. 15, a plurality of viewpoint images may be displayed on the display 180 (S1110). As shown in FIG. 16, when the subpixels constituting the 25 multi-view images are displayed on the display 180, a plurality of viewpoint regions, from the first viewpoint region 1201 to the 25th viewpoint region 1225, may be formed in front of the display.

In this case, the first view region 1201 in which the first viewpoint image is recognized and the 25th viewpoint region 1225 in which the 25th viewpoint image is recognized form a dead zone 1015.

For example, when the left eye 1211 of the viewer is located in the first viewpoint region 1201 and the right eye 1212 is located in the 25th viewpoint region 1225, the left eye 1211 sees the first viewpoint image, and the right eye 1212 sees the 25th viewpoint image.

In this case, the direction-specific viewpoint images are not sequentially recognized across the left eye 1211 and the right eye 1212, an inversion of the left eye image and the right eye image occurs, and the stereoscopic effect is not properly felt.

Referring again to FIG. 15, a specific region of the display in which a viewpoint image recognized in the dead zone is displayed can be calculated (S1120).

As shown in FIG. 16, the control unit 170 can compute the specific regions A1 and A2 of the display 180 in which the subpixels constituting the first viewpoint image and the 25th viewpoint image, which are recognized in the dead zone 1015, are displayed.

At this time, the first viewpoint image and the second viewpoint image different from the first viewpoint image may be alternately displayed in the calculated specific area with a certain period (S1130).

For example, as shown in FIGS. 16 and 18, in the specific area A1 of the display in which the pixels constituting the first viewpoint image are displayed, the subpixels constituting the first viewpoint image and the subpixels constituting the 25th viewpoint image can be displayed alternately.

Alternatively, the subpixels constituting the 25th viewpoint image and the subpixels constituting the first viewpoint image may be alternately displayed in the specific area A2 of the display in which the pixels constituting the 25th viewpoint image are displayed.

As shown in FIGS. 19A and 19B, in the first viewpoint region 1201, the first viewpoint image and the 25th viewpoint image may be alternately recognized with a certain period, and in the 25th viewpoint region 1225, the 25th viewpoint image and the first viewpoint image may be alternately recognized with a certain period.

At this time, the predetermined period may be set by user input, or may be a value stored in advance in the storage unit 140.

FIGS. 19A and 19B illustrate that when the first viewpoint image is recognized in the first viewpoint region 1201, the 25th viewpoint image is recognized in the 25th viewpoint region 1225, and when the 25th viewpoint image is recognized in the first viewpoint region 1201, the first viewpoint image is recognized in the 25th viewpoint region 1225, so that different viewpoint images are recognized in the two regions at any given time. However, the present invention is not limited thereto.

In addition, the image alternately recognized with the first viewpoint image in the first viewpoint region 1201 is not limited to the 25th viewpoint image; the display 180 may also be driven so that three or more viewpoint images are alternately recognized.

As described above, if the first viewpoint image and the 25th viewpoint image are displayed alternately with a certain period, a viewer located in the dead zone 1015 perceives the image as shaking and can thereby easily recognize the dead zone 1015.
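The periodic alternation of step S1130 can be sketched as a frame-counter toggle. The frame-based period and the view numbers are illustrative assumptions; the patent leaves the period to user input or a stored value.

```python
def dead_zone_view(frame, period, primary_view=1, alternate_view=25):
    """Which viewpoint image to drive into a dead-zone region on a given
    display frame: show the region's own view for `period` frames, then
    the opposite-edge view for `period` frames, and repeat. The toggle
    makes the picture visibly 'shake' only for viewers inside the dead
    zone, while the sweet zone is unaffected.
    """
    return primary_view if (frame // period) % 2 == 0 else alternate_view
```

The same toggle with `alternate_view` replaced by a black frame gives the blinking variant described later.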

On the other hand, as shown in FIG. 20, a message 1310 may be displayed on at least one of the first view image and the 25th view image. Accordingly, the message 1310 can be recognized in the dead zone 1015, and can include a phrase such as 'Move to a place where the image does not move' and a moving direction.

The OSD generating unit 340 in the controller 170 may generate an OSD signal related to the message 1310, and the mixer 345 may mix the OSD signal with the multi-view image signal processed by the image processing unit 320. Accordingly, an image including the OSD message can be displayed on the display 180.

Also, although not shown, the controller 170 may control the first view image and the twenty-fifth view image to be processed as black and white images.

When the first viewpoint image and the 25th viewpoint image are processed as black and white images, the viewer located in the dead zone 1015 can recognize the black and white image and can easily recognize the dead zone 1015.

Referring to FIGS. 16 and 21, the subpixels constituting the first viewpoint image and black data may be alternately displayed in the specific area A1 of the display 180 in which the subpixels constituting the first viewpoint image are displayed.

Alternatively, the subpixels constituting the 25th viewpoint image and the black data may be alternately displayed in a specific area A2 of the display on which the subpixels constituting the 25th viewpoint image are displayed.

The first viewpoint image and the black image may be alternately recognized with a certain period in the first viewpoint region 1201, and the 25th viewpoint image and the black image may be alternately recognized with a certain period in the 25th viewpoint region 1225.

As described above, when the first viewpoint image and the 25th viewpoint image are each displayed alternately with the black image, a viewer located in the dead zone 1015 perceives the image as blinking and can thereby easily recognize the dead zone 1015.

Also, as described with reference to FIG. 20, a message can be displayed on at least one of the first view image and the 25th view image. The displayed message can be recognized in the dead zone, and can include phrases such as 'Move to a place where the image does not blink' and a moving direction.

Meanwhile, the photographing unit 155 can sense the position of the viewer. The photographing unit 155 may track the position of the viewer in real time using eye tracking and transmit the viewer's position information to the controller 170.

The control unit 170 can determine whether or not the viewer is located in the dead zone 1015 based on the location information of the viewer.
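The dead-zone decision from the tracked position can be sketched as a simple interval test. The one-dimensional model and the zone boundaries are assumptions: a real implementation would check the periodically repeating viewpoint-region boundaries for both eyes in full 3D.

```python
def viewer_in_dead_zone(left_eye_x, right_eye_x,
                        sweet_left=-200.0, sweet_right=200.0):
    """True if either tracked eye falls outside the sweet-zone interval
    (millimetres from the screen centreline; boundary values assumed).
    The real sweet zone repeats periodically across viewpoint regions;
    a single interval is used here to keep the sketch minimal.
    """
    return not (sweet_left <= left_eye_x <= sweet_right
                and sweet_left <= right_eye_x <= sweet_right)
```

When this returns `True`, the controller would switch the dead-zone regions to the alternating, black, or same-view display modes described in this section.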

When the viewer is located in the dead zone 1015, the control unit 170 can operate in the manner described with reference to FIGS. 15 to 21.

For example, when it is determined that the viewer is located in the dead zone 1015, the first viewpoint image and the 25th viewpoint image can be controlled to be alternately recognized in the dead zone 1015, or the first viewpoint image and the black image, and the 25th viewpoint image and the black image, can be controlled to be alternately recognized.

FIG. 22 is a flowchart illustrating an operation method of an image display apparatus according to an embodiment, and FIGS. 23 and 24 are drawings referred to in explaining the operation method of FIG. 22.

Referring to FIG. 22, a plurality of viewpoint images can be displayed on the display 180 (S1510), and a specific area in which a viewpoint image recognized in the dead zone 1015 formed in front of the display 180 is displayed can be calculated (S1520). Steps S1510 and S1520 of FIG. 22 correspond to steps S1110 and S1120 of FIG. 15, respectively.

For example, as shown in FIG. 17, when the viewpoint images recognized in the dead zone 1015 are the first viewpoint image and the 25th viewpoint image, the calculated specific areas are the areas A1 and A2 of the display in which the subpixels constituting the first viewpoint image and the 25th viewpoint image are displayed.

The plurality of viewpoint images displayed in the calculated specific areas can be displayed as one and the same viewpoint image (S1530).

At this time, the same viewpoint image may be any one of the viewpoint images recognized in the dead zone 1015.

For example, as shown in FIG. 23(a), the subpixels constituting the first viewpoint image may be displayed not only in the area A1 of the display but also in the area A2 in which the subpixels constituting the 25th viewpoint image are normally displayed.

Alternatively, as shown in FIG. 23(b), the subpixels constituting the 25th viewpoint image may be displayed not only in the area A2 but also in the area A1 in which the subpixels constituting the first viewpoint image are normally displayed.

Accordingly, as shown in FIG. 24(a), only the first viewpoint image may be recognized in both the first viewpoint region 1201 and the 25th viewpoint region 1225, or, as shown in FIG. 24(b), only the 25th viewpoint image may be recognized in both regions.

Accordingly, when the viewer is located in the dead zone 1015, the same viewpoint image is recognized in both the left and right eyes, and a 2D image without disparity is perceived.

As described above, when a 2D image is displayed in the dead zone 1015, a viewer located there does not feel dizziness from the inversion of the left eye image and the right eye image, and can easily recognize the dead zone 1015.
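The 2D fallback described above can be sketched as replacing every dead-zone view with a single source view, so a viewer standing there fuses identical left and right pictures. The mapping-based data structure and all names are illustrative assumptions.

```python
def render_dead_zone_2d(view_images, dead_zone_views, source_view):
    """Replace every viewpoint image recognised in the dead zone with one
    and the same image, yielding a flat 2D picture with no disparity and
    no left/right inversion for viewers in that zone.

    view_images: mapping view number -> image data
    dead_zone_views: view numbers recognised in the dead zone
    source_view: the view whose image is reused (any dead-zone view)
    """
    patched = dict(view_images)  # leave the sweet-zone views untouched
    for v in dead_zone_views:
        patched[v] = view_images[source_view]
    return patched
```

Passing a black frame as the source image instead gives the black-image variant of step S1530.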

Alternatively, the plurality of viewpoint images displayed in the calculated specific areas may be processed as black images.

For example, in order for a black image to be recognized in the dead zone 1015, as shown in FIG. 21, black data may be displayed in the specific area A1 of the display 180 in which the subpixels constituting the first viewpoint image are displayed and in the specific area A2 in which the subpixels constituting the 25th viewpoint image are displayed.

Accordingly, when the viewer is located in the dead zone 1015, the viewer perceives only the black data; the viewer thus sees a black image and can easily recognize the dead zone 1015.
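The black-data alternative of FIG. 21 can be sketched in the same style: the subpixels of areas A1 and A2 are simply overwritten with black data. The mapping formula and the use of 0 as the black value are illustrative assumptions.

```python
# Sketch of the black-image alternative (FIG. 21): subpixels of areas A1 and
# A2 (the 1st and 25th viewpoint images recognized in the dead zone) are
# replaced with black data, so a viewer in the dead zone sees a black image.

NUM_VIEWS = 25
BLACK = 0  # assumed black-data value


def view_of_subpixel(x, y):
    # Assumed slanted-lenticular mapping (illustrative, not from the patent).
    return (x + 3 * y) % NUM_VIEWS + 1


def blank_dead_zone(frame):
    """Overwrite dead-zone subpixels (views 1 and 25) with black data."""
    for y, row in enumerate(frame):
        for x in range(len(row)):
            if view_of_subpixel(x, y) in (1, NUM_VIEWS):
                row[x] = BLACK
    return frame


# Toy frame: each subpixel initially holds its viewpoint index.
frame = [[view_of_subpixel(x, y) for x in range(30)] for y in range(4)]
frame = blank_dead_zone(frame)
```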

Also, as described with reference to FIG. 20, a message may be displayed on at least one of the first viewpoint image and the 25th viewpoint image. The displayed message can be recognized in the dead zone, and may include a phrase such as 'Move to the 3D image' together with a moving direction.
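Compositing the message only into the viewpoint images recognized in the dead zone can be sketched as follows. The banner height, the marker value standing in for rendered text, and the toy image data are all assumptions; an actual device would render the phrase and direction arrow through its OSD pipeline.

```python
# Sketch of the FIG. 20 notification: a banner region is reserved in the 1st
# and 25th viewpoint images (the ones recognized in the dead zone), where a
# phrase such as 'Move to 3D image' and a moving direction would be rendered.
# Because only these views carry the banner, the message is visible only to a
# viewer standing in the dead zone.

NUM_VIEWS = 25
BANNER_ROWS = 1   # assumed banner height in rows
MARKER = "MSG"    # stand-in for rendered message pixels


def overlay_message(views):
    """Mark a banner in the dead-zone views (indices 0 and NUM_VIEWS - 1)."""
    for idx in (0, NUM_VIEWS - 1):
        view = views[idx]
        for y in range(BANNER_ROWS):
            view[y] = [MARKER] * len(view[y])
    return views


# Toy data: viewpoint image k is a constant image filled with the value k.
views = [[[k + 1] * 8 for _ in range(3)] for k in range(NUM_VIEWS)]
views = overlay_message(views)
```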

The image display apparatus and the operation method thereof according to the present invention are not limited to the configurations and methods of the embodiments described above; rather, all or some of the embodiments may be selectively combined so that various modifications can be made.

Meanwhile, the operation method of the image display apparatus of the present invention can be implemented as processor-readable code on a processor-readable recording medium included in the image display apparatus. The processor-readable recording medium includes all kinds of recording devices in which processor-readable data is stored. Examples of the processor-readable recording medium include a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, and an optical data storage device, and it may also be implemented in the form of a carrier wave such as transmission over the Internet. In addition, the processor-readable recording medium may be distributed over network-connected computer systems so that the processor-readable code can be stored and executed in a distributed fashion.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it is to be understood that the invention is not limited to the disclosed embodiments. It will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the present invention.

Claims (14)

A display on which a plurality of viewpoint images are displayed;
A lens unit disposed on a front surface of the display, the lens unit separating the plurality of viewpoint images according to directions; And
And a controller configured to calculate a specific area of the display in which a first viewpoint image recognized in a dead zone is displayed, and to control the first viewpoint image and a second viewpoint image different from the first viewpoint image to be alternately displayed in the specific area at a predetermined period.
The image display apparatus according to claim 1,
Wherein the control unit controls the display unit to display a notification message in at least one of the first viewpoint image and the second viewpoint image.
The image display apparatus according to claim 1,
Further comprising a photographing unit for acquiring location information of a viewer,
Wherein the control unit controls the display unit to alternately display the first viewpoint image and the second viewpoint image when it is determined, based on the position information of the viewer, that the viewer is located in the dead zone.
The image display apparatus according to claim 1,
Wherein the control unit processes the second viewpoint image as black data.
The image display apparatus according to claim 1,
Wherein the control unit controls the first viewpoint image and the second viewpoint image to be processed as a monochrome image.
The image display apparatus according to claim 1,
Wherein the lens unit comprises a lenticular lens.
The image display apparatus according to claim 6,
Wherein the lenticular lens is inclined at a predetermined angle with respect to the display.
Displaying a plurality of viewpoint images on a display;
Calculating a specific area of a display in which a first viewpoint image recognized in a dead zone is displayed among the plurality of viewpoint images;
And alternately displaying, in the calculated specific area, the first viewpoint image and a second viewpoint image different from the first viewpoint image at a predetermined period.
9. The method of claim 8,
Wherein a notification message is displayed on at least one of the first viewpoint image and the second viewpoint image.
9. The method of claim 8,
Further comprising the step of acquiring location information of the viewer,
Wherein the first viewpoint image and the second viewpoint image are alternately displayed when it is determined, based on the position information of the viewer, that the viewer is located in the dead zone.
9. The method of claim 8,
Wherein the second viewpoint image is processed and displayed as black data.
9. The method of claim 8,
Wherein the first viewpoint image and the second viewpoint image are processed and displayed as a monochrome image.
A display on which a plurality of viewpoint images are displayed;
A lens unit disposed on a front surface of the display, the lens unit separating the plurality of viewpoint images according to directions; And
And a controller for controlling the plurality of viewpoint images recognized in the dead zone to be changed to and displayed as the same viewpoint image.
14. The image display apparatus of claim 13,
Wherein the same viewpoint image is any one of a plurality of viewpoint images recognized in the dead zone.
KR1020120141227A 2012-12-06 2012-12-06 Image display apparatus, and method for operating the same KR20140073231A (en)


Publications (1)

Publication Number Publication Date
KR20140073231A true KR20140073231A (en) 2014-06-16

Family

ID=51126832



Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E601 Decision to refuse application