KR20140079107A - Image display apparatus, and method for operating the same - Google Patents

Image display apparatus, and method for operating the same

Info

Publication number
KR20140079107A
Authority
KR
South Korea
Prior art keywords
image
eye image
right eye
left eye
error
Prior art date
Application number
KR1020120148721A
Other languages
Korean (ko)
Inventor
박대건 (Park Dae-geon)
안승화 (Ahn Seung-hwa)
Original Assignee
엘지전자 주식회사 (LG Electronics Inc.)
Priority date
Filing date
Publication date
Application filed by LG Electronics Inc. (엘지전자 주식회사)
Priority to KR1020120148721A
Publication of KR20140079107A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074: Stereoscopic image analysis
    • H04N2013/0081: Depth or disparity estimation from stereoscopic image signals

Abstract

The present invention relates to an image display apparatus and an operation method thereof. A method of operating an image display apparatus according to an embodiment of the present invention includes receiving a left eye image from a first source, receiving a right eye image from a second source different from the first source, displaying a 3D image using the received left eye image and right eye image, restoring a lost image using a pre-stored depth map when an error of more than a tolerance value occurs in either the received left eye image or right eye image, and displaying the 3D image using the restored image. This makes it possible to improve user convenience.

Description

[0001] The present invention relates to an image display apparatus and a method of operating the same.

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to an image display apparatus and an operation method thereof, and more particularly, to an image display apparatus and an operation method thereof that can improve user convenience.

An image display apparatus is a device that displays images a user can view. The user can watch broadcasts through the image display apparatus, which displays, on its display, a broadcast selected by the user from among the broadcast signals transmitted by broadcast stations. Currently, broadcasting is transitioning from analog to digital around the world.

Digital broadcasting refers to broadcasting that transmits digital video and audio signals. Compared with analog broadcasting, digital broadcasting is robust against external noise, suffers less data loss, is better suited to error correction, and provides high-resolution, clear pictures. In addition, unlike analog broadcasting, digital broadcasting supports bidirectional services.

It is an object of the present invention to provide an image display apparatus, and an operation method thereof, that can improve user convenience.

Another object of the present invention is to provide an image display apparatus, and an operation method thereof, that allow a 3D image to be viewed stably even when one of the left eye image and the right eye image is lost during 3D viewing.

According to an aspect of the present invention, there is provided a method of operating an image display apparatus, the method including receiving a left eye image from a first source, receiving a right eye image from a second source different from the first source, displaying a 3D image using the received left eye image and right eye image, restoring a lost image using a pre-stored depth map when an error of more than a tolerance value occurs in either the received left eye image or right eye image, and displaying the 3D image using the restored image.

According to another aspect of the present invention, there is provided an image display apparatus including a tuner unit for receiving one of a left eye image and a right eye image, a network interface unit for receiving the other of the left eye image and the right eye image, a display for displaying a 3D image using the received left eye image and right eye image, and a control unit that restores a lost image using a pre-stored depth map when an error of more than a tolerance value occurs in either the received left eye image or right eye image, and controls the 3D image to be displayed using the restored image.

According to another aspect of the present invention, there is provided an image display apparatus including a broadcast receiver for receiving a left eye image encoded by a first encoding scheme and a right eye image encoded by a second encoding scheme, a display for displaying a 3D image using the received left eye image and right eye image, and a control unit that restores a lost image using a pre-stored depth map when an error of more than a tolerance value occurs in either the received left eye image or right eye image, and controls the 3D image to be displayed using the restored image.

According to an embodiment of the present invention, when an error of more than a tolerance value occurs in one of the left eye image and the right eye image being received from different sources, the lost image is restored using the previously stored depth map and the 3D image is displayed using the restored image, so that the 3D image is displayed stably. Accordingly, user convenience can be improved.

In particular, since the lost image is restored using the intact image and the depth map, the 3D image can be displayed stably and without interruption. Accordingly, user convenience can be improved.

According to another embodiment of the present invention, when a left eye image encoded by a first encoding scheme and a right eye image encoded by a second encoding scheme are received and an error of more than a tolerance value occurs in one of them, the lost image is restored using the pre-stored depth map and the 3D image is displayed using the restored image, so that the 3D image is displayed stably. Accordingly, user convenience can be improved.

FIG. 1 is a view showing the appearance of an image display apparatus according to the present invention.
FIG. 2 is an internal block diagram of an image display apparatus according to an embodiment of the present invention.
FIG. 3 is an internal block diagram of the control unit of FIG. 2.
FIG. 4 is a diagram showing various formats of a 3D image.
FIG. 5 is a diagram showing the operation of a viewing apparatus according to the formats of FIG. 4.
FIG. 6 is a diagram illustrating various scaling methods of a 3D image signal according to an embodiment of the present invention.
FIG. 7 is a view for explaining how images are formed by a left eye image and a right eye image.
FIG. 8 is a view for explaining the depth of a 3D image according to the interval between a left eye image and a right eye image.
FIG. 9 is a flowchart illustrating an operation method of an image display apparatus according to an embodiment of the present invention.
FIGS. 10 to 14B are views referred to in explaining various examples of the operation method of FIG. 9.

Hereinafter, the present invention will be described in detail with reference to the drawings.

The suffix "module" and " part "for components used in the following description are given merely for convenience of description, and do not give special significance or role in themselves. Accordingly, the terms "module" and "part" may be used interchangeably.

FIG. 1 is a view showing the appearance of an image display apparatus according to the present invention.

Referring to FIG. 1, an image display apparatus 100 according to an embodiment of the present invention may be a fixed image display apparatus or a mobile image display apparatus.

According to an embodiment of the present invention, the image display apparatus 100 can perform signal processing of a 3D image. For example, a left eye image and a right eye image input to the image display apparatus 100 are signal-processed and arranged according to one of the formats of FIG. 4, and a 3D image is displayed according to that format.

Meanwhile, the image display apparatus 100 can receive the left eye image and the right eye image from different sources. For example, the left eye image can be received via an over-the-air broadcast signal Sl, and the right eye image can be received via a network signal Sr.

At this time, when an error of more than a tolerance value occurs in either the received left eye image or right eye image, the image display apparatus 100 restores the lost image using the previously stored depth map, so that a 3D image can still be displayed. Accordingly, the image display apparatus 100 can display the 3D image stably, and as a result, user convenience can be improved.

In particular, since the image display apparatus 100 restores the lost image using the intact image and the depth map, it can display the 3D image stably and without interruption.

Meanwhile, the image display apparatus 100 may receive a left eye image encoded by a first encoding scheme and a right eye image encoded by a second encoding scheme. At this time, when an error of more than a tolerance value occurs in either of the received images, the image display apparatus 100 restores the lost image using the previously stored depth map, so that a 3D image can still be displayed. Accordingly, the image display apparatus 100 can display the 3D image stably, and as a result, user convenience can be improved.

Meanwhile, the video display device 100 described in the present specification may include a TV receiver, a monitor, a projector, a notebook computer, a digital broadcasting terminal, a mobile phone, a smart phone, and a tablet PC.

FIG. 2 is an internal block diagram of an image display apparatus according to an embodiment of the present invention.

Referring to FIG. 2, an image display apparatus 100 according to an embodiment of the present invention includes a broadcast receiving unit 105, an external device interface unit 130, a storage unit 140, a user input interface unit 150, a sensor unit (not shown), a control unit 170, a display 180, an audio output unit 185, and a viewing device 195.

The broadcast receiving unit 105 may include a tuner unit 110, a demodulation unit 120, and a network interface unit 135. Of course, as necessary, it may be designed to include the tuner unit 110 and the demodulation unit 120 without the network interface unit 135, or conversely to include the network interface unit 135 without the tuner unit 110 and the demodulation unit 120.

The tuner unit 110 selects the RF broadcast signal corresponding to a channel selected by the user, or the RF broadcast signals of all pre-stored channels, from among the RF (Radio Frequency) broadcast signals received through the antenna 50. It also converts the selected RF broadcast signal into an intermediate frequency signal or a baseband video or audio signal.

For example, if the selected RF broadcast signal is a digital broadcast signal, it is converted into a digital IF signal (DIF); if it is an analog broadcast signal, it is converted into an analog baseband video or audio signal (CVBS/SIF). That is, the tuner unit 110 can process both digital and analog broadcast signals. The analog baseband video or audio signal (CVBS/SIF) output from the tuner unit 110 can be input directly to the control unit 170.

The tuner unit 110 may receive an RF broadcast signal of a single carrier according to an Advanced Television System Committee (ATSC) scheme or an RF broadcast signal of a plurality of carriers according to a DVB (Digital Video Broadcasting) scheme.

Meanwhile, the tuner unit 110 may sequentially select the RF broadcast signals of all the broadcast channels stored through a channel memory function from among the RF broadcast signals received through the antenna, and convert them into intermediate frequency signals or baseband video or audio signals.

On the other hand, the tuner unit 110 can include a plurality of tuners in order to receive broadcast signals of a plurality of channels. Alternatively, a single tuner that simultaneously receives broadcast signals of a plurality of channels is also possible.

The demodulator 120 receives the digital IF signal DIF converted by the tuner 110 and performs a demodulation operation.

The demodulation unit 120 may perform demodulation and channel decoding, and then output a stream signal TS. At this time, the stream signal may be a signal in which a video signal, a voice signal, or a data signal is multiplexed.

The stream signal output from the demodulation unit 120 may be input to the controller 170. The control unit 170 performs demultiplexing, video / audio signal processing, and the like, and then outputs an image to the display 180 and outputs audio to the audio output unit 185.

The external device interface unit 130 can transmit or receive data to or from a connected external device 190. To this end, the external device interface unit 130 may include an A/V input/output unit (not shown) or a wireless communication unit (not shown).

The external device interface unit 130 can be connected, by wire or wirelessly, to an external device such as a DVD (Digital Versatile Disc) player, a Blu-ray player, a game device, a camera, a camcorder, or a computer, and may perform input/output operations with the external device.

The A / V input / output unit can receive video and audio signals from an external device. Meanwhile, the wireless communication unit can perform short-range wireless communication with other electronic devices.

The network interface unit 135 provides an interface for connecting the video display device 100 to a wired / wireless network including the Internet network. For example, the network interface unit 135 can receive, via the network, content or data provided by the Internet or a content provider or a network operator.

The storage unit 140 may store a program for each signal processing and control in the control unit 170 or may store the processed video, audio, or data signals.

In addition, the storage unit 140 may temporarily store video, audio, or data signals input to the external device interface unit 130. The storage unit 140 may also store information on predetermined broadcast channels through a channel memory function such as a channel map.

Although the storage unit 140 of FIG. 2 is provided separately from the control unit 170, the scope of the present invention is not limited thereto. The storage unit 140 may be included in the controller 170.

The user input interface unit 150 transmits a signal input by the user to the control unit 170 or a signal from the control unit 170 to the user.

For example, the user input interface unit 150 may transmit user input signals from local keys (not shown), such as a power key, a channel key, a volume key, and a setting key, to the control unit 170, transmit a user input signal from the remote control apparatus 200 to the control unit 170, transmit a user input signal from a sensor unit (not shown) that senses a user's gesture to the control unit 170, or transmit a signal from the control unit 170 to the sensor unit (not shown).

The control unit 170 may demultiplex the stream input through the tuner unit 110, the demodulation unit 120, or the external device interface unit 130, process the demultiplexed signals, and generate and output signals for video or audio output.

The video signal processed by the controller 170 may be input to the display 180 and displayed as an image corresponding to the video signal. Also, the image signal processed by the controller 170 may be input to the external output device through the external device interface unit 130.

The audio signal processed by the control unit 170 may be output as sound through the audio output unit 185. The audio signal processed by the control unit 170 may also be input to an external output device through the external device interface unit 130.

Although not shown in FIG. 2, the controller 170 may include a demultiplexer, an image processor, and the like. This will be described later with reference to FIG.

In addition, the control unit 170 can control the overall operation of the image display apparatus 100. For example, the control unit 170 may control the tuner unit 110 to tune to the RF broadcast corresponding to a channel selected by the user or a pre-stored channel.

In addition, the controller 170 may control the image display apparatus 100 according to a user command or an internal program input through the user input interface unit 150.

Meanwhile, the control unit 170 may control the display 180 to display an image. At this time, the image displayed on the display 180 may be a still image or a moving image, and may be a 2D image or a 3D image.

Meanwhile, the controller 170 may generate a 3D object for a predetermined 2D object among the images displayed on the display 180, and display the 3D object. For example, the object may be at least one of a connected web screen (newspaper, magazine, etc.), EPG (Electronic Program Guide), various menus, widgets, icons, still images, moving images, and text.

Such a 3D object may be processed to have a different depth from the image displayed on the display 180. Preferably, the 3D object may be processed to appear to protrude relative to the image displayed on the display 180.

Meanwhile, the control unit 170 can recognize the position of the user based on an image captured by a photographing unit (not shown). For example, the distance (z-axis coordinate) between the user and the image display apparatus 100 can be determined, as can the x-axis and y-axis coordinates on the display 180 corresponding to the user's position.

Although not shown in the drawing, a channel browsing processing unit for generating thumbnail images corresponding to channel signals or external input signals may further be provided. The channel browsing processing unit receives the stream signal TS output from the demodulation unit 120 or the stream signal output from the external device interface unit 130, extracts images from the input stream signal, and generates thumbnail images. The generated thumbnail images may be input to the control unit 170 as they are, or after being encoded. The control unit 170 may display a thumbnail list containing a plurality of thumbnail images on the display 180 using the input thumbnail images.

At this time, the thumbnail list may be displayed in a simple viewing mode, shown in a partial area while a predetermined image is displayed on the display 180, or in a full viewing mode occupying most of the display 180. The thumbnail images in the thumbnail list can be updated sequentially.

The display 180 converts the video signal, data signal, and OSD signal processed by the control unit 170, or the video signal, data signal, and control signal received from the external device interface unit 130, into driving signals to generate a displayed image.

The display 180 may be a PDP, an LCD, an OLED display, a flexible display, or the like, and may also be capable of displaying 3D images.

To allow viewing of three-dimensional images, the display 180 may use either an additional display method or a single display method.

The single display method implements a 3D image on the display 180 alone, without a separate additional device such as glasses; various schemes can be applied, for example the lenticular method and the parallax barrier method.

The additional display method implements a 3D image using an additional device, the viewing device 195, in addition to the display 180; various schemes can be applied, for example head mounted display (HMD) types and glasses types.

The glasses type can be further divided into passive types, such as polarized glasses, and active types, such as shutter glasses. The head mounted display type can likewise be divided into passive and active types.

Meanwhile, the viewing device 195 may be 3D glasses enabling stereoscopic viewing. The 3D glasses 195 may be passive polarized glasses or active shutter glasses, and are described here as a concept that also includes the head mounted type.

For example, when the viewing device 195 is polarized glasses, the left eye glass can be implemented as a left eye polarizing glass and the right eye glass as a right eye polarizing glass. At this time, the display 180 may include a polarizing filter, for example, a film-type patterned retarder (FPR).

As another example, when the viewing apparatus 195 is a shutter glass, the left eye glass and the right eye glass can be alternately opened and closed.

Meanwhile, the display 180 may be configured as a touch screen and used as an input device in addition to an output device.

The audio output unit 185 receives the audio-processed signal from the control unit 170 and outputs it as sound.

A photographing unit (not shown) photographs the user. It may be implemented with a single camera, but the present invention is not limited thereto, and it may be implemented with a plurality of cameras. The photographing unit may be embedded in the image display apparatus 100 above the display 180, or disposed separately. Image information captured by the photographing unit may be input to the control unit 170.

The control unit 170 can detect the gesture of the user based on the images photographed by the photographing unit (not shown), the signals sensed by the sensor unit (not shown), or a combination thereof.

The remote control apparatus 200 transmits user input to the user input interface unit 150. To this end, the remote control apparatus 200 can use Bluetooth, RF (radio frequency) communication, infrared (IR) communication, UWB (Ultra Wideband), ZigBee, or the like. The remote control apparatus 200 can also receive the video, audio, or data signal output from the user input interface unit 150 and display it or output it as sound.

Meanwhile, the block diagram of the image display apparatus 100 shown in FIG. 2 illustrates one embodiment of the present invention. The components of the block diagram can be integrated, added, or omitted according to the specifications of the image display apparatus 100 as actually implemented. That is, two or more components may be combined into one, or one component may be subdivided into two or more, as necessary. The functions performed by each block are for describing the embodiments of the present invention, and the specific operations and devices do not limit the scope of the present invention.

Meanwhile, unlike FIG. 2, the image display apparatus 100 may omit the tuner unit 110 and the demodulation unit 120 and instead receive and play back video content through the network interface unit 135 or the external device interface unit 130.

Meanwhile, the image display apparatus 100 is an example of a video signal processing apparatus that performs signal processing on a stored or input image. Other examples of such a video signal processing apparatus include a set-top box lacking the display 180 and audio output unit 185 shown in FIG. 2, a DVD player, a Blu-ray player, a game console, a computer, and the like.

FIG. 3 is an internal block diagram of the control unit of FIG. 2, FIG. 4 is a diagram illustrating various formats of a 3D image, and FIG. 5 is a diagram illustrating the operation of a viewing apparatus according to the formats of FIG. 4.

Referring to FIG. 3, the control unit 170 according to an embodiment of the present invention may include a demultiplexing unit 310, an image processing unit 320, a processor 330, an OSD generating unit 340, a mixer 345, a frame rate conversion unit 350, and a formatter 360. It may further include an audio processing unit (not shown) and a data processing unit (not shown).

The demultiplexing unit 310 demultiplexes the input stream. For example, when an MPEG-2 TS is input, it can be demultiplexed into video, audio, and data signals. The stream signal input to the demultiplexing unit 310 may be a stream signal output from the tuner unit 110, the demodulation unit 120, or the external device interface unit 130.

The image processing unit 320 may perform image processing of the demultiplexed image signal. For this, the image processing unit 320 may include a video decoder 325 and a scaler 335.

The video decoder 325 decodes the demultiplexed video signal and the scaler 335 performs scaling so that the resolution of the decoded video signal can be output from the display 180.

The video decoder 325 can include a decoder of various standards.

On the other hand, the image signal decoded by the image processing unit 320 can be divided into a case where there is only a 2D image signal, a case where a 2D image signal and a 3D image signal are mixed, and a case where there is only a 3D image signal.

For example, an external video signal input from the external device 190, or the broadcast video signal of a broadcast signal received from the tuner unit 110, may contain only a 2D video signal, a mixture of 2D and 3D video signals, or only a 3D video signal. Accordingly, the control unit 170, in particular the image processing unit 320, can process these signals so that a 2D video signal, a mixed 2D/3D video signal, or a 3D video signal is output.

Meanwhile, the video signal decoded by the image processing unit 320 may be a 3D video signal in any of various formats. For example, it may be a 3D video signal composed of a color image and a depth image, or a 3D video signal composed of a plurality of viewpoint video signals. The plurality of viewpoint video signals may include, for example, a left eye video signal and a right eye video signal.

Referring to FIG. 4, the formats of the 3D video signal include a side-by-side format (FIG. 4a) in which the left eye video signal L and the right eye video signal R are arranged left and right, a top/down format (FIG. 4b) in which they are arranged up and down, a frame sequential format (FIG. 4c) in which they are arranged in a time division manner, an interlaced format (FIG. 4d) in which the left eye video signal and the right eye video signal are mixed line by line, and a checker box format (FIG. 4e) in which they are mixed box by box.
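For illustration, the sketch below packs a left eye frame and a right eye frame into several of these layouts using NumPy. It is a minimal sketch only: the half-resolution decimation policy and the function names are assumptions for clarity rather than the actual behavior of the formatter 360, and the checker box format is omitted for brevity.

```python
import numpy as np

def pack_3d(left: np.ndarray, right: np.ndarray, fmt: str) -> np.ndarray:
    """Arrange a left/right eye frame pair (H x W x 3) in a 3D transport layout.

    A minimal sketch of the layouts in FIG. 4; real formatters typically
    decimate each view to half resolution so the packed frame keeps H x W.
    """
    if fmt == "side_by_side":            # FIG. 4(a): L | R, each view half width
        half = lambda img: img[:, ::2]   # naive 2:1 horizontal decimation
        return np.hstack([half(left), half(right)])
    if fmt == "top_down":                # FIG. 4(b): L over R, each view half height
        half = lambda img: img[::2, :]
        return np.vstack([half(left), half(right)])
    if fmt == "interlaced":              # FIG. 4(d): alternate lines take L and R
        out = left.copy()
        out[1::2] = right[1::2]
        return out
    if fmt == "frame_sequential":        # FIG. 4(c): time division, two full frames
        return np.stack([left, right])   # displayed alternately, in sync with glasses
    raise ValueError(f"unknown format: {fmt}")

# usage: pack two dummy 1080p frames side by side
L = np.zeros((1080, 1920, 3), np.uint8)
R = np.full((1080, 1920, 3), 255, np.uint8)
print(pack_3d(L, R, "side_by_side").shape)   # (1080, 1920, 3)
```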

The processor 330 may control the overall operation of the image display apparatus 100 or of the control unit 170. For example, the processor 330 may control the tuner unit 110 to tune to the RF broadcast corresponding to a channel selected by the user or a pre-stored channel.

In addition, the processor 330 may control the image display apparatus 100 according to a user command input through the user input interface unit 150 or an internal program.

In addition, the processor 330 may perform data transfer control with the network interface unit 135 or the external device interface unit 130.

The processor 330 may control operations of the demultiplexing unit 310, the image processing unit 320, the OSD generating unit 340, and the like in the controller 170.

The OSD generation unit 340 generates an OSD signal according to a user input or by itself. For example, based on a user input signal, a signal for displaying various information in a graphic or text form on the screen of the display 180 can be generated. The generated OSD signal may include various data such as a user interface screen of the video display device 100, various menu screens, a widget, and an icon. In addition, the generated OSD signal may include a 2D object or a 3D object.

The OSD generating unit 340 can generate a pointer to be displayed on the display based on a pointing signal input from the remote control apparatus 200. In particular, such a pointer may be generated by a pointing signal processing unit (not shown), which may be included in the OSD generating unit 340 or provided separately from it.

The mixer 345 may mix the OSD signal generated by the OSD generator 340 and the decoded video signal processed by the image processor 320. At this time, the OSD signal and the decoded video signal may include at least one of a 2D signal and a 3D signal. The mixed video signal is supplied to a frame rate converter 350.

A frame rate converter (FRC) 350 can convert the frame rate of the input video. The frame rate converter 350 can also output the input video as-is, without frame rate conversion.

The formatter 360 may arrange the left eye video frames and the right eye video frames of the frame-rate-converted 3D video. It may also output a synchronization signal Vsync for opening the left eye glass or the right eye glass of the 3D viewing device 195.

The formatter 360 receives the mixed signal, i.e., the OSD signal and the decoded video signal, from the mixer 345, and separates the 2D video signal and the 3D video signal.

In this specification, a 3D video signal means a signal containing a 3D object. Examples of such objects include a picture-in-picture (PIP) image (still or moving), an EPG indicating broadcast program information, various menus, widgets, icons, text, objects within an image, people, backgrounds, and web screens (newspapers, magazines, etc.).

Meanwhile, the formatter 360 can change the format of the 3D video signal. For example, it can change it to any one of the various formats exemplified in FIG. 4. Accordingly, the glasses-type viewing apparatus operates according to that format, as shown in FIG. 5.

FIG. 5(a) illustrates the operation of the 3D glasses 195, in particular shutter glasses 195, when the formatter 360 arranges and outputs frames in the frame sequential format among the formats of FIG. 4.

That is, when the left eye image L is displayed on the display 180, the left eye glass of the shutter glasses 195 is opened and the right eye glass is closed; when the right eye image R is displayed, the left eye glass is closed and the right eye glass is opened.

Meanwhile, FIG. 5(b) illustrates the operation of the 3D glasses 195, in particular polarized glasses 195, when the formatter 360 arranges and outputs frames in the side-by-side format among the formats of FIG. 4. The 3D glasses 195 used in FIG. 5(b) may also be shutter glasses; in that case, the shutter glasses can operate like polarized glasses by keeping both the left eye glass and the right eye glass open.

Meanwhile, the formatter 360 may convert a 2D video signal into a 3D video signal. For example, according to a 3D image generation algorithm, edges or selectable objects may be detected in the 2D video signal, and the objects defined by the detected edges, or the selectable objects, may be separated out and generated as a 3D video signal. At this time, the generated 3D video signal can be separated into a left eye video signal L and a right eye video signal R, as described above.

Although not shown in the drawing, a 3D processor (not shown) for three-dimensional effect signal processing may further be disposed after the formatter 360. The 3D processor can adjust the brightness, tint, and color of the video signal to improve the 3D effect. For example, it can perform signal processing that makes near regions clear and far regions blurred. The functions of this 3D processor can also be merged into the formatter 360 or into the image processing unit 320, as will be described later with reference to FIG. 6.
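As a rough illustration of the "near regions clear, far regions blurred" processing mentioned above, the sketch below mixes a sharp frame with a blurred copy according to a per-pixel depth map. The 3x3 box blur and the linear mixing weights are illustrative assumptions, not the 3D processor's actual algorithm.

```python
import numpy as np

def depth_aware_blur(img: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Keep near regions sharp and blur far regions (illustrative only).

    img:   H x W grayscale frame, float values in [0, 1]
    depth: H x W map, larger value = closer to the viewer
    """
    # 3x3 box blur built from shifted copies (avoids external dependencies)
    padded = np.pad(img, 1, mode="edge")
    blurred = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            blurred += padded[1 + dy : 1 + dy + img.shape[0],
                              1 + dx : 1 + dx + img.shape[1]]
    blurred /= 9.0
    # nearness in [0, 1]: 1 = nearest (stays sharp), 0 = farthest (fully blurred)
    near = (depth - depth.min()) / (np.ptp(depth) + 1e-8)
    return near * img + (1.0 - near) * blurred
```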

Meanwhile, the audio processing unit (not shown) in the control unit 170 can perform the audio processing of the demultiplexed audio signal. To this end, the audio processing unit (not shown) may include various decoders.

In addition, the audio processing unit (not shown) in the control unit 170 can adjust bass, treble, volume, and the like.

The data processing unit (not shown) in the control unit 170 can perform data processing of the demultiplexed data signal. For example, if the demultiplexed data signal is an encoded data signal, it can be decoded. The encoded data signal may be EPG (Electronic Program Guide) information including broadcast information such as the start time and end time of the broadcast programs aired on each channel.

Although FIG. 3 shows the signals from the OSD generating unit 340 and the image processing unit 320 being mixed in the mixer 345 and then 3D-processed in the formatter 360, the present invention is not limited thereto, and the mixer may be located after the formatter. That is, the output of the image processing unit 320 may be 3D-processed by the formatter 360, the OSD generating unit 340 may perform 3D processing together with OSD generation, and the processed 3D signals may then be mixed by the mixer 345.

Meanwhile, the block diagram of the controller 170 shown in FIG. 3 is a block diagram for an embodiment of the present invention. Each component of the block diagram can be integrated, added, or omitted according to the specifications of the control unit 170 actually implemented.

In particular, the frame rate converter 350 and the formatter 360 may not be provided in the control unit 170 and may instead be provided separately.

FIG. 6 is a diagram illustrating various scaling methods of a 3D video signal according to an embodiment of the present invention.

Referring to the drawing, in order to enhance the three-dimensional effect, the control unit 170 may perform 3D effect signal processing. In particular, it can adjust the size or tilt of a 3D object in a 3D image.

A 3D video signal, or a 3D object 510 in a 3D video signal, can be enlarged or reduced as a whole at a certain ratio (512), as shown in FIG. 6(a). The 3D object may also be partially enlarged or reduced (trapezoidal shapes 514 and 516), as shown in FIG. 6(b) or 6(c). Furthermore, as shown in FIG. 6(d), at least a portion of the 3D object may be rotated (parallelogram shape 518). Such scaling or skew adjustment can emphasize the stereoscopic effect, that is, the 3D effect, of the 3D image or of a 3D object within it.

In this case, the greater the difference in length between the parallel sides of the trapezoidal shapes 514 and 516 of FIG. 6(b) or 6(c), or the greater the rotation of the parallelogram shape of FIG. 6(d), the more the stereoscopic effect is emphasized.

The size adjustment or tilt adjustment may be performed after the 3D video signal is arranged in a predetermined format by the formatter 360, or it may be performed by the scaler 335 in the image processing unit 320. Meanwhile, the OSD generating unit 340 may generate objects in the shapes illustrated in FIG. 6 to enhance the 3D effect of a generated OSD.

Although not shown in the drawing, signal processing for the 3D effect may also be performed by adjusting the brightness, tint, and color of a video signal or object. For example, it is possible to make near regions clear and far regions blurred. This 3D-effect signal processing may be performed in the control unit 170 or in a separate 3D processor. When performed in the control unit 170, it can be carried out in the formatter 360 or in the image processing unit 320, together with the size or tilt adjustment described above.

FIG. 7 is a view for explaining how images are formed by a left eye image and a right eye image, and FIG. 8 is a view for explaining depths of a 3D image according to an interval between a left eye image and a right eye image.

First, referring to FIG. 7, a plurality of images or a plurality of objects 615, 625, 635, and 645 are illustrated.

First, the first object 615 includes a first left eye image 611 (L) based on a first left eye image signal and a first right eye image 613 (R) based on a first right eye image signal, and the interval between the first left eye image 611 and the first right eye image 613 on the display 180 is d1. At this time, the user perceives an image as being formed at the intersection of the line extending from the left eye 601 through the first left eye image 611 and the line extending from the right eye 603 through the first right eye image 613. Accordingly, the user perceives the first object 615 as being located behind the display 180.

Next, the second object 625 includes a second left eye image 621 (L) and a second right eye image 623 (R) that are displayed overlapping each other on the display 180, so the interval between them is 0. Accordingly, the user perceives the second object 625 as being located on the display 180.

Next, the third object 635 includes a third left eye image 631 (L) and a third right eye image 633 (R), and the fourth object 645 includes a fourth left eye image 641 (L) and a fourth right eye image 643 (R), with intervals of d3 and d4, respectively.

According to the method described above, the user perceives the third object 635 and the fourth object 645 at the positions where their respective images are formed, which, in the drawing, are in front of the display 180.

At this time, the fourth object 645 is perceived as protruding further forward than the third object 635. This is because the interval d4 between the fourth left eye image 641 (L) and the fourth right eye image 643 (R) is larger than the interval d3 between the third left eye image 631 (L) and the third right eye image 633 (R).

Meanwhile, in the embodiments of the present invention, the distance between the display 180 and an object 615, 625, 635, or 645 as perceived by the user is expressed as a depth. The depth of an object perceived as lying behind the display 180 is given a negative value (-), and the depth of an object perceived as lying in front of the display 180 is given a positive value (+). That is, the more an object protrudes toward the user, the greater its depth.

Referring to FIG. 8, the interval a between the left eye image 701 and the right eye image 702 in FIG. 8(a) is smaller than the interval b between the left eye image 701 and the right eye image 702 in FIG. 8(b); accordingly, the depth a' of the 3D object in FIG. 8(a) is smaller than the depth b' of the 3D object in FIG. 8(b).

In this way, when a 3D image is composed of a left eye image and a right eye image, the position at which the image is perceived to form varies with the interval between the left eye image and the right eye image. Accordingly, by adjusting the displayed interval between the left eye image and the right eye image, the depth of a 3D image or 3D object composed of the two images can be adjusted.
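The dependence of the perceived position on the on-screen interval follows from similar triangles between the two eyes and the screen plane. The sketch below computes the perceived viewing distance under standard stereoscopic viewing geometry; the eye separation and viewing distance values are assumptions for illustration and are not specified by the patent.

```python
def perceived_depth(separation_mm: float,
                    eye_distance_mm: float = 65.0,
                    viewing_distance_mm: float = 2000.0) -> float:
    """Perceived distance from the viewer to a fused 3D point, in mm.

    separation_mm: on-screen interval from left eye image to right eye image
                   (positive = uncrossed, point appears behind the screen;
                    negative = crossed, point appears in front of the screen).
    From similar triangles: Z = D * e / (e - s).
    """
    e, d, s = eye_distance_mm, viewing_distance_mm, separation_mm
    if s >= e:
        raise ValueError("separation must be smaller than the eye distance")
    return d * e / (e - s)

print(perceived_depth(0.0))    # 2000.0 -> on the screen plane (second object 625)
print(perceived_depth(20.0))   # ~2888.9 -> behind the screen (first object 615)
print(perceived_depth(-20.0))  # ~1529.4 -> in front of the screen (objects 635, 645)
```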

FIG. 9 is a flowchart illustrating an operation method of an image display apparatus according to an embodiment of the present invention, and FIGS. 10 to 14B are views referred to in explaining various examples of the operation method of FIG. 9.

Referring to FIG. 9, the image display apparatus 100 receives a left eye image from a first source (S910). Then, the image display apparatus 100 receives the right eye image from the second source (S915).

The image display apparatus 100 can receive the left eye image through the tuner unit 110 and the right eye image through the network interface unit 135.

Alternatively, the image display apparatus 100 may receive the left eye image through the tuner unit 110 and the right eye image through the external device interface unit 130.

Alternatively, the image display apparatus 100 may receive the left eye image through the network interface unit 135 and the right eye image through the external device interface unit 130.

That is, the image display apparatus 100 can receive the left eye image and the right eye image through different sources or different routes.

Meanwhile, the encoding schemes of the left eye image and the right eye image may differ from each other. For example, the left eye image may be compressed and transmitted by a first encoding scheme, and the right eye image by a second encoding scheme.

Here, the first encoding scheme may be an MPEG-2 encoding scheme, and the second encoding scheme may be an H.264 encoding scheme.

Meanwhile, the image display apparatus 100 may receive both the left eye image and the right eye image through the tuner unit 110, with the left eye image encoded by the first encoding scheme (for example, MPEG-2) and the right eye image encoded by the second encoding scheme (for example, H.264).

Alternatively, the image display apparatus 100 may receive both the left eye image and the right eye image through the external device interface unit 130, with the left eye image encoded by the first encoding scheme (for example, MPEG-2) and the right eye image encoded by the second encoding scheme (for example, H.264).

In this specification, even when the left eye image and the right eye image are received through the same unit, they can be regarded as received from different sources if different encoding schemes are used.

As described above, when a 3D broadcast is transmitted over the air or via cable, the left eye image and the right eye image can be transmitted to the image display apparatus 100 through different sources.

FIG. 11A shows the image display apparatus 100 receiving a left eye image 1110 as a first input signal Sl from a first source and a right eye image 1115 as a second input signal Sr from a second source.

Next, the image display apparatus 100 generates a depth map based on the left eye image and the right eye image (S920), and stores the generated depth map (S925).

The control unit 170 compares the input left eye image and right eye image to calculate parallax (disparity) information, and generates a depth map corresponding to the calculated parallax information.

The control unit 170 can calculate the parallax information for each line or for each pixel. For example, the left eye image and the right eye image may be compared for each horizontal line, and the positional difference calculated as the parallax information; alternatively, the two images may be compared pixel by pixel. Here, the parallax information may take a negative (-) value for an object located farther back and a positive (+) value for an object located farther forward.

FIG. 11B illustrates that the parallax between corresponding objects in the left eye image 1110 and the right eye image 1115 is L1, corresponding to the difference between the positions P1 and P2.

The control unit 170 can generate a depth map based on the calculated parallax information.

The depth map is derived from the per-object parallax information and can be expressed with luminance information alone, without color information. That is, the higher the luminance level, the more the corresponding object protrudes; the lower the luminance level, the more it recedes.

FIG. 11C illustrates a depth map 1120 using parallax information between the left eye image 1110 and the right eye image 1115 of FIG. 11B. The generated depth map may be stored in the storage unit 140.
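The per-line comparison described above can be sketched as simple block matching: for each block on a horizontal line, the best-matching horizontal shift between the left eye and right eye images is taken as the parallax and written into a luminance-only depth map (brighter meaning more protruding). The block size, search range, and sum-of-absolute-differences matching are illustrative assumptions; the patent does not specify a particular matching algorithm.

```python
import numpy as np

def depth_map_from_stereo(left: np.ndarray, right: np.ndarray,
                          block: int = 8, max_disp: int = 32) -> np.ndarray:
    """Estimate a luminance-only depth map from a grayscale stereo pair.

    For each block on a horizontal line, find the horizontal shift that best
    matches the right image (sum of absolute differences), and store the
    shift as brightness: brighter = larger parallax = more protruding.
    """
    h, w = left.shape
    depth = np.zeros((h, w), np.uint8)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = left[y:y + block, x:x + block].astype(np.int32)
            best, best_d = None, 0
            for d in range(0, min(max_disp, x) + 1):   # search leftward shifts
                cand = right[y:y + block, x - d:x - d + block].astype(np.int32)
                sad = np.abs(patch - cand).sum()
                if best is None or sad < best:
                    best, best_d = sad, d
            depth[y:y + block, x:x + block] = best_d * 255 // max_disp
    return depth
```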

Next, the image display apparatus 100 displays the 3D image using the received left eye image and right eye image (S930).

In the case of the passive method, the control unit 170 synchronizes the received left eye image and right eye image, arranges them in any one of the side-by-side format of FIG. 4(a), the top/down format of FIG. 4(b), the interlaced format of FIG. 4(d), and the checker box format of FIG. 4(e), and displays the arranged left and right eye images simultaneously. Accordingly, the display 180 can display a 3D image in the passive manner.

Alternatively, in the case of the active method, the control unit 170 may synchronize the received left eye image and right eye image, arrange them in the frame sequential format of FIG. 4(c), and display the left eye image and the right eye image sequentially. Accordingly, the display 180 can display a 3D image in the active manner.

FIG. 11D illustrates the left eye image and the right eye image being displayed simultaneously in the passive manner. This enables a user wearing the polarized glasses 195 to perceive a stereoscopic effect from the 3D image 1130, which contains an object 1135 at a predetermined depth D1.

Next, the image display apparatus 100 determines whether an error of more than a tolerance value has occurred in either the left eye image or the right eye image (S935). If such an error has occurred, the image display apparatus 100 restores the lost image using the stored depth map (S940), and displays the 3D image using the restored image (S945).

The control unit 170 determines whether image loss has occurred in the received left eye image or right eye image; specifically, whether an error exceeding the tolerance value has occurred.

Here, an error of more than the tolerance value includes a loss error affecting a predetermined area or more of an image frame of the left eye image or the right eye image, and an error in which the difference between the reception times of the left eye image frame and the right eye image frame exceeds a predetermined time.
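A sketch of how these two tolerance conditions might be checked per frame pair is shown below; the EyeFrame structure, the 10% lost-area threshold, and the 50 ms reception-skew threshold are assumed values for illustration only.

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class EyeFrame:
    pixels: Optional[np.ndarray]   # None if the frame never arrived
    lost_mask: np.ndarray          # True where pixel data was lost
    recv_time_ms: float            # arrival timestamp

def error_exceeds_tolerance(left: EyeFrame, right: EyeFrame,
                            max_lost_ratio: float = 0.10,
                            max_skew_ms: float = 50.0) -> bool:
    """True if either eye frame has an error above the tolerance value:
    (a) a lost area at or above max_lost_ratio of the frame, or
    (b) a left/right reception-time difference of max_skew_ms or more."""
    for frame in (left, right):
        if frame.pixels is None:
            return True                                # whole frame lost
        if frame.lost_mask.mean() >= max_lost_ratio:   # area-loss error
            return True
    return abs(left.recv_time_ms - right.recv_time_ms) >= max_skew_ms
```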

FIG. 12A illustrates a case where, of the left eye image 1210 and the right eye image 1215 transmitted through different sources, the right eye image 1215 is lost and not received by the image display apparatus 100.

That is, as shown in FIG. 12B, the controller 170 receives only the left eye image 1210 and does not receive the right eye image 1215.

In this case, the control unit 170 can restore the lost right eye image using the depth map stored in the storage unit 140 and the left eye image 1210 in which no loss has occurred. That is, as shown in FIG. 12C(b), a reconstructed right eye image 1217 can be generated.

As the data used for the restoration, depth data in the stored depth map corresponding to the objects in the left eye image 1210 may be used. If there is no matching depth data, depth data of a previous frame or the like may be used; alternatively, average depth data over a predetermined period may be used.
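The restoration step can be sketched in the spirit of depth-image-based rendering: each pixel of the intact left eye image is shifted horizontally by the parallax recovered from the stored depth map to synthesize the lost right eye image, and disocclusion holes are filled from their neighbors. The depth-to-parallax scaling and the hole-filling rule are assumptions; the patent does not detail the warping method.

```python
import numpy as np

def restore_right_eye(left: np.ndarray, depth: np.ndarray,
                      max_disp: int = 32) -> np.ndarray:
    """Synthesize a lost right eye image from the intact left eye image
    and the stored luminance depth map (brighter = larger parallax)."""
    h, w = left.shape[:2]
    right = np.zeros_like(left)
    filled = np.zeros((h, w), bool)
    disp = (depth.astype(np.int32) * max_disp) // 255   # per-pixel parallax
    for y in range(h):
        for x in range(w):
            nx = x - disp[y, x]     # shift left-eye pixel to its right-eye position
            if 0 <= nx < w:
                right[y, nx] = left[y, x]
                filled[y, nx] = True
        for x in range(1, w):       # fill disocclusion holes from the left neighbor
            if not filled[y, x]:
                right[y, x] = right[y, x - 1]
    return right
```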

The control unit 170 can control the 3D image 1230 to be displayed, as shown in FIG. 12D, using the reconstructed right eye image 1217 and the intact left eye image 1210. Accordingly, the image display apparatus 100 displays the 3D image stably, and as a result, user convenience can be improved.

FIG. 13A illustrates a case where, of the left eye image 1310 and the right eye image 1315 received through different sources, the reception timing of the right eye image 1315 is delayed by a predetermined time or more. In the drawing, a delay of Tx is exemplified.

As shown in FIG. 13B, the control unit 170 cannot synchronize the left eye image 1310 and the right eye image 1315 for 3D display because of the Tx delay of the right eye image 1315.

In this case, the control unit 170 can restore the right eye image using the depth map stored in the storage unit 140 and the left eye image, setting aside the right eye image 1315 whose reception was delayed. That is, as shown in FIG. 13C(b), a reconstructed right eye image 1317 can be generated.

The control unit 170 can control the 3D image 1330 to be displayed, as shown in FIG. 13D, using the reconstructed right eye image 1317 and the intact left eye image 1310. Accordingly, the image display apparatus 100 displays the 3D image stably, and as a result, user convenience can be improved.

Next, FIG. 14A illustrates a case where, of the left eye image 1410 and the right eye image 1415 received through different sources, a loss occurs in a partial area of the right eye image 1415.

In this case, as shown in FIG. 14B, the control unit 170 can replace the lost area 1417 of the right eye image 1415 and restore the corresponding partial area 1419 using the depth map stored in the storage unit 140 and the left eye image 1410. That is, as shown in FIG. 14B(b), a reconstructed right eye image 1415 can be generated.

The control unit 170 can control the 3D image to be displayed using the reconstructed right eye image 1415 and the intact left eye image 1410. Accordingly, the image display apparatus 100 displays the 3D image stably, and as a result, user convenience can be improved.

Although not shown in the drawing, the control unit 170 of the image display apparatus 100 can stop the 3D image display when an error of more than the tolerance value occurs in both the received left eye image and right eye image. In this case, the images cannot be restored even using the depth map stored in the storage unit 140, so the 3D image display is stopped. At this time, a message indicating the interruption of the 3D image display may be shown on the display 180.

On the other hand, if it is determined in step S935 that no error of more than the tolerance value has occurred in the left eye image or the right eye image, step S950 may be performed. That is, the image display apparatus 100 displays the 3D image using the received left eye image and right eye image (S950). As shown in FIG. 11A, in the error-free case the 3D image can be displayed directly, without any restoration operation. In this case as well, the depth map can be generated using the parallax information of the left eye image and the right eye image.

In summary, in the embodiments of the present invention, when a loss occurs in either the left eye image or the right eye image received from different sources, the lost image is restored using the pre-stored depth map, and the 3D image is displayed using the restored image.

FIG. 10 shows this series of processes.

That is, during a first period T1 in which the left eye image Sl and the right eye image Sr are received through different sources without error, the storage unit 140 stores the depth map generated by the control unit 170, and the display 180 displays a normal 3D image.

Next, when a loss occurs in the right eye image Sr during a second period T2 between time Ta and time Tb, the control unit 170 restores the lost image using the depth map stored in the storage unit 140. The restored image is then displayed for a period T4 that is longer than the second period T2; this accounts for the delay, from time Tb, in displaying the normally received left eye and right eye images.

Meanwhile, since the left eye image Sl and the right eye image Sr are again received without loss during a third period T3 beginning at time Tb, the storage unit 140 stores the depth map newly generated by the control unit 170, and the display 180 displays a normal 3D image from time Tc onward. Through this process, even if a loss occurs in one of the images, the 3D image can be viewed stably.
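Putting the pieces together, the per-frame control flow of FIG. 9 and FIG. 10 can be sketched as below. The three callables stand in for the earlier sketches (the tolerance check, depth map generation, and restoration), and all names here are illustrative rather than the patent's actual module interfaces.

```python
def display_pipeline(frame_pairs, tolerance_check, make_depth_map, restore):
    """Per-frame control flow following FIG. 9 and FIG. 10 (steps S920-S950).

    frame_pairs: iterable of (left, right) EyeFrame-like objects; the three
    callables stand in for the sketches above. Yields what would be shown.
    """
    stored_depth = None
    for left, right in frame_pairs:
        if not tolerance_check(left, right):            # S935: no error
            stored_depth = make_depth_map(left.pixels, right.pixels)  # S920/S925
            yield ("3D", left.pixels, right.pixels)     # S930/S950: normal display
        elif left.pixels is not None and stored_depth is not None:
            restored = restore(left.pixels, stored_depth)             # S940
            yield ("3D-restored", left.pixels, restored)              # S945
        else:
            yield ("stopped", None, None)   # both eyes lost: stop 3D display
```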

The present invention is not limited to the configurations and methods of the embodiments described above; all or some of the embodiments may be selectively combined so that various modifications may be made.

Meanwhile, the operation method of the image display apparatus of the present invention can be implemented as processor-readable code on a processor-readable recording medium provided in the image display apparatus. The processor-readable recording medium includes all kinds of recording devices in which processor-readable data is stored. Examples include ROM, RAM, CD-ROM, magnetic tape, floppy disks, and optical data storage devices, and the medium may also be implemented in the form of carrier waves, such as transmission over the Internet. The processor-readable recording medium may also be distributed over network-connected computer systems so that processor-readable code can be stored and executed in a distributed fashion.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, the invention is not limited to the disclosed embodiments; it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the present invention.

Claims (15)

Receiving a left eye image from a first source;
Receiving a right eye image from a second source different from the first source;
Displaying a 3D image using the received left eye image and right eye image;
Reconstructing the lost image using a pre-stored depth map when an error of more than a tolerance value occurs in any one of the received left eye image and right eye image; And
And displaying the 3D image using the restored image.
The method according to claim 1, further comprising:
Generating the depth map based on the received left eye image and right eye image; And
Storing the generated depth map.
The method according to claim 1,
Wherein the period during which the 3D image is displayed using the restored image is longer than the period during which the error occurs.
The method according to claim 1,
Wherein, in the restoring step, the lost image is restored using the one of the received left eye image and right eye image in which no error has occurred, together with the depth data in the depth map.
The method according to claim 1,
Wherein, in the restoring step, the lost image is restored so as to have a parallax corresponding to the depth data.
The method according to claim 1,
Further comprising stopping the 3D image display when an error of more than a tolerance value occurs in both the received left eye image and right eye image.
The method according to claim 1,
Wherein the error of more than the tolerance value includes a loss error of a predetermined area or more of an image frame of the left eye image or the right eye image, or an error in which the difference between the reception times of the left eye image frame and the right eye image frame is equal to or longer than a predetermined time.
A tuner unit for receiving either the left eye image or the right eye image;
A network interface unit for receiving the other of the left eye image and the right eye image;
A display for displaying a 3D image using the received left and right eye images;
And a control unit which restores the lost image using a pre-stored depth map when an error of more than a tolerance value occurs in either the received left eye image or right eye image, and controls the 3D image to be displayed using the restored image.
The image display apparatus of claim 8,
Further comprising a storage unit,
Wherein the control unit generates the depth map based on the received left eye image and right eye image, and stores the generated depth map in the storage unit.
The image display apparatus of claim 8,
Wherein the period during which the 3D image is displayed using the restored image is longer than the period during which the error occurs.
The image display apparatus of claim 8,
Wherein the control unit restores the lost image using the one of the received left eye image and right eye image in which no error has occurred, together with the depth data in the depth map.
The image display apparatus of claim 8,
Wherein the control unit restores the lost image so as to have a parallax corresponding to the depth data.
The image display apparatus of claim 8,
Wherein the control unit stops the 3D image display when an error of more than a tolerance value occurs in both the received left eye image and right eye image.
The image display apparatus of claim 8,
Wherein the error of more than the tolerance value includes a loss error of a predetermined area or more of an image frame of the left eye image or the right eye image, or an error in which the difference between the reception times of the left eye image frame and the right eye image frame is equal to or longer than a predetermined time.
A broadcast receiver for receiving a left eye image encoded by a first encoding scheme and a right eye image encoded by a second encoding scheme;
A display for displaying a 3D image using the received left and right eye images; And
And a control unit which restores the lost image using a pre-stored depth map when an error of more than a tolerance value occurs in either the received left eye image or right eye image, and controls the 3D image to be displayed using the restored image.
KR1020120148721A 2012-12-18 2012-12-18 Image display apparatus, and method for operating the same KR20140079107A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020120148721A KR20140079107A (en) 2012-12-18 2012-12-18 Image display apparatus, and method for operating the same

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020120148721A KR20140079107A (en) 2012-12-18 2012-12-18 Image display apparatus, and method for operating the same

Publications (1)

Publication Number Publication Date
KR20140079107A true KR20140079107A (en) 2014-06-26

Family

ID=51130413

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020120148721A KR20140079107A (en) 2012-12-18 2012-12-18 Image display apparatus, and method for operating the same

Country Status (1)

Country Link
KR (1) KR20140079107A (en)


Legal Events

Date Code Title Description
WITN Withdrawal due to no request for examination