KR20110090640A - Image display device and operating method for the same - Google Patents

Image display device and operating method for the same

Info

Publication number
KR20110090640A
Authority
KR
South Korea
Prior art keywords
eye image
pixel
right eye
left eye
image
Prior art date
Application number
KR1020100010548A
Other languages
Korean (ko)
Inventor
주재현 (Ju Jae-hyun)
Original Assignee
LG Electronics Inc. (엘지전자 주식회사)
Priority date
Filing date
Publication date
Application filed by LG Electronics Inc. (엘지전자 주식회사)
Priority to KR1020100010548A
Publication of KR20110090640A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/40 Image enhancement or restoration by the use of histogram techniques
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N13/302 Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N13/31 Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using parallax barriers
    • H04N13/324 Colour aspects
    • H04N13/327 Calibration thereof
    • H04N13/361 Reproducing mixed stereoscopic images; Reproducing mixed monoscopic and stereoscopic images, e.g. a stereoscopic image overlay window on a monoscopic image background

Abstract

PURPOSE: An image display device and an operating method thereof are provided to improve the picture quality of a stereoscopic image using only the images already provided and the information contained in them. CONSTITUTION: Histograms of the left eye image and the right eye image are extracted (S810). It is determined whether the histogram of the left eye image and the histogram of the right eye image differ (S820). If they differ, the pixel values of corresponding pixels of the left eye image and the right eye image are corrected to be equal (S830). The corrected left eye image and right eye image are output (S840).
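The flow in the abstract (S810 to S840) can be sketched as follows. This is a minimal illustration under assumed conditions (8-bit grayscale NumPy arrays); the correction rule shown, averaging the differing corresponding pixels, is a stand-in, since the patent instead corrects both pixels toward whichever image has the better quality.

```python
import numpy as np

def correct_stereo_pair(left, right):
    """Sketch of the abstract's S810-S840 flow for two uint8 grayscale images."""
    # S810: extract histograms of the left eye and right eye images.
    hist_l, _ = np.histogram(left, bins=256, range=(0, 256))
    hist_r, _ = np.histogram(right, bins=256, range=(0, 256))

    # S820: determine whether the two histograms differ.
    if np.array_equal(hist_l, hist_r):
        return left, right  # S840: output the images unchanged.

    # S830: make the pixel values of corresponding pixels equal.
    # Averaging is an assumed placeholder for the patent's
    # "correct toward the better-quality image" rule.
    mean = ((left.astype(np.uint16) + right.astype(np.uint16)) // 2).astype(np.uint8)
    differ = left != right
    # S840: output the corrected left eye and right eye images.
    return np.where(differ, mean, left), np.where(differ, mean, right)
```

After the call, every coordinate at which the two input images disagreed holds the same value in both outputs, which is the property the claims require of the corresponding pixels.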

Description

Image Display Device and Operating Method for the Same

The present invention relates to an image display apparatus and an operating method thereof, and more particularly to an image display apparatus and an operating method for correcting a stereoscopic image.

An image display device is a device that displays images a user can watch. Through it, the user can watch broadcasts: the device displays, on its display, a broadcast selected by the user from among the broadcast signals transmitted by broadcast stations. Broadcasting is currently shifting from analog to digital worldwide.

Digital broadcasting refers to broadcasting that transmits digital video and audio signals. Digital broadcasts are more resistant to external noise than analog broadcasts, so they suffer less data loss, allow better error correction, and provide higher-resolution, clearer pictures. Unlike analog broadcasting, digital broadcasting also supports bidirectional services.

Recently, research on stereoscopic images, and on the various kinds of content they can deliver, has been active, and stereoscopic imaging technology has become increasingly common and practical in computer graphics as well as in many other environments and technologies. Digital broadcasting, described above, can also carry stereoscopic images, and devices for reproducing them are under development.

A stereoscopic image, however, is displayed using a left eye image and a right eye image, so its quality is fundamentally determined by the quality of those two images. In particular, when the left eye image and the right eye image differ in quality, gradation, or contrast ratio, the perceived quality of the stereoscopic image deteriorates.

Accordingly, an object of the present invention is to improve image quality when displaying a stereoscopic image. In particular, when the pixel values at the same point of the left eye image and the right eye image differ, both images are corrected to be equal based on whichever image has the better quality, so that image quality is improved using only the images already provided and the information they contain. The emphasis is on image gradation, contrast ratio, and sharpness.

In accordance with an aspect of the present invention, a method of displaying a stereoscopic image using a left eye image and a right eye image includes: extracting histograms of the left eye image and the right eye image; if the histogram of the left eye image and the histogram of the right eye image differ, correcting the pixel values of a corresponding pixel of the left eye image and a corresponding pixel of the right eye image to be equal; and outputting the corrected left eye image and right eye image. Here, the left eye corresponding pixel and the right eye corresponding pixel have the same coordinates but different pixel values within the left eye image and the right eye image, respectively.
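One concrete way to realize the claimed correction, sketched here as an assumption rather than the patent's exact algorithm, is histogram matching: remap the gray levels of one eye image so that its cumulative histogram follows that of the other image, treated here as the better-quality reference. The patent's criterion for choosing the reference image is not reproduced.

```python
import numpy as np

def match_histogram(src, ref):
    """Remap src's gray levels so its histogram follows ref's (CDF matching).

    src and ref are 8-bit grayscale images; ref is assumed to be the
    better-quality eye image.
    """
    hist_src, _ = np.histogram(src, bins=256, range=(0, 256))
    hist_ref, _ = np.histogram(ref, bins=256, range=(0, 256))
    cdf_src = np.cumsum(hist_src) / src.size
    cdf_ref = np.cumsum(hist_ref) / ref.size
    # For each source gray level, pick the reference level whose CDF
    # first reaches the source level's CDF value.
    lut = np.searchsorted(cdf_ref, cdf_src).clip(0, 255).astype(np.uint8)
    return lut[src]
```

Matching an image against itself leaves every occupied gray level unchanged, and matching against the reference equalizes the two distributions that the claim compares.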

In addition, an image display device according to an embodiment of the present invention, for displaying a stereoscopic image using a left eye image and a right eye image, includes: a controller that extracts histograms of the left eye image and the right eye image and, when the two histograms differ, corrects the pixel values of a corresponding pixel of the left eye image and a corresponding pixel of the right eye image to be equal; and a display unit that displays the corrected images. Here, the left eye corresponding pixel and the right eye corresponding pixel have the same coordinates but different pixel values within the left eye image and the right eye image, respectively.

According to exemplary embodiments of the present invention, a stereoscopic image may be provided to the user and its quality improved. By using the histograms and meta information contained in the left eye image and the right eye image to correct the lower-quality image based on the better-quality one, the quality of the stereoscopic image can be improved without obtaining additional information, particularly in terms of gradation, contrast ratio, and sharpness. In addition, the computation required for image correction can be minimized, and different calculation methods can be applied according to the characteristics of each pixel.

FIG. 1 is an internal block diagram of an image display apparatus according to an embodiment of the present invention.
FIG. 2 is a diagram illustrating an example of an external device connectable to the image display device of FIG. 1.
FIG. 3 is a diagram illustrating an example of an internal block diagram of the controller of FIG. 1.
FIG. 4 is a diagram illustrating separation of a 2D video signal and a 3D video signal in the formatter of FIG. 3.
FIG. 5 is a diagram illustrating various formats of 3D images output from the formatter of FIG. 3.
FIG. 6 is a diagram referred to in describing scaling of 3D images output from the formatter of FIG. 3.
FIGS. 7A to 7C illustrate examples of images displayed on the image display apparatus of FIG. 1.
FIG. 8 is a flowchart illustrating a method of operating an image display apparatus according to an exemplary embodiment.
FIG. 9 is a flowchart illustrating a method of operating an image display apparatus according to another exemplary embodiment.
FIG. 10 is a flowchart illustrating a method of operating an image display apparatus according to another exemplary embodiment.
FIGS. 11 and 12 are diagrams illustrating a left eye image, a right eye image, and their histograms before correction.
FIG. 13 is a diagram illustrating a left eye image, a right eye image, and histograms corrected by the image display apparatus and operating method described with reference to FIG. 10.

Hereinafter, the present invention will be described in more detail with reference to the drawings.

The suffixes "module" and "unit" used for components in the following description are assigned merely for ease of drafting this specification and do not by themselves carry any particular meaning or role. Accordingly, "module" and "unit" may be used interchangeably.

FIG. 1 is a block diagram illustrating an image display apparatus according to an exemplary embodiment of the present invention.

Referring to FIG. 1, the image display apparatus 100 according to an exemplary embodiment of the present invention includes a tuner 110, a demodulator 120, an external signal input/output unit 130, a storage unit 140, an interface unit 150, a sensing unit (not shown), a controller 170, a display 180, and a sound output unit 185.

The tuner 110 selects, from among the RF (Radio Frequency) broadcast signals received through an antenna, the RF broadcast signal corresponding to a channel selected by the user, or the signals of all pre-stored channels. The selected RF broadcast signal is converted into an intermediate frequency signal or a baseband video or audio signal.

For example, if the selected RF broadcast signal is a digital broadcast signal, it is converted into a digital IF signal (DIF); if it is an analog broadcast signal, it is converted into an analog baseband video or audio signal (CVBS/SIF). That is, the tuner 110 can process both digital and analog broadcast signals. The analog baseband video or audio signal (CVBS/SIF) output from the tuner 110 may be input directly to the controller 170.

The tuner 110 can also receive single-carrier RF broadcast signals according to the ATSC (Advanced Television Systems Committee) scheme, or multi-carrier RF broadcast signals according to the DVB (Digital Video Broadcasting) scheme.

Meanwhile, the tuner 110 may sequentially select the RF broadcast signals of all broadcast channels stored through the channel memory function from among the RF broadcast signals received through the antenna, and convert them into intermediate frequency signals or baseband video or audio signals. This is done to display, on the display 180, a thumbnail list including a plurality of thumbnail images corresponding to broadcast channels. Accordingly, the tuner 110 may receive the RF broadcast signal of the selected channel, or of all pre-stored channels, sequentially or periodically.

The demodulator 120 receives the digital IF signal DIF converted by the tuner 110 and performs a demodulation operation.

For example, when the digital IF signal output from the tuner 110 is of the ATSC scheme, the demodulator 120 performs 8-VSB (8-Vestigial Side Band) demodulation. The demodulator 120 may also perform channel decoding; to this end, it may include a Trellis decoder, a de-interleaver, and a Reed-Solomon decoder to perform Trellis decoding, de-interleaving, and Reed-Solomon decoding.

When the digital IF signal output from the tuner 110 is of the DVB scheme, the demodulator 120 performs COFDM (Coded Orthogonal Frequency Division Multiplexing) demodulation. The demodulator 120 may also perform channel decoding; to this end, it may include a convolutional decoder, a de-interleaver, and a Reed-Solomon decoder to perform convolutional decoding, de-interleaving, and Reed-Solomon decoding.

The demodulator 120 may perform demodulation and channel decoding and then output a stream signal (TS). The stream signal may be a multiplex of a video signal, an audio signal, and a data signal. For example, it may be an MPEG-2 TS (Transport Stream) in which an MPEG-2 standard video signal, a Dolby AC-3 standard audio signal, and the like are multiplexed. Specifically, an MPEG-2 TS packet consists of a 4-byte header and a 184-byte payload.
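The 4-byte header and 184-byte payload split mentioned above can be illustrated with a minimal transport stream packet parse. The field layout follows the MPEG-2 Systems standard (ISO/IEC 13818-1); this is a sketch, not the full demultiplexer performed by the controller 170.

```python
def parse_ts_header(packet: bytes):
    """Parse the 4-byte header of one 188-byte MPEG-2 TS packet."""
    if len(packet) != 188 or packet[0] != 0x47:  # sync byte is always 0x47
        raise ValueError("not a valid TS packet")
    pid = ((packet[1] & 0x1F) << 8) | packet[2]   # 13-bit packet identifier
    payload_unit_start = bool(packet[1] & 0x40)   # PUSI flag
    continuity_counter = packet[3] & 0x0F         # 4-bit continuity counter
    return {"pid": pid, "pusi": payload_unit_start,
            "cc": continuity_counter, "payload": packet[4:]}
```

A demultiplexer would route each packet by its PID to the video, audio, or data elementary stream, as the controller 170 does with the demultiplexed signals described below.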

Meanwhile, the demodulator 120 described above may be provided separately for the ATSC scheme and the DVB scheme, that is, as an ATSC demodulator and a DVB demodulator.

The stream signal output from the demodulator 120 may be input to the controller 170, which performs demultiplexing and signal processing and then outputs video to the display 180 and audio to the sound output unit 185.

The external signal input / output unit 130 may connect the external device to the image display device 100. To this end, the external signal input / output unit 130 may include an A / V input / output unit or a wireless communication unit.

The external signal input/output unit 130 connects, by wire or wirelessly, to external devices such as a DVD (Digital Versatile Disc) player, a Blu-ray player, a game device, a camera, a camcorder, or a computer (laptop). It transmits video, audio, or data signals input from the connected external device to the controller 170 of the image display apparatus 100, and may output video, audio, or data signals processed by the controller 170 to the connected external device.

To input video and audio signals from an external device to the image display device 100, the A/V input/output unit may include an Ethernet terminal, a USB terminal, a CVBS (Composite Video Banking Sync) terminal, a component terminal, an S-Video terminal (analog), a DVI (Digital Visual Interface) terminal, an HDMI (High Definition Multimedia Interface) terminal, an RGB terminal, a D-SUB terminal, and the like.

The wireless communication unit may perform wireless Internet access. The image display apparatus 100 may be connected to the wireless Internet through the wireless communication unit. For wireless Internet access, communication standards such as WLAN (Wi-Fi), WiBro (Wireless Broadband), WiMAX (World Interoperability for Microwave Access), and HSDPA (High Speed Downlink Packet Access) may be used.

In addition, the wireless communication unit may perform near field communication with other electronic devices. The image display device 100 may be connected to other electronic devices and networks according to communication standards such as Bluetooth, RFID (Radio Frequency Identification), IrDA (Infrared Data Association), UWB (Ultra Wideband), and ZigBee.

In addition, the external signal input/output unit 130 may be connected to various set-top boxes through at least one of the terminals described above to perform input/output operations with the set-top box.

For example, when the set-top box is a set-top box for IP (Internet Protocol) TV, video, audio, or data signals processed by the IPTV set-top box may be transmitted to the controller 170, and signals processed by the controller 170 may be delivered to the IPTV set-top box, enabling bidirectional communication.

Meanwhile, the above-mentioned IPTV may mean ADSL-TV, VDSL-TV, FTTH-TV, and the like according to the type of transmission network, and may include TV over DSL, Video over DSL, TV over IP (TVIP), Broadband TV (BTV), and the like. IPTV may also mean an Internet TV capable of accessing the Internet, or a full-browsing TV.

In addition, the external signal input/output unit 130 may be connected to a communication network capable of video or voice calls. The communication network may mean a broadcast-type communication network connected through a LAN, a public switched telephone network, a mobile communication network, or the like.

The storage 140 may store programs for signal processing and control in the controller 170, and may store signal-processed video, audio, or data signals.

In addition, the storage 140 may perform a function for temporarily storing an image, audio, or data signal input to the external signal input / output unit 130. In addition, the storage 140 may store information on a predetermined broadcast channel through a channel storage function.

The storage unit 140 may include at least one type of storage medium among a flash memory type, a hard disk type, a multimedia card micro type, a card-type memory (for example, SD or XD memory), RAM, and ROM (EEPROM, etc.). The image display apparatus 100 may reproduce files (video files, still image files, music files, document files, etc.) stored in the storage 140 and provide them to the user.

FIG. 1 illustrates an embodiment in which the storage unit 140 is provided separately from the controller 170, but the scope of the present invention is not limited thereto; the storage 140 may be included in the controller 170.

The interface unit 150 transmits signals input by the user to the controller 170, or transmits signals from the controller 170 to the user. For example, according to various communication schemes such as RF (Radio Frequency) communication and IR (infrared) communication, the interface unit 150 may receive user input signals for power on/off, channel selection, screen settings, and the like from the remote control apparatus 200, or transmit signals from the controller 170 to the remote control apparatus 200. The sensing unit (not shown) allows a user to input commands to the image display apparatus 100 without using the remote control apparatus 200; its detailed configuration will be described later.

The controller 170 demultiplexes the stream input through the tuner 110 and the demodulator 120, or through the external signal input/output unit 130, processes the demultiplexed signals, and generates and outputs signals for video or audio output. It may also control the overall operation of the image display apparatus 100. The controller 170 may control the image display apparatus 100 according to a user command input through the interface unit 150 or the sensing unit (not shown), or according to an internal program.

Although not shown in FIG. 1, the controller 170 may include a demultiplexer, an image processor, a voice processor, and the like.

The controller 170 may control the tuner 110 to select the RF broadcast corresponding to a channel selected by the user or to a pre-stored channel.

The controller 170 may demultiplex the received stream signal, for example an MPEG-2 TS, into video, audio, and data signals. Here, the stream signal input to the controller 170 may be a stream signal output from the tuner 110, the demodulator 120, or the external signal input/output unit 130.

In addition, the controller 170 may perform image processing on the demultiplexed video signal. For example, when the demultiplexed video signal is an encoded video signal, it may be decoded. In particular, when the demultiplexed video signal is a mixed signal of 2D and 3D video, or consists only of a 2D video signal or only of a 3D video signal, 2D or 3D video decoding may be performed according to the corresponding codec. The 2D or 3D image signal processing of the controller 170 will be described in detail with reference to FIG. 3 below.

In addition, the controller 170 may process brightness, tint and color of the image signal.

The image signal processed by the controller 170 may be input to the display 180 and displayed as an image corresponding to the image signal. In addition, the image signal processed by the controller 170 may be input to the external output device through the external signal input / output unit 130.

In addition, the controller 170 may perform audio processing on the demultiplexed audio signal. For example, when the demultiplexed audio signal is an encoded audio signal, it can be decoded. Specifically, when it is an MPEG-2 standard encoded audio signal, it may be decoded by an MPEG-2 decoder; when it is an MPEG-4 BSAC (Bit Sliced Arithmetic Coding) standard encoded audio signal according to the terrestrial DMB (Digital Multimedia Broadcasting) scheme, it may be decoded by an MPEG-4 decoder; and when it is an MPEG-2 AAC (Advanced Audio Codec) standard encoded audio signal according to the satellite DMB or DVB-H scheme, it may be decoded by an AAC decoder.

In addition, the controller 170 may process bass, treble, volume control, and the like.

The voice signal processed by the controller 170 may be output to the sound output unit 185. In addition, the voice signal processed by the controller 170 may be input to the external output device through the external signal input / output unit 130.

In addition, the controller 170 may perform data processing on the demultiplexed data signal. For example, when the demultiplexed data signal is an encoded data signal, it may be decoded. The encoded data signal may be EPG (Electronic Program Guide) information including broadcast information such as the start time and end time of the broadcast programs aired on each channel. For example, the EPG information may be ATSC-PSIP (ATSC Program and System Information Protocol) information in the ATSC scheme and DVB-SI (DVB Service Information) in the DVB scheme. The ATSC-PSIP or DVB-SI information may be carried in the aforementioned stream, that is, in the MPEG-2 TS.

In addition, the controller 170 may perform OSD (On Screen Display) processing. Specifically, based on at least one of the processed video signal, the processed data signal, or a user input signal received through the remote controller 200, the controller 170 may generate a signal for displaying various kinds of graphics or text on the screen of the display 180. The generated OSD signal may be input to the display 180 together with the processed video signal and data signal.

The signal generated for the graphic or text display described above may include various data such as a user interface screen, various menu screens, widgets, and icons of the image display apparatus 100.

Meanwhile, when generating an OSD signal, the controller 170 may implement the OSD signal as a 2D video signal or a 3D video signal. This will be described in detail with reference to FIG. 3 below.

In addition, the controller 170 may receive an analog baseband video/audio signal (CVBS/SIF) and process it. The analog baseband video/audio signal (CVBS/SIF) input to the controller 170 may be output from the tuner 110 or the external signal input/output unit 130. The processed video signal is input to the display 180 to display an image, and the processed audio signal is input to the sound output unit 185, for example a speaker, to output sound.

Although not shown in the drawing, a channel browsing processor for generating thumbnail images corresponding to channel signals or external input signals may further be provided. The channel browsing processor may receive the stream signal (TS) output from the demodulator 120 or the stream signal output from the external signal input/output unit 130, extract images from the input stream signal, and generate thumbnail images. The generated thumbnail images may be input to the controller 170 as they are or after being encoded, including encoded in stream form. The controller 170 may use the input thumbnail images to display, on the display 180, a thumbnail list including a plurality of thumbnail images.

The controller 170 may receive a signal transmitted from the remote controller 200 through the interface unit 150, determine from it the command the user entered on the remote control apparatus 200, and control the image display apparatus 100 accordingly. For example, when the user inputs a channel selection command, the controller 170 controls the tuner 110 so that the signal of the selected channel is input, processes the video, audio, or data signals of the selected channel, and outputs the channel information selected by the user through the display 180 or the sound output unit 185 together with the processed video or audio signal.

As another example, the user may input another type of video or audio output command through the remote control apparatus 200. The user may want to watch a camera or camcorder video signal input through the external signal input / output unit 130 instead of the broadcast signal. In this case, the controller 170 may output the video signal or the audio signal input through the external signal input / output unit 130 through the display 180 or the audio output unit 185.

The controller 170 may determine a user command input through a local key (not shown), one of the sensing units included in the image display apparatus 100, and control the image display apparatus 100 accordingly. For example, the user may input a power on/off command, a channel change command, or a volume change command through the local key, which may be a button or key formed on the image display apparatus 100. The controller 170 determines whether the local key has been operated and controls the image display apparatus 100 accordingly.

The display 180 converts the video signal, data signal, and OSD signal processed by the controller 170, or the video signal, data signal, and the like received from the external signal input/output unit 130, into R, G, and B signals to generate drive signals.

The display 180 may be a PDP, an LCD, an OLED display, a flexible display, or the like; in particular, according to an embodiment of the present invention, it is preferably capable of 3D display.

To this end, the display 180 may use either a standalone display method or an additional display method.

The standalone display method implements a 3D image with the display 180 alone, without an additional device such as glasses. For example, various methods such as the lenticular method and the parallax barrier method may be applied.

Meanwhile, the additional display method implements a 3D image using an additional device in addition to the display 180. For example, various methods such as a head mounted display (HMD) type and a glasses type may be applied; the glasses type includes polarized glasses, shutter glasses, spectral filter types, and the like.

The display 180 may be configured as a touch screen and used as an input device in addition to the output device.

The sound output unit 185 receives a signal processed by the controller 170, for example a stereo signal, a 3.1-channel signal, or a 5.1-channel signal, and outputs it as sound. The sound output unit 185 may be implemented with various types of speakers.

The remote control apparatus 200 transmits user input to the interface unit 150. To this end, it may use Bluetooth, RF (Radio Frequency) communication, IR (infrared) communication, UWB (Ultra Wideband), ZigBee, or the like. The remote control apparatus 200 may also receive video, audio, or data signals output from the interface unit 150 and display the video or output the audio on the remote control apparatus 200 itself.

Although not shown in the drawing, the image display apparatus 100 according to the embodiment of the present invention may further include a sensing unit (not shown). The sensing unit (not shown) may include a touch sensor, a voice sensor, a position sensor, an operation sensor, and the like.

The touch sensor may be a touch screen constituting the display 180. The touch sensor may sense a position or strength of the user's touch on the touch screen. The voice sensor may sense a user voice or various sounds generated by the user. The location sensor may sense the location of the user. The motion sensor may sense a gesture motion of the user. The position sensor or the motion sensor may be configured as an infrared sensor or a camera, and may sense a distance between the image display device 100 and the user, whether a user moves, a user's hand gesture, and the like.

Each of the sensors described above may transmit the result of sensing the user's voice, touch, location, or motion to a separate sensing signal processor (not shown), or may first interpret the sensing result, generate a corresponding sensing signal, and input it to the controller 170.

The sensing signal processor (not shown) may process a signal generated by the sensing unit (not shown) and transmit the signal to the controller 170.

The image display device 100 described above may be a fixed digital broadcast receiver capable of receiving at least one of ATSC (8-VSB) digital broadcasting, DVB-T (COFDM) digital broadcasting, and ISDB-T (BST-OFDM) digital broadcasting. It may also be a mobile digital broadcast receiver capable of receiving at least one of terrestrial DMB digital broadcasting, satellite DMB digital broadcasting, ATSC-M/H digital broadcasting, DVB-H (COFDM) digital broadcasting, and MediaFLO (Media Forward Link Only) digital broadcasting. It may also be a digital broadcast receiver for cable, satellite communication, or IPTV.

Meanwhile, the image display device described in this specification may include a TV receiver, a mobile phone, a smart phone, a notebook computer, a digital broadcasting terminal, a PDA (Personal Digital Assistant), a PMP (Portable Multimedia Player), and the like.

Meanwhile, the block diagram of the image display apparatus 100 shown in FIG. 1 is a block diagram for an embodiment of the present invention. Each component of the block diagram may be integrated, added, or omitted according to the specifications of the image display apparatus 100 as actually implemented; that is, two or more components may be combined into one, or one component may be subdivided into two or more, as needed. The functions performed in each block are for illustrating the embodiments of the present invention, and their specific operations and devices do not limit the scope of the present invention.

FIG. 2 is a diagram illustrating examples of external devices connectable to the image display device of FIG. 1. As shown, the image display device 100 according to the present invention may be connected to an external device by wire or wirelessly via the external signal input/output unit 130.

In the present embodiment, external devices that may be connected to the image display device 100 include, for example, a camera 211, a screen-type remote control device 212, a set-top box 213, a game device 214, a computer 215, a mobile communication terminal 216, and the like.

The image display device 100 may display a graphic user interface screen of an external device connected through the external signal input / output unit 130 on the display 180. The user may connect the external device to the image display apparatus 100 and watch the image being reproduced or stored in the external apparatus through the image display apparatus 100.

In addition, the image display apparatus 100 may output a voice being reproduced or stored in an external device connected through the external signal input / output unit 130 through the audio output unit 185.

Data of an external device connected through the external signal input/output unit 130, for example, a still image file, a video file, a music file, or a text file, may be stored in the storage 140 of the image display apparatus 100.

Even after the connection with the external device is released, the image display apparatus 100 may display a still image file, a video file, a music file, or a text file stored in the storage 140 on the display 180, or output it through the audio output unit 185.

When the video display device 100 is connected to the mobile communication terminal 216 or a communication network through the external signal input/output unit 130, it may display a screen for a video or voice call on the display 180 and may output the associated audio through the audio output unit 185. The user may thus make a video or voice call using the mobile communication terminal 216 or the communication network connected to the video display device 100.

FIG. 3 is a diagram illustrating an example of an internal block diagram of the controller of FIG. 1, FIG. 4 is a diagram illustrating separation of a 2D video signal and a 3D video signal in the formatter of FIG. 3, FIG. 5 is a diagram illustrating various formats of the 3D image output from the formatter of FIG. 3, and FIG. 6 is a diagram referred to in describing scaling of the 3D image output from the formatter of FIG. 3.

The controller 170 according to an embodiment of the present invention may include an image processor 310, a formatter 320, an OSD generator 330, and a mixer 340.

First, as shown in FIG. 3A, the controller 170 may separate and process multi-view images in the formatter 320 based on the image signal decoded by the image processor 310, and the mixer 340 may mix the multi-view video signal generated separately by the OSD generator 330 with the multi-view video signal output from the formatter 320.

The image processor 310 may process a video signal from a broadcast signal passed through the tuner 110 and the demodulator 120 or an external input signal passed through the external signal input / output unit 130.

Meanwhile, as described above, the signal input to the image processor 310 may be a signal obtained by demultiplexing a stream signal.

For example, when the demultiplexed video signal is an encoded 2D video signal of MPEG-2 standard, it may be decoded by an MPEG-2 decoder.

In addition, for example, when the demultiplexed 2D video signal is an encoded video signal of the H.264 standard according to DMB (Digital Multimedia Broadcasting) or DVB-H, it may be decoded by an H.264 decoder.

Also, for example, when the demultiplexed video signal is a depth image according to MPEG-C part 3, it may be decoded by an MPEG-C decoder. In this case, disparity information may also be decoded.

Also, for example, when the demultiplexed video signal is a multi-view video according to MVC (Multi-view Video Coding), it may be decoded by an MVC decoder.

Also, for example, when the demultiplexed video signal is a free view video according to a free-viewpoint TV (FTV), it may be decoded by an FTV decoder.

Meanwhile, the image signal decoded by the image processor 310 may be classified into a case in which only a 2D image signal is present, a case in which a 2D image signal and a 3D image signal are mixed, and a case in which only a 3D image signal is present.

The image signal decoded by the image processor 310 may be a 3D image signal having various formats as described above. For example, the image may be a 3D image signal including a color image and a depth image, or may be a 3D image signal including a plurality of view image signals. The plurality of viewpoint image signals may include, for example, a left eye image signal and a right eye image signal.

Here, as shown in FIG. 5, the formats of the 3D video signal include a side-by-side format (FIG. 5A) in which the left eye video signal L and the right eye video signal R are arranged left and right, a top/down format (FIG. 5B) in which they are arranged up and down, a frame sequential format (FIG. 5C) in which they are arranged by time division, an interlaced format (FIG. 5D) in which the left eye video signal and the right eye video signal are mixed line by line, and a checker box format (FIG. 5E) in which the left eye video signal and the right eye video signal are mixed box by box.

Meanwhile, when the decoded video signal includes a video signal related to a caption or data broadcast, the image processor 310 may separate that video signal and output it to the OSD generator 330. The video signal related to the caption or data broadcast may then be generated as a 3D object by the OSD generator 330.

The formatter 320 may receive the decoded video signal and separate the 2D video signal and the 3D video signal. The 3D video signal may be separated into video signals of multiple views. For example, it may be separated into a left eye image signal and a right eye image signal.

Whether the decoded video signal is a 2D video signal or a 3D video signal may be determined by referring to a 3D video flag, 3D video metadata, 3D video format information, or the like, contained in the header of the stream to indicate that the video signal is a 3D video.

The 3D image flag, the 3D image metadata, or the format information of the 3D image may include location information, area information, or size information of the 3D image in addition to the 3D image information.

Meanwhile, the 3D video flag, the 3D video metadata, or the format information of the 3D video may be decoded at the time of stream demultiplexing and input to the formatter 320.

The formatter 320 may separate the 3D video signal from the decoded video signal by using the 3D video flag, the 3D video metadata, or the format information of the 3D video.

The formatter 320 may generate multi-view 3D image signals by recombining a 3D image signal of a predetermined format based on the format information of the 3D image. For example, the 3D image signal may be separated into a left eye image signal and a right eye image signal.

The formatter 320 may determine the format of an input video signal by referring to a data signal related to the video signal, convert the input video signal into a format suitable for the display 180, and output the converted video signal to the display 180.

FIG. 4 shows that a 2D video signal and a 3D video signal are separated from the decoded video signal received from the image processor 310, and that the 3D video signal is recombined by the formatter 320 and separated into a left eye video signal and a right eye video signal.

First, as shown in FIG. 4A, when the first video signal 410 is a 2D video signal and the second video signal 420 is a 3D video signal, the formatter 320 may separate the first video signal 410 and the second video signal 420, and may further separate the second video signal into a left eye image signal 423 and a right eye image signal 426. Meanwhile, the first video signal 410 may correspond to the main image displayed on the display 180, and the second video signal 420 may correspond to a PIP image displayed on the display 180.

Next, as shown in FIG. 4B, when both the first video signal 410 and the second video signal 420 are 3D video signals, the formatter 320 may separate each of them, dividing the first video signal 410 and the second video signal 420 into left eye image signals 413 and 423 and right eye image signals 416 and 426, respectively.

Next, as shown in FIG. 4C, when the first video signal 410 is a 3D video signal and the second video signal 420 is a 2D video signal, the formatter 320 may separate the first video signal 410 into a left eye video signal 413 and a right eye video signal 416.

Next, as shown in FIGS. 4D and 4E, when one of the first video signal 410 and the second video signal 420 is a 3D video signal and the other is a 2D video signal, the 2D video signal may be converted into a 3D video signal. Such conversion may be performed according to a user's input.

For example, an edge may be detected within the 2D image signal according to a 3D image generation algorithm, and an object delimited by the detected edge may be separated out and generated as a 3D image signal. Alternatively, a selectable object may be detected within the 2D video signal according to the 3D image generation algorithm and separated out to generate a 3D image signal. In this case, the generated 3D image signal may be separated into a left eye image signal L and a right eye image signal R as described above. Meanwhile, the portion of the 2D image signal other than the object region generated as the 3D image signal may be output as a new 2D image signal.
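The text names a "3D image generation algorithm" without specifying it. As a toy illustration only, a stereo pair can be faked from a 2D image by shifting pixel columns horizontally; the uniform disparity, the edge padding, and the function name below are all assumptions for illustration, not the algorithm claimed in this document.

```python
def make_stereo_pair(image, disparity=2):
    """Create toy left/right eye views of a 2D image by a uniform
    horizontal shift of `disparity` pixels, padding the exposed edge
    by repeating the border column. `image` is a list of pixel rows.
    A sketch only; a real 2D-to-3D algorithm derives per-object
    disparity from detected edges/objects, which is not modeled here.
    """
    # Left eye view: scene shifted left, right border repeated.
    left = [row[disparity:] + row[-1:] * disparity for row in image]
    # Right eye view: scene shifted right, left border repeated.
    right = [row[:1] * disparity + row[:-disparity] for row in image]
    return left, right
```

For the single row `[1, 2, 3, 4, 5]` with disparity 2, this yields `[3, 4, 5, 5, 5]` for the left eye and `[1, 1, 1, 2, 3]` for the right eye, both the same width as the input.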

Meanwhile, when both the first video signal 410 and the second video signal 420 are 2D video signals, only one of the 2D video signals may be converted into a 3D video signal according to a 3D image generation algorithm, as shown in FIG. 4F. Alternatively, as shown in FIG. 4G, both 2D video signals may be converted into 3D video signals according to the 3D image generation algorithm.

Meanwhile, when a 3D image flag, 3D image metadata, or 3D image format information is present, the formatter 320 may recognize the 3D video signal using that information; when no such information is present, the formatter 320 may recognize the 3D video signal using a 3D image generation algorithm as described above.

Meanwhile, the 3D image signal output from the formatter 320 may be divided into a left eye image signal 413 or 423 and a right eye image signal 416 or 426, and may be output in any one of the formats illustrated in FIG. 5. The 2D video signal, in turn, may be output as-is without additional signal processing in the formatter 320, or may be converted into a corresponding form according to the format of the 3D video signal before being output.

FIG. 5 is a diagram referred to in describing the 3D video signal formats that may be generated by the formatter 320 of the present embodiment. The formatter 320 according to the present embodiment may output 3D video signals in various formats: a side-by-side format (a) in which the left eye video signal L and the right eye video signal R are arranged left and right, a top/down format (b) in which they are arranged up and down, a frame sequential format (c) in which they are arranged by time division, an interlaced format (d) in which the left eye video signal and the right eye video signal are mixed line by line, and a checker box format (e) in which the left eye video signal and the right eye video signal are mixed box by box.

Meanwhile, the user may select any one of the formats illustrated in FIG. 5 as the output format. For example, when the user selects the top/down format as the output format of the formatter 320, the formatter 320 recombines a 3D video signal received in another format, for example the side-by-side format, separates it into a left eye video signal and a right eye video signal, and outputs it in the top/down format.
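The side-by-side to top/down recombination described above can be sketched as follows, assuming a frame stored as a list of pixel rows whose left half is the left eye image and whose right half is the right eye image; the function name and data representation are illustrative, not the formatter 320's actual interface.

```python
def side_by_side_to_top_down(frame):
    """Rearrange a side-by-side 3D frame (left half = L, right half = R)
    into a top/down frame (upper half = L, lower half = R).

    `frame` is a list of pixel rows; the layout names follow FIG. 5.
    This is an illustrative sketch of the recombination, not the
    patent's implementation.
    """
    half = len(frame[0]) // 2
    left = [row[:half] for row in frame]    # left eye image L
    right = [row[half:] for row in frame]   # right eye image R
    return left + right                     # L stacked above R
```

For example, the 2x4 frame `[[1, 2, 9, 8], [3, 4, 7, 6]]` (L in columns 0-1, R in columns 2-3) becomes the 4x2 frame `[[1, 2], [3, 4], [9, 8], [7, 6]]`.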

Meanwhile, the 3D video signal input to the formatter 320 may be a broadcast video signal or an external input signal, and may be a 3D video signal having a predetermined depth. Accordingly, the formatter 320 may separate the 3D image signal having the corresponding depth into a left eye image signal and a right eye image signal.

Meanwhile, 3D image signals having different depths may be separated into different left eye image signals and right eye image signals due to depth differences. That is, the left eye image signal and the right eye image signal may be changed according to the depth of the 3D image signal.

Meanwhile, when the depth of the 3D video signal is changed according to a user's input or setting, the formatter 320 may separate the corresponding left eye video signal and the right eye video signal according to the changed depth.

In addition, the formatter 320 may scale the 3D video signal. Specifically, the 3D object in the 3D video signal may be scaled.

Referring to FIG. 6, the formatter 320 may scale the 3D image signal in various ways.

As shown in FIG. 6A, the 3D image signal or the 3D object in the 3D image signal may be enlarged or reduced as a whole at a predetermined ratio, and as shown in FIG. 6B, the 3D object may be partially enlarged or reduced into a trapezoidal shape. In addition, as shown in FIG. 6C, at least a part of the 3D object may be rotated into a parallelogram shape. Through such scaling, the stereoscopic effect, that is, the 3D effect, of the 3D image signal or of the 3D object in the 3D image signal can be emphasized.

Meanwhile, the 3D image signal of FIG. 6 may be a left eye image signal or a right eye image signal corresponding to the second image signal of FIG. 4A. That is, the image may be a left eye image signal or a right eye image signal corresponding to the PIP image.

As a result, the formatter 320 may receive the decoded video signal, separate the 2D video signal or the 3D video signal, and separate the 3D video signal into a left eye video signal and a right eye video signal. The left eye video signal and the right eye video signal may be scaled and output in a predetermined format as shown in FIG. 5. On the other hand, scaling may also be performed after the output format is formed.

The OSD generator 330 generates an OSD signal according to a user input or on its own. The generated OSD signal may include a 2D OSD object or a 3D OSD object.

Whether the object is a 2D OSD object or a 3D OSD object may be determined according to a user input or according to the size of the object or whether the object is a selectable object.

Unlike the formatter 320, which receives and processes the decoded video signal, the OSD generator 330 may generate and output a 2D OSD object or a 3D OSD object directly. Meanwhile, as shown in FIG. 6, the 3D OSD object may be scaled in various ways before output, and the depth at which the 3D OSD object is output may also vary.

As shown in FIG. 5, the output format may be any one of the various formats combining a left eye image and a right eye image. The output format here is the same as the output format of the formatter 320; for example, when the user selects the top/down format as the output format of the formatter 320, the output format of the OSD generator 330 is also set to the top/down format.

The OSD generator 330 may receive an image signal related to caption or data broadcast from the image processor 310 and output an OSD signal related to caption or data broadcast. The OSD signal at this time may be a 2D OSD object or a 3D OSD object as described above.

The mixer 340 mixes the video signal output from the formatter 320 and the OSD signal output from the OSD generator 330. The mixed image signal is input to the display 180.

On the other hand, the internal block diagram of the controller 170 may instead be configured as shown in FIG. 3B. The image processor 310, the formatter 320, the OSD generator 330, and the mixer 340 of FIG. 3B are similar to those of FIG. 3A; the differences are described below.

First, the mixer 340 mixes the decoded image signal from the image processor 310 with the OSD signal generated by the OSD generator 330. Since the output of the mixer 340 is then input to and processed by the formatter 320, it is sufficient for the OSD generator 330 of FIG. 3B to generate an OSD signal corresponding to a 3D object, unlike the OSD generator 330 of FIG. 3A, which generates the 3D object itself and outputs it in the corresponding format.

The formatter 320 receives the OSD signal and the decoded video signal, separates out the 3D video signal, and divides the 3D video signal into a plurality of view video signals as described above. For example, the 3D image signal may be divided into a left eye image signal and a right eye image signal, and the separated left eye image signal and right eye image signal may be scaled as shown in FIG. 6 and output in a predetermined format shown in FIG. 5.

Meanwhile, a block diagram of the controller 170 shown in FIG. 3 is a block diagram for one embodiment of the present invention. Each component of the block diagram may be integrated, added, or omitted according to the specification of the controller 170 that is actually implemented. That is, two or more constituent elements may be combined into one constituent element, or one constituent element may be constituted by two or more constituent elements, if necessary. In addition, the functions performed in each block are intended to illustrate the embodiments of the present invention, and the specific operations and apparatuses do not limit the scope of the present invention.

FIG. 7 is a diagram illustrating an example of an image displayed on the image display apparatus of FIG. 1.

Referring to the drawing, the image display apparatus 100 may display a 3D image to suit the top-down method among the 3D image formats shown in FIG. 5.

FIG. 7A illustrates an image displayed on the display 180 when image reproduction is stopped in the image display apparatus 100. The top/down format arranges a plurality of viewpoint images up and down, as shown. Therefore, when image reproduction is stopped, the images 351 and 352, separated into top and bottom, are displayed on the display 180 as shown in FIG. 7A.

On the other hand, when displaying a 3D image by an additional display method (a glasses type), such as polarized glasses, the displayed image does not come into focus for a user who is not wearing the polarized glasses or the like.

That is, as shown in FIG. 7B, when the screen is viewed without polarized glasses, the image 353 displayed on the display 180 and the 3D objects 353a, 353b, and 353c may appear out of focus.

FIG. 7C illustrates that, when the user wears polarized glasses while viewing the screen of the image display apparatus 100 illustrated in FIG. 7B, the displayed image 354 and the 3D objects 354a, 354b, and 354c come into focus. In this case, the 3D objects 354a, 354b, and 354c may appear to protrude toward the user.

Meanwhile, in an image display apparatus that displays a 3D image by a single display method (one requiring no separate glasses), the image and the 3D objects shown to the user may appear as in FIG. 7C even when the user does not wear polarized glasses.

On the other hand, an object in the present specification may include an image or text representing information about the video display device 100, such as the audio output level, channel information, or the current time, or representing information about an image displayed on the video display device 100.

For example, the object may be one of a volume control button, a channel control button, an image display control menu, an icon, a navigation tab, a scroll bar, a progress bar, a text box, and a window displayed on the display 180 of the image display apparatus 100.

Through such an object, the user may recognize information about the image display apparatus 100 or information about an image displayed on the image display apparatus 100. In addition, a command may be input to the image display apparatus 100 through the object displayed on the image display apparatus 100.

Meanwhile, in the present specification, the depth of a 3D object protruding toward the user is set to (+), the depth of a 2D image or 3D image displayed on the plane of the display 180 is set to 0, and the depth of a 3D object represented as recessed into the display 180 is set to (−). In short, the more an object protrudes toward the viewer, the larger its depth.

Meanwhile, the 3D object in the present specification is an object processed to have a three-dimensional effect, and includes an object having a three-dimensional effect or an object having a different depth by scaling illustrated in FIG. 6.

In FIG. 7C, an example of a 3D object is illustrated as a PIP image, but is not limited thereto, and may be an EPG representing broadcast program information, various menus, widgets, icons, etc. of an image display device.

FIG. 8 is a flowchart illustrating a method of operating an image display apparatus according to an exemplary embodiment of the present invention.

As described above, the image display apparatus according to an embodiment of the present invention generates a left eye image and a right eye image for displaying a stereoscopic image, or receives a left eye image signal for the left eye image and a right eye image signal for the right eye image. When the histograms of the left eye image and the right eye image differ, the image quality of the stereoscopic image may be improved by correcting pixel values based on the image having the better image quality.

To this end, the controller 170 first extracts a histogram of each of the left eye image and the right eye image (S810). A histogram generally refers to a graph that shows the distribution of observed data at a glance in order to represent a frequency distribution; in the exemplary embodiment of the present invention, the histogram is an image histogram, one example of such a graph. In the histograms described below, the x-axis represents pixel values from 0 to 255 and the y-axis represents the number of pixels; that is, the graph shows how many pixels of each pixel value a specific image contains. The overall variation of the image and the distribution of its color tone or contrast appear in the histogram. For example, an image whose histogram has the pixel count concentrated near pixel value 0 on the horizontal axis has an overall dark tone; conversely, an image whose histogram has a large number of pixels distributed near pixel value 255 has an overall light tone.
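The histogram extraction of step S810 can be sketched as follows, assuming an 8-bit grayscale image stored as a list of pixel rows (the representation and function name are illustrative assumptions; the patent does not fix either):

```python
def image_histogram(image):
    """Compute the 256-bin histogram of an 8-bit grayscale image
    (a sketch of step S810). `image` is a list of rows of pixel
    values in 0..255; bin i counts the pixels with value i."""
    hist = [0] * 256
    for row in image:
        for value in row:
            hist[value] += 1
    return hist
```

A histogram of the image `[[0, 0, 255], [128, 0, 255]]` has 3 pixels in bin 0, 1 in bin 128, and 2 in bin 255; the bins always sum to the total pixel count.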

When the histograms of the binocular images have been derived through the above process, the histogram of the left eye image is compared with the histogram of the right eye image (S820). If, as a result of the comparison, the histograms of the two images differ, pixel values are corrected at points that occupy the same position in the two images but have different pixel values (S830). Here, a point or pixel at the same position in the two images is referred to as a corresponding pixel; in particular, the corresponding pixel in the left eye image is referred to as the left eye image corresponding pixel, and the corresponding pixel in the right eye image is referred to as the right eye image corresponding pixel.

The left eye image corresponding pixel and the right eye image corresponding pixel are the two pixels that represent the same point of the stereoscopic image when the image is output. Accordingly, the left eye image corresponding pixel and the right eye image corresponding pixel have the same coordinate values in the left eye image and the right eye image, respectively. When the left eye image corresponding pixel and the right eye image corresponding pixel have different pixel values, these corresponding pixels become targets of pixel value correction, and a step of searching for such corresponding pixels may be added.

When a left eye image corresponding pixel and a right eye image corresponding pixel having different pixel values are found, the controller 170 corrects the pixel value of the left eye image corresponding pixel and/or the right eye image corresponding pixel so that the two become equal. More specifically, a demultiplexer (not shown) or the image processor 310 of the controller 170 separates the image signal from the data signal for the image, and the pixel value information of each pixel is included in the separated data signal. The data signal modified according to the pixel value correction is transmitted to the formatter 320. An image corrector (not shown) for correcting the pixel value may be separately provided. The formatter 320 receives the data signal including the corrected pixel value information together with the image signal, processes the image signal according to the standard of the display 180, and outputs the corrected left eye image and right eye image.

There are various possibilities for the values to which the pixel values of the left eye image corresponding pixel and the right eye image corresponding pixel are corrected. For example, the image quality of the stereoscopic image may be improved by correcting the pixel value of the corresponding pixel of one image based on whichever of the left eye image and the right eye image has the better image quality. Alternatively, both pixel values may be corrected to the average of the two pixel values. Methods of correcting the pixel value of a left eye image corresponding pixel or a right eye image corresponding pixel are described in detail later with reference to other embodiments.
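The averaging option mentioned above can be sketched as follows, assuming list-of-rows images of equal size; the integer rounding and function name are assumptions for illustration, not behavior mandated by this document.

```python
def correct_by_average(left, right):
    """Where corresponding pixels differ, set both to the arithmetic
    average of the two pixel values -- one of the correction options
    described in the text. Modifies both images in place; the
    floor-division rounding is an illustrative assumption."""
    for y, (lrow, rrow) in enumerate(zip(left, right)):
        for x, (lv, rv) in enumerate(zip(lrow, rrow)):
            if lv != rv:                 # only corresponding pixels that differ
                avg = (lv + rv) // 2
                left[y][x] = avg
                right[y][x] = avg
    return left, right
```

For example, with `left = [[10, 20]]` and `right = [[10, 30]]`, only the second pair differs, and both images become `[[10, 25]]`.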

The formatter 320 outputs the corrected left eye image signal and right eye image signal to the display 180. Then, the display 180 displays the corrected left eye image and the right eye image, thereby displaying a three-dimensional image (S840).

FIG. 9 is a flowchart illustrating a method of operating an image display apparatus according to another exemplary embodiment of the present invention.

When an image signal for displaying a stereoscopic image (here comprising a left eye image signal and a right eye image signal) is generated or received, the controller 170 compares the histograms of the left eye image and the right eye image (S910). When the histograms of the two images differ, the controller 170 selects whichever of the left eye image and the right eye image is better in at least one of gradation and contrast ratio (S920).

Gradation refers to the transition in density from the darkest part of an image to the lightest effective part. The better the gradation of an image, the more finely its levels of contrast are subdivided, so that the change from the dark to the bright parts of the image can be expressed naturally and sequentially. To compare the gradation of the left eye image and the right eye image, the gradation of each image may be converted into a numerical gradation value, and the gradation value of the left eye image may then be compared with that of the right eye image. In this case, the image having the higher gradation value may be determined to be the image with the better gradation.

Contrast ratio refers to the difference in brightness between the brightest and darkest parts of a picture. That is, the contrast ratio of an image represents the difference in luminance between its darkest (blackest) part and its brightest (whitest) part. Accordingly, the sharpness of the two images may look different depending on the contrast ratios of the left eye and right eye images.

That is, according to the exemplary embodiment described with reference to FIG. 9, pixel values may be corrected based on the image with the better gradation, or on the image with the higher sharpness according to contrast ratio. The pixel value of the left eye image corresponding pixel or the right eye image corresponding pixel is corrected based on the image having the higher gradation value. For example, when the gradation value or contrast ratio of the left eye image is higher, the pixel value of a right eye image corresponding pixel with a different pixel value is corrected to the pixel value of the left eye image corresponding pixel. Similarly, when the gradation value or contrast ratio of the right eye image is higher, the pixel value of the left eye image corresponding pixel is corrected to the pixel value of the right eye image corresponding pixel (S930). As a result, in the exemplary embodiment described with reference to FIG. 9, pixel value correction of corresponding pixels is performed on only one of the left eye image and the right eye image.
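Steps S920 and S930 can be sketched as follows. The text gives no exact formula for the contrast ratio, so the sketch approximates it as the spread between the brightest and darkest pixel values; that metric, the tie-breaking toward the left eye image, and the function names are all assumptions for illustration.

```python
def contrast_spread(image):
    """Approximate an image's contrast as max pixel value minus min
    pixel value (an assumed stand-in for the contrast ratio, which
    the text defines only as a luminance difference)."""
    values = [v for row in image for v in row]
    return max(values) - min(values)

def correct_toward_better_image(left, right):
    """Pick the image with the larger contrast spread as the reference
    (step S920, under the assumption above) and overwrite differing
    corresponding pixels in the other image only (step S930)."""
    if contrast_spread(left) >= contrast_spread(right):
        ref, target = left, right
    else:
        ref, target = right, left
    for y, row in enumerate(ref):
        for x, value in enumerate(row):
            if target[y][x] != value:
                target[y][x] = value   # correct only the worse image
    return left, right
```

With `left = [[0, 255]]` (spread 255) and `right = [[10, 200]]` (spread 190), the left eye image is chosen as the reference, and only the right eye image is corrected, to `[[0, 255]]`.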

When the pixel values of the left eye image or the right eye image are corrected, the display 180 displays the corrected left eye image or the right eye image (S940).

FIG. 10 is a flowchart illustrating a method of operating an image display apparatus according to another exemplary embodiment of the present invention.

When an image signal for displaying a stereoscopic image (here, an image signal that may include a left eye image signal and a right eye image signal) is generated or received, the controller 170 compares the histograms of the left eye image and the right eye image (S1010). When the histograms of the two images differ, the controller 170 corrects the pixel values of left eye image corresponding pixels and/or right eye image corresponding pixels that have different pixel values. To correct the pixel values, the controller 170 first searches for left eye image corresponding pixels and right eye image corresponding pixels having different values (S1020).

As a result of the search, the corresponding pixels found to have different pixel values, that is, the pixel value correction targets, may be corrected in different ways depending on the region in which the pixel value falls. Depending on the region, the correction is based on one of the pixel value of the left eye image corresponding pixel, the pixel value of the right eye image corresponding pixel, and the average of the two pixel values. As an example, assume that the total pixel value range of 0 to 255 is divided into three regions: 0 up to a first reference value, the first reference value up to a second reference value, and the second reference value up to 255. Into how many regions the total pixel value range is divided, and whether the correction method changes between regions, may vary between embodiments and does not limit the scope of the present invention.

The controller 170 determines whether the pixel value of the left eye image corresponding pixel or the right eye image corresponding pixel to be corrected is greater than or equal to 0 and smaller than the first reference value (S1030). In this case, the smaller of the pixel value of the left eye image corresponding pixel and the pixel value of the right eye image corresponding pixel serves as the reference; in other words, pixels in the dark portion of the image are corrected to the darker pixel value.

If the pixel values of the left eye image corresponding pixel and the right eye image corresponding pixel are greater than or equal to 0 and smaller than the first reference value, and, for example, the pixel value of the left eye image corresponding pixel is smaller than that of the right eye image corresponding pixel, the controller 170 corrects the pixel value of the right eye image corresponding pixel to the pixel value of the left eye image corresponding pixel. As a result, dark areas are displayed darker. When the pixel value of the right eye image corresponding pixel is the smaller one, the pixel value of the left eye image corresponding pixel is corrected according to the same rule (S1040).
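The dark-region rule above can be sketched as follows; this is illustrative only, and `REF1` is a hypothetical first reference value (the patent does not fix a number):

```python
REF1 = 85  # hypothetical first reference value

def correct_dark(left_val, right_val):
    """If both values fall in [0, REF1), correct both to the smaller value
    (cf. steps S1030-S1040); otherwise leave the pair unchanged here."""
    if 0 <= left_val < REF1 and 0 <= right_val < REF1:
        darker = min(left_val, right_val)
        return darker, darker
    return left_val, right_val
```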

When the pixel value of the left eye image corresponding pixel or the right eye image corresponding pixel to be corrected is not smaller than the first reference value, the controller 170 determines whether the pixel values to be corrected are greater than or equal to the first reference value and smaller than the second reference value (S1050). Of course, steps S1030, S1050, and S1080 need not depend on one another, and no particular order or precedence among them is required; the controller 170 may perform these determinations (S1030, S1050, and S1080) independently and simultaneously.

If the pixel value of the left eye image corresponding pixel or the right eye image corresponding pixel to be corrected is greater than or equal to the first reference value and smaller than the second reference value, the average of the pixel value of the left eye image corresponding pixel and the pixel value of the right eye image corresponding pixel serves as the reference for pixel value correction. In other words, when pixel values are corrected for pixels of medium brightness in the image, the brightness of this part is unified between the left eye image and the right eye image.

To this end, the method may further include calculating an average of the pixel value of the left eye image corresponding pixel and the pixel value of the right eye image corresponding pixel (S1060). The average may be an arithmetic mean, a geometric mean, a harmonic mean, or the like; there is no particular limitation on the method of calculating the average.
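Since the embodiment does not restrict the averaging method, the average of step S1060 could be computed in any of the named ways; a small sketch (names illustrative):

```python
import math

def mean_value(a, b, kind="arithmetic"):
    """Average of two corresponding pixel values (cf. step S1060).
    The embodiment allows any averaging method; three common ones are shown."""
    if kind == "arithmetic":
        return (a + b) / 2
    if kind == "geometric":
        return math.sqrt(a * b)
    if kind == "harmonic":
        return 2 * a * b / (a + b) if (a + b) else 0
    raise ValueError(f"unknown mean kind: {kind}")
```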

That is, when the pixel values of the left eye image corresponding pixel and the right eye image corresponding pixel are greater than or equal to the first reference value and smaller than the second reference value, the controller 170 corrects both the pixel value of the left eye image corresponding pixel and the pixel value of the right eye image corresponding pixel to the average value (S1070).

The controller 170 determines whether the pixel value of the left eye image corresponding pixel or the right eye image corresponding pixel to be corrected is greater than or equal to the second reference value (S1080), that is, whether the range of pixel values to be corrected is greater than or equal to the second reference value and less than or equal to 255. In this case, the controller 170 corrects the smaller pixel value to the larger of the pixel value of the left eye image corresponding pixel and the pixel value of the right eye image corresponding pixel (S1090). That is, if the pixel value of the left eye image corresponding pixel is larger, the pixel value of the right eye image corresponding pixel is corrected to it; if the pixel value of the right eye image corresponding pixel is larger, the pixel value of the left eye image corresponding pixel is corrected to it.

That is, if there is a difference between the left eye image and the right eye image in a brightly displayed portion of the image, the pixel value of the other image is corrected based on the brighter side for that portion. As a result, the overall contrast ratio of the screen may increase, and the sharpness or contrast of the screen may improve. The display 180 may then display a stereoscopic image using the corrected left eye image and the corrected right eye image (S1095).
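The complete correction rule of FIG. 10 (steps S1030-S1090) can be summarized in one sketch: dark pairs take the minimum, mid-brightness pairs take the average, bright pairs take the maximum. `REF1` and `REF2` are hypothetical reference values, and pairs whose two values straddle different regions are left unchanged here, since the patent does not specify that case:

```python
import numpy as np

REF1, REF2 = 85, 170  # hypothetical first and second reference values

def correct_pair(left, right):
    """Per-pixel correction of corresponding pixels:
    both in [0, REF1)   -> smaller value  (darker)
    both in [REF1, REF2)-> arithmetic mean
    both in [REF2, 255] -> larger value   (brighter)"""
    left = left.astype(np.float64)
    right = right.astype(np.float64)
    out_l, out_r = left.copy(), right.copy()
    diff = left != right
    dark = diff & (left < REF1) & (right < REF1)
    bright = diff & (left >= REF2) & (right >= REF2)
    mid = (diff & (left >= REF1) & (left < REF2)
                & (right >= REF1) & (right < REF2))
    lo = np.minimum(left, right)
    hi = np.maximum(left, right)
    avg = (left + right) / 2
    for out in (out_l, out_r):
        out[dark] = lo[dark]
        out[mid] = avg[mid]
        out[bright] = hi[bright]
    return out_l, out_r
```

After this correction the two images have identical values at every corrected coordinate, which is what makes their histograms coincide as described for FIG. 13.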

FIGS. 11 and 12 are diagrams illustrating a left eye image, a right eye image, and their histograms before correction. FIG. 13 is a diagram illustrating the left eye image, the right eye image, and their histograms after correction by the image display apparatus and operation method according to the exemplary embodiment described with reference to FIG. 10.

FIG. 11 illustrates a left eye image before extraction of the left eye image corresponding pixels and correction of pixel values, together with the histogram of the left eye image before correction. The left eye first region 1110 and the left eye second region 1115 are shown in the histogram of the left eye image. FIG. 12 illustrates a right eye image before extraction of the right eye image corresponding pixels and correction of pixel values, together with the histogram of the right eye image before correction. The right eye first region 1210 and the right eye second region 1215 are shown in the histogram of the right eye image. The left eye first region 1110 and the right eye first region 1210 represent the histogram regions for pixels having a pixel value greater than or equal to 0 and less than or equal to the first reference value, and the left eye second region 1115 and the right eye second region 1215 represent the histogram regions for pixels having a pixel value greater than or equal to the second reference value and less than or equal to 255.

FIG. 13 illustrates the left/right eye image and its histogram after the pixel values of corresponding pixels in the left eye image and the right eye image have been corrected to be equal. Since the left eye image and the right eye image are identical after the correction, they are not illustrated separately. FIG. 13 also shows a corrected first region 1310, the histogram region for pixels of the corrected image having a pixel value less than or equal to the first reference value, and a corrected second region 1315, the histogram region for pixels having a pixel value greater than or equal to the second reference value.

That is, the left eye first region 1110, the right eye first region 1210, and the corrected first region 1310 are histograms of pixels representing dark portions of the stereoscopic image, while the left eye second region 1115, the right eye second region 1215, and the corrected second region 1315 are histograms of pixels representing bright portions of the stereoscopic image.

Comparing the left eye first region 1110 and the right eye first region 1210, it can be seen that there are more pixels having a pixel value below the first reference value in the left eye image. However, according to the exemplary embodiment described with reference to FIG. 10, the pixel value of the corresponding pixel having the lower pixel value among the left eye image corresponding pixel and the right eye image corresponding pixel having the pixel value less than or equal to the first reference value becomes the reference for correction. Therefore, for pixels having a pixel value less than or equal to the first reference value, the pixel value of the pixel corresponding to the left eye image becomes a reference value for correction. Therefore, the dark portion of the corrected image and its histogram, that is, the corrected first region 1310 are corrected close to the left eye first region 1110.

Similarly, when comparing the left eye second region 1115 and the right eye second region 1215, it can be seen that there are more pixels having a pixel value greater than or equal to the second reference value in the right eye image. According to the exemplary embodiment described with reference to FIG. 10, the higher pixel value among the left eye image corresponding pixel and the right eye image corresponding pixel becomes the reference for correction for pixels having a pixel value greater than or equal to the second reference value. Therefore, for such pixels, the pixel value of the right eye image corresponding pixel becomes the reference value for correction, and the bright part of the corrected image and its histogram, that is, the corrected second region 1315, is corrected to be closer to the right eye second region 1215.

As a result, it can be expected that the corrected first region of the corrected image follows the left eye first region, that the corrected second region follows the right eye second region, and that the contrast ratio of the corrected image shown in the histogram becomes more pronounced due to the pixel value correction.

110: tuner unit
120: demodulator
130: external signal input and output unit
140: storage unit
150: interface unit
170: controller
180: display

Claims (18)

In the operation method of the image display device for displaying a three-dimensional image using the left eye image and the right eye image,
Extracting histograms of the left eye image and the right eye image;
Correcting pixel values of a left eye image corresponding pixel of the left eye image and a right eye image corresponding pixel of the right eye image if the histogram of the left eye image is different from the histogram of the right eye image; And
And outputting a left eye image and a right eye image in which the pixel values are corrected.
And the left eye image corresponding pixel and the right eye image corresponding pixel have the same coordinate values and different pixel values in the left eye image and the right eye image, respectively.
The method of claim 1,
Further comprising, after comparing the histogram of the left eye image and the histogram of the right eye image, finding the left eye image corresponding pixel and the right eye image corresponding pixel in the left eye image and the right eye image, respectively, using the histogram of the left eye image and the histogram of the right eye image.
The method of claim 1,
Correcting the pixel value,
And correcting one of the pixel value of the pixel corresponding to the left eye image and the pixel value of the pixel corresponding to the right eye image to the other pixel value.
The method of claim 3,
Wherein one of the pixel value of the pixel corresponding to the left eye image and the pixel value of the pixel corresponding to the right eye image is corrected based on the image having a higher gray level value among the left eye image and the right eye image.
The method of claim 3,
And correcting one of a pixel value of the pixel corresponding to the left eye image or a pixel value of the pixel corresponding to the right eye image based on an image having a higher contrast ratio between the left eye image and the right eye image.
The method of claim 1,
Correcting the pixel value,
And correcting the pixel value based on one of the pixel value of the left eye image corresponding pixel, the pixel value of the right eye image corresponding pixel, and the average of the pixel value of the left eye image corresponding pixel and the pixel value of the right eye image corresponding pixel, according to the pixel value of the left eye image corresponding pixel and the pixel value of the right eye image corresponding pixel.
The method of claim 6,
Correcting the pixel value,
And, when the pixel value of the left eye image corresponding pixel and the pixel value of the right eye image corresponding pixel are greater than or equal to 0 and smaller than a first reference value, correcting at least one of the pixel value of the left eye image corresponding pixel and the pixel value of the right eye image corresponding pixel to the smaller of the two pixel values.
The method of claim 6,
Correcting the pixel value,
And, when the pixel value of the left eye image corresponding pixel and the pixel value of the right eye image corresponding pixel are greater than or equal to the first reference value and smaller than a second reference value, correcting the pixel values of the left eye image corresponding pixel and the right eye image corresponding pixel to the average of the two pixel values.
The method of claim 6,
Correcting the pixel value,
And, when the pixel value of the left eye image corresponding pixel and the pixel value of the right eye image corresponding pixel are greater than or equal to the second reference value and less than or equal to 255, correcting at least one of the pixel value of the left eye image corresponding pixel and the pixel value of the right eye image corresponding pixel to the larger of the two pixel values.
An image display apparatus for displaying a stereoscopic image using a left eye image and a right eye image,
A controller which extracts histograms of the left eye image and the right eye image and, when the histogram of the left eye image and the histogram of the right eye image are different, corrects pixel values of the left eye image corresponding pixel of the left eye image and the right eye image corresponding pixel of the right eye image; And
And a display unit for outputting a left eye image and a right eye image in which the pixel values are corrected.
And the left eye image corresponding pixel and the right eye image corresponding pixel have the same coordinate values and different pixel values in the left eye image and the right eye image, respectively.
The image display apparatus of claim 10,
Wherein the controller compares the histogram of the left eye image and the histogram of the right eye image, and finds the left eye image corresponding pixel and the right eye image corresponding pixel in the left eye image and the right eye image, respectively, using the histogram of the left eye image and the histogram of the right eye image.
The method of claim 10,
Wherein the controller corrects one of the pixel value of the pixel corresponding to the left eye image and the pixel value of the pixel corresponding to the right eye image to the other pixel value.
The image display apparatus of claim 12,
Wherein the controller corrects one of the pixel value of the pixel corresponding to the left eye image and the pixel value of the pixel corresponding to the right eye image based on the image having a higher gray level value among the left eye image and the right eye image.
The image display apparatus of claim 12,
And the control unit corrects one of the pixel value of the pixel corresponding to the left eye image or the pixel value of the pixel corresponding to the right eye image based on an image having a higher contrast ratio among the left eye image and the right eye image.
The image display apparatus of claim 10,
Wherein the controller corrects the pixel value based on one of the pixel value of the left eye image corresponding pixel, the pixel value of the right eye image corresponding pixel, and the average of the pixel value of the left eye image corresponding pixel and the pixel value of the right eye image corresponding pixel, according to the pixel value of the left eye image corresponding pixel and the pixel value of the right eye image corresponding pixel.
The image display apparatus of claim 15,
Wherein, when the pixel value of the left eye image corresponding pixel and the pixel value of the right eye image corresponding pixel are greater than or equal to 0 and smaller than a first reference value, the controller corrects at least one of the pixel value of the left eye image corresponding pixel and the pixel value of the right eye image corresponding pixel to the smaller of the two pixel values.
The image display apparatus of claim 15,
Wherein, when the pixel value of the left eye image corresponding pixel and the pixel value of the right eye image corresponding pixel are greater than or equal to the first reference value and smaller than a second reference value, the controller corrects the pixel values of the left eye image corresponding pixel and the right eye image corresponding pixel to the average of the two pixel values.
The image display apparatus of claim 15,
Wherein, when the pixel value of the left eye image corresponding pixel and the pixel value of the right eye image corresponding pixel are greater than or equal to the second reference value and less than or equal to 255, the controller corrects at least one of the pixel value of the left eye image corresponding pixel and the pixel value of the right eye image corresponding pixel to the larger of the two pixel values.
KR1020100010548A 2010-02-04 2010-02-04 Image display device and operating method for the same KR20110090640A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020100010548A KR20110090640A (en) 2010-02-04 2010-02-04 Image display device and operating method for the same

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020100010548A KR20110090640A (en) 2010-02-04 2010-02-04 Image display device and operating method for the same

Publications (1)

Publication Number Publication Date
KR20110090640A true KR20110090640A (en) 2011-08-10

Family

ID=44928385

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020100010548A KR20110090640A (en) 2010-02-04 2010-02-04 Image display device and operating method for the same

Country Status (1)

Country Link
KR (1) KR20110090640A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101581586B1 (en) 2014-10-13 2016-01-11 이화여자대학교 산학협력단 Compensation method for noise of depth image
US9253467B2 (en) 2012-11-09 2016-02-02 Electronics And Telecommunications Research Institute Method and apparatus for correcting errors in multiple stream-based 3D images
KR101878184B1 (en) * 2012-03-08 2018-07-13 엘지디스플레이 주식회사 Method for detecting jagging area and jagging area detection device
KR101988551B1 (en) * 2018-01-15 2019-06-12 충북대학교 산학협력단 Efficient object detection and matching system and method using stereo vision depth estimation


Similar Documents

Publication Publication Date Title
US8896672B2 (en) Image display device capable of three-dimensionally displaying an item or user interface and a method for operating the same
KR101647722B1 (en) Image Display Device and Operating Method for the Same
US8872900B2 (en) Image display apparatus and method for operating the same
KR101611263B1 (en) Apparatus for displaying image and method for operating the same
KR101349276B1 (en) Video display device and operating method therefor
KR20110053734A (en) Image display device and operating method for the same
KR20110120825A (en) Display device and method for outputting of audio signal
US20120242808A1 (en) Image display apparatus and method for operating the same
KR20110086415A (en) Image display device and operation controlling method for the same
KR101635567B1 (en) Apparatus for displaying image and method for operating the same
KR20110082380A (en) Apparatus for displaying image and method for operating the same
KR20120034996A (en) Image display apparatus, and method for operating the same
KR20110090640A (en) Image display device and operating method for the same
KR20110111136A (en) Image display device and operating method for the same
KR101730424B1 (en) Image display apparatus and method for operating the same
KR20120062428A (en) Image display apparatus, and method for operating the same
KR101176500B1 (en) Image display apparatus, and method for operating the same
KR101657564B1 (en) Apparatus for displaying image and method for operating the same
KR20120029783A (en) Image display apparatus and method for operating the same
KR101638536B1 (en) Image Display Device and Controlling Method for the Same
KR20110134087A (en) Image display apparatus and method for operating the same
KR101716144B1 (en) Image display apparatus, and method for operating the same
KR101737367B1 (en) Image display apparatus and method for operating the same
KR20120034836A (en) Image display apparatus, and method for operating the same
KR101730423B1 (en) Apparatus for displaying image and method for operating the same

Legal Events

Date Code Title Description
WITN Withdrawal due to no request for examination