KR20130028345A - Jagging detection method and stereoscopic image display device using the same

Jagging detection method and stereoscopic image display device using the same

Info

Publication number
KR20130028345A
Authority
KR
South Korea
Prior art keywords
jagging
luminance
area
color difference
detecting
Prior art date
Application number
KR1020110091833A
Other languages
Korean (ko)
Inventor
강석주
김수형
임형섭
Original Assignee
LG Display Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LG Display Co., Ltd.
Priority to KR1020110091833A
Publication of KR20130028345A

Classifications

    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/20 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes, for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • G09G3/34 - Control of the assembly by control of light from an independent source
    • G09G3/36 - Control of the assembly by control of light from an independent source using liquid crystals
    • G09G3/3611 - Control of matrices with row and column drivers
    • G09G3/3696 - Generation of voltages supplied to electrode drivers
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 - Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 - Processing image signals
    • H04N13/122 - Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 - Image reproducers
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2320/00 - Control of display operating conditions
    • G09G2320/02 - Improving the quality of display appearance

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Chemical & Material Sciences (AREA)
  • Crystallography & Structural Chemistry (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Control Of Indicators Other Than Cathode Ray Tubes (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

PURPOSE: A jagging detection method and a stereoscopic image display device using the same are provided to detect a jagging area from a pixel map generated by detecting a predetermined black gradation area in the luminance and chrominance information extracted from input image data.
CONSTITUTION: A luminance and chrominance information extraction unit extracts luminance and chrominance information from 3D image data input in the 3D mode (S101, S103). A luminance and chrominance information analysis unit generates a pixel map by detecting a predetermined black gradation area in the extracted luminance and chrominance information (S104). A jagging detection unit detects a jagging area by shifting a P×Q mask over the generated pixel map (S105). A jagging improvement unit improves the detected jagging area.
[Reference numerals] (AA) Detecting a jagging area by shifting a P×Q mask over the pixel map pixel by pixel; (BB) Improving the jagging of the detected jagging area; (S101) 3D mode?; (S102) Directly outputting input 2D image data; (S103) Extracting the luminance and chrominance information of input 3D image data; (S104) Generating a pixel map by detecting a predetermined black gradation area from the luminance and chrominance information

Description

JAGGING DETECTION METHOD AND STEREOSCOPIC IMAGE DISPLAY DEVICE USING THE SAME

The present invention relates to a jagging detection method and a pattern retarder type stereoscopic image display apparatus using the same.

Stereoscopic image display apparatuses are divided into those using a binocular parallax technique and those using an autostereoscopic technique. The binocular parallax method uses left-eye and right-eye parallax images, which give a large stereoscopic effect, and is implemented either with or without glasses; both approaches are in practical use. The glasses-type methods include a pattern retarder method, in which the left and right parallax images are displayed with different polarization directions on a direct-view display device or a projector and a stereoscopic image is implemented using polarized glasses, and a shutter glasses method, which time-divisionally displays the left and right parallax images on a direct-view display device or a projector and implements a stereoscopic image using liquid crystal shutter glasses. The autostereoscopic method generally uses an optical plate, such as a parallax barrier or a lenticular lens, to separate the optical axes of the parallax images and realize a stereoscopic image.

FIG. 1 is a view showing a pattern retarder type stereoscopic image display apparatus. Referring to FIG. 1, a liquid crystal display that implements a stereoscopic image using the pattern retarder method realizes the stereoscopic image using the polarization characteristics of a patterned retarder PR disposed on a display panel DIS and the polarization characteristics of polarizing glasses PG worn by a user. The pattern retarder type stereoscopic image display apparatus displays left eye images on odd lines of the display panel DIS and right eye images on even lines. The left eye image of the display panel DIS is converted into left eye polarized light when passing through the pattern retarder PR, and the right eye image is converted into right eye polarized light when passing through the pattern retarder PR. The left eye polarizing filter of the polarizing glasses PG passes only the left eye polarized light, and the right eye polarizing filter passes only the right eye polarized light. Therefore, the user sees only the left eye image through the left eye, and only the right eye image through the right eye.

The pattern retarder type stereoscopic image display device shown in FIG. 1 displays the left eye image on odd lines and the right eye image on even lines in the 3D mode, so that a boundary of the image may not look smooth but instead appear like a staircase; this artifact is called jagging. To improve the jagging, a method of first detecting the jagging area and then improving the detected jagging area has been used. Conventionally, jagging is detected using an edge detection algorithm such as one based on a Sobel mask. However, when jagging is detected using an edge detection algorithm, not only the jagging region but also the edge region may be detected at the same time. That is, the accuracy of detecting the jagging region is lowered.

The present invention provides a jagging detection method capable of accurately detecting only the jagging region, excluding the edge region, and a stereoscopic image display apparatus using the same.

The jagging detection method of the present invention includes extracting luminance and color difference information of 3D image data input in the 3D mode; generating a pixel map by detecting a predetermined black gray area from the luminance and color difference information; and detecting a jagging area by shifting a P × Q (P, Q is a natural number) mask in units of pixels over the pixel map.

A stereoscopic image display device according to the present invention includes a display panel in which data lines and gate lines cross; an image processor which detects jagging by analyzing luminance and chrominance information of input image data and improves the jagging of the detected jagging area; a data driver converting the image data output from the image processor into data voltages and outputting the data voltages to the data lines; and a gate driver configured to sequentially output gate pulses synchronized with the data voltages to the gate lines, wherein the image processor includes: a luminance and color difference information extraction unit configured to extract luminance and color difference information of the 3D image data input in the 3D mode; a luminance and color difference information analyzer configured to generate a pixel map by detecting a predetermined black tone region from the luminance and color difference information; and a jagging detector configured to detect a jagging area by shifting a P × Q (P, Q is a natural number) mask on a pixel basis over the pixel map.

According to the present invention, luminance and color difference information are extracted from input image data, and a predetermined black gray area is detected from the extracted luminance and color difference information. As a result, the present invention can accurately detect only the jagging region, excluding the edge region. In addition, the present invention can improve image quality by improving the jagging of the detected jagging region.

FIG. 1 is a view showing a stereoscopic image display apparatus of the pattern retarder method.
FIG. 2 is a block diagram schematically illustrating a stereoscopic image display device according to an exemplary embodiment of the present invention.
FIG. 3 is an exploded perspective view illustrating a display panel, a pattern retarder, and polarizing glasses.
FIG. 4 is a block diagram illustrating an image processor in detail.
FIG. 5 is a flowchart illustrating an image processing method of the image processor.
FIGS. 6A and 6B are diagrams illustrating a predetermined black gradation region in luminance and color difference information.
FIG. 7 is an exemplary view showing a P × Q mask according to an embodiment of the present invention.
FIG. 8 is a diagram illustrating a left eye image seen through a left eye and a right eye image seen through a right eye in a pattern retarder method.

Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings. Like reference numerals throughout the specification denote substantially identical components. In the following description, a detailed description of known functions and configurations incorporated herein will be omitted when it may make the subject matter of the present invention rather unclear. Component names used in the following description may be selected in consideration of ease of specification, and may be different from actual product part names.

FIG. 2 is a block diagram schematically illustrating a stereoscopic image display device according to an exemplary embodiment of the present invention. FIG. 3 is an exploded perspective view illustrating a display panel, a pattern retarder, and polarizing glasses. Referring to FIGS. 2 and 3, the stereoscopic image display apparatus of the present invention includes a display panel 10, polarizing glasses 20, a gate driver 110, a data driver 120, a timing controller 130, an image processor 140, and a host system 150. The stereoscopic image display device of the present invention may be implemented as a liquid crystal display (LCD), a field emission display (FED), a plasma display panel (PDP), an organic light emitting diode (OLED) display, or the like. Although the present invention is exemplified by a liquid crystal display device in the following embodiments, it should be noted that the present invention is not limited to the liquid crystal display device.

The display panel 10 displays an image under the control of the timing controller 130. In the display panel 10, a liquid crystal layer is formed between two substrates. On the lower substrate of the display panel 10, the data lines D and the gate lines G (or scan lines) intersect each other, and a TFT array in which pixels are arranged in a matrix form in the cell regions defined by the data lines D and the gate lines G is formed. Each pixel of the display panel 10 is connected to a thin film transistor and is driven by an electric field between the pixel electrode and the common electrode.

A color filter array including a black matrix, color filters, a common electrode, and the like is formed on the upper substrate of the display panel 10. The common electrode is formed on the upper substrate in vertical electric field driving methods such as the twisted nematic (TN) mode and the vertical alignment (VA) mode, and is formed on the lower substrate together with the pixel electrode in horizontal electric field driving methods such as the In Plane Switching (IPS) mode and the Fringe Field Switching (FFS) mode. The liquid crystal mode of the display panel 10 may be implemented in any liquid crystal mode, not only the above-described TN, VA, IPS, and FFS modes.

As the display panel 10, a transmissive liquid crystal display panel that modulates light from a backlight unit may be selected. The backlight unit includes light sources that are turned on in accordance with a driving current supplied from a backlight unit driver, a light guide plate (or diffusion plate), and a plurality of optical sheets. The backlight unit may be implemented as a direct type backlight unit or an edge type backlight unit. The light sources of the backlight unit may include one, or two or more, of a hot cathode fluorescent lamp (HCFL), a cold cathode fluorescent lamp (CCFL), an external electrode fluorescent lamp (EEFL), and a light emitting diode (LED).

The backlight unit driver generates a driving current for turning on the light sources of the backlight unit. The backlight unit driver switches the driving current supplied to the light sources on and off under the control of a backlight controller. The backlight controller outputs, to the backlight unit driver, backlight control data in which the backlight brightness and the lighting timing are adjusted according to a global/local dimming signal input from the host system, in the SPI (Serial Peripheral Interface) data format.

Referring to FIG. 3, an upper polarizer 11a is attached to an upper substrate of the display panel 10, and a lower polarizer 11b is attached to a lower substrate. The light transmission axis r1 of the upper polarizing plate 11a and the light transmission axis r2 of the lower polarizing plate 11b are orthogonal to each other. In addition, an alignment layer for setting a pre-tilt angle of the liquid crystal is formed on the upper substrate and the lower substrate. A spacer for maintaining a cell gap of the liquid crystal layer is formed between the upper substrate and the lower substrate of the display panel 10.

In the 2D mode, pixels of odd lines and pixels of even lines of the display panel 10 display a 2D image. In the 3D mode, pixels of the odd lines of the display panel 10 display a left eye image (or right eye image) and pixels of even lines represent a right eye image (or left eye image). Light of the image displayed on the pixels of the display panel 10 is incident on the patterned retarder 30 disposed on the display panel 10 through the upper polarizing film.

First retarders 31 are formed in odd lines of the pattern retarder 30, and second retarders 32 are formed in even lines. Accordingly, the pixels of the odd lines of the display panel 10 face the first retarders 31 formed in the odd lines of the pattern retarder 30, and the pixels of the even lines of the display panel 10 face the second retarders 32 formed in the even lines of the pattern retarder 30.

The first retarder 31 delays the phase value of the light from the display panel 10 by +λ/4 (λ is the wavelength of light). The second retarder 32 delays the phase value of the light from the display panel 10 by -λ/4. The optical axis r3 of the first retarder 31 and the optical axis r4 of the second retarder 32 are orthogonal to each other. The first retarder 31 of the pattern retarder 30 may be implemented to pass only first circularly polarized light (left circularly polarized light), and the second retarder 32 may be implemented to pass only second circularly polarized light (right circularly polarized light).

The left eye polarization filter of the polarizing glasses 20 has the same optical axis as the first retarder 31 of the pattern retarder 30. The right eye polarization filter of the polarizing glasses 20 has the same optical axis as the second retarder 32 of the pattern retarder 30. For example, the left eye polarization filter of the polarizing glasses 20 may be selected as a left circular polarization filter, and the right eye polarization filter of the polarizing glasses 20 may be selected as a right circular polarization filter.

As a result, in the pattern retarder type stereoscopic image display apparatus, the left eye image displayed on the pixels of the odd lines of the display panel 10 is converted into left circularly polarized light through the first retarder 31, and the right eye image displayed on the pixels of the even lines is converted into right circularly polarized light through the second retarder 32. The left circularly polarized light reaches the user's left eye through the left eye polarization filter of the polarizing glasses 20, and the right circularly polarized light reaches the user's right eye through the right eye polarization filter of the polarizing glasses 20. Therefore, the user sees only the left eye image through the left eye, and only the right eye image through the right eye.

The data driver 120 includes a plurality of source drive ICs. The source drive ICs convert 2D / 3D image data (RGB 2D / RGB 3D ') input from the timing controller 130 into positive / negative gamma compensation voltages to generate positive / negative analog data voltages. The positive / negative analog data voltages output from the source drive ICs are supplied to the data lines D of the display panel 10.

The gate driver 110 sequentially supplies gate pulses synchronized with the data voltages to the gate lines G of the display panel 10 under the control of the timing controller 130. The gate driver 110 may be composed of a plurality of gate drive integrated circuits, each including a shift register, a level shifter for converting an output signal of the shift register into a swing width suitable for driving the TFTs of the liquid crystal cells, and the like. Alternatively, the gate driver 110 may be formed directly on the lower substrate of the display panel 10 by using a gate drive IC in panel (GIP) method. In the GIP method, the level shifter may be mounted on a printed circuit board (PCB), and the shift register may be formed on the lower substrate of the display panel 10.

The timing controller 130 outputs the gate driver control signal GCS to the gate driver 110 and the data driver control signal DCS to the data driver 120, based on the 2D/3D image data RGB2D/RGB3D', the timing signals, and the mode signal MODE output from the image processor 140. The timing signals include a vertical synchronization signal, a horizontal synchronization signal, a data enable signal, a dot clock, and the like. The gate driver control signal GCS includes a gate start pulse, a gate shift clock, a gate output enable signal, and the like. The gate start pulse controls the timing of the first gate pulse. The gate shift clock is a clock signal for shifting the gate start pulse. The gate output enable signal controls the output timing of the gate driver 110.

The data driver control signal DCS includes a source start pulse, a source sampling clock, a source output enable signal, a polarity control signal, and the like. The source start pulse controls the data sampling start time of the data driver 120. The source sampling clock is a clock signal that controls the sampling operation of the data driver 120 based on its rising or falling edge. If the digital video data to be input to the data driver 120 is transmitted using the mini LVDS (Low Voltage Differential Signaling) interface standard, the source start pulse and the source sampling clock may be omitted. The polarity control signal inverts the polarity of the data voltages output from the data driver 120 in units of L (L is a natural number) horizontal periods. The source output enable signal controls the output timing of the data driver 120.

The host system 150 supplies 2D / 3D image data (RGB 2D / RGB 3D ) to the image processor 140 through an interface such as a low voltage differential signaling (LVDS) interface and a transition minimized differential signaling (TMDS) interface. In addition, the host system 150 supplies timing signals, a mode signal MODE, and the like to the image processor 140. The mode signal MODE occurs at a high or low logic level depending on whether it is a 2D or 3D mode.

The image processor 140 can distinguish the 2D mode from the 3D mode according to the mode signal MODE. The image processor 140 outputs the 2D image data RGB2D input from the host system 150 to the timing controller 130 as it is in the 2D mode. In the 3D mode, the image processor 140 detects the jagging area of the 3D image data RGB3D input from the host system 150 and then improves the jagging. The image processor 140 converts the jagging-improved 3D image data into the 3D format of the pattern retarder method and outputs it. Alternatively, the image processor 140 may first convert the 3D image data RGB3D input from the host system 150 into the 3D format of the pattern retarder method in the 3D mode, and then detect the jagging area and improve the jagging. The image processor 140 outputs, to the timing controller 130, the 3D image data RGB3D' in which the jagging has been improved and which has been converted into the 3D format in the 3D mode. The method by which the image processor 140 detects and improves the jagging is described in detail later with reference to FIGS. 4 and 5.

FIG. 4 is a block diagram illustrating the image processor in detail. FIG. 5 is a flowchart illustrating an image processing method of the image processor. Referring to FIG. 4, the image processor 140 includes a luminance and chrominance information extractor 141, a luminance and chrominance information analyzer 142, a jagging detector 143, and a jagging enhancer 144. Referring to FIGS. 4 and 5, the luminance and chrominance information extractor 141, the luminance and chrominance information analyzer 142, the jagging detector 143, and the jagging enhancer 144 of the image processor 140 are described in detail below.

First, the luminance and color difference information extractor 141 receives the 2D/3D image data RGB2D/RGB3D and the mode signal MODE from the host system 150. The luminance and color difference information extractor 141 may determine whether the current mode is the 2D mode or the 3D mode according to the mode signal MODE. The 2D image data RGB2D is input in the 2D mode, and the 3D image data RGB3D is input in the 3D mode. The luminance and color difference information extractor 141 outputs the 2D image data RGB2D input in the 2D mode to the timing controller 130 as it is. (S101, S102)

Secondly, the luminance and color difference information extractor 141 extracts luminance and color difference information Y, Cb, and Cr of the 3D image data RGB 3D input in the 3D mode. The luminance and color difference information extractor 141 extracts luminance and color difference information Y, Cb, and Cr for each pixel data. The luminance and color difference information extracting unit 141 may extract luminance information Y as shown in Equation 1, and extract color difference information Cb and Cr as shown in Equations 2 and 3. Equations 1 to 3 are just examples showing conversion equations in case of JPEG conversion, and a conversion method using other equations may be used.

[Equation 1]
Y = 0.299 × R + 0.587 × G + 0.114 × B

[Equation 2]
Cb = -0.1687 × R - 0.3313 × G + 0.5 × B + 128

[Equation 3]
Cr = 0.5 × R - 0.4187 × G - 0.0813 × B + 128

In Equations 1 to 3, Y denotes luminance information, and Cb and Cr denote color difference information. In addition, R denotes a gray level of red (R) data, G denotes a gray level of green (G) data, and B denotes a gray level of blue (B) data. When 8-bit image data RGB 3D is input, the gradation of red (R) data, the gradation of green (G) data, and the gradation of blue (B) data are represented by 0 to 255 values (G0 to G255). Luminance and color difference information (Y, Cb, Cr) may also be represented by 0 to 255 values (G0 to G255). (S103)
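
For illustration only, the extraction step S103 can be sketched in a few lines of Python/NumPy. The snippet below applies the standard JPEG RGB-to-YCbCr conversion that Equations 1 to 3 refer to; the function name extract_ycbcr and the H × W × 3 array layout are assumptions made for this sketch, not part of the disclosure.

    import numpy as np

    def extract_ycbcr(rgb):
        """rgb: H x W x 3 array of 8-bit gray levels (G0 to G255)."""
        r = rgb[..., 0].astype(np.float32)
        g = rgb[..., 1].astype(np.float32)
        b = rgb[..., 2].astype(np.float32)
        y  =  0.299 * r + 0.587 * g + 0.114 * b          # Equation 1: luminance Y
        cb = -0.1687 * r - 0.3313 * g + 0.5 * b + 128.0  # Equation 2: color difference Cb
        cr =  0.5 * r - 0.4187 * g - 0.0813 * b + 128.0  # Equation 3: color difference Cr
        return y, cb, cr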

Third, the luminance and color difference information analyzer 142 receives the luminance and color difference information Y, Cb, and Cr extracted by the luminance and color difference information extractor 141. The luminance and color difference information analyzer 142 determines, for each pixel, whether the luminance information Y is included in the predetermined black gradation region. If the luminance information Y of the (a, b) coordinate pixel P(a, b) is within the range of 0 to M (M is a natural number) gray levels (G0 to GM) as shown in FIG. 6A, and the color difference information Cb and Cr is within the range of 0 to N (N is a natural number) gray levels (G0 to GN) as shown in FIG. 6B, the luminance and color difference information analyzer 142 detects the (a, b) coordinate pixel P(a, b) as belonging to the predetermined black gradation region. In more detail, when both conditions are satisfied, the luminance and color difference information analyzer 142 stores the value '1' in the corresponding coordinate pixel P(a, b) of the pixel map. If the luminance information Y of the (a, b) coordinate pixel P(a, b) is not within the range of 0 to M gray levels (G0 to GM), or the color difference information Cb and Cr is not within the range of 0 to N gray levels (G0 to GN), the value '0' is stored in the corresponding coordinate pixel P(a, b). That is, the luminance and color difference information analyzer 142 generates a pixel map having a value of '0' or '1' for each pixel, as shown in FIG. 7. For example, in the case of a 1920 × 1080 resolution, the luminance and color difference information analyzer 142 may form a pixel map including 1920 × 1080 pixels.

In FIG. 6A, the x-axis denotes the gray level and the y-axis denotes the luminance information Y. In FIG. 6B, the x-axis denotes the gray level and the y-axis denotes the color difference information Cb and Cr. The predetermined black gradation region is set based on the gradation range that the viewer can recognize as a black image. In particular, since the viewer's black-image recognition range is narrow for the color difference information Cb and Cr but wide for the luminance information Y, the range M of the luminance information Y determined as the predetermined black gradation region may be set larger than the range N of the color difference information Cb and Cr determined as the predetermined black gradation region. For example, the range M of the luminance information Y determined as the predetermined black gradation region may be set about three times larger than the range N of the color difference information Cb and Cr determined as the predetermined black gradation region, and the range N of the color difference information Cb and Cr determined as the predetermined black gradation region may be set within approximately 0 gray (G0) to 20 gray (G20). (S104)
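
A minimal sketch of the pixel-map generation (S104) follows, assuming the luminance and chrominance values are on the same 0 to 255 gray-level scale described above. The function name build_pixel_map and the default thresholds (m about three times n, n within G0 to G20, per the example ranges given above) are illustrative assumptions.

    import numpy as np

    def build_pixel_map(y, cb, cr, m=60, n=20):
        """Mark a pixel '1' when Y lies in 0..m and both Cb and Cr lie in 0..n."""
        black_luma = y <= m
        black_chroma = (cb <= n) & (cr <= n)
        return (black_luma & black_chroma).astype(np.uint8)  # 1 = predetermined black gradation region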

Fourth, the jagging detector 143 detects the jagging area by shifting a P × Q (P and Q are natural numbers) mask M in units of pixels over the pixel map MAP generated by the luminance and chrominance information analyzer 142. The P × Q mask M includes P rows and Q columns. First, as shown in Equation 4, the jagging detector 143 sums the values of the first to Qth pixels P(1, j) to P(Q, j) of the jth line in the P × Q mask M, and then determines whether the sum value Lj of the jth line is greater than the first threshold value Vth1.

[Equation 4]
Lj = P(1, j) + P(2, j) + ... + P(Q, j)

When the sum value Lj of the jth line is greater than the first threshold value Vth1 as shown in Equation 5, the jagging detector 143 stores the value '1' as the representative value Fj of the jth line. When the sum value Lj of the jth line is equal to or less than the first threshold value Vth1, the jagging detector 143 stores the value '0' as the representative value Fj of the jth line.

[Equation 5]
Fj = 1 (if Lj > Vth1), Fj = 0 (if Lj ≤ Vth1)

As shown in Equation 6, the jagging detector 143 sums the representative values Fj of the first to Pth lines in the P × Q mask M, and then determines whether the representative value sum B is greater than the second threshold value Vth2.

[Equation 6]
B = F1 + F2 + ... + FP

When the representative value sum B is greater than the second threshold value Vth2, the jagging detector 143 detects the area of the P × Q mask M as a jagging area. When the representative value sum B is equal to or less than the second threshold value Vth2, the jagging detector 143 does not detect the area of the P × Q mask M as a jagging area. The jagging detector 143 then shifts the P × Q mask M in units of pixels and repeats the steps described with reference to Equations 4 to 6. The P × Q mask M may be shifted from left to right, and when there is no longer an area to shift to the right, it may move down one line and shift from the left again. (S105)
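
Taken together, Equations 4 to 6 describe a sliding-window test over the pixel map, which the sketch below reproduces. The window size P × Q and the thresholds Vth1 and Vth2 are free parameters of the description; the defaults here are placeholders chosen only to make the sketch runnable.

    import numpy as np

    def detect_jagging(pixel_map, p=4, q=16, vth1=12, vth2=2):
        """Slide a p x q mask one pixel at a time and flag windows that pass the two-stage threshold test."""
        h, w = pixel_map.shape
        jag = np.zeros((h, w), dtype=bool)
        for top in range(h - p + 1):                   # move down one line when the right edge is reached
            for left in range(w - q + 1):              # shift from left to right
                window = pixel_map[top:top + p, left:left + q]
                line_sums = window.sum(axis=1)         # Lj: sum of the Q pixel values of each line (Equation 4)
                flags = line_sums > vth1               # Fj = 1 when Lj > Vth1, else 0 (Equation 5)
                if flags.sum() > vth2:                 # B = F1 + ... + FP; jagging area when B > Vth2 (Equation 6)
                    jag[top:top + p, left:left + q] = True
        return jag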

The jagging improvement unit 144 improves the jagging of the jagging area detected by the jagging detector 143. The jagging improvement unit 144 receives the 3D image data RGB3D and replaces the data of the (a, b) coordinate pixel P(a, b) detected as belonging to the jagging area with the arithmetic average of that data and the data of the pixel P(a, b-1) on the previous line or the pixel P(a, b+1) on the subsequent line. The jagging improvement unit 144 may also improve the jagging of the detected jagging area using a method other than the arithmetic mean. (S106)
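
As a rough illustration of the improvement step S106, the snippet below replaces each detected pixel with the arithmetic mean of its own data and the data of the pixel on the previous line, which is one of the two neighbors the description mentions; the function name and the choice of the previous-line neighbor are assumptions of this sketch. A caller could chain the sketches as jag = detect_jagging(build_pixel_map(*extract_ycbcr(img))) followed by out = improve_jagging(img, jag).

    import numpy as np

    def improve_jagging(rgb, jag):
        """Average each pixel flagged in 'jag' with the pixel one line above it."""
        out = rgb.copy()
        rows, cols = np.nonzero(jag)
        for r, c in zip(rows, cols):
            if r > 0:  # the first line has no previous-line neighbor
                out[r, c] = ((rgb[r, c].astype(np.uint16) +
                              rgb[r - 1, c].astype(np.uint16)) // 2).astype(rgb.dtype)
        return out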

FIG. 8 is a diagram illustrating a left eye image seen through the left eye and a right eye image seen through the right eye in the pattern retarder method. Referring to FIG. 8, the left eye image input to the viewer's left eye is displayed only on the odd lines because of the pattern retarder PR, and the right eye image input to the viewer's right eye is displayed only on the even lines because of the pattern retarder PR. The portions blocked by the pattern retarder PR in the left eye image and in the right eye image may be recognized as black by the viewer. Therefore, when a jagging-probable region JPR, such as a predetermined black gradation region, is displayed, the viewer perceives the pattern retarder PR as being extended. That is, the viewer perceives jagging because, due to the jagging-probable region JPR, the boundary of the image does not look smooth but appears like a staircase.

However, the present invention improves the jagging by detecting a jagging-probable region JPR, such as a predetermined black gradation region, as the jagging area. Since the jagging area JPR is then displayed at a gray level higher than the predetermined black gray level that the viewer perceives as black, the viewer no longer perceives the jagging area JPR as an extension of the pattern retarder PR. As a result, the present invention can not only accurately detect the jagging region, excluding the edge region, but can also improve image quality by improving the jagging of the detected jagging region.

It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the spirit or scope of the invention. Therefore, the present invention should not be limited to the details described in the detailed description, but should be defined by the claims.

10: display panel 11a: upper polarizing plate
11b: lower polarizer 20: polarized glasses
30: pattern retarder 31: first retarder
32: second retarder 110: gate driver
120: data driver 130: timing controller
140: image processor 141: luminance and color difference information extractor
142: luminance and color difference information analysis unit 143: jagging detection unit
144: jagging enhancement unit 150: host system

Claims (11)

Extracting luminance and color difference information of the 3D image data input in the 3D mode;
Generating a pixel map by detecting a predetermined black gray area from the luminance and color difference information; And
And shifting a P × Q (P, Q is a natural number) mask in units of pixels in the pixel map and detecting a jagging area.
The method of claim 1,
Generating the pixel map,
When the luminance information of the (a, b) coordinate pixel is in the range of 0 to M (M is a natural number), and the color difference information of the (a, b) coordinate pixel is in the range of 0 to N (N is a natural number), detecting the (a, b) coordinate pixel as the predetermined black gray area.
The method of claim 2,
Wherein the M is greater than the N.
The method of claim 1,
The detecting of the jagging area may include:
Summing values of the first to Qth pixels of the jth line in the P × Q mask, storing the representative value of the jth line as a first value when the sum value of the jth line is greater than a first threshold value, and storing the representative value of the jth line as a second value when the sum value of the jth line is equal to or less than the first threshold value.
The method of claim 4, wherein
The detecting of the jagging area may include:
Summing the representative values of the first to P-th lines within the P × Q mask, and detecting the area of the P × Q mask as a jagging area when the representative value sum is greater than the second threshold. Jagging detection method.
A display panel in which data lines and gate lines cross each other;
An image processor which detects jagging by analyzing luminance and chrominance information of the input image data and improves the jagging of the detected jagging area;
A data driver converting the image data output from the image processor into data voltages and outputting the data voltages to the data lines; And
A gate driver sequentially outputting gate pulses synchronized with the data voltages to the gate lines,
Wherein the image processing unit comprises:
A luminance and color difference information extracting unit which extracts the luminance and color difference information of the 3D image data input in the 3D mode;
A luminance and color difference information analyzer configured to generate a pixel map by detecting a predetermined black tone region from the luminance and color difference information; And
And a jagging detector configured to detect a jagging area by shifting a P × Q (P, Q is a natural number) mask on a pixel basis in the pixel map.
The method according to claim 6,
The luminance and color difference information analysis unit,
Detects the (a, b) coordinate pixel as the predetermined black gradation region when the luminance information of the (a, b) coordinate pixel is in the range of 0 to M (M is a natural number) and the color difference information of the (a, b) coordinate pixel is in the range of 0 to N (N is a natural number).
The method according to claim 6,
The stereoscopic image display device wherein the M is greater than the N.
The method according to claim 6,
The jagging detection unit,
Sums the values of the first to Qth pixels of the jth line (j is a natural number) in the P × Q mask, stores the representative value of the jth line as a first value when the sum value of the jth line is greater than a first threshold value, and stores the representative value of the jth line as a second value when the sum value of the jth line is less than or equal to the first threshold value.
The method of claim 9,
The jagging detection unit,
Summing the representative values of the first to P-th lines within the P × Q mask, and detecting the area of the P × Q mask as a jagging area when the representative value sum is greater than the second threshold. Stereoscopic Display.
The method according to claim 6,
The stereoscopic image display device further comprising a jagging improvement unit which replaces the (a, b) coordinate pixel data of the detected jagging area with an arithmetic average value obtained using the (a, b-1) coordinate pixel data of a previous line or the (a, b+1) coordinate pixel data of a subsequent line.
KR1020110091833A 2011-09-09 2011-09-09 Jagging detection method and stereoscopic image display device using the same KR20130028345A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020110091833A KR20130028345A (en) 2011-09-09 2011-09-09 Jagging detection method and stereoscopic image display device using the same

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020110091833A KR20130028345A (en) 2011-09-09 2011-09-09 Jagging detection method and stereoscopic image display device using the same

Publications (1)

Publication Number Publication Date
KR20130028345A true KR20130028345A (en) 2013-03-19

Family

ID=48178894

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020110091833A KR20130028345A (en) 2011-09-09 2011-09-09 Jagging detection method and stereoscopic image display device using the same

Country Status (1)

Country Link
KR (1) KR20130028345A (en)

Similar Documents

Publication Publication Date Title
KR101888672B1 (en) Streoscopic image display device and method for driving thereof
KR101869872B1 (en) Method of multi-view image formation and stereoscopic image display device using the same
KR20130027214A (en) Stereoscopic image display device and driving method thereof
US20120274748A1 (en) Stereoscopic Image Display Device and Method for Driving the Same
KR101829459B1 (en) Image processing method and stereoscopic image display device using the same
KR101981530B1 (en) Stereoscopic image display device and method for driving the same
KR101296902B1 (en) Image processing unit and stereoscopic image display device using the same, and image processing method
KR101840876B1 (en) Stereoscopic image display device and driving method thereof
US9420269B2 (en) Stereoscopic image display device and method for driving the same
KR101793283B1 (en) Jagging improvement method and stereoscopic image display device using the same
KR102126532B1 (en) Method of multi-view image formation and stereoscopic image display device using the same
KR101773616B1 (en) Image processing method and stereoscopic image display device using the same
KR101894090B1 (en) Method of multi-view image formation and stereoscopic image display device using the same
KR101839150B1 (en) Method for improving 3d image quality and stereoscopic image display using the same
KR101843197B1 (en) Method of multi-view image formation and stereoscopic image display device using the same
KR101843198B1 (en) Method of multi-view image formation and stereoscopic image display device using the same
KR101870233B1 (en) Method for improving 3d image quality and stereoscopic image display using the same
KR20130012672A (en) Stereoscopic image display device and driving method thereof
KR20130028345A (en) Jagging detection method and stereoscopic image display device using the same
KR101782648B1 (en) Liquid crystal display device
KR101829466B1 (en) Stereoscopic image display device
US8723931B2 (en) Stereoscopic image display
KR101803564B1 (en) Stereoscopic image display device and driving method thereof
KR101803572B1 (en) Stereoscopic image display device
KR20130039544A (en) Stereoscopic image display device and method for driving the same

Legal Events

Date Code Title Description
WITN Withdrawal due to no request for examination