KR20140070856A - Global depth map generation method and stereoscopic image display device using the same - Google Patents

Global depth map generation method and stereoscopic image display device using the same

Info

Publication number
KR20140070856A
KR20140070856A (application KR1020120136164A)
Authority
KR
South Korea
Prior art keywords
edge
data
horizontal
vertical
depth map
Prior art date
Application number
KR1020120136164A
Other languages
Korean (ko)
Other versions
KR101996657B1 (en)
Inventor
Jeon Ho-min (전호민)
Original Assignee
LG Display Co., Ltd. (엘지디스플레이 주식회사)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LG Display Co., Ltd.
Priority to KR1020120136164A priority Critical patent/KR101996657B1/en
Publication of KR20140070856A publication Critical patent/KR20140070856A/en
Application granted granted Critical
Publication of KR101996657B1 publication Critical patent/KR101996657B1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074Stereoscopic image analysis
    • H04N2013/0081Depth or disparity estimation from stereoscopic image signals

Landscapes

  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The present invention relates to a global depth map generating method and a stereoscopic image display device using the same. The global depth map generating method according to an embodiment of the invention comprises a first step of converting 2D image data to edge data; a second step of analyzing the edge data to determine a direction of edge; and a third step of analyzing the edge in a horizontal direction of the 2D image data to generate a global depth map when the direction of the edge is vertical, and analyzing the edge in a vertical direction of the 2D image data to generate a global depth map when the direction of the edge is horizontal.

Description

Technical Field [0001] The present invention relates to a global depth map generation method and a stereoscopic image display device using the same.

The present invention relates to a global depth map generation method and a stereoscopic image display apparatus using the same.

Stereoscopic displays are divided into glasses-type (stereoscopic) and glasses-free (autostereoscopic) techniques. The binocular parallax method, which uses left-eye and right-eye parallax images to produce a strong stereoscopic effect, exists in both glasses and glasses-free forms, and both are in practical use. Among the glasses types, the pattern retarder system displays left and right parallax images with different polarization directions on a direct-view display device or projector and realizes a stereoscopic image using polarizing glasses. The shutter glasses system displays left and right parallax images in a time-division manner on a direct-view display device or projector and realizes a stereoscopic image using liquid crystal shutter glasses. The glasses-free type generally separates the optical axes of the left and right parallax images using an optical plate such as a parallax barrier or a lenticular lens to realize a stereoscopic image.

Generally, a stereoscopic image display device receives 3D image data from an external source to realize a stereoscopic image. In this case, it displays the stereoscopic image by converting the 3D image data into a 3D format corresponding to one of the stereoscopic methods described above. However, a stereoscopic image display device can also realize a stereoscopic image when 2D image data is input from outside. In this case, it generates 3D image data from the received 2D image data and converts that 3D image data into a 3D format corresponding to the stereoscopic method, thereby displaying the stereoscopic image.

Specifically, the stereoscopic image display device can generate 3D image data using the 2D image data and a depth map calculated from it. The depth map is a map composed of the depth data of one frame period, calculated by analyzing the 2D image data of that frame period. Depth data is a value indicating the depth information of the 2D image data: it becomes smaller as the depth grows deeper (farther away) and larger as the depth grows shallower (nearer). The depth map can be calculated from a global depth map and a local depth map. The global depth map is calculated by analyzing the edges of the 2D image data, while the local depth map is calculated by analyzing its luminance and color. An edge is the outline of an object in the 2D image.

In general, the upper area of a 2D image is a deep background area, and the lower area is where objects of shallow depth are displayed. That is, since the perspective of a 2D image is usually oriented in the vertical direction, the global depth map is generated by analyzing the edges along the horizontal direction of the 2D image data. However, the perspective of a 2D image may instead appear in the horizontal direction (x-axis direction), as shown in Fig. 1, where the left area of the 2D image is shallow and the right area is deep. A global depth map generated by analyzing the edges along the horizontal direction of the 2D image data can reflect only vertical perspective, not horizontal perspective. Consequently, when the perspective of the 2D image runs in the horizontal direction as in Fig. 1, the global depth map is created incorrectly, and this misrepresentation reduces the 3D sensation that the user perceives.

The present invention provides a global depth map calculation method and a stereoscopic image display apparatus using the same that can prevent erroneous creation of a global depth map.

A global depth map generation method according to an embodiment of the present invention includes a first step of converting 2D image data into edge data; a second step of analyzing the edge data to determine the direction of the edge; and a third step of generating a global depth map by analyzing the edges along the horizontal direction of the 2D image data when the direction of the edge is vertical, and by analyzing the edges along the vertical direction of the 2D image data when the direction of the edge is horizontal.

A stereoscopic image display device according to an embodiment of the present invention includes a display panel including data lines and gate lines; a 3D image data generator that generates a global depth map and a local depth map from input 2D image data, generates a depth map using the global depth map and the local depth map, and generates 3D image data using the 2D image data and the depth map; a data driving circuit that converts the 3D image data into data voltages and outputs the data voltages to the data lines; and a gate driving circuit that sequentially outputs gate pulses synchronized with the data voltages to the gate lines. The 3D image data generator comprises an edge data converter that converts the 2D image data into edge data; an edge direction determination unit that analyzes the edge data to determine the direction of the edge; and a global depth data generator that calculates the global depth data by analyzing the edges along the horizontal direction of the 2D image data when the direction of the edge is vertical, and along the vertical direction of the 2D image data when the direction of the edge is horizontal.

According to the present invention, the direction in which the perspective of a 2D image appears can be detected by determining the direction of its edges, and whether the global depth data is calculated by analyzing the edges along the vertical or the horizontal direction of the 2D image data is decided according to that edge direction. That is, because the global depth data is calculated in consideration of the direction in which the perspective of the 2D image appears, erroneous creation of the global depth map can be prevented. As a result, the present invention can maintain the stereoscopic quality of the 3D image at a high level.

FIG. 1 is an image showing an example of a 2D image in which perspective appears in the horizontal direction.
FIG. 2 is a block diagram schematically showing a stereoscopic image display device according to an embodiment of the present invention.
FIG. 3 is a block diagram showing the image processing circuit of FIG. 2 in detail.
FIG. 4 is a flowchart showing in detail the image processing method of the image processing circuit.
FIG. 5 is a detailed block diagram of the global depth map generator of FIG. 3.
FIG. 6 is a flowchart showing in detail the global depth map generation method of the global depth map generator.
FIGS. 7A and 7B are images showing an edge image and a compressed image.
FIG. 8 is a flowchart showing in detail the edge direction determination method of the edge direction determination unit.
FIG. 9 is an example showing a histogram of edge direction vectors.
FIG. 10 is a flowchart showing in detail the global depth map calculation method of the global depth data calculator.
FIG. 11A is an exemplary view showing one example of the first to n-th horizontal weights.
FIG. 11B is an exemplary view showing one example of the first to n-th vertical weights.
FIG. 11C is an exemplary view showing another example of the first to n-th vertical weights.
FIG. 12A is an exemplary view showing the calculated global depth map when the direction of the edge is vertical.
FIG. 12B is an exemplary view showing the calculated global depth map when the direction of the edge is horizontal.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Reference will now be made in detail to the preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings. Like reference numerals throughout the specification denote substantially identical components. In the following description, detailed descriptions of known functions and configurations incorporated herein are omitted when they could obscure the subject matter of the present invention. The component names used in the following description were selected in consideration of ease of preparing the specification and may differ from actual product names.

FIG. 2 is a block diagram schematically showing a stereoscopic image display device according to an embodiment of the present invention. Referring to FIG. 2, the stereoscopic image display device includes a display panel 10, a gate driving circuit 110, a data driving circuit 120, a timing controller 130, an image processing circuit 140, and a host system 150. The display panel 10 may be implemented as a flat panel display such as a liquid crystal display (LCD), a field emission display (FED), a plasma display panel (PDP), or an organic light emitting diode (OLED) display. Although the following embodiments describe the case where the display panel 10 is implemented as a liquid crystal display device, it should be noted that the present invention is not limited thereto. In addition, the stereoscopic image display device of the present invention may be a glasses-type system that realizes a stereoscopic image by binocular disparity, such as a shutter glasses system, a pattern retarder system, or an active retarder system, or a glasses-free system that realizes a stereoscopic image by binocular parallax using an optical plate such as a parallax barrier or a lenticular lens.

The display panel 10 includes an upper substrate and a lower substrate facing each other with a liquid crystal layer interposed between them. On the display panel 10, a pixel array is formed that includes pixels arranged in a matrix at the crossings of the data lines D and the gate lines G (or scan lines). Each pixel of the pixel array displays an image by driving the liquid crystal of the liquid crystal layer with the voltage difference between a pixel electrode, charged with a data voltage through a thin film transistor (TFT), and a common electrode to which a common voltage is applied. A black matrix and a color filter are formed on the upper substrate of the display panel 10. In a vertical electric field driving method such as the TN (Twisted Nematic) mode or the VA (Vertical Alignment) mode, the common electrode is formed on the upper substrate; in a horizontal electric field driving method such as the IPS (In-Plane Switching) mode or the FFS (Fringe Field Switching) mode, it may be formed on the lower substrate together with the pixel electrode. The liquid crystal mode of the display panel 10 may be implemented in any liquid crystal mode, not only the TN, VA, IPS, and FFS modes. Polarizing plates are attached to the upper and lower substrates of the liquid crystal display panel, and alignment films are formed to set the pre-tilt angle of the liquid crystal. Spacers are formed between the upper and lower substrates of the display panel 10 to maintain the cell gap of the liquid crystal layer.

The display panel 10 can be implemented in any form such as a transmissive liquid crystal display panel, a transflective liquid crystal display panel, and a reflective liquid crystal display panel. In a transmissive liquid crystal display panel and a transflective liquid crystal display panel, a backlight unit is required. The backlight unit may be implemented as a direct type backlight unit or an edge type backlight unit.

The data driving circuit 120 includes a plurality of source drive integrated circuits (ICs). Under the control of the timing controller 130, the source drive ICs convert the 2D image data (RGB2D) or 3D image data (RGB3D) into positive/negative analog data voltages using positive/negative gamma compensation voltages. The positive/negative analog data voltages output from the source drive ICs are supplied to the data lines D of the display panel 10.

The gate driving circuit 110 sequentially supplies gate pulses (or scan pulses) to the gate lines G of the display panel 10 in synchronization with the data voltages under the control of the timing controller 130. The gate driving circuit 110 may be composed of a plurality of gate drive integrated circuits, each including a shift register, a level shifter for converting the output signal of the shift register into a swing width suitable for driving the TFTs of the liquid crystal cells, and the like.

The timing controller 130 receives the 2D image data RGB2D or 3D image data RGB3D, timing signals, and a mode signal MODE from the image processing circuit 140. The timing signals may include a vertical synchronization signal, a horizontal synchronization signal, a data enable signal, and a clock signal. Based on the timing signals, the timing controller 130 generates a gate control signal GCS for controlling the gate driving circuit 110 and a data control signal DCS for controlling the data driving circuit 120. The timing controller 130 supplies the gate control signal GCS to the gate driving circuit 110. In the 2D mode, it supplies the 2D image data RGB2D and the data control signal DCS to the data driving circuit 120; in the 3D mode, it supplies the 3D image data RGB3D and the data control signal DCS to the data driving circuit 120.

The host system 150 includes a system on chip (SoC) with a built-in scaler that converts the 2D image data RGB2D input from an external video source device into a data format suitable for display on the display panel 10. The host system 150 supplies the 2D image data RGB2D and the timing signals to the image processing circuit 140 through an interface such as an LVDS (Low Voltage Differential Signaling) interface or a TMDS (Transition Minimized Differential Signaling) interface. The host system 150 also supplies the image processing circuit 140 with a mode signal MODE that distinguishes the 2D mode from the 3D mode.

The image processing circuit 140 outputs the 2D image data RGB2D to the timing controller 130 without conversion in the 2D mode. In the 3D mode, it generates 3D image data RGB3D from the 2D image data RGB2D and outputs the 3D image data RGB3D to the timing controller 130. The 3D image data RGB3D may be multi-view image data including at least two view image data. As a result, even when 2D image data RGB2D is input, the stereoscopic image display device according to the embodiment of the present invention can realize a stereoscopic image by generating 3D image data RGB3D with the image processing circuit 140.

FIG. 3 is a detailed block diagram of the image processing circuit of FIG. 2, and FIG. 4 is a flowchart showing in detail the image processing method of the image processing circuit. Referring to FIG. 3, the image processing circuit 140 includes a global depth map generator 200, a local depth map generator 300, a depth map generator 400, and a 3D image data generator 500. Hereinafter, the image processing method of the image processing circuit 140 will be described in detail with reference to FIGS. 3 and 4.

First, the global depth map generator 200 receives the 2D image data RGB2D from the host system 150. The global depth map generator 200 calculates the global depth data GDD of one frame period from the 2D image data RGB2D of one frame period. The global depth map is a map composed of the global depth data GDD of one frame period. When the resolution of the display panel 10 is p × q (p and q are natural numbers), the 2D image data RGB2D of one frame period includes p × q 2D image data RGB2D, and the global depth data GDD of one frame period includes p × q global depth data GDD. The global depth map generation method of the global depth map generator 200 will be described in detail with reference to FIGS. 5 and 6. (S101)

Second, the local depth map generator 300 receives the 2D image data RGB2D from the host system 150. The local depth map generator 300 analyzes the luminance and color of the 2D image data RGB2D of one frame period to calculate the local depth data LDD of one frame period. The local depth map is a map composed of the local depth data LDD of one frame period. When the resolution of the display panel 10 is p × q (p and q are natural numbers), the 2D image data RGB2D of one frame period includes p × q 2D image data RGB2D, and the local depth data LDD of one frame period includes p × q local depth data LDD. For example, the local depth map generator 300 may calculate the local depth data LDD to be larger as the luminance of an object of the 2D image is higher, and smaller as the luminance is lower. Also, the local depth map generator 300 may calculate the local depth data LDD to be larger as the 2D image data RGB2D is closer to red, and smaller as it is closer to blue. (S102)
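The local depth heuristic described above can be sketched as follows. This is a minimal illustration only: the specific luma weights, the "redness" measure, and the 50/50 mixing are assumptions, since the patent does not disclose the exact formula.

```python
import numpy as np

def local_depth_map(rgb2d):
    """Illustrative local depth heuristic: brighter objects receive larger
    local depth data (LDD), and redder objects receive larger LDD than
    bluer ones. Weights and the redness measure are assumptions."""
    rgb = np.asarray(rgb2d, dtype=float)
    # Luminance term (standard luma weights, assumed).
    luma = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    # Color term: map R - B from [-255, 255] into [0, 255].
    redness = (rgb[..., 0] - rgb[..., 2] + 255.0) / 2.0
    # Equal mixing of the two cues (assumed), clipped to 8-bit range.
    return np.clip(0.5 * luma + 0.5 * redness, 0.0, 255.0)
```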

Third, the depth map generator 400 receives the global depth data GDD from the global depth map generator 200 and the local depth data LDD from the local depth map generator 300. The depth map generator 400 may calculate the depth data DD by applying a first weight w1 to the global depth data GDD and a second weight w2 to the local depth data LDD. The depth map is a map composed of the depth data DD of one frame period, and the depth data DD of one frame period includes p × q depth data DD.

Specifically, the depth map generator 400 calculates p × q depth data DD by applying the first weight w1 to the p × q global depth data GDD and the second weight w2 to the p × q local depth data LDD. The p × q global depth data GDD, local depth data LDD, and depth data DD can each be represented by (x, y) coordinates. As in Equation (1), the depth map generator 400 can calculate the depth data DD(i, j) at coordinate (i, j) (where i is a natural number satisfying 1 ≤ i ≤ p and j is a natural number satisfying 1 ≤ j ≤ q) by adding the global depth data GDD(i, j) multiplied by the first weight w1 and the local depth data LDD(i, j) multiplied by the second weight w2. In Equation (1), the sum of the first weight w1 and the second weight w2 is 1. (S103)

DD(i, j) = w1 × GDD(i, j) + w2 × LDD(i, j) … Equation (1)
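The weighted combination of Equation (1) can be sketched as a simple element-wise blend over the two depth maps. The weight values used here are illustrative; the patent only requires that they sum to 1.

```python
import numpy as np

def combine_depth_maps(gdd, ldd, w1=0.5, w2=0.5):
    """Combine a global depth map (GDD) and a local depth map (LDD) into
    the final depth map DD per Equation (1):
        DD(i, j) = w1 * GDD(i, j) + w2 * LDD(i, j),  with w1 + w2 = 1.
    The default weights are an assumption, not a value from the patent."""
    assert abs((w1 + w2) - 1.0) < 1e-9, "weights must sum to 1"
    return w1 * np.asarray(gdd, dtype=float) + w2 * np.asarray(ldd, dtype=float)
```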

Fourth, the 3D image data generator 500 receives the 2D image data RGB2D from the host system 150 and the depth data DD from the depth map generator 400. The 3D image data generator 500 calculates the disparity using the depth data DD, the convergence, and the maximum disparity. The disparity is the value by which the 2D image data is shifted to create a three-dimensional sensation. The convergence is the position at which the focus is formed; by adjusting it, the stereoscopic effect can be placed in front of or behind the display panel. The maximum disparity is the maximum value of the disparity. The convergence and the maximum disparity can be determined in advance through preliminary experiments.

Specifically, as in Equation (2), the 3D image data generator 500 can calculate the disparity Dis(i, j) at coordinate (i, j) using the depth data DD(i, j), the convergence C, and the maximum disparity MD. In Equation (2), MG denotes the maximum gradation value of the depth data DD(i, j); if the depth data DD(i, j) is 8-bit data, MG is 255.

Figure pat00002

The 3D image data generator 500 generates the 3D image data RGB3D by shifting the 2D image data RGB2D using the disparity. The 3D image data RGB3D can be generated as multi-view image data including at least two view image data. For example, the 3D image data generator 500 can generate first view image data V1(i, j) by shifting the 2D image data RGB2D(i, j) at coordinate (i, j) in a first horizontal direction by the disparity Dis(i, j), and second view image data V2(i, j) by shifting it in a second horizontal direction by the disparity Dis(i, j). The 3D image data generator 500 then generates the 3D image data RGB3D from the view image data, such as the first view image data V1(i, j) and the second view image data V2(i, j).
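The disparity calculation and view shifting can be sketched as follows. Since Equation (2) is not legible in this text, the disparity expression below (scale depth to [0, MD], then offset by the convergence C) is only one plausible reading, and the edge-clamp fill for vacated pixels is likewise an assumption.

```python
import numpy as np

def disparity_map(dd, convergence, max_disparity, mg=255.0):
    """Assumed form of Equation (2): depth scaled by MD/MG, offset by the
    convergence value C. Illustrative only."""
    return max_disparity * np.asarray(dd, dtype=float) / mg - convergence

def shift_views(rgb2d, dis):
    """Shift each pixel left/right by its rounded disparity to form two
    view images; out-of-range samples are clamped to the image edge."""
    h, w = rgb2d.shape[:2]
    v1 = np.empty_like(rgb2d)
    v2 = np.empty_like(rgb2d)
    for j in range(h):
        for i in range(w):
            d = int(round(dis[j, i]))
            v1[j, i] = rgb2d[j, min(max(i + d, 0), w - 1)]  # first view
            v2[j, i] = rgb2d[j, min(max(i - d, 0), w - 1)]  # second view
    return v1, v2
```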

The stereoscopic image display device according to the embodiment of the present invention may be a glasses-type system that realizes a stereoscopic image by binocular disparity, such as a shutter glasses system, a pattern retarder system, or an active retarder system, or a glasses-free system that realizes a stereoscopic image by binocular parallax using an optical plate such as a parallax barrier or a lenticular lens. Accordingly, the 3D image data generator 500 converts the 3D image data RGB3D into a 3D format corresponding to the stereoscopic method and outputs it to the timing controller 130. (S104)

FIG. 5 is a detailed block diagram of the global depth map generator of FIG. 3, and FIG. 6 is a flowchart showing in detail the global depth map generation method of the global depth map generator. Referring to FIG. 5, the global depth map generator 200 includes an edge data converter 210, a compressed data generator 220, an edge direction determination unit 230, and a global depth data calculator 240. Hereinafter, the global depth map generation method of the global depth map generator 200 will be described in detail with reference to FIGS. 5 and 6.

First, the edge data converter 210 receives the 2D image data RGB2D from the host system 150. The edge data converter 210 converts the 2D image data RGB2D into edge data ED to detect the edges of the 2D image data RGB2D. An edge is the outline of an object in the 2D image.

The edge data converter 210 may convert the 2D image data RGB2D into gray scale data G(RGB) as shown in Equation (3).

Figure pat00003

The edge data converter 210 converts the gray scale data G(RGB) into the edge data ED using a well-known mask such as a Sobel mask. The Sobel mask may be a u × v mask (u and v are natural numbers of 2 or more), and its mask coefficients may be determined by preliminary experiment.
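The grayscale conversion and Sobel masking can be sketched as below. Because Equation (3) is not legible here, the standard luma weights are assumed, and the common 3 × 3 Sobel coefficients stand in for the experimentally determined u × v mask.

```python
import numpy as np

# 3x3 Sobel masks; the patent allows a general u x v mask whose
# coefficients are fixed by experiment -- 3x3 is the common choice.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def to_gray(rgb):
    """Grayscale conversion; the luma weights are an assumption standing
    in for Equation (3)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def sobel_edges(gray):
    """Edge data ED as the gradient magnitude of the two Sobel responses
    (borders are left at zero for brevity)."""
    h, w = gray.shape
    ed = np.zeros((h, w))
    for j in range(1, h - 1):
        for i in range(1, w - 1):
            win = gray[j - 1:j + 2, i - 1:i + 2]
            gx = np.sum(win * SOBEL_X)
            gy = np.sum(win * SOBEL_Y)
            ed[j, i] = np.hypot(gx, gy)
    return ed
```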

FIG. 7A is an image showing an edge image. The edge image is an image composed of the edge data ED of one frame period. As shown in FIG. 7A, edges are expressed with white gray scale values in the edge image. (S201)

Second, the compressed data generator 220 compresses the edge data ED to generate compressed data CD. The compressed data generator 220 compresses r edge data ED that are horizontally consecutive into one piece of data to generate the compressed data CD. In this case, the p × q edge data ED are compressed into (p/r) × q compressed data CD. For example, as in Equation (4), the compressed data generator 220 can generate the compressed data CD(m, n) at coordinate (m, n) from the edge data ED(i, j) to ED(i+r-1, j) at coordinates (i, j) to (i+r-1, j).

Figure pat00004

Alternatively, as in Equation (5), the compressed data generator 220 may generate the compressed data CD(m, n) at coordinate (m, n) from the edge data ED(i, j) to ED(i+r-1, j) at coordinates (i, j) to (i+r-1, j) in another manner.

Figure pat00005

FIG. 7B is an image showing a compressed image. The compressed image is an image composed of the compressed data CD of one frame period. As can be seen by comparing FIGS. 7A and 7B, the compressed image is compressed to 1/r in the horizontal direction relative to the edge image. As shown in FIG. 7B, edges are expressed with white gray scale values in the compressed image. (S202)
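The horizontal compression step can be sketched as follows. Equations (4) and (5) are not legible in this text, so averaging is used here as one plausible reduction of the r consecutive samples; the alternative equation may well use a different reduction (for example, a maximum).

```python
import numpy as np

def compress_horizontal(ed, r):
    """Compress r horizontally consecutive edge data into one value by
    averaging, turning p x q edge data into (p/r) x q compressed data.
    Arrays are indexed [row, column], i.e. q rows of p samples; the
    choice of averaging is an assumption."""
    h, w = np.asarray(ed, dtype=float).shape
    assert w % r == 0, "width must be divisible by r"
    return np.asarray(ed, dtype=float).reshape(h, w // r, r).mean(axis=2)
```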

Third, the edge direction determination unit 230 determines the direction of the edge using the compressed data CD. The edge direction determination unit 230 determines whether the direction of the edge is a first horizontal direction, a second horizontal direction, or a vertical direction. The first horizontal direction is the direction from one side of the compressed image to the other side, the second horizontal direction is the opposite direction, and the vertical direction is the direction from the lower end to the upper end of the compressed image. The edge direction determination unit 230 can determine the direction of the edge in consideration of the magnitude and direction of the edges. The edge direction determination method of the edge direction determination unit 230 will be described in detail with reference to FIG. 8. (S203)

Fourth, the global depth data calculation method of the global depth data calculator 240 depends on the direction of the edge. When the direction of the edge is vertical, the global depth data calculator 240 calculates the global depth data by analyzing the edges along the horizontal direction of the 2D image data; if the direction of the edge is vertical, the perspective of the 2D image appears in the vertical direction. When the direction of the edge is horizontal, the global depth data calculator 240 calculates the global depth data by analyzing the edges along the vertical direction of the 2D image data; if the direction of the edge is horizontal, the perspective of the 2D image appears in the horizontal direction. The global depth data calculation method of the global depth data calculator 240 will be described in detail with reference to FIG. 10. (S204)
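The direction-dependent choice of global depth map can be sketched as a simple depth ramp along the detected perspective axis. This is illustrative only: the patent derives the actual values from per-line edge analysis with horizontal/vertical weights (FIGS. 11A to 11C), and the direction-name strings and which side is treated as near are assumptions here.

```python
import numpy as np

def global_depth_map(shape, edge_direction):
    """Illustrative global depth ramp. Vertical edges imply vertical
    perspective: depth data grows from the top (far, small) to the
    bottom (near, large). Horizontal edges imply horizontal perspective:
    depth data varies across the columns instead."""
    q, p = shape  # q rows, p columns
    if edge_direction == "vertical":
        ramp = np.linspace(0.0, 255.0, q)          # top -> bottom
        return np.tile(ramp[:, None], (1, p))
    ramp = np.linspace(0.0, 255.0, p)              # left -> right
    if edge_direction == "second_horizontal":
        ramp = ramp[::-1].copy()                   # near side on the left
    return np.tile(ramp[None, :], (q, 1))
```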

As described above, according to the present invention, the direction in which the perspective of the 2D image appears can be detected by determining the direction of its edges, and whether the global depth data is calculated by analyzing the edges along the vertical or the horizontal direction of the 2D image data is decided according to that edge direction. That is, because the global depth data is calculated in consideration of the direction in which the perspective of the 2D image appears, erroneous creation of the global depth map can be prevented. As a result, the present invention can maintain the stereoscopic quality of the 3D image at a high level.

FIG. 8 is a flowchart showing in detail the edge direction determination method of the edge direction determination unit. Referring to FIG. 8, the edge direction determination unit 230 determines the direction of the edge through steps S301 to S306.

First, the edge direction determination unit 230 calculates a horizontal direction factor Sx and a vertical direction factor Sy of the compressed data CD. Specifically, the edge direction determination unit 230 can calculate the horizontal direction factor Sx(m, n) of the compressed data CD(m, n) at coordinate (m, n) from the difference between the compressed data CD(m, n) and the horizontally adjacent compressed data CD(m-1, n). Likewise, the edge direction determination unit 230 can calculate the vertical direction factor Sy(m, n) of the compressed data CD(m, n) from the difference between the compressed data CD(m, n) and the vertically adjacent compressed data CD(m, n-1). (S301)

Second, the edge direction determination unit 230 calculates edge intensity data EI(m, n) at coordinate (m, n) using the horizontal direction factor Sx(m, n) and the vertical direction factor Sy(m, n) of the compressed data CD(m, n). For example, as in Equation (6), the edge direction determination unit 230 can calculate the edge intensity data EI(m, n) as the sum of the absolute value of the horizontal direction factor Sx(m, n) and the absolute value of the vertical direction factor Sy(m, n). (S302)

EI(m, n) = |Sx(m, n)| + |Sy(m, n)| … Equation (6)

Third, the edge direction determination unit 230 calculates edge direction data ED(m, n) at coordinate (m, n) using the horizontal direction factor Sx(m, n) and the vertical direction factor Sy(m, n) of the compressed data CD(m, n). For example, as in Equation (7), the edge direction determination unit 230 can calculate the edge direction data ED(m, n) by substituting the ratio of the vertical direction factor Sy(m, n) to the horizontal direction factor Sx(m, n) into the arc tangent function.

ED(m, n) = arctan(Sy(m, n) / Sx(m, n)) … Equation (7)

Because the arc tangent function is applied to the direction factors of the compressed data CD(m, n) at coordinate (m, n), the edge direction data ED(m, n) is calculated within the range of 0° to 180° (π). (S303)

Fourth, the edge direction determination unit 230 may quantize the edge intensity data EI and the edge direction data ED. For example, the edge direction determination unit 230 can perform quantization by assigning the maximum gradation value to edge intensity data EI higher than a predetermined threshold value and the minimum gradation value to edge intensity data EI at or below the threshold value. The edge direction determination unit 230 can quantize the edge direction data ED by assigning 180° (π) to edge direction data ED within a first range, 90° (π/2) to edge direction data ED within a second range, and 0° to edge direction data ED within a third range. (S304)

Fifth, the edge direction determination unit 230 calculates the edge direction vector EDV using the quantized edge strength data EI and edge direction data ED. For example, the edge direction determination unit 230 can calculate the edge direction vector EDV(m, n) at the (m, n) coordinate as the pair of the edge direction data ED(m, n) and the edge strength data EI(m, n). (S305)

Sixth, the edge direction determination unit 230 calculates the cumulative numbers of the edge direction vectors EDV to generate a histogram HIS. The edge direction determination unit 230 analyzes the histogram HIS to determine whether the direction of the edge is the first horizontal direction, the second horizontal direction, or the vertical direction. For example, when the edge direction vector EDV(0°, 255), whose edge direction data ED is 0° and whose edge strength data EI is the highest gradation value, has the largest cumulative number in the histogram HIS, the edge direction determination unit 230 can determine that the direction of the edge is the first horizontal direction. In FIG. 9, the description has been made mainly on the case where the highest gradation value is "255" and the lowest gradation value is "0". When the edge direction vector EDV(180°, 255), whose edge direction data ED is 180° and whose edge strength data EI is the highest gradation value, has the largest cumulative number in the histogram HIS, the edge direction determination unit 230 can determine that the direction of the edge is the second horizontal direction. When the edge direction vector EDV(90°, 255), whose edge direction data ED is 90° and whose edge strength data EI is the highest gradation value, has the largest cumulative number in the histogram HIS, the edge direction determination unit 230 can determine that the direction of the edge is the vertical direction.

The edge direction determination unit 230 may determine the direction of the edge after normalizing the histogram HIS. Normalization means dividing the cumulative number of each edge direction vector EDV by the total number of edge direction vectors EDV. The edge direction determination unit 230 outputs edge direction information data EDID, including information on the direction of the edge, to the global depth data calculation unit 240. (S306)
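The vector pairing, histogram accumulation, normalization, and direction decision of steps S305–S306 might look like the sketch below. The histogram layout and the tie-breaking behavior are assumptions; only strong edges (highest gradation value, here 255) are counted toward each direction, as in the description above.

```python
import numpy as np
from collections import Counter

def dominant_edge_direction(ed_q, ei_q):
    """Build a normalized histogram of (direction, strength) vectors and
    report which strong-edge direction has the largest share."""
    vectors = Counter(zip(ed_q.ravel().tolist(), ei_q.ravel().tolist()))
    total = sum(vectors.values())
    share = {a: vectors.get((a, 255), 0) / total for a in (0, 90, 180)}
    winner = max(share, key=share.get)
    return {0: "first horizontal", 90: "vertical", 180: "second horizontal"}[winner]
```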

FIG. 10 is a flowchart showing in detail the global depth map calculating method of the global depth data calculating unit. Referring to FIG. 10, the global depth data calculating unit 240 calculates global depth data according to steps S401 to S407.

First, the global depth data calculating unit 240 receives the edge direction information data (EDID) from the edge direction determining unit 230 and receives the edge data ED from the edge data converting unit 210. The global depth data calculating unit 240 can determine the direction of the edge of the edge data ED from the edge direction information data EDID. The global depth data calculating unit 240 calculates the first edge representative values ER1 of the first to nth horizontal lines when the edge direction is the vertical direction. The first edge representative value ER1 (j) of the j-th horizontal line can be calculated as shown in Equation (8).

ER1(j) = (ED(1, j) + ED(2, j) + … + ED(m, j)) / m ... (8)

Second, the global depth data calculating unit 240 applies the first through n-th horizontal weights HW to the first edge representative values ER1 of the first through n-th horizontal lines to calculate the second edge representative values ER2 of the first through n-th horizontal lines. The first through n-th horizontal weights HW may be implemented so that the value increases from the first horizontal weight to the n-th horizontal weight, as shown in FIG. 11A. This is because, when the direction of the edge is the vertical direction, objects having a shallow depth are present in the lower region of the 2D image, so a high weight is applied to the lower region of the 2D image. The shallower the depth, the larger the depth data. (S403)
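The per-line representative values and horizontal weighting described above can be sketched as follows. Two assumptions not fixed by the patent: the representative value ER1(j) is taken as the average edge value of the j-th horizontal line, and the weight profile HW is a linear ramp that grows toward the bottom of the image (rows numbered top to bottom).

```python
import numpy as np

def horizontal_line_representatives(edge, w_min=0.5, w_max=1.5):
    """ER1(j): average edge value of the j-th horizontal line (row).
    ER2(j): ER1(j) scaled by a weight rising toward the bottom row."""
    er1 = edge.mean(axis=1)                        # one value per row
    hw = np.linspace(w_min, w_max, edge.shape[0])  # assumed linear ramp
    return er1, er1 * hw
```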

Third, the global depth data calculating unit 240 calculates the global depth data GDD(j) of the j-th horizontal line by applying weights to the second edge representative value ER2(j) of the j-th horizontal line and the second edge representative values of the plurality of horizontal lines adjacent to the j-th horizontal line. For example, as shown in Equation (9), the global depth data calculating unit 240 applies the weight α to the second edge representative value ER2(j-1) of the (j-1)-th horizontal line, the weight β to the second edge representative value ER2(j) of the j-th horizontal line, and the weight γ to the second edge representative value ER2(j+1) of the (j+1)-th horizontal line, and sums the results to calculate the global depth data GDD(j) of the j-th horizontal line. At this time, the sum of the weights α, β, and γ is "1", and the weights α and γ may be implemented to be smaller than β. Also, the weights α and γ may be implemented with the same value.

GDD(j) = α × ER2(j-1) + β × ER2(j) + γ × ER2(j+1) ... (9)

That is, step S404 can be defined as a smoothing step that prevents the global depth data GDD(j) of the j-th horizontal line from differing too greatly from the global depth data of the plurality of horizontal lines adjacent to the j-th horizontal line.
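The smoothing of step S404 reduces to a small weighted average across neighboring lines. In this sketch α = γ = 0.25 and β = 0.5 are example values satisfying α + β + γ = 1 with α, γ < β; having the first and last lines reuse their own value for the missing neighbor is an assumption, since the patent does not state the boundary handling.

```python
import numpy as np

def smooth_line_values(er2, alpha=0.25, beta=0.5, gamma=0.25):
    """GDD(j) = alpha*ER2(j-1) + beta*ER2(j) + gamma*ER2(j+1),
    with edge replication at the first and last line."""
    padded = np.concatenate([er2[:1], er2, er2[-1:]])
    return alpha * padded[:-2] + beta * padded[1:-1] + gamma * padded[2:]
```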

FIG. 12A is an exemplary view showing the global depth map calculated when the direction of the edge is the vertical direction. The global depth data calculating unit 240 calculates the global depth data GDD of the first through n-th horizontal lines and creates the global depth map using the global depth data GDD of the first through n-th horizontal lines. The global depth map is represented by the same gradation value for each horizontal line, as shown in FIG. 12A. (S404)

Fourth, the global depth data calculating unit 240 calculates the first edge representative values ER1 of the first through m-th vertical lines when the edge direction is the first horizontal direction or the second horizontal direction. The first edge representative value ER1 (i) of the i-th vertical line can be calculated as shown in Equation (10).

ER1(i) = (ED(i, 1) + ED(i, 2) + … + ED(i, n)) / n ... (10)

Fifth, the global depth data calculating unit 240 applies the first through m-th vertical weights VW to the first edge representative values ER1 of the first through m-th vertical lines to calculate the second edge representative values ER2 of the first through m-th vertical lines. When the direction of the edge is the first horizontal direction, the first through m-th vertical weights VW may decrease from the first vertical weight to the m-th vertical weight, as shown in FIG. 11B. This is because, when the direction of the edge is the first horizontal direction, objects having a shallow depth are present toward one side (left side) of the 2D image, so a high weight is applied to the one side (left side) of the 2D image. When the direction of the edge is the second horizontal direction, the first through m-th vertical weights VW may increase from the first vertical weight to the m-th vertical weight, as shown in FIG. 11C. This is because, when the direction of the edge is the second horizontal direction, objects having a shallow depth are present toward the other side (right side) of the 2D image, so a high weight is applied to the other side (right side) of the 2D image. (S405)

Sixth, the global depth data calculating unit 240 calculates the global depth data GDD(i) of the i-th vertical line by applying weights to the second edge representative value ER2(i) of the i-th vertical line and the second edge representative values of the plurality of vertical lines adjacent to the i-th vertical line. For example, as shown in Equation (11), the global depth data calculating unit 240 applies the weight α to the second edge representative value ER2(i-1) of the (i-1)-th vertical line, the weight β to the second edge representative value ER2(i) of the i-th vertical line, and the weight γ to the second edge representative value ER2(i+1) of the (i+1)-th vertical line, and sums the results to calculate the global depth data GDD(i) of the i-th vertical line. At this time, the sum of the weights α, β, and γ is "1", and the weights α and γ may be implemented to be smaller than β. Also, the weights α and γ may be implemented with the same value.

GDD(i) = α × ER2(i-1) + β × ER2(i) + γ × ER2(i+1) ... (11)

Step S406 is a smoothing step that prevents the global depth data GDD(i) of the i-th vertical line from differing too greatly from the global depth data of the plurality of vertical lines adjacent to the i-th vertical line.

FIG. 12B is an exemplary view showing the global depth map calculated when the direction of the edge is the horizontal direction. The global depth data calculating unit 240 calculates the global depth data GDD of the first through m-th vertical lines and creates the global depth map using the global depth data GDD of the first through m-th vertical lines. The global depth map is represented by the same gradation value for each vertical line, as shown in FIG. 12B. (S407)
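Both branches end by replicating the per-line global depth data across the image, so that the map holds one gradation value per horizontal line (vertical edges, FIG. 12A) or per vertical line (horizontal edges, FIG. 12B). A sketch of that final expansion:

```python
import numpy as np

def expand_global_depth_map(gdd, other_dim, vertical_edges=True):
    """Replicate per-line depth data into a full map: constant rows when the
    edge direction is vertical, constant columns when it is horizontal."""
    if vertical_edges:                                # gdd: one value per row
        return np.tile(gdd[:, None], (1, other_dim))
    return np.tile(gdd[None, :], (other_dim, 1))      # gdd: one value per column
```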

As described above, according to the present invention, the direction in which the perspective of a 2D image appears can be detected by determining the direction of the edge, and depending on the direction of the edge, the global depth data is calculated either by analyzing the edges in the horizontal direction of the 2D image data or by analyzing the edges in the vertical direction. That is, according to the present invention, the global depth data is calculated in consideration of the direction in which the perspective of the 2D image appears, so that erroneous creation of the global depth map can be prevented. As a result, the present invention can maintain the stereoscopic quality of the 3D image at a high level.

It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the spirit or scope of the invention. Therefore, the technical scope of the present invention should not be limited to the contents described in the detailed description of the specification, but should be defined by the claims.

10: display panel 110: gate drive circuit
120: Data driving circuit 130: Timing controller
140: image processing circuit 150: host system
200: Global depth map generator 210: Edge data converter
220: compressed data generation unit 230: edge direction determination unit
240: Global depth data calculating unit 300: Local depth map generating unit
400: Depth map generation unit 500: 3D image data generation unit

Claims (20)

A first step of converting 2D image data into edge data;
A second step of analyzing the edge data to determine the direction of the edge; And
And a third step of generating a global depth map by analyzing the edges in the horizontal direction of the 2D image data when the direction of the edge is a vertical direction, and generating the global depth map by analyzing the edges in the vertical direction of the 2D image data when the direction of the edge is a horizontal direction.
The method according to claim 1,
The second step comprises:
Compressing the edge data to generate compressed data;
Calculating a horizontal direction factor and a vertical direction factor of the compressed data;
Calculating edge strength data and edge direction data using the horizontal direction factor and the vertical direction factor;
Calculating an edge direction vector including the edge strength data and edge direction data;
Calculating an accumulated number of the edge direction vectors to generate a histogram, and analyzing the histogram to determine the direction of the edge.
3. The method of claim 2,
Wherein the step of compressing the edge data to generate compressed data comprises:
Wherein the compressed data is generated by compressing the edge data by 1 / r (r is a natural number equal to or larger than 2) in the horizontal direction.
3. The method of claim 2,
Wherein, in the step of calculating the horizontal direction factor and the vertical direction factor of the compressed data, the horizontal direction factor of the compressed data at the (m, n) coordinate is calculated to be larger as the difference between the compressed data at the (m, n) coordinate and the compressed data horizontally adjacent thereto is larger, and the vertical direction factor of the compressed data at the (m, n) coordinate is calculated to be larger as the difference between the compressed data at the (m, n) coordinate and the compressed data vertically adjacent thereto is larger.
5. The method of claim 4,
Wherein, in the step of calculating the edge strength data and the edge direction data, when the horizontal direction factor of the compressed data at the (m, n) coordinate is Sx(m, n) and the vertical direction factor is Sy(m, n), the edge strength data EI(m, n) at the (m, n) coordinate is calculated as

EI(m, n) = |Sx(m, n)| + |Sy(m, n)|.
5. The method of claim 4,
Wherein, in the step of calculating the edge strength data and the edge direction data, when the horizontal direction factor of the compressed data at the (m, n) coordinate is Sx(m, n) and the vertical direction factor is Sy(m, n), the edge direction data ED(m, n) at the (m, n) coordinate is calculated as

ED(m, n) = arctan(Sy(m, n) / Sx(m, n)).
The method according to claim 1,
In the third step,
Calculating a first edge representative value of the first through n-th horizontal lines when the direction of the edge is the vertical direction; And
And calculating second edge representative values of the first through nth horizontal lines by applying first through nth horizontal weights to the first edge representative values of the first through nth horizontal lines,
Wherein the first to nth horizontal weights are increased from the first horizontal weight to the nth horizontal weight.
8. The method of claim 7,
In the third step,
Further comprising calculating global depth data of the j-th horizontal line by applying weights to the second edge representative value of the j-th horizontal line and the second edge representative values of a plurality of horizontal lines adjacent to the j-th horizontal line.
The method according to claim 1,
In the third step,
Calculating a first edge representative value of the first through mth vertical lines when the direction of the edge is the horizontal direction; And
And calculating second edge representative values of the first through mth vertical lines by applying first through mth vertical weights to the first edge representative values of the first through mth vertical lines,
Wherein the first through m-th vertical weights decrease in value from the first vertical weight to the m-th vertical weight when the direction of the edge is a first horizontal direction, and increase in value from the first vertical weight to the m-th vertical weight when the direction of the edge is a second horizontal direction.
10. The method of claim 9,
In the third step,
Further comprising calculating global depth data of the i-th vertical line by applying weights to the second edge representative value of the i-th vertical line and the second edge representative values of a plurality of vertical lines adjacent to the i-th vertical line.
A display panel including data lines and gate lines;
An image processing circuit which generates a global depth map and a local depth map from input 2D image data, generates a depth map using the global depth map and the local depth map, and generates 3D image data using the 2D image data and the depth map;
A data driving circuit for converting the 3D image data into data voltages and outputting the data voltages to the data lines; And
And a gate driving circuit sequentially outputting gate pulses synchronized with the data voltages to the gate lines,
Wherein the image processing circuit comprises a global depth map generation unit including: an edge data conversion unit for converting the 2D image data into edge data; an edge direction determination unit for analyzing the edge data to determine the direction of an edge; and a global depth data calculation unit for calculating global depth data by analyzing the edges in the horizontal direction of the 2D image data when the direction of the edge is a vertical direction, and calculating the global depth data by analyzing the edges in the vertical direction of the 2D image data when the direction of the edge is a horizontal direction.
12. The method of claim 11,
Wherein the global depth map generator comprises:
Further comprising a compressed data generation unit for generating the compressed data by compressing the edge data by 1 / r (r is a natural number of 2 or more) in the horizontal direction.
13. The method of claim 12,
Wherein the edge direction determination unit calculates a horizontal direction factor and a vertical direction factor of the compressed data, calculates edge strength data and edge direction data using the horizontal direction factor and the vertical direction factor, calculates an edge direction vector including the edge strength data and the edge direction data, calculates cumulative numbers of the edge direction vectors to generate a histogram, and analyzes the histogram to determine the direction of the edge.
14. The method of claim 13,
Wherein the edge direction determination unit calculates the horizontal direction factor of the compressed data at the (m, n) coordinate to be larger as the difference between the compressed data at the (m, n) coordinate and the compressed data horizontally adjacent thereto is larger, and calculates the vertical direction factor of the compressed data at the (m, n) coordinate to be larger as the difference between the compressed data at the (m, n) coordinate and the compressed data vertically adjacent thereto is larger.
15. The method of claim 14,
Wherein the edge direction determination unit calculates, when the horizontal direction factor of the compressed data at the (m, n) coordinate is Sx(m, n) and the vertical direction factor is Sy(m, n), the edge strength data EI(m, n) at the (m, n) coordinate as

EI(m, n) = |Sx(m, n)| + |Sy(m, n)|.
15. The method of claim 14,
Wherein the edge direction determination unit calculates, when the horizontal direction factor of the compressed data at the (m, n) coordinate is Sx(m, n) and the vertical direction factor is Sy(m, n), the edge direction data ED(m, n) at the (m, n) coordinate as

ED(m, n) = arctan(Sy(m, n) / Sx(m, n)).
12. The method of claim 11,
Wherein the global depth data calculating unit calculates first edge representative values of the first through n-th horizontal lines when the direction of the edge is the vertical direction, and applies first through n-th horizontal weights to the first edge representative values of the first through n-th horizontal lines to calculate second edge representative values of the first through n-th horizontal lines,
Wherein the first through n-th horizontal weights increase in value from the first horizontal weight to the n-th horizontal weight.
18. The method of claim 17,
Wherein the global depth data calculating unit calculates global depth data of the j-th horizontal line by applying weights to the second edge representative value of the j-th horizontal line and the second edge representative values of a plurality of horizontal lines adjacent to the j-th horizontal line.
12. The method of claim 11,
Wherein the global depth data calculating unit calculates first edge representative values of the first through m-th vertical lines when the direction of the edge is the horizontal direction, and applies first through m-th vertical weights to the first edge representative values of the first through m-th vertical lines to calculate second edge representative values of the first through m-th vertical lines,
Wherein the first through m-th vertical weights decrease in value from the first vertical weight to the m-th vertical weight when the direction of the edge is a first horizontal direction, and increase in value from the first vertical weight to the m-th vertical weight when the direction of the edge is a second horizontal direction.
20. The method of claim 19,
Wherein the global depth data calculating unit calculates global depth data of the i-th vertical line by applying weights to the second edge representative value of the i-th vertical line and the second edge representative values of a plurality of vertical lines adjacent to the i-th vertical line.
KR1020120136164A 2012-11-28 2012-11-28 Global depth map generation method and stereoscopic image display device using the same KR101996657B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020120136164A KR101996657B1 (en) 2012-11-28 2012-11-28 Global depth map generation method and stereoscopic image display device using the same

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020120136164A KR101996657B1 (en) 2012-11-28 2012-11-28 Global depth map generation method and stereoscopic image display device using the same

Publications (2)

Publication Number Publication Date
KR20140070856A true KR20140070856A (en) 2014-06-11
KR101996657B1 KR101996657B1 (en) 2019-10-02

Family

ID=51125480

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020120136164A KR101996657B1 (en) 2012-11-28 2012-11-28 Global depth map generation method and stereoscopic image display device using the same

Country Status (1)

Country Link
KR (1) KR101996657B1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20090008808A (en) * 2007-07-19 2009-01-22 주식회사 이시티 Apparatus and method for converting 2d image signals into 3d image signals
KR20120087867A (en) * 2012-06-20 2012-08-07 이광호 Method for converting 2 dimensional video image into stereoscopic video


Also Published As

Publication number Publication date
KR101996657B1 (en) 2019-10-02

Similar Documents

Publication Publication Date Title
KR101888672B1 (en) Streoscopic image display device and method for driving thereof
US8743111B2 (en) Stereoscopic image display and method for driving the same
KR101869872B1 (en) Method of multi-view image formation and stereoscopic image display device using the same
KR101992163B1 (en) Stereoscopic image display device and method for driving the same
KR20130009173A (en) Image processing method and stereoscopic image display device using the same
KR101963385B1 (en) Disparity calculation method and stereoscopic image display device
KR101990334B1 (en) Stereoscopic image display device and method for driving the same
KR101929042B1 (en) Disparity calculation unit and stereoscopic image display device including the same and disparity calculation method
KR20120127893A (en) Image processing method and stereoscopic image display device using the same
KR101681776B1 (en) Method of controlling picture quality and display device using the same
KR20120040386A (en) 2d-3d image conversion method and stereoscopic image display using the same
KR101996657B1 (en) Global depth map generation method and stereoscopic image display device using the same
KR20140092055A (en) Stereoscopic image display device and driving method thereof
KR101798236B1 (en) Stereoscopic image display and method of adjusting brightness thereof
KR101843197B1 (en) Method of multi-view image formation and stereoscopic image display device using the same
KR101870233B1 (en) Method for improving 3d image quality and stereoscopic image display using the same
KR102022527B1 (en) Stereoscopic image display device and disparity calculation method thereof
KR101961943B1 (en) 3d image data formation method and stereoscopic image display device using the same
KR101843198B1 (en) Method of multi-view image formation and stereoscopic image display device using the same
KR20120015006A (en) Stereoscopic image display device and driving method the same
KR101829466B1 (en) Stereoscopic image display device
KR101983369B1 (en) Multiview image generation method and stereoscopic image display device using the same
KR102126530B1 (en) 3d conversion method and stereoscopic image display device using the same
KR20140073814A (en) Disparity calculation method and stereoscopic image display device
KR101957975B1 (en) Disparity calculation method and stereoscopic image display device using the same

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right