KR20120015719A - Method for providing anaglyph image and three-dimensional converting adapter using the same - Google Patents
- Publication number
- KR20120015719A KR1020100078088A KR20100078088A
- Authority
- KR
- South Korea
- Prior art keywords
- image
- eye image
- right eye
- left eye
- depth value
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/128—Adjusting depth or disparity
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/324—Colour aspects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/332—Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
- H04N13/334—Displays for viewing with the aid of special glasses or head-mounted displays [HMD] using spectral multiplexing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N2213/00—Details of stereoscopic systems
- H04N2213/005—Aspects relating to the "3D+depth" image format
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N2213/00—Details of stereoscopic systems
- H04N2213/008—Aspects relating to glasses for viewing stereoscopic images
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
Abstract
The present invention relates to a method for implementing an anaglyph image and a 3D conversion adapter using the same. The method for implementing anaglyph includes: (a) receiving an image; (b) separating and extracting a left eye image and a right eye image from the input image; (c) calculating a depth value of the image by comparing the left eye image with the right eye image; (d) shifting one of the left eye image and the right eye image right or left by the depth value; and (e) synthesizing the left eye image and the right eye image.
By applying this anaglyph implementation method, 3D images can be watched simply on a conventional 2D TV.
Description
The present invention relates to a 3D image implementation method, and more particularly to an anaglyph implementation method and a 3D conversion adapter using the same.
Recently, as films in 3D have become box-office hits, interest in 3D images is increasing. Currently, digital satellite broadcasting provides 3D broadcasting, and in October 2010 terrestrial broadcasting will start a 3D broadcasting service.
The 3D image is transmitted in a side-by-side format, a top-bottom format, a dual-stream format, or the like.
3D content transmitted from broadcasting stations can be viewed in three dimensions only through a 3D display device.
There are various 3D display methods, such as the anaglyph method, the polarization method, and the shutter glasses method; the polarization or shutter glasses method is mainly applied to TVs. A 3D TV internally converts the 3D image into a polarization or shutter glasses format and displays the converted image.
Most of the test broadcasts currently in service or planned load the 3D content into the existing 2D broadcast transmission format side-by-side or top-bottom, so on an existing 2D TV the left and right images appear pasted next to each other or one above the other on a single screen.
This is very uncomfortable for 2D TV users, because the two images are broadcast side by side on one screen.
To watch 3D content on an existing 2D TV, the broadcast signal or the display device must be modified, and the general view in the industry is that such an implementation is unrealistic because of cost and performance problems.
Accordingly, an object of the present invention is to provide an anaglyph implementation method that makes it possible to simply watch 3D images on a conventional 2D TV.
The above object is achieved by a method for implementing anaglyph, comprising: (a) receiving an image; (b) separating and extracting a left eye image and a right eye image from the input image; (c) calculating a depth value of the image by comparing the left eye image with the right eye image; (d) shifting one of the left eye image and the right eye image right or left by the depth value; and (e) synthesizing the left eye image and the right eye image.
Here, in step (b), the left eye image may be generated by extracting a red component from the left eye portion of the input image, and the right eye image may be generated by extracting a green component and a blue component from the right eye portion of the input image; and step (c) may include setting a portion of the right eye image as a region of interest and calculating the depth value by detecting the region corresponding to the region of interest in the left eye image.
Alternatively, in step (b), the right eye image may be generated by extracting a red component from the right eye portion of the input image, and the left eye image may be generated by extracting a green component and a blue component from the left eye portion of the input image; and step (c) may include setting a portion of the left eye image as a region of interest and calculating the depth value by detecting the region corresponding to the region of interest in the right eye image.
Meanwhile, the method may further include, between steps (d) and (e): (f) detecting horizontal edges of the image containing the red component; and (g) removing high-frequency components from the detected edges.
The method of implementing anaglyph may further include correcting the red component so that the red component is enhanced in the image containing it.
Meanwhile, according to the present invention, the above object can also be achieved by a 3D conversion adapter using the anaglyph implementation method.
As described above, according to the present invention, applying the anaglyph implementation method makes it possible to simply watch 3D images on a conventional 2D TV.
FIG. 1 schematically illustrates a method of implementing anaglyph according to an embodiment of the present invention.
FIG. 2 illustrates types of input 3D images.
FIG. 3 is a conceptual diagram illustrating the calculation and correction of the depth of an image.
FIG. 4 is a flowchart illustrating the edge detection and high-frequency filtering added according to an embodiment of the present invention.
FIG. 5 illustrates an example of a table for correcting the red color.
Hereinafter, specific embodiments of the present invention will be described with reference to the drawings.
FIG. 1 schematically illustrates a method of implementing anaglyph according to an embodiment of the present invention.
Referring to FIG. 1, first, a 3D image is input (S10). FIG. 2 illustrates the types of input 3D images. As illustrated in FIG. 2, the 3D image may be input in a side-by-side format, a top-bottom format, a dual-stream format, or the like.
When the 3D image is input, the left eye image and the right eye image are separated and extracted from the input 3D image (S11). The anaglyph method creates a 3D image by applying complementary color filters to the left and right images; the viewer then watches through glasses fitted with the same complementary color filters.
When the left eye image and the right eye image are extracted, a complementary color filter is applied to each. In an embodiment of the present invention, the red component is extracted for the left eye image, and the green and blue components are extracted for the right eye image. The assignment may also be reversed.
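As a rough sketch of steps S10 and S11 together with the complementary color filtering (an illustration under an assumed R, G, B channel order, not the patent's actual implementation):

```python
import numpy as np

def split_side_by_side(frame):
    """Split a side-by-side 3D frame into left eye and right eye images."""
    h, w, _ = frame.shape
    return frame[:, : w // 2], frame[:, w // 2 :]

def make_anaglyph(left, right):
    """Take the red channel from the left eye image and the green and
    blue channels from the right eye image (channel order R, G, B)."""
    out = np.empty_like(left)
    out[..., 0] = left[..., 0]    # red component of the left eye image
    out[..., 1] = right[..., 1]   # green component of the right eye image
    out[..., 2] = right[..., 2]   # blue component of the right eye image
    return out

# Tiny 2 x 4 side-by-side frame: left half pure red, right half pure cyan.
frame = np.zeros((2, 4, 3), dtype=np.uint8)
frame[:, :2, 0] = 255             # left eye image: red only
frame[:, 2:, 1:] = 255            # right eye image: green and blue only
left, right = split_side_by_side(frame)
anaglyph = make_anaglyph(left, right)
```

Splitting a top-bottom frame would slice along the vertical axis instead.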
On the other hand, the conventional anaglyph method has the disadvantages of color distortion relative to the original image and a ghost phenomenon. When viewing with both eyes, the brain synthesizes the difference between the two eyes' images into a 3D image; when the left eye image leaks into the right eye and the right eye image into the left eye, ghosting occurs.
The present invention uses the concept of the dominant and non-dominant eye to reduce the ghosting that viewers perceive. When looking at an object with both eyes, one of the two eyes is used more than the other; that eye is called the dominant eye, and the eye used relatively less is called the non-dominant eye.
In an embodiment of the present invention, the green-side image, which carries most of the image information and dominates the luminance, is shown to the dominant eye, and the red-side image is shown to the non-dominant eye. Since the left eye image is composed of the red component and the right eye image of the green and blue components, in this embodiment the right eye serves as the dominant eye and the left eye as the non-dominant eye.
Of course, when the extraction is reversed, that is, when the left eye image is extracted with the green and blue components and the right eye image with the red component, the right eye becomes the non-dominant eye and the left eye the dominant eye.
In the present invention, applying this dominant/non-dominant eye concept to reduce the ghost phenomenon, the depth value of the image is calculated from the left eye image and the right eye image (S12), and the depth of the image is corrected using that value (S13). In other words, since ghosting occurs when the depth of the image is too large, the depth of the image is adjusted appropriately around the main object of the image.
FIG. 3 is a conceptual diagram illustrating the calculation and correction of the depth of an image.
Referring to FIG. 3, after setting a center portion of an image as a region of interest (ROI), a depth value of the image of the set region is calculated.
For example, in FIG. 3, the size of the ROI is set to 1/2 of the horizontal (W) and vertical (H) size of the entire image, and it is positioned at the center.
Then, as shown in Equation 1 below, a window of the same size is taken from the left eye image with a horizontal offset d, and the sum of absolute differences (SAD) of the luminance (Y) components is computed while d is moved over the range -W/40 to W/40:

[Equation 1]
SAD(d) = Σ_{(x,y)∈ROI} | Yr(x, y) − Yl(x + d, y) |  (-W/40 ≤ d ≤ W/40)
Here, Yr is the luminance value of the right eye image, and Yl is the luminance value of the left eye image. When obtaining the SAD, the block size may be set in various ways, for example 16 × 16, or the size of the ROI (in this example, 1/4 the area of the entire image).
The above equation obtains the depth information of the image from the size and direction of the motion vector at the position of the object. Using only the horizontal component, and not the vertical component, greatly reduces the amount of computation.
The depth value of the image calculated by the present invention is the value of d at which the SAD is minimal in the above equation, and the entire left eye image is shifted to the right by this calculated depth d.
When the depth value of the image is obtained and the left eye image is shifted by that value, the difference between the left and right eye images is reduced, thereby reducing the ghost phenomenon.
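The depth search of Equation 1 and the subsequent shift can be sketched as follows. The centered half-size ROI and the ±W/40 search range follow the text; the synthetic test pair and the sign convention of the shift are illustrative assumptions:

```python
import numpy as np

def estimate_depth(y_left, y_right, max_d):
    """Return the offset d in [-max_d, max_d] that minimises the SAD of
    Equation 1 between the right eye ROI and the shifted left eye image."""
    h, w = y_right.shape
    r0, r1 = h // 4, h // 4 + h // 2          # centered ROI: half height
    c0, c1 = w // 4, w // 4 + w // 2          # centered ROI: half width
    best_d, best_sad = 0, float("inf")
    for d in range(-max_d, max_d + 1):
        shifted = np.roll(y_left, d, axis=1)  # horizontal shift by d
        sad = np.abs(y_right[r0:r1, c0:c1].astype(int)
                     - shifted[r0:r1, c0:c1].astype(int)).sum()
        if sad < best_sad:
            best_sad, best_d = sad, d
    return best_d

# Synthetic pair: the left eye image equals the right eye image shifted 2 px.
rng = np.random.default_rng(0)
y_right = rng.integers(0, 256, size=(40, 80))
y_left = np.roll(y_right, 2, axis=1)
d = estimate_depth(y_left, y_right, max_d=80 // 40)   # search range ±W/40
aligned_left = np.roll(y_left, d, axis=1)             # shift reduces ghosting
```

In a real implementation the borders exposed by the shift would be filled from the right eye image, as the text describes next.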
At this time, the left and right borders of the shifted image are filled in from the right eye image. These corrected parts are not a problem because, after the subsequent filtering, they are not easily noticed.
Meanwhile, the ROI may be set adaptively according to the characteristics of the image, rather than being the predetermined region shown in FIG. 3.
For example, it is possible to determine where the objects in the image are located on the screen, and which of them belong to the background behind or the foreground in front, and to set the location of the main object as the region of interest. Ghosting can then be eliminated by correcting the depth value of the image around the main object, to which the human eye is most easily drawn.
After correcting the depth value of the image, the left eye image and the right eye image are synthesized and output (S14).
Meanwhile, according to another embodiment of the present invention, the image may additionally undergo edge detection and high-frequency-removal filtering, besides the depth value correction, as a means of eliminating ghosting.
FIG. 4 is a flowchart illustrating the edge detection and high-frequency filtering added according to an embodiment of the present invention.
The processing of FIG. 4 may additionally be inserted between steps S13 and S14 of FIG. 1. First, an edge component is detected from the image containing the red component, that is, the left eye image in this embodiment of the present invention (S20).
Specifically, the horizontal-gradient edges of the left eye image are detected to find the portions where the edges are strong. Edge detection focuses on vertical edges, and shorter edges are eliminated to minimize unnecessary image processing.
Equation 2 below shows the 3 × 3 matrix used to detect the edge components.
Here, A is the 3 × 3 matrix of luminance values centered on the current position, and the edge size G of the horizontal component is obtained by convolving the above matrix with A and taking the absolute value. In the present invention, only the portions where large edge sizes are connected in the vertical direction are detected.
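To make this concrete, the horizontal-component edge size G might be computed as in the sketch below. Since the actual 3 × 3 matrix of Equation 2 is not reproduced in this text, a horizontal Sobel kernel is assumed purely for illustration:

```python
import numpy as np

# The patent's exact 3x3 matrix is not reproduced here; a horizontal
# Sobel kernel is assumed as a representative choice.
KERNEL = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]])

def horizontal_edge_strength(y):
    """Edge size G of the horizontal component: |kernel applied to the
    3x3 luminance neighbourhood A| at each interior pixel. G responds
    strongly to vertical edges."""
    h, w = y.shape
    g = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            a = y[i - 1 : i + 2, j - 1 : j + 2]   # 3x3 neighbourhood A
            g[i, j] = abs((KERNEL * a).sum())
    return g

# A vertical step edge: left half dark, right half bright.
y = np.zeros((5, 6))
y[:, 3:] = 100.0
g = horizontal_edge_strength(y)
```

A subsequent pass would keep only pixels where large G values form long vertical runs, as the text specifies.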
Then, a Gaussian filter or the like is applied to remove the high-frequency components around the detected portions (S21).
Equation 3 shows an equation of a Gaussian filter applied to the present invention, and Table 1 shows a Gaussian filter table.
G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))  (σ = 1)
[Table 1]
As described above, the present invention removes the high-frequency components of the red image with a Gaussian filter, weakening the boundary portions in the image seen by the non-dominant eye; the filtered image reduces the ghost phenomenon while the three-dimensional effect remains. This is possible because the luminance component, which represents the shape of the image, is contained mostly in the green channel.
The filtering strength can be varied by position according to the edge-detection result: strong filtering is performed in areas where the vertical edge component is strong and long, since these are likely to cause ghosting.
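A sketch of S21 under the stated σ = 1: sample the Gaussian of Equation 3 into a small kernel and low-pass filter the red image only where an edge mask (from S20) is set. The 3 × 3 kernel size is an assumption; the patent's Table 1 gives the actual coefficients:

```python
import numpy as np

def gaussian_kernel(size=3, sigma=1.0):
    """Sample the 2-D Gaussian of Equation 3, normalised to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def blur_near_edges(red, edge_mask, kernel):
    """Low-pass filter the red image, but only at pixels flagged in
    edge_mask; all other pixels keep their original values."""
    pad = kernel.shape[0] // 2
    padded = np.pad(red.astype(float), pad, mode="edge")
    out = red.astype(float).copy()
    for i, j in zip(*np.nonzero(edge_mask)):
        patch = padded[i : i + kernel.shape[0], j : j + kernel.shape[1]]
        out[i, j] = (patch * kernel).sum()   # weighted neighbourhood average
    return out

k = gaussian_kernel()
red = np.zeros((5, 6))
red[:, 3:] = 100.0                     # vertical step edge in the red image
mask = np.zeros_like(red, dtype=bool)
mask[2, 3] = True                      # filter only at the flagged edge pixel
smoothed = blur_near_edges(red, mask, k)
```

Position-dependent strength, as described above, could be obtained by scaling σ or the kernel size with the local edge measure.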
In some cases, if the depth value of the image is too large (that is, when the left-right parallax is too large), the ghost phenomenon may appear even when the above method is applied. However, the present invention can prevent this by using the depth information of the image to correct the overall depth value so that the depth of the most prominent central portion of the image is not too large. At this time, the relative depth value at each position does not change.
In addition, to correct the color distortion that is a disadvantage of conventional anaglyph implementations, a suitable color table is generated in consideration of the characteristics of the display device and the glasses, and the red color is enhanced (S22).
FIG. 5 illustrates an example of a table for correcting the red color. The horizontal axis represents the input value and the vertical axis the output value.
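The red-enhancement table of S22 might be applied as below. The linear-gain curve is purely hypothetical; the actual curve of FIG. 5 is tuned to the display device and the glasses:

```python
import numpy as np

def make_red_lut(gain=1.2):
    """Hypothetical red-enhancement table (input 0-255 to output 0-255):
    a simple linear boost clipped at 255."""
    x = np.arange(256)
    return np.clip(x * gain, 0, 255).astype(np.uint8)

def enhance_red(image, lut):
    """Apply the lookup table to the red channel only (order R, G, B)."""
    out = image.copy()
    out[..., 0] = lut[image[..., 0]]   # per-pixel table lookup
    return out

lut = make_red_lut()
pixel = np.array([[[100, 50, 50]]], dtype=np.uint8)   # one R, G, B pixel
enhanced = enhance_red(pixel, lut)
```

A fuller implementation would load the table of FIG. 5 rather than compute a straight line.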
Meanwhile, when the synthesized image is smaller than the size of the image to be output, a scaling step to enlarge it may be added.
Meanwhile, the anaglyph implementation method described above may be applied to a 3D conversion adapter. The 3D conversion adapter allows a stereoscopic 3D image to be viewed on a 2D TV with anaglyph glasses; it is installed at the TV signal input terminal and implements 3D using the anaglyph method described above.
In the above-described embodiment, the case in which the dominant and non-dominant eyes are fixed was described as an example, but the user may select which eye is the dominant eye. That is, since the dominant eye differs from person to person, the assignment may be set differently according to the user's selection.
Although some embodiments of the invention have been shown and described, it will be apparent to those skilled in the art that modifications may be made to the embodiments without departing from the spirit or scope of the invention. It is intended that the scope of the invention be defined by the appended claims and their equivalents.
Claims (6)
(a) receiving an image;
(b) separating and extracting a left eye image and a right eye image from the input image;
(c) calculating a depth value of the image by comparing the left eye image with the right eye image;
(d) shifting one of the left eye image and the right eye image right or left by the depth value using the depth value;
(e) synthesizing the left eye image and the right eye image.
Wherein, in the step (b), the left eye image is generated by extracting a red component from the left eye portion of the input image, and the right eye image is generated by extracting a green component and a blue component from the right eye portion of the input image,
and the step (c) includes setting a portion of the right eye image as a region of interest and calculating the depth value by detecting a region corresponding to the region of interest in the left eye image.
Wherein, in the step (b), the right eye image is generated by extracting a red component from the right eye portion of the input image, and the left eye image is generated by extracting a green component and a blue component from the left eye portion of the input image,
and the step (c) includes setting a portion of the left eye image as a region of interest and calculating the depth value by detecting a region corresponding to the region of interest in the right eye image.
Between step (d) and step (e),
(f) detecting horizontal edges of the image including the red component; and
(g) removing the high-frequency components of the detected edges.
The method of implementing anaglyph, further comprising correcting the red component such that the red component is enhanced in the image including the red component.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020100078088A KR20120015719A (en) | 2010-08-13 | 2010-08-13 | Method for providing anaglyph image and three-dimensional converting adapter using the same |
Publications (1)
Publication Number | Publication Date |
---|---|
KR20120015719A true KR20120015719A (en) | 2012-02-22 |
Family
ID=45838326
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020100078088A KR20120015719A (en) | 2010-08-13 | 2010-08-13 | Method for providing anaglyph image and three-dimensional converting adapter using the same |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR20120015719A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101655036B1 (en) * | 2015-06-19 | 2016-09-07 | 인하대학교 산학협력단 | Method and System for Generating Anaglyph Image Reconstruction and Depth Map |
US12003697B2 (en) | 2021-05-06 | 2024-06-04 | Samsung Electronics Co., Ltd. | Wearable electronic device and method of outputting three-dimensional image |
- 2010-08-13: KR application KR1020100078088A filed (publication KR20120015719A); status: not active, Application Discontinuation
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR101759943B1 (en) | Broadcasting receiver and method for displaying 3d images | |
KR101185870B1 (en) | Apparatus and method for processing 3 dimensional picture | |
US8558875B2 (en) | Video signal processing device | |
US20130038611A1 (en) | Image conversion device | |
EP2569950B1 (en) | Comfort noise and film grain processing for 3 dimensional video | |
JP4940397B2 (en) | Stereo image format conversion method applied to display system | |
CN102186023B (en) | Binocular three-dimensional subtitle processing method | |
US20110242296A1 (en) | Stereoscopic image display device | |
US20120140029A1 (en) | Image Processing Device, Image Processing Method, and Program | |
JP2001258052A (en) | Stereoscopic image display device | |
Tam et al. | Three-dimensional TV: A novel method for generating surrogate depth maps using colour information | |
JP6139691B2 (en) | Method and apparatus for handling edge disturbance phenomenon in multi-viewpoint 3D TV service | |
US20110175980A1 (en) | Signal processing device | |
US20110134215A1 (en) | Method and apparatus for providing 3d image and method and apparatus for displaying 3d image | |
KR20120015719A (en) | Method for providing anaglyph image and three-dimensional converting adapter using the same | |
KR20120015718A (en) | Digital-analog converting apparatus for three-dimensional image | |
EP2485491B1 (en) | Image processing apparatus and control method thereof | |
US20120140026A1 (en) | 2D-to-3D COLOR COMPENSATION SYSTEM AND METHOD THEREOF | |
CN110636282B (en) | No-reference asymmetric virtual viewpoint three-dimensional video quality evaluation method | |
Tam et al. | Temporal sub-sampling of depth maps in depth image-based rendering of stereoscopic image sequences | |
WO2013092219A1 (en) | Method for reducing crosstalk in stereoscopic images | |
US20120194655A1 (en) | Display, image processing apparatus and image processing method | |
JP2012049880A (en) | Image processing apparatus, image processing method, and image processing system | |
Knorr et al. | Basic rules for “good 3d” and the avoidance of visual discomfort in stereoscopic vision | |
JPH0211093A (en) | Stereoscopic video display method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A201 | Request for examination | ||
E902 | Notification of reason for refusal | ||
E601 | Decision to refuse application |