KR20140113066A - Multi-view points image generating method and appararus based on occulsion area information - Google Patents
- Publication number
- KR20140113066A KR1020130027970A
- Authority
- KR
- South Korea
- Prior art keywords
- image
- images
- generating
- virtual
- present
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/111—Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/271—Image signal generators wherein the generated image signals comprise depth maps or disparity maps
Abstract
Description
The present invention relates to an image processing technique, and more particularly, to a method and apparatus for estimating the disparity in a stereo image or a multi-view image and generating a multi-view image based on the estimated disparity.
Recently, interest in UHD (Ultra High Definition), which offers more than four times the resolution of HDTV, has increased, and compression techniques for higher resolutions and higher image quality are required.
Along with this, interest in and demand for 3D video services are increasing as another next-generation technology after HDTV.
For example, 3D movies are well received, and home displays also support 3D images. This trend is expected to lead to 3D image support in portable terminals, and it is becoming clear that the demand for 3D images, and the market for them, will grow.
In this regard, standard technologies for processing 3D images based on high-definition image processing have been established, and many companies are launching products based on 3D image processing technology.
When a 3D image is displayed, a sense of depth is created using the parallax between images captured at different viewpoints. To generate multi-view images, images at virtual viewpoints can be generated from left and right color images and depth images, or by rendering based on images from three or more viewpoints.
On the other hand, when generating an image at a virtual viewpoint, reducing the influence of occlusion regions and holes, so as to raise the quality of the 3D image, remains a challenge.
It is an object of the present invention to provide a method and apparatus for separately extracting occlusion region information and removing holes and boundary noise at the depth boundaries of a virtual viewpoint image.
Another object of the present invention is to provide a method and apparatus capable of generating high-quality multi-view images by taking disparity accuracy into account when generating an image at a virtual viewpoint.
The present invention relates to a multi-view image generation method and apparatus based on occlusion region information.
According to an embodiment of the present invention, a method of generating a multi-view image may include calculating the disparity between left and right images, determining weights based on the disparity information, and generating a virtual viewpoint image by applying the weights.
According to the present invention, image quality in the vicinity of object boundaries and depth discontinuities can be improved by separately extracting occlusion region information from the stereo input images and projecting the occlusion regions onto the intermediate-view image based on the extracted information.
According to the present invention, when the projections of the left and right images are blended, the color value of the virtual view image is calculated with weights that reflect the similarity between left and right corresponding points, the presence or absence of an occlusion region, and the distance to the virtual viewpoint, so that boundary noise and holes can be reduced.
FIG. 1 is a block diagram schematically illustrating an example of a virtual viewpoint generation method according to the present invention.
FIG. 2 is a conceptual diagram schematically illustrating a method of extracting occlusion region information according to the present invention.
FIG. 3 is a view schematically illustrating an example of generating an intermediate image using occlusion region information according to the present invention.
In describing the embodiments of the present invention, detailed descriptions of related known structures or functions are omitted when they are deemed likely to obscure the subject matter of the present specification.
The present invention relates to an image processing technique, and more particularly, to a method and apparatus for estimating the disparity in a stereo image or a multi-view image and generating a multi-view image based on that disparity. The present invention can improve the image quality in holes and occlusion regions.
Multiview images can be acquired or generated in various ways.
As a method of acquiring or generating a multi-view image, there is a method of acquiring a multi-view image directly using cameras corresponding to the number of viewpoints.
As another method of acquiring or generating a multi-view image, a color image and a depth image can be acquired using a color camera and a depth camera, respectively, and a multi-view image can then be generated from the 2D color image and the depth information. A depth image represents the distance of each point from the camera as a gray-scale intensity.
As another method of acquiring or generating a multi-view image, there is a method of extracting disparity information from a stereo image acquired with two color cameras and generating a multi-view image.
In this case, the disparity information may represent the positional difference between the same object as it appears in the left and right images.
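As an editorial illustration (not from the patent text): for a rectified stereo pair, this positional difference is simply the horizontal offset of a corresponding point along a scanline, and larger offsets correspond to nearer objects.

```python
# Minimal sketch (editorial example): disparity of a corresponding point pair
# on the same scanline of a rectified stereo pair.
def disparity(x_left: int, x_right: int) -> int:
    """Positional difference of the same scene point between left and right images."""
    return x_left - x_right

# A nearby object shifts more between the two views than a distant one.
near = disparity(x_left=120, x_right=96)    # large shift: close to the cameras
far = disparity(x_left=120, x_right=117)    # small shift: far from the cameras
```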
A device for displaying a multi-view image, e.g., a 3D image, generated as described above is a 3D display. Many of the displays currently in circulation are binocular displays, and thus much of the available content consists of stereo images for binocular displays.
In a binocular display, glasses can be used to separate the image entering the viewer's left eye from the image entering the viewer's right eye.
To present different images to the two eyes, a stereo image is composed of a left image and a right image, and a 3D effect can be produced using the parallax between the two images.
Considering that binocular (stereoscopic) 3D displays are already prevalent, it will be necessary to reuse stereoscopic content even when other types of displays are introduced later.
For example, a method is needed for viewing stereoscopic content that has already been produced on a non-stereoscopic display, such as a multi-view 3D display, if such displays are introduced in the future. For this purpose, the Depth Image Based Rendering (DIBR) technique can be widely used.
However, extracting a disparity map or a depth map from a stereoscopic image using stereo matching is time consuming and suffers from poor accuracy.
Furthermore, in this case, boundary noise and/or holes may exist in the generated virtual view image because of occlusion regions and incorrect disparity information.
Therefore, a high performance hole filling process may be required.
Automatic depth map estimation still lacks accuracy and reliability. It is therefore necessary to consider methods that generate high-quality multi-view images while reducing holes and noise at depth boundaries.
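As an editorial sketch of why such holes arise (my own toy example, not the patent's algorithm): forward-warping each pixel of one scanline by its disparity leaves unfilled positions where background is uncovered next to a foreground object.

```python
import numpy as np

def forward_warp(colors: np.ndarray, disp: np.ndarray, shift: float = 1.0) -> np.ndarray:
    """Warp one scanline toward a virtual viewpoint; -1 marks holes."""
    virtual = np.full(colors.shape, -1, dtype=colors.dtype)  # -1 = unfilled hole
    z_buf = np.full(colors.shape, -1.0)  # keep the nearest (largest-disparity) pixel
    for x in range(colors.size):
        xv = x - int(round(shift * disp[x]))  # destination column in the virtual view
        if 0 <= xv < colors.size and disp[x] > z_buf[xv]:
            virtual[xv] = colors[x]
            z_buf[xv] = disp[x]
    return virtual

# Foreground (disparity 2) over background (disparity 0): after warping,
# holes (-1) appear where the foreground object moved away.
row = np.array([10, 10, 90, 90, 10, 10])
disp = np.array([0, 0, 2, 2, 0, 0])
warped = forward_warp(row, disp)  # the uncovered background becomes holes
```

These holes are exactly the positions a hole-filling or occlusion-aware step must repair.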
Hereinafter, a method for generating a multi-view image based on disparity information from a stereo image according to the present invention will be described with reference to the drawings.
According to the present invention, the image quality at an intermediate viewpoint can be improved by taking into account the occlusion region information and the reliability of the disparity.
FIG. 1 is a block diagram schematically illustrating an example of a virtual viewpoint generation method according to the present invention.
In the example of FIG. 1, a virtual viewpoint image is generated based on the stereo image and the occlusion region information according to the present invention.
A stereo image is a pair of images captured at left and right viewpoints of the same scene, and a parallax exists between the two images.
Here, an occlusion region is a region in the left image or the right image that is hidden behind other objects in the scene.
Because of the parallax, part of the occlusion region of the left image can be observed in the right image, and part of the occlusion region of the right image can be observed in the left image.
Referring to FIG. 1, a virtual viewpoint image can be generated when the stereo image and the left and right occlusion region information are input.
Conventionally, left and right occlusion region information is extracted based on FDU (Free-viewpoint TV Data Unit) or based on LDV (Layered Depth Video).
Both methods use the images and disparities of three viewpoints to extract the occlusion region between the center image and the left image and the occlusion region between the center image and the right image.
In contrast, in the present invention, occlusion region information is extracted through cross-projection between the left and right images. The extraction of occlusion region information according to the present invention is described later.
When calculating the left and right disparities, the occlusion regions can be excluded from the disparity calculation.
By excluding the occlusion regions from the disparity calculation, the complexity of the calculation can be reduced, the process can be performed more quickly, and the accuracy of the disparity calculation can be improved.
The blending can be performed by weighting the projected left and right images as in Equation 1.

<Equation 1>
I_virtual = W_left · I_left + W_right · I_right

In Equation 1, I_virtual is the color of the virtual view image, and I_left and I_right are the projected colors of the left and right images. W_left is the weight for a projected color pixel of the left image, and W_right is the weight for a projected color pixel of the right image.
W_left and W_right can be derived as in Equation 2.

<Equation 2>
W_left = w_2L · (w_1L + w_3L) / 2
W_right = w_2R · (w_1R + w_3R) / 2
In Equation 2, w_1L and w_1R are weights based on mutual reliability, that is, the color similarity between corresponding points. They take values between 0 and 1 and can be calculated using a similarity function, such as Normalized Cross Correlation (NCC), between the corresponding points of the left and right images obtained from the disparity.
In addition, w_2L and w_2R are weights according to the presence or absence of an occlusion region. For example, w_2L (or w_2R) is 0 for pixels belonging to an occlusion region and 1 for pixels outside it. Therefore, when the virtual viewpoint image is generated, occlusion region pixels contribute no color information.
Finally, w_3L and w_3R are weights determined by the distance between the position of the projected left or right image and the position of the virtual view image. They take values between 0 and 1, and the closer the two positions are, the larger the weight.
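As an editorial sketch (not part of the patent), the blending of Equation 1 with the weights of Equation 2 might look as follows. The per-pixel weights w_1, w_2, and w_3 are assumed to be precomputed, and a normalization by the weight sum is added here (my assumption) so that the result stays in the color range even where one view contributes nothing.

```python
import numpy as np

def blend_virtual_view(I_left, I_right, w1L, w2L, w3L, w1R, w2R, w3R):
    """Blend projected left/right colors per Equations 1 and 2.

    w1: similarity (e.g. NCC) between corresponding points, in [0, 1]
    w2: 0 inside an occlusion region, 1 outside
    w3: closeness of the projected position to the virtual view, in [0, 1]
    """
    W_left = w2L * (w1L + w3L) / 2.0        # Equation 2
    W_right = w2R * (w1R + w3R) / 2.0
    total = W_left + W_right
    total[total == 0] = 1.0                 # avoid division by zero in holes
    # Equation 1, with the weights normalized to sum to 1 at each pixel.
    return (W_left * I_left + W_right * I_right) / total

# Two-pixel scanline: the second pixel is occluded in the left view (w2L = 0),
# so its color comes entirely from the right image.
out = blend_virtual_view(
    I_left=np.array([100.0, 100.0]), I_right=np.array([110.0, 110.0]),
    w1L=np.ones(2), w2L=np.array([1.0, 0.0]), w3L=np.full(2, 0.5),
    w1R=np.ones(2), w2R=np.ones(2), w3R=np.full(2, 0.5))
```

The zeroed w_2 weight is what keeps occluded pixels from contaminating the blended color, as the paragraph above describes.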
The virtual viewpoint image produced by the blending may then be post-processed, for example by filling any remaining holes, to obtain the final virtual viewpoint image.
FIG. 2 is a conceptual diagram schematically illustrating a method of extracting occlusion region information according to the present invention.
As described above, in the present invention, occlusion region information is extracted by cross-projection between the two images of the stereo pair, that is, the left image and the right image.
Specifically, the occlusion region information is extracted using the difference between a virtual viewpoint image generated by cross-projection between the left and right images and the corresponding original image.
For example, according to the present invention, a virtual right image at the original right image position can be generated using the disparity of the original left image, and a virtual left image at the original left image position can be generated using the disparity of the original right image. The left and right occlusion region information can then be obtained by computing the difference between each original image and its virtual counterpart.
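This cross-projection check can be sketched on a single scanline (an editorial toy example with assumed non-negative color values, not the patent's exact procedure): warp the left image to the right viewpoint using the left disparity, and mark as occluded every right-image pixel that the warp either could not reach or does not reproduce.

```python
import numpy as np

def occlusion_mask(src_row, src_disp, dst_row, tol=0):
    """Warp src to the other viewpoint and diff against the original image there.

    Returns True where a destination pixel is not explained by the warped
    source, i.e. where it is occluded in (or inconsistent with) the source view.
    Assumes non-negative color values, so -1 can mark unreached pixels.
    """
    synthetic = np.full(dst_row.shape, -1, dtype=dst_row.dtype)
    for x in range(src_row.size):
        xv = x - src_disp[x]               # left -> right shifts by -disparity
        if 0 <= xv < dst_row.size:
            synthetic[xv] = src_row[x]
    return (synthetic == -1) | (np.abs(synthetic - dst_row) > tol)

# Columns 2-3 of the right image show background that the foreground object
# hides in the left image, so they are flagged as occluded in the left view.
left = np.array([10, 10, 90, 90, 10, 10])
left_disp = np.array([0, 0, 2, 2, 0, 0])
right = np.array([90, 90, 20, 20, 10, 10])
mask = occlusion_mask(left, left_disp, right)
```

Running the same check in the opposite direction (right warped to the left viewpoint) yields the other view's occlusion mask, matching the symmetric procedure described above.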
Referring to FIG. 2, a right-viewpoint virtual image can be synthesized from the left image and compared with the original right image; likewise, a left-viewpoint virtual image can be synthesized from the right image and compared with the original left image, so that the synthesis errors can be identified.
At this time, the synthesis errors can be reflected in the weights of Equation 2, e.g., w_1L and w_1R, w_2L and w_2R, and w_3L and w_3R.
Then, the left and right occlusion region information can be extracted and stored.
At this time, the colors at the left-viewpoint and right-viewpoint occlusion region positions may be stored either as binary masks or as images filled with the colors of the unoccluded parts of the other viewpoint's image.
For example, the color for an occlusion region position in the left viewpoint may be filled with the corresponding unoccluded color from the right-viewpoint image, and the color for an occlusion region position in the right viewpoint may be filled with the corresponding unoccluded color from the left-viewpoint image.
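A small editorial sketch of this storage scheme (an assumed representation, not mandated by the patent): given a binary occlusion mask for one view, fetch the color for each occluded pixel from the corresponding, unoccluded position in the other view's image.

```python
import numpy as np

def fill_occlusion_colors(own_row, own_mask, other_row, disp):
    """Fill occluded pixels with colors fetched from the other viewpoint.

    own_mask is True where own_row is occluded; disp maps each column of
    own_row to its corresponding column in other_row.
    """
    filled = own_row.copy()
    for x in np.flatnonzero(own_mask):
        xo = x - disp[x]                   # corresponding column in the other view
        if 0 <= xo < other_row.size:
            filled[x] = other_row[xo]
    return filled

# Left-view background pixels hidden behind the foreground (mask True) are
# filled with the background colors visible in the right image.
left = np.array([10, 10, 0, 0, 10, 10])
mask = np.array([False, False, True, True, False, False])
right = np.array([90, 90, 20, 20, 10, 10])
filled = fill_occlusion_colors(left, mask, right, np.zeros(6, dtype=int))
```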
Note that FIG. 2 is one embodiment presented to facilitate understanding of the present invention; the extraction of occlusion region information can be performed in various other ways within the scope of the technical idea of the present invention.
FIG. 3 is a view schematically illustrating an example of generating an intermediate image using occlusion region information according to the present invention.
Referring to FIG. 3, it may be determined whether each pixel belongs to an occlusion region, and the intermediate image may then be generated accordingly using the occlusion region information.
In the above-described exemplary system, the methods are described as a series of steps or blocks on the basis of a flowchart, but the present invention is not limited to the order of the steps; some steps may occur in a different order or simultaneously. In addition, the above-described embodiments include examples of various aspects. For example, combinations of the embodiments are also to be understood as embodiments of the present invention.
Claims (1)
A method of generating a multi-view image, the method comprising:
calculating a disparity between left and right images;
determining a weight based on the disparity information; and
generating a virtual viewpoint image by applying the weight.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020130027970A KR20140113066A (en) | 2013-03-15 | 2013-03-15 | Multi-view points image generating method and appararus based on occulsion area information |
Publications (1)
Publication Number | Publication Date |
---|---|
KR20140113066A true KR20140113066A (en) | 2014-09-24 |
Family
ID=51757728
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020130027970A KR20140113066A (en) | 2013-03-15 | 2013-03-15 | Multi-view points image generating method and appararus based on occulsion area information |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR20140113066A (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108076384A (en) * | 2018-01-02 | 2018-05-25 | 京东方科技集团股份有限公司 | A kind of image processing method based on virtual reality, device, equipment and medium |
WO2019134368A1 (en) * | 2018-01-02 | 2019-07-11 | Boe Technology Group Co., Ltd. | Image processing method of virtual reality and apparatus thereof |
CN108076384B (en) * | 2018-01-02 | 2019-12-06 | 京东方科技集团股份有限公司 | image processing method, device, equipment and medium based on virtual reality |
US11373337B2 (en) | 2018-01-02 | 2022-06-28 | Beijing Boe Optoelectronics Technology Co., Ltd. | Image processing method of virtual reality and apparatus thereof |
GB2569979A (en) * | 2018-01-05 | 2019-07-10 | Sony Interactive Entertainment Inc | Image generating device and method of generating an image |
US10848733B2 (en) | 2018-01-05 | 2020-11-24 | Sony Interactive Entertainment Inc. | Image generating device and method of generating an image |
GB2569979B (en) * | 2018-01-05 | 2021-05-19 | Sony Interactive Entertainment Inc | Rendering a mixed reality scene using a combination of multiple reference viewing points |
CN113676774A (en) * | 2021-08-20 | 2021-11-19 | 京东方科技集团股份有限公司 | Image processing method, image processing apparatus, display apparatus, and storage medium |
CN113676774B (en) * | 2021-08-20 | 2024-04-09 | 京东方科技集团股份有限公司 | Image processing method, image processing apparatus, display apparatus, and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10148930B2 (en) | Multi view synthesis method and display devices with spatial and inter-view consistency | |
US8488869B2 (en) | Image processing method and apparatus | |
Huynh-Thu et al. | Video quality assessment: From 2D to 3D—Challenges and future trends | |
KR101185870B1 (en) | Apparatus and method for processing 3 dimensional picture | |
JP6027034B2 (en) | 3D image error improving method and apparatus | |
US20140098100A1 (en) | Multiview synthesis and processing systems and methods | |
US20130069942A1 (en) | Method and device for converting three-dimensional image using depth map information | |
US20130038600A1 (en) | System and Method of Processing 3D Stereoscopic Image | |
CN104662896A (en) | An apparatus, a method and a computer program for image processing | |
Cheng et al. | Spatio-temporally consistent novel view synthesis algorithm from video-plus-depth sequences for autostereoscopic displays | |
KR20170140187A (en) | Method for fully parallax compression optical field synthesis using depth information | |
US8982187B2 (en) | System and method of rendering stereoscopic images | |
JP2013527646A5 (en) | ||
Winkler et al. | Stereo/multiview picture quality: Overview and recent advances | |
JP7184748B2 (en) | A method for generating layered depth data for a scene | |
US9639944B2 (en) | Method and apparatus for determining a depth of a target object | |
KR20140113066A (en) | Multi-view points image generating method and appararus based on occulsion area information | |
Köppel et al. | Filling disocclusions in extrapolated virtual views using hybrid texture synthesis | |
KR20110060180A (en) | Method and apparatus for producing 3d models by interactively selecting interested objects | |
KR20170075656A (en) | Tridimensional rendering with adjustable disparity direction | |
US20120170841A1 (en) | Image processing apparatus and method | |
US9787980B2 (en) | Auxiliary information map upsampling | |
Knorr et al. | From 2D-to stereo-to multi-view video | |
KR101192121B1 (en) | Method and apparatus for generating anaglyph image using binocular disparity and depth information | |
TWM529333U (en) | Embedded three-dimensional image system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WITN | Withdrawal due to no request for examination |