KR20140113066A - Multi-view points image generating method and appararus based on occulsion area information - Google Patents


Info

Publication number
KR20140113066A
Authority
KR
South Korea
Prior art keywords
image
images
generating
virtual
present
Prior art date
Application number
KR1020130027970A
Other languages
Korean (ko)
Inventor
엄기문
김찬
신홍창
이현
이응돈
정원식
허남호
Original Assignee
한국전자통신연구원
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 한국전자통신연구원 filed Critical 한국전자통신연구원
Priority to KR1020130027970A priority Critical patent/KR20140113066A/en
Publication of KR20140113066A publication Critical patent/KR20140113066A/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/111Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/271Image signal generators wherein the generated image signals comprise depth maps or disparity maps

Abstract

The present invention relates to a method and an apparatus for generating a multi-view image based on occlusion area information. The multi-view image generating method according to the present invention comprises the steps of: calculating a disparity between the left and right images; determining a weight based on the disparity information; and generating a virtual viewpoint image by applying the weight.

Description

TECHNICAL FIELD [0001] The present invention relates to a multi-viewpoint image generation method and apparatus based on occlusion area information.

The present invention relates to an image processing technique, and more particularly, to a method and apparatus for estimating the disparity in a stereo image or a multi-view image and generating a multi-view image based on the estimated disparity.

Recently, interest in UHD (Ultra High Definition), which has more than four times the resolution of HDTV, has increased, and compression techniques for higher resolution and higher image quality are required.

Along with this, interest in and demand for 3D video services are increasing as the next-generation technology after HDTV.

For example, 3D movies have been well received, and home displays now support 3D images as well. This is expected to lead to 3D image support in portable terminals, and it is becoming clear that the demand for 3D images and the corresponding market will grow.

In this regard, standard technologies for processing 3D images based on high-definition image processing have been established, and many companies are launching products based on 3D image processing technology.

When a 3D image is implemented, a sense of depth is created using the parallax between images at different viewpoints. To generate multi-view images, images of virtual viewpoints can be generated from left and right color images and depth images, or rendered from images of three or more viewpoints.

On the other hand, when generating an image at a virtual viewpoint, reducing the influence of occlusion regions and holes so as to raise the quality of the 3D image remains a problem.

It is an object of the present invention to provide a method and apparatus for separately extracting occlusion area information and removing holes and boundary noise at the depth boundaries of a virtual viewpoint image.

Another object of the present invention is to provide a method and an apparatus capable of generating a high-quality multi-viewpoint image by considering the disparity accuracy when generating an image of a virtual viewpoint.

The present invention relates to a multi-view image generation method and apparatus based on occlusion area information.

According to an embodiment of the present invention, a method of generating a multi-view image may include calculating a disparity between the left and right images, determining a weight based on the disparity information, and generating a virtual view image by applying the weight.

According to the present invention, image quality near object boundaries and depth discontinuities can be improved by separately extracting occlusion area information from the stereo input image and projecting the occlusion areas onto the intermediate-view image based on the extracted information.

According to the present invention, when blending the projections of the left and right images, the color value of the virtual view image is calculated using weights that reflect the similarity between corresponding left and right points, the presence or absence of occlusion, and the distance to the virtual viewpoint, so that boundary noise can be reduced.

FIG. 1 is a block diagram schematically illustrating an example of a virtual viewpoint generation method according to the present invention.
FIG. 2 is a conceptual diagram schematically illustrating a method of extracting occlusion area information according to the present invention.
FIG. 3 is a view schematically illustrating an example of generating an intermediate image using occlusion area information according to the present invention.

In describing the embodiments of the present invention, if a detailed description of a related known structure or function is deemed to obscure the subject matter of the present specification, that description may be omitted.

The present invention relates to an image processing technique, and more particularly, to a method and apparatus for estimating the disparity in a stereo image or a multi-view image and generating a multi-view image based on that disparity. The present invention can improve image quality in holes and occlusion areas.

Multiview images can be acquired or generated in various ways.

As one method of acquiring or generating a multi-view image, the multi-view image can be captured directly, using one camera per viewpoint.

As another method of acquiring or generating a multi-view image, there is a method of acquiring a color image and a depth image using a color camera and a depth camera, respectively, and then generating a multi-view image using the 2D image and the depth information. The depth image represents the distance from the camera as grayscale intensity.

As another method of acquiring or generating a multi-view image, there is a method of extracting disparity information from a stereo image acquired with two color cameras and generating a multi-view image.

In this case, the disparity information may be information on the positional difference of the same object between the left and right images.

A device for displaying a multi-view image, e.g., a 3D image, generated as described above is a 3D display. Many of the displays currently in circulation are binocular displays, and thus much of the available content consists of stereo images for binocular displays.

In the case of a binocular display, glasses can be used to separate the image entering the viewer's left eye from the image entering the viewer's right eye.

In order to deliver different images to the two eyes, the stereo image is composed of a left image and a right image, and a 3D image can be implemented using the parallax between the two images.

Considering that stereoscopic 3D displays are already prevalent, it is necessary that stereoscopic content remain usable even if other types of displays are introduced later.

For example, even if a non-stereoscopic multi-view 3D display is introduced in the future, a method is needed for viewing on it stereoscopic content that has already been produced. For this purpose, the Depth Image Based Rendering (DIBR) technique can be widely used.

However, extracting a disparity map or a depth map from a stereoscopic image using stereo matching is time-consuming and also suffers from poor accuracy.

Furthermore, in this case, boundary noise and/or holes may exist in the generated virtual view image due to occlusion regions and inaccurate disparity information.

Therefore, a high performance hole filling process may be required.

Automatic depth map estimation still lacks accuracy and reliability. Therefore, it is necessary to consider a method of generating high-quality multi-view images while reducing holes and noise at depth boundaries.

Hereinafter, a method for generating a multi-view image based on disparity information from a stereo image according to the present invention will be described with reference to the drawings.

According to the present invention, the image quality at an intermediate viewpoint can be improved by considering the occlusion area information and the reliability of the disparity.

FIG. 1 is a block diagram schematically illustrating an example of a virtual viewpoint generation method according to the present invention.

In the example of FIG. 1, a virtual viewpoint image is generated based on the stereo image and the occlusion area information according to the present invention.

A stereo image is a pair of images captured at left and right viewpoints of the same object, and there is a disparity between the two images.

Here, an occlusion area is a region that is hidden by other objects in either the left image or the right image.

Part of the occlusion area of the left image can be seen in the right image due to the parallax, and part of the occlusion area of the right image can be seen in the left image due to the parallax.

Referring to FIG. 1, the virtual viewpoint image generating apparatus 100 includes a left-right disparity calculation unit 110, a blending weight calculation unit 120, a virtual viewpoint image generation unit 130, and a post-processing unit 140.

When the stereo image and the left and right occlusion area information are input, the left-right disparity calculation unit 110 calculates the disparity between the left and right images.

Conventionally, left and right occlusion area information is extracted based on FDU (Free-viewpoint TV Data Unit) or based on LDV (Layered Depth Video).

Both methods use the images and disparities of three viewpoints to extract the occlusion area between the center image and the left image, and the occlusion area between the center image and the right image.

In contrast, in the present invention, occlusion area information is extracted through mutual cross-projection between the left and right images. The extraction of occlusion area information according to the present invention will be described later.

When calculating the disparity, the disparity calculation unit 110 excludes pixels detected as occluded in the left and right images from the disparity calculation. An occluded pixel may be present in one image (for example, the left original image) but absent from the other (for example, the right original image), so no correct match exists for it.

By excluding the occlusion areas from the disparity calculation, the complexity of the calculation is reduced, the process can be performed more quickly, and the accuracy of the disparity calculation is improved.
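The exclusion of occluded pixels from the disparity calculation can be sketched as follows. This is a minimal illustration that assumes a simple SAD block-matching scheme on grayscale images; the patent does not prescribe a particular matching algorithm, and the function name and parameters are hypothetical.

```python
import numpy as np

def masked_block_matching(left, right, occl_left, max_disp=8, win=3):
    """Estimate per-pixel left-to-right disparity by SAD block matching,
    skipping pixels flagged as occluded (no valid match exists for them)."""
    h, w = left.shape
    pad = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    L = np.pad(left, pad, mode="edge")
    R = np.pad(right, pad, mode="edge")
    for y in range(h):
        for x in range(w):
            if occl_left[y, x]:          # occluded pixel: skip entirely
                continue
            patch = L[y:y + win, x:x + win]
            best_sad, best_d = None, 0
            for d in range(min(max_disp, x) + 1):
                cand = R[y:y + win, x - d:x - d + win]
                sad = np.abs(patch.astype(np.int64) - cand).sum()
                if best_sad is None or sad < best_sad:
                    best_sad, best_d = sad, d
            disp[y, x] = best_d
    return disp
```

Skipping occluded pixels both avoids meaningless matches and removes their cost from the search loop, which is the complexity and accuracy benefit described above.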

The blending weight calculation unit 120 calculates blending weights based on the left-right disparity. It may calculate a blending weight for each pixel of the left and right images by reflecting the distance between the projected position and the virtual viewpoint, the reliability of the disparity, and whether the pixel belongs to an occlusion area.

Equation 1 shows a method for calculating a blending weight according to the present invention.

<Equation 1>

I_virtual = W_left · I_left + W_right · I_right

In Equation 1, I_virtual is the color value of each pixel in the final image generated at the virtual viewpoint, and I_left and I_right are the color values projected from the left and right images. The virtual viewpoint image generating apparatus computes I_virtual for every pixel of the virtual viewpoint image.

W_left is the weight for a color pixel projected from the left image, and W_right is the weight for a color pixel projected from the right image.

W_left and W_right can be derived as in Equation 2.

<Equation 2>

W_left = w_2L · (w_1L + w_3L) / 2

W_right = w_2R · (w_1R + w_3R) / 2

In Equation 2, w_1L and w_1R represent weights based on disparity reliability, that is, the color similarity between corresponding points. They have values between 0 and 1 and can be calculated using a similarity function such as Normalized Cross Correlation (NCC) between the corresponding points of the left and right images obtained from the disparity.

In addition, w_2L and w_2R represent weights according to the presence or absence of occlusion. For example, w_2L (and likewise w_2R) has a value of 0 for pixels belonging to an occlusion area and 1 for pixels not belonging to one. Therefore, occluded pixels contribute no color information when the virtual viewpoint image is generated.

w_3L and w_3R are weights determined by the distance between the position of the projected left or right image and the position of the virtual view image, and have values between 0 and 1; the closer the two positions are, the larger the weight.
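Equations 1 and 2 can be sketched per pixel as follows. The NCC-based reliability weight w1, the binary occlusion weight w2, and the distance weight w3 follow the description above; the `alpha` parameterization of the virtual viewpoint position (0 at the left view, 1 at the right view) and the final normalization of the blended color are assumptions added for illustration.

```python
import numpy as np

def ncc_weight(a, b):
    """Reliability weight w1: normalized cross-correlation between two
    corresponding patches, mapped from [-1, 1] to [0, 1]."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    if denom == 0:
        return 1.0
    return (float((a * b).sum() / denom) + 1.0) / 2.0

def blend_pixel(c_left, c_right, w1L, w1R, occ_left, occ_right, alpha):
    """Blend the projected left/right colors of one pixel per Equations 1-2."""
    w2L = 0.0 if occ_left else 1.0    # occluded pixels contribute nothing
    w2R = 0.0 if occ_right else 1.0
    w3L = 1.0 - alpha                 # closer viewpoint gets a larger weight
    w3R = alpha
    W_left = w2L * (w1L + w3L) / 2.0  # Equation 2
    W_right = w2R * (w1R + w3R) / 2.0
    total = W_left + W_right
    if total == 0.0:
        return None                   # hole: left for the post-processing unit
    # Equation 1, normalized so the effective weights sum to 1 (assumed)
    return (W_left * c_left + W_right * c_right) / total
```

When both projections are occluded the pixel is reported as a hole, which is then handled by the hole filling or inpainting described below for the post-processing unit.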

Then, the virtual viewpoint image generation unit 130 generates an intermediate viewpoint image. It blends the image projected from the left image and the image projected from the right image, based on the weights derived through Equations 1 and 2, to generate a virtual intermediate-view image.

The virtual viewpoint image generation unit 130 may repeat this for each of the viewpoints required for the multi-viewpoint image, generating one virtual intermediate viewpoint image per viewpoint.

The post-processing unit 140 performs post-processing necessary for the generated virtual viewpoint images to generate a multi-viewpoint image.

The post-processing unit 140 may selectively perform hole filling, which fills hole areas existing in the virtual viewpoint images, or inpainting, which fills in the color of unprojected areas using surrounding color information.
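As a stand-in for this step, a minimal row-wise hole fill could look like the following; production systems use more sophisticated, typically background-aware, inpainting, so this nearest-valid-neighbor scheme is only an assumed illustration.

```python
import numpy as np

def fill_holes_rowwise(img, hole_mask):
    """Fill each hole pixel with the nearest non-hole color on its row."""
    out = img.astype(np.float64).copy()
    h, _ = img.shape
    for y in range(h):
        valid = np.flatnonzero(~hole_mask[y])
        if valid.size == 0:
            continue                  # whole row is a hole: nothing to copy
        for x in np.flatnonzero(hole_mask[y]):
            nearest = valid[np.argmin(np.abs(valid - x))]
            out[y, x] = img[y, nearest]
    return out
```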

FIG. 2 is a conceptual diagram schematically illustrating a method of extracting occlusion area information according to the present invention.

As described above, in the present invention, occlusion area information is extracted by mutual cross-projection between the two images of the stereo pair, that is, the left image and the right image.

Specifically, the occlusion area information is extracted using the difference between the original image and the virtual viewpoint image generated by cross-projecting the left and right images.

For example, according to the present invention, a virtual right image at the position of the original right image can be generated using the disparity of the original left image, and a virtual left image at the position of the original left image can be generated using the disparity of the original right image. Then, the left and right occlusion area information can be obtained by calculating the difference between each original image and the corresponding virtual image.

Referring to FIG. 2, a right-viewpoint virtual image 220 can be generated using the disparity of the left-viewpoint original image 210. Likewise, a left-viewpoint virtual image 240 can be generated using the disparity of the right-viewpoint original image 230. Here, the disparity is the positional difference of corresponding points between the left-viewpoint and right-viewpoint original images.

Subsequently, the left image synthesis error 250 may be derived from the difference between the left-viewpoint original image 210 and the left-viewpoint virtual image 240, and the right image synthesis error 260 may be derived from the difference between the right-viewpoint original image 230 and the right-viewpoint virtual image 220.

The synthesis errors can be reflected in the weights of Equation 2, e.g., w_1L and w_1R, w_2L and w_2R, and w_3L and w_3R.

Then, the left and right occlusion area information 270 can be extracted based on the left image synthesis error 250 and the right image synthesis error 260.
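The cross-projection and synthesis-error comparison of FIG. 2 can be sketched as follows, using a purely horizontal integer forward warp and a fixed error threshold; both simplifications, and the threshold value, are assumptions rather than the patent's exact procedure.

```python
import numpy as np

def warp_with_disparity(src, disp):
    """Forward-warp an image horizontally to the other viewpoint using a
    per-pixel integer disparity; positions that receive no projection are
    reported as unfilled."""
    h, w = src.shape
    out = np.zeros_like(src)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            tx = x - disp[y, x]       # target column in the other view
            if 0 <= tx < w:
                out[y, tx] = src[y, x]
                filled[y, tx] = True
    return out, filled

def occlusion_from_synthesis_error(orig, virtual, filled, thresh=10):
    """Mark pixels where the virtual image disagrees with the original
    (synthesis error above a threshold) or was never filled at all."""
    err = np.abs(orig.astype(np.int64) - virtual.astype(np.int64))
    return (~filled) | (err > thresh)
```

Running the left image through `warp_with_disparity` yields the right-viewpoint virtual image; comparing it with the right original gives the right synthesis error and hence the occlusion mask, and the left mask is obtained symmetrically.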

The positions of the left-viewpoint and right-viewpoint occlusion areas may be stored as binary images, or as images in which the occluded positions are filled with the colors of the corresponding unoccluded parts of the other viewpoint's image.

For example, the color at an occluded position in the left viewpoint may be filled with the corresponding unoccluded color from the right-viewpoint image, and the color at an occluded position in the right viewpoint may be filled with the corresponding unoccluded color from the left-viewpoint image.

Note that FIG. 2 is one embodiment presented to facilitate understanding of the present invention, and that the extraction of occlusion area information can be performed in various other ways within the scope of the technical idea of the present invention.

FIG. 3 is a view schematically illustrating an example of generating an intermediate image using occlusion area information according to the present invention.

Referring to FIG. 3, for a virtual viewpoint between the left image 310 and the right image 320, it may first be determined whether the viewpoint belongs to the region 330 on the left-image side or to the region 340 on the right-image side, and the intermediate (virtual viewpoint) image can then be generated accordingly.

In the region 330 on the left-image side, the intermediate image of the virtual viewpoint can be generated using the left image, the left disparity, and the right occlusion area information.

In the region 340 on the right-image side, the intermediate image of the virtual viewpoint can be generated using the right image, the right disparity, and the left occlusion area information.
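The viewpoint-dependent choice of inputs described for FIG. 3 can be summarized as follows; the midpoint split and the `alpha` parameterization (0 at the left image 310, 1 at the right image 320) are assumptions for illustration, and the returned names are hypothetical labels rather than API identifiers.

```python
def sources_for_virtual_view(alpha):
    """Pick the inputs used to synthesize an intermediate view at
    normalized position alpha between the left (0) and right (1) views."""
    if alpha < 0.5:   # region 330: left-image side
        return ("left_image", "left_disparity", "right_occlusion_info")
    # region 340: right-image side
    return ("right_image", "right_disparity", "left_occlusion_info")
```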

In the exemplary system described above, the methods are described as a series of steps or blocks on the basis of a flowchart, but the present invention is not limited to the order of the steps; some steps may occur in a different order or simultaneously. In addition, the above-described embodiments include examples of various aspects; for example, combinations of the embodiments are also to be understood as embodiments of the present invention.

Claims (1)

Calculating a disparity between left and right images;
determining a weight based on the disparity information; and
generating a virtual viewpoint image by applying the weight.
KR1020130027970A 2013-03-15 2013-03-15 Multi-view points image generating method and appararus based on occulsion area information KR20140113066A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020130027970A KR20140113066A (en) 2013-03-15 2013-03-15 Multi-view points image generating method and appararus based on occulsion area information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020130027970A KR20140113066A (en) 2013-03-15 2013-03-15 Multi-view points image generating method and appararus based on occulsion area information

Publications (1)

Publication Number Publication Date
KR20140113066A true KR20140113066A (en) 2014-09-24

Family

ID=51757728

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020130027970A KR20140113066A (en) 2013-03-15 2013-03-15 Multi-view points image generating method and appararus based on occulsion area information

Country Status (1)

Country Link
KR (1) KR20140113066A (en)


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108076384A (en) * 2018-01-02 2018-05-25 京东方科技集团股份有限公司 A kind of image processing method based on virtual reality, device, equipment and medium
WO2019134368A1 (en) * 2018-01-02 2019-07-11 Boe Technology Group Co., Ltd. Image processing method of virtual reality and apparatus thereof
CN108076384B (en) * 2018-01-02 2019-12-06 京东方科技集团股份有限公司 image processing method, device, equipment and medium based on virtual reality
US11373337B2 (en) 2018-01-02 2022-06-28 Beijing Boe Optoelectronics Technology Co., Ltd. Image processing method of virtual reality and apparatus thereof
GB2569979A (en) * 2018-01-05 2019-07-10 Sony Interactive Entertainment Inc Image generating device and method of generating an image
US10848733B2 (en) 2018-01-05 2020-11-24 Sony Interactive Entertainment Inc. Image generating device and method of generating an image
GB2569979B (en) * 2018-01-05 2021-05-19 Sony Interactive Entertainment Inc Rendering a mixed reality scene using a combination of multiple reference viewing points
CN113676774A (en) * 2021-08-20 2021-11-19 京东方科技集团股份有限公司 Image processing method, image processing apparatus, display apparatus, and storage medium
CN113676774B (en) * 2021-08-20 2024-04-09 京东方科技集团股份有限公司 Image processing method, image processing apparatus, display apparatus, and storage medium


Legal Events

Date Code Title Description
WITN Withdrawal due to no request for examination