CN106791774A - Virtual visual point image generating method based on depth map - Google Patents
- Publication number
- CN106791774A (application CN201710034878.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- virtual viewpoint
- images
- depth
- virtual
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/128—Adjusting depth or disparity
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/122—Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
Abstract
The present invention proposes a virtual viewpoint image generation method based on depth maps. First, the reference images are smoothed with a half-pixel-precision interpolation method and the camera intrinsic parameters are adjusted accordingly; the images of two reference viewpoints and their corresponding depth images are processed with a bidirectional asynchronous mapping mechanism, rendering a virtual viewpoint image and a virtual depth image for each. Next, the hole boundary regions are expanded on the basis of the depth information, removing the false foreground contours remaining on the background. Brightness correction is applied to the rendered images according to a simplified model of the inter-image luminance difference, eliminating brightness discontinuities. The rendered images are then fused with a weighted synthesis algorithm, eliminating most occlusion holes. Finally, a window function is used to search the hole edges, and the hole points are interpolated and filled from background information, generating a sharp virtual viewpoint image.
Description
Technical Field
The invention belongs to the technical field of image processing, relates to a virtual viewpoint image generation method, and particularly relates to a virtual viewpoint image generation method based on a depth map.
Background
With the rapid development of computer vision technology, users demand ever higher quality from visual media: beyond high-quality video sources, they increasingly expect three-dimensional presentation and interactive experiences. Through interactive operation, a user can change the position and orientation of the camera at will, move it along a self-defined trajectory, and watch the video from a viewpoint where no physical camera exists, breaking the fixed viewing-angle limitation of traditional video. Virtual viewpoint generation techniques emerged to achieve this goal. Virtual viewpoint generation synthesizes the image of an intermediate, virtual viewpoint from the images of two adjacent viewpoints captured by real cameras; the generated virtual viewpoint image allows the user to perceive the real objective world from more angles.
There are two main families of virtual viewpoint rendering techniques: model-based rendering and image-based rendering. Model-based rendering reproduces detailed information well, but its modeling process is complex and its practicality poor. Image-based rendering generates the virtual viewpoint image directly from reference viewpoint images; it requires less transmission bandwidth and achieves better rendering quality. Depth-image-based virtual viewpoint generation is the most common image-based rendering technique. It comprises four steps: depth map preprocessing, image warping, image fusion, and hole filling. Depth map preprocessing smooths the depth images of the pictures captured by adjacent cameras to reduce the number of holes and cracks in the synthesized image; image warping uses a three-dimensional coordinate transformation to obtain the intermediate virtual viewpoint image from the two camera images; image fusion merges the two virtual viewpoint images obtained from the two cameras into one; and hole filling fills the remaining hole points in the fused image to generate a high-quality virtual viewpoint image. However, images rendered with depth-image-based virtual viewpoint generation still suffer from technical difficulties such as overlaps, holes, cracks, and artifacts.
Disclosure of Invention
The invention aims to provide a depth-map-based virtual viewpoint image generation method that ensures image mapping precision through interpolation smoothing and camera intrinsic parameter adjustment; removes false contours and artifacts with a hole-region expansion method; eliminates brightness discontinuities using a simplified model of the inter-image luminance difference; eliminates most occlusion holes with a weighted synthesis algorithm; and searches the hole edges with a window function, interpolating and filling the hole points from background information, to finally generate a sharp virtual viewpoint image.
To achieve this technical purpose, the technical solution of the invention is as follows.
a virtual viewpoint image generation method based on a depth map comprises the following steps:
S1, selecting any two viewpoint images captured by the cameras as reference images, extracting their depth maps, smoothing the reference images with a half-pixel-precision interpolation method, and adjusting the camera intrinsic parameters so that the interpolated images still satisfy the image mapping equation; processing the two reference viewpoint images and their corresponding depth images with a bidirectional asynchronous mapping mechanism, and rendering a virtual viewpoint image and a virtual depth image for each;
S2, expanding the hole boundary regions based on the depth information and removing the false foreground contours remaining on the background; and performing brightness correction on the virtual viewpoint images according to a simplified model of the inter-image luminance difference, to eliminate brightness discontinuities;
S3, classifying the pixels of the two brightness-corrected virtual viewpoint images according to how they were obtained, and fusing the brightness-corrected images with a weighted synthesis algorithm to eliminate most occlusion holes;
S4, searching the hole edges with a window function, and interpolating and filling the hole points from background information, to generate a sharp virtual viewpoint image.
In the present invention, S1 includes the steps of:
S11: select any two viewpoint images captured by the cameras as reference image 1 and reference image 2; obtain the depth information of the images from the inter-view parallax using a sequence-image-matching method, and extract the depth maps of reference image 1 and reference image 2 as reference depth map 1 and reference depth map 2 respectively. (There are various methods for acquiring image depth information, such as sequence image matching, structured-light ranging, and triangulation.)
S12: to improve rendering accuracy, before rendering the virtual viewpoint image, smooth the reference images with a half-pixel-precision interpolation method: the value at each half-pixel interpolation point is the average of its adjacent pixel points.
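As an illustration, the half-pixel smoothing step can be sketched as follows. This is a minimal numpy sketch under our own assumptions (grayscale image, original samples on the even coordinates of a (2H-1) × (2W-1) grid); the function name and layout are not the patent's.

```python
import numpy as np

def half_pixel_interpolate(img):
    """Upsample a grayscale image to half-pixel precision.

    Original pixels land on the even coordinates of a (2H-1, 2W-1) grid;
    each new half-pixel point is the average of its adjacent original
    pixels (horizontal, vertical, or diagonal neighbours).
    """
    h, w = img.shape
    out = np.zeros((2 * h - 1, 2 * w - 1), dtype=np.float64)
    out[::2, ::2] = img                                   # original samples
    out[::2, 1::2] = (img[:, :-1] + img[:, 1:]) / 2.0     # horizontal midpoints
    out[1::2, ::2] = (img[:-1, :] + img[1:, :]) / 2.0     # vertical midpoints
    out[1::2, 1::2] = (img[:-1, :-1] + img[:-1, 1:] +
                       img[1:, :-1] + img[1:, 1:]) / 4.0  # diagonal midpoints
    return out
```

The output size (2W-1) × (2H-1) is what motivates the intrinsic-parameter multipliers k1 = (2W-1)/W and k2 = (2H-1)/H introduced below.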
Let W and H be the width and height of the reference image, f the camera focal length, (μ0, ν0) the pixel coordinates of the principal point in the reference image, and s the distortion (skew) parameter. Use k1 and k2 as multipliers to adjust the original camera intrinsic matrix, where k1 = (2W-1)/W and k2 = (2H-1)/H. The original camera intrinsic parameters are adjusted to the new intrinsic parameters according to equation (1), so that the interpolated image still satisfies the image mapping equation.
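The formula image for equation (1) is not reproduced in this text. From the definitions above (k1 and k2 rescale the W × H image onto the (2W-1) × (2H-1) half-pixel grid), the adjusted intrinsic matrix plausibly takes the following form; this is a hedged reconstruction, not the patent's verbatim formula:

```latex
% Original intrinsic matrix N and a plausible adjusted matrix N' for the
% half-pixel-interpolated image (reconstruction of equation (1)):
N = \begin{pmatrix} f & s & \mu_0 \\ 0 & f & \nu_0 \\ 0 & 0 & 1 \end{pmatrix}
\qquad
N' = \begin{pmatrix} k_1 f & s & k_1 \mu_0 \\ 0 & k_2 f & k_2 \nu_0 \\ 0 & 0 & 1 \end{pmatrix},
\quad k_1 = \frac{2W-1}{W},\; k_2 = \frac{2H-1}{H} \tag{1}
```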
S13 processes the images of the two reference viewpoints and the corresponding depth images by using a bidirectional asynchronous mapping mechanism, and respectively draws a virtual viewpoint image and a virtual depth image, which specifically includes:
and (3) establishing a 3D image mapping equation, and respectively obtaining a virtual viewpoint image and a virtual depth image by using the formula (2) according to the two reference images and the corresponding depth images after half-pixel precision interpolation processing.
First, a pixel point m1(μ1, ν1) on the half-pixel-interpolated reference image is projected to its corresponding point M in three-dimensional space; M is then projected onto the virtual viewpoint imaging plane to obtain the corresponding point m2(μ2, ν2) on the virtual viewpoint image. Combining the two projections yields the 3D image mapping equation shown in equation (2):
where N1, R1, T1 are the camera parameters of the reference viewpoint, N2, R2, T2 are the camera parameters of the virtual viewpoint, A1 is the depth value of the three-dimensional space point in the reference viewpoint camera coordinate system, and A2 is its depth value in the virtual viewpoint camera coordinate system.
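The equation (2) image is likewise not reproduced here. Chaining the two projections described above (back-project m1 to the space point M with the reference camera, then project M with the virtual camera) gives the standard 3D warping equation of depth-image-based rendering, which the patent's equation (2) presumably matches in form; this is a reconstruction under that assumption:

```latex
% Back-projection of m_1 to the 3D point M, then projection into the
% virtual view (standard DIBR warping; reconstruction of equation (2)):
M = R_1^{-1}\left(A_1\, N_1^{-1}\, \tilde m_1 - T_1\right),
\qquad
A_2\, \tilde m_2 = N_2\left(R_2\, M + T_2\right) \tag{2}
```

where the tilde denotes homogeneous pixel coordinates, i.e. \(\tilde m_1 = (\mu_1, \nu_1, 1)^T\) and \(\tilde m_2 = (\mu_2, \nu_2, 1)^T\).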
Further, the virtual viewpoint image and virtual depth image rendered from each half-pixel-interpolated reference image are refined by reverse image mapping to reduce the number of holes, as follows: initialize two flag matrices flag1 and flag2 of the same size as the interpolated reference images, with all values set to 0; map the hole points of the two rendered virtual viewpoint images (denoted rendered image 1 and rendered image 2) back to the corresponding reference images to obtain their pixel values, and simultaneously set the corresponding positions in the flag matrices to 1. After this reverse mapping, the number of holes in the virtual viewpoint images is markedly reduced.
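The reverse-mapping bookkeeping can be sketched as follows. This is a simplified illustration: in the method the backward map comes from the 3D mapping equation, but here a hypothetical per-pixel horizontal disparity stands in for it, and hole points are marked by pixel value 0.

```python
import numpy as np

def backward_fill(rendered, reference, disparity):
    """Fill hole points (value 0) in a rendered view by mapping them back
    to the reference image; flag == 1 marks reverse-mapped pixels.

    `disparity` is a per-pixel horizontal offset into the reference image,
    a stand-in for the full 3D mapping equation (illustration only).
    """
    h, w = rendered.shape
    out = rendered.copy()
    flag = np.zeros((h, w), dtype=np.uint8)   # 0 = forward-mapped
    for y in range(h):
        for x in range(w):
            if out[y, x] == 0:                  # hole point
                xr = x + int(disparity[y, x])   # position in the reference
                if 0 <= xr < w:
                    out[y, x] = reference[y, xr]  # backward-mapped value
                    flag[y, x] = 1                # mark as reverse-mapped
    return out, flag
```

The flag matrices produced here are exactly what the later fusion step (S3) consults to decide whether a pixel was obtained by forward or reverse mapping.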
Thus, the first stage is completed, and the reference image 1, the reference depth map 1, the reference image 2 and the reference depth map 2 of the two reference viewpoints are used to respectively obtain the corresponding virtual viewpoint image 1 and the corresponding virtual viewpoint image 2.
S2 of the present invention includes the steps of:
S21: expand the hole boundary regions based on the depth information and remove the false foreground contours remaining on the background, as follows: mark the hole boundary regions in the virtual view with a flag boundary, initialized to 1. Compute the absolute difference between the depth values on the left and right sides of each hole boundary region; if this difference exceeds a set threshold, set boundary = 0 at the point with the smaller depth value, otherwise leave boundary unchanged. Then apply a 5 × 5 dilation to the hole boundary regions where boundary = 1, which eliminates the false foreground contours remaining on the background.
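The two pieces of this step can be sketched as follows. The threshold value and helper names are our assumptions; the decision rule (clear the boundary flag on the side with the smaller depth value when the depth jump is large) follows the text.

```python
import numpy as np

def false_contour_side(depth_left, depth_right, thresh=10.0):
    """Return 'left', 'right', or None: the side of a hole whose boundary
    flag should be cleared (boundary = 0).

    Per the method's rule, when the absolute depth difference across the
    hole exceeds `thresh`, the side with the smaller depth value is treated
    as residual foreground (false contour); the threshold value itself is
    an assumption, the patent does not state it.
    """
    if abs(depth_left - depth_right) <= thresh:
        return None
    return 'left' if depth_left < depth_right else 'right'

def dilate_5x5(mask):
    """5x5 binary dilation of a boundary mask (pure-numpy sketch)."""
    h, w = mask.shape
    padded = np.pad(mask.astype(bool), 2)
    out = np.zeros((h, w), dtype=bool)
    for dy in range(5):
        for dx in range(5):
            out |= padded[dy:dy + h, dx:dx + w]
    return out
```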
S22: perform brightness correction on the rendered images based on the simplified model of the inter-image luminance difference, to eliminate brightness discontinuities.
Denote virtual viewpoint image 1 and virtual viewpoint image 2 by Il and Ir, and the images after the false-contour-removal processing of S21 by Il1 and Ir1; let n be the number of non-hole pixel values. From the non-hole pixel values Il1(x, y) and Ir1(x, y), the multiplicative error factor A and the additive error factor B are computed by the following expressions:
The brightness of images Il1 and Ir1 is corrected according to the parameters A and B; denoting the corrected images by Il1′ and Ir1′, the image brightness correction expression is:
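The expression images for A, B and for the correction itself are not reproduced in this text. The following sketch therefore assumes a standard least-squares fit of the stated linear model, Ir1 ≈ A · Il1 + B, over the n paired non-hole pixel values; the function name is ours.

```python
import numpy as np

def fit_brightness_factors(left_px, right_px):
    """Least-squares fit of the linear inter-image brightness model
    right = A * left + B over paired non-hole pixel values.

    Assumption: the patent's exact expressions for A and B are not shown,
    so this standard closed-form least-squares estimate stands in.
    """
    x = np.asarray(left_px, dtype=np.float64)
    y = np.asarray(right_px, dtype=np.float64)
    n = x.size
    A = (n * (x * y).sum() - x.sum() * y.sum()) \
        / (n * (x * x).sum() - x.sum() ** 2)     # multiplicative factor
    B = (y.sum() - A * x.sum()) / n              # additive factor
    return A, B
```

With A and B in hand, one plausible correction maps the left image onto the right image's brightness scale (Il1′ = A · Il1 + B, Ir1′ = Ir1); the patent's own correction expression is likewise not reproduced here.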
the specific method of S3 of the invention is as follows:
Because two reference viewpoint images are used, a region invisible in one viewpoint image may be visible in the other; this occlusion problem produces holes, so a classification judgment is performed first. When Il1′(x, y) and Ir1′(x, y) are both 0, the corresponding point Iv(x, y) of the virtual viewpoint image is set to 0. When exactly one of Il1′(x, y) and Ir1′(x, y) is 0, the non-zero pixel value is assigned to Iv(x, y). When neither is 0, the projection mode by which each pixel was obtained is judged from the values previously recorded in the flag matrices flag1 and flag2, with the threshold set to τ = 5. If flag1(x, y) and flag2(x, y) are both 0 or both 1 (i.e. the point was obtained in both virtual images by forward mapping, or in both by reverse mapping), Iv(x, y) is obtained by the weighted fusion of equation (6). If flag1(x, y) and flag2(x, y) differ (i.e. the same point was obtained by forward mapping in one image and reverse mapping in the other), the absolute difference |Il1′(x, y) − Ir1′(x, y)| is compared with the threshold: when |Il1′(x, y) − Ir1′(x, y)| ≤ τ, Iv(x, y) is again obtained by the weighted fusion of equation (6); when |Il1′(x, y) − Ir1′(x, y)| > τ, the forward-mapped value is assigned to Iv(x, y). Here α is the weighting factor and t denotes the translation of the camera capture viewpoint position.
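The classification rules above can be sketched pixel-wise as follows. The weighting factor α is taken as a constant here, since its exact dependence on the camera translation t (equation (6)) is not reproduced in the text; flag == 0 means forward-mapped, flag == 1 reverse-mapped.

```python
import numpy as np

def fuse(Il, Ir, flag1, flag2, alpha=0.5, tau=5):
    """Pixel-wise fusion of two luminance-corrected virtual views.

    Hole points carry value 0. The value rules follow the classification
    in the text; the constant weight `alpha` is an assumption.
    """
    h, w = Il.shape
    out = np.zeros((h, w), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            a, b = Il[y, x], Ir[y, x]
            if a == 0 and b == 0:
                out[y, x] = 0                            # hole in both views
            elif a == 0 or b == 0:
                out[y, x] = a if a != 0 else b           # visible in one view only
            elif flag1[y, x] == flag2[y, x]:
                out[y, x] = alpha * a + (1 - alpha) * b  # same mapping mode
            elif abs(a - b) <= tau:
                out[y, x] = alpha * a + (1 - alpha) * b  # modes differ, values agree
            else:
                # modes differ and values disagree: keep the forward-mapped one
                out[y, x] = a if flag1[y, x] == 0 else b
    return out
```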
The specific method of S4 is as follows: fill the holes in the virtual viewpoint image fused in step S3. Scan the image pixel values to find the hole positions and number the holes, then scan the numbered hole pixels one by one with a 3 × 3 window function. If some of the points around a hole point are hole points and some are not (each point inside the window is examined in turn: a point with pixel value 0 is a hole point, otherwise it is a non-hole point), the point lies on a hole edge. The hole points are then interpolated and filled using the background information of the depth map, which improves the filling quality and generates a sharp virtual viewpoint image.
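The hole-filling scan can be sketched as follows. Using the depth map to prefer background (deeper) neighbours is the strategy described in the text; the exact interpolation rule (here, the mean of the deeper half of the non-hole neighbours in the 3 × 3 window) is an assumption.

```python
import numpy as np

def fill_holes(img, depth):
    """Fill hole pixels (value 0) from their 3x3 background neighbours.

    A hole pixel with at least one non-hole neighbour lies on a hole edge;
    it is filled with the mean of its deeper ("background") non-hole
    neighbours, and the scan repeats until no fillable pixels remain.
    """
    out = img.astype(np.float64).copy()
    h, w = out.shape
    changed = True
    while changed:
        changed = False
        for y, x in np.argwhere(out == 0):
            if out[y, x] != 0:          # already filled earlier in this pass
                continue
            vals, deps = [], []
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if (dy or dx) and 0 <= ny < h and 0 <= nx < w \
                            and out[ny, nx] != 0:
                        vals.append(out[ny, nx])
                        deps.append(depth[ny, nx])
            if vals:                    # hole-edge pixel: fill from background
                vals = np.asarray(vals)
                deps = np.asarray(deps)
                bg = deps >= np.median(deps)   # deeper half = background
                out[y, x] = vals[bg].mean()
                changed = True
    return out
```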
The invention has the beneficial effects that:
the invention provides a virtual viewpoint image generation method based on a depth map, which ensures image mapping precision through interpolation smoothing and camera internal parameter adjustment, removes false contours and artifacts by using a hole region expansion method, simplifies a model according to brightness difference between images to eliminate the problem of discontinuous brightness, adopts a weighted synthesis algorithm to eliminate most of shielding holes, searches hole edges by using a window function, interpolates and fills hole points by combining background information, and finally generates a virtual viewpoint image with clear pictures.
Drawings
FIG. 1 is a block flow diagram of the present invention.
Wherein,
1, acquiring a reference viewpoint image and a reference depth image;
2, acquiring a virtual viewpoint image and a virtual depth image corresponding to each reference viewpoint for bidirectional mapping;
3, removing false contours;
4, correcting the brightness of the image;
5, image fusion;
and 6, filling the holes.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
As shown in fig. 1, the present invention provides a depth-map-based virtual viewpoint image generation method. First, the images of two reference viewpoints and their corresponding depth images are processed with a bidirectional asynchronous mapping mechanism, rendering a virtual viewpoint image and a virtual depth image for each. The hole boundary regions are then expanded based on the depth information, removing the false foreground contours remaining on the background. Brightness correction is applied to the rendered images according to a simplified model of the inter-image luminance difference to eliminate brightness discontinuities, and the rendered images are fused with a weighted synthesis algorithm to eliminate most occlusion holes. Finally, a window function is used to search the hole edges, and the hole points are interpolated and filled from background information, generating a sharp virtual viewpoint image.
Any two viewpoint images captured by the cameras are selected as reference image 1 and reference image 2; the depth information of the images is obtained from the inter-view parallax using a sequence-image-matching method, and the depth maps of the reference images are extracted as reference depth map 1 and reference depth map 2. (There are various ways to acquire image depth information, such as sequence image matching, structured-light ranging, and triangulation.) Images of the same scene from different viewing angles are similar and contain a large amount of redundant information, while depth information partially reflects the spatial position relationship of objects; using depth information as reference information in virtual viewpoint generation therefore reduces the redundancy between images of different viewing angles.
To improve rendering precision, before the virtual viewpoint image is rendered, the reference image is smoothed with a half-pixel-precision interpolation method: the value of each half-pixel interpolation point is the average of its adjacent pixel points. Let W and H be the width and height of the image, f the camera focal length, (μ0, ν0) the pixel coordinates of the principal point, and s the distortion (skew) parameter. Use k1 and k2 as multipliers to adjust the original camera intrinsic matrix, where k1 = (2W-1)/W and k2 = (2H-1)/H; the original camera intrinsic parameters are adjusted to the new intrinsic parameters according to equation (1), so that the interpolated image still satisfies the image mapping equation.
The images of the two reference viewpoints and the corresponding depth images are then processed with the bidirectional asynchronous mapping mechanism. A 3D image mapping equation is established, and a virtual viewpoint image and a virtual depth image are obtained from the two half-pixel-interpolated reference images and their corresponding depth images by using equation (2). First, a pixel point m1(μ1, ν1) on the interpolated reference image is projected to its corresponding point M in three-dimensional space; M is then projected onto the virtual viewpoint imaging plane to obtain the corresponding point m2(μ2, ν2) on the virtual viewpoint image. Combining the two projections yields the mapping equation shown in equation (2).
N1, R1, T1 are the camera parameters of the reference viewpoint and N2, R2, T2 those of the virtual viewpoint; A1 is the depth value of the three-dimensional space point in the reference viewpoint camera coordinate system, and A2 its depth value in the virtual viewpoint camera coordinate system.
The virtual viewpoint images and depth maps rendered from the two reference images are refined by reverse image mapping: two flag matrices flag1 and flag2, of the same size as the half-pixel-interpolated reference images, are initialized to 0; the hole points of the two rendered virtual viewpoint images are mapped back to the corresponding reference images to obtain their pixel values, and the corresponding positions in the flag matrices are simultaneously set to 1. The number of holes in the virtual viewpoint images is markedly reduced after this reverse mapping.
Thus, the first stage is completed, and the reference image 1, the reference depth map 1, the reference image 2 and the reference depth map 2 of the two reference viewpoints are used to respectively obtain the corresponding virtual viewpoint image 1 and the corresponding virtual viewpoint image 2. And then, carrying out the second stage of work, which mainly comprises the operations of removing the residual foreground false contour on the background of the virtual viewpoint image, correcting the brightness, weighting and fusing the images, filling the cavity and the like, and finally generating the virtual viewpoint image with clear picture.
Because foreground and background pixels are intermingled at their boundary, when the known viewpoint is warped to the virtual viewpoint some foreground-object pixels remain on the background, producing a false contour in the virtual view. The key to erasing the false contour is identifying the correct erasing area, which can be obtained by comparing the depth values on the two sides of the hole produced by the warp.
Mark the hole boundary regions in the virtual view with a flag boundary, initialized to 1. Compute the absolute difference between the depth values on the left and right sides of each hole boundary region; if the difference exceeds a set threshold, set boundary = 0 at the point with the smaller depth value, otherwise leave boundary unchanged. Then apply a 5 × 5 dilation to the hole boundary regions where boundary = 1, which eliminates most of the foreground pixels remaining on the background.
By simplifying the digital camera model, the brightness difference between the images can be approximated as a linear relationship. Denote virtual viewpoint image 1 and virtual viewpoint image 2 by Il and Ir, and the images after false-contour erasing by Il1 and Ir1; let n be the number of non-hole pixel values. From the non-hole pixel values Il1(x, y) and Ir1(x, y), the multiplicative error factor A and the additive error factor B are computed by the following expressions:
The brightness of images Il1 and Ir1 is corrected according to the parameters A and B; denoting the corrected images by Il1′ and Ir1′, the image brightness correction expression is:
To obtain a virtual viewpoint image with good visual quality, the two brightness-corrected virtual viewpoint images Il1′ and Ir1′ must be fused. During fusion the pixels are classified according to how they were obtained in the two images, as follows. A region invisible in one of the selected reference viewpoint images may be visible in the other, i.e. a hole phenomenon caused by occlusion, so a classification judgment is performed first. When Il1′(x, y) and Ir1′(x, y) are both 0, the corresponding point Iv(x, y) of the virtual image is set to 0. When exactly one of Il1′(x, y) and Ir1′(x, y) is 0, the non-zero pixel value is assigned to Iv(x, y). When neither is 0, the projection mode of the pixels is judged from the previously recorded values of the flag matrices flag1 and flag2, with the threshold set to τ = 5.
If flag1(x, y) and flag2(x, y) are both 0 or both 1 (i.e. the point was obtained in both virtual images by forward mapping, or in both by reverse mapping), Iv(x, y) is obtained by the weighted fusion of equation (6). If flag1(x, y) and flag2(x, y) differ (i.e. the same point was obtained by forward mapping in one image and reverse mapping in the other), the absolute difference |Il1′(x, y) − Ir1′(x, y)| is compared with the threshold: when |Il1′(x, y) − Ir1′(x, y)| ≤ τ, Iv(x, y) is obtained by the weighted fusion of equation (6); when |Il1′(x, y) − Ir1′(x, y)| > τ, the forward-mapped value is assigned to Iv(x, y). Here α is the weighting factor and t denotes the translation of the camera capture viewpoint position.
Holes in the fused virtual viewpoint image are then filled: the image pixel values are scanned to locate the hole positions, the holes are numbered, and the numbered hole pixels are scanned one by one with a 3 × 3 window function. If some of the points around a hole point are hole points and some are not, the point lies on a hole edge; the hole-edge points are interpolated, and the hole points are filled using the background information of the depth map, improving the filling quality.
The foregoing description of preferred embodiments details the features of the invention and is not intended to limit the inventive concept to the particular forms described; other modifications and variations within the spirit of the inventive concept are also protected by this patent. The scope of protection is defined by the claims rather than by the detailed description of the embodiments.
Claims (10)
1. A virtual viewpoint image generation method based on a depth map is characterized by comprising the following steps:
S1, selecting any two viewpoint images captured by the cameras as reference images, extracting their depth maps, smoothing the reference images with a half-pixel-precision interpolation method, and adjusting the camera intrinsic parameters so that the interpolated images still satisfy the image mapping equation; processing the two reference viewpoint images and their corresponding depth images with a bidirectional asynchronous mapping mechanism, and rendering a virtual viewpoint image and a virtual depth image for each;
S2, expanding the hole boundary regions based on the depth information and removing the false foreground contours remaining on the background; and performing brightness correction on the virtual viewpoint images according to a simplified model of the inter-image luminance difference, to eliminate brightness discontinuities;
S3, classifying the pixels of the two brightness-corrected virtual viewpoint images according to how they were obtained, and fusing the brightness-corrected images with a weighted synthesis algorithm to eliminate most occlusion holes;
S4, searching the hole edges with a window function, and interpolating and filling the hole points from background information, to generate a sharp virtual viewpoint image.
2. The depth-map-based virtual viewpoint image generation method of claim 1, wherein in S1 the depth maps of the two reference images, reference image 1 and reference image 2, are extracted by a sequence-image-matching method and recorded as reference depth map 1 and reference depth map 2.
3. The depth-map-based virtual viewpoint image generation method of claim 1 or 2, wherein in S1 smoothing the reference image by half-pixel-precision interpolation means computing each half-pixel interpolation point as the average of its adjacent pixel values.
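As a concrete illustration of the averaging rule in claim 3, the following sketch upsamples a single-channel image to half-pixel precision, turning a W×H image into (2W−1)×(2H−1); the function name and NumPy layout are illustrative, not from the patent:

```python
import numpy as np

def half_pixel_interpolate(img):
    """Upsample to half-pixel precision by averaging adjacent pixels:
    an H x W image becomes (2H-1) x (2W-1), with each new in-between
    sample set to the mean of its neighbouring original pixels."""
    h, w = img.shape
    out = np.zeros((2 * h - 1, 2 * w - 1), dtype=np.float64)
    out[::2, ::2] = img                                   # original samples
    out[::2, 1::2] = (img[:, :-1] + img[:, 1:]) / 2.0     # horizontal midpoints
    out[1::2, ::2] = (img[:-1, :] + img[1:, :]) / 2.0     # vertical midpoints
    out[1::2, 1::2] = (img[:-1, :-1] + img[:-1, 1:] +
                       img[1:, :-1] + img[1:, 1:]) / 4.0  # diagonal midpoints
    return out
```

Each original sample is kept, and the in-between samples are the means of two (horizontal or vertical) or four (diagonal) original neighbours.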
4. The depth-map-based virtual viewpoint image generation method of claim 3, wherein in S1 the camera intrinsic parameters are adjusted so that the interpolated image still satisfies the image mapping equation, as follows:
let W and H be the width and height of the reference image, f the camera focal length, (μ0, ν0) the coordinates of a point of the reference image in the pixel coordinate system, and s a distortion (skew) parameter; the original camera intrinsic matrix is adjusted using the multipliers k1 and k2, where k1 = (2W−1)/W and k2 = (2H−1)/H, according to formula (1), yielding new camera intrinsics under which the interpolated image still satisfies the image mapping equation:
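Formula (1) itself is not reproduced in the text above, so the following sketch only illustrates the scaling the claim describes: multiplying the horizontal intrinsic entries by k1 and the vertical entries by k2, so that pixel coordinates in the (2W−1)×(2H−1) interpolated image still satisfy the mapping equation. The 3×3 pinhole intrinsic matrix layout and the sample values are assumptions:

```python
import numpy as np

def adjust_intrinsics(N, W, H):
    """Scale the intrinsic matrix so the half-pixel-interpolated image
    of size (2W-1) x (2H-1) still satisfies the mapping equation:
    horizontal entries scale by k1, vertical entries by k2."""
    k1 = (2 * W - 1) / W
    k2 = (2 * H - 1) / H
    S = np.diag([k1, k2, 1.0])   # per-axis pixel-coordinate scaling
    return S @ N

# Hypothetical intrinsics: focal length f, skew s, principal point (u0, v0)
f, s, u0, v0 = 800.0, 0.0, 320.0, 240.0
N = np.array([[f,   s,   u0],
              [0.0, f,   v0],
              [0.0, 0.0, 1.0]])
N_new = adjust_intrinsics(N, W=640, H=480)
```

After the adjustment the principal point lands on the corresponding half-pixel grid position (e.g. u0 = 320 becomes 320 · 1279/640 = 639.5 for W = 640).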
5. The depth-map-based virtual viewpoint image generation method of claim 4, wherein in S1 the virtual viewpoint image and the virtual depth map are rendered as follows:
establishing a 3D image mapping equation and obtaining the virtual viewpoint image and the virtual depth map from the two half-pixel-interpolated reference images and their corresponding depth maps using formula (2);
first, a pixel m1(μ1, ν1) of a half-pixel-interpolated reference image is back-projected to the point M in three-dimensional space; M is then projected onto the virtual viewpoint imaging plane to obtain the corresponding point m2(μ2, ν2) in the virtual viewpoint image; combining the two projections gives the 3D image mapping equation shown in formula (2):
where N1, R1, T1 are the camera parameters of the reference viewpoint, N2, R2, T2 are the camera parameters of the virtual viewpoint, A1 is the depth of the three-dimensional point in the reference-viewpoint camera coordinate system, and A2 is its depth in the virtual-viewpoint camera coordinate system.
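The two projections of claim 5 can be sketched as a back-projection followed by a re-projection. Formula (2) itself is not reproduced in the text, so the composition below — M = R1ᵀ(A1·N1⁻¹·m1 − T1), then A2·m2 = N2(R2·M + T2) — is a standard pinhole reading of such a 3D warping equation, with variable names mirroring the claim:

```python
import numpy as np

def warp_pixel(u1, v1, A1, N1, R1, T1, N2, R2, T2):
    """3D image warping sketch: back-project pixel (u1, v1) with depth A1
    into world space, then re-project it into the virtual camera."""
    m1 = np.array([u1, v1, 1.0])
    # Back-projection into world coordinates.
    M = R1.T @ (A1 * np.linalg.inv(N1) @ m1 - T1)
    # Re-projection into the virtual view.
    p = N2 @ (R2 @ M + T2)
    A2 = p[2]                       # depth in the virtual camera
    u2, v2 = p[0] / A2, p[1] / A2
    return u2, v2, A2
```

For two identical cameras separated by a pure horizontal translation t, this reduces to the familiar disparity u1 − u2 = f·t/A1.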
6. The depth-map-based virtual viewpoint image generation method of claim 5, wherein in S1 the number of holes in the rendered virtual viewpoint image and virtual depth map is reduced by reverse image mapping, as follows: initialize two flag matrices, flag1 and flag2, of the same size as the two half-pixel-interpolated reference images, with all values set to 0; map each hole point of the two rendered virtual viewpoint images back to its corresponding reference image to obtain its pixel value, and simultaneously set the value at the corresponding position of the flag matrix to 1.
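A sketch of the reverse-mapping fill with flag matrices, under the assumptions that holes in the drawn virtual image are marked with a negative sentinel and that a `warp_back` callable (built from the mapping equation and the virtual depth map) is supplied:

```python
import numpy as np

def fill_holes_by_reverse_mapping(virtual_img, virtual_depth, ref_img, warp_back):
    """For each hole pixel of the drawn virtual view, map back into the
    reference image and fetch its value; the flag matrix records which
    pixels were obtained by reverse mapping (1) vs. forward mapping (0)."""
    flag = np.zeros(virtual_img.shape, dtype=np.uint8)
    filled = virtual_img.copy()
    holes = np.argwhere(virtual_img < 0)          # holes marked with -1 (assumption)
    for y, x in holes:
        xr, yr = warp_back(x, y, virtual_depth[y, x])
        xr, yr = int(round(xr)), int(round(yr))
        if 0 <= yr < ref_img.shape[0] and 0 <= xr < ref_img.shape[1]:
            filled[y, x] = ref_img[yr, xr]
            flag[y, x] = 1                        # pixel came from reverse mapping
    return filled, flag
```

The flag matrices are what claim 9 later consults to decide how a fused pixel value was obtained.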
7. The depth-map-based virtual viewpoint image generation method of claim 5 or 6, wherein in S2 the hole boundary regions are dilated based on depth information to remove the residual foreground false contours on the background, as follows: mark the hole boundary region of the virtual viewpoint with a flag boundary, initialized to 1; compute the absolute difference of the depth values on the left and right sides of the hole boundary region; if the absolute difference exceeds a set threshold, set boundary to 0 at the point with the smaller depth value, otherwise leave boundary unchanged; then dilate the hole boundary region whose boundary value is 1 with a 5×5 structuring element, thereby eliminating the residual foreground false contour on the background.
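The ghost-removal step might be sketched as follows. Per the claim, the smaller-depth side of a large depth discontinuity is cleared and the larger-depth side keeps boundary = 1 and is dilated; the sentinel-free mask representation and the wrap-around 5×5 dilation via `np.roll` are simplifying assumptions:

```python
import numpy as np

def expand_hole_boundary(depth, hole_mask, thr=10.0):
    """Mark hole-boundary pixels, clear boundary on the smaller-depth
    side of large discontinuities, then dilate the boundary==1 region
    with a 5x5 square element and merge it into the hole mask."""
    h, w = hole_mask.shape
    boundary = np.zeros_like(hole_mask)
    for y in range(h):
        for x in range(1, w - 1):
            # hole pixel with valid left/right neighbours = boundary column
            if hole_mask[y, x] and not hole_mask[y, x - 1] and not hole_mask[y, x + 1]:
                dl, dr = depth[y, x - 1], depth[y, x + 1]
                if abs(dl - dr) > thr:
                    # keep boundary=1 on the larger-depth side only
                    bx = x - 1 if dl > dr else x + 1
                    boundary[y, bx] = 1
    # 5x5 dilation without SciPy: shift-and-OR over all 25 offsets
    dil = np.zeros_like(boundary)
    for dy in range(-2, 3):
        for dx in range(-2, 3):
            dil |= np.roll(np.roll(boundary, dy, axis=0), dx, axis=1)
    return hole_mask | dil
```

The dilated region is re-marked as hole, so the ghost contour is filled later together with the other holes.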
8. The method of claim 7, wherein in S2 the rendered images are luminance-corrected according to the simplified model of the luminance difference between images, as follows:
denote virtual viewpoint image 1 and virtual viewpoint image 2 as Il and Ir, and the images obtained after the processing of S2 (false contours removed) as Il1 and Ir1; with N the number of non-hole pixel values, the multiplicative error factor A and the additive error factor B are computed from the non-hole pixel values Il1(x, y) and Ir1(x, y) as follows:
using the parameters A and B, luminance correction is applied to the images Il1 and Ir1, giving corrected images Il1′ and Ir1′; the image luminance correction expression is:
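The patent's closed-form expressions for A and B are elided above; one consistent reading of a multiplicative/additive luminance model is a least-squares fit of Ir1 ≈ A·Il1 + B over the N non-hole pixels, sketched here (the sentinel value and the choice to apply the full correction to Il1 are assumptions):

```python
import numpy as np

def luminance_correction(Il1, Ir1, hole=-1.0):
    """Fit Ir1 ~ A*Il1 + B over non-hole pixels (least squares), then
    apply the multiplicative factor A and additive factor B to Il1."""
    mask = (Il1 != hole) & (Ir1 != hole)
    x, y = Il1[mask], Ir1[mask]
    n = x.size
    A = (n * (x * y).sum() - x.sum() * y.sum()) / (n * (x * x).sum() - x.sum() ** 2)
    B = (y.sum() - A * x.sum()) / n
    Il1_corr = np.where(Il1 != hole, A * Il1 + B, Il1)   # holes stay untouched
    return Il1_corr, A, B
```

With an exact affine luminance offset between the two views, the fit recovers A and B exactly.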
9. The method of claim 7, wherein S3 comprises:
first, classification: when Il1′(x, y) and Ir1′(x, y) are both 0, the corresponding point Iv(x, y) in the virtual viewpoint image is set to 0; when exactly one of Il1′(x, y) and Ir1′(x, y) is 0, the non-zero pixel value is assigned to Iv(x, y); when neither is 0, the projection mode by which the pixel was obtained is judged from the previously recorded values of the flag matrices flag1 and flag2, with the threshold τ set to 5: if flag1(x, y) and flag2(x, y) are both 0 or both 1, i.e. the point was obtained in both virtual images by forward mapping or in both by reverse mapping, Iv(x, y) is obtained by weighted fusion using formula (6); if flag1(x, y) and flag2(x, y) differ, i.e. the same point was obtained by forward mapping in one image and by reverse mapping in the other, the absolute difference |Il1′(x, y) − Ir1′(x, y)| is compared against the threshold: when |Il1′(x, y) − Ir1′(x, y)| ≤ τ, Iv(x, y) is obtained by weighted fusion using formula (6); when |Il1′(x, y) − Ir1′(x, y)| > τ, the forward-mapped value is assigned to Iv(x, y); here α is the weight factor and t denotes the translation of the camera viewpoint position;
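The per-pixel decision procedure of claim 9 can be sketched as a pure function. The interpretation of formula (6) as a linear blend with weight α (e.g. a normalized baseline ratio derived from t), and flag = 0 meaning "forward-mapped" (per claim 6), are assumptions:

```python
def fuse_pixel(Il, Ir, flag_l, flag_r, alpha, tau=5.0, hole=0.0):
    """Classify and fuse one pixel from the two corrected virtual views."""
    if Il == hole and Ir == hole:
        return hole                            # both holes: stays a hole
    if Il == hole:
        return Ir                              # take the only valid value
    if Ir == hole:
        return Il
    if flag_l == flag_r:
        # both forward-mapped or both reverse-mapped: weighted fusion, eq. (6)
        return (1.0 - alpha) * Il + alpha * Ir
    if abs(Il - Ir) <= tau:
        # mixed projection modes but consistent values: still fuse
        return (1.0 - alpha) * Il + alpha * Ir
    # large disagreement: trust the forward-mapped value (flag == 0)
    return Il if flag_l == 0 else Ir
```

The τ = 5 threshold matches the value stated in the claim; α would normally be chosen from the virtual viewpoint's position between the two reference viewpoints.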
10. The depth-map-based virtual viewpoint image generation method of claim 9, wherein S4 is performed as follows: to fill the holes remaining in the virtual viewpoint image fused in S3, scan the image pixel values to locate and number the holes; scan the numbered hole pixels one by one with a 3×3 window function; if some of the points surrounding a hole point are hole points and some are not, the point lies on a hole edge; interpolate these hole-edge points and fill them using the background information of the depth map, which improves the filling quality and yields a virtual viewpoint image with a clear picture.
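A sketch of the S4 filling loop: a 3×3 window classifies hole-edge pixels, which are then filled from background-side neighbours. The background test (neighbours whose depth lies in the lower half of the window's depth range, assuming a larger depth value means closer/foreground) and the repeated edge-eroding passes are assumptions:

```python
import numpy as np

def fill_holes(img, depth, hole=-1.0):
    """Repeatedly fill hole-edge pixels (3x3 window contains both hole
    and non-hole points) from their background-side neighbours, eroding
    each hole inwards until none remain."""
    out = img.copy()
    h, w = out.shape
    while np.any(out == hole):
        snapshot = out.copy()                    # fill against a fixed pass
        for y in range(h):
            for x in range(w):
                if snapshot[y, x] != hole:
                    continue
                ys = slice(max(y - 1, 0), y + 2)
                xs = slice(max(x - 1, 0), x + 2)
                vals = snapshot[ys, xs].ravel()
                deps = depth[ys, xs].ravel()
                valid = vals != hole
                if not valid.any():
                    continue                     # interior point: later pass
                d = deps[valid]
                cut = (d.min() + d.max()) / 2.0  # split window depth range
                out[y, x] = vals[valid][d <= cut].mean()   # background side
    return out
```

With a uniform-depth neighbourhood the rule degenerates to plain neighbour averaging; across a depth edge it draws only from the far (background) side, as the claim intends.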
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710034878.0A CN106791774A (en) | 2017-01-17 | 2017-01-17 | Virtual visual point image generating method based on depth map |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106791774A true CN106791774A (en) | 2017-05-31 |
Family
ID=58946362
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710034878.0A Pending CN106791774A (en) | 2017-01-17 | 2017-01-17 | Virtual visual point image generating method based on depth map |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106791774A (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3593466B2 (en) * | 1999-01-21 | 2004-11-24 | 日本電信電話株式会社 | Method and apparatus for generating virtual viewpoint image |
US20060077255A1 (en) * | 2004-08-10 | 2006-04-13 | Hui Cheng | Method and system for performing adaptive image acquisition |
CN104661013A (en) * | 2015-01-27 | 2015-05-27 | 宁波大学 | Virtual view point drawing method based on spatial weighting |
Non-Patent Citations (2)
Title |
---|
Wang Jingyuan: "Research on Virtual View Rendering Algorithms Based on Depth Images", China Master's Theses Full-text Database *
Gao Lijie: "Research on Virtual Viewpoint Generation Algorithms for Multi-view Stereoscopic Images Based on Depth Images", China Master's Theses Full-text Database *
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109714587A (en) * | 2017-10-25 | 2019-05-03 | 杭州海康威视数字技术股份有限公司 | A kind of multi-view image production method, device, electronic equipment and storage medium |
CN109769109A (en) * | 2019-03-05 | 2019-05-17 | 东北大学 | Method and system based on virtual view synthesis drawing three-dimensional object |
CN111667438B (en) * | 2019-03-07 | 2023-05-26 | 阿里巴巴集团控股有限公司 | Video reconstruction method, system, device and computer readable storage medium |
CN111667438A (en) * | 2019-03-07 | 2020-09-15 | 阿里巴巴集团控股有限公司 | Video reconstruction method, system, device and computer readable storage medium |
CN109982064B (en) * | 2019-03-18 | 2021-04-27 | 影石创新科技股份有限公司 | Naked eye 3D virtual viewpoint image generation method and portable terminal |
CN109982064A (en) * | 2019-03-18 | 2019-07-05 | 深圳岚锋创视网络科技有限公司 | A kind of virtual visual point image generating method and portable terminal of naked eye 3D |
WO2020187339A1 (en) * | 2019-03-18 | 2020-09-24 | 影石创新科技股份有限公司 | Naked eye 3d virtual viewpoint image generation method and portable terminal |
CN110062220B (en) * | 2019-04-10 | 2021-02-19 | 长春理工大学 | Virtual viewpoint image generation method with maximized parallax level |
CN110062220A (en) * | 2019-04-10 | 2019-07-26 | 长春理工大学 | The maximized virtual visual point image generating method of parallax level |
CN112749610A (en) * | 2020-07-27 | 2021-05-04 | 腾讯科技(深圳)有限公司 | Depth image, reference structured light image generation method and device and electronic equipment |
WO2022116397A1 (en) * | 2020-12-04 | 2022-06-09 | 北京大学深圳研究生院 | Virtual viewpoint depth map processing method, device, and apparatus, and storage medium |
CN113450274A (en) * | 2021-06-23 | 2021-09-28 | 山东大学 | Self-adaptive viewpoint fusion method and system based on deep learning |
CN113450274B (en) * | 2021-06-23 | 2022-08-05 | 山东大学 | Self-adaptive viewpoint fusion method and system based on deep learning |
CN113936116A (en) * | 2021-11-12 | 2022-01-14 | 合众新能源汽车有限公司 | Complex space curved surface mapping method for transparent A column |
CN113936116B (en) * | 2021-11-12 | 2024-04-16 | 合众新能源汽车股份有限公司 | Complex space curved surface mapping method for transparent A column |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106791774A (en) | Virtual visual point image generating method based on depth map | |
CN101902657B (en) | Method for generating virtual multi-viewpoint images based on depth image layering | |
CN111325693B (en) | Large-scale panoramic viewpoint synthesis method based on single viewpoint RGB-D image | |
US7260274B2 (en) | Techniques and systems for developing high-resolution imagery | |
CN111047709B (en) | Binocular vision naked eye 3D image generation method | |
CN109462747B (en) | DIBR system cavity filling method based on generation countermeasure network | |
CN112543317B (en) | Method for converting high-resolution monocular 2D video into binocular 3D video | |
JP4828506B2 (en) | Virtual viewpoint image generation device, program, and recording medium | |
CN103905813B (en) | Based on the DIBR hole-filling method of background extracting and divisional reconstruction | |
CN106791773B (en) | A kind of novel view synthesis method based on depth image | |
TWI469088B (en) | Depth map generation module for foreground object and the method thereof | |
CN104954780A (en) | DIBR (depth image-based rendering) virtual image restoration method applicable to high-definition 2D/3D (two-dimensional/three-dimensional) conversion | |
CN106060509B (en) | Introduce the free view-point image combining method of color correction | |
CN108924434B (en) | Three-dimensional high dynamic range image synthesis method based on exposure transformation | |
CN111447428A (en) | Method and device for converting plane image into three-dimensional image, computer readable storage medium and equipment | |
CN108833879A (en) | With time and space continuity virtual visual point synthesizing method | |
JPH0981746A (en) | Two-dimensional display image generating method | |
CN115063303B (en) | Image 3D method based on image restoration | |
CN113450274B (en) | Self-adaptive viewpoint fusion method and system based on deep learning | |
CN107610070B (en) | Free stereo matching method based on three-camera collection | |
Seitner et al. | Trifocal system for high-quality inter-camera mapping and virtual view synthesis | |
CN115564708A (en) | Multi-channel high-quality depth estimation system | |
JP2019149112A (en) | Composition device, method, and program | |
CN112565623A (en) | Dynamic image display system | |
CN117061720B (en) | Stereo image pair generation method based on monocular image and depth image rendering |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
RJ01 | Rejection of invention patent application after publication ||
Application publication date: 20170531 |