CN106791774A - Virtual visual point image generating method based on depth map - Google Patents

Virtual visual point image generating method based on depth map

Info

Publication number
CN106791774A
Authority
CN
China
Prior art keywords
image
visual point
virtual
virtual visual
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710034878.0A
Other languages
Chinese (zh)
Inventor
向北海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Youxiang Technology Co Ltd
Original Assignee
Hunan Youxiang Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Youxiang Technology Co Ltd filed Critical Hunan Youxiang Technology Co Ltd
Priority to CN201710034878.0A priority Critical patent/CN106791774A/en
Publication of CN106791774A publication Critical patent/CN106791774A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/128Adjusting depth or disparity
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/122Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention proposes a virtual viewpoint image generation method based on depth maps. First, the reference images are smoothed with a half-pixel-accuracy interpolation method and the camera intrinsic parameters are adjusted accordingly; the images of two reference viewpoints and their corresponding depth images are processed with a bidirectional asynchronous mapping mechanism to render a virtual viewpoint image and a corresponding virtual depth image from each. Next, the hole boundary regions are expanded based on depth information to remove the false foreground contours left in the background. Brightness correction is applied to the rendered images according to a simplified model of the luminance difference between images, eliminating the brightness-discontinuity problem, and the rendered images are fused with a weighted blending algorithm to eliminate most occlusion holes. Finally, hole edges are located with a window function and the hole pixels are filled by interpolation from background information, generating a sharp virtual viewpoint image.

Description

Virtual visual point image generating method based on depth map
Technical field
The invention belongs to the technical field of image processing and relates to virtual viewpoint image generation methods, in particular to a virtual viewpoint image generation method based on a depth map.
Background technology
With the rapid development of computer vision technology, people demand ever higher quality from visual resources: they pursue not only high-quality video sources but also increasingly strong stereoscopic presentation and interactive experiences. Ideally, a user should be able to change the position and orientation of the camera interactively at any time, move along a self-defined trajectory, and watch a video from viewpoints at which no camera actually exists, breaking the viewing-angle limitation of traditional video. Virtual view generation technology arose to meet this goal: it uses the images of two adjacent viewpoints captured by cameras to generate the virtual viewpoint image of an intermediate camera position, allowing the user to experience the objective world from more angles.
There are two main approaches to virtual viewpoint rendering: model-based rendering and image-based rendering. Although model-based rendering performs well on detailed information, its modeling process is complicated and its practicality poor. Image-based rendering generates the image of a virtual viewpoint from the images of reference viewpoints; it requires less transmission bandwidth while achieving good rendering quality. Depth-image-based rendering is a commonly used image-based technique and mainly comprises four steps: depth map preprocessing, image warping, image fusion, and hole filling. Depth map preprocessing smooths the depth images of the images captured by adjacent cameras to reduce the number of holes and cracks in the synthesized image; image warping obtains the image of the virtual viewpoint between the cameras from the two camera images by three-dimensional coordinate transformation; image fusion merges the two virtual viewpoint images obtained from the two cameras into a single image; and hole filling fills the remaining hole points of the fused image to generate a higher-quality virtual viewpoint image. However, images rendered with depth-image-based virtual view generation still suffer from technical difficulties such as overlaps, holes, cracks, and artifacts.
The content of the invention
The object of the present invention is to propose a virtual viewpoint image generation method based on depth maps. It guarantees image mapping accuracy through interpolation smoothing and camera intrinsic parameter adjustment, removes false contours and artifacts with a hole-region expansion method, eliminates the brightness-discontinuity problem according to a simplified model of the luminance difference between images, eliminates most occlusion holes with a weighted blending algorithm, then searches for hole edges with a window function and fills the hole pixels by interpolation from background information, finally producing a sharp virtual viewpoint image.
To achieve the above technical purpose, the technical scheme of the invention is as follows.
A virtual viewpoint image generation method based on depth maps, comprising the following steps:
S1: select any two viewpoint images captured during camera shooting as reference images and extract the depth map of each reference image; smooth the reference images with the half-pixel-accuracy interpolation method and adjust the camera intrinsic parameters so that the interpolated images still satisfy the image mapping equation; process the two reference-viewpoint images and their corresponding depth images with a bidirectional asynchronous mapping mechanism, rendering a virtual viewpoint image and a virtual depth image from each;
S2: expand the hole boundary regions based on depth information to remove the false foreground contours left in the background, and apply brightness correction to the virtual viewpoint images according to the simplified model of the luminance difference between images, eliminating the brightness-discontinuity problem;
S3: classify each pixel according to the different ways it was obtained in the two brightness-corrected virtual viewpoint images, and fuse the corrected images with a weighted blending algorithm to eliminate most occlusion holes;
S4: search for hole edges with a window function and fill the hole pixels by interpolation from background information, generating a sharp virtual viewpoint image.
In the present invention, S1 comprises the following steps:
S11: select any two viewpoint images captured during camera shooting as reference image 1 and reference image 2. There are several methods for obtaining the depth information of an image, such as sequence image matching, structured-light measurement, and triangulation; here the method based on sequence image matching obtains the depth of the image from the parallax between the views, and extracting the depth maps of reference image 1 and reference image 2 yields reference depth map 1 and reference depth map 2.
S12: to improve the rendering precision of the image, before the virtual viewpoint image is rendered, the reference images are smoothed with the half-pixel-accuracy interpolation method, in which the value of each interpolated point is obtained as the average of its neighboring pixels.
Let W and H be the width and height of the reference image, f the camera focal length, (μ0, v0) the coordinates of the principal point in the pixel coordinate system, and s the skew parameter. The original camera intrinsic matrix is adjusted with multipliers k1 and k2, where k1 = (2W−1)/W and k2 = (2H−1)/H; the original camera intrinsics are adjusted to the new intrinsics according to formula (1), so that the interpolated image still satisfies the image mapping equation.
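The smoothing of S12 and the intrinsic adjustment of formula (1) can be sketched in NumPy as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the interpolated grid is taken to be (2W−1)×(2H−1), original samples stay on the even positions, and each half-pixel position is the average of its neighboring original samples, as the text describes.

```python
import numpy as np

def half_pixel_interpolate(img):
    """Upsample a grayscale image to (2H-1, 2W-1) by half-pixel interpolation:
    original pixels stay on the even grid, and each inserted half-pixel is
    the average of its neighboring original pixels."""
    H, W = img.shape
    out = np.zeros((2 * H - 1, 2 * W - 1), dtype=np.float64)
    out[::2, ::2] = img                                   # original samples
    out[1::2, ::2] = (img[:-1] + img[1:]) / 2.0           # vertical half-pixels
    out[::2, 1::2] = (img[:, :-1] + img[:, 1:]) / 2.0     # horizontal half-pixels
    out[1::2, 1::2] = (img[:-1, :-1] + img[:-1, 1:] +
                       img[1:, :-1] + img[1:, 1:]) / 4.0  # center half-pixels
    return out

def adjust_intrinsics(K, W, H):
    """Scale the camera intrinsic matrix per formula (1) so that the
    interpolated (2W-1) x (2H-1) image still satisfies the mapping equation."""
    k1 = (2 * W - 1) / W
    k2 = (2 * H - 1) / H
    K2 = K.astype(np.float64).copy()
    K2[0, 0] *= k1        # focal term f/dx
    K2[0, 1] *= k2        # skew s
    K2[1, 1] *= k2        # focal term f/dy
    K2[0, 2] *= 2.0       # principal point mu0
    K2[1, 2] *= 2.0       # principal point v0
    return K2
```

With W = H = 100, k1 = k2 = 1.99, so the focal terms scale by 1.99 while the principal point doubles, consistent with the adjusted matrix of formula (1).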
S13: the images of the two reference viewpoints and their corresponding depth images are processed with the bidirectional asynchronous mapping mechanism, and a virtual viewpoint image and a virtual depth image are rendered from each; the detailed procedure is:
The 3D image mapping equation is set up; from the two half-pixel-interpolated reference images and their corresponding depth maps, formula (2) yields a virtual viewpoint image and a virtual depth image for each reference view.
A pixel m1(μ1, v1) on the half-pixel-interpolated reference image is first projected to its corresponding point M in three-dimensional space; the point M is then projected onto the imaging plane of the virtual viewpoint, giving the corresponding point coordinates m2(μ2, v2) on the virtual viewpoint image. Merging the two projections yields the 3D image mapping equation shown in formula (2):

$$A_2 \begin{bmatrix} \mu_2 & v_2 & 1 \end{bmatrix}^T = N_2 R N_1^{-1} A_1 \begin{bmatrix} \mu_1 & v_1 & 1 \end{bmatrix}^T - N_2 R T_1 + N_2 T_2, \qquad R = R_2 R_1^{-1} \qquad (2)$$

where N1, R1, T1 are the camera parameters of the reference viewpoint, N2, R2, T2 are the camera parameters of the virtual viewpoint, A1 is the depth of the three-dimensional point in the reference-camera coordinate system, and A2 is its depth in the virtual-camera coordinate system.
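Equation (2) amounts to a standard 3D warping step. The following is a hedged NumPy sketch, not the patented implementation: the intrinsic matrices N1 and N2 appear in the code as K1 and K2, and the rotations and translations R1, T1, R2, T2 are assumed to be given in the same convention as the text.

```python
import numpy as np

def forward_warp(depth_ref, K1, R1, T1, K2, R2, T2):
    """Warp reference-view pixels into the virtual view via equation (2):
    A2 [u2, v2, 1]^T = K2 R K1^{-1} A1 [u1, v1, 1]^T - K2 R T1 + K2 T2,
    with R = R2 R1^{-1}. Returns target pixel coordinates and the depth of
    each point in the virtual camera frame."""
    H, W = depth_ref.shape
    u1, v1 = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u1.ravel(), v1.ravel(), np.ones(H * W)])  # 3 x N homogeneous pixels
    A1 = depth_ref.ravel()
    R = R2 @ np.linalg.inv(R1)
    cam = K2 @ R @ np.linalg.inv(K1) @ (pix * A1)             # first term of (2)
    cam += (-K2 @ R @ T1 + K2 @ T2).reshape(3, 1)             # translation terms
    A2 = cam[2]                                               # depth in virtual frame
    u2, v2 = cam[0] / A2, cam[1] / A2                         # dehomogenize
    return u2.reshape(H, W), v2.reshape(H, W), A2.reshape(H, W)
```

When the virtual camera coincides with the reference camera (identical parameters), the warp reduces to the identity mapping, which is a convenient sanity check.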
Further, the virtual viewpoint image and virtual depth image rendered from each of the two half-pixel-interpolated reference images are refined by reverse image mapping to reduce the number of holes in the virtual viewpoint images. The method is: initialize two marker matrices flag1 and flag2, each equal in size to its half-pixel-interpolated reference image, with initial value 0; map the hole points on the two rendered virtual viewpoint images (denoted rendered image 1 and rendered image 2) back onto their corresponding reference images to obtain pixel values for the hole points, and set the value of the corresponding positions in the marker matrices to 1. After the reverse-mapping processing, the number of holes in the virtual viewpoint images is significantly reduced.
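The reverse-mapping refinement can be sketched as below. `warp_back` is a hypothetical helper standing in for the inverse of mapping equation (2), since the patent does not spell out its closed form; the sketch only illustrates the bookkeeping with the flag matrix.

```python
import numpy as np

def reverse_fill_holes(virt_img, virt_depth, ref_img, warp_back):
    """Fill hole pixels (value 0) of a rendered virtual view by reverse
    mapping: each hole pixel is projected back into the reference image via
    a caller-supplied warp_back(u, v, depth) -> (u_ref, v_ref) and takes the
    reference value. A flag matrix marks which pixels were obtained by
    reverse mapping (1) rather than forward mapping (0)."""
    H, W = virt_img.shape
    flag = np.zeros((H, W), dtype=np.uint8)
    out = virt_img.copy()
    holes = np.argwhere(virt_img == 0)          # hole pixels have value 0
    for y, x in holes:
        ur, vr = warp_back(x, y, virt_depth[y, x])
        ur, vr = int(round(ur)), int(round(vr))
        if 0 <= vr < ref_img.shape[0] and 0 <= ur < ref_img.shape[1]:
            out[y, x] = ref_img[vr, ur]
            flag[y, x] = 1                      # mark as reverse-mapped
    return out, flag
```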
At this point the first stage is complete: reference image 1 with reference depth map 1 and reference image 2 with reference depth map 2 have each yielded corresponding virtual viewpoint image 1 and virtual viewpoint image 2.
S2 of the invention comprises the following steps:
S21: the hole boundary regions are expanded based on depth information to remove the false foreground contours left in the background. The method is: label the hole boundary regions in the virtual view as boundary, with initial boundary value 1; subtract the depth values of the points on the left and right sides of each hole boundary region and take the absolute difference; if the absolute difference exceeds the set threshold, set the boundary value of the point with the smaller depth to 0, otherwise keep the boundary values unchanged; then dilate the hole boundary regions whose boundary value is 1 with a 5×5 expansion, which eliminates the false foreground contours left in the background.
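A minimal sketch of S21 under stated assumptions: hole runs and depth comparisons are handled row-wise, the threshold value is an assumption (the patent says only "a set threshold"), and the 5×5 expansion is implemented as a plain binary dilation.

```python
import numpy as np

def dilate5(mask):
    """Binary 5x5 dilation implemented with shifted copies (no SciPy)."""
    H, W = mask.shape
    padded = np.pad(mask, 2)
    out = np.zeros_like(mask)
    for dy in range(5):
        for dx in range(5):
            out |= padded[dy:dy + H, dx:dx + W]
    return out

def false_contour_mask(hole_mask, depth, thresh=10.0):
    """Row-wise sketch of S21: for every horizontal hole run, compare the
    depths just left and right of the run; if they differ by more than
    `thresh`, keep only the larger-depth side as expandable boundary
    (dropping the smaller-depth side, per the text), then dilate 5x5."""
    H, W = hole_mask.shape
    boundary = np.zeros((H, W), dtype=bool)
    for y in range(H):
        x = 0
        while x < W:
            if hole_mask[y, x]:
                start = x
                while x < W and hole_mask[y, x]:
                    x += 1
                left, right = start - 1, x          # pixels flanking the run
                if 0 < start and x < W:
                    dl, dr = depth[y, left], depth[y, right]
                    if abs(dl - dr) > thresh:
                        boundary[y, left if dl >= dr else right] = True
                    else:
                        boundary[y, left] = boundary[y, right] = True
            else:
                x += 1
    return dilate5(boundary)
```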
S22: brightness correction is applied to the rendered images according to the simplified model of the luminance difference between images, eliminating the brightness-discontinuity problem.
Denote virtual viewpoint image 1 and virtual viewpoint image 2 as Il and Ir, and the images obtained after the processing of S21 as Il1 and Ir1. With N the number of non-hole pixel values, the multiplicative error factor A and the additive error factor B are computed from the non-hole pixel values Il1(x, y) and Ir1(x, y) of the false-contour-removed images Il1 and Ir1 as follows:
Images Il1 and Ir1 are brightness-corrected with parameters A and B, giving corrected images Il1′ and Ir1′; the brightness-correction expression is:
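The patent's exact formulas for A and B and for the correction expression are not reproduced in this text. As a hedged illustration of the linear luminance model the passage describes (Ir ≈ A·Il + B over non-hole pixels), the factors can be estimated by a least-squares fit, which is used here only as a stand-in consistent with that model:

```python
import numpy as np

def luminance_factors(I_l, I_r):
    """Estimate multiplicative factor A and additive factor B of the linear
    luminance model I_r ~ A * I_l + B over non-hole pixels (value > 0).
    Least squares is an assumed stand-in for the patent's formulas."""
    valid = (I_l > 0) & (I_r > 0)
    x, y = I_l[valid].astype(float), I_r[valid].astype(float)
    A = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
    B = y.mean() - A * x.mean()
    return A, B

def correct_luminance(I_l, A, B):
    """Map the left image into the right image's luminance scale."""
    out = A * I_l.astype(float) + B
    out[I_l == 0] = 0          # keep hole pixels empty
    return out
```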
The specific method of S3 of the invention is:
Because, of the two selected reference-viewpoint images, a region invisible in one viewpoint image may be visible in the other — the hole phenomenon caused by occlusion — a classification judgment is made first. When both Il1′(x, y) and Ir1′(x, y) are 0, the corresponding point Iv(x, y) in the virtual viewpoint image is 0. When exactly one of Il1′(x, y) and Ir1′(x, y) is 0, the non-zero pixel value is assigned to Iv(x, y). When neither Il1′(x, y) nor Ir1′(x, y) is 0, the projection mode by which the pixel was obtained is judged, using the values of the earlier marker matrices flag1 and flag2 as the criterion, with threshold τ set to 5. If flag1(x, y) and flag2(x, y) are both 0 or both 1 — i.e., the point was obtained in both virtual images by forward mapping or in both by reverse mapping — Iv(x, y) is obtained by the weighted-fusion formula (6). If the values of flag1(x, y) and flag2(x, y) differ — i.e., the same point was obtained by forward mapping in one image and by reverse mapping in the other — the absolute difference |Il1′(x, y) − Ir1′(x, y)| is compared against the threshold: when |Il1′(x, y) − Ir1′(x, y)| ≤ τ, Iv(x, y) is obtained by the weighted-fusion formula (6); when |Il1′(x, y) − Ir1′(x, y)| > τ, the value obtained by forward mapping is assigned to Iv(x, y). Here α is the weight factor and t denotes the translation of the camera viewpoint position.
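The classification logic of S3 can be sketched as follows. Weighted-fusion formula (6) is not reproduced in this text; the usual distance-weighted blend Iv = α·Il + (1 − α)·Ir is assumed in its place, with α derived from the virtual camera's position between the reference views (here simply a parameter):

```python
import numpy as np

def fuse_views(Il, Ir, flag1, flag2, alpha=0.5, tau=5.0):
    """Classification-based fusion per S3. Hole pixels have value 0; flag
    value 0 means forward-mapped, 1 means reverse-mapped. The blend
    alpha * Il + (1 - alpha) * Ir stands in for formula (6)."""
    Iv = np.zeros_like(Il, dtype=float)
    both = (Il > 0) & (Ir > 0)
    only_l = (Il > 0) & (Ir == 0)
    only_r = (Il == 0) & (Ir > 0)
    Iv[only_l] = Il[only_l]                        # visible in one view only
    Iv[only_r] = Ir[only_r]
    same = flag1 == flag2                          # same projection mode
    blend = both & (same | (np.abs(Il - Ir) <= tau))
    Iv[blend] = alpha * Il[blend] + (1 - alpha) * Ir[blend]
    # differing modes and large difference: trust the forward-mapped value
    fwd = both & ~same & (np.abs(Il - Ir) > tau)
    Iv[fwd] = np.where(flag1[fwd] == 0, Il[fwd], Ir[fwd])
    return Iv
```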
The specific method of S4 of the invention is: hole filling is applied to the virtual viewpoint image fused in step S3. The image pixel values are scanned to locate the holes in the virtual viewpoint image, and the holes are numbered. The numbered hole pixels are scanned one by one with a 3×3 window function; each point within the window is judged by its pixel value — a value of 0 marks a hole point, otherwise a non-hole point. If some of the points surrounding a hole point are hole points and some are not, the point lies on a hole edge. These hole-edge points are interpolated, filling the hole points from the depth-map background information, which improves the filling quality and generates a sharp virtual viewpoint image.
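The hole-edge search and filling of S4 can be sketched as an outside-in iteration. Selecting specifically background (larger-depth) neighbors via the depth map, as the patent describes, is omitted here for brevity; this assumed simplification fills each edge pixel with the mean of all its non-hole neighbors:

```python
import numpy as np

def fill_hole_edges(img, passes=10):
    """Iteratively fill hole pixels (value 0) from the outside in: a hole
    pixel whose 3x3 neighborhood contains at least one non-hole pixel lies
    on a hole edge and is replaced by the mean of its non-hole neighbors."""
    out = img.astype(float).copy()
    for _ in range(passes):
        holes = np.argwhere(out == 0)
        if len(holes) == 0:
            break
        filled = out.copy()
        for y, x in holes:
            win = out[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
            vals = win[win > 0]                   # non-hole neighbors
            if len(vals):                         # point lies on a hole edge
                filled[y, x] = vals.mean()
        out = filled
    return out
```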
The beneficial effects of the invention are as follows:
The present invention proposes a virtual viewpoint image generation method based on depth maps. It guarantees image mapping accuracy through interpolation smoothing and camera intrinsic parameter adjustment, removes false contours and artifacts with a hole-region expansion method, eliminates the brightness-discontinuity problem according to the simplified model of the luminance difference between images, eliminates most occlusion holes with a weighted blending algorithm, then searches for hole edges with a window function and fills the hole pixels by interpolation from background information, finally producing a sharp virtual viewpoint image.
Brief description of the drawings
Fig. 1 is a flow block diagram of the invention.
Wherein,
1 is acquisition of the reference viewpoint images and reference depth images;
2 is bidirectional mapping to obtain the virtual viewpoint image and virtual depth image corresponding to each reference view;
3 is false contour removal;
4 is image brightness correction;
5 is image fusion;
6 is hole filling.
Specific embodiment
To make the object, technical solutions, and advantages of the present invention clearer, the embodiments of the invention are described in further detail below with reference to the accompanying drawings.
As shown in Fig. 1, the present invention proposes a virtual viewpoint image generation method based on depth maps. First, the images of two reference viewpoints and their corresponding depth images are processed with a bidirectional asynchronous mapping mechanism to render a virtual viewpoint image and a corresponding virtual depth image from each. The hole boundary regions are then expanded based on depth information to remove the false foreground contours left in the background. Brightness correction is applied to the rendered images according to the simplified model of the luminance difference between images, eliminating the brightness-discontinuity problem, and the rendered images are fused with a weighted blending algorithm to eliminate most occlusion holes. Finally, hole edges are located with a window function and the hole pixels are filled by interpolation from background information, generating a sharp virtual viewpoint image.
Any two viewpoint images captured during camera shooting are selected as reference image 1 and reference image 2. The depth information of an image can be obtained in several ways, such as sequence image matching, structured-light measurement, and triangulation; here the method based on sequence image matching obtains the depth of the image from the parallax between the views, and extracting the depth maps of the reference images yields reference depth map 1 and reference depth map 2. Pictures of the same scene from different viewing angles are similar and contain a large amount of redundant information, while depth information partly reflects the positional relationships of objects in space; using depth information as a reference in virtual view generation therefore reduces the redundancy between images from different viewing angles.
To improve the rendering precision of the image, before the virtual viewpoint image is rendered, the reference images are smoothed with the half-pixel-accuracy interpolation method, in which the value of each interpolated point is obtained as the average of its neighboring pixels. Let W and H be the width and height of the image, f the camera focal length, (μ0, v0) the coordinates of the principal point in the pixel coordinate system, and s the skew parameter. The original camera intrinsic matrix is adjusted with multipliers k1 and k2, where k1 = (2W−1)/W and k2 = (2H−1)/H; the original camera intrinsics are adjusted to the new intrinsics according to formula (1), so that the interpolated image still satisfies the image mapping equation.
The images of the two reference viewpoints and their corresponding depth images are then processed with the bidirectional asynchronous mapping mechanism. The 3D image mapping equation is set up, and from the two half-pixel-interpolated reference images and their corresponding depth maps, formula (2) yields a virtual viewpoint image and a virtual depth image for each. A pixel m1(μ1, v1) on the half-pixel-interpolated reference image is first projected to its corresponding point M in three-dimensional space; the point M is then projected onto the imaging plane of the virtual viewpoint, giving the corresponding point coordinates m2(μ2, v2) on the virtual viewpoint image; merging the two projections yields the mapping equation shown in formula (2).
N1, R1, T1 are the camera parameters of the reference viewpoint, N2, R2, T2 are the camera parameters of the virtual viewpoint, A1 is the depth of the three-dimensional point in the reference-camera coordinate system, and A2 is its depth in the virtual-camera coordinate system.
The virtual viewpoint images and depth maps rendered from the two reference images are refined by reverse image mapping: two marker matrices flag1 and flag2, each equal in size to its half-pixel-interpolated reference image, are initialized to 0; the hole points on the two rendered virtual viewpoint images are mapped back onto their corresponding reference images to obtain pixel values for the hole points, and the values of the corresponding positions in the marker matrices are set to 1. After the reverse-mapping processing, the number of holes in the virtual viewpoint images is significantly reduced.
At this point the first stage is complete: reference image 1 with reference depth map 1 and reference image 2 with reference depth map 2 have each yielded corresponding virtual viewpoint image 1 and virtual viewpoint image 2. The second stage then follows, mainly comprising removal of the false foreground contours left in the background of the virtual viewpoint images, brightness correction, weighted image fusion, and hole filling, ultimately generating a sharp virtual viewpoint image.
Because the border between foreground and background pixels is blurred by their mutual interference, when the known viewpoint is transformed to the virtual viewpoint, pixels of foreground objects can remain on the background, leaving false contours in the virtual view. The key to erasing false contours is to choose the erasure region correctly, which can be done by comparing the transformed depth values on the two sides of each hole.
The hole boundary regions in the virtual view are labeled boundary, with initial boundary value 1. The depth values of the points on the left and right sides of each hole boundary region are subtracted and the absolute difference is taken; if the absolute difference exceeds the set threshold, the boundary value of the point with the smaller depth is set to 0, otherwise the boundary values are kept unchanged. The hole boundary regions whose boundary value is 1 are then dilated with a 5×5 expansion, which eliminates most of the foreground pixels remaining in the background.
By simplifying the digital camera model, the luminance difference between images can be approximated as a linear relationship. Denote virtual viewpoint image 1 and virtual viewpoint image 2 as Il and Ir, and the images obtained after false-contour erasure as Il1 and Ir1. With N the number of non-hole pixel values, the multiplicative error factor A and the additive error factor B are computed from the non-hole pixel values Il1(x, y) and Ir1(x, y) of the false-contour-removed images as follows:
Images Il1 and Ir1 are brightness-corrected with parameters A and B, giving corrected images Il1′ and Ir1′; the brightness-correction expression is:
To obtain a virtual viewpoint image with good visual quality, the two brightness-corrected virtual viewpoint images Il1′ and Ir1′ must be fused, classifying pixels during fusion according to the different ways they were obtained in the two images. The detailed procedure is as follows. Because a region invisible in one of the two selected reference-viewpoint images may be visible in the other — the hole phenomenon caused by occlusion — a classification judgment is made first. When both Il1′(x, y) and Ir1′(x, y) are 0, the corresponding point Iv(x, y) in the virtual image is 0. When exactly one of Il1′(x, y) and Ir1′(x, y) is 0, the non-zero pixel value is assigned to Iv(x, y). When neither is 0, the projection mode of the pixel is judged from the values of the earlier marker matrices flag1 and flag2, with threshold τ set to 5. If flag1(x, y) and flag2(x, y) are both 0 or both 1 — i.e., the point was obtained in both virtual images by forward mapping or in both by reverse mapping — Iv(x, y) is obtained by the weighted-fusion formula (6). If the values of flag1(x, y) and flag2(x, y) differ — i.e., the same point was obtained by forward mapping in one image and by reverse mapping in the other — the absolute difference |Il1′(x, y) − Ir1′(x, y)| is compared against the threshold: when it is ≤ τ, Iv(x, y) is obtained by the weighted-fusion formula (6); when it is > τ, the forward-mapped value is assigned to Iv(x, y). Here α is the weight factor and t denotes the translation of the camera viewpoint position.
Hole filling is applied to the fused virtual viewpoint image: the image pixel values are scanned to locate the holes, the holes are numbered, and the numbered hole pixels are scanned one by one with a 3×3 window function. If some of the points around a hole point are hole points and some are not, the point lies on a hole edge; these hole-edge points are interpolated, filling the hole points from the depth-map background information and improving the filling quality.
The above describes preferred embodiments of the invention in order to explain its technical features in detail; the invention is not limited to the specific forms described in the embodiments, and other modifications and variations made according to the purport of the invention are also protected by this patent. The purport of the invention is defined by the claims rather than by the specific descriptions of the embodiments.

Claims (10)

1. A virtual viewpoint image generation method based on depth maps, characterized by comprising the following steps:
S1: select any two viewpoint images captured during camera shooting as reference images and extract the depth map of each reference image; smooth the reference images with the half-pixel-accuracy interpolation method and adjust the camera intrinsic parameters so that the interpolated images still satisfy the image mapping equation; process the two reference-viewpoint images and their corresponding depth images with a bidirectional asynchronous mapping mechanism, rendering a virtual viewpoint image and a virtual depth image from each;
S2: expand the hole boundary regions based on depth information to remove the false foreground contours left in the background, and apply brightness correction to the virtual viewpoint images according to the simplified model of the luminance difference between images, eliminating the brightness-discontinuity problem;
S3: classify each pixel according to the different ways it was obtained in the two brightness-corrected virtual viewpoint images, and fuse the corrected images with a weighted blending algorithm to eliminate most occlusion holes;
S4: search for hole edges with a window function and fill the hole pixels by interpolation from background information, generating a sharp virtual viewpoint image.
2. The virtual viewpoint image generation method based on depth maps according to claim 1, characterized in that in S1, the depth maps of the two reference images, reference image 1 and reference image 2, are extracted by the method based on sequence image matching and are denoted reference depth map 1 and reference depth map 2.
3. The virtual viewpoint image generation method based on depth maps according to claim 1 or 2, characterized in that in S1, smoothing the reference images with the half-pixel-accuracy interpolation method means smoothing the reference images by taking the value of each interpolated point as the average of its neighboring pixels.
4. The virtual viewpoint image generation method based on depth maps according to claim 3, characterized in that in S1, the camera intrinsic parameters are adjusted so that the interpolated images still satisfy the image mapping equation, by the following method:
Let W and H be the width and height of the reference image, f the camera focal length, (μ0, v0) the coordinates of the principal point in the pixel coordinate system, and s the skew parameter. The original camera intrinsic matrix is adjusted with multipliers k1 and k2, where k1 = (2W−1)/W and k2 = (2H−1)/H; the original camera intrinsics are adjusted to the new intrinsics according to formula (1), so that the interpolated image still satisfies the image mapping equation:
$$\begin{pmatrix} f/d_x & s & \mu_0 \\ 0 & f/d_y & v_0 \\ 0 & 0 & 1 \end{pmatrix} \rightarrow \begin{pmatrix} k_1 f/d_x & k_2 s & 2\mu_0 \\ 0 & k_2 f/d_y & 2v_0 \\ 0 & 0 & 1 \end{pmatrix} \qquad (1)$$
5. The virtual viewpoint image generation method based on depth maps according to claim 4, characterized in that in S1, the virtual viewpoint image and the virtual depth image are rendered as follows:
the 3D image mapping equation is set up, and from the two half-pixel-interpolated reference images and their corresponding depth maps, formula (2) yields the virtual viewpoint image and the virtual depth image;
a pixel m1(μ1, v1) on the half-pixel-interpolated reference image is first projected to its corresponding point M in three-dimensional space, then the point M is projected onto the imaging plane of the virtual viewpoint, giving the corresponding point coordinates m2(μ2, v2) on the virtual viewpoint image; merging the two projections yields the 3D image mapping equation shown in formula (2):
$$A_2 \begin{bmatrix} \mu_2 & v_2 & 1 \end{bmatrix}^T = N_2 R N_1^{-1} A_1 \begin{bmatrix} \mu_1 & v_1 & 1 \end{bmatrix}^T - N_2 R T_1 + N_2 T_2, \qquad R = R_2 R_1^{-1} \quad (2)$$
where N1, R1, T1 are the camera parameters of the reference viewpoint; N2, R2, T2 are the camera parameters of the virtual viewpoint; A1 is the depth value of the three-dimensional point in the reference-view camera coordinate system; and A2 is its depth value in the virtual-view camera coordinate system.
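The two-step projection of Eq. (2) can be sketched per pixel. This is an illustrative implementation under the stated conventions (N the intrinsic matrix, R/T rotation and translation of each view); the function name and homogeneous-coordinate handling are assumptions:

```python
import numpy as np

def warp_pixel(u1, v1, A1, N1, R1, T1, N2, R2, T2):
    """Map pixel (u1, v1) with reference-view depth A1 to the virtual view
    following Eq. (2); returns (u2, v2) and the virtual-view depth A2."""
    R = R2 @ np.linalg.inv(R1)                 # R = R2 * R1^-1
    m1 = np.array([u1, v1, 1.0])
    # right-hand side of Eq. (2): A2 * [u2, v2, 1]^T
    rhs = N2 @ R @ np.linalg.inv(N1) @ (A1 * m1) - N2 @ R @ T1 + N2 @ T2
    A2 = rhs[2]                                # third component is the depth
    u2, v2 = rhs[0] / A2, rhs[1] / A2          # dehomogenize
    return u2, v2, A2
```

With identical cameras and zero translation the mapping is the identity, which is a quick sanity check on the sign conventions.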
6. The depth-map-based virtual viewpoint image generation method according to claim 5, characterized in that in S1, the obtained virtual viewpoint image and virtual depth image are processed by reverse image mapping to reduce the number of holes in the virtual viewpoint image, as follows: initialize two flag matrices flag1 and flag2 of the same size as the two half-pixel-interpolated reference images, with all initial values set to 0; map the hole points on the two rendered virtual viewpoint images back onto their corresponding reference images to obtain the pixel values of the hole points, and at the same time set the values at the corresponding positions of these points in the flag matrices to 1.
7. The depth-map-based virtual viewpoint image generation method according to claim 5 or 6, characterized in that in S2, the hole boundary region is expanded based on depth information to remove false foreground contours remaining in the background, as follows: label the hole boundary region in the virtual view as boundary, with initial boundary value 1; compute the absolute difference between the depth values of the points on the left and right sides of the hole boundary region in the virtual view; if the absolute difference exceeds a set threshold, set the boundary value of the point with the smaller depth value to 0, otherwise keep the boundary value unchanged; then apply a 5×5 dilation to the hole boundary regions whose boundary value is 1, which eliminates the false foreground contours remaining in the background.
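One reading of claim 7 can be sketched as follows. This is an assumption-laden illustration, not the patent's implementation: the threshold value, the decision to clear the mark on the smaller-depth (foreground) side, and the dependency-free dilation are all illustrative choices.

```python
import numpy as np

def refine_hole_boundary(boundary, depth, thresh=10.0):
    """Prune hole-boundary marks whose left/right neighbor depths differ by
    more than thresh (clearing the smaller-depth side), then apply a 5x5
    dilation to the surviving marks, as claim 7 describes."""
    h, w = boundary.shape
    pruned = boundary.copy()
    for y in range(h):
        for x in range(1, w - 1):
            if boundary[y, x]:
                dl, dr = float(depth[y, x - 1]), float(depth[y, x + 1])
                if abs(dl - dr) > thresh:
                    # smaller depth = foreground side; clear its mark
                    side = x - 1 if dl < dr else x + 1
                    pruned[y, side] = 0
    # 5x5 dilation without external dependencies
    out = np.zeros_like(pruned)
    ys, xs = np.nonzero(pruned)
    for y, x in zip(ys, xs):
        out[max(0, y - 2):y + 3, max(0, x - 2):x + 3] = 1
    return out
```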
8. The depth-map-based virtual viewpoint image generation method according to claim 7, characterized in that in S2, luminance correction is applied to the rendered images according to a simplified model of the luminance difference between images, as follows:
Denote virtual viewpoint image 1 and virtual viewpoint image 2 as Il and Ir, the images obtained after the processing of S21 as Il1 and Ir1, and the number of non-hole pixel values as N. From the non-hole pixel values Il1(x, y) and Ir1(x, y) of the false-contour-removed images Il1 and Ir1, the multiplicative error factor A and the additive error factor B are computed as follows:
$$A = \frac{1}{N} \sum \frac{I_{l1}(x, y)}{I_{r1}(x, y)} \quad (3)$$
$$B = \frac{1}{N} \sum \left[ I_{l1}(x, y) - A\, I_{r1}(x, y) \right] \quad (4)$$
Correct the images Il1 and Ir1 using the parameters A and B; the corrected images are Il1′ and Ir1′, and the luminance correction expression is:
$$I_{l1}'(x, y) = A \cdot I_{l1}(x, y) + B, \qquad I_{r1}'(x, y) = A \cdot I_{r1}(x, y) + B \quad (5)$$
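Eqs. (3)–(5) can be sketched compactly. One assumption here: hole pixels are identified by a boolean mask (the claim only says "non-hole pixel values"), and the function returns A and B alongside the corrected images for inspection.

```python
import numpy as np

def luminance_correct(Il1, Ir1, hole_mask):
    """Estimate the multiplicative factor A (Eq. 3) and additive factor B
    (Eq. 4) over non-hole pixels, then apply Eq. (5) to both images."""
    valid = ~hole_mask
    N = valid.sum()
    A = np.sum(Il1[valid] / Ir1[valid]) / N        # Eq. (3)
    B = np.sum(Il1[valid] - A * Ir1[valid]) / N    # Eq. (4)
    return A * Il1 + B, A * Ir1 + B, A, B          # Eq. (5)
```

When the two images differ by a pure gain (Il1 = A·Ir1), B comes out zero and the correction aligns their brightness.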
9. The depth-map-based virtual viewpoint image generation method according to claim 7, characterized in that the method of S3 is:
First perform a classification judgment: when both Il1′(x, y) and Ir1′(x, y) are 0, the corresponding point Iv(x, y) in the virtual viewpoint image is 0; when exactly one of Il1′(x, y) and Ir1′(x, y) is 0, assign the non-zero pixel value to Iv(x, y); when neither Il1′(x, y) nor Ir1′(x, y) is 0, judge the projection mode by which each pixel was obtained, using the values of the previous flag matrices flag1 and flag2 as the criterion, with the threshold τ set to 5. If flag1(x, y) and flag2(x, y) are both 0 or both 1, i.e., the point in both virtual images was obtained by forward mapping or both by reverse mapping, obtain Iv(x, y) by weighted fusion according to Eq. (6). If the values of flag1(x, y) and flag2(x, y) differ, i.e., the same point was obtained by forward mapping in one image and reverse mapping in the other, compute the absolute difference |Il1′(x, y) − Ir1′(x, y)| and apply a threshold test: when |Il1′(x, y) − Ir1′(x, y)| ≤ τ, obtain Iv(x, y) by weighted fusion according to Eq. (6); when |Il1′(x, y) − Ir1′(x, y)| > τ, assign the forward-mapped value to Iv(x, y). Here α is the weighting factor and t denotes the translation of the camera shooting viewpoint position;
$$I_v(x, y) = \alpha I_{l1}'(x, y) + (1 - \alpha) I_{r1}'(x, y), \qquad \alpha = \frac{|t_v - t_r|}{|t_v - t_r| + |t_v - t_l|} \quad (6)$$
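The per-pixel decision rules of claim 9 and Eq. (6) can be sketched as a single function. This is an illustrative reading: flag value 0 is taken to mean "forward-mapped" (consistent with claim 6, where reverse-mapped points are set to 1), and the parameter names are assumptions.

```python
def fuse_pixel(pl, pr, fl, fr, t_l, t_r, t_v, tau=5.0):
    """Fuse corresponding pixels pl, pr of the two corrected virtual views
    (claim 9); fl, fr are the flag-matrix values, t_* the viewpoint
    translations, tau the consistency threshold."""
    if pl == 0 and pr == 0:
        return 0.0                           # both holes: stay empty
    if pl == 0 or pr == 0:
        return pl if pr == 0 else pr         # keep the non-zero value
    alpha = abs(t_v - t_r) / (abs(t_v - t_r) + abs(t_v - t_l))  # Eq. (6)
    blended = alpha * pl + (1 - alpha) * pr
    if fl == fr:                             # both forward- or both reverse-mapped
        return blended
    if abs(pl - pr) <= tau:                  # mixed mapping, consistent values
        return blended
    return pl if fl == 0 else pr             # keep the forward-mapped value
```

For a virtual viewpoint midway between the cameras, α = 0.5 and the consistent case reduces to a plain average.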
10. The depth-map-based virtual viewpoint image generation method according to claim 9, characterized in that the method of S4 is: perform hole filling on the virtual viewpoint image fused in step S3; scan the image pixel values to find the positions of the holes in the virtual viewpoint image and number them; scan the numbered hole pixels one by one with a 3×3 window function; if some of the points around a hole point belong to the hole and some do not, the point lies on the hole edge; interpolate these hole edge points, filling the hole points using the background information of the depth map, thereby improving the filling quality and generating a clear virtual viewpoint image.
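The edge-inward filling of claim 10 can be sketched as follows. Assumptions are flagged in the code: the claim says to use depth-map background information but does not specify how, so the "keep larger-depth neighbors" preference here is one illustrative choice, and the iteration scheme is for the sketch only.

```python
import numpy as np

def fill_holes(image, depth, hole_mask, max_iters=100):
    """Iteratively fill hole pixels from their 3x3 neighborhood, starting at
    hole-edge pixels and preferring background (larger-depth) neighbors.
    The background preference is an illustrative assumption."""
    img = image.astype(float).copy()
    holes = hole_mask.copy()
    h, w = img.shape
    for _ in range(max_iters):
        if not holes.any():
            break
        ys, xs = np.nonzero(holes)
        for y, x in zip(ys, xs):
            # gather non-hole neighbors inside the 3x3 window
            nb = [(yy, xx)
                  for yy in range(max(0, y - 1), min(h, y + 2))
                  for xx in range(max(0, x - 1), min(w, x + 2))
                  if not holes[yy, xx]]
            if nb:  # a hole-edge pixel: interpolate from its neighbors
                ds = np.array([depth[p] for p in nb])
                # keep neighbors at or beyond the median depth (background)
                keep = [p for p, d in zip(nb, ds) if d >= np.median(ds)]
                img[y, x] = np.mean([img[p] for p in keep])
                holes[y, x] = False
    return img
```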
CN201710034878.0A 2017-01-17 2017-01-17 Virtual visual point image generating method based on depth map Pending CN106791774A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710034878.0A CN106791774A (en) 2017-01-17 2017-01-17 Virtual visual point image generating method based on depth map


Publications (1)

Publication Number Publication Date
CN106791774A true CN106791774A (en) 2017-05-31

Family

ID=58946362

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710034878.0A Pending CN106791774A (en) 2017-01-17 2017-01-17 Virtual visual point image generating method based on depth map

Country Status (1)

Country Link
CN (1) CN106791774A (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3593466B2 (en) * 1999-01-21 2004-11-24 日本電信電話株式会社 Method and apparatus for generating virtual viewpoint image
US20060077255A1 (en) * 2004-08-10 2006-04-13 Hui Cheng Method and system for performing adaptive image acquisition
CN104661013A (en) * 2015-01-27 2015-05-27 宁波大学 Virtual view point drawing method based on spatial weighting

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
汪敬媛: "Research on Virtual Viewpoint Rendering Algorithms Based on Depth Images", China Masters' Theses Full-text Database *
高利杰: "Research on Virtual Viewpoint Generation Algorithms in Multi-view Stereoscopic Images Based on Depth Images", China Masters' Theses Full-text Database *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109714587A (en) * 2017-10-25 2019-05-03 杭州海康威视数字技术股份有限公司 A kind of multi-view image production method, device, electronic equipment and storage medium
CN109769109A (en) * 2019-03-05 2019-05-17 东北大学 Method and system based on virtual view synthesis drawing three-dimensional object
CN111667438B (en) * 2019-03-07 2023-05-26 阿里巴巴集团控股有限公司 Video reconstruction method, system, device and computer readable storage medium
CN111667438A (en) * 2019-03-07 2020-09-15 阿里巴巴集团控股有限公司 Video reconstruction method, system, device and computer readable storage medium
CN109982064B (en) * 2019-03-18 2021-04-27 影石创新科技股份有限公司 Naked eye 3D virtual viewpoint image generation method and portable terminal
CN109982064A (en) * 2019-03-18 2019-07-05 深圳岚锋创视网络科技有限公司 A kind of virtual visual point image generating method and portable terminal of naked eye 3D
WO2020187339A1 (en) * 2019-03-18 2020-09-24 影石创新科技股份有限公司 Naked eye 3d virtual viewpoint image generation method and portable terminal
CN110062220B (en) * 2019-04-10 2021-02-19 长春理工大学 Virtual viewpoint image generation method with maximized parallax level
CN110062220A (en) * 2019-04-10 2019-07-26 长春理工大学 The maximized virtual visual point image generating method of parallax level
CN112749610A (en) * 2020-07-27 2021-05-04 腾讯科技(深圳)有限公司 Depth image, reference structured light image generation method and device and electronic equipment
WO2022116397A1 (en) * 2020-12-04 2022-06-09 北京大学深圳研究生院 Virtual viewpoint depth map processing method, device, and apparatus, and storage medium
CN113450274A (en) * 2021-06-23 2021-09-28 山东大学 Self-adaptive viewpoint fusion method and system based on deep learning
CN113450274B (en) * 2021-06-23 2022-08-05 山东大学 Self-adaptive viewpoint fusion method and system based on deep learning
CN113936116A (en) * 2021-11-12 2022-01-14 合众新能源汽车有限公司 Complex space curved surface mapping method for transparent A column
CN113936116B (en) * 2021-11-12 2024-04-16 合众新能源汽车股份有限公司 Complex space curved surface mapping method for transparent A column

Similar Documents

Publication Publication Date Title
CN106791774A (en) Virtual visual point image generating method based on depth map
CN102075779B (en) Intermediate view synthesizing method based on block matching disparity estimation
CN100355272C (en) Synthesis method of virtual viewpoint in interactive multi-viewpoint video system
JP7062506B2 (en) Image processing equipment, image processing methods, and programs
CN105262958B (en) A kind of the panorama feature splicing system and its method of virtual view
CN111047709B (en) Binocular vision naked eye 3D image generation method
Li et al. Synthesizing light field from a single image with variable MPI and two network fusion.
CN104378619B (en) A kind of hole-filling algorithm rapidly and efficiently based on front and back's scape gradient transition
CN106060509B (en) Introduce the free view-point image combining method of color correction
CN101771893A (en) Video frequency sequence background modeling based virtual viewpoint rendering method
CN102892021B (en) New method for synthesizing virtual viewpoint image
CN104299220B (en) A kind of method that cavity in Kinect depth image carries out real-time filling
JP2003526829A (en) Image processing method and apparatus
CN104506872B (en) A kind of method and device of converting plane video into stereoscopic video
CN115298708A (en) Multi-view neural human body rendering
CN100337473C (en) Panorama composing method for motion video
CN104270624B (en) A kind of subregional 3D video mapping method
CN106028020B (en) A kind of virtual perspective image cavity complementing method based on multi-direction prediction
CN108259881A (en) 3D synthetic methods and its system based on parallax estimation
CN111612878A (en) Method and device for making static photo into three-dimensional effect video
CN105488760A (en) Virtual image stitching method based on flow field
KR101454780B1 (en) Apparatus and method for generating texture for three dimensional model
CN104778673B (en) A kind of improved gauss hybrid models depth image enhancement method
JP4214529B2 (en) Depth signal generation device, depth signal generation program, pseudo stereoscopic image generation device, and pseudo stereoscopic image generation program
CN102026012B (en) Generation method and device of depth map through three-dimensional conversion to planar video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170531