CN106791773B - Novel view synthesis method based on depth images

Info

Publication number: CN106791773B
Application number: CN201611251733.8A
Authority: CN (China)
Other versions: CN106791773A
Other languages: Chinese (zh)
Inventors: 冯远静, 黄良鹏, 李佳镜, 陈丰, 潘善伟, 杨勇, 胡键巧, 孔德平, 陈宏, 黄晨晨
Current and original assignee: Zhejiang University of Technology ZJUT
Priority and filing date: 2016-12-30
Publication of application CN106791773A: 2017-05-31
Application granted; publication of CN106791773B: 2018-06-01
Legal status: Active


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H04N 13/111 Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation

Abstract

A novel view synthesis method based on depth images: the texture and depth maps at the left and right reference viewpoints undergo a 3D warp; the edges of objects in the left and right reference depth maps are located, the edge pixels are warped to the new viewpoint, and the corresponding depth pixels there are erased; the resulting depth map is median-filtered, the filtered image is compared with the depth map produced by the 3D warp, and the changed pixels are marked; the marked pixels are back-projected onto the original reference viewpoint, and the values at those positions in the original reference texture map are assigned to the pixels at the same coordinates in the new-viewpoint image; the occluded regions of the new-viewpoint image are then interpolated; finally, the remaining holes are repaired to obtain the final new-viewpoint image. The invention effectively eliminates holes and ghosting in the new-viewpoint image, performs well in experiments, and the generated new-viewpoint image is comfortable for human viewing.

Description

Novel view synthesis method based on depth images
Technical field
The present invention relates to the fields of image processing, numerical analysis, three-dimensional reconstruction, and computer science, and in particular to a depth-map-based rendering method for synthesizing virtual images at new viewpoints from a binocular camera.
Background technology
Novel-viewpoint virtual image synthesis for a binocular camera is a technique that reconstructs the image at a new viewpoint from existing viewpoint images and the camera's intrinsic and extrinsic calibration parameters. Its main approach is to project and re-project the corresponding texture maps, guided by the calibration parameters and the depth information of the existing viewpoint images, so as to construct the image at the new viewpoint. Many problems can arise when constructing the new viewpoint: the synthesized image may exhibit cracks, holes, ghosting, occluded regions, incomplete objects, and similar artifacts. These problems degrade the quality of the new-viewpoint image and strongly affect the user's interactive experience.
Content of the invention
In order to overcome the various quality-degrading problems that arise in existing novel-view synthesis, whose results are poor and fail to satisfy human viewing, the present invention proposes a novel view synthesis method based on depth images that eliminates these quality problems, performs well in experiments, and effectively satisfies human viewing.
The technical solution adopted in the present invention is as follows:
A novel view synthesis method based on depth images, comprising the following steps:
(1) Perform a 3D warp on the texture and depth maps at the left and right reference viewpoints. The warp proceeds as follows:
1.1) Project each pixel of the image storage coordinate system, combined with the corresponding depth information, into the world coordinate system:

$$P_{Wi} = (X_{Wi}, Y_{Wi}, Z_{Wi})^T = (K_i R_i)^{-1}(\lambda_i P_i + K_i R_i C_i), \quad (i = l, r)$$

where $P_{Wi} = \{P_{Wi} = (X_{Wi}, Y_{Wi}, Z_{Wi})^T \mid i = l, r\}$ is the three-dimensional world coordinate of the pixel at the current position of the reference-view image, $l$ and $r$ denote left and right, $X_{Wi}$ and $Y_{Wi}$ are the horizontal and vertical coordinates in the world coordinate system, and $Z_{Wi}$ is the depth, computed from $d_i$, the gray value of the depth map at the current position, with $MinZ_i$ and $MaxZ_i$ the minimum and maximum gray values in the depth image corresponding to the current frame; $K_i = \begin{pmatrix} f_{xi} & 0 & u_{0i} \\ 0 & f_{yi} & v_{0i} \\ 0 & 0 & 1 \end{pmatrix}$ denotes the camera intrinsic matrix, where $f_{xi}, f_{yi} \in \mathbb{R}$ $(i = l, r)$ are the focal lengths of the left and right cameras in the $x$ and $y$ directions and $(u_{0i}, v_{0i})$ is the principal point of the left or right camera in the image storage coordinate system; $R_i \in \mathbb{R}^{3 \times 3}$ $(i = l, r)$ is the camera rotation matrix, $\mathbb{R}^{3 \times 3}$ being the space of real $3 \times 3$ matrices; $\lambda_i = F_{i31}X_{Wi} + F_{i32}Y_{Wi} + F_{i33}Z_{Wi} + F_{i34}$ $(i = l, r)$ is the camera's homogeneous scale factor, $F_i$ being the projection matrix; $T_i \in \mathbb{R}^{3 \times 1}$ $(i = l, r)$ is the camera translation matrix; $P_i = (u_i, v_i, 1)^T$ $(i = l, r)$ is the homogeneous image storage coordinate of the reference view, with $u_i, v_i \in \mathbb{R}$ the horizontal and vertical coordinates of the current pixel in the image storage coordinate system; and $C_i$ is the camera center coordinate;
The corresponding depth map is processed in the same way:

$$D_{Wi} = (X_{Wi}, Y_{Wi}, Z_{Wi})^T = (K_i R_i)^{-1}(\lambda_i D_i + K_i R_i C_i), \quad (i = l, r)$$

where $D_{Wi} = \{D_{Wi} = (X_{Wi}, Y_{Wi}, Z_{Wi})^T \mid i = l, r\}$ is the three-dimensional world coordinate of the pixel at the current position of the reference-view depth map, and $D_i = (u_{0i}, v_{0i}, 1)^T$ $(i = l, r)$ is the homogeneous image storage coordinate of the reference-view depth map;
1.2) Re-project each point obtained from the first projection, combining the intrinsic and extrinsic parameters of the virtual camera at the target new viewpoint with the depth information, into the image plane coordinate system of the new viewpoint:

$$P_{Newi} = (K_{Newi} R_{Newi} P_{Wi} - K_{Newi} R_{Newi} C_{Newi}) / \lambda_{Newi}, \quad (i = l, r)$$

where $P_{Newi} = \{P_{Newi} = (u_{Newi}, v_{Newi}, 1) \mid i = l, r\}$ is the coordinate at the new viewpoint of the point corresponding to $P_i$, $u_{Newi}$ and $v_{Newi}$ are the horizontal and vertical coordinates of the current pixel in the image storage coordinate system, and $K_{Newi}$, $R_{Newi}$, $C_{Newi}$, $\lambda_{Newi}$ denote the intrinsic matrix, rotation matrix, center coordinate, and homogeneous scale factor of the virtual camera at the new viewpoint;
The corresponding new-viewpoint depth map is expressed as:

$$D_{Newi} = (K_{Newi} R_{Newi} D_{Wi} - K_{Newi} R_{Newi} C_{Newi}) / \lambda_{Newi}, \quad (i = l, r)$$

where $D_{Newi} = \{D_{Newi} = (u_{Newi}, v_{Newi}, 1) \mid i = l, r\}$ is the coordinate at the new viewpoint of the point corresponding to $D_i$;
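To make the two projections concrete, the following is a minimal NumPy sketch of step (1); the function name, the dense depth input, and the use of each pixel's depth value as the homogeneous scale factor $\lambda$ are simplifying assumptions of this sketch, not details fixed by the patent:

```python
import numpy as np

def warp_to_new_view(depth, K_ref, R_ref, C_ref, K_new, R_new, C_new):
    """Back-project every reference pixel to world coordinates (step 1.1),
    then re-project into the virtual new-viewpoint image plane (step 1.2)."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([us, vs, np.ones_like(us)], axis=-1).reshape(-1, 3).T  # 3 x N

    # Step 1.1: P_W = (K R)^-1 (lambda * P + K R C); lambda taken as the depth here.
    KR = K_ref @ R_ref
    lam = depth.reshape(-1)
    P_w = np.linalg.inv(KR) @ (lam * pix + (KR @ C_ref).reshape(3, 1))

    # Step 1.2: P_new = (K_new R_new P_W - K_new R_new C_new) / lambda_new.
    KRn = K_new @ R_new
    p_new = KRn @ P_w - (KRn @ C_new).reshape(3, 1)
    p_new = p_new[:2] / p_new[2]          # divide out the homogeneous scale factor
    return p_new.T.reshape(h, w, 2)       # (u_new, v_new) for every source pixel
```

The same routine applies unchanged to the depth map itself, yielding the new-viewpoint depth map used in the later steps.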
(2) Search for the edges of objects in the left and right reference-view depth maps, warp the edge pixels to the new viewpoint using the method of (1), and then erase the corresponding depth pixels at the new viewpoint:

$$D_{New\_edgei} = 0, \quad (i = l, r)$$

where $D_{New\_edgei}$ denotes the object-edge pixels after the 3D warp;
(3) Apply a median filter with a 3×3 template to the resulting depth map to remove the cracks and small holes in the depth image, then compare the filtered image with the depth map obtained from the 3D warp and mark the pixels that changed:

$$I_{Newi} = S(M(D_{New\_Imgi}), D_{New\_Imgi}), \quad (i = l, r)$$

where $I_{Newi}$ are the marked pixels, $M$ is the 3×3 median filtering function, $D_{New\_Imgi}$ is the new-viewpoint depth map obtained after the processing of steps (1) and (2), and $S$ is the comparison-and-marking function, which compares the two images and marks the changed pixels;
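In OpenCV terms, step (3) is a medianBlur followed by a pixel-wise comparison; a minimal sketch, assuming an 8-bit depth map (the function and variable names are illustrative):

```python
import cv2
import numpy as np

def mark_changed_pixels(depth_new):
    """Apply the 3x3 median filter M and the comparison function S:
    return the filtered depth map and the coordinates of changed pixels."""
    filtered = cv2.medianBlur(depth_new, 3)        # M: 3x3 median filter
    marks = np.argwhere(filtered != depth_new)     # S: (v, u) of changed pixels
    return filtered, marks
```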
(4) Repair the cracks produced by the 3D warp, as follows:
4.1) Back-project the marked pixels onto the original reference viewpoint:

$$P_{INewi} = W(I_{Newi}), \quad (i = l, r)$$

where $P_{INewi}$ denotes the crack pixels of the new-viewpoint image obtained by back-projection, and $W$ denotes the 3D warp described in step (1);
4.2) Assign the pixel values from the original reference texture map to the pixels in the new-viewpoint image whose coordinates match the marked pixels;
(5) Using the new-viewpoint image obtained after step (4), interpolate the holes in the occluded regions of the image:

$$P_{In\_Img} = IN(P_{New\_Imgl}, P_{New\_Imgr})$$

where $P_{In\_Img}$ is the interpolated new-viewpoint texture map, $P_{New\_Imgl}$ denotes the new-viewpoint texture map obtained from the left reference image after all of the processing of steps (1)-(4), $P_{New\_Imgr}$ denotes the new-viewpoint texture map at the same viewpoint obtained from the right reference image after all of the processing of steps (1)-(4), and $IN$ is the interpolation function;
(6) Call the inpaint method proposed by Telea from the OpenCV library to repair the remaining holes and obtain the final new-viewpoint image:

$$P_{New\_Img} = inpaint(P_{In\_Img})$$

where inpaint is the OpenCV library function and $P_{New\_Img}$ is the final new-viewpoint texture image.
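This final step corresponds directly to OpenCV's cv2.inpaint with the INPAINT_TELEA flag; a minimal sketch, where the hole mask built from zero-depth pixels and the inpaint radius are assumptions of this sketch:

```python
import cv2
import numpy as np

def fill_remaining_holes(texture, depth_new, radius=3):
    """Repair the holes that survive interpolation with Telea inpainting."""
    mask = (depth_new == 0).astype(np.uint8) * 255  # assumed: holes have zero depth
    return cv2.inpaint(texture, mask, radius, cv2.INPAINT_TELEA)
```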
Further, in step (2), the object-edge pixels are found with the following test:

$$\forall u, v \in P_{Img}: \quad \sum_{i=-1}^{1}\sum_{j=-1}^{1} D(u+i, v+j) - 9 \times D(u, v) > T_d$$

where $P_{Img}$ denotes the reference image, $D$ denotes the depth map of the reference view, and $T_d$ is a user-defined threshold, experimentally taken as 1/4 of the maximum depth value; when a pixel satisfies the inequality, the point is a ghost edge.
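A vectorized NumPy sketch of this ghost-edge test; the wrap-around handling of image borders via np.roll is a simplification of this sketch:

```python
import numpy as np

def ghost_edges(D, T_d):
    """Flag pixels whose 3x3 neighborhood sum minus 9x the center value
    exceeds T_d, i.e. the ghost-edge inequality above."""
    D32 = D.astype(np.int32)
    nbr_sum = sum(np.roll(np.roll(D32, i, axis=0), j, axis=1)
                  for i in (-1, 0, 1) for j in (-1, 0, 1))
    return (nbr_sum - 9 * D32) > T_d      # boolean mask of ghost-edge pixels
```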
Further, the comparison-and-marking function $S$ of step (3) comprises the following steps:
3.1) compare the gray value of each pixel in the depth maps before and after median filtering;
3.2) record the coordinates of the pixels whose gray values differ.
Further, in step 4.2), the cracks are repaired as follows:
Using the marked point coordinates obtained in 4.1), take the pixel value at the marked position in the original reference texture map and assign it to the corresponding marked pixel in the new-viewpoint image; the pixels at the remaining positions are left unchanged:

$$P_{New\_Imgi} = \begin{cases} P_{Imgi}(P_{INewi}), & mark = 1 \\ P_{ONew\_Imgi}, & mark = 0 \end{cases} \quad (i = l, r)$$

where $P_{New\_Imgi}$ is the new-viewpoint texture map after crack repair, $P_{Imgi}$ is the reference texture map, $P_{ONew\_Imgi}$ is the new-viewpoint texture map obtained after the processing of steps (1) and (2), and $mark$ indicates whether the current pixel is marked; $mark = 1$ means the current point is a marked point.
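A minimal sketch of this repair rule; back_project stands in for the inverse warp $W$ of step (1), and all names here are illustrative assumptions:

```python
import numpy as np

def repair_cracks(new_texture, ref_texture, marks, back_project):
    """Copy reference-texture values into the marked crack pixels (mark = 1);
    all other pixels keep their current value (mark = 0)."""
    repaired = new_texture.copy()
    for v, u in marks:                     # coordinates marked in step (3)
        ru, rv = back_project(u, v)        # P_INewi = W(I_Newi)
        repaired[v, u] = ref_texture[int(rv), int(ru)]
    return repaired
```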
Further, in step (5), the interpolation function $IN$ is expressed as:

$$P_{In\_Img}(u,v) = \begin{cases} (1-\alpha)\,P_{New\_Imgl}(u_1,v_1) + \alpha\,P_{New\_Imgr}(u_2,v_2), & O_l(u,v)=0,\; O_r(u,v)=0 \\ P_{New\_Imgl}(u_1,v_1), & O_l(u,v)=0,\; O_r(u,v)=1 \\ P_{New\_Imgr}(u_2,v_2), & O_l(u,v)=1,\; O_r(u,v)=0 \\ 0, & O_l(u,v)=1,\; O_r(u,v)=1 \end{cases}$$

where $\alpha$ is a proportion parameter determined by $T_{New\_t}$, $T_{l\_t}$, and $T_{r\_t}$, the translation vectors in the camera coordinate system of the new viewpoint, the left reference viewpoint, and the right reference viewpoint, respectively, and $O_i = \{O_i(u,v) \mid i = l, r\}$ indicates whether the image has a hole at $(u,v)$, taking the value 1 when a hole is present:

$$O_l(u,v) = \begin{cases} 1, & Z_l(u,v) < T_h \\ 0, & Z_l(u,v) > T_h \end{cases} \qquad O_r(u,v) = \begin{cases} 1, & Z_r(u,v) < T_h \\ 0, & Z_r(u,v) > T_h \end{cases}$$

where $Z_i = \{Z_i(u,v) \mid i = l, r\}$ is the depth value of the new viewpoint at $(u,v)$ and $T_h$ is a threshold.
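A minimal NumPy sketch of this four-case fusion; alpha is passed in precomputed from the camera translations, since its exact formula is not reproduced in the text, and the names are illustrative:

```python
import numpy as np

def blend_views(img_l, img_r, Z_l, Z_r, alpha, T_h):
    """IN: weighted fusion of the left- and right-warped textures, with
    O_l/O_r hole indicators derived from the new-viewpoint depth maps."""
    O_l = (Z_l < T_h)[..., None]           # 1 where the left warp left a hole
    O_r = (Z_r < T_h)[..., None]
    out = (1 - alpha) * img_l + alpha * img_r       # both views valid
    out = np.where(~O_l & O_r, img_l, out)          # only the left view valid
    out = np.where(O_l & ~O_r, img_r, out)          # only the right view valid
    out = np.where(O_l & O_r, 0, out)               # still a hole: left for inpaint
    return out.astype(img_l.dtype)
```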
Beneficial effects of the invention: a 3D warping method is used to eliminate ghost contours; a back-projection method effectively eliminates the cracks in the new-viewpoint depth map; left-right viewpoint interpolation fills the occluded regions; and the inpaint method of the OpenCV library effectively fills the remaining holes. The experimental results are good, and the generated new-viewpoint image is comfortable for human viewing.
Specific embodiment
The present invention will be further described below.
A novel view synthesis method based on depth images, comprising the following steps:
(1) Perform a 3D warp on the texture and depth maps at the left and right reference viewpoints. The warp proceeds as follows:
1.1) Project each pixel of the image storage coordinate system, combined with the corresponding depth information, into the world coordinate system:

$$P_{Wi} = (X_{Wi}, Y_{Wi}, Z_{Wi})^T = (K_i R_i)^{-1}(\lambda_i P_i + K_i R_i C_i), \quad (i = l, r)$$

where $P_{Wi} = \{P_{Wi} = (X_{Wi}, Y_{Wi}, Z_{Wi})^T \mid i = l, r\}$ is the three-dimensional world coordinate of the pixel at the current position of the reference-view image, $l$ and $r$ denote left and right, $X_{Wi}$ and $Y_{Wi}$ are the horizontal and vertical coordinates in the world coordinate system, and $Z_{Wi}$ is the depth, computed from $d_i$, the gray value of the depth map at the current position, with $MinZ_i$ and $MaxZ_i$ the minimum and maximum gray values in the depth image corresponding to the current frame; $K_i = \begin{pmatrix} f_{xi} & 0 & u_{0i} \\ 0 & f_{yi} & v_{0i} \\ 0 & 0 & 1 \end{pmatrix}$ denotes the camera intrinsic matrix, where $f_{xi}, f_{yi} \in \mathbb{R}$ $(i = l, r)$ are the focal lengths of the left and right cameras in the $x$ and $y$ directions and $(u_{0i}, v_{0i})$ is the principal point of the left or right camera in the image storage coordinate system; $R_i \in \mathbb{R}^{3 \times 3}$ $(i = l, r)$ is the camera rotation matrix, $\mathbb{R}^{3 \times 3}$ being the space of real $3 \times 3$ matrices; $\lambda_i = F_{i31}X_{Wi} + F_{i32}Y_{Wi} + F_{i33}Z_{Wi} + F_{i34}$ $(i = l, r)$ is the camera's homogeneous scale factor, $F_i$ being the projection matrix; $T_i \in \mathbb{R}^{3 \times 1}$ $(i = l, r)$ is the camera translation matrix; $P_i = (u_i, v_i, 1)^T$ $(i = l, r)$ is the homogeneous image storage coordinate of the reference view, with $u_i, v_i \in \mathbb{R}$ the horizontal and vertical coordinates of the current pixel in the image storage coordinate system; and $C_i$ is the camera center coordinate;
The corresponding depth map is processed in the same way:

$$D_{Wi} = (X_{Wi}, Y_{Wi}, Z_{Wi})^T = (K_i R_i)^{-1}(\lambda_i D_i + K_i R_i C_i), \quad (i = l, r)$$

where $D_{Wi} = \{D_{Wi} = (X_{Wi}, Y_{Wi}, Z_{Wi})^T \mid i = l, r\}$ is the three-dimensional world coordinate of the pixel at the current position of the reference-view depth map, and $D_i = (u_{0i}, v_{0i}, 1)^T$ $(i = l, r)$ is the homogeneous image storage coordinate of the reference-view depth map;
1.2) Re-project each point obtained from the first projection, combining the intrinsic and extrinsic parameters of the virtual camera at the target new viewpoint with the depth information, into the image plane coordinate system of the new viewpoint:

$$P_{Newi} = (K_{Newi} R_{Newi} P_{Wi} - K_{Newi} R_{Newi} C_{Newi}) / \lambda_{Newi}, \quad (i = l, r)$$

where $P_{Newi} = \{P_{Newi} = (u_{Newi}, v_{Newi}, 1) \mid i = l, r\}$ is the coordinate at the new viewpoint of the point corresponding to $P_i$, $u_{Newi}$ and $v_{Newi}$ are the horizontal and vertical coordinates of the current pixel in the image storage coordinate system, and $K_{Newi}$, $R_{Newi}$, $C_{Newi}$, $\lambda_{Newi}$ denote the intrinsic matrix, rotation matrix, center coordinate, and homogeneous scale factor of the virtual camera at the new viewpoint;
The corresponding new-viewpoint depth map is expressed as:

$$D_{Newi} = (K_{Newi} R_{Newi} D_{Wi} - K_{Newi} R_{Newi} C_{Newi}) / \lambda_{Newi}, \quad (i = l, r)$$

where $D_{Newi} = \{D_{Newi} = (u_{Newi}, v_{Newi}, 1) \mid i = l, r\}$ is the coordinate at the new viewpoint of the point corresponding to $D_i$;
(2) Search for the edges of objects in the left and right reference-view depth maps, warp the edge pixels to the new viewpoint using the method of (1), and then erase the corresponding depth pixels at the new viewpoint:

$$D_{New\_edgei} = 0, \quad (i = l, r)$$

where $D_{New\_edgei}$ denotes the object-edge pixels after the 3D warp;
The object-edge pixels are found with the following test:

$$\forall u, v \in P_{Img}: \quad \sum_{i=-1}^{1}\sum_{j=-1}^{1} D(u+i, v+j) - 9 \times D(u, v) > T_d$$

where $P_{Img}$ denotes the reference image, $D$ denotes the depth map of the reference view, and $T_d$ is a user-defined threshold, experimentally taken as 1/4 of the maximum depth value; when a pixel satisfies the inequality, the point is a ghost edge;
(3) Apply a median filter with a 3×3 template to the resulting depth map to remove the cracks and small holes in the depth image, then compare the filtered image with the depth map obtained from the 3D warp and mark the pixels that changed:

$$I_{Newi} = S(M(D_{New\_Imgi}), D_{New\_Imgi}), \quad (i = l, r)$$

where $I_{Newi}$ are the marked pixels, $M$ is the 3×3 median filtering function, $D_{New\_Imgi}$ is the new-viewpoint depth map obtained after the processing of steps (1) and (2), and $S$ is the comparison-and-marking function, which compares the two images and marks the changed pixels;
The comparison-and-marking function $S$ proceeds as follows:
3.1) compare the gray value of each pixel in the depth maps before and after median filtering;
3.2) record the coordinates of the pixels whose gray values differ;
(4) Repair the cracks produced by the 3D warp, as follows:
4.1) Back-project the marked pixels onto the original reference viewpoint:

$$P_{INewi} = W(I_{Newi}), \quad (i = l, r)$$

where $P_{INewi}$ denotes the crack pixels of the new-viewpoint image obtained by back-projection, and $W$ denotes the 3D warp described in step (1);
4.2) Assign the pixel values from the original reference texture map to the pixels in the new-viewpoint image whose coordinates match the marked pixels;
The cracks are repaired as follows: using the marked point coordinates obtained in 4.1), take the pixel value at the marked position in the original reference texture map and assign it to the corresponding marked pixel in the new-viewpoint image; the pixels at the remaining positions are left unchanged:

$$P_{New\_Imgi} = \begin{cases} P_{Imgi}(P_{INewi}), & mark = 1 \\ P_{ONew\_Imgi}, & mark = 0 \end{cases} \quad (i = l, r)$$

where $P_{New\_Imgi}$ is the new-viewpoint texture map after crack repair, $P_{Imgi}$ is the reference texture map, $P_{ONew\_Imgi}$ is the new-viewpoint texture map obtained after the processing of steps (1) and (2), and $mark$ indicates whether the current pixel is marked; $mark = 1$ means the current point is a marked point;
(5) Using the new-viewpoint image obtained after step (4), interpolate the holes in the occluded regions of the image:

$$P_{In\_Img} = IN(P_{New\_Imgl}, P_{New\_Imgr})$$

where $P_{In\_Img}$ is the interpolated new-viewpoint texture map, $P_{New\_Imgl}$ denotes the new-viewpoint texture map obtained from the left reference image after all of the processing of steps (1)-(4), $P_{New\_Imgr}$ denotes the new-viewpoint texture map at the same viewpoint obtained from the right reference image after all of the processing of steps (1)-(4), and $IN$ is the interpolation function;
The interpolation function $IN$ is expressed as:

$$P_{In\_Img}(u,v) = \begin{cases} (1-\alpha)\,P_{New\_Imgl}(u_1,v_1) + \alpha\,P_{New\_Imgr}(u_2,v_2), & O_l(u,v)=0,\; O_r(u,v)=0 \\ P_{New\_Imgl}(u_1,v_1), & O_l(u,v)=0,\; O_r(u,v)=1 \\ P_{New\_Imgr}(u_2,v_2), & O_l(u,v)=1,\; O_r(u,v)=0 \\ 0, & O_l(u,v)=1,\; O_r(u,v)=1 \end{cases}$$

where $\alpha$ is a proportion parameter determined by $T_{New\_t}$, $T_{l\_t}$, and $T_{r\_t}$, the translation vectors in the camera coordinate system of the new viewpoint, the left reference viewpoint, and the right reference viewpoint, respectively, and $O_i = \{O_i(u,v) \mid i = l, r\}$ indicates whether the image has a hole at $(u,v)$, taking the value 1 when a hole is present:

$$O_l(u,v) = \begin{cases} 1, & Z_l(u,v) < T_h \\ 0, & Z_l(u,v) > T_h \end{cases} \qquad O_r(u,v) = \begin{cases} 1, & Z_r(u,v) < T_h \\ 0, & Z_r(u,v) > T_h \end{cases}$$

where $Z_i = \{Z_i(u,v) \mid i = l, r\}$ is the depth value of the new viewpoint at $(u,v)$ and $T_h$ is a threshold;
(6) Call the inpaint method proposed by Telea from the OpenCV library to repair the remaining holes and obtain the final new-viewpoint image:

$$P_{New\_Img} = inpaint(P_{In\_Img})$$

where inpaint is the OpenCV library function and $P_{New\_Img}$ is the final new-viewpoint texture image.

Claims (5)

  1. A novel view synthesis method based on depth images, characterized in that the view synthesis method comprises the following steps:
    (1) Perform a 3D warp on the texture and depth maps at the left and right reference viewpoints. The warp proceeds as follows:
    1.1) Project each pixel of the image storage coordinate system, combined with the corresponding depth information, into the world coordinate system:

    $$P_{Wi} = (X_{Wi}, Y_{Wi}, Z_{Wi})^T = (K_i R_i)^{-1}(\lambda_i P_i + K_i R_i C_i), \quad (i = l, r)$$

    where $P_{Wi} = \{P_{Wi} = (X_{Wi}, Y_{Wi}, Z_{Wi})^T \mid i = l, r\}$ is the three-dimensional world coordinate of the pixel at the current position of the reference-view image, $l$ and $r$ denote left and right, $X_{Wi}$ and $Y_{Wi}$ are the horizontal and vertical coordinates in the world coordinate system, and $Z_{Wi}$ is the depth, computed from $d_i$, the gray value of the depth map at the current position, with $MinZ_i$ and $MaxZ_i$ the minimum and maximum gray values in the depth image corresponding to the current frame; $K_i = \begin{pmatrix} f_{xi} & 0 & u_{0i} \\ 0 & f_{yi} & v_{0i} \\ 0 & 0 & 1 \end{pmatrix}$ denotes the camera intrinsic matrix, where $f_{xi}, f_{yi} \in \mathbb{R}$ $(i = l, r)$ are the focal lengths of the left and right cameras in the $x$ and $y$ directions and $(u_{0i}, v_{0i})$ is the principal point of the left or right camera in the image storage coordinate system; $R_i \in \mathbb{R}^{3 \times 3}$ $(i = l, r)$ is the camera rotation matrix, $\mathbb{R}^{3 \times 3}$ being the space of real $3 \times 3$ matrices; $\lambda_i = F_{i31}X_{Wi} + F_{i32}Y_{Wi} + F_{i33}Z_{Wi} + F_{i34}$ $(i = l, r)$ is the camera's homogeneous scale factor, $F_i$ being the projection matrix; $T_i \in \mathbb{R}^{3 \times 1}$ $(i = l, r)$ is the camera translation matrix; $P_i = (u_i, v_i, 1)^T$ $(i = l, r)$ is the homogeneous image storage coordinate of the reference view, with $u_i, v_i \in \mathbb{R}$ the horizontal and vertical coordinates of the current pixel in the image storage coordinate system; and $C_i$ is the camera center coordinate;
    The corresponding depth map is processed in the same way:

    $$D_{Wi} = (X_{Wi}, Y_{Wi}, Z_{Wi})^T = (K_i R_i)^{-1}(\lambda_i D_i + K_i R_i C_i), \quad (i = l, r)$$

    where $D_{Wi} = \{D_{Wi} = (X_{Wi}, Y_{Wi}, Z_{Wi})^T \mid i = l, r\}$ is the three-dimensional world coordinate of the pixel at the current position of the reference-view depth map, and $D_i = (u_{0i}, v_{0i}, 1)^T$ $(i = l, r)$ is the homogeneous image storage coordinate of the reference-view depth map;
    1.2) Re-project each point obtained from the first projection, combining the intrinsic and extrinsic parameters of the virtual camera at the target new viewpoint with the depth information, into the image plane coordinate system of the new viewpoint:

    $$P_{Newi} = (K_{Newi} R_{Newi} P_{Wi} - K_{Newi} R_{Newi} C_{Newi}) / \lambda_{Newi}, \quad (i = l, r)$$

    where $P_{Newi} = \{P_{Newi} = (u_{Newi}, v_{Newi}, 1) \mid i = l, r\}$ is the coordinate at the new viewpoint of the point corresponding to $P_i$, $u_{Newi}$ and $v_{Newi}$ are the horizontal and vertical coordinates of the current pixel in the image storage coordinate system, and $K_{Newi}$, $R_{Newi}$, $C_{Newi}$, $\lambda_{Newi}$ denote the intrinsic matrix, rotation matrix, center coordinate, and homogeneous scale factor of the virtual camera at the new viewpoint;
    The corresponding new-viewpoint depth map is expressed as:

    $$D_{Newi} = (K_{Newi} R_{Newi} D_{Wi} - K_{Newi} R_{Newi} C_{Newi}) / \lambda_{Newi}, \quad (i = l, r)$$

    where $D_{Newi} = \{D_{Newi} = (u_{Newi}, v_{Newi}, 1) \mid i = l, r\}$ is the coordinate at the new viewpoint of the point corresponding to $D_i$;
    (2) Search for the edges of objects in the left and right reference-view depth maps, warp the edge pixels to the new viewpoint using the method of (1), and then erase the corresponding depth pixels at the new viewpoint:

    $$D_{New\_edgei} = 0, \quad (i = l, r)$$

    where $D_{New\_edgei}$ denotes the object-edge pixels after the 3D warp;
    (3) Apply a median filter with a 3×3 template to the resulting depth map to remove the cracks and small holes in the depth image, then compare the filtered image with the depth map obtained from the 3D warp and mark the pixels that changed:

    $$I_{Newi} = S(M(D_{New\_Imgi}), D_{New\_Imgi}), \quad (i = l, r)$$

    where $I_{Newi}$ are the marked pixels, $M$ is the 3×3 median filtering function, $D_{New\_Imgi}$ is the new-viewpoint depth map obtained after the processing of steps (1) and (2), and $S$ is the comparison-and-marking function, which compares the two images and marks the changed pixels;
    (4) Repair the cracks produced by the 3D warp, as follows:
    4.1) Back-project the marked pixels onto the original reference viewpoint:

    $$P_{INewi} = W(I_{Newi}), \quad (i = l, r)$$

    where $P_{INewi}$ denotes the crack pixels of the new-viewpoint image obtained by back-projection, and $W$ denotes the 3D warp described in step (1);
    4.2) Assign the pixel values from the original reference texture map to the pixels in the new-viewpoint image whose coordinates match the marked pixels;
    (5) Using the new-viewpoint image obtained after step (4), interpolate the holes in the occluded regions of the image:

    $$P_{In\_Img} = IN(P_{New\_Imgl}, P_{New\_Imgr})$$

    where $P_{In\_Img}$ is the interpolated new-viewpoint texture map, $P_{New\_Imgl}$ denotes the new-viewpoint texture map obtained from the left reference image after all of the processing of steps (1)-(4), $P_{New\_Imgr}$ denotes the new-viewpoint texture map at the same viewpoint obtained from the right reference image after all of the processing of steps (1)-(4), and $IN$ is the interpolation function;
    (6) Call the inpaint method proposed by Telea from the OpenCV library to repair the remaining holes and obtain the final new-viewpoint image:

    $$P_{New\_Img} = inpaint(P_{In\_Img})$$

    where inpaint is the OpenCV library function and $P_{New\_Img}$ is the final new-viewpoint texture image.
  2. The novel view synthesis method based on depth images of claim 1, characterized in that in step (2) the object-edge pixels are found as follows:

    $$\forall u, v \in P_{Img}: \quad \sum_{i=-1}^{1}\sum_{j=-1}^{1} D(u+i, v+j) - 9 \times D(u, v) > T_d$$

    where $P_{Img}$ denotes the reference image, $D$ denotes the depth map of the reference view, and $T_d$ is a user-defined threshold; when a pixel satisfies the inequality, the point is a ghost edge.
  3. The novel view synthesis method based on depth images of claim 1 or 2, characterized in that in step (3) the comparison-and-marking function $S$ comprises the following steps:
    3.1) compare the gray value of each pixel in the depth maps before and after median filtering;
    3.2) record the coordinates of the pixels whose gray values differ.
  4. The novel view synthesis method based on depth images of claim 1 or 2, characterized in that in step 4.2) the cracks are repaired as follows:
    Using the marked point coordinates obtained in 4.1), take the pixel value at the marked position in the original reference texture map and assign it to the corresponding marked pixel in the new-viewpoint image; the pixels at the remaining positions are left unchanged:

    $$P_{New\_Imgi} = \begin{cases} P_{Imgi}(P_{INewi}), & mark = 1 \\ P_{ONew\_Imgi}, & mark = 0 \end{cases} \quad (i = l, r)$$

    where $P_{New\_Imgi}$ is the new-viewpoint texture map after crack repair, $P_{Imgi}$ is the reference texture map, $P_{ONew\_Imgi}$ is the new-viewpoint texture map obtained after the processing of steps (1) and (2), and $mark$ indicates whether the current pixel is marked; $mark = 1$ means the current point is a marked point.
  5. The novel view synthesis method based on depth images of claim 1 or 2, characterized in that in step (5) the interpolation function $IN$ is expressed as:

    $$P_{In\_Img}(u,v) = \begin{cases} (1-\alpha)\,P_{New\_Imgl}(u_1,v_1) + \alpha\,P_{New\_Imgr}(u_2,v_2), & O_l(u,v)=0,\; O_r(u,v)=0 \\ P_{New\_Imgl}(u_1,v_1), & O_l(u,v)=0,\; O_r(u,v)=1 \\ P_{New\_Imgr}(u_2,v_2), & O_l(u,v)=1,\; O_r(u,v)=0 \\ 0, & O_l(u,v)=1,\; O_r(u,v)=1 \end{cases}$$

    where $\alpha$ is a proportion parameter determined by $T_{New\_t}$, $T_{l\_t}$, and $T_{r\_t}$, the translation vectors in the camera coordinate system of the new viewpoint, the left reference viewpoint, and the right reference viewpoint, respectively, and $O_i = \{O_i(u,v) \mid i = l, r\}$ indicates whether the image has a hole at $(u,v)$, taking the value 1 when a hole is present:

    $$O_l(u,v) = \begin{cases} 1, & Z_l(u,v) < T_h \\ 0, & Z_l(u,v) > T_h \end{cases} \qquad O_r(u,v) = \begin{cases} 1, & Z_r(u,v) < T_h \\ 0, & Z_r(u,v) > T_h \end{cases}$$

    where $Z_i = \{Z_i(u,v) \mid i = l, r\}$ is the depth value of the new viewpoint at $(u,v)$ and $T_h$ is a threshold.
CN201611251733.8A 2016-12-30 2016-12-30 Novel view synthesis method based on depth images Active CN106791773B (en)

Priority Applications (1)

Application Number: CN201611251733.8A; Priority Date: 2016-12-30; Filing Date: 2016-12-30; Title: Novel view synthesis method based on depth images

Publications (2)

Publication Number Publication Date
CN106791773A CN106791773A (en) 2017-05-31
CN106791773B true CN106791773B (en) 2018-06-01

Family

ID=58928104

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611251733.8A Active CN106791773B (en) 2016-12-30 2016-12-30 Novel view synthesis method based on depth images

Country Status (1)

Country Link
CN (1) CN106791773B (en)




Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
CB03: Change of inventor or designer information
    Inventors after change: Feng Yuanjing, Huang Chenchen, Huang Liangpeng, Li Jiajing, Chen Feng, Pan Shanwei, Yang Yong, Hu Jianqiao, Kong Deping, Chen Hong
    Inventors before change: Feng Yuanjing, Huang Liangpeng, Li Jiajing, Chen Feng, Xu Zenan, Ye Jiasheng, Chen Wenzhou, Li Dingbang, Wang Zenan
GR01: Patent grant