CN106791773B - A novel view synthesis method based on depth images - Google Patents
- Publication number
- CN106791773B (granted from application CN201611251733.8A)
- Authority
- CN
- China
- Prior art keywords
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/111—Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
Claims (5)
- 1. A novel view synthesis method based on depth images, characterized in that the view synthesis method comprises the following steps:

(1) Apply a 3D warp to the texture maps and depth maps at the left and right reference viewpoints, as follows:

1.1) Project each pixel in the image storage coordinate system, combined with the corresponding depth information, into the world coordinate system:

$P_{Wi} = (X_{Wi}, Y_{Wi}, Z_{Wi})^T = (K_i R_i)^{-1}(\lambda_i P_i + K_i R_i C_i), \quad (i = l, r)$

where $P_{Wi} = \{P_{Wi} = (X_{Wi}, Y_{Wi}, Z_{Wi})^T \mid i = l, r\}$ denotes the 3D world coordinate of the current reference-view pixel, with $l$ and $r$ denoting the left and right viewpoints; $X_{Wi}$ and $Y_{Wi}$ are the horizontal and vertical coordinates in the world coordinate system; the depth $Z_{Wi}$ is recovered from $d_i$, the gray value of the depth map at the current position, where $\mathrm{MinZ}_i$ and $\mathrm{MaxZ}_i$ are respectively the minimum and maximum gray values in the depth image of the current frame; $K_i = \begin{pmatrix} f_{xi} & 0 & u_{0i} \\ 0 & f_{yi} & v_{0i} \\ 0 & 0 & 1 \end{pmatrix}$ is the camera intrinsic matrix, where $f_{xi} = \{f_{xi} \in \mathbb{R} \mid i = l, r\}$ and $f_{yi} = \{f_{yi} \in \mathbb{R} \mid i = l, r\}$ are the focal lengths of the left and right cameras in the $x$ and $y$ directions ($\mathbb{R}$ denoting the real numbers), and $u_{0i} = \{u_{0i} \in \mathbb{R} \mid i = l, r\}$ and $v_{0i} = \{v_{0i} \in \mathbb{R} \mid i = l, r\}$ are the coordinates of the principal points of the left and right cameras in the image storage coordinate system; $R_i = \{R_i \in \mathbb{R}^{3 \times 3} \mid i = l, r\}$ is the camera rotation matrix, with $\mathbb{R}^{3 \times 3}$ the space of real $3 \times 3$ matrices; $\lambda_i = F_{i31} X_{Wi} + F_{i32} Y_{Wi} + F_{i33} Z_{Wi} + F_{i34}$ is the camera's homogeneous scale factor, with $F_i$ the projection matrix; $T_i = \{T_i \in \mathbb{R}^{3 \times 1} \mid i = l, r\}$ is the camera translation matrix; $P_i = \{P_i = (u_i, v_i, 1)^T \mid i = l, r\}$ is the homogeneous image storage coordinate at the reference viewpoint, with $u_i$ and $v_i$ the horizontal and vertical coordinates of the current pixel in the image storage coordinate system; and $C_i$ is the camera center coordinate.

Apply the same processing to the corresponding depth map:

$D_{Wi} = (X_{Wi}, Y_{Wi}, Z_{Wi})^T = (K_i R_i)^{-1}(\lambda_i D_i + K_i R_i C_i), \quad (i = l, r)$

where $D_{Wi} = \{D_{Wi} = (X_{Wi}, Y_{Wi}, Z_{Wi})^T \mid i = l, r\}$ denotes the 3D world coordinate of the current reference-view depth-map pixel and $D_i = \{D_i = (u_{0i}, v_{0i}, 1)^T \mid i = l, r\}$ is the homogeneous image storage coordinate of the reference-view depth map.

1.2) Re-project each point obtained by the first projection, using the intrinsic and extrinsic parameters of the virtual camera at the target new viewpoint together with the depth information, into the image-plane coordinate system of the new viewpoint:

$P_{Newi} = (K_{Newi} R_{Newi} P_{Wi} - K_{Newi} R_{Newi} C_{Newi}) / \lambda_{Newi}, \quad (i = l, r)$

where $P_{Newi} = \{P_{Newi} = (u_{Newi}, v_{Newi}, 1) \mid i = l, r\}$ is the coordinate of the point corresponding to $P_i$ at the new viewpoint, $u_{Newi} = \{u_{Newi} \in \mathbb{R} \mid i = l, r\}$ and $v_{Newi} = \{v_{Newi} \in \mathbb{R} \mid i = l, r\}$ are the horizontal and vertical coordinates of the current pixel in the image storage coordinate system, and $K_{Newi}$, $R_{Newi}$, $C_{Newi}$, $\lambda_{Newi}$ denote the intrinsic matrix, rotation matrix, center coordinate, and homogeneous scale factor of the virtual camera at the new viewpoint. The corresponding new-viewpoint depth map is expressed as:

$D_{Newi} = (K_{Newi} R_{Newi} D_{Wi} - K_{Newi} R_{Newi} C_{Newi}) / \lambda_{Newi}, \quad (i = l, r)$

where $D_{Newi} = \{D_{Newi} = (u_{Newi}, v_{Newi}, 1) \mid i = l, r\}$ is the coordinate of the point corresponding to $D_i$ at the new viewpoint.

(2) Search for object edges in the left and right reference depth maps, warp the edge pixels to the new viewpoint using the method of step (1), and then erase the corresponding depth pixels at the new viewpoint:

$D_{New\_edge\,i} = 0, \quad (i = l, r)$

where $D_{New\_edge\,i}$ denotes the object-edge pixels after the 3D warp.

(3) Apply a $3 \times 3$ median filter to the resulting depth map to remove the cracks and small holes in the depth image, then compare the filtered image with the depth map obtained by the 3D warp and mark the pixels that changed:

$I_{Newi} = S(M(D_{New\_Img\,i}), D_{New\_Img\,i}), \quad (i = l, r)$

where $I_{Newi}$ are the marked pixels, $M$ is the $3 \times 3$ median filtering function, $D_{New\_Img\,i}$ is the new-viewpoint depth map obtained after the processing of steps (1) and (2), and $S$ is a comparison-and-marking function that compares the two images and marks the changed pixels.

(4) Repair the cracks produced by the 3D warp, as follows:

4.1) Back-project the marked pixels onto the original reference viewpoint:

$PI_{Newi} = W(I_{Newi}), \quad (i = l, r)$

where $PI_{Newi}$ denotes the crack pixels of the new-viewpoint image obtained by back-projection and $W$ denotes the 3D warp described in step (1).

4.2) Re-assign the pixel values from the original reference texture map to the pixels in the new-viewpoint image whose coordinates match the marked pixels.

(5) Using the new-viewpoint images obtained after step (4), interpolate the holes in the occluded regions of the image:

$P_{In\_Img} = IN(P_{New\_Img\,l}, P_{New\_Img\,r})$

where $P_{In\_Img}$ is the new-viewpoint texture map after interpolation, $P_{New\_Img\,l}$ is the new-viewpoint texture map obtained from the left reference image after all of steps (1)-(4), $P_{New\_Img\,r}$ is the new-viewpoint texture map at the same viewpoint obtained from the right reference image after all of steps (1)-(4), and $IN$ is the interpolation function.

(6) Call the inpaint method proposed by Telea from the OpenCV library functions to repair the remaining holes and obtain the final new-viewpoint image:

$P_{New\_Img} = \mathrm{inpaint}(P_{In\_Img})$

where inpaint is the OpenCV library function and $P_{New\_Img}$ is the final new-viewpoint texture image.
- 2. The novel view synthesis method based on depth images according to claim 1, characterized in that: in step (2), the process of finding object-edge pixels is as follows:

$\forall u, v \in P_{Img}, \quad \sum_{i=-1}^{1} \sum_{j=-1}^{1} D(u+i, v+j) - 9 \times D(u, v) > T_d$

where $P_{Img}$ denotes the reference image, $D$ denotes the depth map of the reference viewpoint, and $T_d$ is a user-defined threshold; when a pixel satisfies the above inequality, that point is a ghost edge.
- 3. The novel view synthesis method based on depth images according to claim 1 or 2, characterized in that: in step (3), the processing of the comparison-and-marking function $S$ comprises the following steps: 3.1) compare the gray value of each pixel in the depth maps before and after median filtering; 3.2) record the coordinates of the pixels whose gray values differ.
- 4. The novel view synthesis method based on depth images according to claim 1 or 2, characterized in that: in step 4.2), the crack-mending process is as follows: using the mark-point coordinates obtained in 4.1), take the pixel value at the mark-point position in the original reference texture map and assign it to the corresponding marked pixel in the new-viewpoint image, leaving the pixels at the remaining positions unchanged:

$P_{New\_Img\,i} = \begin{cases} P_{Img\,i}(PI_{Newi}), & \mathrm{mark} = 1 \\ P_{ONew\_Img\,i}, & \mathrm{mark} = 0 \end{cases}, \quad (i = l, r)$

where $P_{New\_Img\,i}$ is the new-viewpoint texture map after crack repair, $P_{Img\,i}$ is the reference texture map, $P_{ONew\_Img\,i}$ is the new-viewpoint texture map obtained after the processing of steps (1) and (2), and $\mathrm{mark}$ indicates whether the current pixel is marked; $\mathrm{mark} = 1$ means the current point is a marked point.
- 5. The novel view synthesis method based on depth images according to claim 1 or 2, characterized in that: in step (5), the expression of the interpolation function $IN$ is as follows:

$P_{In\_Img}(u, v) = \begin{cases} (1-\alpha)\, P_{New\_Img\,l}(u_1, v_1) + \alpha\, P_{New\_Img\,r}(u_2, v_2), & O_l(u,v)=0,\ O_r(u,v)=0 \\ P_{New\_Img\,l}(u_1, v_1), & O_l(u,v)=0,\ O_r(u,v)=1 \\ P_{New\_Img\,r}(u_2, v_2), & O_l(u,v)=1,\ O_r(u,v)=0 \\ 0, & O_l(u,v)=1,\ O_r(u,v)=1 \end{cases}$

where $\alpha$ is a proportion parameter computed from $T_{New\_t}$, $T_{l\_t}$, and $T_{r\_t}$, which are respectively the translation vectors of the new viewpoint, the left reference viewpoint, and the right reference viewpoint in the camera coordinate system; $O_i = \{O_i(u, v) \mid i = l, r\}$ indicates whether the image has a hole at $(u, v)$, taking the value 1 if there is a hole, with expressions as follows:

$O_l(u, v) = \begin{cases} 1, & Z_l(u, v) < T_h \\ 0, & Z_l(u, v) > T_h \end{cases} \qquad O_r(u, v) = \begin{cases} 1, & Z_r(u, v) < T_h \\ 0, & Z_r(u, v) > T_h \end{cases}$

where $Z_i = \{Z_i(u, v) \mid i = l, r\}$ is the depth value of the new viewpoint at $(u, v)$ and $T_h$ is a threshold.
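The two-stage warp of claim 1 (back-project each pixel with the reference camera, then re-project with the virtual camera) can be sketched for a single homogeneous pixel as follows. The function names are illustrative, and the simplification that the homogeneous scale factor $\lambda$ equals the pixel's depth is an assumption, not taken from the patent:

```python
import numpy as np

def backproject(p, depth, K, R, C):
    """Claim 1, step 1.1: P_W = (K R)^(-1) (lambda * P + K R C).
    Assumes the homogeneous scale factor lambda equals the pixel depth."""
    KR = K @ R
    return np.linalg.inv(KR) @ (depth * p + KR @ C)

def reproject(P_W, K_new, R_new, C_new):
    """Claim 1, step 1.2: P_new = K_new R_new (P_W - C_new) / lambda_new,
    where lambda_new is the third homogeneous component."""
    q = K_new @ R_new @ (P_W - C_new)
    return q / q[2]
```

Round-tripping a pixel through the same camera recovers its original storage coordinate, which is a quick sanity check for the calibration matrices.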
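Claim 2's ghost-edge test (the 3×3 neighbourhood sum minus nine times the centre exceeding $T_d$) can be sketched in pure NumPy. The name `ghost_edges` is hypothetical, and the edge-replication padding at image borders is an assumption the patent does not specify:

```python
import numpy as np

def ghost_edges(depth, T_d):
    """Mark pixels where sum_{i,j=-1..1} D(u+i, v+j) - 9 * D(u, v) > T_d.
    Border pixels are handled by edge replication (an assumption)."""
    D = np.pad(depth.astype(np.float64), 1, mode='edge')
    h, w = depth.shape
    # Sum of the 3x3 neighbourhood via nine shifted views of the padded map.
    s = sum(D[1 + i:1 + i + h, 1 + j:1 + j + w]
            for i in (-1, 0, 1) for j in (-1, 0, 1))
    return (s - 9.0 * depth) > T_d
```

Because the neighbourhood sum minus nine times the centre is positive only where neighbours are deeper than the centre, the mask fires on the low-depth side of a depth discontinuity, which is where ghost contours appear after warping.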
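The $M$/$S$ pair of claim 3 (a 3×3 median filter, then a function that records every pixel the filter changed) might look like the sketch below; SciPy's `median_filter` stands in for $M$, and the return convention is an assumption:

```python
import numpy as np
from scipy.ndimage import median_filter

def filter_and_mark(depth_new):
    """Claim 3: I_New = S(M(D_New_Img), D_New_Img).
    M is a 3x3 median filter; S records the coordinates of changed pixels."""
    filtered = median_filter(depth_new, size=3)
    changed = np.argwhere(filtered != depth_new)  # the marked pixels I_New
    return filtered, changed
```

A one-pixel crack in an otherwise constant depth map is filled by the median and reported as the only changed coordinate.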
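Step 4.2's re-assignment (copy the reference texture value into each marked crack pixel and leave everything else untouched, per claim 4) can be sketched as below; the coordinate lists are assumed to come from the marking step (3) and the back-projection step 4.1:

```python
import numpy as np

def repair_cracks(new_img, ref_img, crack_coords, back_coords):
    """Claim 4: P_New_Img = P_Img(PI_New) where mark == 1,
    P_ONew_Img otherwise (i.e. copy-through for unmarked pixels)."""
    out = new_img.copy()
    for (u, v), (ru, rv) in zip(crack_coords, back_coords):
        out[u, v] = ref_img[ru, rv]  # texture fetched at the back-projected spot
    return out
```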
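The four-case interpolation function $IN$ of claim 5 reduces to masked blending. A sketch, assuming boolean hole masks $O_l$, $O_r$ and a scalar blending weight $\alpha$ derived from the viewpoint translations:

```python
import numpy as np

def blend_views(img_l, img_r, hole_l, hole_r, alpha):
    """Claim 5: weighted blend where both warped views are valid, the single
    valid view where only one has a hole, and 0 where both have holes."""
    out = np.zeros_like(img_l, dtype=np.float64)  # double-hole pixels stay 0
    both = ~hole_l & ~hole_r
    out[both] = (1.0 - alpha) * img_l[both] + alpha * img_r[both]
    out[~hole_l & hole_r] = img_l[~hole_l & hole_r]
    out[hole_l & ~hole_r] = img_r[hole_l & ~hole_r]
    return out
```

The pixels left at zero are exactly the residual holes that step (6) of claim 1 hands to OpenCV's Telea inpainting (`cv2.inpaint(..., flags=cv2.INPAINT_TELEA)`).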
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611251733.8A CN106791773B (en) | 2016-12-30 | 2016-12-30 | A novel view synthesis method based on depth images |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611251733.8A CN106791773B (en) | 2016-12-30 | 2016-12-30 | A novel view synthesis method based on depth images |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106791773A CN106791773A (en) | 2017-05-31 |
CN106791773B true CN106791773B (en) | 2018-06-01 |
Family
ID=58928104
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611251733.8A Active CN106791773B (en) | 2016-12-30 | 2016-12-30 | A novel view synthesis method based on depth images |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106791773B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110415287A (en) * | 2019-07-11 | 2019-11-05 | Oppo广东移动通信有限公司 | Depth map filtering method and device, electronic equipment, and readable storage medium |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107809630B (en) * | 2017-10-24 | 2019-08-13 | 天津大学 | Multi-view video super-resolution reconstruction algorithm based on improved virtual view synthesis |
CN111311521A (en) * | 2020-03-12 | 2020-06-19 | 京东方科技集团股份有限公司 | Image restoration method and device and electronic equipment |
CN112291549B (en) * | 2020-09-23 | 2021-07-09 | 广西壮族自治区地图院 | Method for acquiring stereoscopic sequence frame images of raster topographic map based on DEM |
CN112308911A (en) * | 2020-10-26 | 2021-02-02 | 中国科学院自动化研究所 | End-to-end visual positioning method and system |
CN112686877B (en) * | 2021-01-05 | 2022-11-11 | 同济大学 | Binocular camera-based three-dimensional house damage model construction and measurement method and system |
CN117061720B (en) * | 2023-10-11 | 2024-03-01 | 广州市大湾区虚拟现实研究院 | Stereo image pair generation method based on monocular image and depth image rendering |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2010037512A1 (en) * | 2008-10-02 | 2010-04-08 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Intermediate view synthesis and multi-view data signal extraction |
CN102307304A (en) * | 2011-09-16 | 2012-01-04 | 北京航空航天大学 | Image segmentation based error concealment method for entire right frame loss in stereoscopic video |
CN102592275A (en) * | 2011-12-16 | 2012-07-18 | 天津大学 | Virtual viewpoint rendering method |
CN102625127A (en) * | 2012-03-24 | 2012-08-01 | 山东大学 | Optimization method suitable for virtual viewpoint generation of 3D television |
CN102724529A (en) * | 2012-05-28 | 2012-10-10 | 清华大学 | Method and device for generating video sequence of virtual viewpoints |
CN103269438A (en) * | 2013-05-27 | 2013-08-28 | 中山大学 | Method for drawing depth image on the basis of 3D video and free-viewpoint television |
CN103581648A (en) * | 2013-10-18 | 2014-02-12 | 清华大学深圳研究生院 | Hole filling method for new viewpoint drawing |
CN104270624A (en) * | 2014-10-08 | 2015-01-07 | 太原科技大学 | Region-partitioning 3D video mapping method |
CN104869386A (en) * | 2015-04-09 | 2015-08-26 | 东南大学 | Virtual viewpoint synthesizing method based on layered processing |
- 2016-12-30: CN application CN201611251733.8A filed; granted as CN106791773B (Active)
Non-Patent Citations (4)
Title |
---|
Spatio-temporal adaptive 2D to 3D video conversion for 3DTV; Longjun Liu; 《IEEE》; 2012-03-18; full text * |
Vivid-DIBR based 2D-3D image conversion system for 3D display; Yu-Cheng Fan et al.; 《IEEE》; 2014-08-26; full text * |
New viewpoint generation based on the DIBR algorithm and its image inpainting; Zeng Yaoxian; 《CNKI》; 2012-02-21; full text * |
Viewpoint rendering technology in multi-view video; Wang Chao; 《CNKI》; 2011-12-13; full text * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110415287A (en) * | 2019-07-11 | 2019-11-05 | Oppo广东移动通信有限公司 | Depth map filtering method and device, electronic equipment, and readable storage medium |
CN110415287B (en) * | 2019-07-11 | 2021-08-13 | Oppo广东移动通信有限公司 | Depth map filtering method and device, electronic equipment and readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN106791773A (en) | 2017-05-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106791773B (en) | A novel view synthesis method based on depth images | |
US8860712B2 (en) | System and method for processing video images | |
US10115182B2 (en) | Depth map super-resolution processing method | |
US8791941B2 (en) | Systems and methods for 2-D to 3-D image conversion using mask to model, or model to mask, conversion | |
US20120032948A1 (en) | System and method for processing video images for camera recreation | |
US20080259073A1 (en) | System and method for processing video images | |
CN111325693B (en) | Large-scale panoramic viewpoint synthesis method based on single viewpoint RGB-D image | |
CN106023303B (en) | A method for improving the density of 3D reconstruction point clouds based on contour validity | |
CN115082639B (en) | Image generation method, device, electronic equipment and storage medium | |
US20080226160A1 (en) | Systems and methods for filling light in frames during 2-d to 3-d image conversion | |
CN106023230B (en) | A dense matching method suitable for deformed images | |
CN103024421A (en) | Method for synthesizing virtual viewpoints in free viewpoint television | |
CN104735435A (en) | Image processing method and electronic device | |
CN115731336B (en) | Image rendering method, image rendering model generation method and related devices | |
CN110567441A (en) | Particle filter-based positioning method, positioning device, mapping and positioning method | |
Shi et al. | Self-supervised visibility learning for novel view synthesis | |
CN115428027A (en) | Neural opaque point cloud | |
CN109461197B (en) | Cloud real-time drawing optimization method based on spherical UV and re-projection | |
CN108924434B (en) | Three-dimensional high dynamic range image synthesis method based on exposure transformation | |
Yang et al. | Image translation based synthetic data generation for industrial object detection and pose estimation | |
CN111275804B (en) | Image illumination removing method and device, storage medium and computer equipment | |
JP2018163468A (en) | Foreground extraction device and program | |
CN115063485B (en) | Three-dimensional reconstruction method, device and computer-readable storage medium | |
Wang et al. | Virtual view synthesis without preprocessing depth image for depth image based rendering | |
CN113450274B (en) | Self-adaptive viewpoint fusion method and system based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB03 | Change of inventor or designer information | ||
Inventors after change: Feng Yuanjing, Huang Chenchen, Huang Liangpeng, Li Jiajing, Chen Feng, Pan Shanwei, Yang Yong, Hu Jianqiao, Kong Deping, Chen Hong
Inventors before change: Feng Yuanjing, Huang Liangpeng, Li Jiajing, Chen Feng, Xu Zenan, Ye Jiasheng, Chen Wenzhou, Li Dingbang, Wang Zenan
GR01 | Patent grant | ||