CN108833879A - Virtual viewpoint synthesis method with spatio-temporal continuity - Google Patents

Virtual viewpoint synthesis method with spatio-temporal continuity

Info

Publication number
CN108833879A
Authority
CN
China
Prior art keywords
sequence, virtual viewpoint, reference view
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810698366.9A
Other languages
Chinese (zh)
Inventors
姚莉 (Yao Li)
李小敏 (Li Xiaomin)
吴含前 (Wu Hanqian)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.): 2018-06-29
Filing date: 2018-06-29
Publication date: 2018-11-16
Application filed by Southeast University
Priority to CN201810698366.9A
Publication of CN108833879A
Legal status: Pending

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention proposes a virtual viewpoint synthesis method with spatio-temporal continuity. The method makes full use of the correlation between adjacent frames in the reference-view image sequences: it extracts a global static background image sequence of the scene from the reference-view image sequences, and, based on that global background image sequence, performs ghosting ("artifact") removal and hole filling on the virtual-viewpoint image sequence synthesized from the left and right reference-view image sequences. The invention not only solves the ghosting and hole problems on virtual-viewpoint images, greatly improving the quality of each synthesized frame, but also preserves the spatio-temporal continuity of the virtual-viewpoint image sequence.

Description

Virtual viewpoint synthesis method with spatio-temporal continuity
Technical field
The present invention relates to virtual viewpoint synthesis methods, and in particular to a virtual viewpoint synthesis method with spatio-temporal continuity.
Background art
With the rapid development of the 3D industry, people are no longer content with 2D viewing and increasingly pursue the immersive experience of being placed inside a 3D scene. Driven by the market and by technology, television has also undergone major transformations: from single-function, monochrome traditional sets, to smart TVs with high resolution and multimedia capabilities, then to conventional 3D TV (3DTV), and now to free-viewpoint TV (FTV), whose momentum keeps growing. Each transformation has brought users a better viewing experience.
Free-viewpoint video is generated within the video-plus-depth (V+D) virtual viewpoint rendering framework. Because of the limits of video transmission bandwidth, depth-image-based rendering (DIBR) has become the method in general use. In theory, DIBR can generate an image of the scene at any viewpoint, but because of occlusion relationships between objects, the quality of the rendered virtual-viewpoint images is far from ideal.
Current virtual viewpoint synthesis methods still suffer from the following problems:
(1) As shown in Fig. 2, the foreground/background boundaries are asymmetric between the color image and the depth map: in the color image these transition regions are smooth, while in the depth map they are very sharp. Because the color image and the depth map of a view do not correspond exactly at foreground object edges, i.e., at the transition regions where depth values change abruptly, and because the illumination conditions differ between reference views at different positions, "ghosting" ("artifact") regions appear in the generated virtual-viewpoint image, which severely degrades the user's viewing experience.
(2) Because occlusion between objects is unavoidable, background regions hidden by the foreground in the original reference views become exposed at other virtual viewpoints. Lacking the occluded information, large holes form on the virtual-viewpoint image, which severely degrades rendering quality.
(3) Because most current DIBR solutions are confined to processing single frames and ignore the correlation between frames of the image sequence, the generated virtual-viewpoint image sequences also exhibit temporal discontinuity in addition to the holes and ghosting mentioned above, manifested as frequently occurring flicker.
Summary of the invention
Objective: To overcome the deficiencies of the prior art, the present invention aims to provide a virtual viewpoint synthesis method with spatio-temporal continuity, so as to solve the spatio-temporal discontinuity of virtual-viewpoint composite images, including the ghosting and hole problems in the spatial domain and the flicker problem in the temporal domain.
Technical solution: The spatio-temporal-continuity virtual viewpoint synthesis method of the invention comprises the following steps: (1) correcting the depth map sequences in the left and right reference-view image sequences, the correction comprising: (1.1) judging whether the two pixels at the same position in each pair of adjacent frames of a depth map sequence lie at the same depth level, and giving pixels at the same depth level in different frames the same depth value; (1.2) filtering each single-frame depth map; (2) using the depth information in the corrected left and right reference-view depth map sequences, extracting the global static background image sequences of the left and right reference views from the respective reference-view image sequences; (3) generating a preliminary virtual-viewpoint image sequence by forward mapping and view fusion of the left and right reference-view image sequences, and generating a virtual static background image sequence by forward mapping and view fusion of the global static background image sequences of the left and right reference views; (4) marking, on each frame of the preliminary virtual-viewpoint image sequence, the regions disturbed by foreground edges; correcting those regions with the foreground edges of the corresponding frame of the left reference view to obtain the true ghosting regions with the foreground-edge disturbance removed; and eliminating the true ghosting regions on the corrected preliminary virtual-viewpoint image sequence using the generated virtual static background image sequence.
Further, the above method further comprises the following step: (5) using the virtual static background image sequence, filling the holes of frame t, frame t+L, frame t+2L and the subsequent key frames of the ghost-free virtual-viewpoint image sequence directly from the static background images; and, for a frame k between key frames, where t, L and k are positive integers, repairing the holes according to the following formula:

$\hat{V}_k = \frac{t+L-k}{L}\,\hat{V}_t + \frac{k-t}{L}\,\hat{V}_{t+L}, \qquad t < k < t+L$

where $\hat{V}_t$ and $\hat{V}_{t+L}$ are the virtual-viewpoint images repaired directly from the virtual static background images, $\hat{V}_k$ is the virtual-viewpoint image whose holes are to be filled by weighted fusion, and the coefficients $(t+L-k)/L$ and $(k-t)/L$ are the weights taken at fusion time.
Further, step (1.1) further comprises: setting a threshold; if the depth difference between the two pixels at the same position in adjacent frames of the depth map sequence is less than the threshold, the two pixels are deemed to lie at the same depth level, and the depth values of both pixels are updated to their average depth value.
Further, in step (1.2), the filtering comprises: detecting the edge contours of objects in the depth map along the horizontal and vertical directions; expanding the detected object edges by two pixels toward the background; and applying Gaussian smoothing to the pixels with large foreground/background depth changes in the expanded image.
Further, step (2) further comprises: (2.1) for the texture image sequences in the left and right reference-view image sequences respectively, computing the structural similarity (SSIM) of the pixels at the same position in the current frame and the subsequent adjacent frame; when the SSIM value is greater than a predetermined threshold, the pixel is considered static; otherwise, the pixel is still considered static if its depth difference is less than a predetermined threshold; (2.2) extracting from each frame of the left and right reference-view image sequences the static scene part composed of static pixels, and combining the timing information of each frame to form the global static background image sequences of the left and right reference views.
Further, in step (3), generating the preliminary virtual-viewpoint image sequence by forward mapping and view fusion of the left and right reference-view image sequences comprises: during forward mapping, mapping the left and right reference-view image sequences on the two sides of the virtual viewpoint position separately; during view fusion, taking the virtual-viewpoint image mapped from the left reference view as the original mapping result and using the virtual-viewpoint image mapped from the right reference view for auxiliary correction, the auxiliary correction comprising: when a pixel of the virtual-viewpoint depth map mapped from the left reference view is a hole, or when the depth value mapped from the right reference view at that pixel position is greater than the depth value mapped from the left reference view, replacing the left view's original mapping result at that pixel position with the right view's mapping result.
Further, step (4) further comprises the following steps: (4.1) extracting the foreground edges of the left reference view and mapping them onto the virtual-viewpoint imaging plane; (4.2) correcting the pixels of the marked foreground-edge-disturbed regions based on the mapped foreground edges of step (4.1) to obtain the true ghosting regions; (4.3) marking the positions of the ghost pixels on the generated virtual static background image, and replacing the pixels of the true ghosting regions with the pixels at the same positions on the virtual static background image.
Beneficial effects: Compared with the prior art, the present invention has the following advantages:
(1) The temporal information in the video sequence is fully exploited: using the relationships between adjacent frames, a static background image of the entire scene is extracted. This preserves temporal continuity well while also assisting the correction of ghosting and holes, improving the rendering quality of the virtual-viewpoint images.
(2) The depth maps are corrected in two respects, by exploiting the relationships within the depth map sequence and by filtering each single-frame depth map, which greatly improves the rendering quality of the virtual-viewpoint images while avoiding image flicker.
(3) The ghost pixels on the virtual-viewpoint image are located with high precision and eliminated using the static background image, which greatly improves the quality of ghost removal.
(4) For the hole problem, the static-background-based weighted-fusion hole-filling algorithm not only fills the holes well but also keeps the repaired hole regions temporally continuous.
(5) The rendered virtual-viewpoint image quality is high: on Microsoft's "Ballet" and "BreakDancer" datasets, the PSNR is around 34 dB. Meanwhile, the generated virtual-viewpoint image sequences maintain good temporal continuity, largely suppressing flicker.
Detailed description of the invention
Fig. 1 is the overall flowchart of the virtual viewpoint synthesis method of the invention;
Fig. 2(a) and Fig. 2(b) are a texture image and a depth map, respectively;
Fig. 3 is a schematic diagram of the static-background-based weighted-fusion hole-filling method.
Specific embodiment
The present invention is described in further detail below with reference to the accompanying drawings.
As shown in Fig. 1, the virtual viewpoint synthesis method of the invention comprises the following steps:
(1) First, the depth map sequences in the left and right reference-view image sequences are preprocessed. Note that the left and right reference-view image sequences mentioned in this invention each comprise the depth map sequence and the texture image sequence of the corresponding reference view. The preprocessing consists of: (a) starting from the first depth frame, the depth difference between each pair of pixels at the same position in adjacent depth frames is compared with a preset threshold, and the depth values of pixel pairs within the threshold are set to the average of the two values; the next depth frame is then processed in the same way until the sequence ends. This preprocessing of the depth map sequence largely prevents ghosting from appearing. (b) Each single-frame depth map is filtered and corrected: the edge contours of objects are detected in the depth map along the horizontal and vertical directions; the detected object edges are expanded by two pixels toward the background; and Gaussian smoothing is applied to the pixels with large foreground/background depth changes in the expanded image. This filtering smooths the edge transition regions of the depth map so that they match the transition regions at the edges of the color image, which weakens, to some extent, the ghosting caused by the asymmetry of these two transition regions.
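By way of illustration only, the two corrections of step (1) can be sketched in Python with NumPy and OpenCV. The depth threshold of 5, the Sobel edge threshold of 40 and the 5x5 Gaussian kernel are assumed values, not values specified by the invention, and the symmetric dilation stands in for the direction-aware expansion toward the background:

```python
import cv2
import numpy as np

def correct_depth_sequence(depth_frames, threshold=5):
    """Temporal correction (step 1.1): if a pixel in two adjacent depth
    frames differs by less than `threshold`, treat both samples as the same
    depth level and assign them their average value."""
    frames = [f.astype(np.float32) for f in depth_frames]
    for i in range(len(frames) - 1):
        cur, nxt = frames[i], frames[i + 1]
        same_level = np.abs(cur - nxt) < threshold
        mean = (cur + nxt) / 2.0
        cur[same_level] = mean[same_level]
        nxt[same_level] = mean[same_level]
    return frames

def filter_depth_frame(depth, dilate_px=2, ksize=5):
    """Single-frame correction (step 1.2): detect object contours along the
    horizontal and vertical directions, widen the edge band by two pixels,
    and Gaussian-smooth the sharp depth transitions inside that band."""
    d8 = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    gx = cv2.Sobel(d8, cv2.CV_32F, 1, 0)      # horizontal gradient
    gy = cv2.Sobel(d8, cv2.CV_32F, 0, 1)      # vertical gradient
    edges = (np.abs(gx) + np.abs(gy)) > 40    # assumed edge threshold
    band = cv2.dilate(edges.astype(np.uint8),
                      np.ones((2 * dilate_px + 1,) * 2, np.uint8))
    smoothed = cv2.GaussianBlur(depth.astype(np.float32), (ksize, ksize), 0)
    out = depth.astype(np.float32).copy()
    out[band > 0] = smoothed[band > 0]        # smooth only near edges
    return out
```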
(2) Extracting the global static background image sequences: Using the structural similarity (SSIM) measure, for the texture image sequence of each of the left and right reference views, the SSIM of the pixels at the same position in the current frame and the subsequent adjacent frame is computed. The computation of the SSIM value may follow Wang Z, Bovik A C, Sheikh H R, et al., "Image quality assessment: from error visibility to structural similarity", IEEE Trans Image Process, 2004, 13(4): 600-612. When the SSIM value is greater than a predetermined threshold, the pixel is considered static; otherwise, the pixel is still considered static if its depth difference is less than a predetermined threshold. The static scene part composed of static pixels is extracted from the current frame of the texture image sequence, and the pixels at the same positions as the static pixels are correspondingly extracted from the corresponding frame of the depth map sequence; this texture image and depth map together serve as a global static background image. Then, based on the timing information of the reference-view image sequence, the extracted global static background images are accumulated along the time axis, finally yielding the global static background image sequences of the left and right reference views. Note that, for either the left or the right reference view, a global static background image means the combination of a global static background texture image and depth map.
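As a sketch of step (2), the per-pixel SSIM map between consecutive frames can be computed with scikit-image, where, following Wang et al., $\mathrm{SSIM}(x,y) = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}$. The sketch collapses the per-frame accumulation into a single background texture/depth pair for brevity, and both thresholds are assumed values:

```python
import numpy as np
from skimage.metrics import structural_similarity

def extract_static_background(textures, depths, ssim_thr=0.9, depth_thr=3):
    """Accumulate a global static background (texture + depth) over one
    reference-view sequence. A pixel counts as static when its local SSIM
    against the next frame exceeds `ssim_thr`, or, failing that, when its
    depth barely changes (step 2 of the method)."""
    h, w = textures[0].shape[:2]
    bg_tex = np.zeros_like(textures[0])
    bg_dep = np.zeros_like(depths[0], dtype=np.float32)
    filled = np.zeros((h, w), dtype=bool)
    for t in range(len(textures) - 1):
        g0 = textures[t].mean(axis=2) if textures[t].ndim == 3 else textures[t]
        g1 = textures[t + 1].mean(axis=2) if textures[t + 1].ndim == 3 else textures[t + 1]
        _, ssim_map = structural_similarity(g0, g1, full=True, data_range=255)
        static = (ssim_map > ssim_thr) | \
                 (np.abs(depths[t].astype(np.float32) -
                         depths[t + 1].astype(np.float32)) < depth_thr)
        take = static & ~filled               # first static observation wins
        bg_tex[take] = textures[t][take]
        bg_dep[take] = depths[t][take]
        filled |= take
    return bg_tex, bg_dep
```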
(3) Forward mapping and fusion of the left and right reference-view image sequences and of their global static background image sequences: According to the 3D warping equation, by combining the depth value of each reference-view pixel with the intrinsic and extrinsic parameters of the reference camera and the virtual camera, the left and right reference-view images can be mapped onto the imaging plane of the virtual viewpoint. Then, taking the mapping of the left reference view as the basis, the preliminary virtual-viewpoint image is generated by fusion through comparison with the depth values of the right reference view. Here, generating a virtual-viewpoint image includes generating the corresponding texture image and depth map. Specifically, during forward mapping, the left and right reference views on the two sides of the virtual viewpoint position are mapped separately; fusing the mappings of the two reference views effectively avoids the adverse effects of mutual occlusion between objects. During view fusion, the left reference view is primary and the right reference view is auxiliary. In particular, the virtual-viewpoint image mapped from the left reference view is taken as the original mapping result, and the virtual-viewpoint image mapped from the right reference view provides auxiliary correction: when a pixel of the virtual-viewpoint depth map mapped from the left reference view is a hole, or when the depth value mapped from the right reference view at that pixel position is greater than the depth value mapped from the left reference view, the left view's original mapping result at that position is replaced with the right view's mapping result. In this way, the fused preliminary virtual-viewpoint image is closer to the truth.
The forward mapping and fusion of the global static background images of the left and right reference views are identical to the forward mapping and fusion of the left and right reference-view image sequences. Through forward mapping and fusion, the global static background image sequences of the left and right reference views yield the virtual static background image sequence.
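A minimal Python sketch of the forward mapping and fusion of step (3) follows, assuming known camera intrinsics (K_ref, K_virt) and a relative pose (R, t) from the reference to the virtual camera; the per-pixel z-buffer loop is a slow reference implementation, and the fusion rule implements the replacement condition stated above:

```python
import numpy as np

def forward_warp(tex, depth, K_ref, K_virt, R, t, hole=0):
    """3D-warping sketch: back-project every reference pixel with its depth,
    re-project into the virtual camera, and splat texture and depth with a
    z-buffer so nearer surfaces win. Out-of-frame points are dropped."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    pix = np.stack([u, v, np.ones_like(u)]).reshape(3, -1).astype(np.float64)
    Z = depth.reshape(-1).astype(np.float64)
    X = np.linalg.inv(K_ref) @ pix * Z          # back-project to 3D
    Xv = R @ X + t.reshape(3, 1)                # into virtual camera frame
    p = K_virt @ Xv
    uu = np.round(p[0] / p[2]).astype(int)
    vv = np.round(p[1] / p[2]).astype(int)
    tex_flat = tex.reshape(-1, 3)
    warped_tex = np.zeros_like(tex)
    zbuf = np.full((h, w), np.inf)
    ok = (uu >= 0) & (uu < w) & (vv >= 0) & (vv < h) & (Z > 0)
    for i in np.flatnonzero(ok):                # z-buffer: nearer point wins
        if Xv[2, i] < zbuf[vv[i], uu[i]]:
            zbuf[vv[i], uu[i]] = Xv[2, i]
            warped_tex[vv[i], uu[i]] = tex_flat[i]
    warped_dep = np.where(np.isinf(zbuf), hole, zbuf)
    return warped_tex, warped_dep

def fuse_views(tex_l, dep_l, tex_r, dep_r, hole=0):
    """Left mapping is primary; a right pixel replaces it where the left map
    has a hole or the right-mapped depth value exceeds the left one."""
    use_r = (dep_l == hole) | (dep_r > dep_l)
    tex, dep = tex_l.copy(), dep_l.copy()
    tex[use_r], dep[use_r] = tex_r[use_r], dep_r[use_r]
    return tex, dep
```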
(4) Static-background-based ghost removal: A number of ghosting ("artifact") regions appear on the fused preliminary virtual-viewpoint image. They arise because the virtual-viewpoint image mapped from the left reference view contains holes caused by object occlusion; when such a hole region is filled in by the right reference view's mapping, ghosting forms along the hole edges. To remove the ghosting, the hole-edge pixels are first marked in the fused preliminary virtual-viewpoint depth map. However, the hole-edge pixels include the edges of foreground objects, so they are not yet the exact ghost pixels. The foreground edges of the left reference view are therefore extracted and mapped onto the virtual-viewpoint imaging plane, the marked hole-edge pixels (i.e., the pixels disturbed by the foreground) are corrected accordingly, and the true ghost pixels, free of foreground-edge disturbance, are obtained and then eliminated using the virtual static background image. The elimination can be carried out as follows: mark the positions of the ghost pixels on the generated virtual static background image, and replace the pixels of the true ghosting regions with the pixels at the same positions on the virtual static background image.
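The ghost-removal logic of step (4) can be sketched as follows, assuming `fg_edges_warped` is the boolean mask of the left-view foreground edges already mapped onto the virtual imaging plane; marking the hole boundary with a 3x3 dilation is an assumed implementation detail:

```python
import cv2
import numpy as np

def remove_ghosting(virt_tex, virt_dep, fg_edges_warped, bg_tex, hole=0):
    """Ghost removal sketch (step 4): mark hole-edge pixels in the fused
    virtual depth map, discard those coinciding with warped foreground
    edges (genuine object contours), and overwrite the remaining true
    ghost pixels from the virtual static background texture."""
    holes = (virt_dep == hole).astype(np.uint8)
    # Hole boundary = dilated hole mask minus the hole itself.
    ring = cv2.dilate(holes, np.ones((3, 3), np.uint8)) - holes
    ghost = (ring > 0) & ~fg_edges_warped     # drop foreground contours
    out = virt_tex.copy()
    out[ghost] = bg_tex[ghost]                # fill from static background
    return out
```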
(5) Hole filling: For the holes that appear on the virtual-viewpoint images, in order to preserve temporal continuity while improving repair quality, the previously generated virtual static background images are needed: the holes appearing in certain key frames of the virtual-viewpoint image sequence are filled directly, while the intermediate frames between key frames are filled by dynamic weighted fusion so that adjacent frames transition smoothly. For example, as shown in Fig. 3, frame 1 (V1), frame 6 (V6) and frame 11 (V11) may be set as key frames whose holes are filled directly from the virtual static background images; each of the four virtual-viewpoint frames between, e.g., V1 and V6 is then repaired using the repaired V1 and V6, dynamically weighted according to its distance to each of them, thereby preserving temporal continuity. Specifically, the dynamic weighted fusion can be performed by the following formula:
$\hat{V}_k = \frac{t+L-k}{L}\,\hat{V}_t + \frac{k-t}{L}\,\hat{V}_{t+L}, \qquad t < k < t+L$

where frame t and frame t+L are key frames repaired directly from the virtual static background images, t, k and L are positive integers, $\hat{V}_t$ and $\hat{V}_{t+L}$ are the virtual-viewpoint images repaired directly from the virtual static background images, $\hat{V}_k$ is the virtual-viewpoint image whose holes are to be filled by weighted fusion, and the coefficients $(t+L-k)/L$ and $(k-t)/L$ are the weights taken at fusion time. Filling the holes by this weighted fusion achieves a smooth transition between adjacent frames.
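A sketch of the key-frame plus weighted-fusion filling of step (5), directly implementing the formula above with L = 5 as in the Fig. 3 example; the sentinel value 0 marking hole pixels is an assumption:

```python
import numpy as np

def fill_holes_weighted(frames, bg_frames, period=5, hole=0):
    """Hole filling with temporal weighting (step 5): every `period`-th
    frame is repaired directly from the virtual static background; each
    in-between frame k blends the two repaired key frames t and t+L with
    weights that fall off linearly with the distance to each key frame."""
    out = [f.astype(np.float32).copy() for f in frames]
    L = period
    keys = list(range(0, len(frames), L))
    for t in keys:                            # direct background repair
        mask = frames[t] == hole
        out[t][mask] = bg_frames[t][mask]
    for t in keys:
        if t + L >= len(frames):
            break
        for k in range(t + 1, t + L):         # blended in-between frames
            w = (t + L - k) / L               # weight of key frame t
            blend = w * out[t] + (1.0 - w) * out[t + L]
            mask = frames[k] == hole
            out[k][mask] = blend[mask]
    return [f.astype(frames[0].dtype) for f in out]
```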
Through the above steps, the final virtual-viewpoint image sequence is obtained.

Claims (7)

1. A virtual viewpoint synthesis method with spatio-temporal continuity, characterized by comprising the following steps:
(1) correcting the depth map sequences in the left and right reference-view image sequences, the correction comprising: (1.1) judging whether the two pixels at the same position in each pair of adjacent frames of a depth map sequence lie at the same depth level, and giving pixels at the same depth level in different frames the same depth value; (1.2) filtering each single-frame depth map;
(2) using the depth information in the corrected left and right reference-view depth map sequences, extracting the global static background image sequences of the left and right reference views from the left and right reference-view image sequences respectively;
(3) generating a preliminary virtual-viewpoint image sequence by forward mapping and view fusion of the left and right reference-view image sequences, and generating a virtual static background image sequence by forward mapping and view fusion of the global static background image sequences of the left and right reference views;
(4) marking, on each frame of the preliminary virtual-viewpoint image sequence, the regions disturbed by foreground edges; correcting those regions with the foreground edges of the corresponding frame of the left reference view to obtain the true ghosting ("artifact") regions with the foreground-edge disturbance removed; and eliminating the true ghosting regions on the corrected preliminary virtual-viewpoint image sequence using the generated virtual static background image sequence.
2. The virtual viewpoint synthesis method with spatio-temporal continuity according to claim 1, characterized by further comprising the following step:
(5) using the virtual static background image sequence, filling the holes of frame t, frame t+L, frame t+2L and the subsequent key frames of the ghost-free virtual-viewpoint image sequence directly from the static background images, and, for a frame k between key frames, where t, L and k are positive integers, repairing the holes according to the following formula:
$\hat{V}_k = \frac{t+L-k}{L}\,\hat{V}_t + \frac{k-t}{L}\,\hat{V}_{t+L}, \qquad t < k < t+L$
where $\hat{V}_t$ and $\hat{V}_{t+L}$ are the virtual-viewpoint images repaired directly from the virtual static background images, $\hat{V}_k$ is the virtual-viewpoint image whose holes are to be filled by weighted fusion, and the coefficients are the weights taken at fusion time.
3. The virtual viewpoint synthesis method with spatio-temporal continuity according to claim 1, characterized in that step (1.1) further comprises: setting a threshold; if the depth difference between the two pixels at the same position in adjacent frames of the depth map sequence is less than the threshold, deeming the two pixels to lie at the same depth level and updating the depth values of both pixels to their average depth value.
4. The virtual viewpoint synthesis method with spatio-temporal continuity according to claim 1, characterized in that, in step (1.2), the filtering comprises: detecting the edge contours of objects in the depth map along the horizontal and vertical directions; expanding the detected object edges by two pixels toward the background; and applying Gaussian smoothing to the pixels with large foreground/background depth changes in the expanded image.
5. The virtual viewpoint synthesis method with spatio-temporal continuity according to claim 1, characterized in that step (2) further comprises:
(2.1) for the texture image sequences in the left and right reference-view image sequences respectively, computing the structural similarity (SSIM) of the pixels at the same position in the current frame and the subsequent adjacent frame; when the SSIM value is greater than a predetermined threshold, considering the pixel static; otherwise, still considering the pixel static if its depth difference is less than a predetermined threshold;
(2.2) extracting from each frame of the left and right reference-view image sequences the static scene part composed of static pixels, and combining the timing information of each frame to form the global static background image sequences of the left and right reference views.
6. The virtual viewpoint synthesis method with spatio-temporal continuity according to claim 1, characterized in that, in step (3), generating the preliminary virtual-viewpoint image sequence by forward mapping and view fusion of the left and right reference-view image sequences comprises: during forward mapping, mapping the left and right reference-view image sequences on the two sides of the virtual viewpoint position separately; during view fusion, taking the virtual-viewpoint image mapped from the left reference view as the original mapping result and using the virtual-viewpoint image mapped from the right reference view for auxiliary correction, the auxiliary correction comprising: when a pixel of the virtual-viewpoint depth map mapped from the left reference view is a hole, or when the depth value mapped from the right reference view at that pixel position is greater than the depth value mapped from the left reference view, replacing the left view's original mapping result at that pixel position with the right view's mapping result.
7. The virtual viewpoint synthesis method with spatio-temporal continuity according to claim 1, characterized in that step (4) further comprises the following steps:
(4.1) extracting the foreground edges of the left reference view and mapping them onto the virtual-viewpoint imaging plane;
(4.2) correcting the pixels of the marked foreground-edge-disturbed regions based on the mapped foreground edges of step (4.1) to obtain the true ghosting regions;
(4.3) marking the positions of the ghost pixels on the generated virtual static background image, and replacing the pixels of the true ghosting regions with the pixels at the same positions on the virtual static background image.
CN201810698366.9A 2018-06-29 2018-06-29 Virtual viewpoint synthesis method with spatio-temporal continuity Pending CN108833879A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810698366.9A 2018-06-29 2018-06-29 Virtual viewpoint synthesis method with spatio-temporal continuity

Publications (1)

Publication Number Publication Date
CN108833879A 2018-11-16

Family

ID=64133602

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810698366.9A Pending CN108833879A (en) 2018-06-29 2018-06-29 Virtual viewpoint synthesis method with spatio-temporal continuity

Country Status (1)

Country Link
CN (1) CN108833879A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1466737A (en) * 2000-08-09 2004-01-07 动态数字视距研究有限公司 Image conversion and encoding techniques
CN101771893A (en) * 2010-01-05 2010-07-07 浙江大学 Virtual viewpoint rendering method based on video-sequence background modeling
CN103269438A (en) * 2013-05-27 2013-08-28 中山大学 Depth image rendering method based on 3D video and free-viewpoint television
US20160150208A1 (en) * 2013-07-29 2016-05-26 Peking University Shenzhen Graduate School Virtual viewpoint synthesis method and system
CN104010180A (en) * 2014-06-13 2014-08-27 华为技术有限公司 Method and device for filtering three-dimensional video

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
席明: "Research on 3D video reconstruction and display technology based on autostereoscopic displays" (基于自由立体显示器的三维视频重建和显示技术研究), CNKI *
王浩: "Research on multi-viewpoint information fusion and synthesis technology" (多视点信息融合与合成技术研究), CNKI *
陈坤斌: "Virtual viewpoint synthesis algorithm constructing a global background" (构造全局背景的虚拟视点合成算法), Journal of Signal Processing (《信号处理》) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109767413A (en) * 2019-01-11 2019-05-17 深圳岚锋创视网络科技有限公司 Anti-motion-artifact HDR method and apparatus, and portable terminal
CN109767413B (en) * 2019-01-11 2022-11-29 影石创新科技股份有限公司 HDR method and device for resisting motion artifacts and portable terminal
CN110660131A (en) * 2019-09-24 2020-01-07 宁波大学 Virtual viewpoint hole filling method based on depth background modeling
CN110660131B (en) * 2019-09-24 2022-12-27 宁波大学 Virtual viewpoint hole filling method based on depth background modeling
CN115908162A (en) * 2022-10-28 2023-04-04 中山职业技术学院 Virtual viewpoint generation method and system based on background texture recognition
CN115908162B (en) * 2022-10-28 2023-07-04 中山职业技术学院 Virtual viewpoint generation method and system based on background texture recognition

Similar Documents

Publication Publication Date Title
CN103581648B Hole-filling method for rendering new viewpoints
CN104780355B Depth-based hole repair method in view synthesis
CN102075779B Intermediate view synthesis method based on block-matching disparity estimation
CN109712067A Virtual viewpoint rendering method based on depth images
CN110660131B Virtual viewpoint hole filling method based on depth background modeling
CN104378619B Fast and efficient hole-filling algorithm based on foreground/background gradient transition
CN104756489B Virtual viewpoint synthesis method and system
KR101415147B1 Boundary noise removal and hole filling method for virtual viewpoint image generation
CN108833879A Virtual viewpoint synthesis method with spatio-temporal continuity
CN103414909B Hole-filling method for three-dimensional video virtual viewpoint synthesis
CN102325259A Method and device for synthesizing virtual viewpoints in multi-viewpoint video
CN111047709B Binocular-vision glasses-free 3D image generation method
Do et al. Quality improving techniques for free-viewpoint DIBR
CN106060509B Free-viewpoint image synthesis method introducing color correction
CN106791774A Virtual viewpoint image generation method based on depth maps
CN103269438A Depth image rendering method based on 3D video and free-viewpoint television
CN111325693A Large-scale panoramic viewpoint synthesis method based on single-viewpoint RGB-D images
CN107018401B Virtual viewpoint hole-filling method based on inverse mapping
CN106028020B Virtual viewpoint image hole completion method based on multi-directional prediction
CN104661013B Virtual viewpoint rendering method based on spatial weighting
CN108924434B Stereoscopic high-dynamic-range image synthesis method based on exposure transformation
CN110062219B Whole-frame-loss error concealment method for 3D-HEVC (high efficiency video coding) combined with virtual viewpoint rendering
TW201333880A View synthesis method for detecting depth mismatch and reducing depth error
CN113450274B Adaptive viewpoint fusion method and system based on deep learning
CN103379350B Virtual viewpoint image post-processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20181116