CN108234985B - Filtering method under dimension transformation space for rendering processing of reverse depth map - Google Patents

Filtering method under dimension transformation space for rendering processing of reverse depth map

Info

Publication number
CN108234985B
CN108234985B (application CN201810235544.4A)
Authority
CN
China
Prior art keywords
depth map
filtering
image
dimension
under
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810235544.4A
Other languages
Chinese (zh)
Other versions
CN108234985A (en)
Inventor
刘伟
郑扬冰
张玉
张新刚
崔明月
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanyang Normal University
Original Assignee
Nanyang Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanyang Normal University filed Critical Nanyang Normal University
Priority to CN201810235544.4A priority Critical patent/CN108234985B/en
Publication of CN108234985A publication Critical patent/CN108234985A/en
Application granted granted Critical
Publication of CN108234985B publication Critical patent/CN108234985B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/04Texture mapping

Abstract

The invention discloses a filtering method in a dimension transformation space for reverse depth map rendering processing, which mainly comprises the following steps. Step 1: a joint filter F1 is adopted to carry out joint filtering on the original depth map D_ori in the dimension transform domain, guided by the reference view image I_ori, to generate a first filtered depth map D1. Step 2: a joint filter F2 is adopted to carry out joint filtering, in the dimension transform domain, on the depth map D1 after forward 3D mapping and on the scene gradient structure G after forward 3D mapping, to generate an optimized depth map D2 under the target view angle. Step 3: combined with the depth map D2, reverse 3D texture mapping is performed on the reference view image I_ori to generate a new view angle image I_new. The filtering method in the dimension transformation space for reverse depth map rendering processing can achieve hole repair and optimization of the depth map under the virtual viewpoint, replaces the complex image restoration techniques of the prior art, and ensures the operating efficiency of the reverse depth map rendering process.

Description

Filtering method under dimension transformation space for rendering processing of reverse depth map
Technical Field
The invention relates to the technical field of three-dimensional views, in particular to a filtering method under a dimension transformation space for rendering processing of a reverse depth map.
Background
At present, three-dimensional (3D) video is becoming increasingly popular; China Central Television (CCTV) began trial broadcasts of a 3D channel in 2012, and 3D video has gradually become a development trend. However, the shortage of video sources is a major bottleneck limiting the growth of this industry. In this situation, converting 2D video to 3D video is an effective way to solve the problem.
Virtual viewpoint synthesis is one of the key technologies for converting 2D video into 3D video. Among the many synthesis methods, Depth-Image-Based Rendering (DIBR) has become an internationally recognized technical solution. In this method, a depth map corresponding to each frame is added to the original video, and a display terminal with an embedded DIBR processing module finally converts it into a binocular stereo video for output (see Liu Wei, Wu Yihong, Hu Zhanyi. A survey of 2D-to-3D video conversion technology [J]. Journal of Computer-Aided Design & Computer Graphics, 2012, 24(1): 14-28). This synthesis technology has clear advantages in compression and transmission efficiency, compatibility across devices, real-time depth-of-field adjustment, and fast rendering, plays a dominant role in emerging markets such as 3DTV and 3D mobile terminals, and is the direction in which 3D rendering technology will develop.
Traditional DIBR rendering is based on forward three-dimensional image transformation: given a reference viewpoint image and its corresponding depth map, the three-dimensional coordinates of each pixel under the reference view are recovered according to the three-dimensional mapping (3D warping) equation and then projected onto the two-dimensional imaging plane of the virtual viewpoint, yielding the virtual viewpoint image. Although this technique has many advantages, some problems are difficult to avoid, such as overlapping, resampling, and holes. To deal with them, the processing flow shown in fig. 1 is commonly adopted: several depth map preprocessing methods are merged before DIBR and several image restoration techniques are merged after DIBR. Although the effect of this flow is evident, with so many stages each addressing its own problem it is difficult to strike a balance between rendering quality and conversion efficiency.
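For reference, the forward 3D warping mentioned above can be written in the standard two-step form below; the symbols (reference-view depth Z_r, camera intrinsic matrices K_r and K_v, and the rotation R and translation t between the two viewpoints) are generic notation added here for illustration and are not spelled out at this point in the original text:

M = Z_r · K_r^(-1) · [u_r, v_r, 1]^T                (back-project the reference pixel (u_r, v_r) to the 3D point M)
z_v · [u_v, v_v, 1]^T = K_v · (R · M + t)           (re-project M onto the imaging plane of the virtual viewpoint)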
In addition, there is a reverse depth map rendering processing scheme. In this scheme, the depth map of the virtual viewpoint is first obtained through traditional 3D transformation, and reverse 3D transformation is then performed on the basis of the depth map optimized under the virtual viewpoint to obtain the color image of the virtual viewpoint. Through this reverse flow the scheme avoids, in principle, the many-to-one pixel relationship of the 3D mapping in traditional rendering, so the overlapping and resampling problems are handled well and the subjective quality of the virtual view is greatly improved. However, in such methods the handling of holes is still placed in the image restoration stage after the virtual image has been generated; its computational complexity is high, which affects the operating efficiency of the whole processing flow to a certain extent.
Disclosure of Invention
The invention aims to provide a filtering method in a dimension transformation space for reverse depth map rendering processing, so as to achieve hole repair and optimization of the depth map under the virtual viewpoint, replace complex image restoration techniques, and preserve the efficiency advantage of the three-dimensional image rendering process.
In order to achieve this purpose, the invention adopts the following technical scheme: a filtering method in a dimension transformation space for reverse depth map rendering processing, comprising the following steps:
Step 1: adopting a joint filter F1 to perform joint filtering on the original depth map D_ori in the dimension transform domain, guided by the reference view image I_ori, to generate a first filtered depth map D1;
Step 2: adopting a joint filter F2 to perform joint filtering, in the dimension transform domain, on the depth map D1 after forward 3D mapping and on the scene gradient structure G after forward 3D mapping, to generate an optimized depth map D2 under the target view angle;
Step 3: combined with the depth map D2, performing reverse 3D texture mapping on the reference view image I_ori to generate a new view angle image I_new.
Further, the filtering function adopted by the joint filter F1 in step 1 is:
D1[n] = (1 - a^d1) · D_ori[n] + a^d1 · D1[n-1]
where D_ori[n] represents the pixel values of a row or column of the original depth map, a ∈ (0,1) is the feedback coefficient of the diffusion function, and d1 represents the distance between neighboring samples x_n and x_n-1 in the dimension transform domain of the joint filter F1.
Further, the distance between neighboring samples x_n and x_n-1 in the dimension transform domain of the joint filter F1 is defined as:
d1 = ct1(x_n) - ct1(x_n-1)
where ct1(u) denotes the dimension transform domain in the joint filter F1, and the dimension transformation process is:
ct1(u) = ∫_0^u ( 1 + (σ_s/σ_r) · |I'_ori(x)| ) dx
where |I'_ori(x)| represents the gradient strength of the input reference view image, and σ_s and σ_r are the spatial and range parameters of the joint filter, used to adjust the filtering effect; σ_s ranges from 200 to 2500 and σ_r ranges from 0.1 to 10.
Further, the filtering function adopted by the joint filter F2 in step 2 is:
D2[n] = (1 - a^d2) · D_warp[n] + a^d2 · D2[n-1]
where D_warp[n] represents the pixel values of a row or column of the depth map after mapping under the target view, the operator ξ_warp(α_r, β) denotes forward 3D mapping of an image β at the reference view based on a reference-view depth map α_r, a ∈ (0,1) is the feedback coefficient of the diffusion function, and d2 represents the distance between neighboring samples x_n and x_n-1 in the dimension transform domain of the joint filter F2.
Further, the distance between neighboring samples x_n and x_n-1 in the dimension transform domain of the joint filter F2 is defined as:
d2 = ct2(x_n) - ct2(x_n-1)
where ct2(u) denotes the dimension transform domain in the joint filter F2, and the dimension transformation process is:
ct2(u) = ∫_0^u ( 1 + (σ_s/σ_r) · G_warp(x) ) dx,  with G_warp = ξ_warp(D1, |I'_ori| + γ · |S'_ori|)
where G_warp represents the scene gradient strength under the target view after mapping, the operator ξ_warp(α_r, β) denotes forward 3D mapping of an image β at the reference view based on a reference-view depth map α_r, |I'_ori(x)| represents the gradient strength of the input reference view image, |S'_ori| represents the gradient strength of the visual saliency distribution map corresponding to the reference view image, σ_s and σ_r are the spatial and range parameters of the joint filter used to adjust the filtering effect, σ_s ranges from 200 to 2500, σ_r ranges from 0.1 to 10, and γ is a weight factor ranging from 1 to 5.
Further, the process of generating the new view angle image I_new in step 3 is:
I_new = ξ_warp^(-1)(D2, I_ori)
where the operator ξ_warp^(-1)(α_t, β) denotes reverse 3D mapping of an image β at the reference view based on a target-view depth map α_t.
Further, the filtering processes of the joint filters F1 and F2 are iterative; to achieve symmetric filtering, if filtering is performed in the image from left to right and from top to bottom in one iteration, it is performed from right to left and from bottom to top in the next iteration. The number of iterations is 2 to 10.
The invention has the following beneficial technical effects:
Based on the reverse depth map rendering processing scheme, the invention achieves hole repair and optimization of the depth map under the virtual viewpoint through two filters in the dimension transformation space, replacing complex image restoration techniques while preserving the operating efficiency of the three-dimensional image rendering process, and thereby realizes an efficient filtering method in the dimension transformation space for reverse depth map rendering processing.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a flow chart of a conventional DIBR system process;
FIG. 2 is a flow chart of a filtering method in a dimension transformation space for inverse depth map rendering according to the present invention;
FIG. 3(a) is a reference view image of an experiment performed according to an embodiment of the present invention;
FIG. 3(b) is an initial depth image for experiments performed by embodiments of the present invention;
FIG. 3(c) is the initial depth map after being processed by the joint filter F1 in the experiment according to the embodiment of the present invention;
FIG. 3(d) is the depth map mapped to the virtual view by 3D warping in an experiment according to an embodiment of the present invention;
fig. 3(e) is a depth map under a virtual view obtained after performing secondary filtering optimization in an experiment according to the embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
FIG. 1 shows the processing flow of a conventional DIBR system. This flow is an important step in 2D/3D conversion: it describes an accurate point-to-point mapping relationship and can render a virtual stereo video using depth information, finally completing the 'qualitative conversion' from 2D to 3D. Although this technique has many advantages, some problems are difficult to avoid, such as overlapping, resampling, and holes. For these problems, as shown in fig. 1, several depth map preprocessing methods are merged before DIBR and several image restoration techniques are merged after DIBR. Although the effect of this flow is evident, with so many stages each addressing its own problem it is difficult to strike a balance between rendering quality and conversion efficiency.
3D warping is a key technology in the traditional DIBR processing flow. Its theoretical basis is the three-dimensional mapping equation proposed by McMillan in 1997 in "Modeling and rendering architecture from photographs: a hybrid geometry- and image-based approach", which is a forward mapping (forward warping) process, i.e. a mapping from the reference viewpoint to the virtual viewpoint. In addition, Morvan in 2009 proposed a reverse 3D image mapping method in "Acquisition, compression and rendering of depth and texture for multi-view video", which handles the overlapping and resampling problems well and greatly improves the subjective quality of the virtual view. Compared with the traditional DIBR processing flow this scheme has great advantages, but at present it still relies on an image restoration method to handle holes, which limits the improvement of conversion efficiency to a certain extent.
To address this problem, the present method designs two filters in the dimension transformation space to replace the image restoration technique, thereby realizing an efficient reverse depth map rendering method in the dimension transformation space based on repairing the depth map under the virtual view.
The method of the invention takes the reference visual angle image and the initial depth map as input data sources, and generates a new synthesized visual angle image after processing. Fig. 2 is a flow chart of the method of the present invention, and an embodiment of the present invention is described in conjunction with fig. 2.
The method is mainly realized by joint filters in two dimension transform domains within a reverse depth map rendering flow, and specifically comprises the following steps.
A joint filter F1 is adopted to perform joint filtering on the original depth map D_ori in the dimension transform domain, guided by the reference view image I_ori, generating a first filtered depth map D1.
the filter function is defined as follows:
D1[n] = (1 - a^d1) · D_ori[n] + a^d1 · D1[n-1]
where D_ori[n] represents the pixel values of a row or column of the original depth map, a ∈ (0,1) is the feedback coefficient of the diffusion function, and d1 represents the distance between neighboring samples x_n and x_n-1 in the dimension transform domain of filter F1. This distance is in turn defined as: d1 = ct1(x_n) - ct1(x_n-1).
The dimension transform domain is the transformation space obtained with the method proposed by Eduardo S. L. Gastal et al. in the 2011 article "Domain transform for edge-aware image and video processing"; its greatest advantage is that it reduces a multidimensional space to a one-dimensional space while preserving the texture characteristics of the image, which greatly improves computational efficiency. Specifically, ct1(u) denotes the dimension transform domain in the joint filter F1, and the dimension transformation process is:
ct1(u) = ∫_0^u ( 1 + (σ_s/σ_r) · |I'_ori(x)| ) dx
where |I'_ori(x)| represents the gradient strength of the input reference view image, and σ_s and σ_r are the spatial and range parameters of the filter, used to adjust the filtering effect; σ_s ranges from 200 to 2500 and σ_r ranges from 0.1 to 10.
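To make the above concrete, here is a minimal Python/NumPy sketch of a single left-to-right pass of such a joint filter on one image row; the function names, the default parameter values (including the feedback coefficient a = 0.9) and the toy data are illustrative assumptions, not values taken from the patent:

import numpy as np

def domain_transform_row(guide_row, sigma_s, sigma_r):
    # ct1(x): cumulative domain transform of one row, driven by the guide gradient |I'_ori(x)|
    grad = np.abs(np.diff(guide_row))
    return np.concatenate(([0.0], np.cumsum(1.0 + (sigma_s / sigma_r) * grad)))

def joint_filter_row(depth_row, guide_row, sigma_s=1000.0, sigma_r=1.0, a=0.9):
    # one left-to-right recursive pass: D1[n] = (1 - a^d1)*D_ori[n] + a^d1*D1[n-1]
    ct = domain_transform_row(guide_row, sigma_s, sigma_r)
    d = np.diff(ct)                      # d1 = ct1(x_n) - ct1(x_n-1)
    out = np.asarray(depth_row, dtype=np.float64).copy()
    for n in range(1, out.size):
        w = a ** d[n - 1]                # feedback weight a^d1
        out[n] = (1.0 - w) * out[n] + w * out[n - 1]
    return out

# toy usage: a noisy depth row is smoothed, while the edge present in the guide is preserved
guide = np.concatenate([np.zeros(50), np.ones(50)])
depth = guide + 0.05 * np.random.randn(100)
depth_filtered = joint_filter_row(depth, guide)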
This filter is placed before the 3D warping of the depth map, so its role is similar to the depth map preprocessing stage of the traditional DIBR flow: preprocessing reduces the depth gradient between layers and therefore reduces the holes produced during 3D warping. It should be noted that depth map filtering in the traditional DIBR flow exists only before 3D warping and is mostly global, so a larger filter kernel is often chosen to suppress holes as much as possible, which may deform some straight-line regions. In the present method, a second filtering step after 3D warping specifically handles the holes produced by warping, so the filter kernel here can be kept within a limited range; more importantly, the filter is a joint filter based on the dimension space transformation and is therefore, in essence, an adaptive local filter, which can apply different filtering strengths to different regions more effectively.
A joint filter F2 is then adopted to carry out joint filtering, in the dimension transform domain, on the depth map D1 after forward 3D mapping and on the scene gradient structure G after forward 3D mapping, generating an optimized depth map D2 under the target view angle. The filter function of the joint filter F2 in the dimension transform domain is defined as follows:
D2[n] = (1 - a^d2) · D_warp[n] + a^d2 · D2[n-1]
where D_warp[n] represents the pixel values of a row or column of the depth map after mapping under the target view, the operator ξ_warp(α_r, β) denotes forward 3D mapping of an image β at the reference view based on a reference-view depth map α_r, a ∈ (0,1) is the feedback coefficient of the diffusion function, and d2 represents the distance between neighboring samples x_n and x_n-1 in the dimension transform domain of filter F2. The distance d2 is in turn defined as:
d2 = ct2(x_n) - ct2(x_n-1)
where ct2(u) denotes the dimension transform domain in the joint filter F2, and the dimension transformation process is:
ct2(u) = ∫_0^u ( 1 + (σ_s/σ_r) · G_warp(x) ) dx,  with G_warp = ξ_warp(D1, |I'_ori| + γ · |S'_ori|)
where G_warp represents the scene gradient strength under the target view after mapping, the operator ξ_warp(α_r, β) denotes forward 3D mapping of an image β at the reference view based on a reference-view depth map α_r, |I'_ori(x)| represents the gradient strength of the input reference view image, |S'_ori| represents the gradient strength of the visual saliency distribution map corresponding to the reference view image, σ_s and σ_r are the spatial and range parameters of the filter, σ_s ranges from 200 to 2500, σ_r ranges from 0.1 to 10, and γ is a weight factor ranging from 1 to 5.
The basic framework of this filter is similar to that of filter F1. Although filter F1 has already suppressed hole generation to a large extent, holes caused by large parallax cannot be avoided. Filter F2 replaces the image restoration step of previous reverse processing flows and eliminates the remaining holes by filtering. Because this filter is also based on the dimension transformation framework, its processing efficiency is much higher than that of image restoration methods. Here S_ori is the visual saliency distribution map corresponding to the original image; in the embodiment of the invention it is obtained with the method of "Salient region detection via high-dimensional color transform and local spatial support", but it can also be obtained with other similar methods. Its main role is to strengthen the joint filtering constraint of filter F2 and prevent the second filtering pass from over-smoothing regions of high saliency.
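As an illustration of how the inputs of this second filter might be assembled, the sketch below builds a gradient-structure guide and forward-warps it together with D1 under a deliberately simplified camera model (a purely horizontal, depth-proportional pixel shift); the helper names, the baseline constant and the assumption that G = |I'_ori| + γ·|S'_ori| are mine, not definitions taken from the patent. The warped depth row would then be filtered exactly as in the F1 sketch above, with G_warp as the guide of ct2:

import numpy as np

def gradient_structure(I_row, S_row, gamma=2.0):
    # assumed guide for F2: G = |I'_ori| + gamma * |S'_ori|
    return np.abs(np.gradient(I_row)) + gamma * np.abs(np.gradient(S_row))

def forward_warp_row(values, depth_row, baseline_px=20.0):
    # toy forward 3D mapping of one row: shift each sample horizontally by a
    # disparity proportional to its depth value; unfilled positions are the
    # holes that the second filter F2 is meant to remove
    w = len(values)
    warped = np.zeros(w)
    filled = np.zeros(w, dtype=bool)
    disparity = np.round(baseline_px * np.asarray(depth_row)).astype(int)
    for x in range(w):
        xt = x + disparity[x]
        if 0 <= xt < w:               # last write wins; proper z-buffering omitted
            warped[xt] = values[x]
            filled[xt] = True
    return warped, filled

# toy usage: warp one depth row and its guide to the target view
D1_row = np.concatenate([np.full(60, 0.2), np.full(40, 0.8)])
I_row = np.linspace(0.0, 1.0, 100)
S_row = np.zeros(100)
G_row = gradient_structure(I_row, S_row)
D_warp_row, _ = forward_warp_row(D1_row, D1_row)
G_warp_row, _ = forward_warp_row(G_row, D1_row)
# D2 for this row = joint_filter_row(D_warp_row, G_warp_row)  (see the F1 sketch above)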
Although the method of the invention performs filtering twice, the filters are reduced in dimension in the dimension transformation space, so the operating efficiency is far higher than that of a joint filter in the traditional two-dimensional space, and the processing efficiency of the system is guaranteed. Traditional depth smoothing filters operate in a two-dimensional space; the two filters designed in the present method are reduced to operate in a one-dimensional space, and to achieve an equivalent effect the filtering is, in a specific embodiment, realized iteratively. Moreover, because the dimension transformation process defined above is asymmetric, symmetric filtering is achieved by alternating direction: if filtering is performed in the image from left to right and from top to bottom in one iteration, it is performed from right to left and from bottom to top in the next. The number of iterations is 2 to 10; in general the filtering effect stabilizes after 3 iterations, and 3 iterations were used in the simulation experiments.
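A self-contained Python sketch of this alternating-direction iteration on a full two-dimensional depth map is given below; the per-axis distance computation and the parameter defaults are assumptions chosen only for illustration:

import numpy as np

def recursive_pass(img, d, a):
    # one recursive pass along the last axis (left to right);
    # d[..., n-1] is the domain-transform distance between samples n-1 and n
    out = np.asarray(img, dtype=np.float64).copy()
    w = a ** d
    for n in range(1, out.shape[-1]):
        out[..., n] = (1 - w[..., n - 1]) * out[..., n] + w[..., n - 1] * out[..., n - 1]
    return out

def symmetric_iterations(depth, guide, sigma_s=1000.0, sigma_r=1.0, a=0.9, n_iter=3):
    # alternate the filtering direction every iteration (left/right, top/bottom)
    # so that the overall one-dimensional recursive filtering is symmetric
    dh = 1 + (sigma_s / sigma_r) * np.abs(np.diff(guide, axis=1))  # horizontal distances
    dv = 1 + (sigma_s / sigma_r) * np.abs(np.diff(guide, axis=0))  # vertical distances
    out = np.asarray(depth, dtype=np.float64)
    for it in range(n_iter):
        if it % 2 == 0:                          # left -> right, then top -> bottom
            out = recursive_pass(out, dh, a)
            out = recursive_pass(out.T, dv.T, a).T
        else:                                    # right -> left, then bottom -> top
            out = recursive_pass(out[:, ::-1], dh[:, ::-1], a)[:, ::-1]
            out = recursive_pass(out[::-1, :].T, dv[::-1, :].T, a).T[::-1, :]
    return out

# toy usage
guide = np.tile(np.linspace(0, 1, 64), (64, 1))
noisy = guide + 0.05 * np.random.randn(64, 64)
smoothed = symmetric_iterations(noisy, guide, n_iter=3)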
Combined with the depth map D2, reverse 3D texture mapping is performed on the reference view image I_ori to generate a new view angle image I_new.
After obtaining the optimized, hole-free depth map under the virtual view, an image under the new view can be generated according to the reverse three-dimensional mapping process described by Morvan in "Acquisition, compression and rendering of depth and texture for multi-view video":
I_new = ξ_warp^(-1)(D2, I_ori)
where the operator ξ_warp^(-1)(α_t, β) denotes reverse 3D mapping of an image β at the reference view based on a target-view depth map α_t.
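Finally, a minimal sketch of the reverse 3D texture mapping of step 3 under the same simplified horizontal-shift camera model used above; the baseline constant, the disparity sign convention and the linear interpolation are illustrative assumptions rather than the patent's formulation of ξ_warp^(-1):

import numpy as np

def inverse_warp(I_ori, D2, baseline_px=20.0):
    # for every target-view pixel, use the optimized target-view depth D2 to
    # locate the corresponding position in the reference image and sample it
    h, w = D2.shape
    I_new = np.zeros((h, w), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            xs = np.clip(x - baseline_px * D2[y, x], 0.0, w - 1.001)  # source column
            x0 = int(xs)
            t = xs - x0
            I_new[y, x] = (1.0 - t) * I_ori[y, x0] + t * I_ori[y, x0 + 1]  # linear interpolation
    return I_new

# toy usage: a constant-depth plane simply shifts the reference image horizontally
I_ori = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
D2 = np.full((64, 64), 0.5)
I_new = inverse_warp(I_ori, D2)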
Experimental verification of the method of the invention is as follows.
1) Experimental conditions:
Experiments were performed on an Intel® Core™ 2 Quad CPU Q9400 @ 2.66 GHz with 4 GB of memory, running Windows 7;
2) Experimental contents:
The experimental implementation of the method of the invention is described below with reference to fig. 3;
Fig. 3 shows the results for two groups of experimental images. Fig. 3(a) is the reference view image, and fig. 3(b) is the initial depth map. Fig. 3(c) is the initial depth map after processing by filter F1; it can be seen that, under the joint filtering, the original depth map is smoothed to different degrees in different regions according to the scene structure. Fig. 3(d) is the depth map mapped to the virtual view by 3D warping; partial hole regions still appear at layer transitions with large disparity changes. Fig. 3(e) is the depth map under the virtual view obtained after the second filtering optimization; not only are the hole regions completely eliminated, but details in some background regions are also enhanced, for example, in the house region of the background of the first group of experimental images the depth map has a clearer outline.
The method is based on the reverse depth map rendering processing scheme, but differs from previous methods in that it replaces image restoration with two filtering passes based on the dimension space transformation, one before and one after mapping, to solve the hole problem. On the one hand, the filtering passes achieve better processing efficiency than image restoration; on the other hand, the core of the reverse flow is to optimize the depth map under the virtual view, and compared with a texture image the depth map consists of layers with large smooth regions, so it is better suited to handling the hole problem by filtering. The method therefore improves the conversion effect of 3D video while effectively guaranteeing the operating efficiency of the algorithm.
At least the following beneficial effects can be achieved:
Based on the reverse depth map rendering processing scheme, the invention achieves hole repair and optimization of the depth map under the virtual viewpoint through two filters in the dimension transformation space, replacing complex image restoration techniques while preserving the operating efficiency of the three-dimensional image rendering process, and thereby realizes an efficient filtering method in the dimension transformation space for reverse depth map rendering processing.
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (3)

1. A filtering method in a dimension transformation space for reverse depth map rendering processing, characterized by comprising the following steps:
Step 1: adopting a joint filter F1 to perform joint filtering on the original depth map D_ori in the dimension transform domain, guided by the reference view image I_ori, to generate a first filtered depth map D1;
Step 2: adopting a joint filter F2 to perform joint filtering, in the dimension transform domain, on the result of forward 3D mapping of the depth map D1 and the result of forward 3D mapping of the original scene gradient structure G, to generate an optimized depth map D2 under the target view angle;
Step 3: combined with the depth map D2, performing reverse 3D texture mapping on the reference view image I_ori to generate a new view angle image I_new;
The filtering function adopted by the joint filter F1 in step 1 is:
D1[n] = (1 - a^d1) · D_ori[n] + a^d1 · D1[n-1]
where D_ori[n] represents the pixel values of the pixel points of a row on the original depth map, a ∈ (0,1) is the feedback coefficient of the diffusion function, and d1 represents the distance between neighboring samples x_n and x_n-1 in the dimension transform domain of the joint filter F1;
the distance between neighboring samples x_n and x_n-1 in the dimension transform domain of the joint filter F1 is defined as:
d1 = ct1(x_n) - ct1(x_n-1)
the dimension transformation process is:
ct1(u) = ∫_0^u ( 1 + (σ_s/σ_r) · |I'_ori(x)| ) dx
where ct1(u) represents the dimension transform domain in the joint filter F1, |I'_ori(x)| represents the gradient strength of the input reference view image, and σ_s and σ_r are the spatial and range parameters of the filter, used to adjust the filtering effect; σ_s ranges from 200 to 2500 and σ_r ranges from 0.1 to 10;
the filtering function adopted by the joint filter F2 in step 2 is:
Figure 941400DEST_PATH_IMAGE008
where D_warp[n] represents the pixel values of the pixel points of a row on the depth map after mapping under the target view angle, the operator ξ_warp(D1, D1) denotes forward 3D mapping of the depth map D1 at the reference view based on the reference-view depth map D1, a ∈ (0,1) is the feedback coefficient of the diffusion function, and d2 represents the distance between neighboring samples x_n and x_n-1 in the dimension transform domain of filter F2;
the distance between neighboring samples x_n and x_n-1 in the dimension transform domain of the joint filter F2 is defined as:
d2 = ct2(x_n) - ct2(x_n-1)
the dimension transformation process is:
ct2(u) = ∫_0^u ( 1 + (σ_s/σ_r) · G_warp(x) ) dx,  with G_warp = ξ_warp(D1, G) and G = |I'_ori| + γ · |S'_ori|
where ct2(u) represents the dimension transform domain in the joint filter F2, G_warp represents the scene gradient strength under the target view after mapping, the operator ξ_warp(D1, G) denotes forward 3D mapping of the scene gradient strength G at the reference view based on the reference-view depth map D1, |I'_ori(x)| represents the gradient strength of the input reference view image, |S'_ori| represents the gradient strength of the visual saliency distribution map corresponding to the reference view image, σ_s and σ_r are the spatial and range parameters of the filter, σ_s ranges from 200 to 2500, σ_r ranges from 0.1 to 10, and γ is a weight factor ranging from 1 to 5.
2. The filtering method in the dimension transformation space for reverse depth map rendering processing according to claim 1, wherein the new view angle image generated in step 3 is:
I_new = ξ_warp^(-1)(D2, I_ori)
where the operator ξ_warp^(-1)(D2, I_ori) denotes reverse 3D mapping of the image I_ori at the reference view based on the target-view depth map D2.
3. The filtering method in the dimension transformation space for reverse depth map rendering processing according to claim 1, wherein: the filtering processes of the joint filters F1 and F2 are iterative; to achieve symmetric filtering, if filtering is performed in the image from left to right and from top to bottom in one iteration, it is performed from right to left and from bottom to top in the next iteration, and the number of iterations is 2 to 10.
CN201810235544.4A 2018-03-21 2018-03-21 Filtering method under dimension transformation space for rendering processing of reverse depth map Active CN108234985B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810235544.4A CN108234985B (en) 2018-03-21 2018-03-21 Filtering method under dimension transformation space for rendering processing of reverse depth map

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810235544.4A CN108234985B (en) 2018-03-21 2018-03-21 Filtering method under dimension transformation space for rendering processing of reverse depth map

Publications (2)

Publication Number Publication Date
CN108234985A CN108234985A (en) 2018-06-29
CN108234985B true CN108234985B (en) 2021-09-03

Family

ID=62659857

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810235544.4A Active CN108234985B (en) 2018-03-21 2018-03-21 Filtering method under dimension transformation space for rendering processing of reverse depth map

Country Status (1)

Country Link
CN (1) CN108234985B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109544671B (en) * 2018-11-12 2022-07-19 浙江大学 Projection mapping method of video in three-dimensional scene based on screen space
US11257283B2 (en) 2019-03-07 2022-02-22 Alibaba Group Holding Limited Image reconstruction method, system, device and computer-readable storage medium
CN111669564B (en) * 2019-03-07 2022-07-26 阿里巴巴集团控股有限公司 Image reconstruction method, system, device and computer readable storage medium
CN112634339A (en) * 2019-09-24 2021-04-09 阿里巴巴集团控股有限公司 Commodity object information display method and device and electronic equipment
CN110738626B (en) * 2019-10-24 2022-06-28 广东三维家信息科技有限公司 Rendering graph optimization method and device and electronic equipment
CN112203074B (en) * 2020-12-07 2021-03-02 南京爱奇艺智能科技有限公司 Camera translation new viewpoint image generation method and system based on two-step iteration

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101458986B1 (en) * 2013-04-22 2014-11-13 광운대학교 산학협력단 A Real-time Multi-view Image Synthesis Method By Using Kinect
CN105103453A (en) * 2013-04-08 2015-11-25 索尼公司 Data encoding and decoding

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103208110B (en) * 2012-01-16 2018-08-24 展讯通信(上海)有限公司 The conversion method and device of video image
CN102831603A (en) * 2012-07-27 2012-12-19 清华大学 Method and device for carrying out image rendering based on inverse mapping of depth maps
CN103391446A (en) * 2013-06-24 2013-11-13 南京大学 Depth image optimizing method based on natural scene statistics
US10462445B2 (en) * 2016-07-19 2019-10-29 Fotonation Limited Systems and methods for estimating and refining depth maps
CN106780705B (en) * 2016-12-20 2020-10-16 南阳师范学院 Depth map robust smooth filtering method suitable for DIBR preprocessing process

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105103453A (en) * 2013-04-08 2015-11-25 索尼公司 Data encoding and decoding
KR101458986B1 (en) * 2013-04-22 2014-11-13 광운대학교 산학협력단 A Real-time Multi-view Image Synthesis Method By Using Kinect

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A Novel Stereoscopic View Generation Approach Based on Visual Attention Analysis; Wei Liu et al.; Proceedings of the 36th Chinese Control Conference; 2017-06-28; full text *
Research on virtual viewpoint synthesis technology for free-viewpoint stereoscopic television systems; Wang Zhen; CNKI; 2012-07-10; full text *

Also Published As

Publication number Publication date
CN108234985A (en) 2018-06-29

Similar Documents

Publication Publication Date Title
CN108234985B (en) Filtering method under dimension transformation space for rendering processing of reverse depth map
Daribo et al. A novel inpainting-based layered depth video for 3DTV
CN102592275B (en) Virtual viewpoint rendering method
Yang et al. DIBR based view synthesis for free-viewpoint television
CN102625127A (en) Optimization method suitable for virtual viewpoint generation of 3D television
CN104869386A (en) Virtual viewpoint synthesizing method based on layered processing
Solh et al. Hierarchical hole-filling (HHF): Depth image based rendering without depth map filtering for 3D-TV
Zhu et al. An improved depth image based virtual view synthesis method for interactive 3D video
Xi et al. Depth-image-based rendering with spatial and temporal texture synthesis for 3DTV
Lu et al. A survey on multiview video synthesis and editing
CN106791770B (en) A kind of depth map fusion method suitable for DIBR preprocessing process
Feng et al. Asymmetric bidirectional view synthesis for free viewpoint and three-dimensional video
CN106780705B (en) Depth map robust smooth filtering method suitable for DIBR preprocessing process
CN103945209A (en) DIBR method based on block projection
Yao et al. Fast and high-quality virtual view synthesis from multi-view plus depth videos
Wang et al. Virtual view synthesis without preprocessing depth image for depth image based rendering
Daribo et al. Bilateral depth-discontinuity filter for novel view synthesis
Lee et al. High-Resolution Depth Map Generation by Applying Stereo Matching Based on Initial Depth Information
CN107592519A (en) Depth map preprocess method based on directional filtering under a kind of dimension transformation space
WO2022155950A1 (en) Virtual viewpoint synthesis method, electronic device and computer readable medium
Lin et al. Fast multi-view image rendering method based on reverse search for matching
Cheng et al. A DIBR method based on inverse mapping and depth-aided image inpainting
Gao et al. A newly virtual view generation method based on depth image
Lee et al. Hole Filling in Image Conversion Using Weighted Local Gradients.
Guo et al. Efficient image warping in parallel for multiview three-dimensional displays

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant