CN108234985A - Filtering method under dimension transformation space for rendering processing of reverse depth map - Google Patents

Filtering method under dimension transformation space for rendering processing of reverse depth map

Info

Publication number
CN108234985A
Authority
CN
China
Prior art keywords
depth map
under
dimension
image
reversed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810235544.4A
Other languages
Chinese (zh)
Other versions
CN108234985B (en)
Inventor
刘伟
郑扬冰
张玉
张新刚
崔明月
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanyang Normal University
Original Assignee
Nanyang Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanyang Normal University filed Critical Nanyang Normal University
Priority to CN201810235544.4A priority Critical patent/CN108234985B/en
Publication of CN108234985A publication Critical patent/CN108234985A/en
Application granted granted Critical
Publication of CN108234985B publication Critical patent/CN108234985B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/04 Texture mapping

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a filtering method under dimension transformation space for the rendering processing of a reverse depth map, mainly comprising: Step 1: using joint filter F1, jointly filtering the original depth map D_ori with the reference-view image I_ori in the dimension-transform domain, to generate the first filtered depth map D1; Step 2: using joint filter F2, jointly filtering, in the dimension-transform domain, the depth map D1 after forward 3D warping together with the scene gradient structure G after forward 3D warping, to generate the optimized new-view depth map D2; Step 3: with reference to the depth map D2, performing reverse 3D texture mapping on the reference-view image I_ori to generate the new-view image I_new. The filtering method of the present invention repairs and optimizes the holes of the depth map under the virtual view, replacing the complex image-inpainting techniques of existing schemes, and thereby guarantees the operational efficiency of the reverse depth-map rendering process.

Description

Filtering method under dimension transformation space for rendering processing of reverse depth map
Technical field
The present invention relates to the technical field of three-dimensional images, and in particular to a filtering method under dimension transformation space for the rendering processing of a reverse depth map.
Background technology
At present, three-dimensional (3D) video is gradually becoming popular; China Central Television (CCTV) launched a trial 3D channel on New Year's Day 2012, and 3D video has increasingly become a trend of current development. However, the shortage of video sources has become the main bottleneck restricting the rise of this industry. Under these circumstances, converting 2D video into 3D video is an effective way to solve this problem.
Virtual view synthesis is one of the key technologies for converting 2D video into 3D video. Among the many synthesis methods, depth-image-based rendering (DIBR) has become an internationally recognized technical solution. It attaches a depth map to each frame of the original video, and the output is finally converted into binocular stereoscopic video by the display terminal's embedded DIBR processing module (see "Overview of film 2D/3D conversion technology [J]", Liu Wei, Wu Yihong, Hu Zhanyi, Journal of Computer-Aided Design & Computer Graphics, 2012, 24(1):14-28). This synthesis technique has clear technical advantages in compression and transmission efficiency, compatibility across different devices, real-time depth-of-field adjustment, and fast rendering synthesis; it occupies an absolutely leading position in emerging markets such as 3DTV and 3D mobile terminals, and is the direction of future development for 3D rendering technology.
Traditional DIBR rendering is based on forward 3D image warping: given a reference-view image and its corresponding depth map, the three-dimensional coordinates of each pixel under the reference view are first recovered in space according to the 3D warping equation, and then reprojected onto the two-dimensional imaging plane of the virtual view, thereby obtaining the virtual-view image. Although this technique has many advantages, some problems remain difficult to avoid, such as overlap, resampling, and holes. For these problems, the processing flow shown in Fig. 1 is currently often adopted, fusing multiple depth-map preprocessing methods before DIBR and multiple image-inpainting techniques after DIBR to cope with them. Although this flow is effective, it involves so many problem-handling stages that it is difficult to strike a balance between rendering quality and conversion efficiency.
In addition, there is a class of reverse depth-map rendering schemes. In such a scheme, the depth image of the virtual view is first obtained by traditional 3D warping; then, based on the optimized depth image under the virtual view, reverse 3D warping is performed to obtain the color image of the virtual view. Because the reverse flow avoids, in principle, the many-to-one pixel relationships of 3D warping in traditional rendering methods, it solves the overlap and resampling problems well and greatly improves the subjective quality of the virtual view. In such methods, however, the handling of holes is still placed in the image-inpainting stage after the virtual image is generated; its computational complexity is high, which to some extent limits the operational efficiency of the whole processing flow.
Summary of the invention
In view of the above problems, an object of the present invention is to propose a filtering method under dimension transformation space for the rendering processing of a reverse depth map, so as to repair and optimize the holes of the depth map under the virtual view, replace the complex image-inpainting techniques, and thereby guarantee the operational efficiency of the stereoscopic image rendering process.
To achieve the above object, the technical solution adopted by the present invention is a filtering method under dimension transformation space for the rendering processing of a reverse depth map, comprising the following steps:
Step 1: using joint filter F1, jointly filtering the original depth map D_ori with the reference-view image I_ori in the dimension-transform domain, to generate the first filtered depth map D1;
Step 2: using joint filter F2, jointly filtering, in the dimension-transform domain, the depth map D1 after forward 3D warping together with the scene gradient structure G after forward 3D warping, to generate the optimized new-view depth map D2;
Step 3: with reference to the depth map D2, performing reverse 3D texture mapping on the reference-view image I_ori to generate the new-view image I_new.
Further, the filter function used by the joint filter F1 in step 1 is:

D_1[n] = (1 - a^{d_1}) D_{ori}[n] + a^{d_1} D_1[n-1]

where D_{ori}[n] denotes the pixel value at position n of a row or column of the original depth map, a ∈ (0,1) is the feedback coefficient of the diffusion function, and d_1 denotes the distance between adjacent samples x_n and x_{n-1} in the dimension-transform domain of joint filter F1.
Further, the distance between adjacent samples x_n and x_{n-1} in the dimension-transform domain of the joint filter F1 is defined as:

d_1 = ct_1(x_n) - ct_1(x_{n-1})

where ct_1(u) denotes the dimension transform domain in joint filter F1, and the dimension transformation is:

ct_1(u) = \int_0^u ( 1 + (\sigma_s/\sigma_r) |I'_{ori}(x)| ) dx

where |I'_{ori}(x)| denotes the gradient intensity of the input reference-view image, and σ_s and σ_r are the spatial and range parameters of the joint filter, respectively, used to adjust the strength of the filtering; σ_s takes values in the range 200~2500 and σ_r in the range 0.1~10.
Further, the filter function used by the joint filter F2 in step 2 is:

D_2[n] = (1 - a^{d_2}) D_{warp}[n] + a^{d_2} D_2[n-1]

where D_{warp}[n] denotes the pixel value at position n of a row or column of the depth map after warping to the new view, the operator ξ_warp(α_r, β) denotes forward 3D warping of an image β under the reference view based on a depth map α_r under the reference view, a ∈ (0,1) is the feedback coefficient of the diffusion function, and d_2 denotes the distance between adjacent samples x_n and x_{n-1} in the dimension-transform domain of joint filter F2.
Further, the distance between adjacent samples x_n and x_{n-1} in the dimension-transform domain of the joint filter F2 is defined as:

d_2 = ct_2(x_n) - ct_2(x_{n-1})

where ct_2(u) denotes the dimension transform domain in joint filter F2, and the dimension transformation is:

ct_2(u) = \int_0^u ( 1 + (\sigma_s/\sigma_r) G_{warp}(x) ) dx

where G_warp denotes the scene gradient intensity under the new view after mapping (the combined gradient |I'_{ori}(x)| + γ |S'_{ori}(x)| forward-warped by the operator ξ_warp(α_r, ·)), the operator ξ_warp(α_r, β) denotes forward 3D warping of an image β under the reference view based on a depth map α_r under the reference view, |I'_{ori}(x)| denotes the gradient intensity of the input reference-view image, |S'_{ori}| denotes the gradient intensity of the visual-saliency distribution map corresponding to the reference-view image, σ_s and σ_r are the spatial and range parameters of the joint filter, respectively, used to adjust the strength of the filtering (σ_s takes values in the range 200~2500, σ_r in the range 0.1~10), and γ is a weight factor with values in the range 1~5.
Further, the process of generating the new-view image I_new in step 3 is:

I_{new} = \xi^{-1}_{warp}(D_2, I_{ori})

where the operator ξ^{-1}_warp(α_t, β) denotes reverse 3D warping of an image β under the reference view based on a depth map α_t under the new view.
Further, the filtering of the joint filters F1 and F2 is an iterative process. To achieve symmetric filtering, if in one iteration the filtering proceeds through the image from left to right and from top to bottom, then in the next iteration it proceeds from right to left and from top to bottom. The number of iterations is 2 to 10.
Advantageous effects of the present invention:
The present invention is based on a reverse depth-map rendering scheme and uses two filters in the dimension transformation space to repair and optimize the holes of the depth map under the virtual view, replacing the complex image-inpainting techniques and guaranteeing the operational efficiency of the stereoscopic image rendering process, thereby achieving an efficient filtering method under dimension transformation space for the rendering processing of a reverse depth map.
Other features and advantages of the present invention will be set forth in the following description and will in part become apparent from the description or be understood by practicing the present invention.
The technical solution of the present invention is described in further detail below through the accompanying drawings and embodiments.
Description of the drawings
The accompanying drawings are provided for a further understanding of the present invention and constitute a part of the specification; together with the embodiments of the present invention, they serve to explain the present invention and are not to be construed as limiting it. In the drawings:
Fig. 1 is the process flow chart of a traditional DIBR system;
Fig. 2 is the flow chart of the filtering method under dimension transformation space for the rendering processing of a reverse depth map according to the present invention;
Fig. 3 (a) is the reference-view image used in the experiment of the embodiment of the present invention;
Fig. 3 (b) is the initial depth image used in the experiment of the embodiment of the present invention;
Fig. 3 (c) is the initial depth map after processing by joint filter F1 in the experiment of the embodiment of the present invention;
Fig. 3 (d) is the depth map mapped to the virtual view by 3D warping in the experiment of the embodiment of the present invention;
Fig. 3 (e) is the depth map under the virtual view obtained after the second filtering optimization in the experiment of the embodiment of the present invention.
Specific embodiment
The preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood that the preferred embodiments described herein are merely intended to illustrate and explain the present invention, not to limit it.
Fig. 1 shows the process flow of a traditional DIBR system. This flow is an important step in 2D/3D conversion methods: it describes an accurate point-to-point mapping relationship and can render virtual stereoscopic video using depth information, finally completing the "fundamental change" from 2D to 3D. Although this technique has many advantages, some problems remain difficult to avoid, such as overlap, resampling, and holes. For these problems, as shown in Fig. 1, multiple depth-map preprocessing methods are fused before DIBR and multiple image-inpainting techniques are fused after DIBR to cope with them. Although this flow is effective, it involves so many problem-handling stages that it is difficult to strike a balance between rendering quality and conversion efficiency.
3D warping is the key technology in the traditional DIBR process flow; its theoretical basis is the three-dimensional mapping equation proposed by McMillan in 1997 in "Modeling and rendering architecture from photographs: a hybrid geometry- and image-based approach". This is a forward warping process, i.e., a mapping from the reference view to the virtual view. In addition, Morvan proposed a reverse 3D image warping method in 2009 in "Acquisition, compression and rendering of depth and texture for multi-view video", which solves the overlap and resampling problems very well and greatly improves the subjective quality of the virtual view. This scheme has considerable advantages over the traditional DIBR process flow, but current schemes of this kind still mostly handle holes with image-inpainting methods, which to some extent limits the improvement of conversion efficiency.
To address this problem, the method of the present invention designs two filters in the dimension transformation space to replace the image-inpainting techniques, thereby realizing an efficient reverse depth-map rendering method based on repairing the depth map under the virtual view in the dimension transformation space.
The method of the present invention takes the reference-view image and the initial depth map as input data sources and, after processing, generates a newly synthesized view image. Fig. 2 is the flow chart of the method of the present invention; a specific embodiment of the present invention is described below with reference to Fig. 2.
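For illustration only (not part of the claimed subject matter), the overall flow of Fig. 2 can be summarized in the following minimal Python sketch. The callables f1, f2, warp_fwd, warp_bwd, and grad are injected placeholders for the operations defined below, not an actual API.

```python
def render_reverse_dibr(I_ori, D_ori, f1, f2, warp_fwd, warp_bwd, grad):
    # Step 1: joint filtering of the original depth map in the
    # dimension-transform domain, guided by the reference-view image.
    D1 = f1(D_ori, I_ori)

    # Step 2: forward-warp the filtered depth map and the scene gradient
    # structure to the new view, then jointly filter to close the holes.
    D_warp = warp_fwd(D1, D1)
    G_warp = warp_fwd(D1, grad(I_ori))
    D2 = f2(D_warp, G_warp)

    # Step 3: reverse 3D texture mapping of the reference-view image
    # using the optimized, hole-free new-view depth map.
    return warp_bwd(D2, I_ori)
```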
The method of the present invention is realized mainly by the joint filters under two dimension-transform domains in the reverse depth-map rendering flow, and specifically comprises the following steps.
Using joint filter F1, the original depth map D_ori is jointly filtered with the reference-view image I_ori in the dimension-transform domain, generating the first filtered depth map D1.
The filter function is defined as follows:

D_1[n] = (1 - a^{d_1}) D_{ori}[n] + a^{d_1} D_1[n-1]

where D_{ori}[n] denotes the pixel value at position n of a row or column of the original depth map, a ∈ (0,1) is the feedback coefficient of the diffusion function, and d_1 denotes the distance between adjacent samples x_n and x_{n-1} in the dimension-transform domain of filter F1, which is further defined as d_1 = ct_1(x_n) - ct_1(x_{n-1}).
The dimension transform domain here is the transformed space obtained by the method proposed by Eduardo S. L. Gastal et al. in 2011 in "Domain transform for edge-aware image and video processing". Its greatest advantage is that it reduces a high-dimensional space to a one-dimensional space while preserving the texture characteristics of the image, thereby greatly improving computational efficiency. Specifically, ct_1(u) denotes the dimension transform domain in joint filter F1, and the dimension transformation is:

ct_1(u) = \int_0^u ( 1 + (\sigma_s/\sigma_r) |I'_{ori}(x)| ) dx

where |I'_{ori}(x)| denotes the gradient intensity of the input reference-view image, and σ_s and σ_r are the spatial and range parameters of the filter, respectively, used to adjust the strength of the filtering; σ_s takes values in the range 200~2500 and σ_r in the range 0.1~10.
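As a concrete illustration of this transform and the recursive form given above, the following sketch shows a minimal one-dimensional version, assuming a grayscale guide row and NumPy. The choice a = exp(-√2/σ_s) follows Gastal et al.; the present method only requires a ∈ (0,1).

```python
import numpy as np

def domain_transform_row(guide_row, sigma_s, sigma_r):
    """Accumulate ct(u) = integral of 1 + (sigma_s/sigma_r)*|I'(x)| along a row."""
    grad = np.abs(np.diff(guide_row, prepend=guide_row[0]))
    return np.cumsum(1.0 + (sigma_s / sigma_r) * grad)

def recursive_filter_row(signal_row, ct, sigma_s):
    """One left-to-right pass of D[n] = (1 - a^d) D_in[n] + a^d D[n-1]."""
    a = np.exp(-np.sqrt(2.0) / sigma_s)  # feedback coefficient, a in (0, 1)
    out = np.asarray(signal_row, dtype=np.float64).copy()
    d = np.diff(ct)                      # transform-domain sample distances
    for n in range(1, out.size):
        w = a ** d[n - 1]
        out[n] = (1.0 - w) * out[n] + w * out[n - 1]
    return out
```

A row of the depth map would be filtered as recursive_filter_row(D_row, domain_transform_row(I_row, sigma_s, sigma_r), sigma_s). Large guide gradients enlarge d and shrink the feedback weight a^d, so smoothing is attenuated across depth edges; this is what makes the filter edge-aware and locally adaptive.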
This filter is applied before the 3D warping of the depth map, so its role is similar to the depth-map preprocessing stage in the traditional DIBR flow: preprocessing the depth map reduces the depth gradients between layers, thereby reducing the generation of holes during 3D warping. It should be noted that the depth-map filtering in the traditional DIBR flow exists only before 3D warping and mostly uses global filtering; to suppress holes as far as possible, a large filtering kernel is often chosen, which may deform some straight-line regions. Because the method of the present invention applies a second filtering after 3D warping to specifically handle the holes generated there, the filtering kernel here can be kept limited. More importantly, this filter is a joint filter based on the dimensional-space transformation and is thus inherently an adaptive local filter, which can more effectively apply filtering of different strengths to different regions.
Using joint filter F2, the depth map D1 after forward 3D warping and the scene gradient structure G after forward 3D warping are jointly filtered in the dimension-transform domain, generating the optimized new-view depth map D2. The filter function of the joint filter F2 under the dimension-transform domain is defined as follows:

D_2[n] = (1 - a^{d_2}) D_{warp}[n] + a^{d_2} D_2[n-1]

where D_{warp}[n] denotes the pixel value at position n of a row or column of the depth map after warping to the new view, the operator ξ_warp(α_r, β) denotes forward 3D warping of an image β under the reference view based on a depth map α_r under the reference view, a ∈ (0,1) is the feedback coefficient of the diffusion function, and d_2 denotes the distance between adjacent samples x_n and x_{n-1} in the dimension-transform domain of filter F2. d_2 can be further defined as:
d_2 = ct_2(x_n) - ct_2(x_{n-1})
Wherein, ct2(u) the dimension transform domain in associated filters F2 is represented, then dimension conversion process is:
Wherein, GwarpRepresent the scene gradient intensity under aspect, operator ξ after mappingwarpr, β) and it represents based on ginseng Examine the depth map α under visual anglerForward direction 3D mappings are carried out to the image β under reference viewing angle, | I 'ori(x) | represent input reference viewing angle The gradient intensity of image, | S 'ori| represent the gradient intensity of the corresponding visual saliency distribution map of reference viewing angle image, σsAnd σrPoint It is not filter space and codomain parameter, for adjusting the influence of filtering, σsValue range is 200~2500, σrValue range is 0.1~10, γ are weight factors, and value range is 1~5.
The basic framework of this filter is similar to that of filter F1. Although filter F1 suppresses the generation of holes to a large extent, it still cannot avoid the holes caused by large parallax. The role of filter F2 is precisely to replace the image-inpainting step of existing reverse processing flows, eliminating the remaining holes by filtering. Since this filter is also based on the dimension-transform framework, its processing efficiency is much higher than that of image-inpainting methods. Here S_ori is the visual-saliency distribution map corresponding to the original image; in the embodiment of the present invention it is obtained by the method of "Salient region detection via high-dimensional color transform and local spatial support", though other similar methods may also be used. Its main role is to strengthen the joint filtering constraint of filter F2 and prevent the second filtering from over-smoothing regions of high saliency.
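A sketch of how the guidance signal for F2 could be assembled from the quantities just defined. The forward-warp operator is injected as a callable, since the camera model is not fixed here, and the horizontal-difference gradient is this illustration's simplification.

```python
import numpy as np

def f2_guidance(I_ori, S_ori, D1, gamma, forward_warp_3d):
    """Build G_warp: combine image and saliency gradients, then warp."""
    # Gradient magnitude of the reference-view image.
    grad_I = np.abs(np.diff(I_ori, axis=1, prepend=I_ori[:, :1]))
    # Gradient magnitude of the visual-saliency distribution map S_ori,
    # weighted by gamma so salient regions resist over-smoothing.
    grad_S = np.abs(np.diff(S_ori, axis=1, prepend=S_ori[:, :1]))
    # Forward 3D warping (operator xi_warp) based on depth map D1 moves
    # the combined gradient structure into the new view, aligned with D_warp.
    return forward_warp_3d(depth=D1, image=grad_I + gamma * grad_S)
```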
Although the method for the present invention is filtered twice, but the dimension of wave filter reduces under dimension transformation space, Operation efficiency is significantly larger than the associated filters under conventional two-dimensional space, it is ensured that the treatment effeciency of system.Traditional depth Smoothing filter is run under two-dimensional space, and two wave filter dimensionality reductions of the method for the present invention design are run to the one-dimensional space, In order to reach same effect, in the particular embodiment, filtering is realized all by the way of iteration.Again because of above-mentioned definition Dimension conversion process it is asymmetric, so to realize symmetrical filtering, if filtering is according to from left to right in an iteration, from upper Sequence under carries out in the picture, then filtering is carried out according to sequence from right to left, from top to bottom in next iteration.Iteration Number is 2~10 times, and general 3 filter effects of iteration can reach stabilization, and iterations are 3 times in emulation experiment.
With reference to the depth map D2, reverse 3D texture mapping is performed on the reference-view image I_ori, generating the new-view image I_new.
After the optimized, hole-free depth map under the virtual view is obtained, the new-view image can be generated by the reverse three-dimensional mapping process mentioned in Morvan's "Acquisition, compression and rendering of depth and texture for multi-view video":

I_{new} = \xi^{-1}_{warp}(D_2, I_{ori})

where the operator ξ^{-1}_warp(α_t, β) denotes reverse 3D warping of an image β under the reference view based on a depth map α_t under the new view.
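For a horizontally rectified setup, the reverse mapping ξ⁻¹_warp can be sketched as below. The depth-to-disparity conversion is injected as a callable (disparity_from_depth), since the camera parameters are not fixed here, and nearest-neighbor sampling is this illustration's simplification.

```python
import numpy as np

def backward_warp_3d(D2, I_ori, disparity_from_depth):
    """Reverse 3D texture mapping: every pixel of the new view fetches its
    color from the reference view via the optimized depth map D2, so the
    synthesized image inherits no overlaps from the warping itself."""
    h, w = D2.shape
    I_new = np.zeros_like(I_ori)
    for y in range(h):
        for x in range(w):
            xs = int(round(x + disparity_from_depth(D2[y, x])))
            if 0 <= xs < w:               # discard samples that fall off-image
                I_new[y, x] = I_ori[y, xs]
    return I_new
```

Because D2 is already hole-free after filter F2, every new-view pixel receives a valid sample, which is why the inpainting stage of existing reverse schemes can be dropped.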
The following is the experimental verification of the method of the present invention.
1) Experimental conditions:
The experiments were carried out on a machine with an Intel Core™2 Quad Q9400 CPU @ 2.66 GHz, 4 GB of memory, and the Windows 7 operating system.
2) Experimental content:
The implementation details of the experiments according to the method of the present invention are described with reference to Fig. 3.
Fig. 3 shows the results of processing two groups of experimental images. Fig. 3 (a) is the reference-view image, and Fig. 3 (b) is the initial depth image. Fig. 3 (c) is the initial depth image after processing by filter F1; it can be seen that, under the effect of the joint filtering, the original depth map is smoothed to different degrees in different regions according to the scene structure. Fig. 3 (d) is the depth map mapped to the virtual view by 3D warping; it can be seen that some hole regions still appear at layer transitions where the parallax change is large. Fig. 3 (e) is the depth map under the virtual view obtained after the second filtering optimization; it can be seen that not only are the hole regions completely eliminated, but details in some background regions are also strengthened, for example the room region in the background of the first group of experimental images, where the depth-map contours become clearer.
The method of the present invention is based on a reverse depth-map rendering scheme, but unlike existing methods it uses two filterings based on the dimensional-space transformation, before and after warping, to replace image inpainting in solving the hole problem. On the one hand, the filtering process achieves better processing efficiency than image-inpainting methods; on the other hand, the core of the reverse flow is optimizing the depth map under the virtual view, and a depth map, compared with a texture image, consists of multiple smooth layers and is therefore better suited to handling the hole problem with filtering techniques. In this way, the method of the present invention can effectively guarantee the operational efficiency of the algorithm while improving the conversion quality of 3D video.
At least the following advantageous effects can be achieved:
The present invention is based on a reverse depth-map rendering scheme and uses two filters in the dimension transformation space to repair and optimize the holes of the depth map under the virtual view, replacing the complex image-inpainting techniques and guaranteeing the operational efficiency of the stereoscopic image rendering process, thereby achieving an efficient filtering method under dimension transformation space for the rendering processing of a reverse depth map.
Finally, it should be noted that the above is only a preferred embodiment of the present invention and is not intended to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art can still modify the technical solutions recorded in the foregoing embodiments or make equivalent replacements of some of their technical features. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (7)

1. A filtering method under dimension transformation space for the rendering processing of a reverse depth map, characterized by comprising the following steps:
Step 1: using joint filter F1, jointly filtering the original depth map D_ori with the reference-view image I_ori in the dimension-transform domain, to generate the first filtered depth map D1;
Step 2: using joint filter F2, jointly filtering, in the dimension-transform domain, the depth map D1 after forward 3D warping together with the scene gradient structure G after forward 3D warping, to generate the optimized new-view depth map D2;
Step 3: with reference to the depth map D2, performing reverse 3D texture mapping on the reference-view image I_ori to generate the new-view image I_new.
2. The filtering method under dimension transformation space for the rendering processing of a reverse depth map according to claim 1, characterized in that the filter function used by the joint filter F1 in step 1 is:

D_1[n] = (1 - a^{d_1}) D_{ori}[n] + a^{d_1} D_1[n-1]

where D_{ori}[n] denotes the pixel value at position n of a row or column of the original depth map, a ∈ (0,1) is the feedback coefficient of the diffusion function, and d_1 denotes the distance between adjacent samples x_n and x_{n-1} in the dimension-transform domain of joint filter F1.
3. The filtering method under dimension transformation space for the rendering processing of a reverse depth map according to claim 1 or 2, characterized in that the distance between adjacent samples x_n and x_{n-1} in the dimension-transform domain of the joint filter F1 is defined as:

d_1 = ct_1(x_n) - ct_1(x_{n-1})

where ct_1(u) denotes the dimension transform domain in joint filter F1, and the dimension transformation is:

ct_1(u) = \int_0^u ( 1 + (\sigma_s/\sigma_r) |I'_{ori}(x)| ) dx

where |I'_{ori}(x)| denotes the gradient intensity of the input reference-view image, and σ_s and σ_r are the spatial and range parameters of the joint filter, respectively, used to adjust the strength of the filtering; σ_s takes values in the range 200~2500 and σ_r in the range 0.1~10.
4. The filtering method under dimension transformation space for the rendering processing of a reverse depth map according to claim 1, characterized in that the filter function used by the joint filter F2 in step 2 is:

D_2[n] = (1 - a^{d_2}) D_{warp}[n] + a^{d_2} D_2[n-1]

where D_{warp}[n] denotes the pixel value at position n of a row or column of the depth map after warping to the new view, the operator ξ_warp(α_r, β) denotes forward 3D warping of an image β under the reference view based on a depth map α_r under the reference view, a ∈ (0,1) is the feedback coefficient of the diffusion function, and d_2 denotes the distance between adjacent samples x_n and x_{n-1} in the dimension-transform domain of joint filter F2.
5. The filtering method under dimension transformation space for the rendering processing of a reverse depth map according to claim 1 or 4, characterized in that the distance between adjacent samples x_n and x_{n-1} in the dimension-transform domain of the joint filter F2 is defined as:

d_2 = ct_2(x_n) - ct_2(x_{n-1})

where ct_2(u) denotes the dimension transform domain in joint filter F2, and the dimension transformation is:

ct_2(u) = \int_0^u ( 1 + (\sigma_s/\sigma_r) G_{warp}(x) ) dx

where G_warp denotes the scene gradient intensity under the new view after mapping (the combined gradient |I'_{ori}(x)| + γ |S'_{ori}(x)| forward-warped by the operator ξ_warp(α_r, ·)), the operator ξ_warp(α_r, β) denotes forward 3D warping of an image β under the reference view based on a depth map α_r under the reference view, |I'_{ori}(x)| denotes the gradient intensity of the input reference-view image, |S'_{ori}| denotes the gradient intensity of the visual-saliency distribution map corresponding to the reference-view image, σ_s and σ_r are the spatial and range parameters of the joint filter, respectively, used to adjust the strength of the filtering (σ_s takes values in 200~2500, σ_r in 0.1~10), and γ is a weight factor with values in the range 1~5.
6. The filtering method under dimension transformation space for the rendering processing of a reverse depth map according to claim 1, characterized in that the process of generating the new-view image I_new in step 3 is:

I_{new} = \xi^{-1}_{warp}(D_2, I_{ori})

where the operator ξ^{-1}_warp(α_t, β) denotes reverse 3D warping of an image β under the reference view based on a depth map α_t under the new view.
7. The filtering method under dimension transformation space for the rendering processing of a reverse depth map according to claim 2 or 4, characterized in that the filtering of the joint filters F1 and F2 is an iterative process, and, to achieve symmetric filtering, if in one iteration the filtering proceeds through the image from left to right and from top to bottom, then in the next iteration it proceeds from right to left and from top to bottom, the number of iterations being 2 to 10.
CN201810235544.4A 2018-03-21 2018-03-21 Filtering method under dimension transformation space for rendering processing of reverse depth map Active CN108234985B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810235544.4A CN108234985B (en) 2018-03-21 2018-03-21 Filtering method under dimension transformation space for rendering processing of reverse depth map

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810235544.4A CN108234985B (en) 2018-03-21 2018-03-21 Filtering method under dimension transformation space for rendering processing of reverse depth map

Publications (2)

Publication Number Publication Date
CN108234985A true CN108234985A (en) 2018-06-29
CN108234985B CN108234985B (en) 2021-09-03

Family

ID=62659857

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810235544.4A Active CN108234985B (en) 2018-03-21 2018-03-21 Filtering method under dimension transformation space for rendering processing of reverse depth map

Country Status (1)

Country Link
CN (1) CN108234985B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109544671A (en) * 2018-11-12 2019-03-29 浙江大学 Projection mapping method of video in three-dimensional scene based on screen space
CN110738626A (en) * 2019-10-24 2020-01-31 广东三维家信息科技有限公司 Rendering graph optimization method and device and electronic equipment
CN111669564A (en) * 2019-03-07 2020-09-15 阿里巴巴集团控股有限公司 Image reconstruction method, system, device and computer readable storage medium
CN112203074A (en) * 2020-12-07 2021-01-08 南京爱奇艺智能科技有限公司 Camera translation new viewpoint image generation method and system based on two iterations
CN112634339A (en) * 2019-09-24 2021-04-09 阿里巴巴集团控股有限公司 Commodity object information display method and device and electronic equipment
US11257283B2 (en) 2019-03-07 2022-02-22 Alibaba Group Holding Limited Image reconstruction method, system, device and computer-readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102831603A (en) * 2012-07-27 2012-12-19 清华大学 Method and device for carrying out image rendering based on inverse mapping of depth maps
CN103208110A (en) * 2012-01-16 2013-07-17 展讯通信(上海)有限公司 Video image converting method and device
CN103391446A (en) * 2013-06-24 2013-11-13 南京大学 Depth image optimizing method based on natural scene statistics
KR101458986B1 (en) * 2013-04-22 2014-11-13 광운대학교 산학협력단 A Real-time Multi-view Image Synthesis Method By Using Kinect
CN105103453A (en) * 2013-04-08 2015-11-25 索尼公司 Data encoding and decoding
CN106780705A (en) * 2016-12-20 2017-05-31 南阳师范学院 Suitable for the depth map robust smooth filtering method of DIBR preprocessing process
US20180027224A1 (en) * 2016-07-19 2018-01-25 Fotonation Limited Systems and Methods for Estimating and Refining Depth Maps

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103208110A (en) * 2012-01-16 2013-07-17 展讯通信(上海)有限公司 Video image converting method and device
CN102831603A (en) * 2012-07-27 2012-12-19 清华大学 Method and device for carrying out image rendering based on inverse mapping of depth maps
CN105103453A (en) * 2013-04-08 2015-11-25 索尼公司 Data encoding and decoding
KR101458986B1 (en) * 2013-04-22 2014-11-13 광운대학교 산학협력단 A Real-time Multi-view Image Synthesis Method By Using Kinect
CN103391446A (en) * 2013-06-24 2013-11-13 南京大学 Depth image optimizing method based on natural scene statistics
US20180027224A1 (en) * 2016-07-19 2018-01-25 Fotonation Limited Systems and Methods for Estimating and Refining Depth Maps
CN106780705A (en) * 2016-12-20 2017-05-31 南阳师范学院 Suitable for the depth map robust smooth filtering method of DIBR preprocessing process

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
SHUJIE LIU: "Joint Trilateral Filtering for Depth Map Compression", 《SPIE》 *
WEI LIU et al.: "A Novel Stereoscopic View Generation Approach Based on Visual Attention Analysis", 《PROCEEDINGS OF THE 36TH CHINESE CONTROL CONFERENCE》 *
WANG ZHEN: "Research on virtual view synthesis technology for free-viewpoint stereoscopic television systems" (in Chinese), 《CNKI》 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109544671A (en) * 2018-11-12 2019-03-29 浙江大学 Projection mapping method of video in three-dimensional scene based on screen space
CN109544671B (en) * 2018-11-12 2022-07-19 浙江大学 Projection mapping method of video in three-dimensional scene based on screen space
CN111669564A (en) * 2019-03-07 2020-09-15 阿里巴巴集团控股有限公司 Image reconstruction method, system, device and computer readable storage medium
US11257283B2 (en) 2019-03-07 2022-02-22 Alibaba Group Holding Limited Image reconstruction method, system, device and computer-readable storage medium
US11341715B2 (en) 2019-03-07 2022-05-24 Alibaba Group Holding Limited Video reconstruction method, system, device, and computer readable storage medium
CN111669564B (en) * 2019-03-07 2022-07-26 阿里巴巴集团控股有限公司 Image reconstruction method, system, device and computer readable storage medium
US11521347B2 (en) 2019-03-07 2022-12-06 Alibaba Group Holding Limited Method, apparatus, medium, and device for generating multi-angle free-respective image data
CN112634339A (en) * 2019-09-24 2021-04-09 阿里巴巴集团控股有限公司 Commodity object information display method and device and electronic equipment
CN110738626A (en) * 2019-10-24 2020-01-31 广东三维家信息科技有限公司 Rendering graph optimization method and device and electronic equipment
CN112203074A (en) * 2020-12-07 2021-01-08 南京爱奇艺智能科技有限公司 Camera translation new viewpoint image generation method and system based on two iterations
CN112203074B (en) * 2020-12-07 2021-03-02 南京爱奇艺智能科技有限公司 Camera translation new viewpoint image generation method and system based on two-step iteration

Also Published As

Publication number Publication date
CN108234985B (en) 2021-09-03

Similar Documents

Publication Publication Date Title
CN108234985A (en) The filtering method under the dimension transformation space of processing is rendered for reversed depth map
CN102592275B (en) Virtual viewpoint rendering method
CN103236082B (en) Towards the accurate three-dimensional rebuilding method of two-dimensional video of catching static scene
Kim et al. Multi-perspective stereoscopy from light fields
US9159154B2 (en) Image processing method and apparatus for generating disparity value
CN104954780A (en) DIBR (depth image-based rendering) virtual image restoration method applicable to high-definition 2D/3D (two-dimensional/three-dimensional) conversion
CN104506872B (en) A kind of method and device of converting plane video into stereoscopic video
Do et al. GPU-accelerated real-time free-viewpoint DIBR for 3DTV
CN106028020B (en) A kind of virtual perspective image cavity complementing method based on multi-direction prediction
CN106791770B (en) A kind of depth map fusion method suitable for DIBR preprocessing process
Lu et al. A survey on multiview video synthesis and editing
Zhao et al. Convolutional neural network-based depth image artifact removal
CN106780705B (en) Depth map robust smooth filtering method suitable for DIBR preprocessing process
CN103945209A (en) DIBR method based on block projection
Yao et al. Fast and high-quality virtual view synthesis from multi-view plus depth videos
CN107592519A (en) Depth map preprocess method based on directional filtering under a kind of dimension transformation space
CN106998460B (en) A kind of hole-filling algorithm based on depth transition and depth item total variational
CN110149508A (en) A kind of array of figure generation and complementing method based on one-dimensional integrated imaging system
Jung et al. 2D to 3D conversion with motion-type adaptive depth estimation
CN113450274B (en) Self-adaptive viewpoint fusion method and system based on deep learning
WO2022155950A1 (en) Virtual viewpoint synthesis method, electronic device and computer readable medium
CN109934863B (en) Light field depth information estimation method based on dense connection type convolutional neural network
Guo et al. Efficient image warping in parallel for multiview three-dimensional displays
Wang et al. Depth image segmentation for improved virtual view image quality in 3-DTV
Lee et al. Hole Filling in Image Conversion Using Weighted Local Gradients.

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant