CN106791770A - Depth map fusion method suitable for the DIBR preprocessing process - Google Patents


Info

Publication number
CN106791770A
CN106791770A (application CN201611185808.7A)
Authority
CN
China
Prior art keywords
map
depth
depth map
dibr
degree distribution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201611185808.7A
Other languages
Chinese (zh)
Other versions
CN106791770B (en)
Inventor
刘伟
郑扬冰
刘红钊
崔明月
张新刚
马世榜
叶铁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanyang Normal University
Original Assignee
Nanyang Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanyang Normal University filed Critical Nanyang Normal University
Priority to CN201611185808.7A priority Critical patent/CN106791770B/en
Publication of CN106791770A publication Critical patent/CN106791770A/en
Application granted granted Critical
Publication of CN106791770B publication Critical patent/CN106791770B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • G06T15/205Image-based rendering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • G06T2207/10021Stereoscopic video; Stereoscopic image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a depth map fusion method for the DIBR preprocessing process, comprising the following steps: performing gradient analysis on the original depth map to predict the hole regions that will appear in the new viewpoint image; generating an initial influence distribution map from the predicted hole regions; diffusing the initial influence distribution map in the domain transform space using the texture image; and fusing the original depth map with a pre-smoothed depth map according to the diffused influence distribution map, generating the optimized depth map. Working in the efficient domain transform space, the method fuses the original and pre-smoothed depth maps through an influence distribution map diffused along texture features, so that the optimized depth map adapts its smoothing strength to each region: the smoothing needed to eliminate holes is preserved in the hole regions, while the extra distortion caused by over-smoothing hole-free regions is avoided.

Description

Depth map fusion method suitable for the DIBR preprocessing process
Technical field
The invention belongs to the field of 3D video technology, in particular to 2D/3D video conversion, and specifically relates to a depth map fusion method suitable for the DIBR preprocessing process.
Background art
At present, three-dimensional (3D) video is gradually becoming popular; China Central Television (CCTV) launched a trial 3D channel on New Year's Day 2012, and 3D video has become a clear trend of current development. However, the shortage of video sources is the main bottleneck restricting the rise of the industry. Under these circumstances, converting 2D video to 3D video is an effective way to solve the problem.
Broadly speaking, there are two ways to render 2D video as 3D video. One directly reconstructs a left- and right-eye image pair with parallax from a single video frame by some means. The other is depth-image-based rendering (DIBR), whose conversion result adds a corresponding depth map to each frame of the original video; a display terminal with an embedded DIBR processing module then converts this into binocular stereoscopic video for viewing (see "A survey of 2D/3D film conversion techniques [J]", Liu Wei, Wu Yihong, Hu Zhanyi, Journal of Computer-Aided Design & Computer Graphics, 2012, 24(1): 14-28). Compared with the former, the latter has three inherent technical advantages: efficient compression and transmission, strong compatibility with existing 2D techniques and diverse devices, and real-time depth-of-field adjustment and fast rendering synthesis during stereoscopic video generation. It holds the dominant market share in emerging applications such as 3DTV and 3D mobile terminals, and is the direction in which 3D rendering technology is developing.
DIBR rendering is the key step of depth-map-based 2D/3D conversion methods: it uses the depth information to render a virtual stereoscopic video, finally completing the "fundamental change" from 2D to 3D. Despite its many advantages, the technique has a limitation. Because DIBR synthesizes the left- and right-eye images from the reference image according to the mapping defined by the depth map, the change of viewpoint may expose parts of the background that were occluded by foreground objects in the original image. These regions have no corresponding texture in the conversion process, so holes appear in the target image. This problem has been a DIBR research hotspot in recent years and is an important aspect of improving 3D image quality. The processing pipeline commonly adopted for it is shown in Fig. 1: a depth map preprocessing stage is added before DIBR, and an image inpainting stage after DIBR completes the hole filling.
Depth map preprocessing typically smooths the depth map with various filters, so that the rendered new viewpoint contains fewer holes and is easier to inpaint further. Such methods are computationally efficient and clearly effective, but the smoothing filter may cause geometric deformation at object edge regions of the virtual image (especially vertical edges). The depth map preprocessing stage of existing DIBR pipelines therefore cannot effectively guarantee distortion-free synthesis of the virtual image in 2D/3D video conversion, which degrades the actual conversion quality of 3D video.
Summary of the invention
In view of this, the purpose of the invention is to address the deficiency of the depth map preprocessing stage of existing DIBR techniques. An influence distribution map generated by diffusion in the domain transform space is used to fuse the original depth map with the pre-smoothed depth map, so that the optimized depth map adapts its smoothing strength to each region: the smoothing needed to eliminate holes is preserved in the hole regions, while the extra distortion caused by over-smoothing hole-free regions is avoided, thereby improving the rendering quality of the 3D virtual images.
To achieve the above purpose, the invention adopts the following technical scheme:
A depth map fusion method suitable for the DIBR preprocessing process, comprising the following steps:
A) In the original depth map D_Ori, predict the hole regions R_H that rendering will produce, using a hole prediction operator based on the depth-variation behaviour of the left- and right-eye virtual images;
B) Generate the initial influence distribution map I_f-init;
C) Diffuse the initial influence distribution map I_f-init in the domain transform space to generate the optimized influence distribution map I_f;
D) Use the optimized influence distribution map I_f to fuse the original depth map D_Ori with the pre-smoothed depth map D_Pre, generating the optimized depth map D_Fin.
Wherein, the hole prediction operator in step A) is:

R_H = { r(x, y) | η_i(x, y) − r(x, y) > α · λ_H · D_max / D_width }

η_i(x, y) = r(x+1, y) if i = l; r(x−1, y) if i = r

Wherein R_H denotes the predicted holes, r(x, y) is the depth value at coordinate (x, y) in the original depth map D_Ori, D_max is the maximum parallax in pixels set for virtual image generation, α is a normalization factor (for 8-bit grayscale images, α = 255), D_width is the image width in pixels, and λ_H is a preset threshold factor. If the newly synthesized virtual view is the left-eye view, i = l; otherwise, i = r.
Wherein, the initial influence distribution map I_f-init in step B) is specifically defined as:

I_f-init(p) = 1 − exp(−D_e(p)) if p ∈ R_H; 0 otherwise

Wherein R_H denotes the predicted holes and D_e(p) is the distance from point p to the edge of the predicted hole.
Wherein, step C) diffuses the initial influence distribution map I_f-init in the domain transform space. The diffusion function is defined as:

I_f[n] = (1 − a^d) · I_f-init[n] + a^d · I_f[n−1]

Wherein I_f-init[n] is the n-th pixel value along a row or column of the initial influence distribution map, a ∈ (0, 1) is the feedback factor of the diffusion function, and d is the distance between adjacent samples x_n and x_{n−1} in the domain transform space.
Wherein, the distance between adjacent samples x_n and x_{n−1} in the domain transform space is defined as:

d = ct(x_n) − ct(x_{n−1})
Wherein ct(u) denotes the domain transform, defined as:

ct(u) = ∫_0^u ( 1 + (σ_s / σ_r) · |I′_texture(x)| ) dx

Wherein I_texture(x) is the input texture image, |I′_texture(x)| is the gradient magnitude of the texture image, and σ_s and σ_r are the spatial and range parameters of the filter, which adjust the strength of propagation. σ_s ranges over 200~2500 and σ_r over 0.1~10.
Wherein, the diffusion is an iterative process. To achieve symmetric propagation, if one iteration propagates through the image from left to right and top to bottom, the next iteration propagates from right to left and top to bottom. The number of iterations is 2 to 10.
Wherein, the depth map fusion formula in step D) is:

D_Fin = I_f · D_Pre + (1 − I_f) · D_Ori
The beneficial effects of the invention are as follows: the influence distribution map of the invention is generated in the efficient domain transform space by diffusing the predicted holes along texture features, so it reflects the structural distribution of the scene. Performing depth fusion with this map replaces the traditional manually tuned global parameters with adaptive diffusion, which more effectively strengthens the filtering effect in the regions to be repaired while suppressing it in hole-free regions. This overcomes the depth map distortion and over-smoothing problems of conventional depth map preprocessing, and markedly improves virtual 3D rendering quality while still repairing the holes.
Brief description of the drawings
Fig. 1 is the processing flowchart of an existing DIBR system;
Fig. 2 is the flowchart of the method of the present invention;
Fig. 3 compares depth maps and virtual-image results obtained with the method of the present invention.
Detailed description of the embodiments
The invention is further described below with reference to the accompanying drawings and embodiments:
Fig. 1 shows the processing flow of an existing DIBR system. For the input original depth map, the preprocessing stage first applies smoothing filtering to the depth map, locally optimizing its structure so that most holes are prevented from appearing during rendering. Next, DIBR rendering maps the pixels of the reference image into the target image using the depth image and the calibrated camera parameters. Finally, a hole-filling method repairs the few holes that remain after rendering, and the converted left- and right-eye virtual images are output.
Among these stages, DIBR rendering is the key step of 2D/3D conversion: it describes an exact point-to-point mapping and can use the depth information to render a virtual stereoscopic video, finally completing the "fundamental change" from 2D to 3D. Despite its many advantages, the technique has a limitation. Because DIBR synthesizes the left- and right-eye images from the reference image according to the mapping defined by the depth map, the change of viewpoint may expose parts of the background that were occluded by foreground objects in the original image. These regions have no corresponding texture in the conversion process, so holes appear in the target image. This problem has been a DIBR research hotspot in recent years and is an important aspect of improving 3D image quality.
Three classes of solutions are currently in common use for this problem:
1) The layered depth video (LDV) format. By adding a new data layer, such methods fundamentally eliminate the holes caused by occlusion in the depth map. However, they require special equipment during video capture and are therefore unsuitable for 2D/3D conversion;
2) Hole filling. These methods correspond to the hole-filling stage after DIBR shown in Fig. 1. Their main idea is to choose a suitably sized texture block according to the textural features of the image and then search around the region to be repaired for the most similar matching block to substitute for it. They can repair large hole regions, but the block matching is based on greedy search and may produce obvious repair errors. They are also computationally expensive, so they usually serve only as an auxiliary to the depth map preprocessing stage, repairing the few holes that remain;
3) Depth image preprocessing. These methods correspond to the stage before DIBR shown in Fig. 1. By smoothing the discontinuous regions of the depth map (where depth changes sharply) they reduce the holes, and increasing the strength of Gaussian filtering can improve the quality of the generated stereo images. Because such methods are computationally efficient and can eliminate most potential holes in advance through local optimization of the depth map, leaving only a few holes that are easy to fill, they are an important stage of hole filling in DIBR systems. On the other hand, the filtering easily causes torsional deformation of straight object edge regions. Asymmetric smoothing filters and bilateral filters have been proposed to alleviate this problem, but filters configured by such global parameters still cannot fully avoid local over-smoothing; when the local smoothing is excessive, parts of the synthesized new-viewpoint view are still geometrically deformed.
The depth map preprocessing stage of existing DIBR systems therefore still cannot fully guarantee distortion-free synthesis of the virtual image in 2D/3D video conversion, which degrades the actual conversion quality of 3D video. For this reason, the method of the invention introduces an influence distribution map to re-fuse the preprocessed depth map with the original depth map, achieving adaptive local filtering, and performs DIBR rendering with the further optimized depth map.
The method of the invention takes as input data the texture image, the original depth image, and a pre-smoothed depth map obtained with some filtering method, and after processing generates the fused, optimized depth map. Fig. 2 is the flowchart of the method; the specific embodiments of the invention are described below with reference to Fig. 2.
If the smoothing is applied only to the discontinuous regions (where depth changes sharply) in which holes may appear, then the depth map quality of the hole-free regions is effectively preserved and the over-smoothing effect is largely suppressed. Based on this idea, the invention proposes a new depth map fusion method that achieves adaptive local smoothing through a diffused influence distribution map, specifically comprising the following steps:
A) In the original depth map D_Ori, predict the hole regions R_H that rendering will produce, using a hole prediction operator based on the depth-variation behaviour of the left- and right-eye virtual images. Suppose the depth map encodes distance so that larger values represent points nearer to the observer and smaller values represent points farther away. Then, concretely, holes in the left-eye virtual image concentrate in regions where the depth value changes sharply from small to large, and holes in the right-eye virtual image concentrate in regions where the depth value changes sharply from large to small. Based on this, the hole prediction operator is defined as:

R_H = { r(x, y) | η_i(x, y) − r(x, y) > α · λ_H · D_max / D_width }

η_i(x, y) = r(x+1, y) if i = l; r(x−1, y) if i = r
Wherein R_H denotes the predicted holes, r(x, y) is the depth value at coordinate (x, y) in the original depth map D_Ori, D_max is the maximum parallax in pixels set for virtual image generation, α is a normalization factor (e.g. α = 255 for 8-bit grayscale images), D_width is the image width in pixels, and λ_H is a preset threshold factor with value range 1~5 (λ_H = 2 in the simulation experiments). If the newly synthesized virtual view is the left-eye view, i = l; otherwise, i = r;
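As an illustration, the step-A operator can be sketched in a few lines of NumPy. The function name, the default parameter values, and the replication of the border column (so no hole is flagged at the image edge) are assumptions of this sketch, not part of the patent:

```python
import numpy as np

def predict_holes(depth, d_max, lam_h=2.0, alpha=255.0, view='l'):
    """Predict hole regions R_H from the original depth map (sketch).

    A pixel is flagged when the depth jump to its horizontal neighbour
    exceeds alpha * lam_h * d_max / width, per the step-A operator.
    """
    depth = depth.astype(np.float64)
    h, w = depth.shape
    eta = np.empty_like(depth)
    if view == 'l':              # left-eye view: eta_l(x, y) = r(x+1, y)
        eta[:, :-1] = depth[:, 1:]
        eta[:, -1] = depth[:, -1]
    else:                        # right-eye view: eta_r(x, y) = r(x-1, y)
        eta[:, 1:] = depth[:, :-1]
        eta[:, 0] = depth[:, 0]
    threshold = alpha * lam_h * d_max / w
    return (eta - depth) > threshold
```

For a left-eye view this flags pixels whose right neighbour is much nearer to the observer, matching the small-to-large depth transition described above.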
B) Generate the initial influence distribution map I_f-init. To eliminate the influence of holes, the influence distribution map defined in the invention is spread out around the predicted hole regions. Based on this, the initial influence distribution map I_f-init is specifically defined as:

I_f-init(p) = 1 − exp(−D_e(p)) if p ∈ R_H; 0 otherwise
Wherein R_H denotes the predicted holes and D_e(p) is the distance from point p to the edge of the predicted hole. From the definition of D_e(p) it can be seen that the closer a region is to the centre of a hole, the larger the contribution of the corresponding pre-smoothed depth map to eliminating the hole; in hole-free regions, the pre-smoothed depth map contributes very little;
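Under the same definitions, the initial influence map of step B might be computed as below. The 4-connected BFS grid distance is a simple stand-in for the exact edge distance D_e(p), and the function name is an assumption of this sketch:

```python
import numpy as np
from collections import deque

def initial_influence_map(hole_mask):
    """Initial influence map I_f-init (sketch of step B).

    Inside the predicted hole region R_H the value 1 - exp(-D_e(p)) grows
    towards the hole centre; outside R_H it is zero.  D_e(p) is taken here
    as the 4-connected grid distance to the nearest non-hole pixel.
    """
    h, w = hole_mask.shape
    dist = np.full((h, w), np.inf)
    q = deque()
    for y in range(h):               # seed the BFS with all non-hole pixels
        for x in range(w):
            if not hole_mask[y, x]:
                dist[y, x] = 0.0
                q.append((y, x))
    while q:                         # unit-weight BFS: distance to hole edge
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny, nx] > dist[y, x] + 1:
                dist[ny, nx] = dist[y, x] + 1
                q.append((ny, nx))
    return np.where(hole_mask, 1.0 - np.exp(-dist), 0.0)
```

As the text notes, the map rises towards the hole centre (where D_e is largest) and is identically zero outside the predicted holes.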
C) Diffuse the initial influence distribution map I_f-init in the domain transform space.
The initial influence distribution map accounts only for the direct distribution of the holes. If an edge line in the virtual image passes near a hole, the smoothing of the hole may affect the edge and deform it. To solve this problem, the initial influence distribution map must be diffused appropriately according to the textural features of the image, so that regions with structure similar to the hole's neighbourhood receive similar influence; in this way local smoothing is achieved while deformation is prevented;
The diffusion function is defined as:

I_f[n] = (1 − a^d) · I_f-init[n] + a^d · I_f[n−1]
Wherein I_f-init[n] is the n-th pixel value along a row or column of the initial influence distribution map, a ∈ (0, 1) is the feedback factor of the diffusion function, and d = ct(x_n) − ct(x_{n−1}) is the distance between adjacent samples x_n and x_{n−1} in the domain transform space. The domain transform used here is the transformed space obtained by the method proposed by Eduardo S. L. Gastal et al. in the 2011 article "Domain transform for edge-aware image and video processing". Its greatest advantage is that it reduces a multi-dimensional space to a one-dimensional one while preserving the textural features of the image, which greatly improves computational efficiency. Concretely, ct(u) denotes the domain transform, defined as:

ct(u) = ∫_0^u ( 1 + (σ_s / σ_r) · |I′_texture(x)| ) dx
Wherein I_texture(x) is the input texture image, |I′_texture(x)| is the gradient magnitude of the texture image, and σ_s and σ_r are the spatial and range parameters of the filter, which adjust the strength of propagation. σ_s ranges over 200~2500 and σ_r over 0.1~10;
It can be seen that the domain transform takes into account the scene structure reflected in the texture image, which becomes the key evidence guiding the diffusion of the initial influence distribution map. The overall diffusion behaves like a bilateral filter: the influence distribution map diffuses further around the holes following the scene features of the image. But because the dimensionality is reduced in the domain transform space, its running efficiency is far higher than that of a traditional bilateral filter, which operates in a two-dimensional space; the transform defined above greatly improves efficiency but yields only a one-dimensional filter. To achieve the same effect, the diffusion is realized iteratively in the specific embodiment. Moreover, because the transform defined above is asymmetric, symmetric propagation requires that if one iteration propagates through the image from left to right and top to bottom, the next iteration propagates from right to left and top to bottom. The number of iterations is 2 to 10; in general the diffusion effect stabilizes after 3 iterations, and 3 iterations were used in the simulation experiments;
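The row-wise recursion above can be sketched as follows, for a single horizontal left-to-right plus right-to-left pass pair per iteration. The function and parameter names are assumptions, and the feedback factor a is taken as a fixed input rather than derived from a per-iteration schedule:

```python
import numpy as np

def diffuse_influence(i_init, texture, a=0.7, sigma_s=500.0, sigma_r=1.0,
                      iterations=3):
    """Diffuse I_f-init along image rows under the domain transform (sketch).

    Implements I_f[n] = (1 - a^d) * I_f-init[n] + a^d * I_f[n-1], where
    d = ct(x_n) - ct(x_{n-1}) is the domain-transform distance between
    adjacent samples: diffusion is strong across flat texture and stops
    at strong texture edges.  Passes alternate direction for symmetry.
    """
    tex = texture.astype(np.float64)
    # d = ct(x_n) - ct(x_{n-1}) = 1 + (sigma_s / sigma_r) * |I'_texture|
    d = 1.0 + (sigma_s / sigma_r) * np.abs(np.diff(tex, axis=1))
    w = a ** d                       # per-edge feedback weight a^d
    out = i_init.astype(np.float64).copy()
    cols = out.shape[1]
    for _ in range(iterations):
        for x in range(1, cols):                # left -> right pass
            out[:, x] = (1 - w[:, x - 1]) * out[:, x] + w[:, x - 1] * out[:, x - 1]
        for x in range(cols - 2, -1, -1):       # right -> left pass
            out[:, x] = (1 - w[:, x]) * out[:, x] + w[:, x] * out[:, x + 1]
    return out
```

Over a flat texture the influence spreads to both neighbours of a hole; a strong texture edge makes a^d collapse to zero and blocks the propagation, which is exactly the edge-aware behaviour the text describes.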
D) Use the optimized influence distribution map I_f to fuse the original depth map D_Ori with the pre-smoothed depth map D_Pre, generating the optimized depth map D_Fin. The depth map fusion formula is:

D_Fin = I_f · D_Pre + (1 − I_f) · D_Ori
Using the diffused influence distribution map, the smoothed depth map and the original depth map are thus re-fused, retaining the smoothed regions that help eliminate holes. Because the codomain of the influence distribution map I_f may change during diffusion, it must be normalized to the range [0, 1] in the implementation;
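The fusion of step D, with the normalization the text requires, can be sketched as follows (function and argument names are my own):

```python
import numpy as np

def fuse_depth(d_ori, d_pre, influence):
    """Step-D sketch: fuse original and pre-smoothed depth maps with the
    diffused influence map, normalising it to [0, 1] first."""
    i_f = influence.astype(np.float64)
    lo, hi = i_f.min(), i_f.max()
    if hi > lo:                      # normalise codomain to [0, 1]
        i_f = (i_f - lo) / (hi - lo)
    # D_Fin = I_f * D_Pre + (1 - I_f) * D_Ori
    return i_f * d_pre + (1.0 - i_f) * d_ori
```

Where the influence is 1 the pre-smoothed depth wins outright; where it is 0 the original depth is kept untouched, which is the adaptive local smoothing the method aims for.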
The fused, optimized depth map suppresses the creation of most holes, so DIBR rendering is finally performed with it and the texture image; the few remaining holes are then filled with a simple interpolation method, and the left- and right-eye virtual images are generated.
The experimental verification of the DIBR virtual image restoration method of the invention follows.
1) Experimental conditions:
The experiments were carried out on a system with an Intel Core™2 Quad Q9400 CPU @ 2.66 GHz, 4 GB of RAM, and Windows 7;
2) Experimental content:
Fig. 3 illustrates the implementation details of an experiment following the method of the invention and the improvement it brings to 3D rendering quality.
Fig. 3 shows the processing of one group of experimental images. Fig. 3(a) is the original texture image; here we use it directly as the right-eye virtual image, so the left-eye virtual image must be generated by the DIBR method. Fig. 3(b) is the left-eye virtual image rendered directly by DIBR without depth map preprocessing or hole filling; the discontinuous black parts of the newly generated virtual image are holes. It can be seen that the holes are mainly distributed along the left edges of foreground objects, where the depth value in the original depth map, Fig. 3(c), changes sharply from small to large. Fig. 3(d) is the pre-smoothed depth map obtained by bilateral filtering; the smoothed depth map has markedly fewer regions of sharp depth change and can therefore substantially suppress hole creation. But as Fig. 3(b) shows, the holes concentrate mainly at the left edges of foreground objects, so the smoothing on the right side of foreground objects in Fig. 3(d) contributes nothing to hole elimination. The diffused influence distribution map proposed by the method of the invention is shown in Fig. 3(e); its distribution and strength are centred on the hole regions and diffuse along the structural features reflected in the image. For example, the texture on the wall in the background to the left of the man runs vertically, while the background blinds to the left of the woman run horizontally; both features are well reflected in the diffused influence distribution map. Fig. 3(f) is the optimized depth map obtained by re-fusing the original depth map, Fig. 3(c), with the pre-smoothed depth map, Fig. 3(d), based on Fig. 3(e). Compared with Fig. 3(d), the over-smoothed regions on the right side of foreground objects are suppressed, while the smoothing of the hole regions is effectively preserved according to the surrounding image features. Fig. 3(g) and Fig. 3(h) are the DIBR rendering results obtained with Fig. 3(d) and Fig. 3(f), respectively, as the input depth map. The over-smoothing of hole-free regions in Fig. 3(d) produces obvious image distortion in the oval-marked regions of Fig. 3(g), whereas the image rendered from the depth map processed by the method of the invention, Fig. 3(h), shows no such problem. This demonstrates the improvement in 3D visual quality that the proposed depth map fusion method brings to DIBR depth map preprocessing.
Finally, it should be noted that the above embodiments merely illustrate, rather than limit, the technical scheme of the invention. Any modifications or equivalent substitutions that those of ordinary skill in the art make to the technical scheme of the invention without departing from its spirit and scope shall all fall within the scope of the claims of the invention.

Claims (7)

1. A depth map fusion method for the DIBR preprocessing process, characterised by comprising the following steps:
A) in the original depth map D_Ori, predicting the hole regions R_H that rendering will produce, using a hole prediction operator based on the depth-variation behaviour of the left- and right-eye virtual images;
B) generating the initial influence distribution map I_f-init;
C) diffusing the initial influence distribution map I_f-init in the domain transform space to generate the optimized influence distribution map I_f;
D) using the optimized influence distribution map I_f to fuse the original depth map D_Ori with the pre-smoothed depth map D_Pre, generating the optimized depth map D_Fin.
2. The depth map fusion method for the DIBR preprocessing process according to claim 1, characterised in that the hole prediction operator in step A) is:

R_H = { r(x, y) | η_i(x, y) − r(x, y) > α · λ_H · D_max / D_width }

η_i(x, y) = r(x+1, y) if i = l; r(x−1, y) if i = r

wherein R_H denotes the predicted holes, r(x, y) is the depth value at coordinate (x, y) in the original depth map D_Ori, D_max is the maximum parallax in pixels set for virtual image generation, α is a normalization factor (for 8-bit grayscale images, α = 255), D_width is the image width in pixels, and λ_H is a preset threshold factor; if the newly synthesized virtual view is the left-eye view, i = l; otherwise, i = r.
3. The depth map fusion method for the DIBR preprocessing process according to claim 1, characterised in that the initial influence distribution map I_f-init in step B) is specifically defined as:

I_f-init(p) = 1 − exp(−D_e(p)) if p ∈ R_H; 0 otherwise

wherein R_H denotes the predicted holes and D_e(p) is the distance from point p to the edge of the predicted hole.
4. The depth map fusion method for the DIBR preprocessing process according to claim 1, characterised in that the diffusion of the initial influence distribution map I_f-init in the domain transform space in step C) uses the diffusion function defined as:

I_f[n] = (1 − a^d) · I_f-init[n] + a^d · I_f[n−1]

wherein I_f-init[n] is the n-th pixel value along a row or column of the initial influence distribution map, a ∈ (0, 1) is the feedback factor of the diffusion function, and d is the distance between adjacent samples x_n and x_{n−1} in the domain transform space.
5. The depth map fusion method for the DIBR preprocessing process according to claim 4, characterised in that
the distance between adjacent samples x_n and x_{n−1} in the domain transform space is defined as:

d = ct(x_n) − ct(x_{n−1})

wherein ct(u) denotes the domain transform, defined as:

ct(u) = ∫_0^u ( 1 + (σ_s / σ_r) · |I′_texture(x)| ) dx

wherein I_texture(x) is the input texture image, |I′_texture(x)| is the gradient magnitude of the texture image, and σ_s and σ_r are the spatial and range parameters of the filter, which adjust the strength of propagation; σ_s ranges over 200~2500 and σ_r over 0.1~10.
6. The depth map fusion method for a DIBR preprocessing process as claimed in claim 1 or claim 4, characterized in that the diffusion is an iterative process; to achieve symmetric propagation, if one iteration propagates through the image from left to right and from top to bottom, the next iteration propagates from right to left and from bottom to top; the number of iterations is 2 to 10.
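The alternating passes of claim 6 can be sketched for a single row as follows; pairing a left-to-right pass with a right-to-left pass inside each iteration is one way to realize the symmetric propagation, and the function name `diffuse_symmetric` is hypothetical:

```python
import numpy as np

def diffuse_symmetric(i_init, d, a, iterations=3):
    """Sketch of the symmetric iterative diffusion of claim 6 for one
    row: each iteration applies the claim-4 recursion left to right,
    then right to left (the claim specifies 2-10 iterations)."""
    out = np.array(i_init, dtype=float)
    for _ in range(iterations):
        for n in range(1, len(out)):           # left -> right pass
            w = a ** d[n]
            out[n] = (1.0 - w) * out[n] + w * out[n - 1]
        for n in range(len(out) - 2, -1, -1):  # right -> left pass
            w = a ** d[n + 1]
            out[n] = (1.0 - w) * out[n] + w * out[n + 1]
    return out
```

Running the recursion in both directions removes the one-sided bias of a single causal pass, so influence spreads symmetrically away from each hole.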
7. The depth map fusion method for a DIBR preprocessing process as claimed in claim 1, characterized in that the depth map fusion formula described in step D) is:
$$D_{Fin} = I_f\, D_{Pre} + (1 - I_f)\, D_{Ori}$$
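The claim-7 fusion is a per-pixel linear blend; a minimal sketch (the function name `fuse_depth` is hypothetical):

```python
import numpy as np

def fuse_depth(d_ori, d_pre, i_f):
    """Sketch of claim 7: D_Fin = I_f * D_Pre + (1 - I_f) * D_Ori.
    Blends the preprocessed depth map D_Pre into the original depth
    map D_Ori according to the influence-degree distribution map I_f,
    whose values lie in [0, 1]."""
    return i_f * d_pre + (1.0 - i_f) * d_ori
```

Where I_f is 0 (far from any predicted hole) the original depth is kept unchanged; where I_f approaches 1 (inside predicted holes) the preprocessed depth dominates.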
CN201611185808.7A 2016-12-20 2016-12-20 A kind of depth map fusion method suitable for DIBR preprocessing process Expired - Fee Related CN106791770B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611185808.7A CN106791770B (en) 2016-12-20 2016-12-20 A kind of depth map fusion method suitable for DIBR preprocessing process


Publications (2)

Publication Number Publication Date
CN106791770A true CN106791770A (en) 2017-05-31
CN106791770B CN106791770B (en) 2018-08-10

Family

ID=58896179

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611185808.7A Expired - Fee Related CN106791770B (en) 2016-12-20 2016-12-20 A kind of depth map fusion method suitable for DIBR preprocessing process

Country Status (1)

Country Link
CN (1) CN106791770B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107592519A (en) * 2017-09-30 2018-01-16 南阳师范学院 Depth map preprocess method based on directional filtering under a kind of dimension transformation space
CN109951705A (en) * 2019-03-15 2019-06-28 武汉大学 A kind of reference frame synthetic method and device towards Vehicle Object coding in monitor video
CN112203074A (en) * 2020-12-07 2021-01-08 南京爱奇艺智能科技有限公司 Camera translation new viewpoint image generation method and system based on two iterations
US11341615B2 (en) * 2017-09-01 2022-05-24 Sony Corporation Image processing apparatus, image processing method, and moving body to remove noise in a distance image

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090116732A1 (en) * 2006-06-23 2009-05-07 Samuel Zhou Methods and systems for converting 2d motion pictures for stereoscopic 3d exhibition
CN102307312A (en) * 2011-08-31 2012-01-04 四川虹微技术有限公司 Method for performing hole filling on destination image generated by depth-image-based rendering (DIBR) technology
CN102595167A (en) * 2012-03-07 2012-07-18 中国科学院自动化研究所 Depth uniformization method and device for 2D/3D video conversion
US20120274626A1 (en) * 2011-04-29 2012-11-01 Himax Media Solutions, Inc. Stereoscopic Image Generating Apparatus and Method
WO2013067441A1 (en) * 2011-11-02 2013-05-10 Google Inc. Depth-map generation for an input image using an example approximate depth-map associated with an example similar image
KR20130067474A (en) * 2011-12-14 2013-06-24 연세대학교 산학협력단 Hole filling method and apparatus
CN103248909A (en) * 2013-05-21 2013-08-14 清华大学 Method and system of converting monocular video into stereoscopic video
CN103957402A (en) * 2014-05-07 2014-07-30 四川虹微技术有限公司 Real-time full-high-definition 2D-to-3D system line reading and writing time sequence design method
CN104052990A (en) * 2014-06-30 2014-09-17 山东大学 Method and device for fully automatically converting two-dimension into three-dimension based on depth clue fusion
CN104240275A (en) * 2013-06-13 2014-12-24 深圳深讯和科技有限公司 Image repairing method and device
CN104506872A (en) * 2014-11-26 2015-04-08 深圳凯澳斯科技有限公司 Method and device for converting planar video into stereoscopic video
CN104751508A (en) * 2015-03-14 2015-07-01 杭州道玄影视科技有限公司 Full-automatic rapid generating and completing method for new view in manufacturing of 3D film
CN104954780A (en) * 2015-07-01 2015-09-30 南阳师范学院 DIBR (depth image-based rendering) virtual image restoration method applicable to high-definition 2D/3D (two-dimensional/three-dimensional) conversion
KR101618776B1 (en) * 2015-02-11 2016-05-12 광주과학기술원 Method for Enhancing 3-Dimensional Depth Image


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XU Xiaoyan et al.: "View Synthesis Based on Depth Image Rendering", Journal of System Simulation (《系统仿真学报》) *


Also Published As

Publication number Publication date
CN106791770B (en) 2018-08-10

Similar Documents

Publication Publication Date Title
CN106791770B (en) A kind of depth map fusion method suitable for DIBR preprocessing process
CN108234985B (en) Filtering method under dimension transformation space for rendering processing of reverse depth map
CN104954780B (en) A kind of DIBR virtual image restorative procedure suitable for the conversion of high definition 2D/3D
CN102930530B (en) Stereo matching method of double-viewpoint image
CN103905813B (en) Based on the DIBR hole-filling method of background extracting and divisional reconstruction
Hosni et al. Temporally consistent disparity and optical flow via efficient spatio-temporal filtering
CN103927727B (en) Method for converting scalar image into vector image
CN104065946B (en) Based on the gap filling method of image sequence
CN104850847B (en) Image optimization system and method with automatic thin face function
CN111899295B (en) Monocular scene depth prediction method based on deep learning
CN104378619B (en) A kind of hole-filling algorithm rapidly and efficiently based on front and back's scape gradient transition
Hung et al. Consistent binocular depth and scene flow with chained temporal profiles
CN106204461B (en) In conjunction with the compound regularized image denoising method of non local priori
US20150195510A1 (en) Method of integrating binocular stereo video scenes with maintaining time consistency
CN106028020B (en) A kind of virtual perspective image cavity complementing method based on multi-direction prediction
Seo et al. Mixnerf: Modeling a ray with mixture density for novel view synthesis from sparse inputs
CN104506872A (en) Method and device for converting planar video into stereoscopic video
CN102881018A (en) Method for generating depth maps of images
Pham et al. Efficient spatio-temporal local stereo matching using information permeability filtering
CN103218771A (en) Parameter self-adaptation selecting method based on autoregressive model depth recovery
CN107592519A (en) Depth map preprocess method based on directional filtering under a kind of dimension transformation space
CN104661014B (en) The gap filling method that space-time combines
CN112991421B (en) Robot vision stereo matching method
CN106780705A (en) Suitable for the depth map robust smooth filtering method of DIBR preprocessing process
Yang et al. Image defogging based on amended dark channel prior and 4‐directional L1 regularisation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180810

Termination date: 20191220