CN106060509B - Introduce the free view-point image combining method of color correction - Google Patents
- Publication number
- CN106060509B (application CN201610334492.7A)
- Authority
- CN
- China
- Prior art keywords
- view
- virtual view
- virtual
- occlusion areas
- depth
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/122—Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/111—Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a free-viewpoint image synthesis method that introduces color correction, mainly to solve two problems of existing free-viewpoint synthesis techniques: color discontinuity in the composite image and blurred hole edges. The implementation steps are: input the left and right viewpoint views and their corresponding depth maps, and obtain left and right virtual views by 3D warping; synthesize the non-occluded region of the intermediate virtual view from the left and right virtual views according to their positional relationship; substitute the color difference between the virtual-view occlusion regions with the color difference between the virtual-view background regions, and obtain color-corrected occlusion regions using a histogram matching algorithm; fuse the non-occluded region with the color-corrected occlusion regions, obtaining an intermediate view image that still contains holes; fill the holes of that intermediate view image layer by layer, obtaining the final synthesized virtual view. The invention improves the quality of the synthesized virtual image and the viewing comfort of 3D video, and is applicable to stereoscopic multimedia.
Description
Technical field
The invention belongs to the technical field of image processing, and more particularly relates to a free-viewpoint image synthesis method applicable to stereoscopic multimedia.
Background technology
In recent years, stereoscopic multimedia (3DTV) has been widely recognized by consumers for the realistic, lifelike visual environment it provides and has gradually penetrated the multimedia market, and free-viewpoint synthesis, as a core technology of the 3DTV field, has been studied by a great many researchers. Because 3D video imposes requirements on the viewing position, i.e., different positions should receive different parallax (depth) information, each viewing position would in principle require a corresponding camera position for scene capture. Since stereoscopic cameras are expensive and shooting conditions are limited, omnidirectional capture is too difficult to realize. Free-viewpoint image synthesis effectively solves this technical difficulty and is of great importance for enriching stereoscopic video resources and for the development of the 3DTV field.
Free-viewpoint image synthesis is a technique that synthesizes an intermediate-viewpoint virtual view from the left and right viewpoint views and their depth maps. In the process of synthesizing the new viewpoint image, previously occluded regions are newly exposed in the field of view because of the viewpoint change, so the left and right viewpoint images must be used together to fill these newly exposed regions. In 2009, Y. Mori et al. first proposed a comparatively systematic free-viewpoint synthesis method (Y. Mori, N. Fukushima, T. Yendo, T. Fujii and M. Tanimoto, View generation with 3D warping using depth information for FTV, Signal Processing: Image Communication, 24, 65-72, 2009), which is broadly divided into the following four steps:
1) 3D warping: 3D warping is essentially a projection process. Mori adopts forward projection: based on the spatial transformation relation, the left and right views and their depth images are projected to the target viewpoint position, obtaining the virtual left and right views of the target viewpoint.
2) Rounding-error removal: during forward 3D projection, rounding the coordinates causes isolated missing pixels in the warped virtual view; the erroneous pixels must be detected and filled with surrounding pixels to remove them.
3) Left-right virtual view synthesis: the left and right virtual views are fused according to the positional relationship to obtain the non-occluded region, and the occlusion regions newly exposed on the left and right sides of the intermediate view are filled with the left and right virtual views respectively, obtaining the target virtual view.
4) Image inpainting: because of depth discontinuities, another form of pixel loss, called holes, also arises during 3D warping; the holes must be filled with image inpainting algorithms.
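The forward 3D warping of step 1 can be sketched as follows. This is an illustrative simplification, not the patent's projection-matrix formulation: it assumes rectified cameras with horizontal parallax only, so warping reduces to a per-pixel disparity shift; the function name and the `shift_scale` factor (mapping depth to disparity) are hypothetical.

```python
import numpy as np

def forward_warp(view, depth, shift_scale):
    """Warp `view` toward a virtual viewpoint by shifting each pixel
    horizontally by a depth-derived disparity. Assumes a rectified,
    horizontal-parallax-only setup; a real implementation would also
    resolve overlaps with a z-buffer (omitted here)."""
    h, w, _ = view.shape
    virt = np.zeros_like(view)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            d = int(round(shift_scale * depth[y, x]))  # disparity of this pixel
            xv = x + d
            if 0 <= xv < w:
                virt[y, xv] = view[y, x]
                filled[y, xv] = True
    # pixels never written are the newly exposed (occlusion/crack) regions
    return virt, filled
```

The unwritten pixels in `filled` correspond directly to the newly exposed regions that steps 3 and 4 must fill.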
For the rounding-error problem in 3D warping, K. J. Oh et al. proposed a backward projection strategy that projects from the target viewpoint position to the left and right viewpoint positions (K. J. Oh, S. Yea, A. Vetro and Y. S. Ho, Virtual View Synthesis Method and Self-Evaluation Metrics for Free Viewpoint Television and 3D Video, International Journal of Imaging Systems and Technology, 20, 378-390, 2010). The rounding error is taken at the left/right viewpoint coordinates, ensuring that every target-view pixel in the non-occluded regions has a corresponding mapped pixel, which effectively prevents missing-pixel errors.
In left-right virtual view synthesis, the newly exposed occlusion regions must be filled using both the left and right views, but the left and right shooting conditions cannot be perfectly consistent, so the two views differ in color, brightness and saturation, which produces color discontinuity in the composite image. For this problem, K. J. Oh et al. also proposed a histogram matching algorithm in the same paper: one of the two views is taken as the main view and the other as the auxiliary view. First, the intermediate virtual view is synthesized from the main and auxiliary views according to the positional relationship; then the histograms of the virtual view and the main view are computed, and by histogram matching the virtual view is given the same color characteristics as the main view; finally the occlusion regions in the virtual view are filled with the main and auxiliary views. This method effectively removes the color discontinuity between the composite image and the main view, but the color discontinuity between the composite image and the auxiliary view remains quite obvious.
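Histogram matching of the kind referred to above is commonly implemented by mapping gray levels through cumulative distribution functions (CDFs). The following is a generic single-channel sketch of that standard technique, not the specific procedure of either paper; the function name is illustrative.

```python
import numpy as np

def match_histogram(src, ref_hist):
    """Remap the gray levels of uint8 image `src` so that its histogram
    approximates `ref_hist` (a length-256 array of target bin counts),
    using the standard CDF-matching lookup table."""
    src_hist = np.bincount(src.ravel(), minlength=256)
    src_cdf = np.cumsum(src_hist) / max(src_hist.sum(), 1)
    ref_cdf = np.cumsum(ref_hist) / max(ref_hist.sum(), 1)
    # for each source level, pick the reference level with the closest CDF value
    lut = np.searchsorted(ref_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return lut[src]
```

Matching an image against its own histogram leaves it unchanged, which is a convenient sanity check for the lookup-table construction.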
Hole filling is a major issue both in free-viewpoint image synthesis and in 2D-to-3D conversion. K. J. Oh observed that holes originate from the background (K. J. Oh, S. Yea and Y. S. Ho, Hole filling method using depth based in-painting for view synthesis in free viewpoint television and 3-D video, Proc. Picture Coding Symposium, 1-4, 2009), separated foreground from background according to the depth information, and then filled the holes with background information, but the concrete filling scheme is neither exhaustive nor ideal. M. Solh proposed a simple and efficient layer-by-layer hole-filling scheme (M. Solh and G. AlRegib, Hierarchical hole-filling for depth-based view synthesis in FTV and 3D video, IEEE Journal on Selected Topics in Signal Processing, 6, 495-504, 2012); its shortcoming is that it ignores the necessity of filling holes from the background, which blurs the edges of the filled regions.
The content of the invention
The purpose of the present invention is, in view of the deficiencies of the above prior art, to propose a free-viewpoint image synthesis method introducing color correction, so as to completely eliminate the color discontinuity in the composite image and improve the definition of the filled-region edges.
The technical scheme is: perform color-correction-based view synthesis on the left and right views after 3D warping, and perform layer-by-layer depth-based hole filling on the synthesized virtual view, obtaining a color-continuous, high-quality intermediate-viewpoint view. The steps include the following:
1) Input the left and right viewpoint views and their corresponding depth maps; based on the positional relationship and the projection equations, project the left and right views and depth maps onto the intermediate-viewpoint plane, obtaining the left virtual view W_L, the right virtual view W_R, the left virtual depth map D_L and the right virtual depth map D_R;
2) Make the occlusion region W_L' of the left virtual view coincide with the occlusion region W_R' of the right virtual view, and make the non-occluded region M_L of the left virtual view coincide with the non-occluded region M_R of the right virtual view;
3) Fuse the non-occluded region M_L of the left virtual view and the non-occluded region M_R of the right virtual view by weighting, obtaining the non-occluded region part M of the intermediate virtual view;
4) Fuse the non-occluded region N_L of the left virtual depth map and the non-occluded region N_R of the right virtual depth map by weighting, obtaining the non-occluded region N of the intermediate virtual depth map;
5) Merge the non-occluded region N of the intermediate virtual depth map with the occlusion region D_L' of the left virtual depth map and the occlusion region D_R' of the right virtual depth map, obtaining the final intermediate virtual depth map A_0;
6) Perform image segmentation on the non-occluded region M of the intermediate virtual view, separating the foreground M_f from the background M_b;
7) Compute the histograms of the non-occluded-region background and of the occlusion regions:
7a) From the segmented background region M_b, find the corresponding background region M_Lb in the left virtual view and the background region M_Rb in the right virtual view, and compute the statistic histogram H_b of the intermediate virtual view background M_b, the statistic histogram H_Lb of the left virtual view background M_Lb, and the statistic histogram H_Rb of the right virtual view background M_Rb;
7b) Compute the statistic histogram H_L' of the left virtual view occlusion region W_L' and the statistic histogram H_R' of the right virtual view occlusion region W_R';
8) Compute the differences between the background-region histograms and substitute them for the differences between the occlusion-region histograms, obtaining the statistic histogram C_L of the left-side occlusion region and the statistic histogram C_R of the right-side occlusion region of the intermediate virtual view;
9) Using a histogram matching algorithm, match the statistic histogram H_L' of the left virtual view occlusion region to the statistic histogram C_L of the left-side occlusion region of the intermediate virtual view, obtaining the color-corrected left-side occlusion region Cf_L of the intermediate virtual view; similarly, match the statistic histogram H_R' of the right virtual view occlusion region to the statistic histogram C_R of the right-side occlusion region of the intermediate virtual view, obtaining the color-corrected right-side occlusion region Cf_R;
10) Fuse the non-occluded region M of the intermediate virtual view with the left-side occlusion region Cf_L and the right-side occlusion region Cf_R, obtaining the new intermediate virtual view B_0;
11) According to the depth information in the virtual depth map A_0, select background pixels and downsample A_0 and the intermediate virtual view B_0 layer by layer, obtaining the downsampled virtual depth map A_k and virtual view B_k of each layer, until the virtual depth map A_S and virtual view B_S of the final layer S contain no hole points;
12) Starting from layer S, fill the holes in the downsampled virtual views B_k' layer by layer upward, obtaining the repaired image F_k' of each layer, until the repaired image F_0 of the initial layer, i.e. the final free-viewpoint image, is obtained.
Compared with the prior art, the present invention has the following features:
1. The present invention performs color correction using statistical theory and the substitution idea: the color difference between the non-occluded regions of the intermediate view and the left/right views is used to reflect the color difference between the occlusion regions of the intermediate view and the left/right views, thereby solving the color discontinuity between the occlusion regions and the non-occluded region of the synthesized virtual view.
2. The present invention uses an image segmentation algorithm to separate foreground from background in the non-occluded region, and reflects the color difference between occlusion regions with the color difference between the background regions of the composite image and the left/right views. Since the occlusion regions originate from the background, reflecting their color difference with that of the background regions is more accurate and reasonable.
3. The present invention uses a histogram matching algorithm to match the histograms of the occlusion regions of the original left and right views to the color-corrected occlusion-region histograms, so the reconstructed occlusion-region image is natural, shows no color difference with the non-occluded region, and joins it seamlessly.
4. The present invention uses a depth-based layer-by-layer hole-filling algorithm that, guided by the depth information, purposefully selects background neighborhood pixels to fill missing pixels; this accurate filling effectively improves the image quality of the synthesized virtual view.
Simulation results show that the present invention, which combines the histogram-matching-based color correction algorithm with the depth-based layer-by-layer hole-filling algorithm for virtual view synthesis, obtains realistic and natural composite images and constitutes a complete free-viewpoint view synthesis algorithm that can markedly improve viewing comfort.
Brief description of the drawings
Fig. 1 is the overall flowchart of the implementation of the present invention;
Fig. 2 is the sub-flowchart of the depth-based layer-by-layer hole filling in the present invention;
Fig. 3 shows the test images used in the simulation experiments;
Fig. 4 shows, for the Ballet test set, the comparison between the ground truth and the free-viewpoint images synthesized by the present invention and by two existing typical methods;
Fig. 5 shows, for the Breakdancing test set, the comparison between the ground truth and the free-viewpoint images synthesized by the present invention and by two existing typical methods.
Embodiment
The embodiments and effects of the present invention are described in further detail below with reference to the accompanying drawings.
Referring to Fig. 1, the embodiment of the present invention is as follows:
Step 1. Input the left and right viewpoint views and their corresponding depth maps and perform 3D warping.
1a) Input the left viewpoint view L to be synthesized and its corresponding left viewpoint depth map L_D, and the right viewpoint view R and its corresponding right viewpoint depth map R_D;
1b) Perform backward 3D warping on them based on the positional relationship and the projection equations: project the left viewpoint view L onto the intermediate-viewpoint plane, obtaining the left virtual view W_L; project the right viewpoint view R onto the intermediate-viewpoint plane, obtaining the right virtual view W_R; project the left viewpoint depth map L_D onto the intermediate-viewpoint plane, obtaining the left virtual depth map D_L; and project the right viewpoint depth map R_D onto the intermediate-viewpoint plane, obtaining the right virtual depth map D_R.
The left and right viewpoint views, left and right viewpoint depth maps, position information and projection matrices used here all come from the database provided by Microsoft Research: Microsoft Research, Image-Based Realities - 3D Video Download, http://research.microsoft.com/ivm/3DVideoDownload/.
Step 2. Make the occlusion regions of the left and right virtual views coincide, and make the non-occluded regions of the left and right virtual views coincide.
Because of the viewpoint change, newly exposed regions appear in the left virtual view W_L and the right virtual view W_R after 3D warping, and the originally input views contain no image information for these regions; these newly exposed regions with missing image information are the occlusion regions. If a block of occlusion region appears in the left virtual view, the image information of the symmetric region in the right view is artificially erased; if a block of occlusion region appears in the right virtual view, the image information of the symmetric region in the left view is artificially erased. The left and right virtual views then have coinciding occlusion regions W_L' and W_R', and also coinciding non-occluded regions M_L and M_R.
Step 3. Synthesize the non-occluded region M of the intermediate virtual view.
3a) Based on the positional relationship given in the database, compute the geometric distance t_L from the left viewpoint to the intermediate viewpoint and the geometric distance t_R from the right viewpoint to the intermediate viewpoint; from these distances, compute the weight coefficient α of the left virtual view non-occluded region and the weight coefficient 1-α of the right virtual view non-occluded region used in synthesizing the non-occluded region of the intermediate virtual view, where α = t_R/(t_L + t_R);
3b) Based on the weight coefficients, fuse the left virtual view non-occluded region M_L and the right virtual view non-occluded region M_R by weighting, obtaining the non-occluded region part of the intermediate virtual view: M = α·M_L + (1-α)·M_R.
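The distance-weighted fusion of Step 3 can be sketched as follows. The weight definition α = t_R/(t_L + t_R), giving the nearer source view the larger weight, is an assumption (a common choice in view blending); the function name is illustrative.

```python
import numpy as np

def blend_non_occluded(m_left, m_right, t_left, t_right):
    """Distance-weighted fusion of the left/right virtual views'
    non-occluded regions. `t_left`/`t_right` are the geometric distances
    from the left/right viewpoints to the intermediate viewpoint; the
    weight convention (nearer view weighted more) is an assumption."""
    alpha = t_right / (t_left + t_right)   # weight of the LEFT view
    return alpha * m_left + (1.0 - alpha) * m_right
```

With t_L = t_R the result is the plain average, and with t_L = 0 (the virtual viewpoint coinciding with the left camera) the left view is returned unchanged, which matches the intended behavior of distance weighting.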
Step 4. Synthesize the non-occluded region N of the intermediate virtual depth map.
Using the weight coefficient α from Step 3, fuse the non-occluded region N_L of the left virtual depth map and the non-occluded region N_R of the right virtual depth map by weighting, obtaining the non-occluded region N of the intermediate virtual depth map: N = α·N_L + (1-α)·N_R.
Step 5. Synthesize the intermediate virtual depth map A_0.
Merge the non-occluded region N of the intermediate virtual depth map with the occlusion region D_L' of the left virtual depth map and the occlusion region D_R' of the right virtual depth map, obtaining the final intermediate virtual depth map: A_0 = N + D_L' + D_R'.
Step 6. Perform image segmentation on the non-occluded region M of the intermediate virtual view.
Since foreground and background cannot be separated accurately by relying on depth information alone, some existing effective image segmentation algorithm can be used to segment the non-occluded region M of the intermediate virtual view and accurately separate the foreground M_f from the background M_b. Common image segmentation methods can be found in the following documents:
[1] D. Comaniciu, P. Meer, "Mean shift: a robust approach toward feature space analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, pp. 603-619, 2002;
[2] P. Meer, B. Georgescu, "Edge detection with embedded confidence," IEEE Trans. Pattern Anal. Machine Intell., vol. 28, 2001;
[3] C. Christoudias, B. Georgescu, P. Meer, "Synergism in low level vision," International Conference of Pattern Recognition, 2001.
Step 7. Compute the histograms of the non-occluded-region background and of the occlusion regions.
7a) From the intermediate virtual view background region M_b, find the corresponding background region M_Lb in the left virtual view and the background region M_Rb in the right virtual view; using a histogram statistics algorithm over the pixel-value interval [0, 255], compute the statistic histogram H_b of the intermediate virtual view background M_b, the statistic histogram H_Lb of the left virtual view background M_Lb, and the statistic histogram H_Rb of the right virtual view background M_Rb;
7b) Using the histogram statistics algorithm over the pixel-value interval [0, 255], compute the statistic histogram H_L' of the left virtual view occlusion region W_L' and the statistic histogram H_R' of the right virtual view occlusion region W_R'.
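The [0, 255] statistic histogram of a region, as used throughout Steps 7 and 8, can be computed from a boolean region mask in one line; the function name is illustrative.

```python
import numpy as np

def region_histogram(image, mask):
    """256-bin statistic histogram of the uint8 pixels selected by a
    boolean `mask` (e.g. a background or occlusion region)."""
    return np.bincount(image[mask].ravel(), minlength=256)
```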
Step 8. Compute the statistic histogram C_L of the left-side occlusion region and the statistic histogram C_R of the right-side occlusion region of the intermediate virtual view.
Since the occlusion regions, i.e. the image parts that were occluded, originate from the background, the color difference between the left view and the intermediate virtual view occlusion region can be substituted by the color difference between the left view and the intermediate virtual view background regions; similarly, the color difference between the right view and the intermediate virtual view occlusion region can be substituted by the color difference between the right view and the intermediate virtual view background regions.
8a) Subtract the statistic histogram H_b of the intermediate virtual view background from the statistic histogram H_Lb of the left virtual view background, obtaining the average statistical difference histogram Diff_L between the left virtual view background region and the intermediate virtual view background region; similarly, subtract H_b from the statistic histogram H_Rb of the right virtual view background, obtaining the average statistical difference histogram Diff_R between the right virtual view background region and the intermediate virtual view background region;
8b) Based on the substitution idea above, add the difference histogram Diff_L to the statistic histogram H_L' of the left virtual view occlusion region, obtaining the statistic histogram C_L of the left-side occlusion region of the intermediate virtual view; similarly, add the difference histogram Diff_R to the statistic histogram H_R' of the right virtual view occlusion region, obtaining the statistic histogram C_R of the right-side occlusion region of the intermediate virtual view.
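Step 8's substitution can be sketched as follows, following the text's convention Diff = H_side_bg − H_b and C = H' + Diff. The clipping of the result to non-negative bin counts is an added assumption (the text does not say how negative bins are handled), and the function name is illustrative.

```python
import numpy as np

def corrected_occlusion_hist(h_occ, h_bg_side, h_bg_mid):
    """Estimate the target histogram of an occlusion region by shifting
    its histogram with the background-region difference:
    Diff = h_bg_side - h_bg_mid, C = h_occ + Diff (clipped at zero)."""
    diff = h_bg_side.astype(np.int64) - h_bg_mid.astype(np.int64)
    return np.clip(h_occ.astype(np.int64) + diff, 0, None)
```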
Step 9. Perform color correction on the left and right virtual view occlusion regions by histogram matching.
9a) Redistribute the pixel values of the pixels in the left virtual view occlusion region to neighboring values so as to change the statistic histogram H_L' of the left virtual view occlusion region until it equals the statistic histogram C_L of the left-side occlusion region of the intermediate virtual view; that is, by histogram matching, match H_L' to C_L, obtaining the color-corrected left virtual view occlusion region Cf_L;
9b) Likewise, redistribute the pixel values of the pixels in the right virtual view occlusion region to neighboring values so as to change the statistic histogram H_R' of the right virtual view occlusion region until it equals the statistic histogram C_R of the right-side occlusion region of the intermediate virtual view; that is, by histogram matching, match H_R' to C_R, obtaining the color-corrected right virtual view occlusion region Cf_R.
Step 10. Synthesize the intermediate virtual view B_0.
Fuse the color-corrected left virtual view occlusion region Cf_L and right virtual view occlusion region Cf_R with the fused intermediate virtual view non-occluded region M, synthesizing the intermediate virtual view: B_0 = M + Cf_L + Cf_R.
Step 11. Downsample the intermediate virtual depth map A_0 and the intermediate virtual view B_0 layer by layer.
The layer-by-layer hole-filling algorithm proposed by M. Solh can be used to fill the holes in the synthesized virtual view: first downsample layer by layer, obtaining the downsampled virtual view of each layer, then repair the downsampled virtual views layer by layer upward from the bottom layer, obtaining the repaired image of the initial layer. That algorithm, however, ignores the advantageous information that holes originate from the background, so the repaired image is blurred at the hole-region edges. The present invention therefore adds background information during downsampling, solving the blurring of hole-region edges. The steps are as follows.
Referring to the solid arrows of Fig. 2, this step is implemented as follows:
11a) According to the depth information in the virtual depth map A_0, select background pixels and downsample A_0 layer by layer until the final layer contains no hole points, obtaining in turn the downsampled images A_1, A_2, ..., A_k, ..., A_S of A_0, where the k-th layer virtual depth map A_k is obtained from its upper layer A_{k-1}. The value of any point (m, n) in A_k is computed from X_{m,n}, the 5×5 matrix block of A_{k-1} centered at (2×m+3, 2×n+3), weighted by a 5×5 Gaussian kernel ω. Here qh is the threshold dividing foreground and background: depth values greater than qh are foreground and depth values less than qh are background; the function L(x) is used to select background pixel points; NZ(u) denotes the number of non-zero points in the matrix u; num(v) denotes the number of elements satisfying the condition v; k increases one by one from 1 to S, where S is the final layer at which the virtual depth map A_S contains no hole points.
That is, the pixel value A_k(m, n) of any point (m, n) in the virtual depth map A_k is obtained by applying depth-based anisotropic smoothing filtering to the matrix block X_{m,n} in A_{k-1}:
when the pixels in X_{m,n} contain no hole points and all pixels belong to the background, or all belong to the foreground, Gaussian smoothing is applied to all pixels in X_{m,n}, obtaining the pixel value A_k(m, n);
when X_{m,n} contains hole points but all non-hole points belong to the background, or all belong to the foreground, the pixel values of all non-hole points in X_{m,n} are weighted-averaged, obtaining the pixel value A_k(m, n);
when the non-hole points in X_{m,n} partly belong to the foreground and partly to the background, only the background pixels in X_{m,n} are used for the weighted average of pixel values, obtaining the pixel value A_k(m, n);
when the pixel distribution in X_{m,n} belongs to none of the above three cases, the pixel value A_k(m, n) is zero.
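One pyramid level of the depth-guided downsampling described above can be sketched as follows. This is a simplified illustration, not the patent's exact formula: border handling, the block centering, and the merging of the four cases into a single "restrict to background when the block mixes foreground and background" rule are assumptions, and all names are illustrative.

```python
import numpy as np

def downsample_depth_guided(img, depth, hole_mask, qh, ksize=5, sigma=1.0):
    """One level of depth-guided downsampling: each output pixel is a
    Gaussian-weighted average over a 5x5 block of the previous level,
    skipping hole pixels, and restricted to background pixels
    (depth <= qh) whenever the block mixes foreground and background."""
    g = np.exp(-((np.arange(ksize) - ksize // 2) ** 2) / (2 * sigma ** 2))
    kernel = np.outer(g, g)
    h, w = depth.shape
    oh, ow = h // 2, w // 2
    out = np.zeros((oh, ow))
    out_holes = np.zeros((oh, ow), dtype=bool)
    for m in range(oh):
        for n in range(ow):
            y0, x0 = 2 * m, 2 * n
            blk_i = img[y0:y0 + ksize, x0:x0 + ksize]
            blk_d = depth[y0:y0 + ksize, x0:x0 + ksize]
            blk_h = hole_mask[y0:y0 + ksize, x0:x0 + ksize]
            kw = kernel[:blk_i.shape[0], :blk_i.shape[1]]
            valid = ~blk_h
            bg = valid & (blk_d <= qh)
            if valid.any() and bg.any() and (valid & ~bg).any():
                sel = bg          # mixed block: average background pixels only
            else:
                sel = valid       # uniform block: average all non-hole pixels
            wsum = kw[sel].sum()
            if wsum > 0:
                out[m, n] = (kw[sel] * blk_i[sel]).sum() / wsum
            else:
                out_holes[m, n] = True   # no usable pixels: remains a hole
    return out, out_holes
```

Because every level halves the resolution and averages over usable neighbors, holes shrink at each level until some layer S contains none, which is exactly the stopping condition of step 11a).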
11b) According to the depth information in the virtual depth map A_0, select background pixels and downsample the virtual view B_0 layer by layer until the final layer contains no hole points, obtaining in turn the downsampled images B_1, B_2, ..., B_k, ..., B_S of B_0, where any k-th layer virtual view B_k is obtained from its upper layer B_{k-1}. The value of any point (m, n) in B_k is computed from Y_{m,n}, the 5×5 matrix block of the (k-1)-th layer downsampled virtual view B_{k-1} centered at (2×m+3, 2×n+3); X_{m,n} is the matrix block of the (k-1)-th layer virtual depth map A_{k-1} mentioned in step 11a), whose size and position correspond to Y_{m,n} and which provides the depth information. That is, the pixel value B_k(m, n) of any point (m, n) in the virtual view B_k is obtained by applying the depth-based anisotropic smoothing filtering to the matrix block Y_{m,n} in B_{k-1}, with the depth information provided by X_{m,n}.
Step 12. Repair the holes layer by layer through upsampling, obtaining the final free-viewpoint image F_0.
Referring to the hollow arrows of Fig. 2, this step is implemented as follows:
12a) Since the downsampled virtual view B_S of layer S contains no holes, take the virtual view B_S of layer S as the repaired image F_S of layer S, i.e. F_S = B_S;
12b) Upsample the repaired image F_S of layer S by linear interpolation, obtaining the expanded virtual view E_{S-1} with the same resolution as layer S-1, where the pixel value E_{S-1}(p, q) of the point (p, q) in row p and column q of E_{S-1} is computed as follows: i' and j' each take only the even values {-2, 0, 2} and are used to select the 3×3 matrix block of the repaired image F_S centered at the point (p, q); smoothing filtering based on a weight vector is applied to the selected 3×3 matrix block, obtaining the pixel value E_{S-1}(p, q) of the point (p, q) in the expanded virtual view E_{S-1}. The weight vector determines the weight of each element of the repaired image F_S in the selected matrix block:
when i' = -2, j' = -2, the weight of element F_S(p-1, q-1) is 0.052;
when i' = -2, j' = 0, the weight of element F_S(p-1, q) is 0.02;
when i' = -2, j' = 2, the weight of element F_S(p-1, q+1) is 0.052;
when i' = 0, j' = -2, the weight of element F_S(p, q-1) is 0.02;
when i' = 0, j' = 0, the weight of element F_S(p, q) is 0.042;
when i' = 0, j' = 2, the weight of element F_S(p, q+1) is 0.02;
when i' = 2, j' = -2, the weight of element F_S(p+1, q-1) is 0.052;
when i' = 2, j' = 0, the weight of element F_S(p+1, q) is 0.02;
when i' = 2, j' = 2, the weight of element F_S(p+1, q+1) is 0.052.
12c) Fill the pixel points at the holes in the same-layer virtual view B_{S-1} with the pixels of the expanded virtual view E_{S-1}, obtaining the repaired image F_{S-1} of layer S-1: the pixel value F_{S-1}(p, q) of the point (p, q) in row p and column q of F_{S-1} takes the value B_{S-1}(p, q) where B_{S-1} has no hole, and the value E_{S-1}(p, q) at the hole points.
Steps 12b) and 12c) give the process of transitioning from layer S to layer S-1 and obtaining the repaired image F_{S-1} of layer S-1; this process is now applied to any layer k' to obtain the repaired image F_{k'-1} of layer k'-1:
12d) Following step 12b), upsample the repaired image F_{k'} of any layer k', obtaining the expanded virtual view E_{k'-1} with the same resolution as layer k'-1; then, following step 12c), fill the pixel points at the holes of the same-layer virtual view B_{k'-1} with the pixels of the expanded virtual view E_{k'-1}, obtaining the repaired image F_{k'-1} of layer k'-1. The value of k' decreases one by one from S-1 down to 1; that is, starting from layer S-1, steps 12b) and 12c) are repeated layer by layer upward, obtaining in turn the repaired images F_{S-1}, F_{S-2}, ..., F_{k'}, ..., F_0; the repaired image F_0 of the initial layer is the final free-viewpoint image.
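One upward level of Step 12 can be sketched as follows. This is a deliberately simplified stand-in: a 2× nearest-neighbour expansion replaces the text's linear interpolation plus 3×3 weighted smoothing, and all names are illustrative. The key point it demonstrates is that the expanded lower-level image is used only at hole pixels, while non-hole pixels keep their original values.

```python
import numpy as np

def upsample_and_fill(repaired_small, view_big, hole_mask_big):
    """One level of the upward pass: expand the repaired lower-level image
    to the current resolution, then substitute it only at the current
    level's hole pixels (F = B outside holes, F = E at holes)."""
    h, w = view_big.shape
    # 2x nearest-neighbour expansion as a stand-in for the text's
    # interpolation-plus-smoothing upsampling
    expanded = np.repeat(np.repeat(repaired_small, 2, axis=0), 2, axis=1)[:h, :w]
    out = view_big.copy()
    out[hole_mask_big] = expanded[hole_mask_big]
    return out
```

Iterating this from the hole-free bottom layer S back up to layer 0 reproduces the structure of steps 12a)-12d): each pass fills the current layer's holes from below and leaves everything else untouched.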
The effect of the present invention can be further illustrated by the following experiments:
1. Simulation conditions:
The simulations were carried out on the Matlab R2012b platform, on a machine with a Core(TM) CPU at 3.20 GHz, 4.00 GB of memory and the Windows XP system.
Two groups of test images were selected for the simulations, shown in Fig. 3: Fig. 3(a) is the left-viewpoint view of the Ballet test set, Fig. 3(b) the left-viewpoint depth map of the Ballet test set, Fig. 3(c) the right-viewpoint view of the Ballet test set, and Fig. 3(d) the right-viewpoint depth map of the Ballet test set; Fig. 3(e) is the left-viewpoint view of the Breakdancing test set, Fig. 3(f) the left-viewpoint depth map of the Breakdancing test set, Fig. 3(g) the right-viewpoint view of the Breakdancing test set, and Fig. 3(h) the right-viewpoint depth map of the Breakdancing test set.
Simulation methods:
1) the free-viewpoint image synthesis method based on 3D warping proposed by Y. Mori;
2) the free-viewpoint image synthesis method based on background hole filling proposed by K.J. Oh;
3) the free-viewpoint image synthesis method introducing color correction of the present invention.
2. Simulation contents:
Simulation 1: Free-viewpoint image synthesis is performed on the Ballet test set shown in Fig. 3(a), Fig. 3(b), Fig. 3(c) and Fig. 3(d) using each of the three methods above. The results are shown in Fig. 4, where Fig. 4(a) is the free-viewpoint image synthesized by the method proposed by Y. Mori, Fig. 4(b) is the free-viewpoint image synthesized by the method proposed by K.J. Oh, Fig. 4(c) is the free-viewpoint image synthesized by the method of the present invention, and Fig. 4(d) is the actual reference image.
Fig. 4(a) and Fig. 4(b) show that the methods proposed by Y. Mori and K.J. Oh suffer from an obvious color discontinuity problem. From Fig. 4(c) it can be seen that the present invention effectively resolves the color discontinuity between both sides of the dancer and the background, and reasonably and accurately fills the holes at the dancer's shoulder. Comparing Fig. 4(a), Fig. 4(b), Fig. 4(c) and Fig. 4(d), it can be seen that the hole-filling algorithm proposed by the present invention not only effectively solves the color discontinuity problem, but also accurately fills the holes and yields clear edges.
Simulation 2: Free-viewpoint image synthesis is performed on the Breakdancing test set shown in Fig. 3(e), Fig. 3(f), Fig. 3(g) and Fig. 3(h) using each of the three methods above. The results are shown in Fig. 5, where Fig. 5(a) is the free-viewpoint image synthesized by the method proposed by Y. Mori, Fig. 5(b) is the free-viewpoint image synthesized by the method proposed by K.J. Oh, Fig. 5(c) is the free-viewpoint image synthesized by the method of the present invention, and Fig. 5(d) is the actual reference image.
Fig. 5(a) and Fig. 5(b) reflect that the methods proposed by Y. Mori and K.J. Oh suffer from color discontinuity to a certain degree. From Fig. 5(c) it can be seen that the present invention resolves the color discontinuity between both sides of the dancer's legs and the background. Comparing Fig. 5(a), Fig. 5(b), Fig. 5(c) and Fig. 5(d), it can be seen that the method proposed by the present invention solves the color discontinuity problem, preserves edges, and produces stable experimental results, effectively improving viewing comfort.
Claims (9)
1. A free-viewpoint image synthesis method introducing color correction, comprising:
1) inputting the left and right viewpoint views and their corresponding depth maps, and projecting the left and right views and depth maps onto the intermediate-viewpoint plane based on the positional relationship and the projection equation, obtaining the left virtual view W_L, the right virtual view W_R, the left virtual depth map D_L, and the right virtual depth map D_R;
2) overlapping the occlusion region W_L' of the left virtual view with the occlusion region W_R' of the right virtual view, and overlapping the non-occlusion region M_L of the left virtual view with the non-occlusion region M_R of the right virtual view;
3) weighting and fusing the non-occlusion region M_L of the left virtual view with the non-occlusion region M_R of the right virtual view, obtaining the non-occlusion region part M of the intermediate virtual view;
4) weighting and fusing the non-occlusion region N_L of the left virtual depth map with the non-occlusion region N_R of the right virtual depth map, obtaining the non-occlusion region N of the intermediate virtual depth map;
5) merging the non-occlusion region N of the intermediate virtual depth map, the occlusion region D_L' of the left virtual depth map, and the occlusion region D_R' of the right virtual depth map, obtaining the final intermediate virtual depth map A_0;
6) performing image segmentation on the non-occlusion region M of the intermediate virtual view, separating out the foreground M_f and the background M_b;
7) computing the histograms of the non-occlusion-region background and of the occlusion regions:
7a) from the segmented background region M_b, finding the corresponding background region M_Lb in the left virtual view and the background region M_Rb in the right virtual view, and computing the statistical histogram H_b of the intermediate-virtual-view background region M_b, the statistical histogram H_Lb of the left-virtual-view background M_Lb, and the statistical histogram H_Rb of the right-virtual-view background M_Rb;
7b) computing the statistical histogram H_L' of the left-virtual-view occlusion region W_L' and the statistical histogram H_R' of the right-virtual-view occlusion region W_R';
8) computing the differences between the background-region histograms and substituting them for the differences between the occlusion-region histograms, obtaining the statistical histogram C_L of the left occlusion region of the intermediate virtual view and the statistical histogram C_R of the right occlusion region;
9) using a histogram matching algorithm, matching the statistical histogram H_L' of the left-virtual-view occlusion region to the statistical histogram C_L of the left occlusion region of the intermediate virtual view, obtaining the color-corrected left occlusion region Cf_L of the intermediate virtual view; similarly, matching the statistical histogram H_R' of the right-virtual-view occlusion region to the statistical histogram C_R of the right occlusion region of the intermediate virtual view, obtaining the color-corrected right occlusion region Cf_R of the intermediate virtual view;
10) merging the non-occlusion region M of the intermediate virtual view, the left occlusion region Cf_L, and the right occlusion region Cf_R, obtaining the new intermediate virtual view B_0;
11) according to the depth information in the virtual depth map A_0, selecting background pixels and downsampling A_0 and the intermediate virtual view B_0 layer by layer, obtaining the downsampled virtual depth map A_k and virtual view B_k of each layer, until the virtual depth map A_S and the virtual view B_S of the final layer S contain no holes;
12) starting from layer S, filling the holes in the downsampled virtual views B_k layer by layer upward, obtaining the repaired image F_{k'} of each layer, until the initial-layer repaired image F_0, namely the final free-viewpoint image, is obtained.
2. The free-viewpoint image synthesis method introducing color correction according to claim 1, wherein in step 3) the non-occlusion region M_L of the left virtual view and the non-occlusion region M_R of the right virtual view are weighted and fused as follows:
3a) from the distance t_L from the left viewpoint to the intermediate viewpoint and the distance t_R from the right viewpoint to the intermediate viewpoint, computing the weight coefficient α of the left-virtual-view non-occlusion region and the weight coefficient 1-α of the right-virtual-view non-occlusion region used when synthesizing the non-occlusion region of the intermediate virtual view, wherein:
3b) based on the weight coefficients, weighting and fusing the left-virtual-view non-occlusion region with the right-virtual-view non-occlusion region, obtaining the non-occlusion region part of the intermediate virtual view: M = α·M_L + (1-α)·M_R.
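The fusion M = α·M_L + (1-α)·M_R can be sketched as follows. The exact expression for α is given as a formula image in the patent and is not reproduced here; α = t_R / (t_L + t_R) is an assumption, chosen so that the reference view nearer to the intermediate viewpoint receives the larger weight.

```python
import numpy as np

def fuse_non_occlusion(M_L, M_R, t_L, t_R):
    """Weighted fusion of the left/right non-occlusion regions (claim 2 sketch).

    M_L and M_R are the warped non-occlusion regions as float arrays.
    alpha = t_R / (t_L + t_R) is an assumed form of the patent's weight
    coefficient; the patent only states that alpha is computed from the
    viewpoint distances t_L and t_R.
    """
    alpha = t_R / (t_L + t_R)
    return alpha * M_L + (1.0 - alpha) * M_R
```

With equal distances (the symmetric intermediate viewpoint), both views contribute equally.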
3. The free-viewpoint image synthesis method introducing color correction according to claim 1, wherein in step 4) the non-occlusion region N_L of the left virtual depth map and the non-occlusion region N_R of the right virtual depth map are weighted and fused based on the weight coefficient α obtained in step 3), obtaining the non-occlusion region part of the intermediate virtual depth map: N = α·N_L + (1-α)·N_R.
4. The free-viewpoint image synthesis method introducing color correction according to claim 1, wherein in step 8) the differences between the background-region histograms substitute for the differences between the occlusion-region histograms as follows:
8a) subtracting the statistical histogram H_b of the intermediate-virtual-view background from the statistical histogram H_Lb of the left-virtual-view background, obtaining the statistical average difference histogram Diff_L between the left-virtual-view background region and the intermediate-virtual-view background region; similarly, subtracting the statistical histogram H_b of the intermediate-virtual-view background from the statistical histogram H_Rb of the right-virtual-view background, obtaining the statistical average difference histogram Diff_R between the right-virtual-view background region and the intermediate-virtual-view background region;
8b) adding the statistical average difference histogram Diff_L to the statistical histogram H_L' of the left-virtual-view occlusion region, obtaining the statistical histogram C_L of the left occlusion region of the intermediate virtual view; similarly, adding the statistical average difference histogram Diff_R to the statistical histogram H_R' of the right-virtual-view occlusion region, obtaining the statistical histogram C_R of the right occlusion region of the intermediate virtual view.
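Steps 8a) and 8b) are a direct histogram arithmetic, sketched below for the left side. The 256-bin representation and the clamping of negative counts are assumptions; the patent specifies only the subtraction and addition of histograms.

```python
import numpy as np

def corrected_occlusion_hist(H_Lb, H_b, H_Lp):
    """Estimate the target histogram of the left occlusion region (claim 4 sketch).

    The colour difference between occlusion regions cannot be observed
    directly, so it is replaced by the background difference:
        Diff_L = H_Lb - H_b        (step 8a)
        C_L    = H_L' + Diff_L     (step 8b)
    Histograms are per-bin count arrays (e.g. 256 bins); clamping
    negative counts to zero is an assumption.
    """
    diff_L = H_Lb.astype(float) - H_b.astype(float)
    C_L = H_Lp.astype(float) + diff_L
    return np.clip(C_L, 0.0, None)
```

The same two lines with H_Rb and H_R' yield C_R for the right occlusion region.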
5. The free-viewpoint image synthesis method introducing color correction according to claim 1, wherein in step 9) matching the statistical histogram H_L' of the left-virtual-view occlusion region to the statistical histogram C_L of the left occlusion region of the intermediate virtual view means projecting the pixel values of the pixels in the left-virtual-view occlusion region onto neighboring pixel values on the left and right, so as to change the statistical histogram H_L' of the left-virtual-view occlusion region until it equals the statistical histogram C_L of the left occlusion region of the intermediate virtual view.
6. The free-viewpoint image synthesis method introducing color correction according to claim 1, wherein in step 9) matching the statistical histogram H_R' of the right-virtual-view occlusion region to the statistical histogram C_R of the right occlusion region of the intermediate virtual view means projecting the pixel values of the pixels in the right-virtual-view occlusion region onto neighboring pixel values on the left and right, so as to change the statistical histogram H_R' of the right-virtual-view occlusion region until it equals the statistical histogram C_R of the right occlusion region of the intermediate virtual view.
7. The free-viewpoint image synthesis method introducing color correction according to claim 1, wherein in step 11) selecting background pixels according to the depth information in the virtual depth map A_0 and downsampling it layer by layer means performing layer-by-layer downsampling of the virtual depth map A_0, which contains hole points, until a final layer with no hole points is reached, obtaining in turn the downsampled images A_1, A_2, ..., A_k, ..., A_S of A_0, where the layer-k virtual depth map A_k is obtained from its previous layer A_{k-1}, and the value of any point (m, n) in the virtual depth map A_k is computed by the following formula:
where X_{m,n} is a 5×5 matrix block in the layer-(k-1) virtual depth map A_{k-1} whose center point is at (2×m+3, 2×n+3); ω is a 5×5 Gaussian kernel; qh is the threshold dividing foreground and background, depth values greater than qh being background and those less than qh being foreground; the function l(x) is used to select background pixels; nz(u) denotes the number of non-zero points in matrix u; num(v) denotes the number of elements satisfying condition v; the value of k increases one by one from 1 to S, S being the final layer at which the virtual depth map A_S contains no hole points.
8. The free-viewpoint image synthesis method introducing color correction according to claim 1, wherein in step 11) selecting background pixels according to the depth information in the virtual depth map A_0 and downsampling the virtual view B_0 layer by layer means performing layer-by-layer downsampling of the intermediate virtual view B_0, which contains hole points, until a final layer with no hole points is reached, obtaining in turn the downsampled images B_1, B_2, ..., B_k, ..., B_S of B_0, where any layer-k virtual view B_k is obtained from its previous layer B_{k-1}, and the value of any point (m, n) in the virtual view B_k is computed by the following formula:
where X_{m,n} is a 5×5 matrix block in the layer-(k-1) virtual depth map A_{k-1} whose center point is at (2×m+3, 2×n+3); Y_{m,n} is a 5×5 matrix block in the layer-(k-1) downsampled virtual view B_{k-1} whose center point is at (2×m+3, 2×n+3); ω is a 5×5 Gaussian kernel; qh is the threshold dividing foreground and background, depth values greater than qh being background and those less than qh being foreground; the function l(x) is used to select background pixels; nz(u) denotes the number of non-zero points in matrix u; num(v) denotes the number of elements satisfying condition v; the value of k increases one by one from 1 to S, S being the final layer at which the virtual view B_S contains no hole points, which is also the final layer at which the virtual depth map A_S contains no hole points.
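The background-selective downsampling of claims 7 and 8 can be sketched as follows for the depth map; the view B_k uses the same background mask X > qh over the colour block Y_{m,n}. The binomial Gaussian kernel, the window placement via a 2-pixel edge pad, and the all-foreground fallback are assumptions; the patent's kernel values and the exact (2×m+3, 2×n+3) indexing convention are given only in formula images.

```python
import numpy as np

def downsample_background(A_prev, qh):
    """Background-selective 2x downsampling of a depth map (claims 7-8 sketch).

    Each coarse pixel is a Gaussian-weighted average over only the
    background pixels (depth > qh, following the patent's convention)
    of a 5x5 window in the previous layer.  If the window contains no
    background pixel, all pixels are averaged (assumed fallback).
    """
    # Separable 5x5 binomial approximation of a Gaussian kernel (assumption).
    g = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
    omega = np.outer(g, g) / 256.0

    H, W = A_prev.shape
    h, w = H // 2, W // 2
    A = np.zeros((h, w))
    pad = np.pad(A_prev, 2, mode="edge")         # keep 5x5 windows in bounds
    for m in range(h):
        for n in range(w):
            X = pad[2 * m:2 * m + 5, 2 * n:2 * n + 5]
            mask = X > qh                         # select background pixels
            wsum = (omega * mask).sum()
            if wsum > 0:
                A[m, n] = (omega * mask * X).sum() / wsum
            else:
                A[m, n] = (omega * X).sum()
    return A
```

Restricting the average to background pixels keeps foreground depth values from bleeding into the coarse layers, which is what lets the later hole filling draw on background content only.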
9. The free-viewpoint image synthesis method introducing color correction according to claim 1, wherein in step 12), starting from layer S, the holes in the downsampled virtual views B_{k'} are filled layer by layer upward, obtaining the repaired image F_{k'} of each layer, until the initial-layer repaired image F_0 is obtained, as follows:
12a) since the layer-S downsampled virtual view B_S contains no holes, letting the layer-S repaired image F_S equal the virtual view B_S, i.e. F_S = B_S;
12b) upsampling the layer-S repaired image F_S by linear interpolation, obtaining the expanded virtual view E_{S-1} at the same resolution as layer S-1, where the pixel value E_{S-1}(p, q) at row p, column q of E_{S-1} is computed as follows:
where i' and j' are used to select the 3×3 matrix block in the repaired image F_S centered at point (p, q), and the pixel value E_{S-1}(p, q) of the expanded virtual view E_{S-1} is obtained by smoothing and filtering this matrix block; the values of i' and j' must be even so that the corresponding points in the repaired image F_S are valid coordinate points;
12c) using the pixels of the expanded virtual view E_{S-1} to fill the hole pixels of the same-layer virtual view B_{S-1}, obtaining the layer-(S-1) repaired image F_{S-1}, where the pixel value F_{S-1}(p, q) at row p, column q of F_{S-1} is computed by the following formula:
12d) following step 12b), upsampling the repaired image F_{k'} of any layer k' to obtain the expanded virtual view E_{k'-1} at the same resolution as layer k'-1, then following step 12c), using the pixels of E_{k'-1} to fill the hole pixels of the same-layer virtual view B_{k'-1}, obtaining the layer-(k'-1) repaired image F_{k'-1}; the value of k' decreases one by one from S-1 to 0, i.e., starting from layer S-1, steps 12b) and 12c) are repeated layer by layer upward, yielding in turn the repaired images F_{S-1}, F_{S-2}, ..., F_{k'}, ..., F_0, the initial-layer repaired image F_0 being the final free-viewpoint image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610334492.7A CN106060509B (en) | 2016-05-19 | 2016-05-19 | Introduce the free view-point image combining method of color correction |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106060509A CN106060509A (en) | 2016-10-26 |
CN106060509B true CN106060509B (en) | 2018-03-13 |
Family
ID=57177160
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610334492.7A Active CN106060509B (en) | 2016-05-19 | 2016-05-19 | Introduce the free view-point image combining method of color correction |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106060509B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109194888B (en) * | 2018-11-12 | 2020-11-27 | 北京大学深圳研究生院 | DIBR free viewpoint synthesis method for low-quality depth map |
CN109840912B (en) * | 2019-01-02 | 2021-05-04 | 厦门美图之家科技有限公司 | Method for correcting abnormal pixels in image and computing equipment |
CN112116602A (en) * | 2020-08-31 | 2020-12-22 | 北京的卢深视科技有限公司 | Depth map repairing method and device and readable storage medium |
CN112330545B (en) * | 2020-09-08 | 2021-10-19 | 中兴通讯股份有限公司 | Hole filling method, small region removing method, device and medium |
CN113421315B (en) * | 2021-06-24 | 2022-11-11 | 河海大学 | Panoramic image hole filling method based on view zooming |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102239506A (en) * | 2008-10-02 | 2011-11-09 | 弗兰霍菲尔运输应用研究公司 | Intermediate view synthesis and multi-view data signal extraction |
CN103714573A (en) * | 2013-12-16 | 2014-04-09 | 华为技术有限公司 | Virtual view generating method and virtual view generating device |
CN104661014A (en) * | 2015-01-29 | 2015-05-27 | 四川虹微技术有限公司 | Space-time combined cavity filling method |
CN104809719A (en) * | 2015-04-01 | 2015-07-29 | 华南理工大学 | Virtual view synthesis method based on homographic matrix partition |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140002595A1 (en) * | 2012-06-29 | 2014-01-02 | Hong Kong Applied Science And Technology Research Institute Co., Ltd. | Apparatus, system and method for foreground biased depth map refinement method for dibr view synthesis |
-
2016
- 2016-05-19 CN CN201610334492.7A patent/CN106060509B/en active Active
Non-Patent Citations (2)
Title |
---|
Research on Rendering Technology in Free Viewpoint Video Systems; Qian Jian; China Master's Theses Full-text Database (Electronic Journal), Information Science and Technology; 20130315; pp. I138-1526 * |
Research on Virtual Viewpoint Rendering Technology for Free Viewpoint Systems; Yu Siwen; China Master's Theses Full-text Database (Electronic Journal), Information Science and Technology; 20150315; pp. I138-2690 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111729283A (en) * | 2020-06-19 | 2020-10-02 | 杭州赛鲁班网络科技有限公司 | Training system and method based on mixed reality technology |
CN111729283B (en) * | 2020-06-19 | 2021-07-06 | 杭州赛鲁班网络科技有限公司 | Training system and method based on mixed reality technology |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | |