CN103383776B - A progressive stereo matching algorithm based on segmentation matching and Bayesian estimation - Google Patents

A progressive stereo matching algorithm based on segmentation matching and Bayesian estimation

Info

Publication number
CN103383776B
CN103383776B (application CN201310296015.2A)
Authority
CN
China
Prior art keywords: matching, cost, depth, value, probability
Prior art date
Legal status
Expired - Fee Related
Application number
CN201310296015.2A
Other languages
Chinese (zh)
Other versions
CN103383776A (en)
Inventor
贾丙西
刘山
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU
Priority to CN201310296015.2A
Publication of CN103383776A
Application granted
Publication of CN103383776B
Legal status: Expired - Fee Related

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a progressive stereo matching algorithm based on segmentation matching and Bayesian estimation, comprising the following steps: 1) divide the image into edge regions and segment regions based on the Sobel filter response, match them with a window-based stereo matching strategy and a segment matching strategy respectively, and merge the results into a pre-matching depth map; 2) for each invalid point in the pre-matching depth map, fit a least-squares plane to the valid points in its support window and use the plane to estimate the depth at the invalid point, thereby densifying the pre-matching map; 3) refine the depth at every point of the densified pre-matching map by Bayesian maximum a posteriori estimation, taking the pre-matching value as the prior and the image similarity and depth smoothness as the likelihood. The present invention completes the extraction of a depth image with a progressive structure, from sparse to dense and from coarse to fine, while accounting for both local edges and smoothness, and thus obtains an accurate and smooth depth image.

Description

A progressive stereo matching algorithm based on segmentation matching and Bayesian estimation
Technical field
The present invention relates to stereo matching methods in the field of computer vision, and in particular to a progressive stereo matching algorithm.
Background technology
Stereo matching finds corresponding points between two or more images of a scene shot from different viewpoints and uses them to compute the depth at each pixel of the image; it is an important part of stereo vision. Stereo matching is a focus and a difficulty of current computer vision research, and is widely used in three-dimensional reconstruction, object modelling and recognition, and robot path planning.
Stereo matching algorithms can be divided, by their optimization strategy, into global and local algorithms. Global algorithms compute the depth map of the whole image by searching for a globally optimal solution over the entire image; they are accurate, but their computational load is large and they cannot run in real time. Local algorithms usually define an evaluation function over a window around each pixel and search for a locally optimal solution; they are fast, but mismatches occur where texture is weak or objects occlude one another, the resulting depth image is not smooth enough, edge features are not well preserved, and much scene information is lost.
Summary of the invention
To overcome these deficiencies of the prior art, the present invention provides a progressive stereo matching algorithm based on segmentation matching and Bayesian estimation. While maintaining computational speed, the method effectively handles weak texture and occlusion in the scene and obtains a smooth depth image that preserves local edges well.
A progressive stereo matching algorithm based on segmentation pre-matching and Bayesian estimation densely extracts depth information from a binocular image pair using a coarse-to-fine, sparse-to-dense matching strategy. It comprises three steps: segmentation pre-matching, invalid-point estimation, and Bayesian estimation, as follows:
1) Segmentation pre-matching: the image is divided into edge regions and segment regions based on the Sobel filter response. The segment regions are further divided into transverse and longitudinal segments, and the edge regions and the transverse/longitudinal segments are pre-matched separately. The edge regions are pre-matched with a window-based stereo matching method whose evaluation function considers the differences of both the pixel values and the Sobel filter responses within a window centered on the pixel. The segments are pre-matched with a translation strategy whose evaluation function considers the color difference over the intersection of the two segments, their overlap ratio, their length difference, and the difference of their average colors. The transverse and longitudinal pre-matching results are merged, and then fused with the edge-region pre-matching result to obtain a denser pre-matching depth map.
2) Invalid-point estimation: points for which pre-matching fails are invalid points. For each invalid point, nearby pixels whose color difference to it lies within a threshold are collected to form a support window. A least-squares plane is fitted in the support window, the three coordinates being image row, image column, and depth value. Substituting the image coordinates of the invalid point into the plane yields an estimated depth, densifying the pre-matching depth map.
3) Bayesian estimation: following Bayes' rule of conditional probability, a probability distribution over depth values is computed at every point, with the densified pre-matching result as the prior and the image similarity and depth smoothness as the likelihood. The prior is defined as a Gaussian distribution centered on the pre-matching value; the image similarity is a census function over a window centered on the point; the smoothness is the sum of depth differences within that window. From the resulting distribution, the depth of maximum probability is taken by maximum a posteriori estimation to obtain the final depth map.
The evaluation function of the edge-region pre-matching in step 1) is defined by formula (2):
cost = cost_edge + cost_data
cost_edge = 1 − exp(−Σ_{i∈{r,g,b}} Σ_{(row,col)∈W_S} |S_L^i(row,col) − S_R^i(row,col−d)| / λ_edge)
cost_data = 1 − exp(−Σ_{i∈{r,g,b}} Σ_{(row,col)∈W_S} |I_L^i(row,col) − I_R^i(row,col−d)| / λ_data)
where cost is the evaluation function value; cost_edge and cost_data are the components for the Sobel filter values and the pixel values respectively; (row,col) is the coordinate of the point under consideration; W_S is the set of points whose distance to that point is less than wsize; d is the depth value; i denotes any component of RGB space; I_L^i, I_R^i are the pixel values of the left and right images on component i; S_L^i, S_R^i are the Sobel filter responses of the left and right images on component i; and λ_data, λ_edge are preset constants.
The evaluation function of the transverse/longitudinal segment pre-matching in step 1), for a left segment Seg_L and a right segment Seg_R, is defined by formula (3):
cost = cost_ad + cost_co_length + cost_length_diff + cost_aver_color
cost_ad = 1 − exp(−Σ_{(row,col)∈P_O} Σ_{i∈{r,g,b}} |I_L^i(row,col) − I_R^i(row,col−d)| / λ_ad)
cost_co_length = 1 − exp((length(P_O)/length(Seg_L)) / λ_co_length)
cost_length_diff = 1 − exp(−(|length(Seg_L) − length(Seg_R)|/length(Seg_L)) / λ_length_diff)
cost_aver_color = 1 − exp(−Σ_{i∈{r,g,b}} |Ī_L^i − Ī_R^i| / λ_aver_color)
where cost is the evaluation function value; cost_ad, cost_co_length, cost_length_diff and cost_aver_color are the components for the color difference over the intersection, the overlap ratio, the length difference of the two segments, and the difference of their average colors respectively; P_O is the intersection of Seg_R, translated right by d pixels, with Seg_L; length(·) computes length; i denotes any component of RGB space; Ī_L^i and Ī_R^i are the average pixel values of the two segments on component i; and λ_ad, λ_co_length, λ_length_diff, λ_aver_color are preset constants.
The transverse/longitudinal segment pre-matching results in step 1) are merged according to formula (5):
d = d_h, if d_v = 0
d = d_v, if d_h = 0
d = 0, if |d_h − d_v| > τ_d
d = √(d_h·d_v), otherwise
where d is the merged depth, d_h and d_v are the transverse and longitudinal matching results, and τ_d is the maximum allowed gap.
The color-difference threshold used to form the support window in step 2) is determined by formula (6):
τ(i,j) = −(τ_max/L_max)·√((row−i)² + (col−j)²) + τ_max.
The probability distribution over depth values at every point in step 3) is computed according to formulas (9) and (11):
p(d, I_l, I_r) ∝ p(d)·p(I_l, I_r | d)   (9)
p(I_l, I_r | d) = p_d · p_s   (11)
where p(d) is a Gaussian distribution centered on the pre-matching value; p_d is the probability for image similarity, a Gaussian in the similarity evaluation function; and p_s is the probability for depth smoothness, a Gaussian in the smoothness evaluation function.
Compared with the prior art, the present invention has the following benefits: the segmentation-based pre-matching strategy effectively solves the matching problem in weakly textured regions, and matching transversely and longitudinally and then merging captures more image features and reduces mismatches. In addition, the Bayesian estimation step accounts for both image similarity and depth smoothness, so the optimization yields an accurate and smooth depth map.
Accompanying drawing explanation
Fig. 1 is the framework diagram of the stereo matching algorithm of the present invention.
Fig. 2 is a schematic diagram of the image segmentation.
Fig. 3 is a schematic diagram of the segment matching.
Fig. 4 is the flow chart of a concrete embodiment.
Detailed description of the invention
The present invention is described in detail below with reference to the embodiments and the accompanying drawings.
A stereo matching algorithm based on segmentation matching and Bayesian estimation matches, from the left and right images obtained from a binocular camera, each pixel in the left image with a pixel in the right image, obtaining a dense depth map of the scene. The basic idea is a progressive, coarse-to-fine and sparse-to-dense structure: first perform sparse and rough segmentation pre-matching; then estimate the depth values of the invalid points by least squares to densify the result; finally refine the depth at every point by Bayes' rule of conditional probability, taking into account the pre-matching value, image similarity, and depth smoothness, thus obtaining a dense and accurate depth map. The flow of the stereo matching method of the present invention, shown in Fig. 1, is:
Step 1, segmentation pre-matching, comprise the steps of
1) Compute the Sobel filter response of the image on the RGB color space, as in formula (1):
Gx^i = [−1 0 1; −2 0 2; −1 0 1] * I^i
Gy^i = [1 2 1; 0 0 0; −1 −2 −1] * I^i
G^i = √((Gx^i)² + (Gy^i)²),  i ∈ {r,g,b}   (1)
where i denotes any component of RGB, I^i is the image pixel value on component i, G^i is the filter result on component i, and Gx^i, Gy^i are the filter results of component i in the x and y directions respectively.
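As a concrete illustration, the per-channel response of formula (1) can be sketched as below. The function name, the zero-padded borders, and the use of plain cross-correlation (which leaves the gradient magnitude unchanged for these kernels) are choices of this sketch, not details given in the patent.

```python
import numpy as np

# Sobel kernels of formula (1)
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
KY = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], dtype=float)

def sobel_response(channel):
    """Per-channel Sobel magnitude G^i = sqrt((Gx^i)^2 + (Gy^i)^2).

    `channel` is one RGB component as a 2-D float array; borders are
    left at zero for simplicity.
    """
    h, w = channel.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            win = channel[r - 1:r + 2, c - 1:c + 2]
            gx[r, c] = np.sum(KX * win)  # horizontal gradient Gx^i
            gy[r, c] = np.sum(KY * win)  # vertical gradient Gy^i
    return np.sqrt(gx ** 2 + gy ** 2)

# A vertical step edge responds only near the edge column
img = np.zeros((5, 5))
img[:, 3:] = 1.0
G = sobel_response(img)
```

Running each RGB component through this function yields the three response maps G^r, G^g, G^b used for segmentation below.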
2) According to the filter response values, segment the image transversely and longitudinally with a threshold, dividing it into edge regions and segment regions, as in Fig. 2. An edge region is a region of large color change, i.e. the filter response at the point exceeds a threshold: Sobel(I) > τ_edge. The run between two adjacent edge regions within a row is a transverse segment; likewise, the run between two adjacent edge regions within a column is a longitudinal segment.
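The transverse split described in this step can be sketched as follows for a single row of filter responses; the function name, the list representation, and the inclusive (startCol, endCol) tuples are illustrative assumptions:

```python
def split_row(responses, tau_edge):
    """Split one row into edge pixels (response > tau_edge) and
    transverse segments: maximal runs of non-edge pixels.

    Returns (edge_columns, segments), each segment being an inclusive
    (startCol, endCol) tuple.
    """
    edge_cols, segments = [], []
    start = None  # start column of the current non-edge run
    for col, g in enumerate(responses):
        if g > tau_edge:
            edge_cols.append(col)
            if start is not None:
                segments.append((start, col - 1))
                start = None
        elif start is None:
            start = col
    if start is not None:  # run reaching the end of the row
        segments.append((start, len(responses) - 1))
    return edge_cols, segments

edges, segs = split_row([0.1, 9.0, 0.2, 0.3, 9.0, 0.1], tau_edge=1.0)
```

Longitudinal segments are obtained the same way by walking each column of responses instead of each row.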
3) Match the points of the edge regions with a local matching algorithm, whose evaluation function is formula (2):
cost = cost_edge + cost_data
cost_edge = 1 − exp(−Σ_{i∈{r,g,b}} Σ_{(row,col)∈W_S} |S_L^i(row,col) − S_R^i(row,col−d)| / λ_edge)
cost_data = 1 − exp(−Σ_{i∈{r,g,b}} Σ_{(row,col)∈W_S} |I_L^i(row,col) − I_R^i(row,col−d)| / λ_data)   (2)
where I_L^i, S_L^i and I_R^i, S_R^i are the pixel values on component i and the Sobel filter values of the left and right images respectively.
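A minimal sketch of the window cost of formula (2), assuming images stored as nested lists indexed [row][col][channel]; the λ values and the window half-size are placeholders, not values from the patent:

```python
from math import exp

def edge_cost(IL, IR, SL, SR, row, col, d, wsize=2,
              lam_edge=50.0, lam_data=50.0):
    """Window-based edge-region cost of formula (2).

    IL/IR hold pixel values and SL/SR Sobel responses, indexed as
    img[row][col][i] with i over the three RGB components; the window
    W_S is the square of half-size `wsize` around (row, col).
    """
    sum_s = sum_i = 0.0
    for r in range(row - wsize, row + wsize + 1):
        for c in range(col - wsize, col + wsize + 1):
            for i in range(3):  # i in {r, g, b}
                sum_s += abs(SL[r][c][i] - SR[r][c - d][i])
                sum_i += abs(IL[r][c][i] - IR[r][c - d][i])
    cost_edge = 1.0 - exp(-sum_s / lam_edge)
    cost_data = 1.0 - exp(-sum_i / lam_data)
    return cost_edge + cost_data

# Identical images at d = 0 cost nothing; a mismatched right image costs more
flat = [[[5.0] * 3 for _ in range(9)] for _ in range(9)]
dark = [[[0.0] * 3 for _ in range(9)] for _ in range(9)]
perfect = edge_cost(flat, flat, dark, dark, row=4, col=4, d=0)
mismatch = edge_cost(flat, dark, dark, dark, row=4, col=4, d=0)
```

An edge pixel is matched by evaluating this cost for every candidate d and keeping the minimum.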
4) Match the transverse segments. For each segment Seg_L of the left image, search the same row of the right image for candidate segments Seg_R, with the candidate condition of formula (3):
row_L = row_R
startCol_L < endCol_R + MaxDisp
endCol_L > startCol_R   (3)
where MaxDisp is the maximum depth value and row, startCol, endCol are the row coordinate, start column and end column of the segment. For each candidate segment, assume it is translated right by d; then d is the candidate depth value, as in Fig. 3, with the evaluation function of formula (4):
cost = cost_ad + cost_co_length + cost_length_diff + cost_aver_color
cost_ad = 1 − exp(−Σ_{(row,col)∈P_O} Σ_{i∈{r,g,b}} |I_L^i(row,col) − I_R^i(row,col−d)| / λ_ad)
cost_co_length = 1 − exp((length(P_O)/length(Seg_L)) / λ_co_length)
cost_length_diff = 1 − exp(−(|length(Seg_L) − length(Seg_R)|/length(Seg_L)) / λ_length_diff)
cost_aver_color = 1 − exp(−Σ_{i∈{r,g,b}} |Ī_L^i − Ī_R^i| / λ_aver_color)   (4)
This evaluation function accounts for the color difference over the intersection P_O, the overlap ratio, the length difference of the two segments, and the difference of their average colors. Searching d over [0, MaxDisp] for the optimal evaluation value yields the depth value of the segment.
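The candidate search of this step can be sketched as follows. The dict-based segment representation and all parameter values are assumptions of this sketch; the cost terms follow formula (4), with the overlap term entering with a positive exponent so that larger overlap lowers the cost:

```python
from math import exp

def segment_cost(seg_l, seg_r, d, lam_ad=30.0, lam_co=1.0,
                 lam_ld=1.0, lam_ac=30.0):
    """Formula (4) for one candidate disparity d.

    A segment is represented here as a dict with 'start', 'end'
    (inclusive columns) and 'colors' (per-pixel [r, g, b] lists).
    """
    # Intersection P_O of Seg_R translated right by d with Seg_L
    lo = max(seg_l['start'], seg_r['start'] + d)
    hi = min(seg_l['end'], seg_r['end'] + d)
    overlap = max(0, hi - lo + 1)
    len_l = seg_l['end'] - seg_l['start'] + 1
    len_r = seg_r['end'] - seg_r['start'] + 1

    diff = 0.0
    for col in range(lo, hi + 1):
        cl = seg_l['colors'][col - seg_l['start']]
        cr = seg_r['colors'][col - d - seg_r['start']]
        diff += sum(abs(a - b) for a, b in zip(cl, cr))
    avg_l = [sum(c[i] for c in seg_l['colors']) / len_l for i in range(3)]
    avg_r = [sum(c[i] for c in seg_r['colors']) / len_r for i in range(3)]

    cost_ad = 1.0 - exp(-diff / lam_ad)
    cost_co = 1.0 - exp((overlap / len_l) / lam_co)  # rewards overlap
    cost_ld = 1.0 - exp(-(abs(len_l - len_r) / len_l) / lam_ld)
    cost_ac = 1.0 - exp(-sum(abs(a - b) for a, b in zip(avg_l, avg_r)) / lam_ac)
    return cost_ad + cost_co + cost_ld + cost_ac

def match_segment(seg_l, candidates, max_disp):
    """Search d in [0, max_disp] over all candidate right segments for
    the minimum-cost depth value, as in step 4)."""
    best = (float('inf'), 0)
    for seg_r in candidates:
        for d in range(max_disp + 1):
            c = segment_cost(seg_l, seg_r, d)
            if c < best[0]:
                best = (c, d)
    return best[1]

# Two identical gray segments offset by three columns: the search recovers d = 3
gray = [[0.5, 0.5, 0.5]] * 5
seg_l = {'start': 10, 'end': 14, 'colors': gray}
seg_r = {'start': 7, 'end': 11, 'colors': gray}
d_hat = match_segment(seg_l, [seg_r], max_disp=6)
```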
5) Match the longitudinal segments in the same manner.
6) Merge the transverse and longitudinal segment matching results, as in formula (5):
d = d_h, if d_v = 0
d = d_v, if d_h = 0
d = 0, if |d_h − d_v| > τ_d
d = √(d_h·d_v), otherwise   (5)
where d_h and d_v are the transverse and longitudinal matching results. This strategy removes mismatched points and combines the two matching results, improving accuracy.
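The fusion rule of formula (5) in a few lines; reading the 'otherwise' branch, garbled in the published text, as the geometric mean √(d_h·d_v) is an assumption of this sketch, as is the placeholder value of τ_d:

```python
from math import sqrt

def merge_disparity(d_h, d_v, tau_d=2.0):
    """Fuse transverse (d_h) and longitudinal (d_v) matches, formula (5)."""
    if d_v == 0:
        return d_h              # only the transverse match succeeded
    if d_h == 0:
        return d_v              # only the longitudinal match succeeded
    if abs(d_h - d_v) > tau_d:
        return 0                # conflicting matches: mark invalid
    return sqrt(d_h * d_v)      # consistent matches: combine
```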
7) Merge the edge matching result with the segment matching result to obtain the pre-matching depth map.
Step 2: estimate the invalid points of the pre-matching depth map obtained in step 1. For each invalid point (row, col) of the left image, perform the following steps:
1) Search around (row, col) for valid points (i, j) whose color difference to it is within a threshold, and take them as support points forming the support window SW. The threshold is defined by formula (6):
τ(i,j) = −(τ_max/L_max)·√((row−i)² + (col−j)²) + τ_max   (6)
where τ_max and L_max are the maximum threshold and the maximum distance respectively.
2) Fit a least-squares plane to all support points, the three coordinates of each point being its image row row, image column col, and depth value d. The plane equation is formula (7):
d = a_0·row + a_1·col + a_2   (7)
The parameters are estimated by solving the linear system of formula (8):
[ Σrow_i²       Σrow_i·col_i   Σrow_i ] [a_0]   [ Σrow_i·d_i ]
[ Σrow_i·col_i  Σcol_i²        Σcol_i ] [a_1] = [ Σcol_i·d_i ]
[ Σrow_i        Σcol_i         n      ] [a_2]   [ Σd_i       ]   (8)
3) Substitute the image coordinates of the invalid point into the plane equation to obtain its depth value.
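Steps 2) and 3) can be sketched with a standard least-squares solver; solving the overdetermined system A·x = b directly is numerically equivalent to forming the normal equations of formula (8):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane d = a0*row + a1*col + a2 over support points.

    `points` is a list of (row, col, d) triples.
    """
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    b = pts[:, 2]
    (a0, a1, a2), *_ = np.linalg.lstsq(A, b, rcond=None)
    return a0, a1, a2

def estimate_invalid(row, col, points):
    """Depth of an invalid point from the fitted plane (step 3)."""
    a0, a1, a2 = fit_plane(points)
    return a0 * row + a1 * col + a2

# Support points sampled from the plane d = 2*row + 3*col + 1;
# the plane is recovered and extrapolated to the invalid point (10, 10)
support = [(r, c, 2 * r + 3 * c + 1) for r in range(5) for c in range(5)]
d_est = estimate_invalid(10, 10, support)
```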
Step 3: refine the dense pre-matching depth map generated in step 2. Based on Bayes' conditional probability formula, for a point I_l in the left image with corresponding point I_r in the right image, the probability distribution over the depth value d is formula (9):
p(d, I_l, I_r) ∝ p(d)·p(I_l, I_r | d)   (9)
p(d) is the prior probability based on the pre-matching value, a Gaussian distribution centered on the pre-matching value d_e, as in formula (10):
p(d) ∝ λ_e + exp(−(d − d_e)²/(2σ_e²)) if |d − d_e| < 3σ_e; 0 otherwise   (10)
The likelihood p(I_l, I_r | d) accounts for the similarity between the corresponding points and the depth smoothness around the point; its distribution is formula (11):
p(I_l, I_r | d) = p_d · p_s   (11)
where p_d is the similarity probability distribution and p_s is the smoothness probability distribution, as in formulas (12) and (13):
p_d ∝ λ_d + exp(−census(I_l, I_r)²/(2σ_d²))   (12)
p_s ∝ λ_s + exp(−(Σ_{d_i∈d_SW} |d − d_i| / N)²/(2σ_s²))   (13)
d_SW is the set of all depth values in the support window, and census(I_l, I_r) is the census evaluation function describing the similarity of two points in the left and right images:
census(I_l, I_r) = Σ_{(x,y)∈W_S} ρ(x,y,d)
ρ(x,y,d) = 0 if I_L(x,y) Δ I_l and I_R(x+d,y) Δ I_r for the same relation Δ ∈ {>,<,=}; 1 otherwise   (14)
At every point, the depth value d of maximum posterior probability is taken, as in formula (15):
d* = argmax_{d∈D_c} p(d, I_l, I_r)   (15)
Applying step 3 to every point in the image yields the resulting depth image.
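The per-pixel refinement of formulas (10) to (15) can be sketched as follows. The census term and neighborhood depths are passed in abstractly, and every constant (λ_e, λ_d, λ_s, σ_e, σ_d, σ_s) is an illustrative placeholder rather than a value from the patent:

```python
from math import exp

def map_depth(d_e, candidates, census_of, neighbor_depths,
              lam_e=0.01, lam_d=0.01, lam_s=0.01,
              sig_e=2.0, sig_d=8.0, sig_s=1.0):
    """MAP depth of formulas (10)-(15) for one pixel.

    d_e: pre-matching depth (prior mean); candidates: depth values to
    score; census_of(d): census dissimilarity for hypothesis d;
    neighbor_depths: depths d_i in the support window.
    """
    n = len(neighbor_depths)
    best_d, best_p = d_e, -1.0
    for d in candidates:
        # Prior (10): truncated Gaussian around the pre-matching value
        if abs(d - d_e) >= 3 * sig_e:
            continue  # prior is zero outside the truncation
        p = lam_e + exp(-(d - d_e) ** 2 / (2 * sig_e ** 2))
        # Likelihood (11) = similarity term (12) * smoothness term (13)
        p *= lam_d + exp(-census_of(d) ** 2 / (2 * sig_d ** 2))
        s = sum(abs(d - di) for di in neighbor_depths) / n
        p *= lam_s + exp(-s ** 2 / (2 * sig_s ** 2))
        if p > best_p:
            best_d, best_p = d, p
    return best_d

# Neighbors agree on depth 6 and the census term is flat, so the
# posterior pulls the pre-matched depth 5 toward 6.
refined = map_depth(5.0, [4.0, 5.0, 6.0, 7.0],
                    census_of=lambda d: 0.0,
                    neighbor_depths=[6.0, 6.0, 6.0, 6.0])
```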
Embodiment
Fig. 4 shows the flow of a concrete embodiment of the progressive stereo matching algorithm based on segmentation pre-matching and Bayesian estimation. First the binocular image pair is acquired. In the segmentation pre-matching step, transverse and longitudinal pre-matching are performed and then merged into a sparse and rough pre-matching depth map. In the invalid-point estimation step, the invalid points of the pre-matching depth map are estimated by the least-squares method described above, densifying it. In the Bayesian estimation step, the depth values are refined by the method described above, yielding a dense and accurate depth map. The results show that the algorithm of the present invention extracts a depth map from a binocular image pair, from coarse to fine and from sparse to dense, with good effect.

Claims (6)

1. A progressive stereo matching algorithm based on segmentation pre-matching and Bayesian estimation, for densely extracting depth information from a binocular image pair, characterized in that a progressive, coarse-to-fine and sparse-to-dense structure is used: sparse and rough segmentation pre-matching is performed first; the depth values of invalid points are then estimated by the least-squares method to densify the result; finally the depth at every point is refined by Bayes' rule of conditional probability, taking into account the pre-matching value, image similarity and depth smoothness, thus obtaining a dense and accurate depth map; the algorithm comprises three steps, segmentation pre-matching, invalid-point estimation and Bayesian estimation, as follows:
1) segmentation pre-matching: the image is divided into edge regions and segment regions based on the Sobel filter response; the segment regions are divided into transverse and longitudinal segments, and the edge regions and the transverse/longitudinal segments are pre-matched separately; the edge regions are pre-matched with a window-based stereo matching method whose evaluation function considers the differences of the pixel values and of the Sobel filter responses within a window centered on the pixel; the segments are pre-matched with a translation strategy whose evaluation function considers the color difference over the intersection of the two segments, their overlap ratio, their length difference and the difference of their average colors; the transverse and longitudinal pre-matching results are merged, and then fused with the edge-region pre-matching result to obtain a denser pre-matching depth map;
2) invalid-point estimation: points for which pre-matching fails are invalid points; for each invalid point, nearby pixels whose color difference to it is within a threshold are searched to form a support window; a least-squares plane is fitted in the support window, the three coordinates being image row, image column and depth value; the image coordinates of the invalid point are substituted into the least-squares plane to obtain an estimated depth value, densifying the pre-matching depth map;
3) Bayesian estimation: following Bayes' rule of conditional probability, a probability distribution over depth values is computed at every point, with the prior probability obtained from the densified pre-matching result and the likelihood from the image similarity and depth smoothness; the prior is defined as a Gaussian distribution centered on the pre-matching value; the image similarity is defined by a census function over a window centered on the point; the smoothness is defined as the sum of depth differences within that window; from the resulting distribution, the depth of maximum probability is taken by maximum a posteriori estimation to obtain the depth map.
2. The algorithm according to claim 1, characterized in that the evaluation function of the edge-region pre-matching in step 1) is defined by formula (2):
cost = cost_edge + cost_data
cost_edge = 1 − exp(−Σ_{i∈{r,g,b}} Σ_{(row,col)∈W_S} |S_L^i(row,col) − S_R^i(row,col−d)| / λ_edge)
cost_data = 1 − exp(−Σ_{i∈{r,g,b}} Σ_{(row,col)∈W_S} |I_L^i(row,col) − I_R^i(row,col−d)| / λ_data)
where cost is the evaluation function value; cost_edge and cost_data are the components for the Sobel filter values and the pixel values respectively; (row,col) is the coordinate of the point under consideration; W_S is the set of points whose distance to that point is less than wsize; d is the depth value; i denotes any component of RGB space; I_L^i, I_R^i are the pixel values of the left and right images on component i; S_L^i, S_R^i are the Sobel filter responses of the left and right images on component i; and λ_data, λ_edge are preset constants.
3. The algorithm according to claim 1, characterized in that the evaluation function of the transverse/longitudinal segment pre-matching in step 1), for a left segment Seg_L and a right segment Seg_R, is defined by formula (3):
cost = cost_ad + cost_co_length + cost_length_diff + cost_aver_color
cost_ad = 1 − exp(−Σ_{(row,col)∈P_O} Σ_{i∈{r,g,b}} |I_L^i(row,col) − I_R^i(row,col−d)| / λ_ad)
cost_co_length = 1 − exp((length(P_O)/length(Seg_L)) / λ_co_length)
cost_length_diff = 1 − exp(−(|length(Seg_L) − length(Seg_R)|/length(Seg_L)) / λ_length_diff)
cost_aver_color = 1 − exp(−Σ_{i∈{r,g,b}} |Ī_L^i − Ī_R^i| / λ_aver_color)
where cost is the evaluation function value; cost_ad, cost_co_length, cost_length_diff and cost_aver_color are the components for the color difference over the intersection, the overlap ratio, the length difference of the two segments, and the difference of their average colors respectively; P_O is the intersection of Seg_R, translated right by d pixels, with Seg_L; length(·) computes length; i denotes any component of RGB space; Ī_L^i and Ī_R^i are the average pixel values of the two segments on component i; and λ_ad, λ_co_length, λ_length_diff, λ_aver_color are preset constants.
4. The algorithm according to claim 1, characterized in that the transverse/longitudinal segment pre-matching results of step 1) are merged according to formula (5):
d = d_h, if d_v = 0
d = d_v, if d_h = 0
d = 0, if |d_h − d_v| > τ_d
d = √(d_h·d_v), otherwise
where d is the merged depth, d_h and d_v are the transverse and longitudinal matching results, and τ_d is the maximum allowed gap.
5. The algorithm according to claim 1, characterized in that the color-difference threshold used to form the support window in step 2) is determined by formula (6):
τ(i,j) = −(τ_max/L_max)·√((row−i)² + (col−j)²) + τ_max.
6. The algorithm according to claim 1, characterized in that the probability distribution over depth values at every point in step 3) is computed according to formulas (9) and (11):
p(d, I_l, I_r) ∝ p(d)·p(I_l, I_r | d)   (9)
p(I_l, I_r | d) = p_d · p_s   (11)
where p(d) is a Gaussian distribution centered on the pre-matching value; p_d is the probability for image similarity, a Gaussian in the similarity evaluation function; and p_s is the probability for depth smoothness, a Gaussian in the smoothness evaluation function.
CN201310296015.2A 2013-07-14 2013-07-14 A progressive stereo matching algorithm based on segmentation matching and Bayesian estimation Expired - Fee Related CN103383776B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310296015.2A CN103383776B (en) 2013-07-14 2013-07-14 A progressive stereo matching algorithm based on segmentation matching and Bayesian estimation


Publications (2)

Publication Number Publication Date
CN103383776A CN103383776A (en) 2013-11-06
CN103383776B true CN103383776B (en) 2016-06-15

Family

ID=49491559

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310296015.2A Expired - Fee Related CN103383776B (en) 2013-07-14 2013-07-14 A progressive stereo matching algorithm based on segmentation matching and Bayesian estimation

Country Status (1)

Country Link
CN (1) CN103383776B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103729882B (en) * 2013-12-30 2016-09-28 浙江大学 A kind of some cloud relative pose estimation method based on three-dimensional curve coupling
US9589359B2 (en) * 2014-04-24 2017-03-07 Intel Corporation Structured stereo
US9582888B2 (en) * 2014-06-19 2017-02-28 Qualcomm Incorporated Structured light three-dimensional (3D) depth map based on content filtering
CN104902253B (en) * 2015-02-09 2016-11-09 北京理工大学 A kind of based on the stereoscopic image generation method improving Bayesian model
CN104807465B (en) * 2015-04-27 2018-03-13 安徽工程大学 Robot synchronously positions and map creating method and device
CN105686936B (en) * 2016-01-12 2017-12-29 浙江大学 A kind of acoustic coding interactive system based on RGB-IR cameras
WO2017143550A1 (en) * 2016-02-25 2017-08-31 SZ DJI Technology Co., Ltd. Imaging system and method
CN106097336B (en) * 2016-06-07 2019-01-22 重庆科技学院 Front and back scape solid matching method based on belief propagation and self similarity divergence measurement
CN107563656B (en) * 2017-09-11 2020-06-16 东北大学 Method for evaluating running state of gold hydrometallurgy cyaniding leaching process
CN108765486A (en) * 2018-05-17 2018-11-06 长春理工大学 Based on sparse piece of aggregation strategy method of relevant Stereo matching in color
CN110322518B (en) * 2019-07-05 2021-12-17 深圳市道通智能航空技术股份有限公司 Evaluation method, evaluation system and test equipment of stereo matching algorithm
CN116844732B (en) * 2023-07-27 2024-02-02 北京中益盛启科技有限公司 Hypertension diagnosis and treatment data distributed regulation and control system and method based on big data analysis

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101833762A (en) * 2010-04-20 2010-09-15 南京航空航天大学 Different-source image matching method based on thick edges among objects and fit
CN102881021A (en) * 2012-10-25 2013-01-16 上海交通大学 Aortic valve ultrasonic image segmentation method based on probability distribution and continuous maximum flow


Also Published As

Publication number Publication date
CN103383776A (en) 2013-11-06


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160615

Termination date: 20180714
