CN104408710A - Global parallax estimation method and system - Google Patents

Global parallax estimation method and system

Info

Publication number
CN104408710A
CN104408710A
Authority
CN
China
Prior art keywords: point, image, searching, visual, parallax
Legal status: Granted
Application number
CN201410604055.3A
Other languages
Chinese (zh)
Other versions
CN104408710B (en)
Inventor
彭祎
王荣刚
王振宇
高文
董胜富
王文敏
赵洋
Current Assignee
Peking University Shenzhen Graduate School
Original Assignee
Peking University Shenzhen Graduate School
Priority date
Filing date
Publication date
Application filed by Peking University Shenzhen Graduate School
Priority to CN201410604055.3A
Publication of CN104408710A
Application granted
Publication of CN104408710B
Legal status: Active
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/50: Depth or shape recovery
    • G06T7/55: Depth or shape recovery from multiple images
    • G06T7/593: Depth or shape recovery from multiple images from stereo images
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; Image sequence
    • G06T2207/10021: Stereoscopic video; Stereoscopic image sequence
    • G06T2207/20: Special algorithmic details
    • G06T2207/20024: Filtering details
    • G06T2207/20032: Median filtering

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to a global disparity estimation method and system. When the matching spaces are computed, sampling points are selected on the image according to a preset rule, and the first matching space and second matching space are then computed under constraint conditions. The constraint conditions comprise a linear constraint and a sampling-point-based space constraint: the linear constraint bounds the colour Euclidean distance between the current pixel and a search point, and the space constraint bounds the colour Euclidean distance between the search point and a sampling point. Because the two constraint conditions are applied together, the computed matching space follows the edges of the objects in the image more closely, which improves the accuracy of the matching-space computation and thereby ensures the accuracy of the final disparity computation.

Description

Global disparity estimation method and system
Technical field
The application relates to the field of stereo-matching image processing, and in particular to a global disparity estimation method and system.
Background technology
In conventional video systems the user can only passively watch the picture captured by the camera and cannot view the scene from other viewpoints. Multi-view video (Multi-View Video) allows the user to watch from multiple viewpoints, enhancing interactivity and the 3D viewing effect, and it has broad application prospects in fields such as stereoscopic television, video conferencing, autonomous navigation and virtual reality. However, the stronger interactivity and viewing effect also increase the data volume of the video, adding a burden to its storage and transmission; how to solve these problems has become a current research hotspot.
Stereo matching, also called disparity estimation, estimates the geometric relationship between corresponding pixels in the multi-view (usually binocular) image data acquired by front-end cameras. With disparity estimation, the information of a corresponding viewpoint can be recovered from the information of one viewpoint together with its depth (disparity) information, which reduces the original data volume and facilitates the transmission and storage of multi-view video.
Depending on the implementation details, stereo matching methods can be roughly divided into local stereo matching algorithms and global stereo matching algorithms (see Scharstein D, Szeliski R. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. International Journal of Computer Vision, 2002, 47(1-3): 7-42). Local stereo matching algorithms are fast but not very accurate; global stereo matching algorithms obtain the disparity result by optimizing a global energy function and are more accurate but slower, which is unfavourable for practical application. However, some improved global algorithms achieve speeds comparable to local algorithms, such as the fast belief propagation algorithm (see Pedro F. Felzenszwalb, Daniel P. Huttenlocher. Efficient Belief Propagation for Early Vision. International Journal of Computer Vision, October 2006, Volume 70, Issue 1, pp 41-54).
As the above shows, stereo matching, as an important step in multi-view video, has received wide attention, and a large number of stereo matching algorithms have emerged. Nevertheless, many problems remain in stereo matching, particularly in correctness and stability, which need further improvement.
Summary of the invention
According to a first aspect, the application provides a global disparity estimation method, comprising:
reading in a first viewpoint image and a second viewpoint image, the first viewpoint image being an image of the target acquired from a first viewpoint and the second viewpoint image being an image of the target acquired from a second viewpoint;
selecting sampling points on the first viewpoint image according to a preset rule;
selecting pixels on the first viewpoint image one by one as the current pixel; taking the current pixel as origin, searching pixel by pixel along the positive and negative directions of a first axis, each visited pixel being a search point, stopping when a point is found that does not satisfy preset constraint conditions, and taking all searched points that satisfy the constraint conditions as first matching points; then, taking each first matching point in turn as origin, searching pixel by pixel along the positive and negative directions of a second axis in the same way, and taking all searched points that satisfy the constraint conditions as second matching points; the first matching points and second matching points form the first matching space of the current pixel;
taking the current pixel as origin, searching pixel by pixel along the positive and negative directions of the second axis, stopping when a point is found that does not satisfy the preset constraint conditions, and taking all searched points that satisfy the constraint conditions as third matching points; then, taking each third matching point in turn as origin, searching pixel by pixel along the positive and negative directions of the first axis in the same way, and taking all searched points that satisfy the constraint conditions as fourth matching points; the third matching points and fourth matching points form the second matching space of the current pixel;
the constraint conditions comprise a linear constraint and a sampling-point-based space constraint, the linear constraint being a constraint on the colour Euclidean distance between the current pixel and the search point, the space constraint being a constraint on the colour Euclidean distance between the search point and a sampling point, and the first axis being perpendicular to the second axis;
computing the sum of the matching costs of all points in the first matching space and the sum of the matching costs of all points in the second matching space;
computing an initial disparity from the two matching-cost sums, and screening to obtain reliable points;
partitioning the first viewpoint image and the second viewpoint image into image blocks;
computing the final disparity of each pixel in the first and second viewpoint images based on the image blocks and the initial disparities of the reliable points.
In one embodiment, the constraint conditions are:
O_lab(p, q) < k1, when l1 < w1
O_lab(p, q) < k2, when w1 ≤ l1 ≤ w2
O_lab(p, q) < O_lab(q, e_i), when k3·l1 < l2 < k4·l1   …… (1)
where l1 is the distance from the current pixel p to the search point q, l2 is the distance from p to the sampling point e_i, O_lab(p, q) is the Euclidean distance in colour between p and q, O_lab(q, e_i) is the Euclidean distance in colour between q and e_i, and k1, k2, k3, k4, w1, w2 are user-defined parameters with k1 > k2, k4 > k3 and w2 > w1.
In one embodiment, the preset rule places each sampling point at a preset distance from its four neighbouring sampling points above, below, left and right.
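As an illustration, such a grid sampling rule can be sketched as follows; the function name and the centring of the grid are assumptions for the sketch, not taken from the patent:

```python
import numpy as np

def grid_sample_points(height, width, d):
    # Place sample points on a regular grid of spacing d, so that each
    # sample point is distance d from its four up/down/left/right neighbours.
    ys = np.arange(d // 2, height, d)
    xs = np.arange(d // 2, width, d)
    return [(int(y), int(x)) for y in ys for x in xs]
```

With d = 5 on a 10x10 image this yields the four points (2, 2), (2, 7), (7, 2) and (7, 7), forming the lattice of Fig. 2.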
In one embodiment, after reading in the first and second viewpoint images and before selecting the sampling points, the method further comprises performing epipolar rectification on the first and second viewpoint images.
In one embodiment, after partitioning the first and second viewpoint images and before computing the final disparity, the method further comprises marking the occluded regions in the image, specifically: in each row of each block of the first viewpoint image, take the first reliable point L(p) from the left end and, from its disparity d_p, compute its corresponding point R(p − d_p) in the second viewpoint image; starting from the point R(p − d_p − 1) of the second viewpoint image, search leftwards for the first reliable point R(q), take its disparity d_q and compute the corresponding point L(q + d_q) of R(q) in the first viewpoint image; the points lying horizontally between the two points L(p) and L(q + d_q) are occluded points.
In one embodiment, the initial disparity is computed from the sum of the matching costs of all points in the first matching space and the sum of the matching costs of all points in the second matching space using a fast belief propagation global algorithm.
In one embodiment, partitioning the first and second viewpoint images into image blocks comprises:
dividing the first and second viewpoint images into several image blocks;
merging image blocks by colour: an image block whose pixel count is below a preset value is merged with the adjacent block closest to it in colour; and/or, when two adjacent image blocks are determined to be close in colour and the sum of their pixel counts is below a preset value, the two blocks are merged;
merging image blocks by disparity: an image block whose number of reliable points is below a preset value is merged with the adjacent block closest to it in colour, the reliable points being obtained by screening the initial disparities of the pixels of the original image; and/or, judging whether the disparity change between two adjacent image blocks is smooth and, if so, merging the two blocks.
In one embodiment, dividing the first and second viewpoint images into several image blocks is specifically: dividing the image into several image blocks by superpixel-based colour segmentation.
In one embodiment, judging whether the disparity change of two adjacent image blocks is smooth comprises:
finding the border neighbour pairs P_S(i), P_Sk(i) of the current image block S and its adjacent block S_k, P_S(i) and P_Sk(i) being the i-th border neighbour pair of blocks S and S_k;
searching a rectangular box of size a*b centred on P_S(i) and computing the mean V_S(i) of the disparities of the reliable points in the box that belong to block S, and searching a rectangular box of size a*b centred on P_Sk(i) and computing the mean V_Sk(i) of the disparities of the reliable points in the box that belong to block S_k, where a and b are preset pixel widths;
judging the disparity change between block S and block S_k to be smooth when max |V_S(i) − V_Sk(i)| < j, where i ∈ W_{S,Sk}, W_{S,Sk} is the index set of all border pairs of blocks S and S_k, and j is a preset value.
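The merging test above reduces to one comparison over the border pairs; a minimal sketch, where the helper name and the example threshold are ours:

```python
import numpy as np

def parallax_smooth(v_s, v_sk, j=2.0):
    # Smoothness test for merging: blocks S and S_k are judged smooth when
    # max_i |V_S(i) - V_Sk(i)| < j over all border neighbour pairs i.
    # v_s, v_sk: per-pair mean disparities; j: preset threshold.
    diffs = np.abs(np.asarray(v_s, float) - np.asarray(v_sk, float))
    return bool(diffs.max() < j)
```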
According to a second aspect, the application also provides a global disparity estimation system, comprising:
an image reading module for reading in a first viewpoint image and a second viewpoint image, the first viewpoint image being an image of the target acquired from a first viewpoint and the second viewpoint image being an image of the target acquired from a second viewpoint;
a matching space computation module which, after selecting sampling points on the first viewpoint image according to a preset rule, selects pixels on the first viewpoint image one by one as the current pixel; taking the current pixel as origin, it searches pixel by pixel along the positive and negative directions of a first axis, each visited pixel being a search point, stopping when a point is found that does not satisfy preset constraint conditions, and takes all searched points satisfying the constraint conditions as first matching points; then, taking each first matching point as origin, it searches pixel by pixel along the positive and negative directions of a second axis in the same way and takes all searched points satisfying the constraint conditions as second matching points; the first and second matching points form the first matching space of the current pixel;
taking the current pixel as origin, it searches pixel by pixel along the positive and negative directions of the second axis, stopping when a point is found that does not satisfy the preset constraint conditions, and takes all searched points satisfying the constraint conditions as third matching points; then, taking each third matching point as origin, it searches pixel by pixel along the positive and negative directions of the first axis in the same way and takes all searched points satisfying the constraint conditions as fourth matching points; the third and fourth matching points form the second matching space of the current pixel;
the constraint conditions comprise a linear constraint and a sampling-point-based space constraint, the linear constraint being a constraint on the colour Euclidean distance between the current pixel and the search point, the space constraint being a constraint on the colour Euclidean distance between the search point and a sampling point, and the first axis being perpendicular to the second axis;
a matching cost computation module for computing the sum of the matching costs of all points in the first matching space and the sum of the matching costs of all points in the second matching space;
an initial disparity computation module for computing an initial disparity from the two matching-cost sums and screening to obtain reliable points;
an image partition module for partitioning the first and second viewpoint images into image blocks;
a final disparity computation module for computing the final disparity of each pixel in the first and second viewpoint images based on the image blocks and the initial disparities of the reliable points.
In the global disparity estimation method and system provided by the application, when the matching spaces are computed, sampling points are first selected on the image according to a preset rule, and the first and second matching spaces are then computed under constraint conditions. The constraint conditions comprise a linear constraint and a sampling-point-based space constraint: the linear constraint bounds the colour Euclidean distance between the current pixel and the search point, and the space constraint bounds the colour Euclidean distance between the search point and a sampling point. Because the two constraint conditions are applied together, the computed matching spaces follow the edges of the objects in the image more closely; the accuracy of the matching-space computation is therefore improved, which in turn ensures the accuracy of the final disparity computation.
Brief description of the drawings
Fig. 1 is a flow diagram of the global disparity estimation method in an embodiment of the application;
Fig. 2 is a diagram of the selection of sampling points in the matching space computation of an embodiment of the application;
Fig. 3 is a diagram of the computation of the first matching space in the matching space computation of an embodiment of the application;
Fig. 4 is a module diagram of the global disparity estimation system in an embodiment of the application;
Fig. 5 shows the test results of the global disparity estimation method provided by an embodiment of the application on the Middlebury test platform.
Detailed description
The application is described in further detail below through embodiments, with reference to the accompanying drawings.
Referring to Fig. 1, this embodiment provides a global disparity estimation method comprising the following steps:
S00: read in a first viewpoint image and a second viewpoint image, the first viewpoint image being an image of the target acquired from a first viewpoint and the second viewpoint image being an image of the target acquired from a second viewpoint. For ease of description, the first viewpoint image is taken to be the left viewpoint image (the left image, for short) and the second viewpoint image the right viewpoint image (the right image, for short). The left and right images may be a pair of images from a binocular sequence captured by a binocular camera, or two images captured by a monocular camera under a certain horizontal displacement. Usually the left and right images are colour images; in some embodiments they may also be greyscale images.
In some embodiments, the left and right images read in have already been epipolar-rectified, i.e. the epipolar lines of the two images are horizontal and parallel, which facilitates the subsequent matching cost computation; if the two input images have not been rectified, epipolar rectification of the left and right images must be performed first.
S10: compute the initial disparity and screen to obtain reliable points.
To compute the initial disparity, the matching space of each pixel in the image must be computed first. In this embodiment the matching space comprises a first matching space and a second matching space, computed as follows.
Sampling points are selected according to a preset rule. First, sampling points e are selected in the left image; concretely, each sampling point is a preset distance d from its four neighbouring sampling points above, below, left and right, so that all the sampling points form a grid, as shown in Fig. 2. In other embodiments the sampling points may be selected in other prescribed ways, i.e. the preset rule for selecting sampling points can be formulated according to the actual requirements.
The first and second matching spaces are computed under constraint conditions, where the constraint conditions comprise a linear constraint and a sampling-point-based space constraint; the linear constraint bounds the colour Euclidean distance between the current pixel and the search point, and the space constraint bounds the colour Euclidean distance between the search point and a sampling point.
For a point p in the left image, arms are extended from it in both directions along the X axis (the first axis) and the Y axis (the second axis), their lengths determined by colour difference, for the computation of the matching space.
Pixels in the left image are selected one by one as the current pixel p. Taking p as origin, search pixel by pixel along the positive and negative X directions, each visited pixel being a search point, and stop when a point is found that does not satisfy the preset constraint conditions; all searched points satisfying the constraint conditions are the first matching points. Then, taking each first matching point as origin, search pixel by pixel along the positive and negative Y directions in the same way; all searched points satisfying the constraint conditions are the second matching points. The first and second matching points form the first matching space S1 of point p; Fig. 3 is a diagram of the computation of S1.
Afterwards, again taking p as origin, search pixel by pixel along the positive and negative Y directions, stopping when a point is found that does not satisfy the preset constraint conditions; all searched points satisfying the constraint conditions are the third matching points. Taking each third matching point as origin, search pixel by pixel along the positive and negative X directions in the same way; all searched points satisfying the constraint conditions are the fourth matching points. The third and fourth matching points form the second matching space S2 of point p.
Taking p as origin and searching for the points satisfying the constraint conditions along the positive X, negative X, positive Y and negative Y directions yields, respectively, the right arm, left arm, upper arm and lower arm shown in Fig. 2.
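The arm search above, which stops at the first search point violating the constraints, can be sketched for one axis as follows; for brevity a single colour threshold tau stands in for the full constraint condition (1), and the arm-length cap max_arm is an assumption of the sketch:

```python
import numpy as np

def horizontal_arm(img, y, x, max_arm, tau):
    # Grow the arms of pixel (y, x) along the X axis: step pixel by pixel in
    # the positive then the negative direction, stopping at the first search
    # point whose colour distance to the current pixel reaches tau.
    h, w = img.shape[:2]
    p = img[y, x].astype(float)
    arm = [x]
    for step in (1, -1):
        q = x + step
        while 0 <= q < w and abs(q - x) <= max_arm:
            if np.linalg.norm(img[y, q].astype(float) - p) >= tau:
                break  # first violating point: stop searching this direction
            arm.append(q)
            q += step
    return sorted(arm)
```

The same loop applied along the Y axis from each point of the first arm yields the matching space.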
In a particular embodiment, the constraint conditions are:
O_lab(p, q) < k1, when l1 < w1
O_lab(p, q) < k2, when w1 ≤ l1 ≤ w2
O_lab(p, q) < O_lab(q, e_i), when k3·l1 < l2 < k4·l1   …… (1)
where l1 is the distance from the current pixel p to the search point q, l2 is the distance from p to the sampling point e_i, the choice of e_i being determined by the condition k3·l1 < l2 < k4·l1, O_lab(p, q) is the Euclidean distance in colour between p and q, O_lab(q, e_i) is the Euclidean distance in colour between q and e_i, and k1, k2, k3, k4, w1, w2 are user-defined parameters with k1 > k2, k4 > k3 and w2 > w1. For example, k1 = 15, k2 = 5, k3 = 1.5, k4 = 3, w1 = 10, w2 = 100. In this embodiment, O_lab(p, q) and O_lab(q, e_i) are Euclidean distances in the Lab colour space. It should be noted that, for the i-th sampling point e_i, setting suitable values of k3 and k4 makes the value of i unique, so that a unique sampling point is determined.
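Condition (1) can be read as a predicate applied to each search point; a sketch with the example parameter values above (the function names are ours, and the colour triples are assumed to be Lab values):

```python
import math

def color_dist(c1, c2):
    # Euclidean distance between two Lab colour triples
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

def satisfies_constraints(p_col, q_col, e_col, l1, l2,
                          k1=15, k2=5, k3=1.5, k4=3, w1=10, w2=100):
    # Constraint condition (1): two linear constraints on O_lab(p, q),
    # plus the sampling-point space constraint against O_lab(q, e_i).
    o_pq = color_dist(p_col, q_col)
    if l1 < w1 and o_pq >= k1:
        return False
    if w1 <= l1 <= w2 and o_pq >= k2:
        return False
    if k3 * l1 < l2 < k4 * l1 and o_pq >= color_dist(q_col, e_col):
        return False
    return True
```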
In constraint condition (1), conditions 1 and 2 are linear constraints and condition 3 is the sampling-point-based space constraint. When computing the matching space, the colour change speed differs from picture to picture, and between regions of the same picture, so a single linear constraint can hardly make the algorithm stable. In this embodiment, the space constraint is introduced mainly to improve the points in the border regions of objects in the image, making the computed matching space follow the object edges more closely; because more reasonable colour information is taken as reference, the stability of the algorithm is also enhanced. Therefore, combining the sampling-point-based space constraint with the linear constraints better ensures the accuracy and stability of stereo matching. In other embodiments the above constraint conditions can be modified appropriately according to the actual requirements.
After the matching spaces of all points are computed, the matching cost of each point is calculated.
For a point L_p in the left image, matching is performed within a specified range Ω of the right image, computing the matching cost between every point in this range and L_p. The range Ω is the search range, i.e. the value range of the disparity; it lies on the same scan line (epipolar line) as L_p, and since the left and right images have been epipolar-rectified so that the epipolar lines are horizontal and parallel, the search range Ω is a horizontal line segment. For each disparity d in Ω, every point w in the first matching space S1 of L_p is matched against the point R_{w+d} in the right image; the matching cost of each point pair is obtained from a mixed cost function, and the final matching cost is the sum C1 of the matching costs of all the pairs. The sum C2 is computed in the same way with the second matching space S2 of L_p.
The matching cost function of each point pair consists of three parts: a census transform (central transform) in grey space, an absolute difference in colour space (denoted AD), and a bidirectional gradient. Each part is computed as follows.
(1) The census transform operates on the greyscale image, so the colour image is first converted to greyscale; GS(p) denotes the grey value of point p in the original image. For every point q other than p in the 7x9 window centred on p, the census value x(p, q) of the pair is computed by comparing GS(q) with GS(p); in the usual census form, x(p, q) = 1 when GS(q) < GS(p) and 0 otherwise …… (2)
The bits x(p, q) are concatenated into a bit string B(p) according to the relative positions of p and q. Computing this for the left and right images yields two corresponding bit strings, whose difference is described by the Hamming distance, giving the cost value:
h(p, d) = Ham(B_L(p), B_R(p − d)) …… (3)
where d is the disparity between the corresponding pixels.
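A sketch of the census part of the cost; the window is the 7-wide, 9-high one named in the text, and the comparison direction (neighbour less than centre) is an assumed convention:

```python
import numpy as np

def census_bits(gray, y, x, win_h=9, win_w=7):
    # Census transform at (y, x): compare every point q in the window
    # against the centre p, producing the bit string B(p).
    half_h, half_w = win_h // 2, win_w // 2
    centre = gray[y, x]
    bits = []
    for dy in range(-half_h, half_h + 1):
        for dx in range(-half_w, half_w + 1):
            if dy == 0 and dx == 0:
                continue  # skip p itself
            bits.append(1 if gray[y + dy, x + dx] < centre else 0)
    return bits

def hamming(b_left, b_right):
    # h(p, d) of formula (3): Hamming distance of the two bit strings
    return sum(a != b for a, b in zip(b_left, b_right))
```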
(2) AD value
The absolute difference is a common measure of the similarity of two points; this embodiment uses the AD value in colour space, and the cost value obtained from it is:
C_AD(p, d) = Σ_{i = R,G,B} |I_i^L(p) − I_i^R(p − d)| …… (4)
where I_i^L(p) is channel i of the RGB colour of point p in the left image, I_i^R(p − d) is channel i of the RGB colour of the point in the right image corresponding to p at disparity d, and the sum represents the distance between the two colours.
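Formula (4) is a per-channel absolute difference summed over R, G, B; a direct sketch (the HxWx3 array layout is an assumption):

```python
import numpy as np

def ad_cost(left, right, y, x, d):
    # C_AD(p, d) of formula (4): sum over the RGB channels of the absolute
    # difference between p in the left image and p - d in the right image.
    pl = left[y, x].astype(float)
    pr = right[y, x - d].astype(float)
    return float(np.abs(pl - pr).sum())
```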
(3) Gradient
A gradient term is chosen as a cost item; this embodiment uses a bidirectional gradient, i.e. the gradients in the horizontal and vertical directions. N_x and N_y denote the derivatives (gradients) in the x and y directions, I_L(p) is the grey value of the point to be computed in the left image, I_R(p − d) is the grey value of its corresponding point in the right image, and d is the disparity between the two points; then
C_GDx(p, d) = ||N_x(I_L(p)) − N_x(I_R(p − d))||
C_GDy(p, d) = ||N_y(I_L(p)) − N_y(I_R(p − d))|| …… (5)
C_GD = C_GDx + C_GDy
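The bidirectional gradient term of formula (5), sketched with forward differences for N_x and N_y (the discretization is an assumption; the text does not fix one):

```python
import numpy as np

def gradient_cost(gray_l, gray_r, y, x, d):
    # C_GD of formula (5): horizontal plus vertical gradient differences
    # between p in the left image and p - d in the right image.
    def n_x(g, yy, xx):  # forward derivative in x
        return float(g[yy, xx + 1]) - float(g[yy, xx])
    def n_y(g, yy, xx):  # forward derivative in y
        return float(g[yy + 1, xx]) - float(g[yy, xx])
    c_x = abs(n_x(gray_l, y, x) - n_x(gray_r, y, x - d))
    c_y = abs(n_y(gray_l, y, x) - n_y(gray_r, y, x - d))
    return c_x + c_y
```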
(4) Mixed cost function
The final cost function is a weighted blend of the above three cost items, as shown in (6), where a, b, g are the weights of the items, expressing the contribution of each item to the final cost value:
C(x, y, d) = a·C_census + b·C_AD(p, d) + g·C_GD …… (6)
where x, y are coordinate values, d is the disparity of point (x, y), and C_census is the h(p, d) value of the corresponding point obtained from formula (3).
Preferably, after the sum C1 of the matching costs of all points in the first matching space and the sum C2 of the matching costs of all points in the second matching space have been computed, the initial disparity is computed from C1 and C2 with a fast belief propagation global algorithm, to improve the accuracy and stability of stereo matching. The concrete computation is as follows.
The confidence B and the energy function E are related by:
B = e^(−E) …… (7)
Maximizing the confidence B is then equivalent to minimizing the energy function E, and the energy function of point p at disparity d_p can be expressed as:
E_p(d_p) = D_p(d_p) + Σ_{r ∈ N(p)} m^T_{r→p}(d_p) …… (8)
where N(p) is the set of the 4 points adjacent to p (above, below, left and right), and m^T_{r→p} is the energy transmitted from point r to point p after T iterations; the local matching cost of point p at disparity d_p is D_p(d_p):
D_p(d_p) = [C1(p, d_p) + C2(p, d_p)] / 2 …… (9)
The energy transmitted from point p to point q after t iterations can be computed as:
m^t_{p→q}(d_q) = min_{d_p ∈ Ω} [ c·(d_p − d_q)^2 + D_p(d_p) + Σ_{s ∈ N(p)\q} m^{t−1}_{s→p}(d_p) ] …… (10)
where N(p)\q is the set of the 4 points adjacent to p with point q removed.
The best disparity d*_p of point p (i.e. its initial disparity) is obtained by minimizing the energy, with the formula:

d*_p = argmin_{d_p∈Ω} b_p(d_p)…………(11)

where b_p(d_p) is the belief energy of point p, i.e. formula (8) after the final iteration, and Ω is the range of disparity values.
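Formulas (8) to (11) can be illustrated with a minimal message-passing loop. The sketch below runs loopy belief propagation on a 1-D chain of pixels instead of the 4-connected grid, with a quadratic smoothness term c·(d_p − d_q)²; both simplifications, and the data costs in the usage example, are assumptions for illustration:

```python
# Simplified belief propagation for formulas (8)-(11), on a 1-D pixel chain.
# D[p][d] is the local matching cost of pixel p at disparity label d (formula (9)).

def bp_disparities(D, c=1.0, iters=10):
    """Return the minimum-energy disparity label per pixel (formula (11))."""
    n, L = len(D), len(D[0])
    # msg[p][0]: message p sends to p-1; msg[p][1]: message p sends to p+1
    msg = [[[0.0] * L, [0.0] * L] for _ in range(n)]

    def incoming(p, exclude, d):
        """Sum of messages arriving at p for label d, excluding sender `exclude`."""
        s = 0.0
        if p - 1 >= 0 and p - 1 != exclude:
            s += msg[p - 1][1][d]
        if p + 1 < n and p + 1 != exclude:
            s += msg[p + 1][0][d]
        return s

    for _ in range(iters):
        new = [[[0.0] * L, [0.0] * L] for _ in range(n)]
        for p in range(n):
            for direction, q in ((0, p - 1), (1, p + 1)):
                if not 0 <= q < n:
                    continue
                # formula (10): minimise smoothness + data + incoming energy
                for dq in range(L):
                    new[p][direction][dq] = min(
                        c * (dp - dq) ** 2 + D[p][dp] + incoming(p, q, dp)
                        for dp in range(L))
        msg = new

    # belief energy b_p(d) = D_p(d) + all incoming messages; take its argmin
    result = []
    for p in range(n):
        b = [D[p][d] + incoming(p, -1, d) for d in range(L)]
        result.append(b.index(min(b)))
    return result
```

With D = [[0, 10], [10, 0]] the data term dominates and the labels [0, 1] are returned; with equal data costs the smoothness term keeps neighbouring labels identical.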
S20: Further screening of reliable points
Many of the computed initial disparities are unreliable, and they would affect the final result. In this embodiment, therefore, the matching between the horizontal (left and right) disparity maps is used to further screen the reliable points, where d_L(p) denotes the disparity of point p in the left image. The screening result match(p) equals 1 when point p is reliable and 0 when point p is unreliable.
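The left/right cross-check can be sketched as follows. The patent does not reproduce its screening formula here, so this implements the standard consistency test, which is our assumption: point p is kept when the right-image disparity at column p − d_L(p) agrees with d_L(p):

```python
# Hedged sketch of left/right disparity-map cross-checking for reliable-point
# screening. The tolerance parameter and the exact test are assumptions.

def screen_reliable(d_left, d_right, tol=0):
    """Return match map: 1 where point p survives the cross-check, else 0."""
    h, w = len(d_left), len(d_left[0])
    match = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            d = d_left[y][x]
            xr = x - d            # corresponding column in the right image
            if 0 <= xr < w and abs(d_right[y][xr] - d) <= tol:
                match[y][x] = 1
    return match
```

Points whose correspondence falls outside the right image, or whose disparities disagree, are marked 0 (unreliable).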
The global disparity estimation also includes a step of partitioning the image into blocks. The image is first divided into a number of minimal fragments (image blocks); preferably, this embodiment divides the image into blocks with a superpixel-based colour partition, after which the blocks are merged according to colour and then according to disparity. A superpixel-based colour partition takes a number of superpixel seed points (usually many) in the image space and then uses spatial and colour information to decide which pixels are closest to each seed point. Each seed point and its closest pixels form one block, so the number of blocks generated equals the number of seed points taken. With enough seed points this partition follows object boundaries well, but the number of blocks produced is then large, which burdens the subsequent computation.
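The seed-based assignment just described can be sketched as follows; the combined distance and its weighting m are assumptions, since the patent does not give the exact measure:

```python
# Sketch of the superpixel-style colour partition: every pixel joins the seed
# point that is closest in a combined colour + space distance. Grey values
# stand in for colour, and the space weight m is an assumed parameter.

def assign_superpixels(pixels, seeds, m=1.0):
    """pixels: 2-D grid of grey values; seeds: list of (y, x) seed positions.
    Returns a label grid mapping each pixel to its nearest seed index."""
    h, w = len(pixels), len(pixels[0])
    labels = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            best, best_d = 0, float("inf")
            for k, (sy, sx) in enumerate(seeds):
                d_colour = (pixels[y][x] - pixels[sy][sx]) ** 2
                d_space = (y - sy) ** 2 + (x - sx) ** 2
                d = d_colour + m * d_space   # combined distance
                if d < best_d:
                    best, best_d = k, d
            labels[y][x] = best
    return labels
```

Each distinct label in the output grid corresponds to one block, so the block count equals the seed count, as stated above.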
S30: Merge the image blocks according to colour: a block whose pixel count is below a preset value is merged with the adjacent block closest to it in colour; and/or, when two adjacent blocks are determined to be close in colour and the sum of their pixel counts is below a preset value, the two blocks are merged.
In this embodiment, for an image block s, let p(s) be its pixel count and r(s) its reliable-point count.
(1) Because the fragments obtained from the partition are minimal, their number is very large, which makes the later processing memory-hungry; blocks with few pixels are therefore merged with their surroundings. When p(s) < k_1 (k_1 a preset value), the block is merged with the block closest to it in colour; any prior-art method may be used to judge colour closeness.
(2) If the colours of two adjacent blocks are close enough they are also merged, to improve the stability of the partition. At the same time, to ensure the merged block does not grow too large, two blocks s_1 and s_2 are merged when p(s_1) + p(s_2) < k_2 (k_2 a preset value).
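The two colour-merge rules can be sketched as follows. The block representation (pixel count plus mean colour) and the colour-closeness tolerance are assumptions; k1 and k2 are the preset values from the text:

```python
# Sketch of the two colour-merge rules of S30. "Closest in colour" is taken
# to mean the smallest absolute mean-colour difference, an assumption.

def color_merge_pairs(blocks, adjacency, k1, k2, color_tol=5):
    """blocks: {id: (pixel_count, mean_colour)}; adjacency: {id: [neighbour ids]}.
    Returns the (block, neighbour) pairs the two rules would merge."""
    merges = []
    for b, (count, colour) in blocks.items():
        neighbours = adjacency.get(b, [])
        if not neighbours:
            continue
        if count < k1:
            # rule 1: a too-small block joins its colour-closest neighbour
            nearest = min(neighbours, key=lambda n: abs(blocks[n][1] - colour))
            merges.append((b, nearest))
            continue
        for n in neighbours:
            n_count, n_colour = blocks[n]
            # rule 2: colour-close neighbours merge while the result stays small
            if b < n and abs(n_colour - colour) < color_tol and count + n_count < k2:
                merges.append((b, n))
    return merges
```

A full implementation would apply the merges with a union-find structure and recompute counts and mean colours; only the decision rules are shown here.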
S40: Merge the image blocks according to disparity: a block whose reliable-point count is below a preset value is merged with the adjacent block closest to it in colour, the reliable points being obtained by screening the initial disparities of the pixels of the original image; and/or, judge whether the disparity change of two adjacent blocks is smooth and, if so, merge the two blocks.
The image blocks serve the last disparity estimation (the computation of the final disparity), and the initial disparity was already computed in the preceding steps. Merging blocks according to disparity therefore helps make the final blocks better suited to disparity estimation, improving accuracy.
(1) After the reliable-point screening above, some blocks contain so few reliable points that merging them according to disparity would hurt accuracy; it is therefore necessary to merge these blocks with others first. In this embodiment, when r(s) < k_3 (k_3 a preset value), the block is merged with the block closest to it in colour. Any prior-art method may be used to find the block closest in colour to the current block, for example by comparing the colour of the current block with the colours of the blocks around it.
(2) Given the nature of disparity estimation, regions of smoothly changing disparity should be grouped into one block. Whether two adjacent blocks are merged can therefore be decided by judging whether the disparity change between them is smooth: if smooth, merge; otherwise, do not merge.
In this embodiment, to judge whether the disparity change of two adjacent image blocks is smooth, first find the boundary-adjacent point pairs P_S(i), P_Sk(i) of the current block S and its adjacent block S_k, where P_S(i) and P_Sk(i) form the i-th adjacent point pair of blocks S and S_k. Then search a rectangular a×b box centred at P_S(i) and compute the average V_S(i) of the disparities of the reliable points in the box that belong to block S; likewise search an a×b box centred at P_Sk(i) and compute the average V_Sk(i) of the disparities of the reliable points in the box that belong to block S_k, where a and b are preset pixel widths. When max|V_S(i) − V_Sk(i)| < j, the disparity change between the current block S and its adjacent block S_k is judged smooth, where i ∈ W_{S,Sk}, W_{S,Sk} is the index set of all boundary point pairs of blocks S and S_k, and j is a preset value.
This can be defined by the following formula:

th[S][S_k] = max_{i∈W_{S,Sk}} |V_S(i) − V_Sk(i)|…………(13)

When th[S][S_k] < j, blocks S and S_k are merged.
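The decision of formula (13) can be sketched as follows. The side averages V_S(i) and V_Sk(i) are supplied directly; gathering them (boundary point pairs, a×b box search, reliable-point averaging) is elided:

```python
# Sketch of the smoothness test of formula (13): merge two blocks when the
# largest side-to-side disparity-average difference stays below threshold j.

def smooth_merge(v_s, v_sk, j):
    """v_s[i], v_sk[i]: average reliable disparity beside the i-th boundary
    point pair, on the block-S and block-S_k sides respectively."""
    th = max(abs(a - b) for a, b in zip(v_s, v_sk))   # th[S][S_k]
    return th < j
```

Using the maximum rather than the mean difference means a single non-smooth boundary pair is enough to veto the merge.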
The image-partition method for global disparity estimation provided by this embodiment uses not only colour information but also disparity information for the partition, which can further improve the accuracy of the final computed disparity.
Because the left and right images are observed from different viewpoints, some parts visible in the left image are absent from the right image, and some parts visible in the right image are absent from the left image; these parts all belong to occlusion areas. Since such regions exist in only one image, the disparities computed for them by the preceding methods are almost all wrong, and these errors would affect the final estimation. The colour partition is therefore used to find the occlusion areas and mark them as unreliable points, improving the final accuracy.
Taking the left image as an example, by analogy with how a person's two eyes observe a scene, the occlusion areas of the left image lie at the right-hand end of the part of each colour block that adjoins other blocks, while the adjoining part at the left end is unoccluded. For the right image, the occlusion areas lie at the left-hand end of the adjoining part of each block, while the adjoining part at the right end is unoccluded.
In this embodiment, after the colour partition and before the final disparity is computed, the occlusion areas in the image are also marked, specifically: in each row, take the first reliable point L(p) from the left end of each block of the left image, and from its disparity d_p compute its corresponding point R(p − d_p) in the right image; in the right image, starting from point R(p − d_p − 1), search leftwards for the first reliable point R_q, take its disparity d_q, and compute the point L(q + d_q) in the left image corresponding to R_q; the points horizontally between L(p) and L(q + d_q) are the occluded points.
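The occlusion-marking walk just described can be sketched for one row of the left image. The representation of unreliable entries as None is an assumption:

```python
# Sketch of the occlusion-marking step: project the first reliable left point
# into the right image, scan left for the first reliable right point, project
# it back, and mark the columns strictly between the two left-image points.

def mark_occluded_row(d_left, d_right, p):
    """d_left, d_right: one row of disparities (None = unreliable);
    p: column of the first reliable point L(p) of a left-image block.
    Returns the left-image columns marked as occluded."""
    dp = d_left[p]
    start = p - dp - 1            # column R(p - d_p - 1) in the right image
    for xr in range(start, -1, -1):
        if d_right[xr] is not None:       # first reliable right point R_q
            dq = d_right[xr]
            back = xr + dq                # L(q + d_q) in the left image
            return list(range(back + 1, p))
    return []                             # no reliable right point found
```

In the test below, L(p) sits at column 4 with disparity 2, the first reliable right point found is at column 0 with disparity 1, so columns 2 and 3 are marked occluded.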
To further improve accuracy, this embodiment also includes a step of median-filtering the existing reliable points on the basis of the colour partition and removing some of them again; that is, when S20 is performed after S30, the further screening of reliable points in S20 can use the information produced in S30. Note that some of the steps in Fig. 1 are not bound to a strict execution order; their order can be decided according to actual need.
Taking the left image as an example, for each reliable point p in the left image, first estimate its gradients along the X and Y axes. The estimation method is: choose, along the X axis, several reliable points in the same colour block as point p, compute the gradient formed between p and each of these points, and take the median as the estimated X-axis gradient derivationX(p) of point p; derivationY(p) is obtained in the same way along the Y direction. Then, for each point p in the left image, take all reliable points q_i in an a×b box around p that belong to the same block as p, and use their disparities d(q_i), X-direction gradients derivationX(q_i) and Y-direction gradients derivationY(q_i) to estimate disparities d(p_i) for point p. The concrete formula is as follows:
d(p_i) = d(q_i) + (x[p] − x[q_i])·derivationX[q_i] + (y[p] − y[q_i])·derivationY[q_i]…………(14)
Sort all the d(p_i), take the median, round it, and check whether it equals d(p); if not, eliminate the point.
S50: Calculate the final disparity.
In this embodiment, taking the left image as an example, for each point p in the left image, take all reliable points q_i in an e×f box around p that belong to the same block as p, where e and f are preset pixel widths, and use their disparities d(q_i) (the initial disparities computed in the preceding steps), X-direction gradients derivationX(q_i) and Y-direction gradients derivationY(q_i) to estimate disparities d(p_i) for point p, with the following formula:
d(p_i) = d(q_i) + (x[p] − x[q_i])·derivationX[q_i] + (y[p] − y[q_i])·derivationY[q_i]
Sort all the d(p_i), take the median, and round it; the resulting value is the final disparity d(p) of point p. In other embodiments, any prior-art method may instead be used to obtain the final disparity.
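The gradient-extrapolation vote of formula (14), shared by the reliable-point median filter and the final-disparity step, can be sketched as follows. Gathering the neighbours q_i inside the box is elided; the votes arrive precomputed:

```python
# Sketch of formula (14): each reliable same-block neighbour q_i of p
# extrapolates a disparity for p from its own disparity and estimated x/y
# gradients, and p takes the rounded median of those extrapolations.

def median_vote(p, neighbours):
    """p: (x, y); neighbours: list of (x, y, d, gx, gy) reliable points,
    where gx, gy are the estimated derivationX and derivationY gradients.
    Returns the rounded median of the extrapolated disparities d(p_i)."""
    px, py = p
    votes = sorted(d + (px - x) * gx + (py - y) * gy
                   for x, y, d, gx, gy in neighbours)
    mid = votes[len(votes) // 2]          # middle element as the median
    return round(mid)
```

In the screening step this value is compared with d(p) and the point is dropped on disagreement; in the final step it is taken directly as d(p).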
Please refer to Fig. 4. Corresponding to the global disparity estimation method provided by this embodiment, this embodiment also provides a global disparity estimation system, comprising an image reading module 1000, an aggregation-space computing module 1001, a matching-cost computing module 1002, an initial-disparity computing module 1003, an image-partition module 1004 and a final-disparity computing module 1005.
The image reading module 1000 reads in the first viewpoint image and the second viewpoint image, the first viewpoint image being an image of the target acquired from a first viewpoint and the second viewpoint image being an image of the target acquired from a second viewpoint.
The aggregation-space computing module 1001, after sampling points are chosen on the first viewpoint image according to a preset rule, selects pixels on the first viewpoint image one by one as the current pixel. Taking the current pixel as origin, it searches pixel by pixel along the positive and negative directions of a first axis until a point that fails the preset constraint condition is reached, and takes all searched points satisfying the constraint condition as first matching points; then, taking each first matching point as origin, it searches pixel by pixel along the positive and negative directions of a second axis until a point that fails the preset constraint condition is reached, and takes all searched points satisfying the constraint condition as second matching points; the first and second matching points form the first aggregation space of the current pixel. The module 1001 likewise takes the current pixel as origin and searches pixel by pixel along the positive and negative directions of the second axis until a point that fails the preset constraint condition is reached, taking all searched points satisfying the constraint condition as third matching points; then, taking each third matching point as origin, it searches pixel by pixel along the positive and negative directions of the first axis until a point that fails the preset constraint condition is reached, taking all searched points satisfying the constraint condition as fourth matching points; the third and fourth matching points form the second aggregation space of the current pixel. The constraint condition comprises a linear constraint and a sampling-point-based spatial constraint: the linear constraint bounds the Euclidean colour distance between the current pixel and the search point, the spatial constraint bounds the Euclidean colour distance between the search point and a sampling point, and the first axis is perpendicular to the second axis.
The matching-cost computing module 1002 computes the sum of the matching costs of all points in the first aggregation space and the sum of the matching costs of all points in the second aggregation space.
The initial-disparity computing module 1003 computes the initial disparity from the sum of the matching costs of all points in the first aggregation space and the sum of the matching costs of all points in the second aggregation space, and screens out the reliable points.
The image-partition module 1004 adopts any of the partition schemes of this embodiment described above to partition the original image into blocks.
The final-disparity computing module 1005 computes, on the basis of the image partition, the final disparity of each pixel in the first viewpoint image.
The global disparity estimation system provided by this embodiment corresponds to the global disparity estimation method above; its working principle is not repeated here.
Please refer to Fig. 5 for the experimental results, on the Middlebury data set, of the global disparity estimation method provided by the embodiments of this application. The results on the Middlebury test platform show that the method (results in row 2) outperforms most current methods. The evaluation indices in Fig. 5 are "non-occluded regions (nonocc)", "all regions (all)" and "discontinuity regions (disc)", with the error threshold set to 1.0, i.e. a point whose disparity differs from the ground truth by more than 1 is counted as erroneous.
The global disparity estimation method and system provided by this application obtain the hybrid cost of each pixel from a robust hybrid cost function and aggregate the single-point costs over the improved aggregation spaces; a fast belief-propagation global algorithm then performs the global cost optimization; finally, an image partition specialized for disparity estimation and the marking of occluded points are applied. The accuracy of the final disparity computation can therefore be greatly improved.
Those skilled in the art will appreciate that all or part of the steps of the various methods in the embodiments above may be completed by instructing the related hardware through a program, which may be stored in a computer-readable storage medium; the storage medium may include read-only memory, random-access memory, magnetic disk or optical disc, etc.
The content above further describes this application in connection with concrete embodiments; the concrete implementation of this application is not to be regarded as confined to these descriptions. For a person of ordinary skill in the technical field of this application, some simple deductions or substitutions may also be made without departing from the concept of this application.

Claims (10)

1. A global disparity estimation system, characterized in that it comprises:
an image reading module for reading in a first viewpoint image and a second viewpoint image, the first viewpoint image being an image of the target acquired from a first viewpoint and the second viewpoint image being an image of the target acquired from a second viewpoint;
an aggregation-space computing module which, after sampling points are chosen on the first viewpoint image according to a preset rule, selects pixels on the first viewpoint image one by one as the current pixel; taking the current pixel as origin, searches pixel by pixel along the positive and negative directions of a first axis until a point that fails the preset constraint condition is reached, and takes all searched points satisfying said constraint condition as first matching points; taking each first matching point as origin, searches pixel by pixel along the positive and negative directions of a second axis until a point that fails the preset constraint condition is reached, and takes all searched points satisfying said constraint condition as second matching points; and takes the first matching points and the second matching points as the first aggregation space of the current pixel;
the module further, taking the current pixel as origin, searches pixel by pixel along the positive and negative directions of the second axis until a point that fails the preset constraint condition is reached, and takes all searched points satisfying said constraint condition as third matching points; taking each third matching point as origin, searches pixel by pixel along the positive and negative directions of the first axis until a point that fails the preset constraint condition is reached, and takes all searched points satisfying said constraint condition as fourth matching points; and takes the third matching points and the fourth matching points as the second aggregation space of the current pixel;
said constraint condition comprising a linear constraint and a sampling-point-based spatial constraint, said linear constraint being a bound on the Euclidean colour distance between the current pixel and the search point, said spatial constraint being a bound on the Euclidean colour distance between the search point and a sampling point, and said first axis being perpendicular to the second axis;
a matching-cost computing module for computing the sum of the matching costs of all points in the first aggregation space and the sum of the matching costs of all points in the second aggregation space;
an initial-disparity computing module for computing the initial disparity from the sum of the matching costs of all points in the first aggregation space and the sum of the matching costs of all points in the second aggregation space, and screening out reliable points;
an image-partition module for partitioning the first viewpoint image and the second viewpoint image into blocks;
a final-disparity computing module for computing, on the basis of said image partition and from the initial disparities of said reliable points, the final disparity of each pixel in the first viewpoint image and the second viewpoint image.
2. The system as claimed in claim 1, characterized in that said constraint condition is:

O_lab(p, q) < k_1  (l_1 < w_1)
O_lab(p, q) < k_2  (w_1 ≤ l_1 ≤ w_2)
O_lab(p, q) < O_lab(q, e_i)  (k_3·l_1 < l_2 < k_4·l_1)

wherein l_1 is the distance from pixel p to the search point q, pixel p being the current pixel, l_2 is the distance from pixel p to the sampling point e_i, O_lab(p, q) is the Euclidean colour distance between pixel p and search point q, O_lab(q, e_i) is the Euclidean colour distance between search point q and sampling point e_i, and k_1, k_2, k_3, k_4, w_1, w_2 are custom parameters with k_1 > k_2, k_4 > k_3 and w_2 > w_1.
3. The system as claimed in claim 1, characterized in that said preset rule places each sampling point at a preset distance from its four neighbouring sampling points above, below, left and right.
4. The system as claimed in claim 1, characterized in that the image reading module further performs epipolar rectification on the first viewpoint image and the second viewpoint image after reading them in.
5. The system as claimed in claim 1, characterized in that it further comprises an occlusion-area marking module for marking the occlusion areas in the image after said image-partition module has partitioned the original image and before the final-disparity computing module computes the final disparity, specifically: in each row, take the first reliable point L(p) from the left end of each block of the first viewpoint image, and from the disparity d_p of point L(p) compute its corresponding point R(p − d_p) in the second viewpoint image; in the second viewpoint image, starting from point R(p − d_p − 1), search leftwards for the first reliable point R_q, take its disparity d_q, and compute the point L(q + d_q) in the first viewpoint image corresponding to R_q; the points horizontally between L(p) and L(q + d_q) are the occluded points.
6. The system as claimed in claim 1, characterized in that the initial-disparity computing module computes the initial disparity with a fast belief-propagation global algorithm from the sum of the matching costs of all points in the first aggregation space and the sum of the matching costs of all points in the second aggregation space.
7. The system as claimed in any one of claims 1-6, characterized in that, when the image-partition module partitions the first viewpoint image and the second viewpoint image:
the image-partition module divides the first viewpoint image and the second viewpoint image into a number of image blocks;
merges the image blocks according to colour: a block whose pixel count is below a preset value is merged with the adjacent block closest to it in colour; and/or, when two adjacent blocks are determined to be close in colour and the sum of their pixel counts is below a preset value, the two blocks are merged;
merges the image blocks according to disparity: a block whose reliable-point count is below a preset value is merged with the adjacent block closest to it in colour, said reliable points being obtained by screening the initial disparities of the pixels of the original image; and/or, judges whether the disparity change of two adjacent blocks is smooth and, if so, merges the two blocks.
8. The system as claimed in claim 7, characterized in that, when the image-partition module divides the first viewpoint image and the second viewpoint image into image blocks, it divides the images into blocks with a superpixel-based colour partition.
9. The system as claimed in claim 7, characterized in that, when the image-partition module judges whether the disparity change of two adjacent image blocks is smooth:
the image-partition module finds the boundary-adjacent point pairs P_S(i), P_Sk(i) of the current block S and its adjacent block S_k, P_S(i) and P_Sk(i) being the i-th adjacent point pair of blocks S and S_k;
searches a rectangular a×b box centred at P_S(i) and computes the average V_S(i) of the disparities of the reliable points in the box that belong to block S, and searches an a×b box centred at P_Sk(i) and computes the average V_Sk(i) of the disparities of the reliable points in the box that belong to block S_k, wherein a and b are preset pixel widths;
when max|V_S(i) − V_Sk(i)| < j, judges that the disparity change between the current block S and its adjacent block S_k is smooth, wherein i ∈ W_{S,Sk}, W_{S,Sk} is the index set of all boundary point pairs of blocks S and S_k, and j is a preset value.
10. A global disparity estimation method, characterized in that it comprises:
reading in a first viewpoint image and a second viewpoint image, the first viewpoint image being an image of the target acquired from a first viewpoint and the second viewpoint image being an image of the target acquired from a second viewpoint;
choosing sampling points on the first viewpoint image according to a preset rule;
selecting pixels on the first viewpoint image one by one as the current pixel; taking the current pixel as origin, searching pixel by pixel along the positive and negative directions of a first axis until a point that fails the preset constraint condition is reached, and taking all searched points satisfying said constraint condition as first matching points; taking each first matching point as origin, searching pixel by pixel along the positive and negative directions of a second axis until a point that fails the preset constraint condition is reached, and taking all searched points satisfying said constraint condition as second matching points; taking the first matching points and the second matching points as the first aggregation space of the current pixel;
taking the current pixel as origin, searching pixel by pixel along the positive and negative directions of the second axis until a point that fails the preset constraint condition is reached, and taking all searched points satisfying said constraint condition as third matching points; taking each third matching point as origin, searching pixel by pixel along the positive and negative directions of the first axis until a point that fails the preset constraint condition is reached, and taking all searched points satisfying said constraint condition as fourth matching points; taking the third matching points and the fourth matching points as the second aggregation space of the current pixel;
said constraint condition comprising a linear constraint and a sampling-point-based spatial constraint, said linear constraint being a bound on the Euclidean colour distance between the current pixel and the search point, said spatial constraint being a bound on the Euclidean colour distance between the search point and a sampling point, and said first axis being perpendicular to the second axis;
computing the sum of the matching costs of all points in the first aggregation space, and computing the sum of the matching costs of all points in the second aggregation space;
computing the initial disparity from the sum of the matching costs of all points in the first aggregation space and the sum of the matching costs of all points in the second aggregation space, and screening out reliable points;
partitioning the first viewpoint image and the second viewpoint image into image blocks;
computing, on the basis of said image partition and from the initial disparities of said reliable points, the final disparity of each pixel in the first viewpoint image and the second viewpoint image.
CN201410604055.3A 2014-10-30 2014-10-30 Global parallax estimation method and system Active CN104408710B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410604055.3A CN104408710B (en) 2014-10-30 2014-10-30 Global parallax estimation method and system

Publications (2)

Publication Number Publication Date
CN104408710A true CN104408710A (en) 2015-03-11
CN104408710B CN104408710B (en) 2017-05-24

Family

ID=52646339

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410604055.3A Active CN104408710B (en) 2014-10-30 2014-10-30 Global parallax estimation method and system

Country Status (1)

Country Link
CN (1) CN104408710B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016065578A1 (en) * 2014-10-30 2016-05-06 北京大学深圳研究生院 Global disparity estimation method and system
CN109791697A (en) * 2016-09-12 2019-05-21 奈安蒂克公司 Using statistical model from image data predetermined depth
CN110223338A (en) * 2019-06-11 2019-09-10 中科创达(重庆)汽车科技有限公司 Depth information calculation method, device and electronic equipment based on image zooming-out

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070031037A1 (en) * 2005-08-02 2007-02-08 Microsoft Corporation Stereo image segmentation
CN101976455A (en) * 2010-10-08 2011-02-16 东南大学 Color image three-dimensional reconstruction method based on three-dimensional matching
CN102999913A (en) * 2012-11-29 2013-03-27 清华大学深圳研究生院 Local three-dimensional matching method based on credible point spreading
CN103996202A (en) * 2014-06-11 2014-08-20 北京航空航天大学 Stereo matching method based on hybrid matching cost and adaptive window

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
DANIEL SCHARSTEIN et al.: "A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms", International Journal of Computer Vision *
KE ZHANG et al.: "Cross-Based Local Stereo Matching Using Orthogonal Integral Images", IEEE Transactions on Circuits and Systems for Video Technology *
PEDRO F. FELZENSZWALB et al.: "Efficient Belief Propagation for Early Vision", International Journal of Computer Vision *
RADHAKRISHNA ACHANTA et al.: "SLIC Superpixels Compared to State-of-the-Art Superpixel Methods", IEEE Transactions on Pattern Analysis and Machine Intelligence *
XING MEI et al.: "On Building an Accurate Stereo Matching System on Graphics Hardware", IEEE International Conference on Computer Vision Workshops *
ZHANG Jinglei et al.: "Stereo matching algorithm based on image region segmentation and belief propagation", Computer Engineering *

Similar Documents

Publication Publication Date Title
Strasdat et al. Double window optimisation for constant time visual SLAM
CN104331890A (en) Method and system for estimating global disparity
Majdik et al. Air‐ground matching: Appearance‐based GPS‐denied urban localization of micro aerial vehicles
CN102750711B (en) A kind of binocular video depth map calculating method based on Iamge Segmentation and estimation
Sucar et al. Bayesian scale estimation for monocular slam based on generic object detection for correcting scale drift
EP3274964B1 (en) Automatic connection of images using visual features
KR101869605B1 (en) Three-Dimensional Space Modeling and Data Lightening Method using the Plane Information
CN104680510A (en) RADAR parallax image optimization method and stereo matching parallax image optimization method and system
CN102982334B (en) Sparse disparity acquisition method based on target edge features and gray-level similarity
CN103996202A (en) Stereo matching method based on hybrid matching cost and adaptive window
CN103702098A (en) Depth extraction method for three-viewpoint stereoscopic video constrained in the time-space domain
CN103218799A (en) Method and apparatus for camera tracking
CN103020963B (en) Multi-view stereo matching method based on adaptive watershed graph cuts
CN107492107A (en) Object recognition and reconstruction method based on fusion of planar and spatial information
CN104182968A (en) Method for segmenting fuzzy moving targets by wide-baseline multi-array optical detection system
CN104966290A (en) Adaptive-weight stereo matching method based on SIFT descriptors
CN113989758A (en) Anchor-guided 3D object detection method and device for autonomous driving
CN104408710A (en) Global parallax estimation method and system
Lee et al. Robust uncertainty-aware multiview triangulation
EP2947626B1 (en) Method and apparatus for generating spanning tree, method and apparatus for stereo matching, method and apparatus for up-sampling, and method and apparatus for generating reference pixel
CN114913472B (en) Infrared video pedestrian saliency detection method combining graph learning and probability propagation
McCarthy et al. Surface extraction from iso-disparity contours
CN103942810A (en) Stereo matching method based on improved bidirectional dynamic programming
Lu et al. A geometric convolutional neural network for 3d object detection
WO2016065579A1 (en) Global disparity estimation method and system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant