CN101976455B - Color image three-dimensional reconstruction method based on stereo matching - Google Patents

Color image three-dimensional reconstruction method based on stereo matching

Info

Publication number
CN101976455B
CN101976455B CN2010105039870A CN201010503987A CN101976455B CN 101976455 B CN101976455 B CN 101976455B CN 2010105039870 A CN2010105039870 A CN 2010105039870A CN 201010503987 A CN201010503987 A CN 201010503987A CN 101976455 B CN101976455 B CN 101976455B
Authority
CN
China
Prior art keywords
pixel
image
disparity
arbitrary
initial
Prior art date
Legal status
Expired - Fee Related
Application number
CN2010105039870A
Other languages
Chinese (zh)
Other versions
CN101976455A (en)
Inventor
达飞鹏 (Da Feipeng)
曹云云 (Cao Yunyun)
Current Assignee
Jiangsu Wanjia Textile Co., Ltd.
Original Assignee
Southeast University
Priority date
Filing date
Publication date
Application filed by Southeast University
Priority to CN2010105039870A
Publication of CN101976455A
Application granted
Publication of CN101976455B

Landscapes

  • Image Processing (AREA)

Abstract

The invention relates to a color image three-dimensional reconstruction method based on stereo matching, comprising the following steps: (1) simultaneously capturing one image each from suitable angles with two color cameras; (2) calibrating the intrinsic and extrinsic parameter matrices of the two cameras; (3) performing epipolar rectification and image transformation according to the calibration data; (4) computing a matching cost for each pixel of the two rectified images with an adaptive-weight window algorithm and obtaining an initial disparity map; (5) marking a reliability coefficient for each pixel's initial matching result using a matching-cost confidence test and a left-right consistency check; (6) segmenting the images by color with the Mean-Shift algorithm; (7) performing global optimization with a selective belief propagation algorithm guided by the color segmentation and the pixel reliability classification results to obtain the final disparity map; and (8) computing the three-dimensional coordinates of the scene points from the calibration data and the matching relations, thereby reconstructing the three-dimensional point cloud of the object.

Description

A color image three-dimensional reconstruction method based on stereo matching
Technical field
The invention belongs to the field of binocular stereo vision technology and relates to the stereo-matching-based processing of real-shot color images; it relates more particularly to a method that computes matching costs with an adaptive-weight algorithm and obtains the pixel matching relations with a selective belief propagation algorithm based on image segmentation and pixel reliability classification results, so as to reconstruct the three-dimensional point cloud of an object.
Background art
Binocular stereo vision is a passive three-dimensional measurement technique. It is flexible to implement, undemanding of its environment, and friendly in human-machine interaction, making it a popular approach to three-dimensional reconstruction. Binocular stereo vision imitates the mechanism by which the human eyes perceive three-dimensional scene information: two-dimensional images of a scene are captured from two angles, and a three-dimensional model is reconstructed from the matching relations established between the images. The main stages are camera calibration, image matching, and three-dimensional information recovery. Establishing the pixel correspondences between the two images is the stereo matching process, and it is the core of binocular stereo vision technology.
The main task of stereo matching is to obtain a smooth, faithful dense disparity map. Stereo matching algorithms divide into local algorithms and global algorithms. Local algorithms match using the neighborhood information of a pixel; their computational complexity is low, but the matching accuracy is limited, and errors arise easily, especially in low-texture and disparity-discontinuous regions. Global algorithms add a smoothness cost to the matching-cost computation, turning matching into the global optimization of an energy function; the main representatives are the graph-cuts algorithm, the belief propagation algorithm, and the dynamic programming algorithm. Dynamic programming has the lowest computational complexity and the highest speed but tends to produce streaking artifacts; belief propagation and graph cuts achieve higher matching accuracy and perform better in edge regions and depth-discontinuous regions of the computed disparity map; comparatively, graph cuts is time-consuming and its real-time performance remains to be improved.
Existing three-dimensional reconstruction algorithms based on binocular stereo vision suffer from the following drawbacks:
(1) Constructing a suitable neighborhood window is the key to local algorithms. If the window is too small, it cannot contain enough neighborhood information about the pixel to be matched; if it is too large, the matching-cost computation includes neighborhood information without guiding significance; either case leads to erroneous matches.
(2) Among global algorithms, dynamic programming, whose computational complexity is comparatively low, confines the global energy optimization to one-dimensional scanlines and loses the smoothness constraint in the other directions. Graph cuts is too time-consuming to meet the real-time requirements of reconstruction from real-shot images. Belief propagation propagates beliefs indiscriminately between neighboring pixels, yet between neighboring pixels across a disparity discontinuity the disparity continuity constraint may not hold, with the result that the boundaries of the reconstructed point cloud become blurred.
Because of these shortcomings, existing three-dimensional reconstruction algorithms based on stereo matching cannot deliver satisfactory results in practical applications.
Summary of the invention
The purpose of the invention is to provide a color image three-dimensional reconstruction method based on stereo matching that can reconstruct the three-dimensional point cloud of an image quickly, accurately, and automatically.
The technical scheme adopted by the invention is as follows: first, two real-shot color images are acquired and the cameras are calibrated; epipolar rectification and image transformation are performed according to the calibration data; the matching costs and an initial disparity map are computed by initial matching; the initial matching results are classified by reliability using a matching-cost confidence test and a left-right consistency check; the rectified left image is then segmented by color; global optimization with a selective belief propagation algorithm yields the final disparity; finally, the three-dimensional point cloud is reconstructed from the calibration data and the matching results and displayed.
The method of the invention specifically comprises the following steps:
Step 1: Image acquisition
Two color cameras simultaneously shoot two images of the same scene from two angles that differ only slightly; the image taken by the left camera is the original left image and the image taken by the right camera is the original right image;
Step 2: Camera calibration
Calibrate the two cameras separately and establish the relation between image pixel positions and scene positions, obtaining the intrinsic parameter matrix A_L of the left camera, the intrinsic parameter matrix A_R of the right camera, the extrinsic parameter matrix [R_L t_L] of the left camera, and the extrinsic parameter matrix [R_R t_R] of the right camera;
Step 3: Epipolar rectification of the images
Using the camera intrinsic and extrinsic parameters obtained in Step 2, apply an epipolar rectification method to the captured left and right images to obtain a parallel binocular vision model in which matched pixel pairs share the same vertical coordinate; the rectified left and right images are denoted I_l and I_r;
Step 4: Initial matching
Step 4.1: Determine the candidate disparity range D:
D = (d_min, d_max),
where d_min is the minimum disparity, d_min = 0, and d_max is the maximum disparity, obtained from marked matched pixel pairs between the reference image and the registered image: randomly select ten pixels {pl1, pl2, pl3, ..., pl10} in the reference image and find in the registered image ten estimated matching pixels {pr1, pr2, pr3, ..., pr10} that have the same vertical coordinate and similar color information, giving ten estimated matched pairs {(pl1, pr1), (pl2, pr2), (pl3, pr3), ..., (pl10, pr10)}; for each matched pair, the absolute difference of the horizontal coordinates of its two pixels gives one disparity value, producing {d1, d2, d3, ..., d10}; the maximum disparity is d_max = max{d1, d2, ..., d10} + 5;
Step 4.2: Adaptive-weight window algorithm
Take the rectified left image I_l as the reference image and the rectified right image I_r as the registered image, and compute the matching cost of every pixel of the reference image with the adaptive-weight window method to obtain the initial left disparity map; then take the rectified right image I_r as the reference image and the rectified left image I_l as the registered image, and repeat to obtain the initial right disparity map; the adaptive-weight window method is as follows:
Step 4.2.1: Weight coefficient computation
First denote the reference image I_1 and the registered image I_2; then, using color and spatial information, compute for each pixel of the two images the weight coefficients E_pq of all pixels in its neighborhood window:

$$E_{pq} = e^{-(\alpha \Delta_{pq} + \beta \lVert p - q \rVert_2)},$$

where p is a pixel of the reference or registered image, q is any pixel in the n × n neighborhood window centered at pixel p, n = 35, Δ_pq is the color difference between pixels p and q in RGB space, ‖p - q‖_2 is the Euclidean distance between the two pixels, and α and β are constant coefficients, α = 0.1, β = 0.047;
Step 4.2.2: Matching cost computation
Under the horizontal epipolar constraint, compute for each pixel of the reference image the matching cost C(p_1, d) of every disparity value in the candidate disparity range:

$$C(p_1, d) = \frac{\sum_{(q_1, q_2) \in W_{p_1} \times W_{p_2}} E_{p_1 q_1} \, E_{p_2 q_2} \, S(q_1, q_2)}{\sum_{(q_1, q_2) \in W_{p_1} \times W_{p_2}} E_{p_1 q_1} \, E_{p_2 q_2}},$$

where p_1 is any pixel of the reference image, with coordinates (x_{p_1}, y_{p_1}); d is any disparity value in the candidate disparity range D; pixel p_2 is the candidate matching pixel of p_1 in the registered image at disparity d: when the reference image is the left image, p_2 has coordinates (x_{p_1} - d, y_{p_1}), and when the reference image is the right image, p_2 has coordinates (x_{p_1} + d, y_{p_1}); W_{p_1} and W_{p_2} denote the n × n neighborhood windows centered at pixels p_1 and p_2; q_1 is any neighborhood pixel in window W_{p_1}, with coordinates (x_{q_1}, y_{q_1}); q_2 is the pixel in window W_{p_2} corresponding to q_1: when the reference image is the left image, q_2 has coordinates (x_{q_1} - d, y_{q_1}), and when the reference image is the right image, q_2 has coordinates (x_{q_1} + d, y_{q_1}); E_{p_1 q_1} and E_{p_2 q_2} are the weight coefficients obtained according to Step 4.2.1, and S(q_1, q_2) is the dissimilarity of the corresponding pixel pair (q_1, q_2);
Step 4.2.3: Compute the initial disparity value
For each pixel, compute the disparity value d_0(p_1) that minimizes the matching cost:

$$d_0(p_1) = \arg\min_{d \in D} C(p_1, d),$$

where p_1 is any pixel of the reference image, D is the candidate disparity range with minimum disparity d_min and maximum disparity d_max, and C(p_1, d) is the matching cost computed according to Step 4.2.2; the cost-minimizing disparity d_0(p_1) is the initial matching disparity result of pixel p_1;
Step 4.2.4: Build the initial disparity image
Build the initial disparity image D_0: D_0(i, j) = d_0(p_ij), where i and j are the horizontal and vertical coordinates of a disparity-image pixel, p_ij is the reference-image pixel with coordinates (i, j), and d_0(p_ij) is the initial matching disparity result of p_ij computed in Step 4.2.3;
if the reference image is the left image I_l, assign the initial disparity image D_0 to the initial left disparity map D_l^0; if the reference image is the right image I_r, assign D_0 to the initial right disparity map D_r^0;
Step 5: Pixel reliability marking
Step 5.1: Matching-cost confidence test
Classify all pixels of the left image I_l by the confidence of their matching cost; the higher-confidence set is denoted M_hc and the lower-confidence set M_lc. The matching-cost confidence of any pixel p_l in the left image I_l is r(p_l):

$$r(p_l) = \frac{C_{min2} - C_{min1}}{C_{min1}},$$

where C_min1 is the matching cost of p_l's initial matching disparity result, i.e. the smallest matching-cost value, and C_min2 is p_l's second-smallest matching cost; set a threshold dist: when r(p_l) > dist, the confidence of p_l's matching result is higher and p_l ∈ M_hc, otherwise the confidence is lower and p_l ∈ M_lc, the threshold dist being taken as 0.04;
Step 5.2: Left-right consistency check
For any pixel p_l in the left image, with coordinates (x_{p_l}, y_{p_l}) and initial disparity result d_1 = D_l^0(x_{p_l}, y_{p_l}), the corresponding matched pixel p_r in the right image has coordinates (x_{p_l} - d_1, y_{p_l}); from the initial right disparity image D_r^0 obtained in Step 4, the initial disparity result of pixel p_r is d_2 = D_r^0(x_{p_l} - d_1, y_{p_l}). If d_1 = d_2, pixel p_l passes the left-right consistency check, denoted p_l ∈ M_ac; otherwise pixel p_l fails the left-right consistency check, denoted p_l ∈ M_bc, where M_ac and M_bc are respectively the sets of pixels that pass and fail the left-right consistency check;
Step 5.3: Pixel reliability coefficient marking
According to the results of Steps 5.1 and 5.2, mark each pixel of the left image with a reliability coefficient Con(p_l):

$$\mathrm{Con}(p_l) = \begin{cases} 4, & \text{if } p_l \in M_{hc} \cap M_{ac} \\ 3, & \text{if } p_l \in M_{lc} \cap M_{ac} \\ 2, & \text{if } p_l \in M_{hc} \cap M_{bc} \\ 1, & \text{if } p_l \in M_{lc} \cap M_{bc} \end{cases}$$

where p_l is any pixel of the left image and Con(p_l) is the reliability coefficient of p_l;
Step 6: Image segmentation
Segment the left image with the Mean-Shift algorithm and mark each pixel with its segment S(p_l), where p_l is any pixel of the left image and S(p_l) is the label of the region pixel p_l belongs to;
Step 7: Global optimization
Step 7.1: Pixel smoothness cost computation
For each pixel of the left image and each of its four (up, down, left, right) neighborhood pixels, compute the smoothness cost J(p_l, q_l, d_p, d_q) with respect to all disparity values in the candidate disparity range D:

J(p_l, q_l, d_p, d_q) = min{ |d_p - d_q|, |d_max - d_min| / 8 },

where p_l is any pixel of the left image, q_l is any of its four neighborhood pixels, d_p and d_q are any disparities of pixels p_l and q_l in the disparity range D, and d_max and d_min are the maximum and minimum disparities;
Step 7.2: Compute the belief messages of the pixel nodes
Compute the belief messages iteratively; the iteration counter t has initial value 0 and iteration stops when t = 50. Each iteration proceeds as follows: in iteration t, every pixel node of the left image computes, for each disparity value d in the range D, the belief message $M^t_{p_l q_l}(d)$ that the pixel will propagate to each four-neighborhood pixel in the next iteration:

$$M^t_{p_l q_l}(d) = \min_{d_x \in D} \Big( C(p_l, d_x) + \sum_{q_s \in N_1(p_l) \setminus q_l} M^{t-1}_{q_s p_l}(d_x) + J(p_l, q_l, d, d_x) \Big),$$

where p_l is any pixel of the left image, q_l is any of its four neighborhood pixels, D is the disparity range defined in Step 4.1, d is any disparity value in D, C(p_l, d_x) is the matching cost computed in Step 4.2.2, d_x is any disparity value in the disparity range D, J(p_l, q_l, d, d_x) is the smoothness cost obtained in Step 7.1, $M^{t-1}_{q_s p_l}(d_x)$ is the belief message propagated from pixel q_s to p_l for disparity d_x obtained in iteration t - 1 (taken as 0 when t = 1), and q_s is any pixel, other than pixel q_l, in the selective neighborhood N_1(p_l), defined as:

N_1(p_l) = { q_f | q_f ∈ N(p_l), Con(q_f) ≥ Con(p_l) and S(q_f) = S(p_l) },

where N(p_l) is the four-neighborhood of pixel p_l, Con(q_f) and Con(p_l) are the reliability coefficients marked in Step 5.3, and S(q_f) and S(p_l) are the segment labels of pixels q_f and p_l obtained in Step 6;
Step 7.3: For each pixel of the left image, compute the belief b(p_l, d) of every possible disparity:

$$b(p_l, d) = C(p_l, d) + \sum_{p_s \in N_1(p_l)} M^{50}_{p_s p_l}(d),$$

where p_l is any pixel of the left image, d is any disparity value in D, C(p_l, d) is the matching cost obtained in Step 4.2.2, $M^{50}_{p_s p_l}(d)$ is the belief message propagated from pixel p_s to p_l for disparity d obtained in the 50th iteration, p_s is any pixel in N_1(p_l), and N_1(p_l) is the selective neighborhood of p_l defined in Step 7.2;
Step 7.4: Compute the disparity image
From the beliefs, compute the optimal disparity value d(p_l) of each pixel:

$$d(p_l) = \arg\min_{d \in D} b(p_l, d),$$

where p_l is any pixel of the left image, b(p_l, d) is the belief computed in Step 7.3, D is the disparity range, and d is any disparity value in the range D;
from the optimal disparities of all pixels of the left image, build the final disparity image D_out: D_out(x, y) = d(p_xy), where x and y are the horizontal and vertical coordinates of a pixel of the disparity image D_out, p_xy is the reference-image pixel with coordinates (x, y), and d(p_xy) is the optimal disparity value of p_xy;
Step 8: Reconstruct the three-dimensional information of the object
From the camera intrinsic and extrinsic parameter matrices A_L, A_R, [R_L t_L], [R_R t_R] obtained in Step 2 and the disparity map D_out obtained in Step 7, compute the three-dimensional point cloud model of the whole object by the space intersection method.
Beneficial effects: compared with the prior art, the invention has the following advantages. The adaptive-weight window algorithm computes the weight of each neighborhood pixel with respect to the pixel to be matched from its spatial and color information, avoiding the difficult adaptive-window construction inherent to local algorithms. The traditional belief propagation algorithm propagates belief messages indiscriminately between all neighboring pixels; because the disparity continuity constraint does not hold between some neighboring pixels, and the initial matching results of some pixels carry no guiding significance, the traditional algorithm contains unreasonable propagation paths, which limits matching accuracy and slows optimization. The invention uses the color segmentation and the pixel reliability classification results to guide the range and direction of belief-message propagation; this selective belief propagation algorithm cuts off the unreasonable paths of the traditional algorithm, so that the paths of the global energy optimization are themselves optimized, the computational complexity is reduced, and the optimization is more targeted, while the matching results of low-reliability pixels are corrected continually during the iterative optimization, finally yielding a disparity map of higher matching accuracy. The invention fully combines the advantages of local and global optimization algorithms, overcoming the contradiction between reconstruction accuracy and reconstruction speed in existing three-dimensional reconstruction techniques and improving the automation of the reconstruction process.
Description of drawings
Fig. 1 is the overall flowchart of the invention.
Fig. 2 is the flowchart of the adaptive-weight window matching algorithm used in the initial matching of Step 4.
Fig. 3 is the flowchart of the pixel reliability marking algorithm of Step 5.
Fig. 4 is the flowchart of the selective belief propagation algorithm used in Step 7 of the invention.
Fig. 5 is the system model and principle schematic.
Fig. 6 is the epipolar rectification schematic.
Fig. 7 is the adaptive-weight window schematic.
Fig. 8 is the schematic of the corresponding-pixel dissimilarity computation.
Fig. 9 is the pixel reliability classification schematic.
Fig. 10 is the schematic of the propagation paths of the traditional belief propagation algorithm.
Fig. 11 is the schematic of the belief propagation paths based on the pixel reliability classification.
Fig. 12 is the schematic of the belief propagation paths based on the image segmentation result.
Fig. 13 is the schematic of computing the spatial three-dimensional coordinates of scene points from the matching relations and the calibration data.
Embodiment
Specific embodiments of the invention are described below in more detail with reference to the drawings. The programming tools are Visual C++ 6.0 and the OpenCV image-processing library; two color images containing many disparity-discontinuous and low-texture regions were shot in an indoor environment.
Fig. 1 is the overall flowchart of the invention.
Fig. 5 shows the system model and principle of the invention. Two color CCD cameras each capture one color image simultaneously from two different angles. O_L and O_R are the optical centers of the two cameras, I_L and I_R their imaging planes, P a spatial point on the object to be reconstructed, and P_L, P_R the imaging points of P on the two imaging planes. The imaging points of the same spatial point on the different imaging planes form a pair of matched points. Either image may be taken as the reference image, the other being the registered image; the process of searching the registered image for the point matching each pixel of the reference image is called stereo matching. Once the pixel matching relations are obtained, an inverse computation based on the system model and on the camera intrinsic and extrinsic parameters obtained by calibration yields the spatial three-dimensional coordinates of the corresponding scene points, thereby achieving the three-dimensional reconstruction of the image.
Fig. 6 is the epipolar rectification schematic. For a pixel p_l in the left image, the search for its matching pixel p_r need only proceed along the epipolar line corresponding to p_l in the right image; in the parallel stereo vision model all epipolar lines are parallel to the line O_l O_r joining the optical centers, so the stereo pair has only a horizontal displacement, which further reduces the search difficulty: corresponding points need only be searched along the same image row. In reality this standard model is hard to satisfy because the imaging planes do not lie in one plane; epipolar rectification rotates the imaging planes to obtain two virtual parallel imaging planes. The initial projection matrices are rotated around the optical centers until the two focal planes are coplanar, with the baseline contained in the focal planes, producing two new projection matrices. The epipoles then lie at infinity, so the epipolar lines are parallel. For the epipolar lines to be horizontal as well, the baseline must be parallel to the new X axis of both cameras. Furthermore, to obtain a correct rectification, conjugate points must have the same vertical coordinate, which is achieved by giving the new camera configurations identical intrinsic parameters.
The method of the invention specifically comprises the following steps:
Step 1: Image acquisition
Two color cameras simultaneously shoot two images of the same scene from two angles that differ only slightly; the image taken by the left camera is the original left image and the image taken by the right camera is the original right image;
Step 2: Camera calibration
Calibrate the two cameras separately and establish the relation between image pixel positions and scene positions, obtaining the intrinsic parameter matrix A_L of the left camera, the intrinsic parameter matrix A_R of the right camera, the extrinsic parameter matrix [R_L t_L] of the left camera, and the extrinsic parameter matrix [R_R t_R] of the right camera;
Camera calibration technology is now fairly mature. The reference "A Flexible New Technique for Camera Calibration" (Zhang Z Y, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(11): 1330-1334) proposed a calibration algorithm known as the planar-template method, and this method is adopted in the invention to calibrate the two cameras separately;
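For illustration only: the planar-template method is implemented in OpenCV, and the following minimal sketch recovers one camera's intrinsic matrix A and the per-view extrinsic [R t]. Python is used here for brevity (the embodiment itself, as noted above, was programmed in Visual C++ 6.0 with OpenCV), and the 9 × 6 inner-corner board and file pattern are assumptions.

```python
# Sketch: Zhang's planar-template calibration for one camera via OpenCV.
import glob
import cv2
import numpy as np

pattern = (9, 6)                       # inner corners of the planar template
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for fname in sorted(glob.glob('calib_left_*.png')):   # hypothetical files
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# A is the intrinsic matrix; each rvec/tvec pair gives one view's [R t].
rms, A, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
R, _ = cv2.Rodrigues(rvecs[0])         # rotation matrix of the first view
```

Running the same procedure on the right camera's images yields A_R and [R_R t_R].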
Step 3: Epipolar rectification of the images
Using the camera intrinsic and extrinsic parameters obtained in Step 2, apply an epipolar rectification method to the captured left and right images to obtain a parallel binocular vision model in which matched pixel pairs share the same vertical coordinate; the rectified left and right images are denoted I_l and I_r;
The epipolar rectification method proposed in the reference "A compact algorithm for rectification of stereo pairs" (Fusiello A, Trucco E, Verri A. Machine Vision and Applications, 2000, 12(1): 16-22) is adopted to rectify the captured left and right images, as shown in Fig. 6. When a transformed pixel coordinate corresponds to a non-integer coordinate in the original image, bilinear interpolation is performed. The result is a parallel binocular vision model: the rectified images are undistorted, the error between the vertical coordinates of a matched pixel pair is less than one pixel, and the spatial complexity of matching is greatly reduced.
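A minimal sketch of the same rectify-then-resample step, assuming the calibration results of Step 2 and substituting OpenCV's built-in stereoRectify for the cited Fusiello-Trucco-Verri algorithm; cv2.INTER_LINEAR performs the bilinear interpolation mentioned above.

```python
# Sketch: epipolar rectification to a parallel binocular model.
# A_L, A_R are intrinsics, dist_L/dist_R distortion vectors, and (R, T)
# the pose of the right camera relative to the left, all from Step 2.
import cv2

def rectify_pair(img_l, img_r, A_L, dist_L, A_R, dist_R, R, T):
    h, w = img_l.shape[:2]
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(
        A_L, dist_L, A_R, dist_R, (w, h), R, T, alpha=0)
    mlx, mly = cv2.initUndistortRectifyMap(A_L, dist_L, R1, P1, (w, h), cv2.CV_32FC1)
    mrx, mry = cv2.initUndistortRectifyMap(A_R, dist_R, R2, P2, (w, h), cv2.CV_32FC1)
    # bilinear interpolation handles the non-integer source coordinates
    I_l = cv2.remap(img_l, mlx, mly, cv2.INTER_LINEAR)
    I_r = cv2.remap(img_r, mrx, mry, cv2.INTER_LINEAR)
    return I_l, I_r, P1, P2            # P1, P2 are reusable in Step 8
```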
Step 4: Initial matching
Step 4.1: Determine the candidate disparity range D:
D = (d_min, d_max),
where d_min is the minimum disparity, d_min = 0, and d_max is the maximum disparity, obtained from marked matched pixel pairs between the reference image and the registered image: randomly select ten pixels {pl1, pl2, pl3, ..., pl10} in the reference image and find in the registered image ten estimated matching pixels {pr1, pr2, pr3, ..., pr10} that have the same vertical coordinate and similar color information, giving ten estimated matched pairs {(pl1, pr1), (pl2, pr2), (pl3, pr3), ..., (pl10, pr10)}; for each matched pair, the absolute difference of the horizontal coordinates of its two pixels gives one disparity value, producing {d1, d2, d3, ..., d10}; the maximum disparity is d_max = max{d1, d2, ..., d10} + 5;
Step 4.2: Adaptive-weight window algorithm
Take the rectified left image I_l as the reference image and the rectified right image I_r as the registered image, and compute the matching cost of every pixel of the reference image with the adaptive-weight window method to obtain the initial left disparity map; then take the rectified right image I_r as the reference image and the rectified left image I_l as the registered image, and repeat to obtain the initial right disparity map; the adaptive-weight window method is as follows:
Step 4.2.1: Weight coefficient computation
First denote the reference image I_1 and the registered image I_2; then, using color and spatial information, compute for each pixel of the two images the weight coefficients E_pq of all pixels in its neighborhood window:

$$E_{pq} = e^{-(\alpha \Delta_{pq} + \beta \lVert p - q \rVert_2)},$$

where p is a pixel of the reference or registered image, q is any pixel in the n × n neighborhood window centered at pixel p, n = 35, and Δ_pq is the color difference between pixels p and q in RGB space,

$$\Delta_{pq} = \sum_{c \in \{r, g, b\}} \lvert I_c(p) - I_c(q) \rvert,$$

where c denotes the r, g or b channel of the image and I_c(p) and I_c(q) are the color components of pixels p and q in channel c; ‖p - q‖_2 is the Euclidean distance between the two pixels, and α and β are constant coefficients, α = 0.1, β = 0.047;
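A minimal sketch of the weight computation of Step 4.2.1, assuming the image is stored as an H × W × 3 array; the sum-of-absolute-differences form of Δ_pq is itself a reconstruction (the source renders that formula only as an image), and clamping the window at the image border is omitted.

```python
# Sketch: support weights E_pq of the n x n window centered at pixel p.
import numpy as np

def window_weights(img, py, px, n=35, alpha=0.1, beta=0.047):
    r = n // 2
    patch = img[py - r:py + r + 1, px - r:px + r + 1].astype(np.float64)
    # color difference Delta_pq: sum of absolute RGB differences (assumed form)
    delta = np.abs(patch - img[py, px].astype(np.float64)).sum(axis=2)
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    dist = np.sqrt(ys ** 2 + xs ** 2)        # Euclidean distance ||p - q||_2
    return np.exp(-(alpha * delta + beta * dist))
```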
Step 4.2.2: Matching cost computation
Under the horizontal epipolar constraint, compute for each pixel of the reference image the matching cost C(p_1, d) of every disparity value in the candidate disparity range:

$$C(p_1, d) = \frac{\sum_{(q_1, q_2) \in W_{p_1} \times W_{p_2}} E_{p_1 q_1} \, E_{p_2 q_2} \, S(q_1, q_2)}{\sum_{(q_1, q_2) \in W_{p_1} \times W_{p_2}} E_{p_1 q_1} \, E_{p_2 q_2}},$$

where p_1 is any pixel of the reference image, with coordinates (x_{p_1}, y_{p_1}); d is any disparity value in the candidate disparity range D; pixel p_2 is the candidate matching pixel of p_1 in the registered image at disparity d: when the reference image is the left image, p_2 has coordinates (x_{p_1} - d, y_{p_1}), and when the reference image is the right image, p_2 has coordinates (x_{p_1} + d, y_{p_1}); W_{p_1} and W_{p_2} denote the n × n neighborhood windows centered at pixels p_1 and p_2; q_1 is any neighborhood pixel in window W_{p_1}, with coordinates (x_{q_1}, y_{q_1}); q_2 is the pixel in window W_{p_2} corresponding to q_1: when the reference image is the left image, q_2 has coordinates (x_{q_1} - d, y_{q_1}), and when the reference image is the right image, q_2 has coordinates (x_{q_1} + d, y_{q_1}); E_{p_1 q_1} and E_{p_2 q_2} are the weight coefficients obtained according to Step 4.2.1, and S(q_1, q_2) is the dissimilarity of the corresponding pixel pair (q_1, q_2);
As shown in Fig. 8, the dissimilarity is computed as follows: q_2l is the left neighbor of pixel q_2, with coordinates (x_{q_2} - 1, y_{q_2}), and q_2r is its right neighbor, with coordinates (x_{q_2} + 1, y_{q_2}); I_2(q_2), I_2(q_2l) and I_2(q_2r) are respectively the means of the RGB three-channel components of pixels q_2, q_2l and q_2r in the registered image I_2; then define

$$I^- = \tfrac{1}{2}\big(I_2(q_{2l}) + I_2(q_2)\big), \qquad I^+ = \tfrac{1}{2}\big(I_2(q_2) + I_2(q_{2r})\big),$$
$$I_{max} = \max\{I^-, I^+, I_2(q_2)\}, \qquad I_{min} = \min\{I^-, I^+, I_2(q_2)\};$$

the dissimilarity of pixels q_1 and q_2 is then:

S = max{ 0, I_1(q_1) - I_max, I_min - I_1(q_1) },

where I_1(q_1) is the mean of the RGB three-channel components of pixel q_1 in the reference image I_1;
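A minimal sketch of the cost of Step 4.2.2 for one pixel and one disparity, reusing window_weights from the sketch above; gray1/gray2 hold the per-pixel RGB means of I_1 and I_2, sign is -1 for a left reference image and +1 for a right one, and the half-pixel interpolation follows the Birchfield-Tomasi-style reconstruction just given, which is an assumption where the source shows only formula images.

```python
# Sketch: adaptive-weight matching cost C(p1, d) for one pixel/disparity.
import numpy as np

def bt_dissimilarity(gray1, gray2, qy, q1x, q2x):
    # S(q1, q2) with half-pixel interpolation around q2 (reconstructed form)
    i_q2 = gray2[qy, q2x]
    i_minus = 0.5 * (gray2[qy, q2x - 1] + i_q2)
    i_plus = 0.5 * (i_q2 + gray2[qy, q2x + 1])
    i_max = max(i_minus, i_plus, i_q2)
    i_min = min(i_minus, i_plus, i_q2)
    i_q1 = gray1[qy, q1x]
    return max(0.0, i_q1 - i_max, i_min - i_q1)

def matching_cost(img1, img2, gray1, gray2, py, px, d, sign=-1, n=35):
    r = n // 2
    E1 = window_weights(img1, py, px, n)                 # weights around p1
    E2 = window_weights(img2, py, px + sign * d, n)      # weights around p2
    num = den = 0.0
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            w = E1[dy + r, dx + r] * E2[dy + r, dx + r]  # E_p1q1 * E_p2q2
            s = bt_dissimilarity(gray1, gray2, py + dy,
                                 px + dx, px + dx + sign * d)
            num += w * s
            den += w
    return num / den                                     # C(p1, d)
```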
Step 4.2.3: Compute the initial disparity value
For each pixel, compute the disparity value d_0(p_1) that minimizes the matching cost:

$$d_0(p_1) = \arg\min_{d \in D} C(p_1, d),$$

where p_1 is any pixel of the reference image, D is the candidate disparity range with minimum disparity d_min and maximum disparity d_max, and C(p_1, d) is the matching cost computed according to Step 4.2.2; the cost-minimizing disparity d_0(p_1) is the initial matching disparity result of pixel p_1;
Step 4.2.4: Build the initial disparity image
Build the initial disparity image D_0: D_0(i, j) = d_0(p_ij), where i and j are the horizontal and vertical coordinates of a disparity-image pixel, p_ij is the reference-image pixel with coordinates (i, j), and d_0(p_ij) is the initial matching disparity result of p_ij computed in Step 4.2.3;
if the reference image is the left image I_l, assign the initial disparity image D_0 to the initial left disparity map D_l^0; if the reference image is the right image I_r, assign D_0 to the initial right disparity map D_r^0;
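A minimal winner-take-all sketch of Steps 4.2.3-4.2.4, reusing matching_cost from the previous sketch; it is written for clarity, not speed (the full scan is O(H·W·|D|·n²)), and image borders are simply skipped. Running it with sign=-1 gives the left-reference map D_l^0 and with sign=+1 the right-reference map D_r^0.

```python
# Sketch: winner-take-all initial disparity map.
import numpy as np

def initial_disparity(img1, img2, gray1, gray2, d_min, d_max, sign=-1, n=35):
    h, w = gray1.shape
    r = n // 2
    D0 = np.zeros((h, w), np.int32)
    for y in range(r, h - r):
        for x in range(r + d_max + 1, w - r - d_max - 1):   # skip borders
            costs = [matching_cost(img1, img2, gray1, gray2, y, x, d, sign, n)
                     for d in range(d_min, d_max + 1)]
            D0[y, x] = d_min + int(np.argmin(costs))        # d_0(p_1)
    return D0
```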
Step 5: Pixel reliability marking
Step 5.1: Matching-cost confidence test
Classify all pixels of the left image I_l by the confidence of their matching cost; the higher-confidence set is denoted M_hc and the lower-confidence set M_lc. The matching-cost confidence of any pixel p_l in the left image I_l is r(p_l):

$$r(p_l) = \frac{C_{min2} - C_{min1}}{C_{min1}},$$

where C_min1 is the matching cost of p_l's initial matching disparity result, i.e. the smallest matching-cost value, and C_min2 is p_l's second-smallest matching cost; set a threshold dist: when r(p_l) > dist, the confidence of p_l's matching result is higher and p_l ∈ M_hc, otherwise the confidence is lower and p_l ∈ M_lc, the threshold dist being taken as 0.04;
Step 5.2: Left-right consistency check
For any pixel p_l in the left image, with coordinates (x_{p_l}, y_{p_l}) and initial disparity result d_1 = D_l^0(x_{p_l}, y_{p_l}), the corresponding matched pixel p_r in the right image has coordinates (x_{p_l} - d_1, y_{p_l}); from the initial right disparity image D_r^0 obtained in Step 4, the initial disparity result of pixel p_r is d_2 = D_r^0(x_{p_l} - d_1, y_{p_l}). If d_1 = d_2, pixel p_l passes the left-right consistency check, denoted p_l ∈ M_ac; otherwise pixel p_l fails the left-right consistency check, denoted p_l ∈ M_bc, where M_ac and M_bc are respectively the sets of pixels that pass and fail the left-right consistency check;
Step 5.3: Pixel reliability coefficient marking
According to the results of Steps 5.1 and 5.2, mark each pixel of the left image with a reliability coefficient Con(p_l):

$$\mathrm{Con}(p_l) = \begin{cases} 4, & \text{if } p_l \in M_{hc} \cap M_{ac} \\ 3, & \text{if } p_l \in M_{lc} \cap M_{ac} \\ 2, & \text{if } p_l \in M_{hc} \cap M_{bc} \\ 1, & \text{if } p_l \in M_{lc} \cap M_{bc} \end{cases}$$

where p_l is any pixel of the left image and Con(p_l) is the reliability coefficient of p_l;
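A minimal sketch of the reliability marking of Step 5, assuming the left-reference costs are kept as an H × W × |D| volume; the relative-margin confidence (C_min2 - C_min1)/C_min1 mirrors the reconstruction given in Step 5.1, which is itself an assumption.

```python
# Sketch: reliability coefficients Con from the cost-confidence test
# and the left-right consistency check.
import numpy as np

def reliability(cost_l, D_l0, D_r0, dist=0.04):
    h, w, _ = cost_l.shape
    part = np.partition(cost_l, 1, axis=2)
    c1, c2 = part[:, :, 0], part[:, :, 1]         # C_min1, C_min2
    high_conf = (c2 - c1) / np.maximum(c1, 1e-9) > dist   # M_hc vs M_lc

    xs = np.arange(w)[None, :].repeat(h, axis=0)
    xr = np.clip(xs - D_l0, 0, w - 1)             # matched column in right map
    lr_ok = D_l0 == np.take_along_axis(D_r0, xr, axis=1)  # M_ac vs M_bc

    con = np.where(high_conf & lr_ok, 4,
          np.where(~high_conf & lr_ok, 3,
          np.where(high_conf & ~lr_ok, 2, 1)))
    return con                                    # Con(p_l) per pixel
```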
Step 6: Image segmentation
Segment the left image with the Mean-Shift algorithm and mark each pixel with its segment S(p_l), where p_l is any pixel of the left image and S(p_l) is the label of the region pixel p_l belongs to;
the parameters are set to: spatial bandwidth h_s = 7, color bandwidth h_r = 6.5, minimum region size M = 35;
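A minimal sketch of Step 6. OpenCV exposes only the filtering stage of Mean-Shift, so grouping the filtered colors into labeled regions is approximated here with a small flood-fill labeler over equal filtered colors; merging regions smaller than M = 35 pixels is omitted, and the bandwidths follow the parameters above.

```python
# Sketch: Mean-Shift color segmentation of the left image.
import cv2
import numpy as np

def label_regions(values):
    h, w = values.shape
    labels = np.full((h, w), -1, np.int32)
    cur = 0
    for y in range(h):
        for x in range(w):
            if labels[y, x] >= 0:
                continue
            stack = [(y, x)]
            labels[y, x] = cur
            while stack:
                cy, cx = stack.pop()
                for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                               (cy, cx - 1), (cy, cx + 1)):
                    if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] < 0 \
                            and values[ny, nx] == values[cy, cx]:
                        labels[ny, nx] = cur
                        stack.append((ny, nx))
            cur += 1
    return labels

def segment_colors(I_l, hs=7, hr=6.5):
    shifted = cv2.pyrMeanShiftFiltering(I_l, hs, hr)  # 8-bit 3-channel input
    v = shifted.astype(np.int32)
    packed = (v[:, :, 0] << 16) | (v[:, :, 1] << 8) | v[:, :, 2]
    return label_regions(packed)                      # S(p_l) per pixel
```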
Step 7: Global optimization
The propagation paths of traditional belief propagation are shown in Fig. 10: pixel p_0 receives belief from its four neighborhood pixels; solid arrows denote the current propagation direction and dashed arrows the direction of the previous round of propagation. Suppose pixel p_01 is an unreliable pixel; then, in the propagation process, the message M_{p01 p0} coming from p_01 is also of relatively low reliability, so unreliable information is added into the matching-cost computation of pixel p_0, possibly leading to a wrong matching result. To address this, the invention improves the traditional propagation paths on the basis of the pixel reliability classification, as shown in Fig. 11, where four different marker patterns denote the four pixel reliability levels, from the highest (reliability coefficient 4) down to the lowest (reliability coefficient 1). We stipulate that, when belief is propagated between neighboring pixels, if the reliabilities of the two pixels differ, the propagation direction points from the higher reliability to the lower; if the two reliabilities are equal, the principle of bidirectional propagation is adopted. Matching information thus flows from the reliable nodes, which are close to the true disparity, toward the unreliable nodes, giving the global optimization of this method a selective propagation direction.
Disparity continuity is the premise of belief propagation. As shown in Fig. 10, pixel p_02 and pixel p_0 lie on opposite sides of an object edge and their true disparities differ greatly, so the belief propagated from p_02 carries no guiding significance for pixel p_0. Yet three-dimensional scenes contain many depth-discontinuous regions, in which propagating belief is inappropriate. Regions of disparity jumps are usually accompanied by color changes; based on this fact, the invention uses the color segmentation information to constrain the range of belief propagation and avoids propagating belief across regions where the color jumps. As shown in Fig. 12, s_1 and s_2 denote two different segments; belief is propagated within the same segment, and the propagation path between two pixels belonging to different segments is cut off. This segmentation-constrained scheme effectively reduces belief propagation between neighboring pixels whose disparities differ greatly and improves the matching performance of the BP algorithm in disparity-discontinuous regions.
Step 7.1: Pixel smoothness cost computation
For each pixel of the left image and each of its four (up, down, left, right) neighborhood pixels, compute the smoothness cost J(p_l, q_l, d_p, d_q) with respect to all disparity values in the candidate disparity range D:

J(p_l, q_l, d_p, d_q) = min{ |d_p - d_q|, |d_max - d_min| / 8 },

where p_l is any pixel of the left image, q_l is any of its four neighborhood pixels, d_p and d_q are any disparities of pixels p_l and q_l in the disparity range D, and d_max and d_min are the maximum and minimum disparities;
Step 7.2: Compute the belief messages of the pixel nodes
Compute the belief messages iteratively; the iteration counter t has initial value 0 and iteration stops when t = 50. Each iteration proceeds as follows: in iteration t, every pixel node of the left image computes, for each disparity value d in the range D, the belief message $M^t_{p_l q_l}(d)$ that the pixel will propagate to each four-neighborhood pixel in the next iteration:

$$M^t_{p_l q_l}(d) = \min_{d_x \in D} \Big( C(p_l, d_x) + \sum_{q_s \in N_1(p_l) \setminus q_l} M^{t-1}_{q_s p_l}(d_x) + J(p_l, q_l, d, d_x) \Big),$$

where p_l is any pixel of the left image, q_l is any of its four neighborhood pixels, D is the disparity range defined in Step 4.1, d is any disparity value in D, C(p_l, d_x) is the matching cost computed in Step 4.2.2, d_x is any disparity value in the disparity range D, J(p_l, q_l, d, d_x) is the smoothness cost obtained in Step 7.1, $M^{t-1}_{q_s p_l}(d_x)$ is the belief message propagated from pixel q_s to p_l for disparity d_x obtained in iteration t - 1 (taken as 0 when t = 1), and q_s is any pixel, other than pixel q_l, in the selective neighborhood N_1(p_l), defined as:

N_1(p_l) = { q_f | q_f ∈ N(p_l), Con(q_f) ≥ Con(p_l) and S(q_f) = S(p_l) },

where N(p_l) is the four-neighborhood of pixel p_l, Con(q_f) and Con(p_l) are the reliability coefficients marked in Step 5.3, and S(q_f) and S(p_l) are the segment labels of pixels q_f and p_l obtained in Step 6;
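A minimal sketch of one message update of Step 7.2. The selective_neighbors helper implements the N_1(p_l) rule above; the data term is evaluated at the minimized disparity d_x, the standard min-sum form (the extraction of the source formula leaves its argument ambiguous); and storing messages in a dict keyed by (sender, receiver) is an illustrative choice.

```python
# Sketch: selective belief propagation message update (min-sum form).
import numpy as np

NEIGH = ((-1, 0), (1, 0), (0, -1), (0, 1))        # up, down, left, right

def smooth_cost(d, dx, d_min, d_max):
    # J(p_l, q_l, d, d_x): truncated linear model of Step 7.1
    return min(abs(d - dx), abs(d_max - d_min) / 8.0)

def selective_neighbors(con, seg, y, x):
    # N_1(p_l): four-neighbors of equal/higher reliability in the same segment
    h, w = con.shape
    out = []
    for dy, dx in NEIGH:
        ny, nx = y + dy, x + dx
        if 0 <= ny < h and 0 <= nx < w and \
                con[ny, nx] >= con[y, x] and seg[ny, nx] == seg[y, x]:
            out.append((ny, nx))
    return out

def message(cost, msgs_prev, con, seg, y, x, qy, qx, d_min, d_max):
    # M^t_{p->q}(d) for every d in D
    ndisp = d_max - d_min + 1
    acc = cost[y, x, :].astype(np.float64).copy()  # C(p_l, d_x)
    for sy, sx in selective_neighbors(con, seg, y, x):
        if (sy, sx) != (qy, qx):                   # q_s in N_1(p_l) \ q_l
            acc += msgs_prev.get((sy, sx, y, x), np.zeros(ndisp))
    out = np.empty(ndisp)
    for d in range(ndisp):
        out[d] = min(acc[dx] + smooth_cost(d + d_min, dx + d_min, d_min, d_max)
                     for dx in range(ndisp))
    return out
```

One iteration applies message once per ordered (pixel, four-neighbor) pair, writing the results into a fresh dict; fifty such sweeps reproduce the t = 50 schedule.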
Step 7.3: For each pixel of the left image, compute the belief b(p_l, d) of every possible disparity:

$$b(p_l, d) = C(p_l, d) + \sum_{p_s \in N_1(p_l)} M^{50}_{p_s p_l}(d),$$

where p_l is any pixel of the left image, d is any disparity value in D, C(p_l, d) is the matching cost obtained in Step 4.2.2, $M^{50}_{p_s p_l}(d)$ is the belief message propagated from pixel p_s to p_l for disparity d obtained in the 50th iteration, p_s is any pixel in N_1(p_l), and N_1(p_l) is the selective neighborhood of p_l defined in Step 7.2;
Step 7.4: Compute the disparity image
From the beliefs, compute the optimal disparity value d(p_l) of each pixel:

$$d(p_l) = \arg\min_{d \in D} b(p_l, d),$$

where p_l is any pixel of the left image, b(p_l, d) is the belief computed in Step 7.3, D is the disparity range, and d is any disparity value in the range D;
from the optimal disparities of all pixels of the left image, build the final disparity image D_out: D_out(x, y) = d(p_xy), where x and y are the horizontal and vertical coordinates of a pixel of the disparity image D_out, p_xy is the reference-image pixel with coordinates (x, y), and d(p_xy) is the optimal disparity value of p_xy;
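A minimal sketch of Steps 7.3-7.4, reusing selective_neighbors and the message store from the previous sketch; msgs holds the 50th-iteration messages keyed by (sender, receiver).

```python
# Sketch: beliefs and the final disparity image D_out.
import numpy as np

def final_disparity(cost, msgs, con, seg, d_min):
    h, w, ndisp = cost.shape
    D_out = np.zeros((h, w), np.int32)
    for y in range(h):
        for x in range(w):
            b = cost[y, x, :].astype(np.float64).copy()   # C(p_l, d)
            for sy, sx in selective_neighbors(con, seg, y, x):
                b += msgs.get((sy, sx, y, x), 0.0)        # M^50_{p_s -> p_l}
            D_out[y, x] = d_min + int(np.argmin(b))       # argmin_d b(p_l, d)
    return D_out
```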
Step 8: Reconstruct the three-dimensional information of the object
From the camera intrinsic and extrinsic parameter matrices A_L, A_R, [R_L t_L], [R_R t_R] obtained in Step 2 and the disparity map D_out obtained in Step 7, compute the three-dimensional point cloud model of the whole object by the space intersection method.
Fig. 13 is the schematic of the space intersection method. O_L and O_R are the optical centers of the two cameras, S_L and S_R their imaging planes, and P_L, P_R a pair of matched points in the images shot by the two cameras. The three-dimensional coordinates of a spatial scene point and the pixel coordinates of its imaging point on an imaging plane are related by:

$$s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = A \, [\, R \ \ t \,] \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix},$$

where (u, v) are the pixel coordinates of the imaging point of the spatial scene point on the imaging plane and (X_w, Y_w, Z_w) are the spatial coordinates of the scene point. The equation represents a straight line through the camera optical center, the imaging point, and the spatial scene point.
For any pixel p_l in the left image, with coordinates (x_{p_l}, y_{p_l}), its matched pixel in the right image is p_r, with coordinates (x_{p_l} - D_out(x_{p_l}, y_{p_l}), y_{p_l}), where D_out is the optimal disparity image computed in Step 7.4. Hence, from the coordinates of the matched pair {p_l, p_r}, the two straight lines that project the same scene point onto the pair of matched pixels on the two imaging planes can be computed, and the intersection of the two lines gives the spatial three-dimensional coordinates of the scene point. Because calibration, matching, and computation each carry errors, the two back-projected lines will most likely not intersect exactly; in that case the midpoint of their common perpendicular is taken.
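A minimal sketch of the midpoint construction, assuming the 3 × 4 projection matrices P_l = A_L[R_L t_L] and P_r = A_R[R_R t_R] and a matched pair (u_l, v_l) and (u_r, v_r) = (u_l - D_out(u_l, v_l), v_l).

```python
# Sketch: space intersection by the midpoint of the common perpendicular.
import numpy as np

def back_project(P, u, v):
    # turn pixel (u, v) into the ray c + s*d of its projection line
    M = P[:, :3]
    c = -np.linalg.solve(M, P[:, 3])                 # camera optical center
    d = np.linalg.solve(M, np.array([u, v, 1.0]))    # ray direction
    return c, d / np.linalg.norm(d)

def midpoint_triangulate(P_l, P_r, ul, vl, ur, vr):
    c1, d1 = back_project(P_l, ul, vl)
    c2, d2 = back_project(P_r, ur, vr)
    b = c2 - c1
    d11, d12, d22 = d1 @ d1, d1 @ d2, d2 @ d2
    denom = d11 * d22 - d12 * d12                    # zero only for parallel rays
    s1 = (d22 * (b @ d1) - d12 * (b @ d2)) / denom
    s2 = (d12 * (b @ d1) - d11 * (b @ d2)) / denom
    # closest points on the two rays; their midpoint is the reconstruction
    return 0.5 * ((c1 + s1 * d1) + (c2 + s2 * d2))
```

Applying midpoint_triangulate to every pixel of D_out yields the three-dimensional point cloud of the object.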

Claims (1)

1. A color image three-dimensional reconstruction method based on stereo matching, characterized in that the reconstruction method comprises the following steps in order:
Step 1: Image acquisition
Two color cameras simultaneously shoot two images of the same scene from two angles that differ only slightly; the image taken by the left camera is the original left image and the image taken by the right camera is the original right image;
Step 2: Camera calibration
Calibrate the two cameras separately and establish the relation between image pixel positions and scene positions, obtaining the intrinsic parameter matrix A_L of the left camera, the intrinsic parameter matrix A_R of the right camera, the extrinsic parameter matrix [R_L t_L] of the left camera, and the extrinsic parameter matrix [R_R t_R] of the right camera;
Step 3: Epipolar rectification of the images
Using the camera intrinsic and extrinsic parameters obtained in Step 2, apply an epipolar rectification method to the captured left and right images to obtain a parallel binocular vision model in which matched pixel pairs share the same vertical coordinate; the rectified left and right images are denoted I_l and I_r;
Step 4: Initial matching
Step 4.1: Determine the candidate disparity range D:
D = (d_min, d_max),
where d_min is the minimum disparity, d_min = 0, and d_max is the maximum disparity, obtained from marked matched pixel pairs between the reference image and the registered image: randomly select ten pixels {pl1, pl2, pl3, ..., pl10} in the reference image and find in the registered image ten estimated matching pixels {pr1, pr2, pr3, ..., pr10} that have the same vertical coordinate and similar color information, giving ten estimated matched pairs {(pl1, pr1), (pl2, pr2), (pl3, pr3), ..., (pl10, pr10)}; for each matched pair, the absolute difference of the horizontal coordinates of its two pixels gives one disparity value, producing {d1, d2, d3, ..., d10}; the maximum disparity is d_max = max{d1, d2, ..., d10} + 5;
Step 4.2: Adaptive-weight window algorithm
Take the rectified left image I_l as the reference image and the rectified right image I_r as the registered image, and compute the matching cost of every pixel of the reference image with the adaptive-weight window method to obtain the initial left disparity map; then take the rectified right image I_r as the reference image and the rectified left image I_l as the registered image, and repeat to obtain the initial right disparity map; the adaptive-weight window method is as follows:
Step 4.2.1: Weight coefficient computation
First denote the reference image I_1 and the registered image I_2; then, using color and spatial information, compute for each pixel of the two images the weight coefficients E_pq of all pixels in its neighborhood window:

$$E_{pq} = e^{-(\alpha \Delta_{pq} + \beta \lVert p - q \rVert_2)},$$

where p is a pixel of the reference or registered image, q is any pixel in the n × n neighborhood window centered at pixel p, n = 35, Δ_pq is the color difference between pixels p and q in RGB space, ‖p - q‖_2 is the Euclidean distance between the two pixels, and α and β are constant coefficients, α = 0.1, β = 0.047;
Step 4.2.2: Matching cost computation
Under the horizontal epipolar constraint, compute for each pixel of the reference image the matching cost C(p_1, d) of every disparity value in the candidate disparity range:

$$C(p_1, d) = \frac{\sum_{(q_1, q_2) \in W_{p_1} \times W_{p_2}} E_{p_1 q_1} \, E_{p_2 q_2} \, S(q_1, q_2)}{\sum_{(q_1, q_2) \in W_{p_1} \times W_{p_2}} E_{p_1 q_1} \, E_{p_2 q_2}},$$

where p_1 is any pixel of the reference image, with coordinates (x_{p_1}, y_{p_1}); d is any disparity value in the candidate disparity range D; pixel p_2 is the candidate matching pixel of p_1 in the registered image at disparity d: when the reference image is the left image, p_2 has coordinates (x_{p_1} - d, y_{p_1}), and when the reference image is the right image, p_2 has coordinates (x_{p_1} + d, y_{p_1}); W_{p_1} and W_{p_2} denote the n × n neighborhood windows centered at pixels p_1 and p_2; q_1 is any neighborhood pixel in window W_{p_1}, with coordinates (x_{q_1}, y_{q_1}); q_2 is the pixel in window W_{p_2} corresponding to q_1: when the reference image is the left image, q_2 has coordinates (x_{q_1} - d, y_{q_1}), and when the reference image is the right image, q_2 has coordinates (x_{q_1} + d, y_{q_1}); E_{p_1 q_1} and E_{p_2 q_2} are the weight coefficients obtained according to Step 4.2.1, and S(q_1, q_2) is the dissimilarity of the corresponding pixel pair (q_1, q_2);
Step 4.2.3: Compute the initial disparity value
For each pixel, compute the disparity value d_0(p_1) that minimizes the matching cost:

$$d_0(p_1) = \arg\min_{d \in D} C(p_1, d),$$

where p_1 is any pixel of the reference image, D is the candidate disparity range with minimum disparity d_min and maximum disparity d_max, and C(p_1, d) is the matching cost computed according to Step 4.2.2; the cost-minimizing disparity d_0(p_1) is the initial matching disparity result of pixel p_1;
Step 4.2.4: Build the initial disparity image
Build the initial disparity image D_0: D_0(i, j) = d_0(p_ij), where i and j are the horizontal and vertical coordinates of a disparity-image pixel, p_ij is the reference-image pixel with coordinates (i, j), and d_0(p_ij) is the initial matching disparity result of p_ij computed in Step 4.2.3;
if the reference image is the left image I_l, assign the initial disparity image D_0 to the initial left disparity map D_l^0; if the reference image is the right image I_r, assign the initial disparity image D_0 to the initial right disparity map D_r^0;
Step 5: Pixel reliability marking
Step 5.1: Matching-cost confidence test
Classify all pixels of the left image I_l by the confidence of their matching cost; the higher-confidence set is denoted M_hc and the lower-confidence set M_lc. The matching-cost confidence of any pixel p_l in the left image I_l is r(p_l):

$$r(p_l) = \frac{C_{min2} - C_{min1}}{C_{min1}},$$

where C_min1 is the matching cost of p_l's initial matching disparity result, i.e. the smallest matching-cost value, and C_min2 is p_l's second-smallest matching cost; set a threshold dist: when r(p_l) > dist, the confidence of p_l's matching result is higher and p_l ∈ M_hc, otherwise the confidence is lower and p_l ∈ M_lc, the threshold dist being taken as 0.04;
Step 5.2: Left-right consistency check
For any pixel p_l in the left image, with coordinates (x_{p_l}, y_{p_l}) and initial disparity result d_1 = D_l^0(x_{p_l}, y_{p_l}), the corresponding matched pixel p_r in the right image has coordinates (x_{p_l} - d_1, y_{p_l}); from the initial right disparity image D_r^0 obtained in Step 4, the initial matching disparity result of pixel p_r is d_2 = D_r^0(x_{p_l} - d_1, y_{p_l}). If d_1 = d_2, pixel p_l passes the left-right consistency check, denoted p_l ∈ M_ac; otherwise pixel p_l fails the left-right consistency check, denoted p_l ∈ M_bc, where M_ac and M_bc are respectively the sets of pixels that pass and fail the left-right consistency check;
Step 5.3: Pixel reliability coefficient marking
According to the results of Steps 5.1 and 5.2, mark each pixel of the left image with a reliability coefficient Con(p_l):

$$\mathrm{Con}(p_l) = \begin{cases} 4, & \text{if } p_l \in M_{hc} \cap M_{ac} \\ 3, & \text{if } p_l \in M_{lc} \cap M_{ac} \\ 2, & \text{if } p_l \in M_{hc} \cap M_{bc} \\ 1, & \text{if } p_l \in M_{lc} \cap M_{bc} \end{cases}$$

where p_l is any pixel of the left image and Con(p_l) is the reliability coefficient of p_l;
Step 6: Image segmentation
Segment the left image with the Mean-Shift algorithm and mark each pixel with its segment S(p_l), where p_l is any pixel of the left image and S(p_l) is the label of the region pixel p_l belongs to;
Step 7: Global optimization
Step 7.1: Pixel smoothness cost computation
For each pixel of the left image and each of its four (up, down, left, right) neighborhood pixels, compute the smoothness cost J(p_l, q_l, d_p, d_q) with respect to all disparity values in the candidate disparity range D:

J(p_l, q_l, d_p, d_q) = min{ |d_p - d_q|, |d_max - d_min| / 8 },

where p_l is any pixel of the left image, q_l is any of its four neighborhood pixels, d_p and d_q are any disparities of pixels p_l and q_l in the disparity range D, and d_max and d_min are the maximum and minimum disparities;
Step 7.2: Compute the belief messages of the pixel nodes
Compute the belief messages iteratively; the iteration counter t has initial value 0 and iteration stops when t = 50. Each iteration proceeds as follows: in iteration t, every pixel node of the left image computes, for each disparity value d in the candidate disparity range D, the belief message $M^t_{p_l q_l}(d)$ that the pixel will propagate to each four-neighborhood pixel in the next iteration:

$$M^t_{p_l q_l}(d) = \min_{d_x \in D} \Big( C(p_l, d_x) + \sum_{q_s \in N_1(p_l) \setminus q_l} M^{t-1}_{q_s p_l}(d_x) + J(p_l, q_l, d, d_x) \Big),$$

where p_l is any pixel of the left image, q_l is any of its four neighborhood pixels, D is the candidate disparity range defined in Step 4.1, d is any disparity value in D, C(p_l, d_x) is the matching cost computed in Step 4.2.2, d_x is any disparity value in the disparity range D, J(p_l, q_l, d, d_x) is the smoothness cost obtained in Step 7.1, $M^{t-1}_{q_s p_l}(d_x)$ is the belief message propagated from pixel q_s to p_l for disparity d_x obtained in iteration t - 1 (taken as 0 when t = 1), and q_s is any pixel, other than pixel q_l, in the selective neighborhood N_1(p_l), defined as:

N_1(p_l) = { q_f | q_f ∈ N(p_l), Con(q_f) ≥ Con(p_l) and S(q_f) = S(p_l) },

where N(p_l) is the four-neighborhood of pixel p_l, Con(q_f) and Con(p_l) are the reliability coefficients marked in Step 5.3, and S(q_f) and S(p_l) are the segment labels of pixels q_f and p_l obtained in Step 6;
Step 7.3: For each pixel of the left image, compute the belief b(p_l, d) of every possible disparity: $b(p_l, d) = C(p_l, d) + \sum_{p_s \in N_1(p_l)} M^{50}_{p_s p_l}(d)$, where p_l is any pixel of the left image, d is any disparity value in D, C(p_l, d) is the matching cost obtained in Step 4.2.2, $M^{50}_{p_s p_l}(d)$ is the belief message propagated from pixel p_s to p_l for disparity d obtained in the 50th iteration, p_s is any pixel in N_1(p_l), and N_1(p_l) is the selective neighborhood of p_l defined in Step 7.2;
Step 7.4: compute the disparity image
Compute the optimal disparity value d(p_l) of each pixel according to its confidences:

$$d(p_l)=\arg\min_{d\in D}\, b(p_l,d),$$

where p_l is an arbitrary pixel in the left image, b(p_l, d) is the confidence computed in step 7.3, D is the candidate disparity range, and d is an arbitrary disparity value in D;
From the optimal disparity of each pixel in the left image, build the final disparity image D_out: D_out(x, y) = d(p_xy), where x and y are the horizontal and vertical coordinates of a pixel of D_out, p_xy is the pixel with coordinates (x, y) in the reference image, and d(p_xy) is the optimal disparity value of p_xy;
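A minimal sketch of steps 7.3 and 7.4 together, reusing the illustrative cost/msgs/neigh_sel names from the message-update sketch above; the pixel-by-pixel loop is written for clarity, not speed:

```python
import numpy as np

def final_disparity_map(shape, cost, msgs, neigh_sel, d_min):
    """Winner-take-all over the final beliefs b(p, d)."""
    h, w = shape
    D_out = np.zeros((h, w), dtype=np.int32)
    for p in [(y, x) for y in range(h) for x in range(w)]:
        b = cost[p].copy()                       # C(p, d)
        for s in neigh_sel(p):                   # p_s in N_1(p)
            b += msgs[(s, p)]                    # M^50_{p_s -> p}(d)
        D_out[p] = d_min + int(np.argmin(b))     # d(p) = argmin_d b(p, d)
    return D_out
```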
Step 8: reconstruct the three-dimensional information of the object
Using the camera intrinsic and extrinsic parameter matrices A_L, A_R, [R_L t_L], [R_R t_R] obtained in step 2, together with the disparity image D_out obtained in step 7, compute the three-dimensional point cloud model of the whole object by spatial intersection.
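A hedged sketch of this last step, assuming the epipolar rectification of step 3 so that the pixel (x, y) of the left image matches (x - D_out(x, y), y) in the right image (the sign of the disparity offset depends on the rectification convention), and implementing the spatial intersection with OpenCV's triangulation; the function and variable names are illustrative:

```python
import cv2
import numpy as np

def reconstruct_point_cloud(D_out, A_L, R_L, t_L, A_R, R_R, t_R):
    """Triangulate every pixel of the disparity image into a 3-D point."""
    P_L = A_L @ np.hstack([R_L, t_L.reshape(3, 1)])    # 3x4 left projection
    P_R = A_R @ np.hstack([R_R, t_R.reshape(3, 1)])    # 3x4 right projection
    h, w = D_out.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts_l = np.vstack([xs.ravel(), ys.ravel()]).astype(np.float64)
    pts_r = np.vstack([(xs - D_out).ravel(), ys.ravel()]).astype(np.float64)
    X = cv2.triangulatePoints(P_L, P_R, pts_l, pts_r)  # 4xN homogeneous
    return (X[:3] / X[3]).T                            # N x 3 point cloud
```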
CN2010105039870A 2010-10-08 2010-10-08 Color image three-dimensional reconstruction method based on three-dimensional matching Expired - Fee Related CN101976455B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010105039870A CN101976455B (en) 2010-10-08 2010-10-08 Color image three-dimensional reconstruction method based on three-dimensional matching

Publications (2)

Publication Number Publication Date
CN101976455A CN101976455A (en) 2011-02-16
CN101976455B (en) 2012-02-01

Family

ID=43576337


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107862742A (en) * 2017-12-21 2018-03-30 华中科技大学 A kind of dense three-dimensional rebuilding methods based on more hypothesis joint views selections

Families Citing this family (49)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102111637A (en) * 2011-03-29 2011-06-29 清华大学 Stereoscopic video depth map generation method and device
JP2012253666A (en) * 2011-06-06 2012-12-20 Sony Corp Image processing apparatus and method, and program
CN102333234B (en) * 2011-10-28 2014-04-23 清华大学 Binocular stereo video state information monitoring method and device
CN103310482B (en) * 2012-03-12 2016-08-10 山东智慧生活数据系统有限公司 A kind of three-dimensional rebuilding method and system
CN102750694B (en) * 2012-06-04 2014-09-10 清华大学 Local optimum belief propagation algorithm-based binocular video depth map solution method
CN102880444B (en) * 2012-08-24 2016-03-09 浙江捷尚视觉科技股份有限公司 A kind of detection method of fighting based on the analysis of stereoscopic vision sports ground
CN102930530B (en) * 2012-09-26 2015-06-17 苏州工业职业技术学院 Stereo matching method of double-viewpoint image
CN103971356B (en) * 2013-02-04 2017-09-08 腾讯科技(深圳)有限公司 Street view image Target Segmentation method and device based on parallax information
CN103106688B (en) * 2013-02-20 2016-04-27 北京工业大学 Based on the indoor method for reconstructing three-dimensional scene of double-deck method for registering
CN103279973A (en) * 2013-06-13 2013-09-04 清华大学 Three-dimensional image matching system based on mixing and parallel
CN104427324A (en) * 2013-09-02 2015-03-18 联咏科技股份有限公司 Parallax error calculation method and three-dimensional matching device thereof
CN103868460B (en) * 2014-03-13 2016-10-05 桂林电子科技大学 Binocular stereo vision method for automatic measurement based on parallax optimized algorithm
CN103955920B (en) * 2014-04-14 2017-04-12 桂林电子科技大学 Binocular vision obstacle detection method based on three-dimensional point cloud segmentation
US9195904B1 (en) * 2014-05-08 2015-11-24 Mitsubishi Electric Research Laboratories, Inc. Method for detecting objects in stereo images
CN104112270B (en) * 2014-05-14 2017-06-20 苏州科技学院 A kind of any point matching algorithm based on the multiple dimensioned window of adaptive weighting
CN104200453B (en) * 2014-09-15 2017-01-25 西安电子科技大学 Parallax image correcting method based on image segmentation and credibility
WO2016065578A1 (en) * 2014-10-30 2016-05-06 北京大学深圳研究生院 Global disparity estimation method and system
CN104408710B (en) * 2014-10-30 2017-05-24 北京大学深圳研究生院 Global parallax estimation method and system
CN104331890B (en) * 2014-10-30 2017-06-16 北京大学深圳研究生院 A kind of global disparity method of estimation and system
CN104376593B (en) * 2014-11-25 2017-04-05 四川大学 Based on the related three-dimensional reconstruction method of multiwindow phase place
CN104778748A (en) * 2015-04-03 2015-07-15 四川大学 High-precision three-dimensional reconstruction method for uncalibrated images
CN105277169B (en) * 2015-09-25 2017-12-22 安霸半导体技术(上海)有限公司 Binocular distance-finding method based on image segmentation
CN106887018B (en) * 2015-12-15 2021-01-05 株式会社理光 Stereo matching method, controller and system
CN106887021B (en) * 2015-12-15 2020-11-24 株式会社理光 Stereo matching method, controller and system for stereo video
CN105574875B (en) * 2015-12-18 2019-02-01 燕山大学 A kind of fish eye images dense stereo matching process based on polar geometry
CN105638613B (en) * 2015-12-22 2018-12-28 中国农业大学 A kind of medicament sprays robot system and control method
CN105444696B (en) * 2015-12-30 2018-04-24 天津大学 A kind of binocular ranging method and its application based on perspective projection line measurement model
CN105761270B (en) * 2016-03-15 2018-11-27 杭州电子科技大学 A kind of tree-shaped filtering solid matching method based on EP point range conversion
CN106228605A (en) * 2016-07-29 2016-12-14 东南大学 A kind of Stereo matching three-dimensional rebuilding method based on dynamic programming
CN106447661A (en) * 2016-09-28 2017-02-22 深圳市优象计算技术有限公司 Rapid depth image generating method
CN107155100B (en) * 2017-06-20 2019-07-12 国家电网公司信息通信分公司 A kind of solid matching method and device based on image
CN107506782B (en) * 2017-07-06 2020-04-17 武汉市工程科学技术研究院 Dense matching method based on confidence weight bilateral filtering
CN107767388B (en) * 2017-11-01 2021-02-09 重庆邮电大学 Image segmentation method combining cloud model and level set
CN108062765A (en) * 2017-12-19 2018-05-22 上海兴芯微电子科技有限公司 Binocular image processing method, imaging device and electronic equipment
CN107917701A (en) * 2017-12-28 2018-04-17 人加智能机器人技术(北京)有限公司 Measuring method and RGBD camera systems based on active binocular stereo vision
CN108303037B (en) * 2018-01-31 2020-05-08 广东工业大学 Method and device for detecting workpiece surface shape difference based on point cloud analysis
CN110533663B (en) * 2018-05-25 2022-03-04 杭州海康威视数字技术股份有限公司 Image parallax determining method, device, equipment and system
CN109255811B (en) * 2018-07-18 2021-05-25 南京航空航天大学 Stereo matching method based on reliability map parallax optimization
CN109241855B (en) * 2018-08-10 2022-02-11 西安交通大学 Intelligent vehicle travelable area detection method based on stereoscopic vision
CN109360268B (en) * 2018-09-29 2020-04-24 清华大学 Surface optimization method and device for reconstructing dynamic object
CN109916322B (en) * 2019-01-29 2020-02-14 同济大学 Digital speckle full-field deformation measurement method based on adaptive window matching
CN109872344A (en) * 2019-02-25 2019-06-11 广州视源电子科技股份有限公司 Tracking, matching process and coordinate acquiring method, the device of image characteristic point
CN111627067B (en) * 2019-02-28 2023-08-22 海信集团有限公司 Calibration method of binocular camera and vehicle-mounted equipment
CN109903379A (en) * 2019-03-05 2019-06-18 电子科技大学 A kind of three-dimensional rebuilding method based on spots cloud optimization sampling
CN109934786B (en) * 2019-03-14 2023-03-17 河北师范大学 Image color correction method and system and terminal equipment
CN115442515B (en) 2019-03-25 2024-02-02 华为技术有限公司 Image processing method and apparatus
CN111197976A (en) * 2019-12-25 2020-05-26 山东唐口煤业有限公司 Three-dimensional reconstruction method considering multi-stage matching propagation of weak texture region
CN112767455B (en) * 2021-01-08 2022-09-02 合肥的卢深视科技有限公司 Calibration method and system for binocular structured light
CN113674407B (en) * 2021-07-15 2024-02-13 中国地质大学(武汉) Three-dimensional terrain reconstruction method, device and storage medium based on binocular vision image

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6639596B1 (en) * 1999-09-20 2003-10-28 Microsoft Corporation Stereo reconstruction from multiperspective panoramas
CN1920886A (en) * 2006-09-14 2007-02-28 浙江大学 Video flow based three-dimensional dynamic human face expression model construction method
CN101625768A (en) * 2009-07-23 2010-01-13 东南大学 Three-dimensional human face reconstruction method based on stereoscopic vision
CN101853508A (en) * 2010-06-08 2010-10-06 浙江工业大学 Binocular stereo vision matching method based on generalized belief propagation of direction set

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100374784B1 (en) * 2000-07-19 2003-03-04 학교법인 포항공과대학교 A system for maching stereo image in real time


Similar Documents

Publication Publication Date Title
CN101976455B (en) Color image three-dimensional reconstruction method based on three-dimensional matching
CN112435325B (en) VI-SLAM and depth estimation network-based unmanned aerial vehicle scene density reconstruction method
CN111462329B (en) Three-dimensional reconstruction method of unmanned aerial vehicle aerial image based on deep learning
CN106228605A (en) A kind of Stereo matching three-dimensional rebuilding method based on dynamic programming
CN104299261B (en) Three-dimensional imaging method and system for human body
CN104835158A (en) 3D point cloud acquisition method based on Gray code structure light and polar constraints
CN101887589A (en) Stereoscopic vision-based real low-texture image reconstruction method
CN103248911B (en) Based on the virtual viewpoint rendering method combined during sky in multi-view point video
CN104539928A (en) Three-dimensional printing image synthesizing method for optical grating
CN104809719A (en) Virtual view synthesis method based on homographic matrix partition
CN101625768A (en) Three-dimensional human face reconstruction method based on stereoscopic vision
CN110688905B (en) Three-dimensional object detection and tracking method based on key frame
CN106056622B (en) A kind of multi-view depth video restored method based on Kinect cameras
CN104182968B (en) The fuzzy moving-target dividing method of many array optical detection systems of wide baseline
CN103093460A (en) Moving camera virtual array calibration method based on parallel parallax
CN107330973A (en) A kind of single-view method for reconstructing based on various visual angles supervision
CN110363838A (en) Big field-of-view image three-dimensionalreconstruction optimization method based on more spherical surface camera models
CN102074005B (en) Interest-region-oriented stereo matching method
CN112907573B (en) Depth completion method based on 3D convolution
CN111028281A (en) Depth information calculation method and device based on light field binocular system
CN112734839A (en) Monocular vision SLAM initialization method for improving robustness
CN115035235A (en) Three-dimensional reconstruction method and device
CN101383046B (en) Three-dimensional reconstruction method on basis of image
Liu et al. Dense stereo matching strategy for oblique images that considers the plane directions in urban areas
CN105574875A (en) Fish-eye image dense stereo algorithm based on polar curve geometry

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: JIANGSU JIEWANJIA TEXTILE CO., LTD.

Free format text: FORMER OWNER: SOUTHEAST UNIV.

Effective date: 20131018

Owner name: SOUTHEAST UNIV.

Effective date: 20131018

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 210096 NANJING, JIANGSU PROVINCE TO: 226600 NANTONG, JIANGSU PROVINCE

TR01 Transfer of patent right

Effective date of registration: 20131018

Address after: A group of the temple village of Haian town of Haian County, Jiangsu city of Nantong province 226600

Patentee after: Jiangsu Wanjia Textile Co., Ltd.

Patentee after: Southeast University

Address before: 210096 Jiangsu city Nanjing Province four pailou No. 2

Patentee before: Southeast University

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20120201

Termination date: 20191008
