CN104537707B - Image space type stereoscopic vision moves real-time measurement system online - Google Patents

Info

Publication number
CN104537707B
CN104537707B (application CN201410745020.1A)
Authority
CN
China
Prior art keywords
stereopsis
image
stereo
dimensional
camera
Prior art date
Legal status
Active
Application number
CN201410745020.1A
Other languages
Chinese (zh)
Other versions
CN104537707A (en)
Inventor
邢帅
王栋
徐青
葛忠孝
李鹏程
耿迅
张军军
侯晓芬
周杨
夏琴
江腾达
李建胜
Current Assignee
PLA Information Engineering University
Original Assignee
PLA Information Engineering University
Priority date
Filing date
Publication date
Application filed by PLA Information Engineering University filed Critical PLA Information Engineering University
Priority to CN201410745020.1A priority Critical patent/CN104537707B/en
Publication of CN104537707A publication Critical patent/CN104537707A/en
Application granted granted Critical
Publication of CN104537707B publication Critical patent/CN104537707B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The present invention relates to an image space type stereoscopic vision online mobile real-time measurement system. Camera calibration is carried out first; during the movement of the stereo camera, several groups of stereo images are acquired; then image preprocessing, feature extraction and stereo matching, three-dimensional reconstruction, and stereo image model connection are performed. For the stereo images of any instant, the corresponding image points in the stereo images of the adjacent instant are obtained and taken as tie points between the two groups of stereo images; the homonymous model points of the two three-dimensional models are computed by forward intersection, and the two models are transformed into the same spatial coordinate system by a spatial similarity transformation. The stereo images of each subsequent instant are processed in the same way in turn, so that all stereo image models are connected into one overall model of the whole scene.

Description

Image space type stereoscopic vision moves real-time measurement system online
Technical field
The present invention relates to an image space type stereoscopic vision online mobile real-time measurement system.
Background technology
The basic principle of stereoscopic vision is to observe the same scene from two or more viewpoints, obtain images of the object from different viewing angles, and compute the positional deviation between image pixels (i.e. the parallax) by the principle of triangulation to obtain three-dimensional information. Stereo vision measurement methods have a long history and are widely applied in industrial measurement and photogrammetry.
The master's thesis "Target positioning of a robot based on stereoscopic vision" (Nanjing University of Science and Technology) summarizes the key steps of existing stereo vision, including:
1) image acquisition;
2) camera calibration;
3) image preprocessing and feature extraction;
4) Stereo matching;
5) three-dimensional reconstruction.
The above methods mainly have the following problems:
(1) The scene three-dimensional information obtained is incomplete. Some systems obtain only the three-dimensional information of a few markers in the scene, some obtain only the three-dimensional information of part of the objects in the scene, and some obtain the three-dimensional information of the whole scene but with insufficient density, so that detailed information in the scene cannot be expressed.
(2) The scene three-dimensional information obtained is not integrated. Each stereo image pair captured by the stereo camera generates the three-dimensional information of the corresponding scene, and as the stereo camera moves, a larger space can be covered; however, such systems process the scene three-dimensional information generated each time in isolation, without considering the relation between the scene three-dimensional information generated at different instants, and therefore cannot measure a large-scale scene as a whole.
(3) The real-time performance of the systems is insufficient. Among the systems above, those that obtain only the three-dimensional information of a few markers in the scene can basically meet the requirement of real-time response, but for systems that need to obtain fine three-dimensional information of the whole scene, processing is often done offline and cannot meet the requirement of real-time response.
The image space type stereoscopic vision online mobile real-time measurement system designed by this project optimizes the stereo image processing and matching algorithms, and combines continuous acquisition of stereo images during camera movement with an algorithm for connecting stereo image vision measurement models, so as to realize continuous observation of the same target and generate complete and accurate three-dimensional information of the whole target in real time.
Summary of the invention
The object of the present invention is to provide an image space type stereoscopic vision online mobile real-time measurement system, so as to solve the problems in the prior art that the obtained scene information is incomplete and that the scene three-dimensional information is not integrated, and to further improve real-time performance.
To achieve the above object, the solution of the present invention is as follows:
The image space type stereoscopic vision online mobile real-time measurement system includes a stereo camera composed of two cameras, a camera fixing device and a distance adjustment device; the stereo camera is connected to a control and computing device, and the control and computing device is used for controlling camera data acquisition, and for storing, processing and outputting the processing results of the data. The measurement process is as follows:
1) camera calibration; 2) during the movement of the stereo camera, several groups of stereo images are acquired; 3) image preprocessing; 4) feature extraction and stereo matching; 5) three-dimensional reconstruction; 6) stereo image model connection. For the stereo images of any instant, the corresponding image points in the stereo images of the adjacent instant are obtained and taken as tie points between the two groups of stereo images; the homonymous model points of the two three-dimensional models are computed by forward intersection, and the two three-dimensional models are transformed into the same spatial coordinate system by a spatial similarity transformation. The stereo images of each subsequent instant are processed in the same way in turn, so that all stereo image models are connected into one overall model of the whole scene.
The camera calibration method includes: acquiring stereo images simultaneously; extracting the calibration board corners; and calibration solving.
Image preprocessing includes filtering and grayscale equalization processing.
The feature extraction and stereo matching include: obtaining the tie points between three-dimensional models with the SURF operator; computing the relative position and attitude between the two images and performing relative orientation; determining the epipolar-line relation between the stereo images and rectifying the stereo images to obtain stereo images arranged along epipolar lines; and performing dense matching of the stereo images with the SGM algorithm to generate dense corresponding image points.
The three-dimensional reconstruction is realized by reconstructing the three-dimensional information of the target scene from the dense corresponding image points obtained by matching, using multi-image forward intersection.
The measurement method of the present invention continuously acquires stereo images and reconstructs the three-dimensional model of the target scene in real time during the movement of the stereo camera. To form a complete scene model, the results of each reconstruction need to be connected together. The principle of stereo image model connection is: according to the corresponding image points in the two groups of stereo images acquired at adjacent instants, taken as tie points between the two groups, the homonymous model points of the two three-dimensional models are computed by forward intersection, and finally the two three-dimensional models are transformed into the same spatial coordinate system by a spatial similarity transformation. Processing the stereo images acquired at the following instants in the same way connects all stereo image models into one overall model of the whole scene.
Brief description of the drawings
Fig. 1 shows the lens support structure;
Fig. 2 is a design schematic of the hardware platform;
Fig. 3 is the software workflow diagram;
Fig. 4 shows the chessboard calibration board and its coordinate system;
Fig. 5 is the flow chart of the SURF operator;
Fig. 6 illustrates that, after relative orientation, corresponding projection rays intersect at the target point;
Fig. 7 is the flow chart of relative orientation of an independent image pair;
Fig. 8 shows the epipolar geometry of a parallel binocular stereo vision system;
Fig. 9 is the flow chart of the SGM matching algorithm;
Fig. 10 illustrates the principle of forward intersection measurement;
Fig. 11 is the flow chart of stereo image model connection;
Fig. 12 is a schematic diagram of the mobile real-time measurement process of the system.
Embodiment
The present invention will be further described in detail below in conjunction with the accompanying drawings.
During the movement of the stereo camera, stereo images can be acquired continuously and the three-dimensional model of the target scene reconstructed in real time. However, each reconstruction yields only a partial model of the target within the current field of view; to form a complete scene model, the results of each reconstruction need to be connected together, and realizing this is the basic scheme of the present invention.
The steps of the basic scheme of the present invention are: 1) camera calibration; 2) during the movement of the stereo camera, several groups of stereo images are acquired; 3) image preprocessing; 4) feature extraction and stereo matching; 5) three-dimensional reconstruction; 6) stereo image model connection. For the stereo images of any instant, the corresponding image points in the stereo images of the adjacent instant are obtained and taken as tie points between the two groups of stereo images; the homonymous model points of the two three-dimensional models are computed by forward intersection, and the two three-dimensional models are transformed into the same spatial coordinate system by a spatial similarity transformation. The stereo images of each subsequent instant are processed in the same way in turn, so that all stereo image models are connected into one overall model of the whole scene.
Through stereo image model connection, the target three-dimensional information reconstructed by the system during its movement can be merged into the coordinate system of the first group of stereo image models, forming one complete geometrical model of the whole scene.
A specific image space type stereoscopic vision online mobile real-time measurement system is introduced below. The system mainly consists of hardware and software (the software implements the method scheme of the present invention). As shown in Fig. 1, the hardware mainly includes two cameras, lenses, data cables, an acquisition conversion device, a camera fixing device, and a control and computing device; the software includes camera calibration, image preprocessing, feature extraction and matching, relative orientation, stereo image model connection, epipolar image generation, dense matching and three-dimensional reconstruction.
1.1 hardware components
As shown in Fig. 1, the hardware needed to build the system is as follows:
(1) Cameras: digital industrial cameras with a resolution above 1,000,000 pixels and an acquisition rate of no less than 30 frames/second, powered over USB, transmitting data over a network, FireWire or USB interface.
(2) Lenses: standard-interface lenses compatible with the cameras, with a focal length of no less than 24 mm.
(3) Data cables: standard-interface Gigabit network cables, FireWire or USB data cables.
(4) Acquisition conversion device: the conversion device and protocol for connecting the data cables to the computer.
(5) Camera fixing device: a fixing frame of a certain length on which the two cameras can be mounted (as shown in Fig. 1); the distance between the two cameras can be adjusted by adjusting the distance between the mounting seats.
(6) Control and computing device: used for controlling camera data acquisition, and for storing, processing and outputting the processing results; a tablet computer or laptop with the corresponding interfaces is recommended.
The design diagram of the system hardware platform is as shown in Figure 2.
1.2 software components
The system software consists of 8 modules: camera calibration, image preprocessing, feature extraction and matching, relative orientation, stereo image model connection, epipolar image generation, dense matching and three-dimensional reconstruction. Its workflow is shown in Fig. 3.
The key technologies and methods involved in each module are as follows.
1.2.1 camera calibration
The purpose of stereo camera calibration is to obtain the internal parameters of each camera and the positional relation between the cameras. The internal parameters include the focal length, principal point coordinates and lens distortion parameters of each camera, and the positional relation consists of the rotation matrix and translation matrix between the cameras.
At present, common camera calibration methods include the field test method, Zhang Zhengyou's method, Tsai's two-step method and self-calibration. Tests show that Zhang Zhengyou's method and Tsai's two-step method are easy to operate, stable, reliable and accurate, but the latter solves fewer distortion parameters than the former; the system therefore adopts the former as its camera calibration method to obtain a better distortion correction effect. The steps involved in the calibration process are as follows:
Step 1: acquire stereo images simultaneously. The image information includes the size and colour (gray scale) of the images; the scene contains a 12 × 9 chessboard calibration board with a fixed grid edge size (30 mm), which constitutes an object coordinate system (as shown in Fig. 4). The calibration board is located at the centre of the common region of the stereo images and is relatively evenly distributed.
Step 2: extract the calibration board corners. The two-dimensional image coordinates x, y of the 88 corners are extracted on each of the stereo images; the positioning accuracy of the corner extraction method generally reaches sub-pixel level (see the relevant literature [Zhang Guangjun, Vision Measurement [M], Science Press, 2008: 55-61]).
Step 3: calibration solving. The internal parameters of both cameras can be obtained with Zhang Zhengyou's calibration method, including the focal length f, the principal point coordinates (c_x, c_y), the radial distortion coefficients k_1, k_2, k_3 and the tangential distortion coefficients p_1, p_2; the external parameters include the rotation matrix R and the translation vector T between the cameras. The specific calibration solving principle is given below.
1. Camera calibration
Let the focal length of the camera be f, the principal point coordinates be (c_x, c_y), the image point coordinates be (x, y), and (X, Y, Z) be the corresponding object point coordinates (since the chessboard calibration target is planar, Z = 0). The relation between them is a planar projective mapping, in which [r1 r2 r3] = R is the rotation matrix and H = s·M·[r1 r2 T] is the homography matrix, M being the matrix of intrinsic parameters.
Since the object coordinates of the corners on the calibration board are known and their corresponding image coordinates are obtained by image processing, the matrix H can be computed according to formula (3) and expressed in vector form as [h1 h2 h3], from which r1, r2 and T are recovered up to the scale factor λ = 1/s. From the properties of the rotation matrix (the columns r1 and r2 are orthonormal), two relations on M are obtained.
Let B = M^(-T)·M^(-1); B is a symmetric matrix, so its expansion can be represented by a 6-element vector b. Formula (4) can therefore be expressed with dot products of 6-element vectors, and the orthonormality conditions define two constraint equations per image.
If the camera acquires K chessboard images, the equation system Vb = 0 can be set up, in which V is a 2K × 6 matrix computed from the homography matrices H. Since each image provides two constraint equations, the vector b can be solved by least squares when K ≥ 3, and the camera intrinsic parameters are then recovered from it. Combined with the homography matrix of each image, the exterior parameters of every view can also be computed.
Taking the influence of lens distortion into account, the interior and exterior parameters obtained above are used as initial values of the following nonlinear equation system. Let (x_p, y_p) be the ideal normalized point coordinates and (x_d, y_d) the actual coordinates with distortion; their relation is the radial and tangential distortion model. After the interior and exterior parameters are re-estimated, the equation system is solved iteratively by least squares, finally yielding intrinsic parameters of better accuracy.
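For illustration, the sketch below writes out the radial and tangential distortion model referred to above, using the five coefficients listed in the third calibration step (k_1, k_2, k_3, p_1, p_2). It is a minimal numpy sketch of the standard Brown/Zhang-style model mapping ideal normalized coordinates to distorted ones; the function name and the normalized-coordinate assumption are illustrative, not taken from the patent text.

```python
# Minimal sketch of the radial + tangential distortion model (illustrative names).
import numpy as np

def distort(xp, yp, k1, k2, k3, p1, p2):
    r2 = xp * xp + yp * yp                         # squared radius
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3 # radial distortion factor
    xd = xp * radial + 2 * p1 * xp * yp + p2 * (r2 + 2 * xp * xp)
    yd = yp * radial + p1 * (r2 + 2 * yp * yp) + 2 * p2 * xp * yp
    return xd, yd
```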
2. Extrinsic calibration
Let the rotation matrix between the stereo camera coordinate systems be R and the translation matrix be T. Given an object point P, its coordinates in the left and right camera coordinate systems are
P_l = R_l·P + T_l,  P_r = R_r·P + T_r  (10)
where R_l, T_l and R_r, T_r are the rotation and translation matrices of point P between the world coordinate system and the corresponding camera coordinate system. Since P_l and P_r represent the same point in space, we obtain
P_l = R^T·(P_r − T)  (11)
Carrying out matrix operations on formulas (10) and (11) yields the rotation R and translation T between the two cameras (formula (12)), in which R_l, R_r, T_l and T_r are obtained from the intrinsic calibration above.
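As a minimal sketch of how this calibration step could be carried out in practice, the code below uses OpenCV's implementation of Zhang's method (calibrateCamera followed by stereoCalibrate). The 11 × 8 inner-corner pattern and 30 mm square size follow the description above; the file-name pattern, termination criteria and flag choices are illustrative assumptions.

```python
# Sketch of stereo calibration with OpenCV (Zhang's method); names are illustrative.
import glob
import numpy as np
import cv2

PATTERN = (11, 8)          # inner corners of the 12 x 9 chessboard (88 corners)
SQUARE = 30.0              # grid edge length in mm

# object coordinates of the corners in the board plane (Z = 0)
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

obj_pts, left_pts, right_pts = [], [], []
for lf, rf in zip(sorted(glob.glob("left_*.png")), sorted(glob.glob("right_*.png"))):
    gl = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
    okl, cl = cv2.findChessboardCorners(gl, PATTERN)
    okr, cr = cv2.findChessboardCorners(gr, PATTERN)
    if not (okl and okr):
        continue
    crit = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    cl = cv2.cornerSubPix(gl, cl, (11, 11), (-1, -1), crit)   # sub-pixel corners
    cr = cv2.cornerSubPix(gr, cr, (11, 11), (-1, -1), crit)
    obj_pts.append(objp); left_pts.append(cl); right_pts.append(cr)

size = gl.shape[::-1]
# intrinsic parameters (focal length, principal point, distortion) per camera
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
# external parameters: rotation R and translation T between the two cameras
_, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K1, d1, K2, d2, size,
    flags=cv2.CALIB_FIX_INTRINSIC)
```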
1.2.2 image preprocessing
When the stereo camera acquires images, factors such as lens quality and illumination changes degrade the image quality, which adversely affects the subsequent reconstruction. To mitigate this influence the images need to be preprocessed. The image preprocessing of the system mainly includes filtering and gray-level equalization, with the aim of removing noise and enhancing the images.
1. Filtering
To reduce the influence of image noise, a smoothing filter is usually used to improve the image; its basic idea is to remove or weaken outlier points through an operation on the target point and several surrounding points. The filtering operation selects a suitable template according to the noise characteristics; common templates include averaging templates and the Gaussian template. During filtering, the image is convolved with the selected template to obtain the smoothed image.
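A minimal sketch of this smoothing step, assuming common 3 × 3 averaging and Gaussian templates (the template figures of the original text are not reproduced here) and OpenCV's filter2D for the convolution:

```python
# Sketch of template-based smoothing; template values are common defaults.
import numpy as np
import cv2

MEAN_3x3 = np.full((3, 3), 1.0 / 9.0)                                  # averaging template
GAUSS_3x3 = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], np.float32) / 16.0  # Gaussian template

def smooth(img, template=GAUSS_3x3):
    # convolve the image with the selected template to suppress noise
    return cv2.filter2D(img, -1, template)
```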
2. Gray-level equalization
Because of manufacturing errors of the stereo camera, the acquired stereo images are radiometrically inconsistent, which mainly appears as an overall gray-level difference between the left and right images; they therefore need to be gray-level equalized before use. The system stretches the gray levels of the left and right images separately using histogram equalization.
The gray-level histogram is a function describing the gray values: it counts the number of pixels in the image having each gray value, with the abscissa representing the gray level and the ordinate the frequency with which that gray value occurs. The purpose of gray-level histogram equalization is to transform the histogram of the original image into a uniformly distributed form, making the gray ranges of the stereo images consistent, so that the gray values of corresponding image points in the images acquired by the stereo camera are essentially the same.
In fact, because of the relative position between the two cameras, the scene content of the left and right images does not overlap completely, so the overlapping region of the stereo images must first be obtained by image matching. The gray-level histogram equalization is performed on the basis of this overlapping region, and its specific steps are as follows (see the sketch after this list):
Step 1: within the gray range [0, 255], scan the whole image and count the frequency n_k with which the k-th gray level appears, k ∈ [0, 255];
Step 2: normalize the histogram by approximating probabilities with frequencies, i.e. P_r(r_k) = n_k / n, 0 ≤ r_k ≤ 1, k = 0, 1, …, 255, where P_r(r_k) is the probability with which the gray value r_k appears; after normalization the histogram sums to 1;
Step 3: compute the transformed gray values using the cumulative distribution mapping of the normalized histogram.
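The sketch below implements the three equalization steps with numpy for a single 8-bit image and applies them to both images of the pair. In the system the statistics would be gathered over the overlapping region found by image matching, as noted above; this sketch uses the whole image for brevity, and all names are illustrative.

```python
# Sketch of the three-step histogram equalization for 8-bit grayscale images.
import numpy as np

def equalize(img):
    hist = np.bincount(img.ravel(), minlength=256)   # step 1: frequency n_k
    prob = hist / img.size                           # step 2: P_r(r_k) = n_k / n
    cdf = np.cumsum(prob)                            # step 3: cumulative mapping
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[img]

def equalize_pair(img_left, img_right):
    return equalize(img_left), equalize(img_right)
```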
1.2.3 feature extracting and matching
The SURF operator is a feature point extraction and matching algorithm widely used in the computer vision field; it is stable, fast and accurate. The system therefore uses the SURF operator to obtain the tie points between three-dimensional models.
The core of the SURF operator is the computation of the Hessian matrix. For a function f(x, y), the Hessian matrix H is formed from the partial derivatives of the function:
H = [∂²f/∂x², ∂²f/∂x∂y; ∂²f/∂x∂y, ∂²f/∂y²]
and the H-matrix determinant is det(H) = (∂²f/∂x²)(∂²f/∂y²) − (∂²f/∂x∂y)².
The sign of the determinant is used to classify all points and judge whether a point is an extreme point. In the SURF operator, the image pixel value I(x, y) replaces the function value f(x, y), and a second-order standard Gaussian function is chosen as the filter; the second-order partial derivatives are computed by convolution with specific kernels, giving the three matrix elements L_xx, L_yy, L_xy of the H matrix, from which H is computed:
L(X, t) = G(t) * I(X)  (19)
L_xx(X, t) is a representation of the image at a given scale, realized by the convolution of the Gaussian kernel G(t) with the image function I(X) at the point X = (x, y); the kernel function G(t) is expressed as in formula (20), where g(t) is the Gaussian function and t is the Gaussian variance, and L_yy and L_xy are defined similarly. In this way the value of the H determinant can be computed for every pixel in the image and used to identify points of interest. For efficiency, Herbert Bay proposed using the approximation D_xx instead of L_xx; to balance the error between the exact and approximate values, a weight w that varies with scale is introduced, and the H-matrix determinant becomes
det(H_approx) = D_xx·D_yy − (w·D_xy)²  (21)
The detailed flow of the SURF operator is shown in Fig. 5.
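A minimal sketch of tie-point extraction with the SURF operator, assuming an OpenCV build that includes the contrib module xfeatures2d (SURF is not in the default build). The Hessian threshold and the 0.7 ratio test are illustrative choices, not values from the original text.

```python
# Sketch of SURF-based tie-point extraction and matching (opencv-contrib assumed).
import cv2

def surf_tie_points(img_left, img_right, hessian_threshold=400):
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold)
    kp1, des1 = surf.detectAndCompute(img_left, None)
    kp2, des2 = surf.detectAndCompute(img_right, None)
    # ratio test on the two nearest neighbours to keep reliable matches
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.7 * n.distance]
    pts1 = [kp1[m.queryIdx].pt for m in good]
    pts2 = [kp2[m.trainIdx].pt for m in good]
    return pts1, pts2
```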
1.2.4 relative orientation
Using the tie points between the stereo images obtained by matching, the relative position and attitude between the two images can be computed; this is an essential step for three-dimensional reconstruction from stereo images.
The purpose of relative orientation is to determine the relative orientation of the stereo pair in space, described by 5 relative orientation elements. Its principle is that when the relative orientation of the stereo pair has been determined, corresponding epipolar lines are coplanar with the baseline, i.e. the projection rays of corresponding image points lie in the epipolar plane and intersect (as shown in Fig. 6).
From Fig. 6, the coplanarity condition equation in its basic vector form is the vanishing of the scalar triple product of the baseline vector B and the two projection rays d1 and d2, i.e. B·(d1 × d2) = 0.
Let the coordinates of S2 in the S1-X1Y1Z1 coordinate system be (B_X, B_Y, B_Z), the coordinates of d1 in the S1-X1Y1Z1 coordinate system be (X1, Y1, Z1), and the coordinates of d2 in the S2-X2Y2Z2 coordinate system be (X2, Y2, Z2); the coordinate form of the condition follows by expressing d2 in the S1-X1Y1Z1 system through the relative rotation.
In addition, the data required for relative orientation of an independent image pair are the image space coordinates of the corresponding image points on the stereo images; their number should generally be no fewer than 6 and they should be evenly distributed. The specific orientation solution flow is shown in Fig. 7.
Finally, the 5 relative orientation parameters are obtained by the solution; for the detailed solution process see Zhang Baoming et al., Photogrammetry.
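The text solves the five relative-orientation elements iteratively from the coplanarity condition. As a substitute sketch, the code below recovers the same relative rotation and (scale-free) baseline direction from the tie points via the essential matrix, using OpenCV's findEssentialMat and recoverPose; this is an equivalent route, not the solution flow of Fig. 7, and all names are illustrative.

```python
# Sketch of relative pose recovery from tie points via the essential matrix.
import numpy as np
import cv2

def relative_orientation(pts_left, pts_right, K):
    p1 = np.asarray(pts_left, dtype=np.float64)
    p2 = np.asarray(pts_right, dtype=np.float64)
    E, inliers = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
    # decompose E into rotation R and unit baseline t (5 degrees of freedom)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=inliers)
    return R, t
```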
1.2.5 core line video generation
According to the geometric principle of stereo camera imaging, corresponding image points on the stereo images necessarily lie on corresponding epipolar lines, and the image points on corresponding epipolar lines are in one-to-one correspondence; this is of great importance for the subsequent dense matching. Therefore, the epipolar-line relation between the stereo images must first be determined, and the stereo images are then rectified to obtain stereo images arranged along epipolar lines, in preparation for the subsequent dense matching.
Based on the fundamental matrix obtained from relative orientation, the system rectifies the stereo images with Hartley's algorithm: a pair of plane projective transformations is applied to the image pair so that the matched epipolar lines coincide with the image scan lines. Such a two-dimensional projective transformation not only makes the v coordinates of matched point pairs identical in the two images, but also brings their u coordinates as close as possible, i.e. keeps the horizontal parallax small, which reduces the search space during matching. The algorithm uses only the fundamental matrix of the image pair and does not require the camera projection matrices.
The positional relation of the rectified stereo pair is shown in Fig. 8: after rectification the epipolar lines coincide with the scan lines. To achieve this, a projective transformation must be found that maps the epipole of the image to infinity, while keeping the image distortion caused by the transformation to a minimum. To meet this requirement, let u_0 be the image centre; the transformation matrix H should act approximately as a rotation and translation in the neighbourhood of u_0, so that the image distortion remains small. Taking u_0 as the origin, the epipole p = (f, 0, 1) lying on the x-axis is transformed by the matrix G = [1 0 0; 0 1 0; −1/f 0 1],
which maps the epipole p to the point at infinity (f, 0, 0) and approximates the identity matrix near the origin. For an arbitrary point of interest u and the epipole, the required transformation is H = GRT, where the matrix T translates u to the origin, R is a rotation that moves the epipole onto the point (f, 0, 1) on the x-axis, and G moves the point (f, 0, 1) to infinity. Combining these three transformation matrices gives the projective transformation that meets the requirement.
Let the images to be rectified be J and J', and let a pair of plane projective transformations H and H' act on the two images respectively. If λ and λ' are corresponding epipolar lines, the required transformations form a matched pair satisfying
H*·λ = H'·λ'  (25)
which expresses that corresponding epipolar lines still match after the transformation.
Here H is a point transformation matrix and H* is the corresponding line transformation matrix; transformations satisfying formula (25) are called matched transformations. A transformation H' must first be found that moves the epipole p' to infinity, and then a transformation H matched with H' and satisfying the following condition is obtained.
To obtain the transformation matrix H matched with H', the following theorem is introduced.
Theorem: let the fundamental matrix of the image pair J and J' be F = [p']_× M, and let H' be a projective transformation applied to J'. A projective transformation H applied to J is matched with H' if and only if H has the form
H = (I + H'p'a^T)·H'M  (27)
where a is an arbitrary vector.
When the transformation matrix H' has moved the epipole p' to the point at infinity (1, 0, 0)^T, let H_0 = H'M, so that H = A·H_0; the minimization problem above can then be expressed in terms of A. Solving this minimization problem yields the required pair of plane projective transformation matrices H and H', after which the original images can be resampled and gray-level interpolation applied to generate the new stereo pair.
The accuracy of the algorithm depends on the accuracy with which the epipolar geometry is recovered; the epipolar geometry can therefore be recovered with high accuracy offline in advance, which guarantees the accuracy of the rectification.
The rectification algorithm steps are as follows:
Step 1: recover the epipolar geometry with high accuracy offline and find the epipoles p and p' in the two images;
Step 2: obtain the projective transformation H' that maps the epipole p' to the point at infinity (1, 0, 0)^T;
Step 3: obtain the projective transformation H matched with the transformation H' that minimizes the sum of distances between the transformed matched points H·m_1i and H'·m_2i, where m_1i = (u_1, v_1, 1) and m_2i = (u_2, v_2, 1);
Step 4: according to the projective transformations H and H', resample the two original images respectively to obtain the rectified stereo pair.
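A sketch of this epipolar rectification step under the assumption that OpenCV's stereoRectifyUncalibrated, which implements Hartley's algorithm from the fundamental matrix and the matched points, is an acceptable stand-in for the derivation above; the rectified pair is generated by warping. All names are illustrative.

```python
# Sketch of uncalibrated (Hartley) rectification and resampling of a stereo pair.
import numpy as np
import cv2

def rectify_pair(img_left, img_right, pts_left, pts_right):
    p1 = np.float32(pts_left)
    p2 = np.float32(pts_right)
    F, mask = cv2.findFundamentalMat(p1, p2, cv2.FM_RANSAC)
    h, w = img_left.shape[:2]
    ok, H1, H2 = cv2.stereoRectifyUncalibrated(p1, p2, F, (w, h))
    # resample both images so that corresponding points lie on the same row
    left_rect = cv2.warpPerspective(img_left, H1, (w, h))
    right_rect = cv2.warpPerspective(img_right, H2, (w, h))
    return left_rect, right_rect
```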
1.2.6 dense matching
Among existing stereo image matching methods, such as the GC algorithm, the SGM algorithm and the BP algorithm, the SGM algorithm is fast, accurate and stable. The system therefore uses the SGM algorithm for dense matching of the stereo images to generate dense three-dimensional scene information.
The basic idea of the SGM algorithm is to first compute a pixel-wise matching cost based on mutual information, and then approximate a two-dimensional smoothness constraint by multiple one-dimensional smoothness constraints [5].
Assume that the gray value of pixel p in the reference image is I_bp and the gray value of the corresponding point q in the image to be matched is I_mq. The function q = e_bm(p, d) gives the point on the epipolar line of the matching image corresponding to the reference image pixel p, with epipolar parameter (disparity) d. The matching cost function based on MI is then built from the entropies of the image patches centred on p and q and their joint entropy.
Along a path direction r, the cost L_r(p, d) of pixel p is defined recursively from the pixel-wise cost and the costs of the preceding pixel along the path, where P_1 and P_2 are penalty coefficients for small and large disparity changes respectively. Adding the costs of all directions gives the total matching cost S(p, d) = Σ_r L_r(p, d).
Then, for each pixel p, the disparity is d_p = argmin_d S(p, d). Finally, a consistency check is carried out, i.e. the disparity values of matched point pairs are compared, and a disparity/depth map with clear contours and rich information is generated. The specific implementation process is shown in Fig. 9.
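A sketch of this dense matching step using OpenCV's semi-global block matcher (StereoSGBM), which performs the multi-path aggregation with penalties P1 and P2 described above but uses a Birchfield–Tomasi/census-style cost rather than mutual information; the parameter values below are illustrative.

```python
# Sketch of semi-global dense matching on a rectified pair.
import cv2

def dense_match(left_rect, right_rect, num_disp=128, block=5):
    sgbm = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=num_disp,          # must be a multiple of 16
        blockSize=block,
        P1=8 * block * block,             # penalty for small disparity changes
        P2=32 * block * block,            # penalty for large disparity changes
        uniquenessRatio=10,
        disp12MaxDiff=1)                  # left-right consistency check
    # fixed-point disparities are scaled by 16
    disparity = sgbm.compute(left_rect, right_rect).astype("float32") / 16.0
    return disparity
```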
1.2.7 three-dimensional reconstruction
Based on the results of stereo camera calibration and dense matching, the system reconstructs the three-dimensional information of the target scene by multi-image forward intersection.
If all points on the object surface can be obtained, the shape and position of the three-dimensional object are uniquely determined. As shown in Fig. 10, assume that the image points of an arbitrary space point P in the two camera coordinate systems C1 and C2 are p1 and p2, i.e. p1 and p2 are the corresponding points of the same space point P in the left and right images; the cameras have been calibrated, and their projection matrices are M1 and M2 respectively. The projection equations of the two cameras then hold, where (u_1, v_1, 1) and (u_2, v_2, 1) are the homogeneous image coordinates of p1 and p2 in their respective images, (X, Y, Z, 1) are the homogeneous coordinates of P in the world coordinate system, and m^k_ij (k = 1, 2; i = 1, …, 3; j = 1, …, 4) is the element in row i and column j of M_k.
Eliminating Z_c1 and Z_c2 from the two equations yields four linear equations in X, Y and Z.
From analytic geometry, the plane equation in three-dimensional space is linear, and two plane equations taken together define a space line (the line being the intersection of the two planes); each pair of equations above therefore represents the line O1p1 (or O2p2).
There are now 4 equations in 3 unknowns; considering the presence of data noise, the least squares method can be used to solve them. Rewriting the equations in matrix form gives formula (37), which can be abbreviated as
KX = U  (38)
where K is the 4 × 3 matrix on the left of formula (37), X is the unknown three-dimensional vector, and U is the 4 × 1 vector on the right of formula (37). With K and U known, the least squares solution of formula (37) is
X = (K^T·K)^(-1)·K^T·U  (39)
Reconstruction in the usual Euclidean sense requires strict calibration of the cameras, which is accomplished by the camera calibration method introduced above.
A perspective projection relation exists between the two-dimensional images and the three-dimensional scene, and this relation can be described by a projection matrix (i.e. the camera parameter matrix). First, the projection matrix can be determined from a small number of image points; then, using the pair of projection matrices of the two cameras, the three-dimensional information of every point is recovered with the least squares method above, thereby recovering the three-dimensional shape of the object.
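A numpy sketch of the forward intersection described above: for a pair of corresponding image points, the four linear equations of formula (37) are stacked and solved in the least squares sense of formula (39). M1 and M2 are assumed to be the calibrated 3 × 4 projection matrices; the function name is illustrative.

```python
# Sketch of two-ray forward intersection by linear least squares (formulas 37-39).
import numpy as np

def forward_intersection(u1, v1, u2, v2, M1, M2):
    rows, rhs = [], []
    for (u, v), M in (((u1, v1), M1), ((u2, v2), M2)):
        # u*(m31 X + m32 Y + m33 Z + m34) = m11 X + m12 Y + m13 Z + m14, same for v
        rows.append(u * M[2, :3] - M[0, :3]); rhs.append(M[0, 3] - u * M[2, 3])
        rows.append(v * M[2, :3] - M[1, :3]); rhs.append(M[1, 3] - v * M[2, 3])
    K = np.asarray(rows)                          # 4 x 3 coefficient matrix
    U = np.asarray(rhs)                           # 4 x 1 right-hand side
    X, *_ = np.linalg.lstsq(K, U, rcond=None)     # least-squares object point
    return X                                      # (X, Y, Z) in the world frame
```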
1.2.8 stereo-picture model connects
During the movement of the stereo camera, the system continuously acquires stereo images and reconstructs the three-dimensional model of the target scene in real time. However, each reconstruction yields only a partial model of the target within the current field of view; to form a complete scene model, the results of each reconstruction need to be connected together, and this is the main task of the stereo image model connection module.
The principle of stereo image model connection is: according to the corresponding image points in the two groups of stereo images acquired at adjacent instants, taken as tie points between the two groups, the homonymous model points of the two three-dimensional models are computed by forward intersection, and finally the two three-dimensional models are transformed into the same spatial coordinate system by a spatial similarity transformation. Processing the stereo images acquired at the following instants in the same way connects all stereo image models into one overall model of the whole scene. The computation process of stereo image model connection is shown in Fig. 11.
The spatial similarity transformation used by the system is formula (40), in which X_T, Y_T, Z_T are the model point coordinates in the coordinate system of the previous group's stereo image model; X, Y, Z are the coordinates of the homonymous model points of the adjacent next group's stereo image model in its own model coordinate system; X_0, Y_0, Z_0 are the coordinates of the origin of the next group's model coordinate system in the previous group's model coordinate system; λ is the scale factor between the two stereo image models; and a_i, b_i, c_i are functions of the three rotation angle elements. Once these 7 parameters are known, the coordinate transformation between the two stereo image model coordinate systems can be carried out.
Formula (40) contains 7 unknown parameters, and each pair of homonymous points provides 3 equations, so solving for them requires at least 3 homonymous feature points that are not collinear. In practice, to guarantee accuracy and reliability, 4 or more homonymous feature points are usually used to solve the transformation parameters. Since formula (40) is nonlinear, it is linearized to obtain the error equation (41), which is solved iteratively.
Through stereo image model connection, the target three-dimensional information reconstructed by the system during its movement can be merged into the coordinate system of the first group of stereo image models, forming one complete geometrical model of the whole scene. If there are marker points with known spatial coordinates in the scene, the geometrical model of the whole scene can further be transformed so that its position and size are fully consistent with the actual scene.
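A sketch of estimating the seven parameters of the spatial similarity transformation from homonymous model points. Instead of the iterative solution of the linearized error equation (41), it uses the closed-form SVD (Umeyama) solution, which yields the same scale, rotation and translation under the assumption of at least three non-collinear points; this is a substitute for, not a reproduction of, the solution in the text, and all names are illustrative.

```python
# Sketch of 7-parameter similarity estimation and model connection (Umeyama / SVD).
import numpy as np

def similarity_transform(src, dst):
    """src, dst: N x 3 arrays of homonymous model points; returns (lam, R, t)
    such that dst ~= lam * R @ src + t."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))  # cross-covariance of the point sets
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:                 # keep a proper rotation
        D[2, 2] = -1
    R = U @ D @ Vt
    lam = np.trace(np.diag(S) @ D) * len(src) / (A ** 2).sum()   # scale factor
    t = mu_d - lam * R @ mu_s                                    # translation
    return lam, R, t

def connect_model(points_next, lam, R, t):
    """Transform the next model into the coordinate system of the previous model."""
    return lam * (np.asarray(points_next) @ R.T) + t
```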
2. Operational process of the system
On the basis of the camera calibration completed beforehand, the system continuously moves the stereo camera platform relative to the target, acquires a sequence of observation stereo pairs of the target from different viewing angles, computes the three-dimensional reconstruction result of each stereo pair in real time during acquisition, and at the same time connects the three-dimensional reconstruction results of all stereo pairs to generate the three-dimensional reconstruction model of the target. The camera calibration is performed as in section 1.2.1; the process by which the system performs mobile real-time measurement is described in detail below (as shown in Fig. 12).
Step 1: the stereo camera captures a stereo pair. To obtain a complete model of the target, the system platform must be moved continuously to acquire a sequence of stereo images of the target, and a suitable speed must be maintained during the movement so that adjacent stereo pairs contain homonymous feature points.
Step 2: feature extraction and matching on the current stereo pair. For the currently acquired stereo pair, several homonymous feature points on the left and right images are obtained by the feature point extraction and image matching algorithms (no fewer than 6 are required, and they must not all lie on a straight line).
Step 3: relative orientation of the current stereo pair. From the homonymous feature points obtained in the previous step, combined with the camera calibration results, the relative position and attitude of the left and right images of the stereo pair are computed, typically taking the left image as reference and computing the relative position and attitude of the right image with respect to it.
Step 4: the left and right images of the current stereo pair are each rectified into epipolar images. According to the relative orientation result of the current stereo pair, the left and right images are resampled along the epipolar-line relation to generate the corresponding epipolar images; corresponding image points now lie on the same image row.
Step 5: dense matching of the left and right epipolar images. The left and right epipolar images are matched pixel by pixel with the algorithm of section 1.2.6, giving the coordinates of all corresponding image points on the two images.
Step 6: three-dimensional reconstruction of the current stereo pair. From the dense matching result, the object space coordinates corresponding to each pair of corresponding image points are computed by the forward intersection method, giving the target three-dimensional information corresponding to the current stereo pair.
Step 7: connection of the target three-dimensional models obtained from two adjacent stereo pairs. Steps 1 to 6 are repeated to obtain the next stereo pair and its corresponding target three-dimensional model; the two models necessarily overlap. By the method of section 1.2.8, the geometric relation between the target three-dimensional models obtained from the two adjacent stereo pairs is established, and the model corresponding to the next stereo pair is then transformed into the unified coordinate system, so that the two models merge together.
Step 8: the complete three-dimensional geometric information of the target is obtained. Steps 1 to 7 are repeated, and the three-dimensional models generated from each captured stereo pair are integrated; when the capture of all target surfaces is completed, complete and accurate three-dimensional discrete point information of the target surface is obtained.
Specific embodiments have been presented above, but the present invention is not limited to the described embodiments. The basic idea of the present invention lies in the basic scheme above; for those of ordinary skill in the art, designing various modified models, formulas and parameters according to the teaching of the present invention requires no creative work. Changes, modifications, replacements and variations made to the embodiments without departing from the principle and spirit of the present invention still fall within the protection scope of the present invention.

Claims (5)

1. An image space type stereoscopic vision online mobile real-time measurement system, characterised by including a stereo camera composed of two cameras, a camera fixing device and a distance adjustment device, the stereo camera being connected to a control and computing device, the control and computing device being used for controlling camera data acquisition, and for storing, processing and outputting the processing results of the data; the measurement process is as follows:
1) camera calibration; 2) during the movement of the stereo camera, several groups of stereo images are acquired; 3) image preprocessing; 4) feature extraction and stereo matching; 5) three-dimensional reconstruction; 6) stereo image model connection; for the stereo images of any instant, the corresponding image points in the stereo images of the adjacent instant are obtained and taken as tie points between the two groups of stereo images, the homonymous model points of the two three-dimensional models are computed by forward intersection, and the two three-dimensional models are transformed into the same spatial coordinate system by a spatial similarity transformation; the stereo images of each subsequent instant are processed in the same way in turn, so that all stereo image models are connected into one overall model of the whole scene; during the movement of the stereo camera, stereo images are continuously acquired and the three-dimensional model of the target scene is reconstructed in real time, the three-dimensional model of the target scene being a partial model of the target within the current field of view, and the results of each reconstruction need to be connected together to form a complete scene model.
2. The image space type stereoscopic vision online mobile real-time measurement system according to claim 1, characterised in that the camera calibration method includes: acquiring stereo images simultaneously; extracting the calibration board corners; and calibration solving.
3. The image space type stereoscopic vision online mobile real-time measurement system according to claim 1, characterised in that the image preprocessing includes filtering and gray-level equalization.
4. The image space type stereoscopic vision online mobile real-time measurement system according to claim 1, characterised in that the feature extraction and stereo matching include: obtaining the tie points between three-dimensional models with the SURF operator; computing the relative position and attitude between the two images and performing relative orientation; determining the epipolar-line relation between the stereo images and rectifying the stereo images to obtain stereo images arranged along epipolar lines; and performing dense matching of the stereo images with the SGM algorithm to generate dense corresponding image points.
5. The image space type stereoscopic vision online mobile real-time measurement system according to claim 1, characterised in that the three-dimensional information of the target scene is reconstructed from the dense corresponding image points obtained by matching, using multi-image forward intersection, thereby realizing the three-dimensional reconstruction.
CN201410745020.1A 2014-12-08 2014-12-08 Image space type stereoscopic vision moves real-time measurement system online Active CN104537707B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410745020.1A CN104537707B (en) 2014-12-08 2014-12-08 Image space type stereoscopic vision moves real-time measurement system online

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410745020.1A CN104537707B (en) 2014-12-08 2014-12-08 Image space type stereoscopic vision moves real-time measurement system online

Publications (2)

Publication Number Publication Date
CN104537707A CN104537707A (en) 2015-04-22
CN104537707B true CN104537707B (en) 2018-05-04

Family

ID=52853226

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410745020.1A Active CN104537707B (en) 2014-12-08 2014-12-08 Image space type stereoscopic vision moves real-time measurement system online

Country Status (1)

Country Link
CN (1) CN104537707B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105469418B (en) * 2016-01-04 2018-04-20 中车青岛四方机车车辆股份有限公司 Based on photogrammetric big field-of-view binocular vision calibration device and method
CN106384368A (en) * 2016-09-14 2017-02-08 河南埃尔森智能科技有限公司 Distortion self-correction method for non-measurement type camera lens and light-sensing chip
CN108344369A (en) * 2017-01-22 2018-07-31 北京林业大学 A kind of method that mobile phone stereoscan measures forest diameter
CN107167077B (en) * 2017-07-07 2021-05-14 京东方科技集团股份有限公司 Stereoscopic vision measuring system and stereoscopic vision measuring method
CN107392898B (en) * 2017-07-20 2020-03-20 海信集团有限公司 Method and device for calculating pixel point parallax value applied to binocular stereo vision
CN107729824B (en) * 2017-09-28 2021-07-13 湖北工业大学 Monocular visual positioning method for intelligent scoring of Chinese meal banquet table
DE102017130897A1 (en) * 2017-12-21 2019-06-27 Pilz Gmbh & Co. Kg A method of determining range information from a map of a space area
CN107958469A (en) * 2017-12-28 2018-04-24 北京安云世纪科技有限公司 A kind of scaling method of dual camera, device, system and mobile terminal
CN110148205B (en) * 2018-02-11 2023-04-25 北京四维图新科技股份有限公司 Three-dimensional reconstruction method and device based on crowdsourcing image
CN108645426B (en) * 2018-04-09 2020-04-10 北京空间飞行器总体设计部 On-orbit self-calibration method for space target relative navigation vision measurement system
CN111336073B (en) * 2020-03-04 2022-04-05 南京航空航天大学 Wind driven generator tower clearance visual monitoring device and method
CN111445528B (en) * 2020-03-16 2021-05-11 天目爱视(北京)科技有限公司 Multi-camera common calibration method in 3D modeling
CN111462213B (en) * 2020-03-16 2021-07-13 天目爱视(北京)科技有限公司 Equipment and method for acquiring 3D coordinates and dimensions of object in motion process
CN112837411A (en) * 2021-02-26 2021-05-25 由利(深圳)科技有限公司 Method and system for realizing three-dimensional reconstruction of movement of binocular camera of sweeper

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102607533A (en) * 2011-12-28 2012-07-25 中国人民解放军信息工程大学 Block adjustment locating method of linear array CCD (Charge Coupled Device) optical and SAR (Specific Absorption Rate) image integrated local area network
CN102693542A (en) * 2012-05-18 2012-09-26 中国人民解放军信息工程大学 Image characteristic matching method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"An image feature extraction and matching algorithm based on image correlation"; Wang Jianwen et al.; Science and Technology Innovation Herald; 2008-12-31 (No. 21); full text *
"A new method for three-dimensional model reconstruction"; Song Lihua et al.; Application Research of Computers; 2004-06-30 (No. 6); p. 148, left column, paragraph 2; section 6, paragraph 2, lines 6-7; section 1, paragraph 1, lines 5-6; section 3, paragraph 1; section 4, paragraphs 1-2; section 6 *
"Dense matching method for stereo images based on SURF and TPS"; Hou Wenguang et al.; Journal of Huazhong University of Science and Technology (Natural Science Edition); 2010-07-31; Vol. 38 (No. 7); full text *

Also Published As

Publication number Publication date
CN104537707A (en) 2015-04-22

Similar Documents

Publication Publication Date Title
CN104537707B (en) Image space type stereoscopic vision moves real-time measurement system online
CN107155341B (en) Three-dimensional scanning system and frame
TWI555379B (en) An image calibrating, composing and depth rebuilding method of a panoramic fish-eye camera and a system thereof
CN109919911B (en) Mobile three-dimensional reconstruction method based on multi-view photometric stereo
CN108053437B (en) Three-dimensional model obtaining method and device based on posture
CN111028155B (en) Parallax image splicing method based on multiple pairs of binocular cameras
CN105654547B (en) Three-dimensional rebuilding method
CN104376552A (en) Virtual-real registering algorithm of 3D model and two-dimensional image
CN111667536A (en) Parameter calibration method based on zoom camera depth estimation
CN109712232B (en) Object surface contour three-dimensional imaging method based on light field
CN109859272A (en) A kind of auto-focusing binocular camera scaling method and device
CN109523595A (en) A kind of architectural engineering straight line corner angle spacing vision measuring method
CN107038753B (en) Stereoscopic vision three-dimensional reconstruction system and method
CN110782498B (en) Rapid universal calibration method for visual sensing network
Ziegler et al. Acquisition system for dense lightfield of large scenes
CA3233222A1 (en) Method, apparatus and device for photogrammetry, and storage medium
CN108010125A (en) True scale three-dimensional reconstruction system and method based on line-structured light and image information
CN111854636A (en) Multi-camera array three-dimensional detection system and method
CN111739103A (en) Multi-camera calibration system based on single-point calibration object
CN116129037A (en) Visual touch sensor, three-dimensional reconstruction method, system, equipment and storage medium thereof
CN116625258A (en) Chain spacing measuring system and chain spacing measuring method
CN114998448A (en) Method for calibrating multi-constraint binocular fisheye camera and positioning space point
CN112712566B (en) Binocular stereo vision sensor measuring method based on structure parameter online correction
JP2015019346A (en) Parallax image generator
CN105427302B (en) A kind of three-dimensional acquisition and reconstructing system based on the sparse camera collection array of movement

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant