CN103278138A - Method for measuring three-dimensional position and posture of thin component with complex structure - Google Patents


Info

Publication number
CN103278138A
Authority
CN
China
Prior art keywords
dimensional
binocular
camera
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2013101595556A
Other languages
Chinese (zh)
Other versions
CN103278138B (en)
Inventor
贾立好
乔红
苏建华
Current Assignee
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science
Priority to CN201310159555.6A
Publication of CN103278138A
Application granted
Publication of CN103278138B
Active legal status (current)
Anticipated expiration

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a method for measuring the three-dimensional position and attitude of a thin component with a complex structure. The method comprises the following steps: establishing the mapping between the three-dimensional world coordinate system and the two-dimensional image coordinates of a binocular vision measurement system; extracting features from a binocular image pair of the thin component in a reference pose to build a prior feature library; extracting features from a binocular image pair of the thin component in the pose to be measured and matching them against the prior feature library to obtain a set of matched feature pairs equal in number to, and in correspondence with, the library; computing the three-dimensional point cloud of the feature points; fitting the principal plane of the reference-pose point cloud and the principal plane of the point cloud of the component to be measured, and computing the spatial relation between the two planes to obtain the three-dimensional position and attitude of the component to be measured. The measurement method facilitates accurate localization of thin components in processes such as automated manufacturing, assembly, and welding with industrial manipulators; it is simple to implement, accurate, inexpensive, and convenient to operate.

Description

Method for measuring the three-dimensional position and attitude of a thin component with a complex structure
Technical field
The invention belongs to the field of vision measurement within information technology, and in particular relates to a vision-based method for measuring the three-dimensional position and attitude of a complex thin component.
Background technology
Measurement methods based on binocular vision can realize three-dimensional dimensional measurement, localization, and in-line quality control of parts on industrial production lines; this is an important research area in information technology and has been applied to the measurement and localization of common small parts. However, for the three-dimensional position and attitude measurement of large thin components with complex structures (such as the side panels and doors of an automobile body-in-white), no clear application has yet been reported.
Summary of the invention
To address the above problem, the invention provides a method for measuring the three-dimensional position and attitude of a thin component to be measured, comprising:
Step 1: calibrate the internal parameters and external parameters of the binocular cameras and the spatial relation between them, and establish the mapping between the three-dimensional world coordinate system and the two-dimensional image coordinates of the binocular vision measurement system;
Step 2: extract SIFT feature points from the binocular image pair of the reference-pose thin component with the SIFT operator, build a prior SIFT feature library, and take the three-dimensional plane containing these SIFT feature points as the reference plane of the reference-pose thin component;
Step 3: from the rectified binocular stereo image pair of the thin component to be measured, extract SIFT feature point pairs and match them against the pairs in the prior SIFT feature library, obtaining a set of SIFT feature point pairs equal in number to, and in correspondence with, the library;
Step 4: from the image coordinates of the feature points in the resulting matched set, compute the three-dimensional point cloud of the component to be measured, and from it the spatial plane of the component;
Step 5: solve for the transformation matrix between this spatial plane and the reference plane to determine the three-dimensional position and attitude of the component to be measured.
The invention adopts binocular vision measurement: (1) select cameras and lenses appropriate to the application scenario and field of view, build the binocular vision system, and calibrate the left and right cameras individually to obtain their internal parameters and lens distortion coefficients; from the world coordinate system defined on the reference plane, obtain the external parameters of each camera; then perform binocular stereo calibration to obtain the spatial relation between the left and right cameras. (2) Apply lens distortion correction and stereo rectification to the captured binocular images, obtaining distortion-corrected and stereo-rectified image pairs. For the rectified pair of the reference-pose thin component, extract SIFT features, manually choose the SIFT features at specific locations on the component (such as locating holes and locating pins), and obtain the set of binocular feature matches, which forms the prior SIFT feature library. For the rectified pair of the component to be measured, extract SIFT features and match them against the prior library, obtaining a matched set equal in number to, and in correspondence with, it. (3) Compute the three-dimensional coordinates of the matched feature pairs by binocular measurement, yielding the three-dimensional point cloud of the feature points. (4) Using least squares, fit the principal planes of the reference-pose point cloud and of the point cloud of the component to be measured; from the spatial relation between the two principal planes, obtain the three-dimensional position and attitude of the component to be measured. The measurement method proposed by the invention facilitates accurate localization of thin components in processes such as automated manufacturing, assembly, and welding with industrial manipulators, and offers high positioning accuracy, low cost, and convenient operation.
Description of drawings
Fig. 1 is an overall diagram of the method for measuring the three-dimensional position and attitude of a complex thin component;
Fig. 2 shows the binocular camera imaging model and the physical and normalized imaging coordinate systems;
Fig. 3 is a schematic diagram of binocular stereo calibration;
Fig. 4 is a flow chart of feature matching;
Fig. 5 is a schematic diagram of recovering missing SIFT features;
Fig. 6 is a flow chart of computing the three-dimensional point cloud of feature points.
Embodiment
To make the purpose, technical solution, and advantages of the invention clearer, the invention is described in more detail below with reference to specific embodiments and the accompanying drawings.
The invention discloses a vision-based method for measuring the three-dimensional position and attitude of a complex thin component.
Fig. 1 shows the overall flow of the method. As shown in Fig. 1, the method is divided into two stages: reference pose measurement and measurement-pose estimation. The reference pose measurement stage comprises camera calibration, synchronized binocular image acquisition, distortion correction and stereo rectification of the image pair, SIFT feature extraction from the image pair, construction of the prior feature set, computation of the reference-pose point cloud, and estimation of the three-dimensional pose of the reference-pose component. The measurement-pose estimation stage comprises synchronized binocular image acquisition, distortion correction and stereo rectification, SIFT feature extraction, construction of the feature set of the component to be measured, computation of its point cloud, and estimation of its pose.
Because the corresponding steps of the two stages are implemented identically, the method is summarized below in four major steps: camera calibration; feature extraction and thin-component characterization; binocular measurement and point-cloud computation; and estimation of the three-dimensional position and attitude of the thin component. Each step is described in further detail with reference to the relevant drawings.
The vision-based measuring method disclosed by the invention specifically comprises:
Step 1: camera calibration. Calibrate the internal parameters, lens distortion coefficients, and spatial relation of the binocular cameras to establish the mapping between the three-dimensional world coordinate system on the reference plane and the two-dimensional image coordinates of the binocular vision measurement system.
Assume the working range is 5 m x 5 m. For this field of view, a lens with a moderate field is chosen; considering these factors, a 6 mm lens (viewing angle of about 53 degrees) is preferred. The binocular cameras are placed horizontally with a baseline of 60 cm, and the left camera serves as the reference camera.
Camera calibration involves four coordinate systems: the three-dimensional world coordinate system X_wY_wZ_w (origin on the reference plane, with the Z_w axis perpendicular to it); the three-dimensional camera coordinate system X_cY_cZ_c (origin at the optical center of the lens, with the Z_c axis along the optical axis); the two-dimensional physical image coordinate system xO_1y (origin at the image center, coordinates in physical units); and the two-dimensional pixel coordinate system uOv (origin at the top-left corner of the image, coordinates in pixels).
Fig. 2 shows these coordinate systems and the normalized coordinate system. As shown in Fig. 2, a linear pinhole camera model is adopted. Let the coordinates of a spatial point P in the world coordinate system be [X_w Y_w Z_w]^T, with homogeneous coordinates P = [X_w Y_w Z_w 1]^T; let its coordinates in the camera coordinate system be [X_c Y_c Z_c]^T, with homogeneous coordinates P_c = [X_c Y_c Z_c 1]^T. Let the projection of P onto the image plane be p', with physical image coordinates [x y]^T (unit: millimeters), pixel coordinates [u v]^T, and homogeneous pixel coordinates p' = [u v 1]^T. Let the intersection O_1 of the optical axis with the image plane have pixel coordinates [u_0 v_0]^T, and let the physical sizes of a pixel along the x and y axes be dx and dy. Then the mapping between the coordinates of P in the world coordinate system and the image coordinates of its projection p' is:
$$Z_c\begin{bmatrix}u\\v\\1\end{bmatrix}=\begin{bmatrix}f/dx&0&u_0&0\\0&f/dy&v_0&0\\0&0&1&0\end{bmatrix}\begin{bmatrix}R&t\\O_t&1\end{bmatrix}\begin{bmatrix}X_w\\Y_w\\Z_w\\1\end{bmatrix}=\begin{bmatrix}a_x&0&u_0&0\\0&a_y&v_0&0\\0&0&1&0\end{bmatrix}\begin{bmatrix}R&t\\O_t&1\end{bmatrix}\begin{bmatrix}X_w\\Y_w\\Z_w\\1\end{bmatrix}=M_1M_2P=MP\qquad(1)$$
where P = [X_w Y_w Z_w 1]^T are the homogeneous world coordinates; f is the lens focal length; a_x = f/dx and a_y = f/dy are the focal ratios of the camera; R and t are the rotation matrix and translation vector between the camera and world coordinate systems; O_t = [0 0 0]; M_1 and M_2 are the internal and external parameter matrices of the camera (the homography matrix); and M is the overall projection matrix of the camera.
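The mapping of equation (1) can be checked numerically. The following is a minimal Python sketch (not part of the patent); the intrinsic values and the test point are invented for illustration, and the camera frame is assumed to coincide with the world frame (R = I, t = 0):

```python
# Project a world point to pixel coordinates per equation (1).
f, dx, dy = 6.0, 0.005, 0.005      # focal length (mm) and pixel size (mm), invented
u0, v0 = 320.0, 240.0              # principal point (pixels), invented
ax, ay = f / dx, f / dy            # focal ratios a_x, a_y

# Assumed extrinsics: camera frame coincides with world frame.
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t = [0.0, 0.0, 0.0]

def project(Pw):
    """Map world coordinates [Xw, Yw, Zw] to pixel coordinates (u, v)."""
    # Camera coordinates: Pc = R * Pw + t
    Pc = [sum(R[i][j] * Pw[j] for j in range(3)) + t[i] for i in range(3)]
    Xc, Yc, Zc = Pc
    # Perspective division followed by the intrinsic mapping of M_1
    return ax * Xc / Zc + u0, ay * Yc / Zc + v0

u, v = project([0.1, 0.2, 2.0])    # a point 2 m in front of the camera
```

Here a_x = a_y = 1200, so the point maps to pixel (380.0, 360.0).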
The full calibration procedure must determine the internal parameters [f dx dy u_0 v_0]^T of the camera, the lens distortion coefficients [k_1 k_2 p_1 p_2 k_3]^T, and the external parameters. The internal parameters comprise the focal ratios and the principal point of the camera; the external parameters comprise the rotation matrix R and translation vector t between the camera and world coordinate systems. Calibration uses black-and-white checkerboard calibration boards: a small board with 50 mm x 50 mm squares in an 8 (wide) x 12 (long) grid for the internal parameters, and a large board with 100 mm x 100 mm squares in a 4 (wide) x 6 (long) grid for the external parameters. Corners on the boards are detected with the Harris corner detector and refined to sub-pixel accuracy.
Step 1 specifically comprises the following steps:
Step 11: calibrate the left and right cameras individually, obtaining the internal and external parameter matrices of each.
(a) First obtain the initial internal parameters of the left and right cameras and each camera's rotation matrix and translation vector relative to the small calibration board.
Taking the calibration of the reference camera as an example: to obtain its internal parameter matrix, first ignore lens distortion and adopt a calibration method based on planar homographies. Hold the small calibration board by hand and vary its attitude continuously (at least 3 attitudes); from the planar correspondences between the different viewpoints (by the direct linear transformation), compute the initial internal parameters of the camera and its rotation matrix R and translation vector t relative to the small board, i.e., the internal parameters under zero lens distortion. Placing the plane of the small board at Z_w = 0 gives:
$$Z_c\begin{bmatrix}u\\v\\1\end{bmatrix}=\begin{bmatrix}a_x&0&u_0&0\\0&a_y&v_0&0\\0&0&1&0\end{bmatrix}\begin{bmatrix}R&t\\O_t&1\end{bmatrix}\begin{bmatrix}X_w\\Y_w\\0\\1\end{bmatrix}=\begin{bmatrix}a_x&0&u_0\\0&a_y&v_0\\0&0&1\end{bmatrix}\begin{bmatrix}r_1&r_2&r_3&t\end{bmatrix}\begin{bmatrix}X_w\\Y_w\\0\\1\end{bmatrix}=\begin{bmatrix}a_x&0&u_0\\0&a_y&v_0\\0&0&1\end{bmatrix}\begin{bmatrix}r_1&r_2&t\end{bmatrix}\begin{bmatrix}X_w\\Y_w\\1\end{bmatrix}=H\begin{bmatrix}X_w\\Y_w\\1\end{bmatrix}\qquad(2)$$
where R = [r_1 r_2 r_3].
Therefore the calibration of the reference camera reduces to solving for the matrix H, which yields the internal parameters of the camera and its rotation matrix R and translation vector t relative to the small calibration board.
(b) Starting from the initial internal parameters and the rotation matrix R and translation vector t relative to the small board obtained above, now take lens distortion into account: solve for the lens distortion coefficients and further optimize the internal parameters.
Let the pixel coordinates of a detected corner in the original, lens-distorted image be [u_raw v_raw]^T, and let the pixel coordinates of the same corner under the ideal pinhole model (no lens distortion) be [u_und v_und]^T. Then:
(1) Transform the world coordinates of each corner on the small board into the camera coordinate system. Here, one corner on the small board is taken as the origin of the world coordinate system, and the world coordinates of the other corners follow from the known side length of the black-and-white squares. That is,
$$\begin{bmatrix}X_c\\Y_c\\Z_c\end{bmatrix}=R\begin{bmatrix}X_w\\Y_w\\Z_w\end{bmatrix}+t\qquad(3)$$
where R and t are the rotation matrix and translation vector of the camera relative to the small calibration board.
(2) Project further onto the image plane to obtain the corner's undistorted physical image coordinates [x_und y_und]^T and pixel coordinates [u_und v_und]^T:
$$\begin{bmatrix}x_{und}\\y_{und}\end{bmatrix}=\begin{bmatrix}fX_c/Z_c\\fY_c/Z_c\end{bmatrix}\qquad(4)$$
$$\begin{bmatrix}u_{und}\\v_{und}\\1\end{bmatrix}=\begin{bmatrix}a_x&0&u_0\\0&a_y&v_0\\0&0&1\end{bmatrix}\begin{bmatrix}x_{und}\\y_{und}\\1\end{bmatrix}\qquad(5)$$
(3) Transform the pixel coordinates [u_raw v_raw]^T of the detected corner in the original distorted image into physical image coordinates [x_raw y_raw]^T, and, introducing an initial guess of the lens distortion coefficients [k_1 k_2 p_1 p_2 k_3]^T, obtain the distortion-corrected pixel coordinates [u'_und v'_und]^T and physical coordinates [x'_und y'_und]^T of the corner:
$$\begin{bmatrix}x_{raw}\\y_{raw}\\1\end{bmatrix}=\begin{bmatrix}dx&0&-u_0dx\\0&dy&-v_0dy\\0&0&1\end{bmatrix}\begin{bmatrix}u_{raw}\\v_{raw}\\1\end{bmatrix}\qquad(6)$$
$$\begin{bmatrix}x'_{und}\\y'_{und}\end{bmatrix}=(1+k_1r^2+k_2r^4+k_3r^6)\begin{bmatrix}x_{raw}\\y_{raw}\end{bmatrix}+\begin{bmatrix}2p_1x_dy_d+p_2(r^2+2x_d^2)\\p_1(r^2+2y_d^2)+2p_2x_dy_d\end{bmatrix}\qquad(7)$$
$$\begin{bmatrix}u'_{und}\\v'_{und}\\1\end{bmatrix}=\begin{bmatrix}a_x&0&u_0\\0&a_y&v_0\\0&0&1\end{bmatrix}\begin{bmatrix}x'_{und}\\y'_{und}\\1\end{bmatrix}\qquad(8)$$
where (x_d, y_d) = (x_raw, y_raw) and r^2 = x_d^2 + y_d^2.
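The correction of equation (7) can be sketched directly in Python. This is an illustration, not the patent's implementation: the coefficient values below are invented, and the correction is applied in physical image coordinates with (x_d, y_d) = (x_raw, y_raw):

```python
# Radial and tangential correction of equation (7).
k1, k2, k3 = -0.2, 0.05, 0.0       # radial coefficients (invented values)
p1, p2 = 0.001, -0.0005            # tangential coefficients (invented values)

def distort_correct(x_raw, y_raw):
    """Return (x'_und, y'_und) from equation (7)."""
    xd, yd = x_raw, y_raw
    r2 = xd * xd + yd * yd
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    x_out = radial * xd + 2.0 * p1 * xd * yd + p2 * (r2 + 2.0 * xd * xd)
    y_out = radial * yd + p1 * (r2 + 2.0 * yd * yd) + 2.0 * p2 * xd * yd
    return x_out, y_out

xc, yc = distort_correct(0.1, 0.0)  # a point 0.1 mm right of the image center
```

The image center (0, 0) is a fixed point of the model; off-center points shift slightly, here by roughly 0.2% of their radius.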
The above process thus establishes, in terms of the camera's internal and distortion parameters, a system of equations linking the distortion-corrected pixel coordinates of each corner to the pixel coordinates [u_raw v_raw]^T of the corner detected in the original distorted image.
(4) For the N corners used in the calibration, define the objective function:
$$\min F=\sum_{i=1}^{N}\left[(u_{und}^{\,i}-u_{und}'^{\,i})^2+(v_{und}^{\,i}-v_{und}'^{\,i})^2\right]\qquad(9)$$
Treating this as a nonlinear least-squares problem and iterating a nonlinear optimization algorithm to minimize the objective yields the globally optimized internal parameters [f dx dy u_0 v_0]^T and distortion parameters [k_1 k_2 p_1 p_2 k_3]^T.
(c) Calibrate the external parameters of the camera, i.e., the spatial relation between the image plane and the reference plane, obtaining the rotation matrix and translation vector of the camera relative to the reference plane.
The invention must measure the three-dimensional coordinates of feature points at specific locations on the complex thin component, so the external parameter matrix of each camera must be obtained with the reference plane as the world coordinate system. The large calibration board is placed on the reference plane, so that the reference plane is the X_wO_wY_w plane of the world coordinate system in the scene. The concrete steps are as follows:
(1) Determine the world coordinate system X_wY_wZ_w: place the large calibration board on the reference plane, take one corner of its grid as the coordinate origin, and let the Z_w axis be perpendicular to the board, i.e., to the reference plane;
(2) Capture an image of the large board and correct it for distortion using the previously obtained internal parameters and lens distortion coefficients;
(3) Detect the two-dimensional pixel coordinates of each corner in the distortion-corrected image with the Harris corner detector;
(4) From the world coordinates of the corners on the large board and their camera coordinates implied by the corrected image, solve for the rotation matrix and translation vector of the camera relative to the large board, i.e., the external parameters of the camera.
Step 12: binocular stereo calibration, establishing the spatial relation between the two cameras, i.e., the rotation matrix and translation vector (the homography) of the right camera relative to the reference camera. The concrete steps are as follows:
Fig. 3 shows the binocular stereo calibration schematically. From the rotation matrices and translation vectors of the left and right cameras relative to the reference plane, solve for the spatial relation between the right camera and the reference camera when the large calibration board lies on the reference plane, i.e., the rotation matrix and translation vector of the right camera relative to the left camera. As shown in Fig. 3, let a spatial point P in the world coordinate system have coordinates P_cl and P_cr in the camera coordinate systems of the left and right cameras (with optical centers O_cl and O_cr), and let the external parameter matrices of the left and right cameras be [R_l t_l] and [R_r t_r]. Then from
$$P_{cl}=R_lP+t_l$$
$$P_{cr}=R_rP+t_r$$
$$P_{cl}=R^T(P_{cr}-t)$$
the spatial relation [R t] of the right camera relative to the left camera is
$$R=R_rR_l^T\qquad(10)$$
$$t=t_r-Rt_l\qquad(11)$$
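Equations (10) and (11) amount to composing the two board-relative poses. A small pure-Python sketch (the poses below are invented for illustration, not calibrated values):

```python
import math

def matmul(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def rot_z(theta):
    """Rotation about the z axis by theta radians."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def stereo_extrinsics(Rl, tl, Rr, tr):
    """Equations (10)-(11): pose of the right camera relative to the left."""
    R = matmul(Rr, transpose(Rl))                                  # R = R_r R_l^T
    t = [tr[i] - sum(R[i][j] * tl[j] for j in range(3)) for i in range(3)]
    return R, t

# Invented board-relative poses for the two cameras:
Rl, tl = rot_z(math.radians(30)), [0.1, 0.0, 1.0]
Rr, tr = rot_z(math.radians(90)), [0.7, 0.0, 1.0]
R, t = stereo_extrinsics(Rl, tl, Rr, tr)   # R is a 60-degree z rotation
```

Because both rotations are about z, the relative rotation is the 60-degree difference, and the translation is t_r with R t_l subtracted off.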
Step 13: rectify the binocular images so that they are coplanar and row-aligned. That is, make the two optical axes parallel by re-projecting the left and right image planes so that they lie exactly in one plane with the rows of the two images aligned. From the rotation matrix and translation vector between the two cameras, stereo rectification is performed using the view-overlap maximization criterion and the Bouguet rectification algorithm, yielding the rectification intrinsic matrix M_rect_l and row-alignment matrix R_rect_l of the left camera and the rectification intrinsic matrix M_rect_r and row-alignment matrix R_rect_r of the right camera, and hence the rectified binocular stereo image pair. The concrete rectification procedure is:
Let a point P have image coordinates P_raw_l and P_raw_r in the original binocular image pair, P_und_l and P_und_r in the distortion-corrected pair, and P_rect_l and P_rect_r in the stereo-rectified pair. The rectification formulas are:
$$P_{rect\_l}=M_{rect\_l}R_l(M_{1l})^{-1}P_{und\_l}\qquad(12)$$
$$P_{rect\_r}=M_{rect\_r}R_r(M_{1r})^{-1}P_{und\_r}\qquad(13)$$
where R_l and R_r are the rotation matrices of the left and right cameras' external parameters, and
$$P_{und\_l}=M_{1l}\cdot\mathrm{UnDistort}\big((M_{1l})^{-1}P_{raw\_l}\big)$$
$$P_{und\_r}=M_{1r}\cdot\mathrm{UnDistort}\big((M_{1r})^{-1}P_{raw\_r}\big)$$
where UnDistort(·) is the pixel distortion-correction function.
Step 2: feature extraction and thin-component characterization. In the rectified binocular stereo image pair, the thin component is characterized by SIFT features and their neighborhood descriptors.
Fig. 4 shows the flow of feature extraction and thin-component characterization. As shown in Fig. 4, this comprises the following steps:
Step 21: capture the binocular images of the thin component and apply lens distortion correction and stereo rectification, obtaining the distortion-corrected pair and the stereo-rectified pair;
Step 22: extract SIFT features from the stereo-rectified pair with the SIFT operator; each SIFT feature comprises a feature point and its neighborhood descriptor;
Step 23: build the prior feature set and characterize the thin component.
In the reference pose measurement stage, from the SIFT features detected in the left image of the rectified pair of the thin component, manually choose the SIFT features near several (at least 3) specific locations (such as locating holes and locating pins), match them against the SIFT features in the right image, manually reject mismatches, and obtain the set of SIFT feature matches, which then forms the prior SIFT feature library of the reference-pose thin component.
In the measurement-pose estimation stage, the SIFT features extracted from the rectified pair of the component to be measured are matched against the prior SIFT feature library, with mismatches rejected by the RANSAC algorithm, to form a set of SIFT feature matches equal in number to, and in correspondence with, the library. If the number of SIFT feature pairs from the rectified pair of the component to be measured is smaller than the number of features in the prior library, the missing SIFT features are recovered by transforming the corresponding library features through the homography matrix.
Fig. 5 illustrates how a missing SIFT feature is recovered. The figure shows a prior SIFT feature library consisting of 8 SIFT feature matches and a rectified pair of the component to be measured from which only 7 matches were extracted. To complete the missing match, the missing feature point P_6' in the left image of the rectified pair is computed from the homography matrix H_1 between the two left images and the library point P_6, finally yielding a SIFT feature match set of the component to be measured equal in number to, and in correspondence with, the prior library.
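The homography transfer used to recover the missing point is a single matrix-vector product in homogeneous coordinates. A Python sketch, where both the matrix H1 and the point are invented for illustration (neither comes from the patent):

```python
# Transfer a pixel through a 3x3 homography between the two left images, Fig. 5.
def apply_homography(H, u, v):
    """Map pixel (u, v) through H using homogeneous coordinates."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w        # divide out the homogeneous scale

H1 = [[1.02, 0.01, 5.0],
      [-0.01, 0.99, -3.0],
      [0.00, 0.00, 1.0]]
u6p, v6p = apply_homography(H1, 100.0, 200.0)   # estimate of the missing point
```

With this near-identity H1, the library point (100, 200) lands at approximately (109, 194) in the image of the component to be measured.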
Step 3: binocular measurement and computation of the three-dimensional point cloud of the reference thin component.
Fig. 6 shows the binocular measurement and point-cloud computation. The three-dimensional world coordinates of the feature points at specific locations on the thin component are computed by binocular measurement, yielding the component's three-dimensional point cloud.
The computation is illustrated for the three-dimensional coordinates of one specific point P on the thin component, whose images in the rectified stereo pair correspond to one SIFT feature match. Let the coordinates of P in the camera coordinate systems of the left and right cameras be [X_cl Y_cl Z_cl]^T and [X_cr Y_cr Z_cr]^T, and let the image coordinates of the SIFT feature match in the rectified pair be [u_rect_l v_rect_l]^T and [u_rect_r v_rect_r]^T, with homogeneous coordinates P_rect_l = [u_rect_l v_rect_l 1]^T and P_rect_r = [u_rect_r v_rect_r 1]^T. As shown in Fig. 6, the binocular measurement and point-cloud computation comprise the following steps:
Step 31: compute the image coordinates of the SIFT feature points in the distortion-corrected image pair. In the reference pose measurement stage, these are the coordinates of the SIFT feature points of the reference component in the distortion-corrected pair captured of the reference component; in the measurement stage, those of the SIFT feature points of the component under test in its distortion-corrected pair.
Let the image coordinates of P in the left and right distortion-corrected images be [u_und_l v_und_l]^T and [u_und_r v_und_r]^T, with homogeneous coordinates P_und_l = [u_und_l v_und_l 1]^T and P_und_r = [u_und_r v_und_r 1]^T. From equation (1):
$$Z_{cl}\begin{bmatrix}u_{und\_l}\\v_{und\_l}\\1\end{bmatrix}=M_{1l}M_{2l}\begin{bmatrix}X_w\\Y_w\\Z_w\\1\end{bmatrix}=M_l\begin{bmatrix}X_w\\Y_w\\Z_w\\1\end{bmatrix}\qquad(14)$$
$$Z_{cr}\begin{bmatrix}u_{und\_r}\\v_{und\_r}\\1\end{bmatrix}=M_{1r}M_{2r}\begin{bmatrix}X_w\\Y_w\\Z_w\\1\end{bmatrix}=M_r\begin{bmatrix}X_w\\Y_w\\Z_w\\1\end{bmatrix}\qquad(15)$$
where M_1l, M_2l and M_1r, M_2r are the internal and external parameter matrices of the left and right cameras, M_l and M_r are their overall projection matrices, and
$$M_l=\begin{bmatrix}m_{l11}&m_{l12}&m_{l13}&m_{l14}\\m_{l21}&m_{l22}&m_{l23}&m_{l24}\\m_{l31}&m_{l32}&m_{l33}&m_{l34}\end{bmatrix},\quad M_r=\begin{bmatrix}m_{r11}&m_{r12}&m_{r13}&m_{r14}\\m_{r21}&m_{r22}&m_{r23}&m_{r24}\\m_{r31}&m_{r32}&m_{r33}&m_{r34}\end{bmatrix}$$
Given the image coordinates P_rect_l and P_rect_r of the point in the rectified pair, its image coordinates P_und_l and P_und_r in the distortion-corrected pair are:
$$P_{und\_l}=M_{1l}(M_{rect\_l}R_l)^{-1}P_{rect\_l}$$
$$P_{und\_r}=M_{1r}(M_{rect\_r}R_r)^{-1}P_{rect\_r}$$
Step 32: compute the three-dimensional coordinates of the SIFT feature points in the world coordinate system. In the reference pose measurement stage, these are computed from the image coordinates of the SIFT feature points of the reference component; in the measurement stage, from the image coordinates of the SIFT feature points of the component to be measured.
Substituting the image coordinates $P_{und\_l}$ and $P_{und\_r}$ of this point in the binocular distortion-corrected images into the pinhole camera model formulas (14) and (15), combining them with the overall projection matrices $M_l$ and $M_r$ of the left and right cameras, and eliminating $Z_{cl}$ and $Z_{cr}$ yields the system of equations:
$$
\begin{cases}
(m_{l11}-m_{l31}u_{und\_l})X_w+(m_{l12}-m_{l32}u_{und\_l})Y_w+(m_{l13}-m_{l33}u_{und\_l})Z_w=m_{l34}u_{und\_l}-m_{l14}\\
(m_{l21}-m_{l31}v_{und\_l})X_w+(m_{l22}-m_{l32}v_{und\_l})Y_w+(m_{l23}-m_{l33}v_{und\_l})Z_w=m_{l34}v_{und\_l}-m_{l24}\\
(m_{r11}-m_{r31}u_{und\_r})X_w+(m_{r12}-m_{r32}u_{und\_r})Y_w+(m_{r13}-m_{r33}u_{und\_r})Z_w=m_{r34}u_{und\_r}-m_{r14}\\
(m_{r21}-m_{r31}v_{und\_r})X_w+(m_{r22}-m_{r32}v_{und\_r})Y_w+(m_{r23}-m_{r33}v_{und\_r})Z_w=m_{r34}v_{und\_r}-m_{r24}
\end{cases}
$$
The system of equations is written in matrix form:

$$A\begin{bmatrix}X_w\\ Y_w\\ Z_w\end{bmatrix}=B$$
where

$$
A=\begin{bmatrix}
m_{l11}-m_{l31}u_{und\_l} & m_{l12}-m_{l32}u_{und\_l} & m_{l13}-m_{l33}u_{und\_l}\\
m_{l21}-m_{l31}v_{und\_l} & m_{l22}-m_{l32}v_{und\_l} & m_{l23}-m_{l33}v_{und\_l}\\
m_{r11}-m_{r31}u_{und\_r} & m_{r12}-m_{r32}u_{und\_r} & m_{r13}-m_{r33}u_{und\_r}\\
m_{r21}-m_{r31}v_{und\_r} & m_{r22}-m_{r32}v_{und\_r} & m_{r23}-m_{r33}v_{und\_r}
\end{bmatrix},\qquad
B=\begin{bmatrix}
m_{l34}u_{und\_l}-m_{l14}\\
m_{l34}v_{und\_l}-m_{l24}\\
m_{r34}u_{und\_r}-m_{r14}\\
m_{r34}v_{und\_r}-m_{r24}
\end{bmatrix}
$$
The three-dimensional coordinates of the point in the world coordinate system are then obtained by the least-squares method:

$$[X_w\ Y_w\ Z_w]^T=(A^TA)^{-1}A^TB\qquad(16)$$
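The least-squares triangulation of step 32, formulas (14)-(16), can be sketched as follows. The function name, projection matrices and sanity-check values are illustrative choices, not taken from the patent; the sketch projects a known world point through two synthetic cameras and recovers it.

```python
import numpy as np

def triangulate(Ml, Mr, uv_l, uv_r):
    """Recover (Xw, Yw, Zw) from one stereo correspondence by stacking the
    four linear equations of the pinhole model (eqs. 14-15) and solving
    A [Xw Yw Zw]^T = B in the least-squares sense (eq. 16)."""
    (ul, vl), (ur, vr) = uv_l, uv_r
    A = np.array([
        [Ml[0, 0] - Ml[2, 0] * ul, Ml[0, 1] - Ml[2, 1] * ul, Ml[0, 2] - Ml[2, 2] * ul],
        [Ml[1, 0] - Ml[2, 0] * vl, Ml[1, 1] - Ml[2, 1] * vl, Ml[1, 2] - Ml[2, 2] * vl],
        [Mr[0, 0] - Mr[2, 0] * ur, Mr[0, 1] - Mr[2, 1] * ur, Mr[0, 2] - Mr[2, 2] * ur],
        [Mr[1, 0] - Mr[2, 0] * vr, Mr[1, 1] - Mr[2, 1] * vr, Mr[1, 2] - Mr[2, 2] * vr],
    ])
    B = np.array([
        Ml[2, 3] * ul - Ml[0, 3],
        Ml[2, 3] * vl - Ml[1, 3],
        Mr[2, 3] * ur - Mr[0, 3],
        Mr[2, 3] * vr - Mr[1, 3],
    ])
    Xw, *_ = np.linalg.lstsq(A, B, rcond=None)
    return Xw

# Sanity check with synthetic projection matrices: project a known world
# point through both cameras, then triangulate it back.
K = np.array([[700.0, 0, 320, 0], [0, 700.0, 240, 0], [0, 0, 1, 0]])
T = np.eye(4); T[0, 3] = -0.1          # right camera shifted along x
Ml, Mr = K, K @ T
P = np.array([0.2, -0.1, 2.0, 1.0])
pl, pr = Ml @ P, Mr @ P
X = triangulate(Ml, Mr, pl[:2] / pl[2], pr[:2] / pr[2])
print(X)  # recovers (0.2, -0.1, 2.0)
```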
Step 4: estimate the three-dimensional position and attitude of the thin component.
Using the least-squares method, the principal planes of the reference-pose point cloud and of the point cloud of the thin component under test are computed respectively; the three-dimensional position and attitude of the component under test are then obtained by computing the spatial relationship between the two principal planes.
Step 41: compute the spatial principal plane of the three-dimensional point cloud. In the reference pose measurement stage, the three-dimensional plane containing the point cloud of said SIFT feature points of the reference thin component is taken as the reference spatial principal plane $\pi_0$; in the measurement stage, the plane containing the three-dimensional point cloud of said SIFT feature points of the component under test is taken as the spatial principal plane $\pi$ to be measured.

The least-squares method is applied to the point cloud of the reference thin component to compute its reference principal plane $\pi_0$, and to the point cloud of the component under test to compute its principal plane $\pi$. Taking the solution of $\pi$ as an example, let the equation of the spatial plane be $ax+by+cz+1=0$, let the plane normal vector be $n=(a,b,c)$, and let the three-dimensional coordinate of feature point $i$ be $P_i=(x_i,y_i,z_i)^T$. The residual of point $P_i$ with respect to the fitted plane $\pi$ is $d_i=nP_i+1$. Following the least-squares principle, $S=\sum(d_i)^2$ is minimised, and the normal vector $n$ is obtained by solving the following system of equations:
$$\frac{\partial S}{\partial a}=0,\qquad\frac{\partial S}{\partial b}=0,\qquad\frac{\partial S}{\partial c}=0\qquad(17)$$
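The plane fit of step 41 reduces to one linear least-squares solve, sketched below on synthetic data. The names and test values are illustrative; note also that the parameterisation $ax+by+cz+1=0$, inherited from the patent, cannot represent planes passing through the origin.

```python
import numpy as np

def fit_principal_plane(points):
    """Least-squares fit of a plane a*x + b*y + c*z + 1 = 0 to a 3-D point
    cloud, as in step 41: minimising S = sum(d_i^2) with d_i = n.P_i + 1
    is equivalent to solving the linear system P n = -1."""
    P = np.asarray(points, dtype=float)              # shape (N, 3)
    n, *_ = np.linalg.lstsq(P, -np.ones(len(P)), rcond=None)
    return n                                         # plane normal (a, b, c)

# Points scattered on the plane z = 2, i.e. 0*x + 0*y - 0.5*z + 1 = 0.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(200, 2))
pts = np.column_stack([xy, np.full(200, 2.0)])
n = fit_principal_plane(pts)
print(n)  # recovers (0, 0, -0.5)
```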
Step 42: solve for the transformation matrix between the principal plane $\pi$ to be measured and the reference principal plane $\pi_0$. The feature points of the reference thin component and of the component under test are projected onto their respective principal planes, and the rotation matrix and translation vector between the two planes are computed from these projected points, thereby accurately determining the three-dimensional position and attitude of the component under test relative to the reference.
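Step 42 does not spell out how the rotation matrix and translation vector are computed from the corresponding projected points. A common choice for rigidly aligning two corresponding point sets is the SVD-based (Kabsch) method, sketched here on synthetic data under that assumption; it is one standard technique, not necessarily the patent's.

```python
import numpy as np

def rigid_transform(src, dst):
    """SVD-based (Kabsch) estimate of R, t such that dst ~ R @ src + t,
    from two corresponding point sets of shape (N, 3)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                               # proper rotation, det = +1
    t = cd - R @ cs
    return R, t

# Verify on synthetic data: rotate/translate a point set, recover the motion.
rng = np.random.default_rng(1)
src = rng.normal(size=(50, 3))
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([0.3, -0.2, 1.0])
dst = src @ R_true.T + t_true
R, t = rigid_transform(src, dst)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # -> True True
```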
In the description of the above method steps, the reference pose measurement stage and the measurement stage have, for simplicity, been described in parallel. In practical application, camera calibration is performed first; the reference pose measurement is then carried out, i.e. the prior feature library is obtained and the reference plane computed; and finally the measurement stage is performed, in which the three-dimensional position and attitude of the actual thin component under test are solved relative to the reference plane.
The specific embodiments described above further explain the objectives, technical solutions and beneficial effects of the present invention. It should be understood that the above are merely specific embodiments of the present invention and are not intended to limit it; any modification, equivalent substitution or improvement made within the spirit and principles of the present invention shall fall within its protection scope.

Claims (10)

1. A method for measuring the three-dimensional position and attitude of a thin component under test, comprising:
Step 1: calibrating the intrinsic parameters and extrinsic parameters of the binocular cameras and the spatial relationship between them, and establishing the mapping between the three-dimensional world coordinate system and the two-dimensional image coordinates in the binocular vision measuring system;
Step 2: extracting the SIFT feature points in the binocular image pair of the reference thin component with the SIFT operator, generating a SIFT prior feature library, and taking the three-dimensional plane containing said SIFT feature point pairs as the reference plane of the reference thin component;
Step 3: extracting a plurality of SIFT feature point pairs from the stereo-rectified image pair of the thin component under test, and matching the extracted pairs against the SIFT feature point pairs in said SIFT prior feature library to obtain a set of SIFT feature point pairs equal in number to, and in correspondence with, the pairs in said library;
Step 4: obtaining the three-dimensional point cloud of said thin component under test from the image coordinates of the feature points in the resulting set of SIFT feature point pairs, and thereby obtaining the spatial plane of the component;
Step 5: determining the three-dimensional position and attitude of said thin component under test by solving for the transformation matrix between said spatial plane and the reference plane.
2. The measuring method according to claim 1, wherein step 1 specifically comprises:
Step 11: establishing the mapping between the two-dimensional image coordinate system of each single camera and the three-dimensional world coordinate system;
Step 12: calibrating each single camera to obtain its intrinsic and extrinsic parameters;
Step 13: performing stereo calibration from the obtained intrinsic and extrinsic parameters to establish the spatial relationship between the binocular cameras.
3. The measuring method according to claim 2, wherein the intrinsic parameters of a camera comprise the focal ratio and the optical centre position of the camera, and the extrinsic parameters comprise the rotation matrix and translation vector between the three-dimensional camera coordinate system and the three-dimensional world coordinate system.
4. The measuring method according to claim 2, wherein the spatial relationship between the binocular cameras is represented by the rotation matrix and translation vector between the cameras.
5. The measuring method according to claim 1, wherein obtaining the SIFT prior feature library in step 2 specifically comprises:
Step 21: acquiring a binocular original image pair of the reference thin component, and correcting the pair with the lens distortion coefficients and stereo rectification parameters of the binocular cameras to obtain the stereo-rectified image pair of the reference thin component;
Step 22: extracting the SIFT features from the stereo-rectified image pair of the reference thin component to form the SIFT prior feature library of the reference thin component.
6. The measuring method according to claim 1, wherein step 3 specifically comprises:
Step 31: acquiring a binocular original image pair of the thin component under test, and correcting the pair with the lens distortion coefficients and stereo rectification parameters of the binocular cameras to obtain the stereo-rectified image pair of the component under test;
Step 32: extracting SIFT feature point pairs from the stereo-rectified image pair of said thin component under test, and matching the extracted pairs against the SIFT feature point pairs in said SIFT prior feature library to obtain SIFT feature point pairs equal in number to, and in correspondence with, those in said library.
7. The measuring method according to either of claims 5 and 6, wherein said stereo rectification parameters comprise the stereo rectification intrinsic matrices and row-alignment matrices of the binocular cameras, obtained from the rotation matrix and translation vector between said binocular cameras.
8. The measuring method according to claim 1, wherein step 4 specifically comprises:
Step 41: obtaining the three-dimensional world coordinates of said SIFT feature point pairs from their image coordinates on the thin component under test, using the established mapping between the three-dimensional world coordinate system and the two-dimensional image coordinates in the binocular vision measuring system;
Step 42: obtaining the three-dimensional plane containing said SIFT feature point pairs from their three-dimensional world coordinates, and taking it as the spatial plane of the thin component under test.
9. The measuring method according to claim 8, wherein the image coordinates of said SIFT feature point pairs are the stereo-rectified image coordinates on the thin component under test, and the three-dimensional world coordinates of said SIFT feature points are obtained from their image coordinates in the binocular distortion-corrected images, the image coordinates in the binocular distortion-corrected images being obtained as follows:

$$P_{und\_l}=M_{1l}(M_{rect\_l}R_l)^{-1}P_{rect\_l}$$
$$P_{und\_r}=M_{1r}(M_{rect\_r}R_r)^{-1}P_{rect\_r}$$

wherein $P_{und\_l}=[u_{und\_l}\ v_{und\_l}\ 1]^T$ and $P_{und\_r}=[u_{und\_r}\ v_{und\_r}\ 1]^T$ are the image coordinates in the binocular distortion-corrected image pair, $P_{rect\_l}$ and $P_{rect\_r}$ are the image coordinates in the stereo-rectified image pair, $M_{1l}$ and $M_{1r}$ are respectively the intrinsic parameter matrices of the binocular cameras, and $M_{rect\_l}$ and $M_{rect\_r}$ are respectively the stereo rectification parameter matrices of the binocular cameras.
10. The measuring method according to claim 9, wherein the three-dimensional coordinates of said SIFT feature points are obtained as follows:

$$[X_w\ Y_w\ Z_w]^T=(A^TA)^{-1}A^TB$$

$$
A=\begin{bmatrix}
m_{l11}-m_{l31}u_{und\_l} & m_{l12}-m_{l32}u_{und\_l} & m_{l13}-m_{l33}u_{und\_l}\\
m_{l21}-m_{l31}v_{und\_l} & m_{l22}-m_{l32}v_{und\_l} & m_{l23}-m_{l33}v_{und\_l}\\
m_{r11}-m_{r31}u_{und\_r} & m_{r12}-m_{r32}u_{und\_r} & m_{r13}-m_{r33}u_{und\_r}\\
m_{r21}-m_{r31}v_{und\_r} & m_{r22}-m_{r32}v_{und\_r} & m_{r23}-m_{r33}v_{und\_r}
\end{bmatrix},\qquad
B=\begin{bmatrix}
m_{l34}u_{und\_l}-m_{l14}\\
m_{l34}v_{und\_l}-m_{l24}\\
m_{r34}u_{und\_r}-m_{r14}\\
m_{r34}v_{und\_r}-m_{r24}
\end{bmatrix}
$$

wherein $(X_w, Y_w, Z_w)$ are the three-dimensional world coordinates of said SIFT feature point, and $m_{lij}$ and $m_{rij}$ are respectively the elements of the overall projection matrices $M_l$ and $M_r$ of the left and right binocular cameras.
CN201310159555.6A 2013-05-03 2013-05-03 Method for measuring three-dimensional position and posture of thin component with complex structure Active CN103278138B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310159555.6A CN103278138B (en) 2013-05-03 2013-05-03 Method for measuring three-dimensional position and posture of thin component with complex structure


Publications (2)

Publication Number Publication Date
CN103278138A true CN103278138A (en) 2013-09-04
CN103278138B CN103278138B (en) 2015-05-06

Family

ID=49060727

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310159555.6A Active CN103278138B (en) 2013-05-03 2013-05-03 Method for measuring three-dimensional position and posture of thin component with complex structure

Country Status (1)

Country Link
CN (1) CN103278138B (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07287764A (en) * 1995-05-08 1995-10-31 Omron Corp Stereoscopic method and solid recognition device using the method
CN103065351A (en) * 2012-12-16 2013-04-24 华南理工大学 Binocular three-dimensional reconstruction method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DANG Le: "Research on 3D Reconstruction Methods Based on Binocular Stereo Vision", China Master's Theses Full-text Database (Information Science and Technology), 15 February 2010 (2010-02-15), pages 72-80 *

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103714571A (en) * 2013-09-23 2014-04-09 西安新拓三维光测科技有限公司 Single camera three-dimensional reconstruction method based on photogrammetry
CN103714571B (en) * 2013-09-23 2016-08-10 西安新拓三维光测科技有限公司 A kind of based on photogrammetric single camera three-dimensional rebuilding method
CN104636743A (en) * 2013-11-06 2015-05-20 北京三星通信技术研究有限公司 Character image correction method and device
CN104154875A (en) * 2014-08-20 2014-11-19 深圳大学 Three-dimensional data acquisition system and acquisition method based on two-axis rotation platform
CN104484870B (en) * 2014-11-25 2018-01-12 北京航空航天大学 Verify Plane location method
CN104484870A (en) * 2014-11-25 2015-04-01 北京航空航天大学 Calibration aircraft positioning method
CN104880176A (en) * 2015-04-15 2015-09-02 大连理工大学 Moving object posture measurement method based on prior knowledge model optimization
CN104880176B (en) * 2015-04-15 2017-04-12 大连理工大学 Moving object posture measurement method based on prior knowledge model optimization
CN105157592A (en) * 2015-08-26 2015-12-16 北京航空航天大学 Binocular vision-based method for measuring deformation shape and deformation rate of flexible trailing edge of adaptive wing
CN105157592B (en) * 2015-08-26 2018-03-06 北京航空航天大学 The deformed shape of the deformable wing of flexible trailing edge and the measuring method of speed based on binocular vision
CN105741290A (en) * 2016-01-29 2016-07-06 中国人民解放军国防科学技术大学 Augmented reality technology based printed circuit board information indication method and apparatus
CN105741290B (en) * 2016-01-29 2018-06-19 中国人民解放军国防科学技术大学 A kind of printed circuit board information indicating method and device based on augmented reality
CN106352855A (en) * 2016-09-26 2017-01-25 北京建筑大学 Photographing measurement method and device
CN107423772A (en) * 2017-08-08 2017-12-01 南京理工大学 A kind of new binocular image feature matching method based on RANSAC
CN108010085A (en) * 2017-11-30 2018-05-08 西南科技大学 Target identification method based on binocular Visible Light Camera Yu thermal infrared camera
CN108010085B (en) * 2017-11-30 2019-12-31 西南科技大学 Target identification method based on binocular visible light camera and thermal infrared camera
CN108955685B (en) * 2018-05-04 2021-11-26 北京航空航天大学 Refueling aircraft taper sleeve pose measuring method based on stereoscopic vision
CN108955685A (en) * 2018-05-04 2018-12-07 北京航空航天大学 A kind of tanker aircraft tapered sleeve pose measuring method based on stereoscopic vision
CN110858403A (en) * 2018-08-22 2020-03-03 杭州萤石软件有限公司 Method for determining scale factor in monocular vision reconstruction and mobile robot
CN109700465A (en) * 2019-01-07 2019-05-03 广东体达康医疗科技有限公司 A kind of mobile three-dimensional wound scanning device and its workflow
CN111179356A (en) * 2019-12-25 2020-05-19 北京中科慧眼科技有限公司 Binocular camera calibration method, device and system based on Aruco code and calibration board
CN113378606A (en) * 2020-03-10 2021-09-10 杭州海康威视数字技术股份有限公司 Method, device and system for determining labeling information
CN111536981A (en) * 2020-04-23 2020-08-14 中国科学院上海技术物理研究所 Embedded binocular non-cooperative target relative pose measuring method
CN111536981B (en) * 2020-04-23 2023-09-12 中国科学院上海技术物理研究所 Embedded binocular non-cooperative target relative pose measurement method
CN114559131A (en) * 2020-11-27 2022-05-31 北京颖捷科技有限公司 Welding control method and device and upper computer
CN113822945A (en) * 2021-09-28 2021-12-21 天津朗硕机器人科技有限公司 Workpiece identification and positioning method based on binocular vision
CN114913246A (en) * 2022-07-15 2022-08-16 齐鲁空天信息研究院 Camera calibration method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN103278138B (en) 2015-05-06

Similar Documents

Publication Publication Date Title
CN103278138A (en) Method for measuring three-dimensional position and posture of thin component with complex structure
CN110296691B (en) IMU calibration-fused binocular stereo vision measurement method and system
CN103971353B (en) Splicing method for measuring image data with large forgings assisted by lasers
CN103411553B (en) The quick calibrating method of multi-linear structured light vision sensors
CN102376089B (en) Target correction method and system
CN108510551B (en) Method and system for calibrating camera parameters under long-distance large-field-of-view condition
CN103530880B (en) Based on the camera marking method of projection Gaussian network pattern
CN109523595B (en) Visual measurement method for linear angular spacing of building engineering
CN105716542B (en) A kind of three-dimensional data joining method based on flexible characteristic point
CN112686877B (en) Binocular camera-based three-dimensional house damage model construction and measurement method and system
CN110672020A (en) Stand tree height measuring method based on monocular vision
CN104268876A (en) Camera calibration method based on partitioning
CN111091076B (en) Tunnel limit data measuring method based on stereoscopic vision
CN104778716B (en) Lorry compartment volume measuring method based on single image
CN113947638B (en) Method for correcting orthographic image of fish-eye camera
CN102693543B (en) Method for automatically calibrating Pan-Tilt-Zoom in outdoor environments
CN102519434A (en) Test verification method for measuring precision of stereoscopic vision three-dimensional recovery data
CA3233222A1 (en) Method, apparatus and device for photogrammetry, and storage medium
CN111879354A (en) Unmanned aerial vehicle measurement system that becomes more meticulous
CN112270698A (en) Non-rigid geometric registration method based on nearest curved surface
CN113902809A (en) Method for jointly calibrating infrared camera and laser radar
CN113658279B (en) Camera internal reference and external reference estimation method, device, computer equipment and storage medium
CN113012238B (en) Method for quick calibration and data fusion of multi-depth camera
CN110245634A (en) Multiposition, multi-angle crag deformation judgement and analysis method
CN102110290B (en) Method for solving internal parameters of camera by using regular triangular prism as target

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant