CN106595517A - Structured light measuring system calibration method based on projecting fringe geometric distribution characteristic - Google Patents

Structured light measuring system calibration method based on projecting fringe geometric distribution characteristic

Info

Publication number
CN106595517A
CN106595517A CN201611072978.4A
Authority
CN
China
Prior art keywords
coordinate
camera
formula
coefficient
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201611072978.4A
Other languages
Chinese (zh)
Other versions
CN106595517B (en)
Inventor
孙长库
陆鹏
王鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201611072978.4A priority Critical patent/CN106595517B/en
Publication of CN106595517A publication Critical patent/CN106595517A/en
Application granted granted Critical
Publication of CN106595517B publication Critical patent/CN106595517B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/2433Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures for measuring outlines by shadow casting

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to visual inspection technology and provides a calibration method for a structured light measuring system whose calibration model has a wide application range, high measuring precision, and a simple calibration process. According to the technical scheme, the structured light measuring system calibration method based on the geometric distribution characteristics of projected fringes comprises: first obtaining the coordinates of known spatial points in the camera coordinate system using an uncertain-viewing-angle calibration method, then performing system calibration using the camera-coordinate-system coordinates, image coordinates, and coding values of these points, and establishing separately the relation between the Z-direction coordinate and the image information and the relation between the XY-direction coordinates and the image information. The method is mainly applied to visual inspection.

Description

Calibration method for a structured light measuring system based on the geometric distribution characteristics of projected fringes
Technical field
The present invention relates to vision detection technology, and more particularly to a calibration method for a structured light measuring system based on the geometric distribution characteristics of projected fringes.
Background technology
Structured light three-dimensional vision measurement combines the advantages of vision measurement: it is non-contact, fast, highly automated, and flexible. Based on the optical triangulation principle, it reconstructs the surface profile of the measured object from the offset information of the feature points of various optical patterns in the captured images. Because the projection device projects a known optical pattern, the structured light image information is easy to extract, so the measuring precision is relatively high, and the technique is widely used in the on-line inspection of industrial products.
The precision of a structured light measuring system depends on the precision of its calibration. Existing calibration methods fall into three categories: photogrammetric methods based on matrix transformations, triangulation methods based on geometric relations, and polynomial fitting methods. Photogrammetric methods are further divided into the pseudo-camera method, the inverse camera method, and the light-plane method. Their main drawback is that the calibration of the projector depends on the camera calibration parameters, so camera calibration errors propagate; their advantage is that they generally impose no special constraints on system installation, and both equipment installation and system calibration are relatively simple. Triangulation methods based on geometric relations establish, from the geometry of the system, a mathematical expression between the 3D coordinates and a few system parameters, which serves as the model for both calibration and measurement. Their advantage is that projector calibration is avoided; their disadvantages are that they demand high installation accuracy, and that an oversimplified model yields low precision. Polynomial fitting methods assume that the 3D coordinates of the measured object can be represented by polynomials of the coding value at the corresponding pixel of the camera image; the polynomial parameters are determined experimentally, directly establishing a mapping from the 2D camera image coordinates to the 3D spatial coordinates of scene points. This avoids calibrating both the camera and the projector, but calibration is time-consuming and relatively costly. The spatial mapping model proposed by Lendray et al. is widely adopted, but it is essentially an interpolation method and shows large errors when measuring objects outside the calibration range. In summary, a calibration method with a simple calibration device, a convenient and fast calibration process, and high precision is of great significance.
Content of the invention
To overcome the deficiencies of the prior art, the present invention proposes a calibration method for a structured light measuring system whose calibration model has a wide application range, high measuring precision, and a simple calibration process. To this end, the technical scheme adopted by the present invention is a structured light measuring system calibration method based on the geometric distribution characteristics of projected fringes: first obtain the coordinates of known spatial points in the camera coordinate system using an uncertain-viewing-angle calibration method, then perform system calibration using the camera-coordinate-system coordinates, image coordinates, and coding values of these points, establishing separately the relation between the Z-direction coordinate and the image information and the relation between the XY-direction coordinates and the image information.
The concrete steps of calibrating the Z direction are as follows. The Z-direction value of the measuring system is determined jointly by the image coordinates (u, v) and the corresponding coding value p. The projector lens optical axis and the CCD camera lens optical axis form an angle α, and the fringe-order direction projected by the projector is assumed parallel to the u direction of the camera. The three planes h1, h2, h3 are parallel to the camera imaging plane, and the distance between planes h1 and h2 equals the distance between planes h2 and h3, namely Δh. On the line from the intersection of plane h3 with the camera optical axis to the projector position A, the intersections with planes h2 and h1 are D and B in turn, and the intersections of planes h1, h2, h3 with the projector optical axis are C, E, G in turn. The coding value p is identical along any single ray projected by the projector, and at a given camera depth the rate of change of u with p satisfies the relation:
du/dp = kp + b   (1)
where du/dp is the rate of change of u with p, k is the slope, and b is the intercept. Since at a given camera depth u does not change with v, we also have
du/dv = 0   (2)
where du/dv is the rate of change of u with v. Integrating using formulas (1) and (2),
u = ∫(kp + b)dp   (3)
yields the relation between the image coordinates (u, v) and the corresponding coding value p at a given camera depth, where A0 is the quadratic coefficient in the coding value p, B0 is the linear coefficient in p, and D0 is the constant term;
u = A0p² + B0p + D0   (4)
Considering that in the camera coordinate system uov and the projector coordinate system u'o'v' the coordinate axes o'u' and ou form an angle β, a rotation transformation must be applied to the uov coordinate system:
u0 = u·cosβ + v·sinβ   (5)
Substituting into formula (4) gives
u·cosβ + v·sinβ = A0p² + B0p + D0
u = A0p²/cosβ + B0p/cosβ − v·tanβ + D0/cosβ   (6)
which can be written as
u = A1p² + B1p + C1v + D1   (7)
where A1 is the quadratic coefficient in the coding value p, B1 is the linear coefficient in p, C1 is the linear coefficient in the image coordinate v, and D1 is the constant term;
The points (u, v, p) at the same camera depth lie on a parabola, and A1 increases with the angle α between the projector and camera lens optical axes and with the camera depth. The unknown parameters of formula (7) are computed from the acquired image information by the least squares method;
By the similar-triangle theorem, ΔABC ∼ ΔADE ∼ ΔAFG, which gives
l(BC)/L = l(DE)/(L + Δh) = l(FG)/(L + 2Δh)   (8)
where p0 is the coding value on the projector optical axis and l denotes the length between two points. From geometric optics, the fringe period width is also linear in the camera depth, i.e.:
A1 = (r1h + r2)/(r3h + r4)   (9)
where r1 and r2 are the linear coefficient and constant term of the numerator in h, and r3 and r4 are the linear coefficient and constant term of the denominator;
Normalizing formula (7) by the coefficient A1, i.e. eliminating the effect of the fringe-width change on the coding difference at different camera depths, gives:
u/A1 = p² + (B1/A1)p + (C1/A1)v + D1/A1   (10)
where u/A1 and v/A1 are the normalized image coordinates. Since the effect of the fringe-width change on the coding difference has been eliminated, B1/A1 varies linearly with the camera depth h, and the constant term D1/A1 varies as a quadratic function of h, i.e.:
B1/A1 = s1h + s2
D1/A1 = t0h² + t1h + t2   (11)
where s1 is the linear coefficient and s2 the constant term of the linear expression, and t0, t1, t2 are the quadratic, linear, and constant coefficients of the quadratic expression;
Substituting formulas (9) and (11) into formula (7) gives
u = ((r1h + r2)/(r3h + r4))·(p² + (s1h + s2)p + (t0h² + t1h + t2)) + C1v   (12)
To eliminate the effects of camera lens distortion and of the projector and camera optical axes not being coplanar in the horizontal plane, a second-order error compensation model is adopted, where k0–k5 and l0–l5 are error correction coefficients:
u = k0ud² + k1vd² + k2udvd + k3ud + k4vd + k5
v = l0ud² + l1vd² + l2udvd + l3ud + l4vd + l5   (13)
Formula (12) thus becomes
h³ + (a0 + a1p)h² + (b0 + b1p + b2p² + b3ud² + b4vd² + b5udvd + b6ud + b7vd)h + (c0 + c1p + c2p² + c3ud² + c4vd² + c5udvd + c6ud + c7vd) = 0   (14)
where a0 and a1 are the constant and linear coefficients (in p) of the h² term, b0–b7 those of the h term, and c0–c7 those of the constant term;
The Z-direction coordinates ZCi of N groups of known spatial points in the camera coordinate system and the corresponding image information (ui, vi, pi) satisfy condition (14), which can be written as
PX = Q   (15)
where X is the coefficient vector, an 18 × 1 column vector; P is an N × 18 matrix; and Q is an N × 1 column vector, with
X = [g0 g1 … gi … g17]ᵀ
the i-th row of P being
[ZCi²  piZCi²  ZCi  piZCi  pi²ZCi  ui²ZCi  vi²ZCi  uiviZCi  uiZCi  viZCi  1  pi  pi²  ui²  vi²  uivi  ui  vi]
and
Q = −[ZC1³  ZC2³  …  ZCi³  …  ZCN³]ᵀ   (16)
Construct the error function
F = Σᵢ [(g0 + g1pi)ZCi² + (g2 + g3pi + g4pi² + g5ui² + g6vi² + g7uivi + g8ui + g9vi)ZCi + (g10 + g11pi + g12pi² + g13ui² + g14vi² + g15uivi + g16ui + g17vi) + ZCi³]²   (17)
The unknown parameters in formula (17) are obtained with the Levenberg–Marquardt algorithm; the Z-direction coordinate ZC in the camera coordinate system is then obtained from the image information (u, v, p) by solving the resulting cubic equation with the cubic formula.
The concrete steps of calibrating the XY directions are as follows:
The XY-direction values of the system are determined jointly by the distortion-corrected image coordinates (u, v) and the Z-direction value ZC in the camera coordinate system, using the formula
s·[u  v  1]ᵀ = [m11 m12 m13 m14; m21 m22 m23 m24; m31 m32 m33 m34]·[XC  YC  ZC  1]ᵀ   (19)
where s is a scale factor and m11–m34 are the polynomial parameters. During calibration, the parameters m11–m34 are computed by least squares from the coordinates (XC, YC, ZC) of known spatial points in the camera coordinate system and the distortion-corrected image coordinates (u, v). During measurement, substituting the image coordinates (u, v) and the Z-direction value ZC into the polynomial yields the corresponding XY-direction coordinates (XC, YC); thus the three-dimensional information (XC, YC, ZC) of the measured object in the camera coordinate system is obtained from the image information (u, v, p).
To obtain the coordinates (XW, YW, ZW) in the world coordinate system, the coordinates in the camera coordinate system are converted to the world coordinate system using the coordinate transformation formula (20), where R is the rotation matrix and T is the translation matrix;
During calibration, a target is placed in the camera field of view and moved; at each position, structured light fringes are projected and an image is captured. The coordinates of the feature points on the target in the camera coordinate system are obtained with the uncertain-viewing-angle calibration method, and the image coordinates and characteristic values of the feature points are obtained by image processing. Substituting these known quantities into formulas (17) and (19) and solving yields the system parameters; during actual measurement, the coordinates of a measured point in the camera coordinate system are obtained from formulas (18) and (19), and then converted to world coordinates by formula (20).
The characteristics and beneficial effects of the invention are:
The present invention applies to grating and binary fringe projection modes with vertical light stripes. Through the derivation of a geometric model, it establishes the relation between the image feature points with their coding values and the spatial coordinates. It overcomes the strict system constraints of traditional triangulation based on geometric relations, and also solves the problem that polynomial fitting methods cannot measure objects outside the calibration range. The calibration process is simple and convenient, the measuring precision is high, and the applicable measurement range is large.
Description of the drawings:
Fig. 1 Schematic diagram of the system geometric model,
Fig. 2 Comparison of calibration feature points and actual measured points,
Fig. 3 Measurement result.
Specific embodiment
The calibration method proposed for this system does not require the aid of a precision guide rail; the calibration process is easy and the calibration precision is high, so it is especially suitable for field calibration.
This calibration method consists of two parts: first the coordinates of known spatial points in the camera coordinate system are obtained with the uncertain-viewing-angle calibration method, then system calibration is performed using the camera-coordinate-system coordinates, image coordinates, and coding values of these points, establishing separately the relation between the Z-direction coordinate and the image information and the relation between the XY-direction coordinates and the image information.
1. Calibration of the Z direction
The Z-direction value of the measuring system is determined jointly by the image coordinates (u, v) and the corresponding coding value p. The projector lens optical axis and the CCD camera lens optical axis form an angle α, and the fringe-order direction projected by the projector is assumed parallel to the u direction of the camera. By geometric optics, the width of the fringes projected by the projector increases linearly with the projection distance, so at a given camera depth the rate of change of u with p satisfies the relation:
du/dp = kp + b   (1)
where du/dp is the rate of change of u with p, k is the slope, and b is the intercept. Since at a given camera depth u does not change with v, we also have
du/dv = 0   (2)
where du/dv is the rate of change of u with v.
Integrating using formulas (1) and (2),
u = ∫(kp + b)dp   (3)
yields the relation between the image coordinates (u, v) and the corresponding coding value p at a given camera depth, where A0 is the quadratic coefficient in the coding value p, B0 is the linear coefficient in p, and D0 is the constant term.
u = A0p² + B0p + D0   (4)
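The step from (3) to (4) can be made explicit: since du/dp depends only on p at a fixed depth, a single integration yields the quadratic of formula (4), with A0 = k/2, B0 = b, and D0 the integration constant:

```latex
u = \int (kp + b)\,dp = \tfrac{k}{2}p^{2} + bp + D_0
\quad\Longrightarrow\quad
A_0 = \tfrac{k}{2},\qquad B_0 = b .
```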
Considering that in the camera coordinate system uov and the projector coordinate system u'o'v' the coordinate axes o'u' and ou form a small angle β, a rotation transformation must be applied to the uov coordinate system
u0 = u·cosβ + v·sinβ   (5)
Substituting into formula (4) gives
u·cosβ + v·sinβ = A0p² + B0p + D0
u = A0p²/cosβ + B0p/cosβ − v·tanβ + D0/cosβ   (6)
which can be written as
u = A1p² + B1p + C1v + D1   (7)
where A1 is the quadratic coefficient in the coding value p, B1 is the linear coefficient in p, C1 is the linear coefficient in the image coordinate v, and D1 is the constant term.
It follows that the points (u, v, p) at the same camera depth lie on a parabola, and that A1 increases with the angle α between the projector and camera lens optical axes and with the camera depth. During calibration, the unknown parameters of formula (7) are computed from the acquired image information by the least squares method.
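As a sketch of this fitting step (function name and synthetic data are illustrative, not from the patent): because u in formula (7) is linear in the unknowns A1, B1, C1, D1, they can be recovered by ordinary linear least squares.

```python
import numpy as np

def fit_fringe_parabola(u, v, p):
    """Least-squares fit of u = A1*p^2 + B1*p + C1*v + D1 (formula (7)).

    u, v : image coordinates of sample points at one camera depth
    p    : corresponding fringe coding values
    Returns the array [A1, B1, C1, D1].
    """
    u, v, p = map(np.asarray, (u, v, p))
    # Design matrix: one column per unknown coefficient.
    M = np.column_stack([p**2, p, v, np.ones_like(p)])
    coeffs, *_ = np.linalg.lstsq(M, u, rcond=None)
    return coeffs

# Synthetic check: generate points from known coefficients and recover them.
rng = np.random.default_rng(0)
p = rng.uniform(0, 10, 200)
v = rng.uniform(0, 480, 200)
u = 0.5 * p**2 + 2.0 * p + 0.01 * v + 100.0
A1, B1, C1, D1 = fit_fringe_parabola(u, v, p)
```

With noise-free data the fit returns the generating coefficients to machine precision; with real image data the same call gives the least-squares estimate.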
We next explore how the parabola parameters vary with the camera depth.
Fig. 1 shows the system geometric model, in which the angle between the projector and camera optical axes is α. The coding value p is identical along any single ray projected by the projector; for example, C, E, and G have identical coding values. The three planes h1, h2, h3 are parallel to the camera imaging plane, and the distance between planes h1 and h2 equals that between h2 and h3, namely Δh. By the similar-triangle theorem, ΔABC ∼ ΔADE ∼ ΔAFG, which gives
l(BC)/L = l(DE)/(L + Δh) = l(FG)/(L + 2Δh)   (8)
where p0 is the coding value on the projector optical axis and l denotes the length between two points. As can be seen from Fig. 1, the length corresponding to the same coding difference is linear in the camera depth, and from geometric optics the fringe period width is also linear in the camera depth, i.e.
A1 = (r1h + r2)/(r3h + r4)   (9)
where r1 and r2 are the linear coefficient and constant term of the numerator in h, and r3 and r4 are the linear coefficient and constant term of the denominator.
Normalizing formula (7) by the coefficient A1, i.e. eliminating the effect of the fringe-width change on the coding difference at different camera depths, gives
u/A1 = p² + (B1/A1)p + (C1/A1)v + D1/A1   (10)
where u/A1 and v/A1 are the normalized image coordinates. We next consider how B1/A1 and D1/A1 vary with the camera depth h. Since the effect of the fringe-width change on the coding difference has been eliminated, B1/A1 varies linearly with h, and the constant term D1/A1 varies as a quadratic function of h, i.e.
B1/A1 = s1h + s2
D1/A1 = t0h² + t1h + t2   (11)
where s1 is the linear coefficient and s2 the constant term of the linear expression, and t0, t1, t2 are the quadratic, linear, and constant coefficients of the quadratic expression.
Substituting formulas (9) and (11) into formula (7) gives
u = ((r1h + r2)/(r3h + r4))·(p² + (s1h + s2)p + (t0h² + t1h + t2)) + C1v   (12)
To eliminate the effects of camera lens distortion and of the projector and camera optical axes not being coplanar in the horizontal plane, a second-order error compensation model is adopted, where k0–k5 and l0–l5 are error correction coefficients:
u = k0ud² + k1vd² + k2udvd + k3ud + k4vd + k5
v = l0ud² + l1vd² + l2udvd + l3ud + l4vd + l5   (13)
Formula (12) thus becomes
h³ + (a0 + a1p)h² + (b0 + b1p + b2p² + b3ud² + b4vd² + b5udvd + b6ud + b7vd)h + (c0 + c1p + c2p² + c3ud² + c4vd² + c5udvd + c6ud + c7vd) = 0   (14)
where a0 and a1 are the constant and linear coefficients (in p) of the h² term, b0–b7 those of the h term, and c0–c7 those of the constant term.
The Z-direction coordinates ZCi of N groups of known spatial points in the camera coordinate system and the corresponding image information (ui, vi, pi) satisfy condition (14), which can be written as
PX = Q   (15)
where X is the coefficient vector, an 18 × 1 column vector; P is an N × 18 matrix; and Q is an N × 1 column vector, with
X = [g0 g1 … gi … g17]ᵀ
the i-th row of P being
[ZCi²  piZCi²  ZCi  piZCi  pi²ZCi  ui²ZCi  vi²ZCi  uiviZCi  uiZCi  viZCi  1  pi  pi²  ui²  vi²  uivi  ui  vi]
and
Q = −[ZC1³  ZC2³  …  ZCi³  …  ZCN³]ᵀ   (16)
Construct the error function
F = Σᵢ [(g0 + g1pi)ZCi² + (g2 + g3pi + g4pi² + g5ui² + g6vi² + g7uivi + g8ui + g9vi)ZCi + (g10 + g11pi + g12pi² + g13ui² + g14vi² + g15uivi + g16ui + g17vi) + ZCi³]²   (17)
The unknown parameters in formula (17) are obtained with the Levenberg–Marquardt algorithm; the Z-direction coordinate ZC in the camera coordinate system can then be obtained from the image information (u, v, p) by solving, with the cubic formula, the equation
ZC³ + (g0 + g1p)ZC² + (g2 + g3p + g4p² + g5u² + g6v² + g7uv + g8u + g9v)ZC + (g10 + g11p + g12p² + g13u² + g14v² + g15uv + g16u + g17v) = 0   (18)
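A minimal sketch of the depth-solving step (the parameter values g0–g17 below are illustrative; in the method they come from the Levenberg–Marquardt fit of formula (17)). Given one pixel's (u, v, p), the cubic (18) in ZC can be solved numerically:

```python
import numpy as np

def solve_zc(u, v, p, g):
    """Solve the cubic of formula (18) for the camera-depth coordinate Z_C.

    g : the 18 calibration parameters g0..g17.
    Returns the real roots; the physically valid one is the root lying
    inside the system's working depth range.
    """
    g = np.asarray(g, float)
    c2 = g[0] + g[1]*p
    c1 = (g[2] + g[3]*p + g[4]*p**2 + g[5]*u**2 + g[6]*v**2
          + g[7]*u*v + g[8]*u + g[9]*v)
    c0 = (g[10] + g[11]*p + g[12]*p**2 + g[13]*u**2 + g[14]*v**2
          + g[15]*u*v + g[16]*u + g[17]*v)
    roots = np.roots([1.0, c2, c1, c0])
    return roots[np.abs(roots.imag) < 1e-9].real

# Toy check: choose g so the cubic is (Zc - 500)*(Zc^2 + 1) = 0,
# i.e. Zc^3 - 500*Zc^2 + Zc - 500 = 0, independent of (u, v, p).
g = np.zeros(18)
g[0], g[2], g[10] = -500.0, 1.0, -500.0
zc = solve_zc(u=320.0, v=240.0, p=5.0, g=g)
```

`np.roots` stands in for the closed-form cubic formula mentioned in the text; either gives the same real root.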
2. Calibration of the XY directions
The XY-direction values of the system are determined jointly by the distortion-corrected image coordinates (u, v) and the Z-direction value ZC in the camera coordinate system, using the formula
s·[u  v  1]ᵀ = [m11 m12 m13 m14; m21 m22 m23 m24; m31 m32 m33 m34]·[XC  YC  ZC  1]ᵀ   (19)
where s is a scale factor and m11–m34 are the polynomial parameters. During calibration, the parameters m11–m34 are computed by least squares from the coordinates (XC, YC, ZC) of known spatial points in the camera coordinate system and the distortion-corrected image coordinates (u, v). During measurement, substituting the image coordinates (u, v) and the Z-direction value ZC into the polynomial yields the corresponding XY-direction coordinates (XC, YC); thus the three-dimensional information (XC, YC, ZC) of the measured object in the camera coordinate system is obtained from the image information (u, v, p).
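A sketch of the measurement-time step (the matrix values below are illustrative): with m11–m34 known and ZC already solved, eliminating the scale factor s from formula (19) leaves a 2 × 2 linear system in XC and YC.

```python
import numpy as np

def recover_xy(u, v, zc, M):
    """Recover (X_C, Y_C) from (u, v) and the solved depth Z_C.

    M : 3x4 matrix of the parameters m11..m34 from formula (19).
    Eliminating the scale factor s leaves two linear equations
    in the two unknowns X_C and Y_C.
    """
    A = np.array([[M[0, 0] - u*M[2, 0], M[0, 1] - u*M[2, 1]],
                  [M[1, 0] - v*M[2, 0], M[1, 1] - v*M[2, 1]]])
    b = np.array([u*(M[2, 2]*zc + M[2, 3]) - (M[0, 2]*zc + M[0, 3]),
                  v*(M[2, 2]*zc + M[2, 3]) - (M[1, 2]*zc + M[1, 3])])
    return np.linalg.solve(A, b)

# Round trip with an illustrative projection matrix.
M = np.array([[800.0, 0.0, 320.0, 0.0],
              [0.0, 800.0, 240.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
Xc, Yc, Zc = 10.0, -20.0, 500.0
uvw = M @ np.array([Xc, Yc, Zc, 1.0])
u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
xy = recover_xy(u, v, Zc, M)
```

The round trip projects a known camera-frame point through M and recovers its (XC, YC) exactly.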
To obtain coordinates in the world coordinate system, the coordinates in the camera coordinate system are converted using the coordinate transformation formula (20), where R is the rotation matrix and T is the translation matrix.
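Formula (20) itself is not reproduced in this text; assuming the conventional rigid-body form Pw = R·Pc + T, the final conversion can be sketched as:

```python
import numpy as np

def camera_to_world(pc, R, T):
    """Rigid-body transform of a camera-frame point to the world frame.

    Assumes formula (20) has the conventional form Pw = R @ Pc + T,
    with R a 3x3 rotation matrix and T a 3-vector translation.
    """
    return R @ np.asarray(pc, float) + np.asarray(T, float)

# Illustrative: a 90-degree rotation about the Z axis plus a translation.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
T = np.array([5.0, 0.0, -100.0])
pw = camera_to_world([1.0, 2.0, 500.0], R, T)
```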
During calibration, a target is placed in the camera field of view and moved; at each position, structured light fringes are projected and an image is captured. The coordinates of the feature points on the target in the camera coordinate system are obtained with the uncertain-viewing-angle calibration method, and the image coordinates and characteristic values of the feature points are obtained by image processing. Substituting these known quantities into formulas (17) and (19) and solving yields the system parameters. During actual measurement, the coordinates of a measured point in the camera coordinate system are obtained from formulas (18) and (19), and then converted to world coordinates by formula (20).
Fig. 2 compares the measured values of the feature points on the target with the set values, where red stars denote measured values and blue triangles denote set values. Analysis of the experimental data shows that the system measurement error is within 0.1 mm, indicating that the present invention can be well applied to the calibration of structured light measuring systems. Fig. 3 shows the measurement result of a surface.

Claims (3)

1. A structured light measuring system calibration method based on the geometric distribution characteristics of projected fringes, characterized in that the coordinates of known spatial points in the camera coordinate system are first obtained with an uncertain-viewing-angle calibration method, and system calibration is then performed using the camera-coordinate-system coordinates, image coordinates, and coding values of these points, establishing separately the relation between the Z-direction coordinate and the image information and the relation between the XY-direction coordinates and the image information.
2. The structured light measuring system calibration method based on the geometric distribution characteristics of projected fringes as claimed in claim 1, characterized in that the concrete steps of calibrating the Z direction are as follows: the Z-direction value of the measuring system is determined jointly by the image coordinates (u, v) and the corresponding coding value p; the projector lens optical axis and the CCD camera lens optical axis form an angle α, and the fringe-order direction projected by the projector is assumed parallel to the u direction of the camera; the three planes h1, h2, h3 are parallel to the camera imaging plane, and the distance between planes h1 and h2 equals that between planes h2 and h3, namely Δh; on the line from the intersection of plane h3 with the camera optical axis to the projector position A, the intersections with planes h2 and h1 are D and B in turn, and the intersections of planes h1, h2, h3 with the projector optical axis are C, E, G in turn; the coding value p is identical along any single ray projected by the projector, and at a given camera depth the rate of change of u with p satisfies the relation:
du/dp = kp + b   (1)
where du/dp is the rate of change of u with p, k is the slope, and b is the intercept; since at a given camera depth u does not change with v, we also have
du/dv = 0   (2)
where du/dv is the rate of change of u with v; integrating using formulas (1) and (2),
u = ∫(kp + b)dp   (3)
yields the relation between the image coordinates (u, v) and the corresponding coding value p at a given camera depth, where A0 is the quadratic coefficient in the coding value p, B0 the linear coefficient in p, and D0 the constant term;
u = A0p² + B0p + D0   (4)
considering that in the camera coordinate system uov and the projector coordinate system u'o'v' the coordinate axes o'u' and ou form an angle β, a rotation transformation must be applied to the uov coordinate system:
u0 = u·cosβ + v·sinβ   (5)
substituting into formula (4) gives
u·cosβ + v·sinβ = A0p² + B0p + D0
u = A0p²/cosβ + B0p/cosβ − v·tanβ + D0/cosβ   (6)
which can be written as
u = A1p² + B1p + C1v + D1   (7)
where A1 is the quadratic coefficient in the coding value p, B1 the linear coefficient in p, C1 the linear coefficient in the image coordinate v, and D1 the constant term;
the points (u, v, p) at the same camera depth lie on a parabola, and A1 increases with the angle α between the projector and camera lens optical axes and with the camera depth; the unknown parameters of formula (7) are computed from the acquired image information by the least squares method;
by the similar-triangle theorem, ΔABC ∼ ΔADE ∼ ΔAFG, which gives
l(BC)/L = l(DE)/(L + Δh) = l(FG)/(L + 2Δh)   (8)
where p0 is the coding value on the projector optical axis and l denotes the length between two points; from geometric optics, the fringe period width is also linear in the camera depth, i.e.:
A1 = (r1h + r2)/(r3h + r4)   (9)
where r1 and r2 are the linear coefficient and constant term of the numerator in h, and r3 and r4 those of the denominator;
normalizing formula (7) by the coefficient A1, i.e. eliminating the effect of the fringe-width change on the coding difference at different camera depths, gives:
u/A1 = p² + (B1/A1)p + (C1/A1)v + D1/A1   (10)
where u/A1 and v/A1 are the normalized image coordinates; since the effect of the fringe-width change on the coding difference has been eliminated, B1/A1 varies linearly with the camera depth h, and the constant term D1/A1 varies as a quadratic function of h, i.e.:
B1/A1 = s1h + s2
D1/A1 = t0h² + t1h + t2   (11)
where s1 is the linear coefficient and s2 the constant term of the linear expression, and t0, t1, t2 are the quadratic, linear, and constant coefficients of the quadratic expression;
substituting formulas (9) and (11) into formula (7) gives
u = ((r1h + r2)/(r3h + r4))·(p² + (s1h + s2)p + (t0h² + t1h + t2)) + C1v   (12)
to eliminate the effects of camera lens distortion and of the projector and camera optical axes not being coplanar in the horizontal plane, a second-order error compensation model is adopted, where k0–k5 and l0–l5 are error correction coefficients:
u = k0ud² + k1vd² + k2udvd + k3ud + k4vd + k5
v = l0ud² + l1vd² + l2udvd + l3ud + l4vd + l5   (13)
formula (12) thus becomes
h³ + (a0 + a1p)h² + (b0 + b1p + b2p² + b3ud² + b4vd² + b5udvd + b6ud + b7vd)h + (c0 + c1p + c2p² + c3ud² + c4vd² + c5udvd + c6ud + c7vd) = 0   (14)
where a0 and a1 are the constant and linear coefficients (in p) of the h² term, b0–b7 those of the h term, and c0–c7 those of the constant term;
The coordinate Z of the N groups known spatial point of acquisition Z-direction under camera coordinate systemCiWith image corresponding informance (ui,vi,pi) full Sufficient formula (14) condition, as
PX=Q (15)
Wherein X is coefficient matrix, is 18 × 1 column vector, and matrix of the form that P is for N × 18, Q are the row that form is N × 1 Vector, wherein
X=[g0 g1 … gi … g17]T
P = Z C 1 2 p 1 Z C 1 2 Z C 1 p 1 Z C 1 p 1 2 Z C 1 u 1 2 Z C 1 v 1 2 Z C 1 u 1 v 1 Z C 1 u 1 Z C 1 v 1 Z C 1 1 p 1 p 1 2 u 1 2 v 1 2 u 1 v 1 u 1 v 1 Z C 2 2 p 2 Z C 2 2 Z C 2 p 2 Z C 2 p 2 2 Z C 2 u 2 2 Z C 2 v 2 2 Z C 2 u 2 v 2 Z C 2 u 2 Z C 2 v 2 Z C 2 1 p 2 p 2 2 u 2 2 v 2 2 u 2 v 2 u 2 v 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Z C i 2 p i Z C i 2 Z C i p i Z C i p i 2 Z C i u i 2 Z C i v i 2 Z C i u i v i Z C i u i Z C i v i Z C i 1 p i p i 2 u i 2 v i 2 u i v i u i v i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Z C N 2 p N Z C N 2 Z C N p N Z C N p N 2 Z C N u N 2 Z C N v N 2 Z C N u N v N Z C N u N Z C N v N Z C N 1 p N p N 2 u N 2 v N 2 u N v N u N v N
Q = -[Z_C1^3   Z_C2^3   …   Z_Ci^3   …   Z_CN^3]^T    (16)
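The overdetermined linear system PX = Q of formulas (15)–(16) can be assembled and solved in the least-squares sense to obtain an initial estimate of the eighteen coefficients. A minimal numpy sketch follows (the function name is illustrative; at least 18 non-degenerate samples are assumed):

```python
import numpy as np

def assemble_PQ(Zc, u, v, p):
    """Build the N x 18 matrix P and N-vector Q of formulas (15)-(16)
    from N calibration samples (Zc_i, u_i, v_i, p_i)."""
    Zc, u, v, p = (np.asarray(a, dtype=float) for a in (Zc, u, v, p))
    P = np.column_stack([
        Zc**2, p * Zc**2,                                    # g0, g1
        Zc, p * Zc, p**2 * Zc, u**2 * Zc, v**2 * Zc,         # g2..g6
        u * v * Zc, u * Zc, v * Zc,                          # g7..g9
        np.ones_like(Zc), p, p**2, u**2, v**2, u * v, u, v,  # g10..g17
    ])
    return P, -Zc**3

# Least-squares estimate of g0..g17 (an initial guess for the later
# nonlinear refinement):
# X, *_ = np.linalg.lstsq(P, Q, rcond=None)
```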
Construct the error function
F = Σ_i [ (g_0 + g_1 p_i) Z_Ci^2
        + (g_2 + g_3 p_i + g_4 p_i^2 + g_5 u_i^2 + g_6 v_i^2 + g_7 u_i v_i + g_8 u_i + g_9 v_i) Z_Ci
        + (g_10 + g_11 p_i + g_12 p_i^2 + g_13 u_i^2 + g_14 v_i^2 + g_15 u_i v_i + g_16 u_i + g_17 v_i)
        + Z_Ci^3 ]^2    (17)
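The summand of formula (17) is the per-sample residual that a nonlinear least-squares solver drives toward zero. A minimal numpy sketch of that residual is below; pairing it with, e.g., scipy.optimize.least_squares(method='lm') is an assumption about tooling, not something the patent prescribes.

```python
import numpy as np

def cubic_residual(g, Zc, u, v, p):
    """Bracketed term of formula (17) for each calibration sample; the
    error function F is the sum of its squares. g holds g0..g17."""
    a = g[0] + g[1] * p
    b = (g[2] + g[3] * p + g[4] * p**2 + g[5] * u**2 + g[6] * v**2
         + g[7] * u * v + g[8] * u + g[9] * v)
    c = (g[10] + g[11] * p + g[12] * p**2 + g[13] * u**2 + g[14] * v**2
         + g[15] * u * v + g[16] * u + g[17] * v)
    return Zc**3 + a * Zc**2 + b * Zc + c

# Hypothetical refinement step (requires scipy):
# result = scipy.optimize.least_squares(cubic_residual, g_init,
#                                       method='lm', args=(Zc, u, v, p))
```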
The unknown parameters in formula (17) are obtained using the Levenberg-Marquardt algorithm. The Z-direction coordinate Z_C under the camera coordinate system is then obtained from the image information (u, v, p), combined with the solution formula for a cubic equation, by solving
Z_Ci^3 + (g_0 + g_1 p_i) Z_Ci^2
       + (g_2 + g_3 p_i + g_4 p_i^2 + g_5 u_i^2 + g_6 v_i^2 + g_7 u_i v_i + g_8 u_i + g_9 v_i) Z_Ci
       + (g_10 + g_11 p_i + g_12 p_i^2 + g_13 u_i^2 + g_14 v_i^2 + g_15 u_i v_i + g_16 u_i + g_17 v_i) = 0    (18)
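In measurement, formula (18) is a cubic in Z_Ci whose coefficients are built from (u_i, v_i, p_i). Instead of the closed-form cubic solution formula the text refers to, the sketch below uses an equivalent numerical root-finder (numpy.roots); selecting the positive real root assumes a camera frame with Z_C > 0 in front of the camera.

```python
import numpy as np

def solve_Zc(u, v, p, g):
    """Solve the cubic of formula (18) for the camera-frame depth Z_C.

    g holds the fitted coefficients g0..g17. Of the three roots, the
    positive real one is returned (assumed camera-frame convention).
    """
    a = g[0] + g[1] * p
    b = (g[2] + g[3] * p + g[4] * p**2 + g[5] * u**2 + g[6] * v**2
         + g[7] * u * v + g[8] * u + g[9] * v)
    c = (g[10] + g[11] * p + g[12] * p**2 + g[13] * u**2 + g[14] * v**2
         + g[15] * u * v + g[16] * u + g[17] * v)
    roots = np.roots([1.0, a, b, c])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return float(real[real > 0].min())
```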
The concrete steps of the XY-direction calibration are as follows:
The XY-direction values of the system are jointly determined by the distortion-corrected image coordinates (u, v) and the Z-direction value Z_C under the camera coordinate system, using the formula
s [u  v  1]^T = [m_11  m_12  m_13  m_14; m_21  m_22  m_23  m_24; m_31  m_32  m_33  m_34] [X_C  Y_C  Z_C  1]^T    (19)
In the formula, s is a scale factor and m_11~m_34 are the projection parameters. During calibration, the coordinates (X_C, Y_C, Z_C) of known spatial points under the camera coordinate system and the distortion-corrected image coordinates (u, v) are combined with the least-squares method to compute the parameters m_11~m_34. During measurement, the image coordinates (u, v) and the Z-direction value Z_C are substituted into the formula to obtain the corresponding XY-direction coordinates (X_C, Y_C). The three-dimensional information (X_C, Y_C, Z_C) of the measured object under the camera coordinate system is thus obtained from the image information (u, v, p).
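Given the distortion-corrected (u, v), the depth Z_C from formula (18), and the fitted m_11~m_34, eliminating the scale factor s from formula (19) leaves a 2×2 linear system in (X_C, Y_C). A minimal sketch follows; the pinhole-style example matrix is hypothetical.

```python
import numpy as np

def solve_XY(u, v, Zc, M):
    """Recover (X_C, Y_C) from formula (19), with M the 3x4 parameter
    matrix [m11..m34]. Substituting s = m31*X + m32*Y + m33*Z + m34
    into the first two rows gives two linear equations in X_C, Y_C."""
    A = np.array([[M[0, 0] - u * M[2, 0], M[0, 1] - u * M[2, 1]],
                  [M[1, 0] - v * M[2, 0], M[1, 1] - v * M[2, 1]]])
    b = np.array([u * (M[2, 2] * Zc + M[2, 3]) - (M[0, 2] * Zc + M[0, 3]),
                  v * (M[2, 2] * Zc + M[2, 3]) - (M[1, 2] * Zc + M[1, 3])])
    Xc, Yc = np.linalg.solve(A, b)
    return Xc, Yc

# Hypothetical pinhole-like M with a 500 px focal length; (50, -25) is
# the image of the camera-frame point (0.2, -0.1, 2.0) under this M.
M = np.array([[500.0, 0.0, 0.0, 0.0],
              [0.0, 500.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
Xc, Yc = solve_XY(50.0, -25.0, 2.0, M)
```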
3. The calibration method for a structured-light measurement system based on the geometric distribution characteristics of the projected fringes as claimed in claim 1, characterized in that the coordinates (X_W, Y_W, Z_W) under the world coordinate system are obtained by converting the coordinates under the camera coordinate system with coordinate transformation formula (20), where R is the rotation matrix and T is the translation vector;
[X_W  Y_W  Z_W]^T = R [X_C  Y_C  Z_C]^T + T    (20)
During calibration, a target is placed in the camera's field of view and moved; at each position, the structured-light fringes are projected and an image is captured. The coordinates of the target's feature points under the camera coordinate system are obtained with an uncertain-viewpoint calibration method, and image processing yields the image coordinates of the feature points and their characteristic values. Substituting these known quantities into formula (17) and formula (19) gives the system parameters by calculation. During actual measurement, the coordinates of a measured point under the camera coordinate system are obtained from formula (18) and formula (19), and then converted into coordinates under the world coordinate system by formula (20).
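The final conversion of formula (20) is a single rotation plus translation. As a sketch (the R and T values here are hypothetical, chosen only to illustrate the mapping):

```python
import numpy as np

def camera_to_world(Pc, R, T):
    """Apply formula (20): map a camera-frame point Pc to the world
    frame with rotation matrix R and translation vector T."""
    return R @ np.asarray(Pc, dtype=float) + np.asarray(T, dtype=float)

# Hypothetical pose: 90-degree rotation about the camera Z axis plus a shift.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
T = np.array([10.0, 0.0, 0.0])
Pw = camera_to_world([1.0, 0.0, 5.0], R, T)
```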
CN201611072978.4A 2016-11-29 2016-11-29 Project striped geometry distribution characteristics structured light measurement system scaling method Expired - Fee Related CN106595517B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611072978.4A CN106595517B (en) 2016-11-29 2016-11-29 Project striped geometry distribution characteristics structured light measurement system scaling method


Publications (2)

Publication Number Publication Date
CN106595517A true CN106595517A (en) 2017-04-26
CN106595517B CN106595517B (en) 2019-01-29

Family

ID=58593675

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611072978.4A Expired - Fee Related CN106595517B (en) 2016-11-29 2016-11-29 Project striped geometry distribution characteristics structured light measurement system scaling method

Country Status (1)

Country Link
CN (1) CN106595517B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0538688A (en) * 1991-07-30 1993-02-19 Nok Corp Coordinate system calibrating method for industrial robot system
US5227985A (en) * 1991-08-19 1993-07-13 University Of Maryland Computer vision system for position monitoring in three dimensions using non-coplanar light sources attached to a monitored object
CN101216296A (en) * 2008-01-11 2008-07-09 天津大学 Binocular vision rotating axis calibration method
CN101245994A (en) * 2008-03-17 2008-08-20 南京航空航天大学 Calibration method for object surface three-dimensional contour structure light measurement system
CN103411553A (en) * 2013-08-13 2013-11-27 天津大学 Fast calibration method of multiple line structured light visual sensor


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018228013A1 (en) * 2017-06-12 2018-12-20 北京航空航天大学 Front coated plane mirror-based structured light parameter calibration device and method
US10690492B2 (en) 2017-06-12 2020-06-23 Beihang University Structural light parameter calibration device and method based on front-coating plane mirror
CN110542540A (en) * 2019-07-18 2019-12-06 北京的卢深视科技有限公司 optical axis alignment correction method of structured light module
CN111161358A (en) * 2019-12-31 2020-05-15 华中科技大学鄂州工业技术研究院 Camera calibration method and device for structured light depth measurement
CN111161358B (en) * 2019-12-31 2022-10-21 华中科技大学鄂州工业技术研究院 Camera calibration method and device for structured light depth measurement
CN113188478A (en) * 2021-04-28 2021-07-30 伏燕军 Mixed calibration method for telecentric microscopic three-dimensional measurement system
CN113418472A (en) * 2021-08-24 2021-09-21 深圳市华汉伟业科技有限公司 Three-dimensional measurement method and system

Also Published As

Publication number Publication date
CN106595517B (en) 2019-01-29

Similar Documents

Publication Publication Date Title
CN106595517A (en) Structured light measuring system calibration method based on projecting fringe geometric distribution characteristic
CN102183213B (en) Aspherical mirror detection method based on phase measurement deflection technology
CN105300316B (en) Optical losses rapid extracting method based on grey scale centre of gravity method
CN103559707B (en) Based on the industrial fixed-focus camera parameter calibration method of motion side's target earnest
CN103559735A (en) Three-dimensional reconstruction method and system
CN106403838A (en) Field calibration method for hand-held line-structured light optical 3D scanner
CN110378969A (en) A kind of convergence type binocular camera scaling method based on 3D geometrical constraint
CN105910584B (en) Large scale dynamic photogrammtry system it is high-precision fixed to and orientation accuracy evaluation method
CN109827521A (en) Calibration method for rapid multi-line structured optical vision measurement system
CN104574415A (en) Target space positioning method based on single camera
Tutsch et al. Optical three-dimensional metrology with structured illumination
CN101149836A (en) Three-dimensional reconfiguration double pick-up camera calibration method
CN104422425A (en) Irregular-outline object space attitude dynamic measuring method
CN106546193B (en) Three-dimensional measurement method and system for surface of high-reflection object
Li et al. Monocular-vision-based contouring error detection and compensation for CNC machine tools
CN106489062A (en) System and method for measuring the displacement of mobile platform
CN105043304A (en) Novel calibration plate and calibration method for performing length measurement by using calibration plate
CN101936716B (en) Contour measuring method
CN103258327B (en) A kind of single-point calibration method based on two degrees of freedom video camera
CN103697811A (en) Method of obtaining three-dimensional coordinates of profile of object through combining camera and structural light source
CN103559710B (en) A kind of scaling method for three-dimensional reconstruction system
CN110248179B (en) Camera pupil aberration correction method based on light field coding
CN109341588B (en) Binocular structured light three-system method visual angle weighted three-dimensional contour measurement method
CN110146032A (en) Synthetic aperture camera calibration method based on optical field distribution
CN114170321A (en) Camera self-calibration method and system based on distance measurement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190129

Termination date: 20201129
