CN108801218B - High-precision orientation and orientation precision evaluation method of large-size dynamic photogrammetry system - Google Patents

High-precision orientation and orientation precision evaluation method of large-size dynamic photogrammetry system

Info

Publication number
CN108801218B
CN108801218B (application CN201810319004.4A)
Authority
CN
China
Prior art keywords
orientation
measurement
length
point
space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810319004.4A
Other languages
Chinese (zh)
Other versions
CN108801218A (en)
Inventor
董明利
孙鹏
李巍
王君
燕必希
吕乃光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Information Science and Technology University
Original Assignee
Beijing Information Science and Technology University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Information Science and Technology University
Priority to CN201810319004.4A
Publication of CN108801218A
Application granted
Publication of CN108801218B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00: Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C15/00: Surveying instruments or accessories not provided for in groups G01C1/00 - G01C13/00

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a high-precision orientation and orientation-precision evaluation method for a large-size dynamic photogrammetric system, comprising the following steps: a) selecting a reference scale, arranging coded points at its two ends, and metering the length of the reference scale; b) moving the reference scale and adjusting its posture in the measurement space in a first manner, and taking pictures of the reference scale in a second manner; c) performing a preliminary relative positioning and orientation of the measurement system; d) performing self-calibrating bundle adjustment constrained by multiple lengths, the adjustment parameters comprising the principal point, principal distance, radial distortion, eccentric distortion, in-plane distortion, exterior orientation parameters and space point coordinates; e) performing traceability evaluation of the orientation precision of the measurement system.

Description

High-precision orientation and orientation precision evaluation method of large-size dynamic photogrammetry system
This application is a divisional application of "High-precision orientation and orientation precision evaluation method of large-size dynamic photogrammetry system", filed on May 6, 2016 under application number CN201610297902.5.
Technical Field
The invention relates to a high-precision orientation and orientation precision evaluation method, in particular to a high-precision orientation and orientation precision evaluation method of a large-size dynamic photogrammetry system.
Background
A dynamic photogrammetry system consists of two or more cameras capable of acquiring images at high speed, which synchronously image the target points to be measured. By calibrating the projection imaging model of each camera, the spatial rays can be recovered from the image-plane points, and the space coordinates of the measured points can then be computed by intersection. The imaging model of a camera comprises interior and exterior orientation parameters: the interior orientation parameters include the principal point, the principal distance and distortion parameters of various types and orders, while the exterior orientation parameters are the position and pointing angles of the camera in the space coordinate system. Solving for the exterior orientation parameters constitutes the positioning and orientation of the measurement system.
At present it is generally acknowledged that the highest positioning and orientation precision is obtained by self-calibrating bundle adjustment, because this technique accounts for the influence of changes in the interior orientation parameters on the measurement result during the adjustment, and the resulting interior and exterior orientation parameters are optimal for the measurement environment and the measurement network. The most widely used self-calibration methods in stereo-vision measurement are Tsai's two-step calibration and Zhang Zhengyou's planar calibration; the commonly used calibration objects include three-dimensional calibration objects, planar checkerboard calibration plates, planar dot calibration plates and the like. These calibration methods essentially generate a measurement network with multiple control points through relative motion between the calibration object and the camera, thereby realizing self-calibrating bundle adjustment of a single camera; the orientation relation between the two cameras is then solved in a common coordinate system.
This type of method is used not only in machine vision but also in structured-light measurement, and in so-called high-precision large-size stereo-vision measurement systems. It is suited to smaller measurement ranges (typically less than 1 m x 1 m), because the calibration accuracy is inversely related to the size of the measured volume; satisfactory calibration accuracy can be obtained when it is used for structured-light measurement or microscopic measurement in small spaces.
Self-calibration based on a calibration object has the following defects: the adjustment focuses on the interior orientation parameters, and the orientation relation between cameras is not directly optimized or evaluated; the coordinate accuracy of the target points used as control points on the calibration plate introduces systematic errors; differences in material and imaging quality between the calibration-plate targets and the measured points introduce systematic errors; and apart from estimates of image-plane error or space-coordinate error, there is no objective and reliable spatial evaluation index. The method is therefore unsuitable for large-size measurement tasks that require high, traceable accuracy.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a high-precision orientation and orientation-precision evaluation method for a large-size dynamic photogrammetric system, comprising the following steps: a) selecting a reference scale and arranging coded points at its two ends for length measurement; b) moving the reference scale and adjusting its posture in the measurement space in a first manner, and taking pictures of the reference scale in a second manner; c) performing a preliminary relative positioning and orientation of the measurement system; d) performing self-calibrating bundle adjustment constrained by multiple lengths, the adjustment parameters comprising the principal point, principal distance, radial distortion, eccentric distortion, in-plane distortion, exterior orientation parameters and space point coordinates; e) performing traceability evaluation of the orientation precision of the measurement system.
Preferably, the first manner of step b is: the normal direction of the coded points is kept parallel to the bisector of the angle between the two cameras' optical axes, and the reference scale covers the volume of the measured object with variation in the depth direction.
Preferably, the second mode of step b is:
b1) uniformly dividing the measured space, with at least 15 division nodes;
b2) moving the reference scale to a node position;
b3) at each node position, rotating the reference scale and holding the pose briefly at 0°, 45°, 90° and 135° to the horizontal;
b4) controlling the two cameras to shoot reference-scale images simultaneously;
b5) repeating steps b3) and b4) until all spatial node positions have been traversed, obtaining a global adjustment measurement network of pictures of the reference scale at different positions and in different postures.
Preferably, steps b3) and b4) are repeated at least 60 times in step b5) to traverse all spatial node positions.
Preferably, the step c includes the steps of:
c1) determining the solution space of the fundamental matrix by a five-point method: 5 imaging points are selected automatically from the reference-scale positions, namely the points closest to the centre and the four vertices of the picture, using the criteria
abs(x_li) + abs(y_li) + abs(x_ri) + abs(y_ri) → min
(x_li) + (y_li) + (x_ri) + (y_ri) → max
(-x_li) + (y_li) + (-x_ri) + (y_ri) → max
(-x_li) + (-y_li) + (-x_ri) + (-y_ri) → max
(x_li) + (-y_li) + (x_ri) + (-y_ri) → max
where [x_li y_li] are the image-plane coordinates of the i-th point acquired by the left camera and [x_ri y_ri] are the image-point coordinates of the i-th point acquired by the right camera.
c2) obtaining the solution of the essential matrix by an elimination method.
Preferably, obtaining the solution of the essential matrix in step c2) comprises the following three key steps:
c21) establishing a tenth-degree polynomial in the unknown w;
c22) solving for the real roots of the tenth-degree polynomial of step c21);
c23) selecting the correct solution of the essential matrix.
Preferably, a Toeplitz matrix is used in the method of establishing the tenth-degree polynomial in w in step c21):
c211) describing all polynomials in array form:
A(x) = a_m x^m + a_{m-1} x^{m-1} + ... + a_2 x^2 + a_1 x + a_0,  A = [a_m a_{m-1} ... a_2 a_1 a_0]^T
B(x) = b_n x^n + b_{n-1} x^{n-1} + ... + b_2 x^2 + b_1 x + b_0,  B = [b_n b_{n-1} ... b_2 b_1 b_0]^T
c212) computing polynomial multiplication programmatically with the Toeplitz matrix:
C = T(A)·B
where T(A) is the Toeplitz matrix corresponding to polynomial A and C is the coefficient vector of the product polynomial; T(A) has a number of columns equal to the number of elements of B and a number of rows equal to (m+1)+(n+1)-1.
Preferably, polynomial division may likewise be computed programmatically with a Toeplitz matrix in step c212), comprising the steps of:
c213) describing n in terms of the Toeplitz matrix of d: n = T(d)q + r
where n is the numerator polynomial, d the denominator polynomial, q the quotient and r the remainder;
c214) solving for the optimal quotient of the polynomial division: q = (T^T T)^(-1) T^T n, with T = T(d).
Preferably, the method for solving the real roots of the tenth-degree polynomial in step c22) is as follows:
c221) establishing the Sturm sequence of the tenth-degree polynomial:
p_0(x) = p(x)
p_1(x) = p'(x)
p_2(x) = -rem(p_0, p_1)
p_3(x) = -rem(p_1, p_2)
...
p_m(x) = -rem(p_{m-2}, p_{m-1}),  rem(p_{m-1}, p_m) = 0
where rem denotes the polynomial remainder and p(x) is the known tenth-degree polynomial.
c222) searching for all single-root intervals of the polynomial by recursive bisection combined with Sturm's theorem;
c223) after the single-root intervals are obtained, solving a numerical value of the real root of |C(w)| = 0 on each single-root interval by bisection.
Preferably, the step d comprises the steps of:
d1) acquiring a camera imaging model;
d2) establishing an error equation for the photogrammetric system;
d3) carrying out self-adaptive proportion adjustment on the error equation;
d4) carrying out block operation on a normal equation;
d5) obtaining the self-calibrating bundle adjustment with length constraints;
d6) evaluating the accuracy of the measurement system.
Preferably, step e performs the traceability evaluation of the orientation precision of the measurement system using a direct linear transformation solution, with the following steps:
e1) obtaining the error equation of a single imaging point;
e2) when the measurement system consists of two cameras, obtaining the set of error equations corresponding to a space point [X Y Z];
e3) obtaining the corresponding normal equation and coordinate corrections, and optimizing the space-point coordinates through multiple iterations;
e4) obtaining the measured values of the reference-scale lengths at all positions;
e5) obtaining the average of the length measurements of the reference scale at all positions;
e6) scaling the orientation result by the ratio of the average length to the metered length, and analysing the uncertainty of the length measurements to obtain the orientation-precision evaluation;
e7) obtaining the quality parameter of the measuring instrument;
e8) calculating the relative error of each length;
e9) calculating the uncertainty of the relative error of the length measurements.
In summary, the high-precision positioning and orientation method for large-size multi-camera photogrammetry of the invention has the advantages that the measurement system remains stable, positioning and orientation are completed on the measurement site, machining and metering of the calibration object are simple and convenient, and the calibration procedure is simple. A traceable evaluation of the length-measurement error of the measurement system is given when the orientation process finishes. The following technical effects are obtained:
1. positioning and orientation of the multi-camera photogrammetry system are performed with a reference scale carrying two retro-reflective target points (or spherical retro-reflective marker points); the reference scale is imaged at multiple positions and in different postures within the measurement range, forming a length orientation field that covers the whole measurement range, and the space-point coordinates in the orientation field need not be known;
2. a numerical method for relative orientation by the five-point method, together with a technique for selecting the correct solution using a spatially orthogonal length constraint;
3. multi-camera self-calibrating bundle adjustment is an indirect adjustment with multiple constraints. The reference lengths at multiple positions and in multiple postures are used as three-dimensional constraints controlling the whole adjustment, which decouples the correlation between interior and exterior orientation parameters and realizes full-parameter self-calibrating bundle adjustment (including the principal point, principal distance, radial distortion, eccentric distortion, in-plane distortion, exterior orientation parameters and space-point coordinates);
4. when the orientation process finishes, a traceable precision evaluation of the whole positioning and orientation process is made from the statistics of the multiple length-measurement errors.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is an experimental set-up of a high precision positioning and orientation method for large scale multi-camera photogrammetry according to the present invention;
FIG. 2 is an experimental schematic of the system;
FIG. 3 is an image of a reference scale in the system in two cameras;
fig. 4 is a diagram showing the relative positions between the scale and the two cameras at different positions and postures in space.
Detailed Description
The objects and functions of the present invention and methods for accomplishing the same will be apparent by reference to the exemplary embodiments. However, the present invention is not limited to the exemplary embodiments disclosed below, and may be implemented in various forms. The nature of the description is merely to assist those skilled in the relevant art in a comprehensive understanding of the specific details of the invention.
The invention provides a large-size dynamic photogrammetry system comprising cameras 101, a power supply 102, network cables 103, signal lines 104 and a host computer 105. The power supply 102 powers camera 1, camera 2 and the host computer 105; network cable 103a connects camera 1 to the host computer 105 and network cable 103b connects camera 2 to the host computer 105, for communication between the cameras and the host computer; signal line 104a likewise connects camera 1 to the host computer 105 and signal line 104b connects camera 2 to the host computer 105, for transmitting signals between the host computer 105 and the cameras 101. The host computer 105 is further provided with a USB signal card 106, through which it sends a synchronization signal and controls the two cameras 101 to acquire images synchronously via the signal lines 104.
The experimental layout of the system is shown in fig. 2: the measurement space 201 mainly contains the two cameras 202 and the reference scale 203, and provides the environment for experimentally verifying the high-precision orientation and the orientation-precision evaluation method of the system.
The invention provides a high-precision orientation and orientation precision evaluation method of a large-size dynamic photogrammetric system, which comprises the following steps of:
a) selecting a reference scale 203, and arranging coding points at two ends of the reference scale 203 to measure the length of the reference scale;
b) moving the reference scale 203 in a first way and adjusting the posture of the reference scale in a measurement space 201, and taking pictures of the reference scale in a second way by using the camera 202;
c) performing primary relative positioning orientation on the measurement system;
d) performing self-calibrating bundle adjustment constrained by multiple lengths, the adjustment parameters comprising the principal point, principal distance, radial distortion, eccentric distortion, in-plane distortion, exterior orientation parameters and space point coordinates;
e) performing traceability evaluation of the orientation precision of the measurement system.
The method comprises the following specific steps:
step b:
According to an embodiment of the present invention, the first manner of step b is: the normal direction of the coded points is kept parallel to the bisector of the angle between the two cameras' optical axes, and the reference scale covers the volume of the measured object with variation in the depth direction;
According to an embodiment of the present invention, the second manner of step b is: b1) dividing the measured space region into at least 15 division nodes; b2) moving the reference scale to a node position; b3) changing the posture of the reference scale and holding it steady briefly at 0°, 45°, 90° and 135°; b4) controlling the two cameras to acquire a reference-scale image simultaneously at each steady pose; b5) repeating steps b3) and b4) until all spatial node positions have been traversed, obtaining a global adjustment measurement network of pictures of the reference scale at different positions and in different postures;
wherein steps b3) and b4) are repeated at least 60 times in step b5) to traverse all spatial node positions.
Based on the method in the step b, the design and calibration operations of the reference scale are as follows:
the main body of the reference scale is a carbon fiber rod, and two light return reflection encoding points are fixed at two ends of the rod and serve as end points of the metering length. In the calibration process, the reference scale at a certain position is imaged in two cameras, as shown in fig. 3, wherein a white point region is a return light reflection point 301 attached to two ends of the reference scale;
and (4) holding and moving the reference scale to different positions of a measurement space by hand, adjusting the posture of the reference scale to enable the normal direction of the coding point to be approximately parallel to the intersecting angular bisector of the optical axes of the two cameras, locating in a public view field, and shooting a picture of the reference scale. The position of the reference scale should cover the space to be measured and have a change in the depth direction, as shown in fig. 4, the three-dimensional space includes coordinates 402 of two cameras and a plurality of reference scales 401 at different positions, and the return light reflection points 403 are attached to two ends of the reference scales for calibrating the length of the reference scales. The number of the taken pictures is not less than 60.
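For illustration only, the acquisition plan described above can be sketched in a few lines of Python; this is not part of the patent, and the node grid, the four bar angles and the synchronized-capture call named trigger_synchronized_capture are assumptions standing in for the manual procedure and the host-computer camera interface:

import itertools

def plan_acquisition(nx=3, ny=5, nz=1, angles=(0, 45, 90, 135)):
    # Enumerate (node, bar-angle) poses; the measured space must be divided
    # into at least 15 nodes, each imaged at 0/45/90/135 degrees.
    nodes = list(itertools.product(range(nx), range(ny), range(nz)))
    assert len(nodes) >= 15, "at least 15 division nodes are required"
    return [(node, angle) for node in nodes for angle in angles]

poses = plan_acquisition()          # 3 x 5 x 1 nodes x 4 angles = 60 image pairs
# for node, angle in poses:
#     # move and rotate the hand-held reference scale to (node, angle), then:
#     trigger_synchronized_capture()    # hypothetical call firing both cameras
print(len(poses))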
Step c:
according to one embodiment of the invention, step c comprises the steps of:
c1) determining the solution space of the fundamental matrix by the five-point method: 5 imaging points are selected automatically from the reference-scale positions, namely the points closest to the centre and the four vertices of the picture, using the criteria
abs(x_li) + abs(y_li) + abs(x_ri) + abs(y_ri) → min
(x_li) + (y_li) + (x_ri) + (y_ri) → max
(-x_li) + (y_li) + (-x_ri) + (y_ri) → max
(-x_li) + (-y_li) + (-x_ri) + (-y_ri) → max
(x_li) + (-y_li) + (x_ri) + (-y_ri) → max
where [x_li y_li] are the image-plane coordinates of the i-th point acquired by the left camera and [x_ri y_ri] are the image-point coordinates of the i-th point acquired by the right camera.
The points satisfying these criteria are the ones closest to the expected positions (the image centre and the four corners).
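A minimal Python sketch of this automatic selection is given below, assuming arrays of matched, principal-point-centred image coordinates; the function name and array layout are illustrative and not taken from the patent:

import numpy as np

def select_five_points(xl, yl, xr, yr):
    # xl, yl, xr, yr: corresponding image-plane coordinates in the left/right camera.
    xl, yl, xr, yr = map(np.asarray, (xl, yl, xr, yr))
    centre       = np.argmin(np.abs(xl) + np.abs(yl) + np.abs(xr) + np.abs(yr))
    top_right    = np.argmax( xl + yl + xr + yr)
    top_left     = np.argmax(-xl + yl - xr + yr)
    bottom_left  = np.argmax(-xl - yl - xr - yr)
    bottom_right = np.argmax( xl - yl + xr - yr)
    return [centre, top_right, top_left, bottom_left, bottom_right]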
Because each pair of corresponding image points satisfies
X'^T E X = 0 (3.2)
where E is the essential matrix and X, X' are the coordinates of the corresponding points in the left and right images, respectively, normalized by their respective intrinsic parameters.
Substituting the selected five points into the above equation yields a homogeneous system of five linear equations in the nine entries of E (equation (3.3), image in the original).
The basic solution system (null-space basis) of this system is obtained by SVD (singular value decomposition), and it spans the solution space of the essential matrix:
E = wE_w + xE_x + yE_y + E_z (3.4)
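The construction of the five-equation system and of the four-matrix basis can be sketched as follows; this is a hedged illustration assuming the image coordinates have already been normalized by the intrinsic parameters, and the variable names are not from the patent:

import numpy as np

def essential_solution_space(pts_left, pts_right):
    # pts_left, pts_right: (5, 2) arrays of normalized image coordinates.
    A = np.zeros((5, 9))
    for i, ((x, y), (xp, yp)) in enumerate(zip(pts_left, pts_right)):
        X, Xp = np.array([x, y, 1.0]), np.array([xp, yp, 1.0])
        A[i] = np.kron(Xp, X)            # row such that A[i] @ vec(E) = X'^T E X
    _, _, Vt = np.linalg.svd(A)          # basic solution system = null-space basis
    Ew, Ex, Ey, Ez = (v.reshape(3, 3) for v in Vt[-4:])
    return Ew, Ex, Ey, Ez                # E = w*Ew + x*Ex + y*Ey + Ez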
c2) obtaining the solution of the essential matrix by an elimination method;
the essential matrix has two properties:
Figure GDA0002647173300000082
using the above equation, 10 equations for the unknowns w, x, y can be derived, where x, y, x2,xy,y2,x3,x2y,xy2,y3As an unknown quantity, w is a known quantity, rewriting the 10 equations, eliminating x and y, yields the following system of equations:
Figure GDA0002647173300000091
where C (w) is a polynomial matrix for w.
Since the homogeneous linear system (3.6) has a non-zero solution, the determinant of C(w) must be zero. In principle, the determinant of C(w) is expanded as a polynomial, the real roots of that polynomial give numerical values of w and hence of C(w), and the non-zero solution of (3.6) then yields the values of x and y. This necessarily involves expressing the determinant of a 10th-order symbolic matrix as a polynomial and solving the real roots of the resulting high-order polynomial.
According to one embodiment of the invention, obtaining the solution of the essential matrix in step c2) comprises the following key steps.
c21) establishing a tenth-degree polynomial in the unknown w;
The determinant is evaluated programmatically by the following method, which is suitable for both symbolic polynomials and numerical operation (the recursive expansion algorithm is given as an image in the original and is not reproduced here). The algorithm involves multiplication and division of polynomials.
according to one embodiment of the present invention, step c21 is implemented by using a Toeplitz matrix in a method of building a 10 th order polynomial on w by:
c211) describing all polynomials in array form:
A(x) = a_m x^m + a_{m-1} x^{m-1} + ... + a_2 x^2 + a_1 x + a_0,  A = [a_m a_{m-1} ... a_2 a_1 a_0]^T
B(x) = b_n x^n + b_{n-1} x^{n-1} + ... + b_2 x^2 + b_1 x + b_0,  B = [b_n b_{n-1} ... b_2 b_1 b_0]^T (3.7)
c212) computing polynomial multiplication programmatically with the Toeplitz matrix:
C = T(A)·B
where T(A) is the Toeplitz matrix corresponding to polynomial A and C is the coefficient vector of the product polynomial; T(A) has a number of columns equal to the number of elements of B and a number of rows equal to (m+1)+(n+1)-1.
According to an embodiment of the present invention, polynomial division may likewise be computed programmatically with a Toeplitz matrix in step c212), comprising the following steps:
c213) describing n in terms of the Toeplitz matrix of d: n = T(d)q + r
where n is the numerator polynomial, d the denominator polynomial, q the quotient and r the remainder;
c214) solving for the optimal quotient of the polynomial division: q = (T^T T)^(-1) T^T n, with T = T(d).
Solving polynomial division with the Toeplitz matrix avoids the error accumulation of long division and the ill-conditioning caused by dividing a large number by a small one, and yields the optimal solution in the least-squares sense.
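A small numerical sketch of both Toeplitz-matrix operations is given below; coefficient vectors are stored highest degree first, as in (3.7), and the helper names are illustrative only:

import numpy as np

def toeplitz_of(a, n_cols):
    # Convolution (Toeplitz) matrix T(a): (len(a) + n_cols - 1) rows, n_cols columns.
    a = np.asarray(a, dtype=float)
    T = np.zeros((len(a) + n_cols - 1, n_cols))
    for j in range(n_cols):
        T[j:j + len(a), j] = a
    return T

def poly_mul(a, b):
    # Coefficients of A(x)*B(x), computed as T(A) @ B as in the multiplication formula above.
    b = np.asarray(b, dtype=float)
    return toeplitz_of(a, len(b)) @ b

def poly_div(n, d):
    # Least-squares quotient q = (T^T T)^-1 T^T n with T = T(d), plus the remainder.
    # Assumes deg(n) >= deg(d).
    n = np.asarray(n, dtype=float)
    q_len = len(n) - len(d) + 1          # deg(q) = deg(n) - deg(d)
    T = toeplitz_of(d, q_len)
    q, *_ = np.linalg.lstsq(T, n, rcond=None)
    return q, n - T @ q

q, r = poly_div([1.0, 3.0, 2.0], [1.0, 1.0])   # (x^2+3x+2)/(x+1): q = [1, 2], r = 0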
c22) solving for the real roots of the tenth-degree polynomial of step c21);
The real roots of a high-order polynomial can only be found numerically, for example by Newton's method or bisection, and the key problem is to determine the single-root intervals.
According to an embodiment of the present invention, the single-root intervals are searched using Sturm's theorem. For a known polynomial p(x), its Sturm sequence is determined as follows:
p_0(x) = p(x)
p_1(x) = p'(x)
p_2(x) = -rem(p_0, p_1)
p_3(x) = -rem(p_1, p_2)
...
p_m(x) = -rem(p_{m-2}, p_{m-1}),  rem(p_{m-1}, p_m) = 0
where rem denotes the polynomial remainder.
Sturm's theorem gives the number of real roots of a polynomial p(x) in an interval (a, b): if the Sturm-sequence values change sign c(a) times at the end point a and c(b) times at the end point b, then p(x) has c(a) - c(b) real roots in the half-open interval. The invention uses a recursive bisection technique to search all single-root intervals within a wide numerical range (a, b); the pseudocode of the corresponding functions is:
determine the Sturm sequence sturmSequence of the polynomial p(x);
function [rootIntervals] = rootIntervalsDetect(a, b, sturmSequence)
determine the number of roots numRoot in (a, b) by Sturm's theorem;
(the remainder of the pseudocode is given as an image in the original)
after the single-root intervals are obtained, the real root of | c (w) | 0 is solved on each single-root interval by using a bisection method. Substituting the solved w into the expression of C (w) in the formula (3.6), solving unknown terms about x and y through SVD, and further obtaining the values of x and y. Substituting the solved w, x and y into the formula (3.4) to obtain the intrinsic matrix E, wherein the number of the solutions is the same as the number of the w real roots.
c23) The solution of the essential matrix is judged.
Each w corresponds to a set of camera pose relations and the corresponding reconstructed space-point coordinates. In special cases, such as when a planar scene is imaged, the conventional criterion of minimum image-plane error cannot identify the correct solution, because two solutions have similarly small image-plane errors. To solve this discrimination problem fundamentally, the invention uses spatial information as a constraint and takes the length ratio of the reference scale at two orthogonal positions as the criterion: the solution whose reconstructed space-length ratio is closest to 1 is the final solution of the essential matrix.
The translation and the space coordinates are further scaled by the ratio between the metered space length and the reconstructed length. The obtained exterior orientation parameters and space coordinates serve as initial values for the subsequent self-calibrating bundle adjustment.
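As a purely illustrative sketch of this judgment-and-scaling step (the candidate data structure and field names are assumptions), one candidate per real root w is kept according to the orthogonal-length-ratio criterion and then scaled to the metered length:

import numpy as np

def pick_and_scale(candidates, metered_length):
    # candidates: list of dicts {'R': ..., 't': ..., 'bars': array of shape (2, 2, 3)},
    # where 'bars' holds the reconstructed end points of the reference scale at the
    # two orthogonal poses used for the judgment.
    def bar_length(endpoints):
        return np.linalg.norm(endpoints[0] - endpoints[1])
    best = min(candidates,
               key=lambda c: abs(bar_length(c['bars'][0]) / bar_length(c['bars'][1]) - 1.0))
    s = metered_length / np.mean([bar_length(p) for p in best['bars']])
    best['t'] = s * np.asarray(best['t'])        # scale the translation ...
    best['bars'] = s * np.asarray(best['bars'])  # ... and the space coordinates
    return best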
Step d:
according to one embodiment of the present invention, step d comprises the steps of:
d1) acquiring a camera imaging model;
by [ X ]0 Y0 Z0 Az El Ro]TDescribing the orientation of the camera, the coordinates of the spatial points are [ X Y Z ]]TCoordinates [ X Y Z ] in the Camera coordinate System]TThe description is as follows:
Figure GDA0002647173300000121
wherein the content of the first and second substances,
Figure GDA0002647173300000122
Figure GDA0002647173300000123
The linear projection coordinates x_l and y_l of this space point are given by equation (4.3) (image in the original), and its object distance s' relative to the imaging system by equation (4.4) (image in the original).
Research has shown that the radial distortion parameters valid for a space point at object distance s' on the image plane of the imaging system are obtained by interpolating between the radial distortion parameters calibrated at two distances (equation (4.5), given as images in the original), where s is the focal distance of the imaging system, the two sets of calibrated radial distortion parameters correspond to the two calibration distances, and the two distances correspond to the image distances of the Gaussian imaging formula.
Let the principal-point offsets of the camera be x_p and y_p, and let the eccentric distortion and in-plane distortion parameters corresponding to this space point be P_1, P_2, B_1, B_2. The distortion of the image point located at (x_l, y_l) is given by equation (4.6) (images in the original).
The final image point coordinates of the space point on the image plane of the camera are given by the imaging model, equation (4.7) (image in the original).
The exterior orientation parameters and the space coordinates are coupled to the radial distortion parameters through the distortion terms and s'.
d2) Establishing an error equation for the photogrammetric system;
For a two-station photogrammetric system, let (x_ij, y_ij) denote the imaging coordinates of the j-th space point in the i-th photo. The error equation is then given as an image in the original, where X_ij^0 collects all parameters related to the imaging point (x_ij, y_ij), (v_xij, v_yij) are the residuals, and A_ij and B_ij are the Jacobian matrices of the imaging model (4.7) with respect to the exterior orientation parameters and camera imaging parameters of the i-th picture and to the coordinates of the j-th space point (their block forms are given as images in the original).
A_ij is obtained by the following derivation (the intermediate partial-derivative expressions are given as images in the original).
from the relationship described by equation (4.4), it can be seen that:
Figure GDA0002647173300000152
Figure GDA0002647173300000153
Figure GDA0002647173300000154
through the description of the formula (4.3), the method can be usedSolving for xlAnd ylFor X0The partial derivatives of (a) are:
Figure GDA0002647173300000155
then, in combination with equation (4.7), the observed value is for X in the orientation parameter0The partial derivatives of the terms are:
Figure GDA0002647173300000156
similarly, the partial derivatives of the observed values for other orientation parameters can be obtained:
Figure GDA0002647173300000157
Figure GDA0002647173300000158
The partial derivatives of the linear terms x_l and y_l with respect to each angle are more involved to describe and can be found in the relevant literature; only the partial derivatives of Δx and Δy with respect to each angle are analysed here (the expressions are given as images in the original).
The invention uses single-side self-calibrating bundle adjustment; the interior parameters participating in the adjustment are x_p, y_p, f, P_1, P_2, B_1, B_2 and the single-side radial distortion parameters (the corresponding partial-derivative expressions are given as images in the original).
B_ij is obtained analogously (its expressions are given as images in the original).
d3) carrying out block operation on a normal equation;
Suppose the two-camera measurement system performs photogrammetry on n reference-scale positions; the system of error equations is then equation (4.31) (image in the original), in which the subscripts i and jk denote the i-th camera and the k-th end point of the j-th reference length (i = 1, 2; j = 1, ..., n; k = 1, 2).
The parameter increment δ that minimizes v^T v is obtained from the normal equation (image in the original).
In view of the sparsity of A, A^T A and A^T l can be described by regular blocks indexed by picture and target point (the block expressions are given as images in the original).
d4) obtaining the self-calibrating bundle adjustment with length constraints;
The constraints are described by equation (4.32) (images in the original), where L is the metered length value and L_j the length at the j-th reference-scale position; the partial derivatives of each length with respect to the space coordinates of its end points are likewise given as images in the original.
The least-squares solution of equation (4.31) under the constraints (4.32) and the corresponding joint-coefficient (Lagrange multiplier) vector are then given as images in the original.
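Numerically, one Gauss-Newton step of this length-constrained adjustment can be sketched as an equality-constrained least-squares problem solved through bordered normal equations; this is a dense, illustrative formulation, whereas the patent's adjustment exploits the block sparsity described above:

import numpy as np

def constrained_adjustment_step(A, l, C, w):
    # Minimize ||A*delta - l||^2 subject to the linearized length constraints C*delta = w.
    n_par, n_con = A.shape[1], C.shape[0]
    K = np.block([[A.T @ A, C.T],
                  [C, np.zeros((n_con, n_con))]])
    rhs = np.concatenate([A.T @ l, w])
    sol = np.linalg.solve(K, rhs)
    return sol[:n_par], sol[n_par:]   # parameter corrections, joint coefficients (multipliers)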
d5) evaluating the accuracy of the measurement system;
The unit-weight error of the measurement system is given by an equation image in the original.
in order to ensure the orientation accuracy, it is recommended to place the reference scale at more than 60 positions.
The error covariance matrix of the interior and exterior orientation parameters of the two cameras and of the space coordinates of the reference-scale feature points is given by an equation image in the original.
d6) carrying out self-adaptive proportion adjustment on the error equation;
The unknowns to be solved differ greatly in magnitude: interior parameters such as K1, P1, P2, B1 and B2 are typically of order 10^-5, K2 of order 10^-7 and K3 of order 10^-11. This large spread makes matrix A of the error equation (4.31) ill-conditioned; combined with machine precision and the division of large numbers by small numbers that commonly arise in numerical computation, it can seriously affect the solution of the normal equation and often leads to rank deficiency. To unify the orders of magnitude of the terms in the error equation, an adaptive scaling is designed here.
At the beginning of each adjustment iteration, the order of magnitude M_j and the adjustment coefficient k_j of each column of the error equation are first computed (formula given as an image in the original).
Each column is then multiplied by its proportional coefficient, giving the adjusted error equation (image in the original).
the unknowns calculated using equation (4.31)
Figure GDA0002647173300000212
And to be solved
Figure GDA0002647173300000213
The relationship between them is:
Figure GDA0002647173300000214
Both simulation and experiment show that this adaptive scaling eliminates the numerical ill-conditioning of the normal equation and brings better adjustment stability and accuracy.
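The adaptive scaling can be sketched as follows; this is a hedged illustration, since the exact definition of M_j and k_j is given only as an image in the original, so a power-of-ten estimate from the column maxima is assumed here:

import numpy as np

def adaptive_scaled_solve(A, l):
    col_max = np.max(np.abs(A), axis=0)
    col_max[col_max == 0] = 1.0
    M = np.round(np.log10(col_max))      # order of magnitude of each column
    k = 10.0 ** (-M)                     # per-column adjustment coefficients
    delta_scaled = np.linalg.lstsq(A * k, l, rcond=None)[0]
    return k * delta_scaled              # map back to the original unknowns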
Step e
After the orientation process is finished, the interior and exterior orientation parameters of the two cameras are fixed and the space coordinates of the points to be measured are solved by least squares. Because the measured points are no longer correlated, they can be solved point by point, which avoids the cost of large matrix operations and improves the performance of dynamic measurement; the method is essentially a direct linear transformation solution.
According to an embodiment of the present invention, a specific method for performing traceability evaluation on the orientation accuracy of the measurement system comprises:
e1) the error equation of a single imaging point is equation (5.1) (image in the original); the sparse matrix B_ij is the matrix of partial derivatives of the image-plane coordinates with respect to the space coordinates, obtained from equations (4.30) and (4.31).
e2) when the measurement system consists of two cameras, the system of error equations corresponding to a space point [X Y Z] is equation (5.2) (image in the original).
e3) the corresponding normal equation and coordinate correction are given by equation (5.3) (image in the original); the space-point coordinates are optimized through multiple iterations.
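Because each point is solved independently once the cameras are oriented, the initial intersection can be written as a plain linear DLT from the two fixed 3x4 projection matrices. The sketch below is not the error equations and iterative correction of steps e1)-e3); it only illustrates the point-by-point character of the computation:

import numpy as np

def triangulate_point(P1, P2, uv1, uv2):
    # P1, P2: fixed 3x4 projection matrices; uv1, uv2: distortion-corrected image points.
    A = np.vstack([uv1[0] * P1[2] - P1[0],
                   uv1[1] * P1[2] - P1[1],
                   uv2[0] * P2[2] - P2[0],
                   uv2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                  # space coordinates [X Y Z]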
e4) the orientation-precision evaluation is a length-measurement evaluation that uses the data of the orientation process. The 2n reference-scale end-point coordinates are reconstructed with equation (5.3), and the measured values of the corresponding reference-scale lengths are obtained:
L_1, L_2, ..., L_n
e5) the average of these length measurements is
L̄ = (L_1 + L_2 + ... + L_n) / n
e6) the orientation result (the translation of the orientation parameters) is scaled using the ratio between the average length and the metered length; with the resulting scale factor s (defining equation given as an image in the original), the scaled lengths are
L'_1 = sL_1, L'_2 = sL_2, ..., L'_n = sL_n (5.6)
The uncertainty of the length-measurement results is then analysed to obtain the orientation-precision evaluation (equation (5.7), image in the original).
e7) the quality parameter of the measuring instrument is described by an integer multiple of the length uncertainty:
B = k·u(L'_i) (5.8)
e8) the relative error of each length is given by equation (5.9) (image in the original).
e9) the relative error of the scaled length measurements equals the relative error of the unscaled length measurements, so the uncertainty of the relative length-measurement error also equals the uncertainty of the relative error of the originally measured lengths, given by equation (5.10) (image in the original).
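Steps e4) to e9) can be summarized in a few lines of Python; this is a hedged sketch, since several of the defining equations are given only as images, so the sample-standard-deviation estimator and the coverage factor k = 2 are assumptions:

import numpy as np

def orientation_precision(lengths, metered_length, k=2):
    L = np.asarray(lengths, dtype=float)         # reconstructed lengths L_1 ... L_n
    s = metered_length / L.mean()                # scale factor from average to metered length
    L_scaled = s * L                             # L'_i = s * L_i
    u = L_scaled.std(ddof=1)                     # uncertainty of the scaled lengths
    rel_err = (L_scaled - metered_length) / metered_length
    return {'scale': s,
            'u_length': u,
            'B': k * u,                          # instrument quality parameter B = k*u(L'_i)
            'relative_error': rel_err,
            'u_relative': rel_err.std(ddof=1)}   # uncertainty of the relative error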
The following verification experiments were carried out for the high-precision orientation and orientation-precision evaluation method of the large-size dynamic photogrammetry system.
The measurement system used in the experiments consists of two industrial cameras, as shown in fig. 3, and a reference scale approximately 950 mm long is used for orientation, the reference length being indicated by the retro-reflective coded marker points at its two ends. The measurement environment has a volume of about 3.6 m × 2.4 m; the reference scale is moved and imaged within it, the self-calibrating positioning and orientation of the measurement system is completed with the described method, and the orientation precision of the system is evaluated with the described evaluation method. The results of three orientation experiments are shown in tables 1 and 2, where table 1 gives the orientation results of self-calibrating bundle adjustment targeting minimum image-plane error and table 2 gives the orientation results of the multi-length-constrained self-calibrating bundle adjustment introduced in the invention.
TABLE 1 Orientation results of conventional self-calibrating bundle adjustment
(table given as an image in the original, not reproduced)
TABLE 2 Orientation results of multi-length-constrained self-calibrating bundle adjustment
(table given as an image in the original, not reproduced)
Comparing the standard deviations of the image-plane errors of the three orientation experiments in tables 1 and 2, it can be seen that the conventional self-calibration method reduces the image-plane error in the x direction to a negligible level, one order of magnitude smaller than the image-plane error in the other direction. This is clearly incorrect, because imaging should show similar error levels in both directions. Such an adjustment introduces errors into the self-calibration and positioning-orientation results, and the corresponding length errors are large. In the multi-length-constrained self-calibration adjustment introduced in the invention, the space lengths act as constraints and the image-plane error is no longer the only factor determining the adjustment result, so the image-plane errors are more reasonable and the corresponding length-error level is clearly reduced.
The experiments show that using the relative error of length measurement as the evaluation index allows the analysis complexity caused by the metering error of the reference length to be neglected. If the length metering is traceable, the measurement system can also be evaluated through absolute measurement errors, and the evaluation result is then traceable as well.
In conclusion, the invention uses a single reference scale to complete the on-site positioning and orientation of a two-camera large-size dynamic photogrammetry system. Combining the five-point method with the space-length ratio for the preliminary orientation avoids losing the solution in special configurations and allows the correct solution to be identified reliably among multiple candidates. Multiple space lengths constrain the self-calibrating bundle adjustment, which overcomes the correlation between parameters and improves the overall orientation precision of the measurement system. The statistics of the length-measurement errors provide a traceable, objective evaluation index, and the experimental results show that the relative error of length measurement of the invention reaches 1/15000.
The above description is only exemplary of the invention and should not be taken as limiting the scope of the invention, which is intended to include all equivalent variations or modifications of the structure, features and principles of the invention as described in the claims.

Claims (1)

1. A high-precision orientation and orientation precision evaluation method for a large-size dynamic photogrammetric system comprises the following steps:
a) selecting a reference scale, and arranging coding points at two ends of the reference scale to measure the length of the reference scale;
b) in a measurement space, moving the reference scale in a first manner, adjusting the posture of the reference scale, and taking pictures of the reference scale in a second manner, wherein the first manner is: the normal direction of the coded points is parallel to the bisector of the angle between the two cameras' optical axes, and the reference scale covers the volume of the measured object with variation in the depth direction; and the second manner is:
b1) uniformly dividing the measured space, with at least 15 division nodes;
b2) moving the reference scale to a node position;
b3) at each node position, rotating the reference scale and holding the pose briefly at 0°, 45°, 90° and 135° to the horizontal;
b4) controlling the two cameras to shoot reference-scale images simultaneously;
b5) repeating steps b3) and b4) until all spatial node positions have been traversed, obtaining a global adjustment measurement network of pictures of the reference scale at different positions and in different postures;
c) performing a preliminary relative positioning orientation of the measurement system, comprising the steps of:
c1) determining the solution space of the fundamental matrix by a five-point method: 5 imaging points are selected automatically from the reference-scale positions, namely the points closest to the centre and the four vertices of the picture, using the criteria
abs(x_li) + abs(y_li) + abs(x_ri) + abs(y_ri) → min
(x_li) + (y_li) + (x_ri) + (y_ri) → max
(-x_li) + (y_li) + (-x_ri) + (y_ri) → max
(-x_li) + (-y_li) + (-x_ri) + (-y_ri) → max
(x_li) + (-y_li) + (x_ri) + (-y_ri) → max
where [x_li y_li] are the image-plane coordinates of the i-th point acquired by the left camera and [x_ri y_ri] are the image-point coordinates of the i-th point acquired by the right camera;
c2) obtaining the solution of the essential matrix by an elimination method, comprising the following key steps:
c21) establishing a tenth-degree polynomial in the unknown w, where w is one of the unknown coefficients of the basic solution system obtained in c1) that spans the solution space of the essential matrix, E = wE_w + xE_x + yE_y + E_z,
wherein a Toeplitz matrix is used in establishing the tenth-degree polynomial in w:
c211) describing all polynomials in array form:
A(x) = a_m x^m + a_{m-1} x^{m-1} + ... + a_2 x^2 + a_1 x + a_0,  A = [a_m a_{m-1} ... a_2 a_1 a_0]^T
B(x) = b_n x^n + b_{n-1} x^{n-1} + ... + b_2 x^2 + b_1 x + b_0,  B = [b_n b_{n-1} ... b_2 b_1 b_0]^T
c212) computing polynomial multiplication programmatically with the Toeplitz matrix:
C = T(A)·B
where T(A) is the Toeplitz matrix corresponding to polynomial A and C is the coefficient vector of the product polynomial; T(A) has a number of columns equal to the number of elements of B and a number of rows equal to (m+1)+(n+1)-1;
or computing polynomial division programmatically with a Toeplitz matrix, comprising the steps of:
c213) describing n in terms of the Toeplitz matrix of d: n = T(d)q + r,
where n is the numerator polynomial, d the denominator polynomial, q the quotient and r the remainder;
c214) solving for the optimal quotient of the polynomial division: q = (T^T T)^(-1) T^T n, with T = T(d);
c22) solving for the real roots of the tenth-degree polynomial of step c21);
c23) judging (selecting) the correct solution of the essential matrix;
d) performing self-calibrating bundle adjustment constrained by multiple length constraints, wherein the adjustment parameters comprise a principal point, a principal distance, radial distortion, eccentric distortion, in-plane distortion, exterior orientation parameters and space point coordinates;
e) performing traceability evaluation of the orientation precision of the measurement system, using a direct linear transformation solution and comprising the following steps:
e1) obtaining the error equation of a single imaging point (the equation is given as an image in the original);
e2) when the measurement system consists of two cameras, obtaining the system of error equations corresponding to a space point [X Y Z] (the equations are given as an image in the original);
e3) obtaining the corresponding normal equation and coordinate corrections, and optimizing the space-point coordinates through multiple iterations;
e4) obtaining the measured values of the reference-scale lengths at all positions;
e5) obtaining the average of the length measurements of the reference scale at all positions;
e6) scaling the orientation result by the ratio of the average length to the metered length, and analysing the uncertainty of the length measurements to obtain the orientation-precision evaluation;
e7) obtaining the quality parameter of the measuring instrument, described by an integer multiple of the length uncertainty:
B = k·u(L'_i);
e8) calculating the relative error of each length;
e9) calculating the uncertainty of the relative error of the length measurements, which equals the uncertainty of the relative error of the originally measured lengths (the expression is given as an image in the original).
CN201810319004.4A 2016-05-06 2016-05-06 High-precision orientation and orientation precision evaluation method of large-size dynamic photogrammetry system Active CN108801218B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810319004.4A CN108801218B (en) 2016-05-06 2016-05-06 High-precision orientation and orientation precision evaluation method of large-size dynamic photogrammetry system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610297902.5A CN105910584B (en) 2016-05-06 2016-05-06 Large scale dynamic photogrammtry system it is high-precision fixed to and orientation accuracy evaluation method
CN201810319004.4A CN108801218B (en) 2016-05-06 2016-05-06 High-precision orientation and orientation precision evaluation method of large-size dynamic photogrammetry system

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201610297902.5A Division CN105910584B (en) 2016-05-06 2016-05-06 Large scale dynamic photogrammtry system it is high-precision fixed to and orientation accuracy evaluation method

Publications (2)

Publication Number Publication Date
CN108801218A CN108801218A (en) 2018-11-13
CN108801218B true CN108801218B (en) 2021-07-02

Family

ID=56748538

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201610297902.5A Active CN105910584B (en) 2016-05-06 2016-05-06 Large scale dynamic photogrammtry system it is high-precision fixed to and orientation accuracy evaluation method
CN201810319004.4A Active CN108801218B (en) 2016-05-06 2016-05-06 High-precision orientation and orientation precision evaluation method of large-size dynamic photogrammetry system

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201610297902.5A Active CN105910584B (en) 2016-05-06 2016-05-06 Large scale dynamic photogrammtry system it is high-precision fixed to and orientation accuracy evaluation method

Country Status (1)

Country Link
CN (2) CN105910584B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108182709A (en) * 2017-12-28 2018-06-19 北京信息科技大学 A kind of camera calibration and the method for relative orientation
CN109559355B (en) * 2018-12-04 2021-08-10 北京航空航天大学 Multi-camera global calibration device and method without public view field based on camera set
CN110285827B (en) * 2019-04-28 2023-04-07 武汉大学 Distance-constrained photogrammetry high-precision target positioning method
CN110260822B (en) * 2019-06-18 2021-01-19 西安交通大学 High-precision calibration method for multi-view structured light system
CN112985455B (en) * 2019-12-16 2022-04-26 武汉四维图新科技有限公司 Precision evaluation method and device of positioning and attitude determination system and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101694370A (en) * 2009-09-15 2010-04-14 北京信息科技大学 Method for evaluating precision of large-scale industrial photogrammetry system and benchmark device
CN102889882A (en) * 2012-09-03 2013-01-23 北京信息科技大学 Three-dimensional reconstruction method based on bundle adjustment
JP2013160602A (en) * 2012-02-03 2013-08-19 Mitsubishi Electric Corp Photographic surveying apparatus
CN103389072A (en) * 2013-07-22 2013-11-13 北京信息科技大学 An image point positioning precision assessment method based on straight line fitting
CN103544699A (en) * 2013-10-11 2014-01-29 江西省电力公司检修分公司 Method for calibrating cameras on basis of single-picture three-circle template
CN104036542A (en) * 2014-05-21 2014-09-10 北京信息科技大学 Spatial light clustering-based image surface feature point matching method
CN104395692A (en) * 2012-06-29 2015-03-04 富士胶片株式会社 3D measurement method, device, and system, and image processing device

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7609846B2 (en) * 2004-07-13 2009-10-27 Eastman Kodak Company Matching of digital images to acquisition devices
US20060125920A1 (en) * 2004-12-10 2006-06-15 Microsoft Corporation Matching un-synchronized image portions
CN101644563B (en) * 2009-08-18 2010-12-08 北京信息科技大学 Vision measuring system uncertainty evaluation method based on distance restraint fit point
CN101727670B (en) * 2009-11-10 2012-01-04 西安交通大学 Flexible calibrating method and device for variable-format multiple-camera system
CN102175261B (en) * 2011-01-10 2013-03-20 深圳大学 Visual measuring system based on self-adapting targets and calibrating method thereof
CN102721376B (en) * 2012-06-20 2014-12-31 北京航空航天大学 Calibrating method of large-field three-dimensional visual sensor
CN104019799B (en) * 2014-05-23 2016-01-13 北京信息科技大学 A kind of relative orientation method utilizing local parameter optimization to calculate basis matrix
CN104091345B (en) * 2014-07-24 2017-01-25 中国空气动力研究与发展中心高速空气动力研究所 Five-point relative orientation method based on forward intersection constraints

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101694370A (en) * 2009-09-15 2010-04-14 北京信息科技大学 Method for evaluating precision of large-scale industrial photogrammetry system and benchmark device
JP2013160602A (en) * 2012-02-03 2013-08-19 Mitsubishi Electric Corp Photographic surveying apparatus
CN104395692A (en) * 2012-06-29 2015-03-04 富士胶片株式会社 3D measurement method, device, and system, and image processing device
CN102889882A (en) * 2012-09-03 2013-01-23 北京信息科技大学 Three-dimensional reconstruction method based on bundle adjustment
CN103389072A (en) * 2013-07-22 2013-11-13 北京信息科技大学 An image point positioning precision assessment method based on straight line fitting
CN103544699A (en) * 2013-10-11 2014-01-29 江西省电力公司检修分公司 Method for calibrating cameras on basis of single-picture three-circle template
CN104036542A (en) * 2014-05-21 2014-09-10 北京信息科技大学 Spatial light clustering-based image surface feature point matching method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Parallel acceleration of a large-scale dynamic vision measurement system; Dong Mingli et al.; Optics and Precision Engineering; 2015-10-31; Vol. 23, No. 10; pp. 2909-2918 *
Relative orientation method with local parameter optimization for large-scale photogrammetry; Li Wei et al.; Chinese Journal of Scientific Instrument; 2014-09-30; Vol. 35, No. 9; pp. 2053-2060 *

Also Published As

Publication number Publication date
CN105910584A (en) 2016-08-31
CN108801218A (en) 2018-11-13
CN105910584B (en) 2018-05-04

Similar Documents

Publication Publication Date Title
US9857172B1 (en) Method for implementing high-precision orientation and evaluating orientation precision of large-scale dynamic photogrammetry system
CN108801218B (en) High-precision orientation and orientation precision evaluation method of large-size dynamic photogrammetry system
CN107014399B (en) Combined calibration method for satellite-borne optical camera-laser range finder combined system
Hinsken et al. Triangulation of LH systems ADS40 imagery using Orima GPS/IMU
CN106959075B (en) Method and system for accurate measurement using a depth camera
CN108648242B (en) Two-camera calibration method and device without public view field based on assistance of laser range finder
CN108198219B (en) Error compensation method for camera calibration parameters for photogrammetry
Toschi et al. On the evaluation of photogrammetric methods for dense 3D surface reconstruction in a metrological context
CN108198224A (en) A kind of line-scan digital camera caliberating device and scaling method for stereo-visiuon measurement
CN112902930B (en) Method for automatically calculating initial value of adjustment of regional network by beam method
Bräuer-Burchardt et al. Phase unwrapping using geometric constraints for high-speed fringe projection based 3D measurements
CN106671081B (en) A kind of lower-mobility robot kinematics calibration method based on monocular vision
CN115638726A (en) Fixed sweep pendulum type multi-camera vision measurement method
CN112229321B (en) Method for solving 21-item geometric errors of three-coordinate measuring machine based on LASSO algorithm
CN112116665A (en) Structured light sensor calibration method
CN113160331A (en) External parameter calibration method based on vision system imaging
CN108871204B (en) Absolute evaluation method for length measurement relative error in photogrammetry
CN208061260U (en) A kind of line-scan digital camera caliberating device for stereo-visiuon measurement
Chai et al. Single-image calibration method for multiple virtual binocular vision system
CN105528788B (en) Scaling method, device and the device for determining 3D shape of relative pose parameter
Ma et al. Research on a precision and accuracy estimation method for close-range photogrammetry
CN114170321A (en) Camera self-calibration method and system based on distance measurement
KR100823070B1 (en) Camera calibration method using a plurality of calibration images for 3 dimensional measurement
Rofallski et al. An efficient solution to ray tracing problems for hemispherical refractive interfaces
Sun et al. 3D Estimation of Single Image based on Homography Transformation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant