Disclosure of Invention
The invention provides a calibration method for an infrared camera of a surgical robot, which can overcome the self-shielding problem, enable each infrared camera to observe the whole calibration object and improve the calibration precision.
In order to achieve the purpose, the invention adopts the following technical scheme:
the calibration method of the infrared camera of the surgical robot comprises the following steps:
calibrating a calibration object by using a cross positioning method, establishing the correspondence between the mark points on the calibration object and their image points projected on the infrared camera plane, rotating the calibration object multiple times, and solving a parameter mixing matrix of the infrared camera from the relationship between the rotated mark points and the image points;
obtaining a system matrix of the infrared camera by utilizing the product property of the parameter mixing matrix and the orthogonal matrix;
resolving an internal parameter matrix of the infrared camera by using an infrared camera system matrix;
solving an external parameter matrix of the infrared camera by using the parameter mixing matrix and the internal parameter matrix of the infrared camera;
and carrying out nonlinear optimization under the maximum likelihood estimation principle, with the parameter mixing matrix, the system matrix, the internal parameter matrix, the external parameter matrix and the mark points as optimization variables and the minimized re-projection error as the optimization target, to obtain an optimized solution of the infrared camera projection matrix.
Further, when the external parameter matrix is calculated, it is solved by using optimal estimation of the rotation matrix.
Further, the rotation matrix optimal estimation specifically includes:
constructing an initial matrix, and decomposing singular values of the initial matrix;
and calculating the optimal estimate of the initial matrix from the singular values, cross-multiplying the columns of the optimal estimate, and forming the optimal estimate of the rotation matrix from the vector obtained by the cross multiplication together with the optimal estimate of the initial matrix.
The invention has the beneficial effects that:
the method determines the parameter mixing matrix, the system matrix, the internal parameter matrix and the external parameter matrix in stages, where each stage yields a linear solution. It then optimizes these linear solutions using the maximum likelihood estimation principle, performing nonlinear optimization with the infrared camera matrices and the space points as optimization variables and the minimized reprojection error as the optimization target, and finally solves the infrared camera parameters in the maximum likelihood sense, so that the camera calibration precision is improved.
Detailed Description
In order that those skilled in the art will better understand the technical solutions of the present invention, the present invention will be further described in detail with reference to the following detailed description.
The calibration method of the infrared camera of the surgical robot comprises the following specific steps:
1. First, the calibration object is calibrated using the cross positioning method. As shown in FIG. 1, the calibration object is composed of two intersecting one-dimensional linear objects, on which five mark points M1, M2, M3, M4, M5 are fixed, where the point M1 is located at the midpoint (intersection) of the two one-dimensional linear objects and d is a preset distance value, so that the following equation holds:
||M2-M1||=||M3-M1||=||M4-M1||=||M5-M1||=d (1)
the five marked points on the calibration object are coplanar, so the calibration object shown in fig. 1 is a calibration reference object between the one-dimensional calibration object and the two-dimensional calibration object.
Taking the intersection point of the two one-dimensional linear objects as the origin of the actual coordinate system and the plane where the calibration reference object is located as the XY plane, the direction of the Z axis of the actual coordinate system can be obtained according to the rules of a Cartesian coordinate system. Thus, the coordinates of the five mark points of the cross positioning method in the actual coordinate system are obtained, namely:
M1=[0,0,0],M2=[-d,0,0],M3=[d,0,0],M4=[0,d,0],M5=[0,-d,0] (2)
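The cross geometry of equation (2) can be sketched numerically (a minimal illustration; d = 100 is the value used in the later experiment, and all variable names are hypothetical):

```python
import numpy as np

# The five mark points of the cross calibration object, expressed in the
# actual (world) coordinate system, equation (2), with d = 100.
d = 100.0
M = np.array([
    [0.0, 0.0, 0.0],   # M1: intersection of the two linear objects
    [-d,  0.0, 0.0],   # M2
    [ d,  0.0, 0.0],   # M3
    [0.0,  d,  0.0],   # M4
    [0.0, -d,  0.0],   # M5
])

# Equation (1): every outer mark point lies at distance d from M1.
dists = np.linalg.norm(M[1:] - M[0], axis=1)
```

All five points have Z = 0, which is the coplanarity property used later when the projection equation is specialized to the calibration plane.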
and (c) establishing rectangular coordinate systems u and v on the image by taking the upper left vertex of the acquired image as an origin, so that the coordinates (u and v) of each pixel are the column number and the row number of the pixel in the array respectively, and the coordinates (u and v) are position coordinates taking the pixel as a unit. Because the image pixel coordinate system can not use physical unit to represent the position of the pixel in the image, and can not be linked with the actual coordinate system and the infrared camera coordinate system, it is necessary to establish an image coordinate system represented by a physical unit, and the intersection point of the infrared camera optical axis and the image plane, i.e. the principal point, is used as the origin, and the x and y axes are respectively used as the x and y axesuAxis and yuAxis, xuAxis and yuThe axes are parallel to the u and v axes respectively, and a two-dimensional rectangular coordinate system is established. If the origin is O1The coordinate in the u, v coordinate system is (u)o、vo) Then each pixel is at xuAxis and yuThe physical distances in the axial direction are dx and dy.
A rectangular coordinate system (Xc, Yc, Zc) is established with the point O as the coordinate origin, where the Xc axis and Yc axis are parallel to the xu axis and yu axis of the image coordinate system, and the Zc axis is the optical axis of the infrared camera, perpendicular to the image plane. The intersection point of the optical axis and the image plane is the origin of the image coordinate system.
The infrared camera coordinate system and the actual coordinate system are both three-dimensional Euclidean coordinate systems. From spatial geometry, the relationship between the infrared camera coordinate system and the actual coordinate system can be described by a rotation matrix R and a translation vector t, where (Xw, Yw, Zw) is the actual coordinate system, (tx, ty, tz) is the translation vector of the infrared camera relative to the actual coordinate system, and (θ, ψ, φ) are the rotation angles of the infrared camera about the x, y, z axes relative to the actual coordinate system. R is an orthonormal matrix (rotation matrix) and t is a three-dimensional translation vector.
Since the Z coordinates of all the mark points are 0, the following formula is obtained:
Zc·[u, v, 1]^T = A·[r1 r2 t]·[Xw, Yw, 1]^T (3)
let t be ═ t1 t2 t3]T,R=[r1 r2 r3],r1=[r11 r21 r31]T,r2=[r12 r22 r32]T,r3=[r13r23 r33]TCan be converted into the following steps through the derivation formula (3):
where α and β are the scale factors along the x and y axes respectively (the focal length expressed in pixel units in each direction).
In equation (4), let C = A·[r1 r2 t] and B = A^{-T}·A^{-1}. C is denoted the parameter mixing matrix of the infrared camera, B the system matrix, and A the internal parameter matrix of the infrared camera.
The method for determining the infrared camera parameters in stages comprises five stages:
1. Solve the parameter mixing matrix C by using the correspondence between the space points and the image points.
2. Obtain the matrix B by using the properties of the parameter mixing matrix C and the orthogonal matrix.
3. Decompose the B matrix to obtain the internal parameter matrix A of the infrared camera.
4. Obtain the external parameters of the infrared camera from the parameter mixing matrix C and the internal parameter matrix A.
5. Fusion adjustment.
(1) Solving of the parametric hybrid matrix C
Fig. 2 depicts the projection of the 5 mark points on the calibration reference object under the infrared camera. Let the image point of the i-th mark point on the infrared camera plane at the j-th rotation be {m_ij | i = 1,2,3,4,5; j = 1,2,…,N}.
C is expressed by the following formula:
introduce a new vector for the calculation:
then equation (4) can be represented by:
wherein:
M_ix represents the X coordinate of the mark point M_i and M_iy represents the Y coordinate of M_i, i = 1,2,3,4,5. N is a 10 × 9 matrix of calculation variables introduced for convenience of solving.
The calibration reference object shown in fig. 1 is rotated N times to obtain N equations of the form of equation (6); stacking these N equations, the least-squares solution is the linear solution of the parameter mixing matrix C.
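This stage can be sketched under common DLT-style assumptions: since all mark points lie in the Z = 0 plane, C is a 3 × 3 homography-like matrix, each point correspondence contributes two linear rows (so the five cross points give a 10 × 9 system, matching the matrix N of the text), and the least-squares solution is the right singular vector for the smallest singular value. The function name and normalization below are illustrative, not necessarily the patent's exact formulation:

```python
import numpy as np

def estimate_C(world_xy, image_uv):
    """Least-squares estimate of the 3x3 parameter mixing matrix C.

    world_xy: (n, 2) mark point coordinates (X, Y) in the Z = 0 plane
    image_uv: (n, 2) corresponding image points (u, v)
    A sketch of one common DLT formulation: each correspondence gives
    two rows, and the solution is the right singular vector of the
    stacked matrix belonging to its smallest singular value.
    """
    rows = []
    for (X, Y), (u, v) in zip(world_xy, image_uv):
        rows.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        rows.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    N = np.asarray(rows, dtype=float)   # 10 x 9 for the five cross points
    _, _, Vt = np.linalg.svd(N)
    c = Vt[-1]                          # least-squares null vector of N
    return c.reshape(3, 3) / c[-1]      # normalize so C33 = 1
```

With noise-free correspondences the smallest singular value is zero and C is recovered exactly (up to the overall scale removed by the normalization).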
(2) Solving of B matrix
From the definition of the parameter mixing matrix C = A·[r1 r2 t], the following can be obtained:
[C1 C2 C3] = A·[r1 r2 t] (8)
Since r1 and r2 are respectively the first and second columns of the orthonormal rotation matrix R (so that r1^T·r2 = 0 and r1^T·r1 = r2^T·r2 = 1), the following relationships hold:
C1^T·B·C2 = 0 (9)
C1^T·B·C1 = C2^T·B·C2 (10)
by definition of B matrix B ═ A-TA-1It can be seen that B is a real symmetric matrix, so a 6-dimensional calculation vector B can be introduced to represent the B matrix:
b=[B11,B12,B22,B13,B23,B33]T (11)
then equations (9) and (10) can be converted into:
the intermediate quantities introduced by Vij for the sake of convenience in calculation, i, j all have a value range of 1,2,3 … 10.
If the calibration reference shown in fig. 1 is rotated N times, N equations similar to those shown in equation (12) can be obtained, and by listing the N equations, the following can be obtained:
Vb=0 (13)
where V is an intermediate quantity introduced for the calculation and is a 2N × 6 matrix. When N ≥ 3, a unique b vector (up to scale) can be solved by the following steps:
1. calculating VTV
2. Calculating VTEigenvalues of V and corresponding eigenvectors
3、VTThe eigenvector corresponding to the V minimum eigenvalue is the solution of b
After solving for the vector b, the matrix B can be constructed.
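The three steps above can be sketched as follows (`solve_b` is a hypothetical name):

```python
import numpy as np

def solve_b(V):
    """Solve V.b = 0 in the least-squares sense for the 6-vector b.

    Steps 1-3 of the text: b is the eigenvector of V^T.V belonging to
    the smallest eigenvalue (equivalently, the right singular vector of
    V for its smallest singular value).  V is the 2N x 6 matrix stacked
    from the constraints of equations (9) and (10).
    """
    VtV = V.T @ V                 # step 1
    w, U = np.linalg.eigh(VtV)    # step 2: eigenvalues in ascending order
    return U[:, 0]                # step 3: smallest-eigenvalue eigenvector
```

Because b is only determined up to scale, the returned unit eigenvector is one valid representative.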
(3) Solving method of internal parameter matrix A of infrared camera
From the definition of the system matrix B = A^{-T}·A^{-1}, the following can be obtained:
the correspondence between the parameters in the infrared camera and the elements of the B matrix can be obtained from equation (14), where s is an abbreviation for Sin function:
(4) solving of external parameter matrix of infrared camera
Infrared camera extrinsic parameter matrix calculation
As long as the structure of the camera does not change, its internal parameters do not change; only the relative position of the camera and the actual coordinate system changes. That is, the matrix A is constant, and only the rotation matrix R and the translation vector t change. After the internal parameter matrix of the infrared camera is obtained, the following equations follow from equation (8):
r1=λA-1C1 (21)
r2=λA-1C2 (22)
r3=r1×r2 (23)
t=λA-1C3 (24)
Since r1 and r2 are both unit vectors, λ = 1/||A^{-1}·C1|| = 1/||A^{-1}·C2||, where λ is a coefficient introduced for ease of calculation.
In practice, due to the influence of noise and error, the actually obtained [r1 r2 r3] does not form the orthogonal matrix required of a rotation matrix; the following rotation matrix optimization procedure is therefore required.
Rotational matrix optimization
A rotation matrix is constructed from r1 and r2. In theory r1 and r2 are mutually orthogonal unit vectors, but owing to the influence of noise they may not fully satisfy unit orthogonality. The goal of rotation matrix optimization is to find the two unit orthogonal vectors closest to r1 and r2. Let g1 and g2 be two mutually orthogonal unit vectors; the basic idea of rotation matrix optimization is to make the difference between r1, r2 and g1, g2 minimal.
Let r = [r1 r2] and g = [g1 g2]. According to the least-squares method, the corresponding criterion function is:
min_g ||r − g||_F^2 (26)
where ||·||_F denotes the F (Frobenius) norm. The above equation solves for the matrix g, with unit orthogonal columns, that minimizes the F norm of r − g; the optimization function is shown in equation (26).
According to the property of the F norm:
||r − g||_F^2 = tr((r − g)^T·(r − g)) = tr(r^T·r) − 2·tr(r^T·g) + tr(g^T·g) (27)
where tr denotes the matrix trace. Since the matrix r is known, tr(r^T·r) is a fixed value, and since g1 and g2 are unit vectors, tr(g^T·g) = 2. The optimization function thus transitions to:
max_g tr(r^T·g) (28)
rotation matrix optimal estimate derivation
The matrix r is a 3 × 2 matrix with singular value decomposition
r = U·Σ·V^T (29)
where U is a 3 × 3 orthogonal matrix, V is a 2 × 2 orthogonal matrix, and σ1, σ2 are the singular values of the matrix r. Let Z = U^T·g·V. Because U and V are orthogonal and the columns of g are unit vectors, the columns of Z, [z11 z21 z31]^T and [z12 z22 z32]^T, are also unit vectors; therefore:
−1 ≤ z11, z22 ≤ 1 (30)
According to the properties of the matrix trace:
tr(r^T·g) = tr(V·Σ^T·U^T·g) = tr(Σ^T·U^T·g·V) = tr(Σ^T·Z) = σ1·z11 + σ2·z22 (31)
Since the singular values of any matrix are non-negative, by the relationship shown in equation (30), σ1·z11 + σ2·z22 ≤ σ1 + σ2. When z11 = z22 = 1, tr(r^T·g) takes its maximum value.
Because the columns of Z are unit vectors and z11 = z22 = 1, all the remaining elements of Z are zero, i.e.
Z = [[1, 0], [0, 1], [0, 0]]
Because Z = U^T·g·V, it follows that
g = U·Z·V^T (32)
which gives the optimal estimate g of r.
In summary, the rotation matrix optimal estimation steps can be summarized as:
1. Construct the initial matrix r = [r1 r2].
2. Compute the singular value decomposition of the matrix r.
3. Compute the optimal estimate g = U·Z·V^T of the matrix r.
4. Compute the third unit vector g3 = g1 × g2.
5. The matrix [g1 g2 g3] is the optimal estimate of the rotation matrix.
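The five steps can be sketched as follows (`best_rotation` is an illustrative name):

```python
import numpy as np

def best_rotation(r1, r2):
    """Project the noisy columns [r1 r2] onto the nearest pair of
    orthonormal columns via SVD (g = U.Z.V^T with Z = [[1,0],[0,1],[0,0]]),
    then complete the rotation with g3 = g1 x g2.  Steps 1-5 of the text."""
    r = np.column_stack([r1, r2])       # step 1: 3 x 2 initial matrix
    U, _, Vt = np.linalg.svd(r)         # step 2: full SVD, U is 3 x 3
    Z = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
    g = U @ Z @ Vt                      # step 3: optimal estimate of r
    g3 = np.cross(g[:, 0], g[:, 1])     # step 4: third unit vector
    return np.column_stack([g, g3])     # step 5: orthonormal rotation
```

Because g3 is the cross product of the two orthonormal columns, the result is always a proper rotation (determinant +1).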
(5) Fusion adjustment
The four preceding stages of the staged parameter determination yield linear solutions obtained by minimizing algebraic distances, and the results of these linear algorithms can be optimized using the maximum likelihood estimation principle. Fusion adjustment is a nonlinear optimization process that, under the maximum likelihood estimation principle, takes the infrared camera matrices and the space points as optimization variables and the minimized re-projection error as the optimization target. When the image point noise obeys an isotropic zero-mean Gaussian distribution and is independently distributed, fusion adjustment obtains the infrared camera parameters in the maximum likelihood sense by solving the following nonlinear minimization problem:
min Σ_j Σ_i ||m_ij − m̂(A, R_j, t_j, M_i)||^2 (33)
where R_j is an orthonormal matrix (rotation matrix), t_j is a three-dimensional translation vector, and m̂(A, R_j, t_j, M_i) is the projection point of the space point M_i on the infrared camera plane at the j-th rotation. With the linear solutions obtained in the previous 4 stages as initial values, the nonlinear minimization problem shown in equation (33) is solved by the LM (Levenberg-Marquardt) algorithm, and an optimized solution of the infrared camera projection matrix is finally obtained. In addition, the nonlinear optimization method may also be applied in the first stage of the staged parameter determination, to optimize the parameter mixing matrix C after it is obtained.
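A minimal sketch of such a fusion adjustment, assuming a hypothetical parameter layout (five intrinsics followed by a Rodrigues rotation vector and a translation vector per rotation) and using SciPy's Levenberg-Marquardt solver in place of the patent's unspecified LM routine:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(p, world, images):
    """Stacked re-projection residuals m_ij - m_hat_ij for an
    equation (33)-style objective.  Assumed (illustrative) layout:
    p = [alpha, beta, s, u0, v0] followed, for each rotation j, by a
    3-element Rodrigues rotation vector and a 3-element translation."""
    alpha, beta, s, u0, v0 = p[:5]
    A = np.array([[alpha, s, u0], [0.0, beta, v0], [0.0, 0.0, 1.0]])
    res = []
    for j, obs in enumerate(images):
        rv = p[5 + 6 * j: 5 + 6 * j + 3]
        tv = p[5 + 6 * j + 3: 5 + 6 * j + 6]
        Rj = Rotation.from_rotvec(rv).as_matrix()
        cam = world @ Rj.T + tv                      # camera-frame points
        proj = cam @ A.T
        res.append(proj[:, :2] / proj[:, 2:] - obs)  # m_hat_ij - m_ij
    return np.concatenate(res).ravel()

def fusion_adjust(p0, world, images):
    """Refine the staged linear solution p0 with Levenberg-Marquardt.
    method='lm' requires at least as many residuals as parameters."""
    return least_squares(residuals, p0, args=(world, images), method="lm")
```

With the five cross points and N rotations there are 10N residuals against 5 + 6N parameters, so the LM requirement is met for N ≥ 1.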
Effect verification
In order to verify the staged cross positioning method for determining the infrared camera parameters, the following experiment was performed. The internal parameters of the infrared camera were set to α = 980, β = 890, s = 0.1, u0 = 338 and v0 = 288. The dimension of the calibration reference object shown in fig. 2 was chosen as d = 100. In the experiment, the calibration reference object was rotated 6 times; the orientation of its plane was given by three random numbers in [0, 90], and its position by the array [500, 500, 1000], each component multiplied by a random number in [0, 1]. The 5 spatial mark points on the calibration reference object were projected according to the internal and external parameters of the infrared camera to obtain the image points. To account for error in the actual case, Gaussian noise with mean 0 and standard deviation σ was applied to the image points; the standard deviation was adjusted from 0.1 pixel to 1.0 pixel, and 100 independent experiments were performed for each error level. The obtained image points were put in correspondence with the spatial mark points on the calibration reference object, the internal and external parameters of the infrared camera were calculated by the staged five-point parameter determination method, and the results were compared with the actual values. The relative errors of the intrinsic parameters of the infrared camera with respect to α are shown in figs. 3, 4 and 5.
The above description is only an embodiment of the present invention, but the protection scope of the present invention is not limited thereto; any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope disclosed by the present invention shall be covered within the protection scope of the present invention.
Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.