CN112419428A - Calibration method for infrared camera of surgical robot - Google Patents

Calibration method for infrared camera of surgical robot

Info

Publication number
CN112419428A
CN112419428A
Authority
CN
China
Prior art keywords
matrix
infrared camera
parameter
calibration
calibration object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011462562.XA
Other languages
Chinese (zh)
Inventor
侯礼春
芦颖僖
周亚瑞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kuanrui Intelligent Technology Suzhou Co ltd
Original Assignee
Nanjing Linghua Microelectronics Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Linghua Microelectronics Technology Co., Ltd.
Priority to CN202011462562.XA
Publication of CN112419428A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10048 Infrared image

Abstract

The invention discloses a calibration method of an infrared camera of a surgical robot, which relates to the technical field of surgical robots and can overcome the problem of self-shielding, so that each infrared camera can observe the whole calibration object, and the calibration precision is improved, wherein the calibration method comprises the following steps: calibrating a calibration object by using a cross positioning method, corresponding a mark point on the calibration object to an image point projected on the plane of the infrared camera, rotating the calibration object for multiple times, and solving a parameter mixing matrix of the infrared camera by using the relationship between the rotated mark point and the image point; obtaining an infrared camera system matrix by utilizing the product property of the parameter mixing matrix and the orthogonal matrix; resolving an internal parameter matrix of the infrared camera by using an infrared camera system matrix; solving an external parameter matrix of the infrared camera by using the parameter mixing matrix and the internal parameter matrix of the infrared camera; and carrying out nonlinear optimization by using a maximum likelihood estimation principle and using the parameter mixing matrix, the system matrix, the internal parameter matrix, the external parameter matrix and the mark points as optimization variables and using the minimized re-projection error as an optimization target to obtain an optimized solution of the infrared camera projection matrix.

Description

Calibration method for infrared camera of surgical robot
Technical Field
The invention relates to the technical field of surgical robots, in particular to a calibration method for an infrared camera of a surgical robot.
Background
Calibration of an infrared camera is a crucial step in an optical positioning system, and its accuracy to a large extent determines the accuracy of the optical positioning system. In an optical system based on a multi-view infrared camera, a necessary condition for high-precision calibration is that each infrared camera in the system can observe the whole calibration object simultaneously. However, conventional calibration methods based on three-dimensional or two-dimensional calibration objects cannot satisfy this condition due to self-occlusion.
Disclosure of Invention
The invention provides a calibration method for an infrared camera of a surgical robot, which can overcome the self-shielding problem, enable each infrared camera to observe the whole calibration object and improve the calibration precision.
In order to achieve the purpose, the invention adopts the following technical scheme:
the calibration method of the infrared camera of the surgical robot comprises the following steps:
calibrating a calibration object by using a cross positioning method, corresponding a mark point on the calibration object to an image point projected on the plane of the infrared camera, rotating the calibration object for multiple times, and solving a parameter mixing matrix of the infrared camera by using the relationship between the rotated mark point and the image point;
obtaining a system matrix of the infrared camera by utilizing the product property of the parameter mixing matrix and the orthogonal matrix;
resolving an internal parameter matrix of the infrared camera by using an infrared camera system matrix;
solving an external parameter matrix of the infrared camera by using the parameter mixing matrix and the internal parameter matrix of the infrared camera;
and carrying out nonlinear optimization by using a maximum likelihood estimation principle and using the parameter mixing matrix, the system matrix, the internal parameter matrix, the external parameter matrix and the mark points as optimization variables and using the minimized re-projection error as an optimization target to obtain an optimized solution of the infrared camera projection matrix.
Further, when the extrinsic parameter matrix is calculated, the extrinsic parameter matrix is solved by using the optimal estimation mode of the rotation matrix.
Further, the rotation matrix optimal estimation specifically includes:
constructing an initial matrix, and decomposing singular values of the initial matrix;
and calculating the optimal estimate of the initial matrix using the singular values, taking the cross product of the two columns of this optimal estimate, and forming the optimal estimate of the rotation matrix from the optimal estimate of the initial matrix together with the cross-product vector.
The invention has the beneficial effects that:
the method determines a parameter mixing matrix, a system matrix, an internal parameter matrix and an external parameter matrix in stages, wherein the linear solutions are all linear solutions, optimizes the linear solutions by using a maximum likelihood estimation principle, performs nonlinear optimization by using an infrared camera matrix and a space point as optimization variables and using a minimized reprojection error as an optimization target by using the maximum likelihood estimation principle, and finally solves the infrared camera parameters in the maximum likelihood estimation meaning, so that the camera calibration precision is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a model of a calibration object for cross location;
FIG. 2 is a projection process of calibrating a reference;
FIG. 3 is the linear solution of the infrared camera parameters obtained by the cross positioning method;
FIG. 4 is the nonlinear optimized solution (α, β) of the infrared camera parameters obtained by cross positioning;
FIG. 5 is the nonlinear optimized solution (u0, v0) of the infrared camera parameters obtained by cross positioning.
Detailed Description
In order that those skilled in the art will better understand the technical solutions of the present invention, the present invention will be further described in detail with reference to the following detailed description.
The calibration method of the infrared camera of the surgical robot comprises the following specific steps:
1. First, the calibration object is calibrated using the cross positioning method. As shown in FIG. 1, the calibration object is composed of two mutually perpendicular one-dimensional linear objects. Five marker points M1, M2, M3, M4, M5 are fixed on the calibration object, where M1 is located at the midpoint (intersection) of the two one-dimensional linear objects and d is a preset distance value, so that the following equation holds:
||M2-M1||=||M3-M1||=||M4-M1||=||M5-M1||=d (1)
the five marked points on the calibration object are coplanar, so the calibration object shown in fig. 1 is a calibration reference object between the one-dimensional calibration object and the two-dimensional calibration object.
Take the intersection point of the two one-dimensional linear objects as the origin of the actual coordinate system and the plane of the calibration reference as the XY plane; the direction of the Z axis of the actual coordinate system then follows from the right-hand rule of the Cartesian coordinate system. The coordinates of the five marker points of the cross positioning method in the actual coordinate system are thus:
M1 = [0, 0, 0], M2 = [-d, 0, 0], M3 = [d, 0, 0], M4 = [0, d, 0], M5 = [0, -d, 0] (2)
and (c) establishing rectangular coordinate systems u and v on the image by taking the upper left vertex of the acquired image as an origin, so that the coordinates (u and v) of each pixel are the column number and the row number of the pixel in the array respectively, and the coordinates (u and v) are position coordinates taking the pixel as a unit. Because the image pixel coordinate system can not use physical unit to represent the position of the pixel in the image, and can not be linked with the actual coordinate system and the infrared camera coordinate system, it is necessary to establish an image coordinate system represented by a physical unit, and the intersection point of the infrared camera optical axis and the image plane, i.e. the principal point, is used as the origin, and the x and y axes are respectively used as the x and y axesuAxis and yuAxis, xuAxis and yuThe axes are parallel to the u and v axes respectively, and a two-dimensional rectangular coordinate system is established. If the origin is O1The coordinate in the u, v coordinate system is (u)o、vo) Then each pixel is at xuAxis and yuThe physical distances in the axial direction are dx and dy.
Establish a rectangular coordinate system (Xc, Yc, Zc) with the point O as the coordinate origin, where the Xc and Yc axes are parallel to the xu and yu axes of the image coordinate system and the Zc axis is the optical axis of the infrared camera, perpendicular to the image plane. The intersection point of the optical axis and the image plane is the origin of the image coordinate system.
The infrared camera coordinate system and the actual coordinate system are both three-dimensional Euclidean coordinate systems, and from spatial geometry the relationship between them can be described by a rotation matrix R and a translation vector t. Here (Xw, Yw, Zw) is the actual coordinate system, (tx, ty, tz) is the translation vector of the infrared camera relative to the actual coordinate system, and (θ, ψ, φ) are the rotation angles of the infrared camera about the x, y, z axes relative to the actual coordinate system. R is an orthonormal matrix (the rotation matrix) and t is a three-dimensional translation vector.
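As an illustrative sketch (not part of the patent text), the rotation matrix R can be composed from the three angles above. The patent does not fix a composition order, so R = Rz·Ry·Rx is assumed here:

```python
import numpy as np

def rotation_matrix(theta, psi, phi):
    """Compose a rotation matrix from angles about the x, y, z axes.

    Composition order R = Rz(phi) @ Ry(psi) @ Rx(theta) is an assumption
    made for this illustration; the patent only names the three angles.
    """
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(theta), -np.sin(theta)],
                   [0, np.sin(theta),  np.cos(theta)]])
    Ry = np.array([[ np.cos(psi), 0, np.sin(psi)],
                   [ 0,           1, 0],
                   [-np.sin(psi), 0, np.cos(psi)]])
    Rz = np.array([[np.cos(phi), -np.sin(phi), 0],
                   [np.sin(phi),  np.cos(phi), 0],
                   [0,            0,           1]])
    return Rz @ Ry @ Rx

R = rotation_matrix(0.1, 0.2, 0.3)
# Any such composition is orthonormal with determinant +1.
assert np.allclose(R @ R.T, np.eye(3))
assert np.isclose(np.linalg.det(R), 1.0)
```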
Since the Z coordinates of all the marker points are 0, the following formula is obtained:
s·[u, v, 1]^T = A·[r1, r2, r3, t]·[X, Y, 0, 1]^T = A·[r1, r2, t]·[X, Y, 1]^T (3)

where s is a scale factor.
Let t = [t1, t2, t3]^T, R = [r1, r2, r3], r1 = [r11, r21, r31]^T, r2 = [r12, r22, r32]^T, r3 = [r13, r23, r33]^T. Equation (3) can then be rewritten as:
s·[u, v, 1]^T = C·[X, Y, 1]^T, where C = A·[r1, r2, t] (4)
where α = f/dx and β = f/dy are the scale factors of the image in the x and y directions.
In formula (4):

A = | α  γ  u0 |
    | 0  β  v0 |    (5)
    | 0  0  1  |

where γ is the skew factor between the two image axes.
Let B = A^(-T)·A^(-1).
C is called the parameter mixing matrix of the infrared camera, B the system matrix, and A the internal parameter matrix of the infrared camera.
The method for determining the infrared camera parameters in stages comprises five stages:
1. and solving a parameter mixing matrix C by using the correspondence of the space points and the image points.
2. And (4) obtaining a matrix B by using the properties of the parameter mixing matrix C and the orthogonal matrix.
3. And decomposing the B matrix to obtain an internal reference matrix A of the infrared camera.
4. And further obtaining external parameters of the infrared camera by the parameter mixing matrix C and the internal parameter matrix A of the infrared camera.
5. And (5) fusion adjustment.
(1) Solving of the parametric hybrid matrix C
Fig. 2 depicts the projection process of the 5 marker points on the calibration reference under the infrared camera. Let the image point of the i-th marker point on the infrared camera plane at the j-th rotation be {m_ij | i = 1, 2, 3, 4, 5; j = 1, 2, ..., n}.
C is expressed by the following formula:
C = | C11  C12  C13 |
    | C21  C22  C23 |
    | C31  C32  C33 |
introduce a new vector for the calculation:
c = [C11, C12, C13, C21, C22, C23, C31, C32, C33]^T
then equation (4) can be represented by:
N·c = 0 (6)
wherein:
for each marker point Mi with image point (ui, vi), N contains the two rows

[Mix, Miy, 1, 0, 0, 0, −ui·Mix, −ui·Miy, −ui]
[0, 0, 0, Mix, Miy, 1, −vi·Mix, −vi·Miy, −vi]
Mix denotes the X coordinate of the marker point Mi and Miy the Y coordinate of Mi, i = 1, 2, 3, 4, 5. N is a 10 × 9 matrix of calculation variables introduced for convenience of solving.
The calibration reference shown in fig. 1 is rotated N times, yielding for each rotation a system of equations of the form (6); the least-squares solution of each system is the linear solution of the parameter mixing matrix C for that rotation.
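The linear solve for C can be sketched as follows (hypothetical helper name `solve_C`; the least-squares solution of the homogeneous system N·c = 0 is taken as the right singular vector for the smallest singular value, which is the standard way to solve such a stacked system):

```python
import numpy as np

def solve_C(world_pts, image_pts):
    """Linear (DLT-style) estimate of the 3x3 parameter mixing matrix C.

    world_pts: (n, 2) planar marker coordinates (X, Y); image_pts: (n, 2) pixels.
    Each correspondence contributes two rows of the homogeneous system N @ c = 0;
    the least-squares solution is the right singular vector of N belonging to
    the smallest singular value.
    """
    rows = []
    for (X, Y), (u, v) in zip(world_pts, image_pts):
        rows.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        rows.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    N = np.asarray(rows, dtype=float)
    _, _, Vt = np.linalg.svd(N)
    C = Vt[-1].reshape(3, 3)
    return C / C[2, 2]                      # remove the free overall scale

# Synthetic check: project the five cross markers with a known C and recover it.
d = 100.0
M = np.array([[0, 0], [-d, 0], [d, 0], [0, d], [0, -d]], dtype=float)
C_true = np.array([[800.0, 5.0, 320.0],
                   [0.0, 810.0, 240.0],
                   [0.001, 0.002, 1.0]])   # illustrative numbers only
P = np.c_[M, np.ones(5)] @ C_true.T
m = P[:, :2] / P[:, 2:]
assert np.allclose(solve_C(M, m), C_true, atol=1e-4)
```

The five cross markers contain four points in general position, which is enough to determine C up to scale; normalizing by C[2, 2] fixes that scale.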
(2) Solving of B matrix
From the definition of the parameter mixing matrix, C = A·[r1 r2 t], the following can be obtained:
[C1 C2 C3]=A·[r1 r2 t] (8)
Since r1 and r2 are respectively the first and second columns of the orthogonal rotation matrix R, the following relationships hold:
C1^T·A^(-T)·A^(-1)·C2 = 0 (9)

C1^T·A^(-T)·A^(-1)·C1 = C2^T·A^(-T)·A^(-1)·C2 (10)
From the definition of the B matrix, B = A^(-T)·A^(-1), it can be seen that B is a real symmetric matrix, so a 6-dimensional calculation vector b can be introduced to represent the B matrix:
b = [B11, B12, B22, B13, B23, B33]^T (11)
then equations (9) and (10) can be converted into:
| V12^T         |
| (V11 − V22)^T | · b = 0 (12)

where, writing the i-th column of C as Ci = [C1i, C2i, C3i]^T,

Vij = [C1i·C1j, C1i·C2j + C2i·C1j, C2i·C2j, C3i·C1j + C1i·C3j, C3i·C2j + C2i·C3j, C3i·C3j]^T
the intermediate quantities introduced by Vij for the sake of convenience in calculation, i, j all have a value range of 1,2,3 … 10.
If the calibration reference shown in fig. 1 is rotated N times, N equations similar to those shown in equation (12) can be obtained, and by listing the N equations, the following can be obtained:
Vb=0 (13)
where V is a 2N × 6 matrix introduced for the calculation. When N > 3, a unique b vector (defined up to scale) can be solved by the following steps:
1. calculating VTV
2. Calculating VTEigenvalues of V and corresponding eigenvectors
3、VTThe eigenvector corresponding to the V minimum eigenvalue is the solution of b
After solving for the vector b, the matrix B can be constructed.
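The constraints (12) and the eigenvector solution of (13) can be sketched as follows (hypothetical `v_ij` and `solve_b` helper names; each view contributes the two rows of formula (12)):

```python
import numpy as np

def v_ij(C, i, j):
    """6-vector V_ij of formula (12), built from columns i and j of C,
    so that v_ij(C, i, j) @ b == Ci^T B Cj for b = [B11,B12,B22,B13,B23,B33]."""
    ci, cj = C[:, i], C[:, j]
    return np.array([ci[0] * cj[0],
                     ci[0] * cj[1] + ci[1] * cj[0],
                     ci[1] * cj[1],
                     ci[2] * cj[0] + ci[0] * cj[2],
                     ci[2] * cj[1] + ci[1] * cj[2],
                     ci[2] * cj[2]])

def solve_b(C_list):
    """Stack the two constraints per view into V (2N x 6, formula (13)) and
    take the eigenvector of V^T V with the smallest eigenvalue as b."""
    V = np.vstack([np.stack([v_ij(C, 0, 1),
                             v_ij(C, 0, 0) - v_ij(C, 1, 1)])
                   for C in C_list])
    _, vecs = np.linalg.eigh(V.T @ V)       # eigenvalues in ascending order
    return vecs[:, 0]
```

With at least three views whose rotations differ, the stacked system has a one-dimensional null space and `solve_b` recovers b up to scale.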
(3) Solving method of internal parameter matrix A of infrared camera
From the definition of the B matrix, B = A^(-T)·A^(-1), the following can be obtained:
B = A^(-T)·A^(-1) =

| 1/α²                −γ/(α²β)                        (v0γ − u0β)/(α²β)                 |
| −γ/(α²β)            γ²/(α²β²) + 1/β²                −γ(v0γ − u0β)/(α²β²) − v0/β²      |
| (v0γ − u0β)/(α²β)   −γ(v0γ − u0β)/(α²β²) − v0/β²    (v0γ − u0β)²/(α²β²) + v0²/β² + 1  |   (14)
the correspondence between the parameters in the infrared camera and the elements of the B matrix can be obtained from equation (14), where s is an abbreviation for Sin function:
v0 = (B12·B13 − B11·B23)/(B11·B22 − B12²) (15)

λ = B33 − [B13² + v0·(B12·B13 − B11·B23)]/B11 (16)

α = √(λ/B11) (17)

β = √(λ·B11/(B11·B22 − B12²)) (18)

γ = −B12·α²·β/λ (19)

u0 = γ·v0/β − B13·α²/λ (20)
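The closed-form recovery of the intrinsic parameters from b (the standard decomposition of B = A^(-T)·A^(-1), written with the scale factor λ; helper name hypothetical) can be sketched as:

```python
import numpy as np

def intrinsics_from_b(b):
    """Closed-form intrinsic parameters from b = [B11,B12,B22,B13,B23,B33].

    b may carry an arbitrary positive scale; the scale factor lam absorbs it,
    so the returned A is independent of that scale.
    """
    B11, B12, B22, B13, B23, B33 = b
    v0 = (B12 * B13 - B11 * B23) / (B11 * B22 - B12 ** 2)
    lam = B33 - (B13 ** 2 + v0 * (B12 * B13 - B11 * B23)) / B11
    alpha = np.sqrt(lam / B11)
    beta = np.sqrt(lam * B11 / (B11 * B22 - B12 ** 2))
    gamma = -B12 * alpha ** 2 * beta / lam
    u0 = gamma * v0 / beta - B13 * alpha ** 2 / lam
    return np.array([[alpha, gamma, u0],
                     [0.0,   beta,  v0],
                     [0.0,   0.0,   1.0]])
```

Round-tripping a known A through B = A^(-T)·A^(-1) and back recovers A exactly, which is a useful sanity check on the six formulas.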
(4) solving of external parameter matrix of infrared camera
Infrared camera extrinsic parameter matrix calculation
As long as the structure of the camera does not change, its internal parameters do not change; only the relative position between the camera and the actual coordinate system changes. That is, the matrix A is constant, and only the rotation matrix R and the translation vector t change. After obtaining the internal parameter matrix of the infrared camera, the following equations can be obtained using equation (8):
r1 = λ·A^(-1)·C1 (21)

r2 = λ·A^(-1)·C2 (22)

r3 = r1 × r2 (23)

t = λ·A^(-1)·C3 (24)
because r is1And r2Are all unit vectors, therefore
Figure BDA0002824947390000086
λ is a coefficient for ease of calculation.
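Equations (21)-(24), together with the normalization giving λ, can be sketched directly (hypothetical helper name):

```python
import numpy as np

def extrinsics_from_C(A, C):
    """Recover the external parameters from C = A·[r1 r2 t].

    Returns the (not yet re-orthogonalized) rotation matrix [r1 r2 r3]
    and the translation vector t; with noisy C the result still needs the
    rotation matrix optimization described below.
    """
    Ainv = np.linalg.inv(A)
    lam = 1.0 / np.linalg.norm(Ainv @ C[:, 0])   # normalization giving lambda
    r1 = lam * Ainv @ C[:, 0]                    # formula (21)
    r2 = lam * Ainv @ C[:, 1]                    # formula (22)
    r3 = np.cross(r1, r2)                        # formula (23)
    t = lam * Ainv @ C[:, 2]                     # formula (24)
    return np.column_stack([r1, r2, r3]), t
```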
In practice, due to the influence of noise and error, the actually obtained [r1 r2 r3] does not form the orthogonal matrix required of a rotation matrix, so the following rotation matrix optimization procedure is required.
Rotational matrix optimization
A rotation matrix is constructed from r1 and r2. In theory r1 and r2 are mutually orthogonal unit vectors, but under the influence of noise they may not fully satisfy this unit-orthogonality relationship. The goal of rotation matrix optimization is to find the two unit orthogonal vectors nearest to r1 and r2. Let g1 and g2 be two mutually orthogonal unit vectors; the basic idea of rotation matrix optimization is to make the difference between r1, r2 and g1, g2 minimal.
Let

r = [r1, r2], g = [g1, g2]
According to the least squares method, the corresponding judgment function is:

J(g) = Σ_k ||r_k − g_k||² = ||r − g||_F²

where k indexes the two columns and ||·||_F denotes the Frobenius norm. The problem is thus to solve for the matrix g minimizing the F norm of r − g subject to the columns of g being orthonormal. The optimization function is shown in equation (26):

min_g ||r − g||_F²  subject to  g^T·g = I (26)
According to the F norm property:

||r − g||_F² = tr((r − g)^T·(r − g)) = tr(r^T·r) − 2·tr(r^T·g) + tr(g^T·g) (27)

where tr denotes the matrix trace. Since the matrix r is known, tr(r^T·r) is a fixed value, and since the columns of g are unit vectors, tr(g^T·g) = 2. The optimization function thus transitions to:

max_g tr(r^T·g)  subject to  g^T·g = I (28)
rotation matrix optimal estimate derivation
r is a 3 × 2 matrix with singular value decomposition

r = U·Σ·V^T (29)
where U is a 3 × 3 orthogonal matrix, V is a 2 × 2 orthogonal matrix, and σ1, σ2 are the singular values of the matrix r. Let

Z = U^T·g·V
Since U, g and V all have orthonormal columns, the columns of Z, namely [z11, z21, z31]^T and [z12, z22, z32]^T, are unit vectors; therefore:
-1≤z11,z22≤1 (30)
according to the properties of the matrix trace:
tr(r^T·g) = tr(V·Σ^T·U^T·g) = tr(Σ^T·U^T·g·V) = tr(Σ^T·Z) (31)
tr(Σ^T·Z) = σ1·z11 + σ2·z22 (32)

where Σ is the matrix of singular values of r. According to the relationship shown in equation (30), and since the singular values of any matrix are non-negative, σ1·z11 + σ2·z22 ≤ σ1 + σ2. When z11 = z22 = 1, tr(r^T·g) attains its maximum value.
Since the columns of Z are unit vectors and z11 = z22 = 1, it follows that

Z = | 1  0 |
    | 0  1 |
    | 0  0 |
Since Z = U^T·g·V,

g = U · | 1  0 | · V^T
        | 0  1 |
        | 0  0 |
This gives the optimal estimate g for r.
In summary, the rotation matrix optimal estimation step can be summarized as
1. Construct the initial matrix

r = [r1, r2]

2. Compute the singular value decomposition of the matrix r

r = U·Σ·V^T

3. Compute the best estimate of the matrix r

[g1, g2] = g = U · | 1  0 | · V^T
                   | 0  1 |
                   | 0  0 |

4. Compute the third unit vector g3 = g1 × g2.
5. The matrix [g1, g2, g3] is the best estimate of the rotation matrix.
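The five steps above can be sketched as follows (hypothetical `best_rotation` helper name):

```python
import numpy as np

def best_rotation(r1, r2):
    """Project the noisy 3x2 matrix [r1 r2] onto the nearest pair of
    orthonormal columns via SVD, then complete the rotation matrix with a
    cross product."""
    r = np.column_stack([r1, r2])            # step 1: initial matrix
    U, _, Vt = np.linalg.svd(r)              # step 2: U is 3x3, Vt is 2x2
    Z = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
    g = U @ Z @ Vt                           # step 3: g = U [[1,0],[0,1],[0,0]] V^T
    g1, g2 = g[:, 0], g[:, 1]
    g3 = np.cross(g1, g2)                    # step 4: third unit vector
    return np.column_stack([g1, g2, g3])     # step 5: best rotation estimate
```

If r1 and r2 are already orthonormal, the projection leaves them unchanged; with noisy inputs the result is always a proper rotation (orthonormal with determinant +1).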
(5) Fusion adaptation
The above four stages of determining parameters in stages yield linear solutions obtained by minimizing algebraic distances; the results of these linear algorithms can be optimized using the maximum likelihood estimation principle. Fusion adjustment is a nonlinear optimization process that uses the maximum likelihood estimation principle, takes the infrared camera matrices and the spatial points as optimization variables, and takes the minimized reprojection error as the optimization target. When the image point noise is independently distributed according to an isotropic zero-mean Gaussian distribution, fusion adjustment obtains the infrared camera parameters in the maximum likelihood sense by solving the following nonlinear minimization problem:
min over A, {Rj, tj}, {Mi} of Σ_i Σ_j ||m_ij − m̂(A, Rj, tj, Mi)||² (33)

where Rj is an orthonormal matrix (rotation matrix), tj is a three-dimensional translation vector, and m̂(A, Rj, tj, Mi) is the projection of the spatial point Mi on the infrared camera plane at the j-th rotation. The linear solutions obtained in the previous four stages are used as initial values, and the nonlinear minimization problem shown in formula (33) is solved with the LM (Levenberg-Marquardt) algorithm, finally giving the optimized solution of the infrared camera projection matrix. In addition, the nonlinear optimization method may also be applied in the first stage of determining parameters: the parameter mixing matrix C can be optimized immediately after it is obtained.
Effect verification
In order to verify the cross positioning method for determining the infrared camera parameters in stages, the following experiment was performed. The internal parameters of the infrared camera were set to α = 980, β = 890, γ = 0.1, u0 = 338, and v0 = 288. The dimension of the calibration reference shown in fig. 2 was chosen as d = 100. In the experiment the calibration reference was rotated 6 times; the orientation of its plane was given by three random numbers in [0, 90], and its position by the array [500, 500, 1000] with each element multiplied by a random number in [0, 1]. The 5 spatial marker points on the calibration reference were projected according to the internal and external parameters of the infrared camera to obtain image points. To account for error in the actual case, zero-mean Gaussian noise of standard deviation σ was applied to the image points; the standard deviation was adjusted from 0.1 pixel to 1.0 pixel, and 100 independent experiments were performed at each error level. The obtained image points were put in correspondence with the spatial marker points on the calibration reference, the internal and external parameters of the infrared camera were calculated with the staged five-point method, and then compared with the actual values. The relative errors of the intrinsic parameters of the infrared camera with respect to α are shown in figs. 3, 4 and 5. The above description is only an embodiment of the present invention, but the protection scope of the present invention is not limited thereto; any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are all covered by the protection scope of the present invention.
Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (3)

1. The calibration method of the infrared camera of the surgical robot is characterized by comprising the following steps:
calibrating a calibration object by using a cross positioning method, corresponding a mark point on the calibration object to an image point projected on the plane of the infrared camera, rotating the calibration object for multiple times, and solving a parameter mixing matrix of the infrared camera by using the relationship between the rotated mark point and the image point;
obtaining a system matrix of the infrared camera by utilizing the product property of the parameter mixing matrix and the orthogonal matrix;
resolving an internal parameter matrix of the infrared camera by using an infrared camera system matrix;
solving an external parameter matrix of the infrared camera by using the parameter mixing matrix and the internal parameter matrix of the infrared camera;
and carrying out nonlinear optimization by using a maximum likelihood estimation principle and using the parameter mixing matrix, the system matrix, the internal parameter matrix, the external parameter matrix and the mark points as optimization variables and using the minimized re-projection error as an optimization target to obtain an optimized solution of the infrared camera projection matrix.
2. The method for calibrating an infrared camera of a surgical robot according to claim 1, wherein the extrinsic parameter matrix is calculated by using a rotation matrix optimal estimation method.
3. The method for calibrating an infrared camera of a surgical robot according to claim 1, wherein the optimal estimation of the rotation matrix specifically comprises:
constructing an initial matrix, and decomposing singular values of the initial matrix;
and calculating the optimal estimate of the initial matrix using the singular values, taking the cross product of the two columns of this optimal estimate, and forming the optimal estimate of the rotation matrix from the optimal estimate of the initial matrix together with the cross-product vector.
CN202011462562.XA 2020-12-09 2020-12-09 Calibration method for infrared camera of surgical robot Pending CN112419428A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011462562.XA CN112419428A (en) 2020-12-09 2020-12-09 Calibration method for infrared camera of surgical robot


Publications (1)

Publication Number Publication Date
CN112419428A true CN112419428A (en) 2021-02-26

Family

ID=74775658

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011462562.XA Pending CN112419428A (en) 2020-12-09 2020-12-09 Calibration method for infrared camera of surgical robot

Country Status (1)

Country Link
CN (1) CN112419428A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115601451A * 2022-12-14 2023-01-13 Shenzhen SmartMore Information Technology Co., Ltd. (CN) External parameter data calibration method and device, computer equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106725282A * 2016-12-12 2017-05-31 Nanjing University of Science and Technology Small-sized dry eye testing device
CN107123147A * 2017-03-31 2017-09-01 Shenzhen Qimai Electronic Technology Co., Ltd. Calibration method and device for a binocular camera, and binocular camera system
CN107146254A * 2017-04-05 2017-09-08 Xidian University Camera extrinsic parameter calibration method for a multi-camera system
CN108198223A * 2018-01-29 2018-06-22 Tsinghua University Fast and precise marking method for laser point cloud and visual image mapping relations
WO2019179200A1 * 2018-03-22 2019-09-26 Arashi Vision Co., Ltd. (Shenzhen) Three-dimensional reconstruction method for multiocular camera device, VR camera device, and panoramic camera device
CN111024003A * 2020-01-02 2020-04-17 Anhui University of Technology 3D four-wheel alignment detection method based on homography matrix optimization


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhou Jiaqi et al., "Monocular vision depth measurement algorithm based on a servo mechanism", Machine Design & Research, vol. 36, no. 4, pages 154-159 *


Similar Documents

Publication Publication Date Title
Hartley et al. Triangulation
CN102472609B (en) Position and orientation calibration method and apparatus
DeMenthon et al. Model-based object pose in 25 lines of code
Hu et al. Extrinsic calibration of 2-D laser rangefinder and camera from single shot based on minimal solution
Beardsley et al. Navigation using affine structure from motion
CN104019799B (en) A kind of relative orientation method utilizing local parameter optimization to calculate basis matrix
Chatterjee et al. Algorithms for coplanar camera calibration
Gong et al. An uncalibrated visual servo method based on projective homography
US10628968B1 (en) Systems and methods of calibrating a depth-IR image offset
Chuan et al. A planar homography estimation method for camera calibration
Botterill et al. Fast RANSAC hypothesis generation for essential matrix estimation
CN112419428A (en) Calibration method for infrared camera of surgical robot
Tahri et al. Efficient iterative pose estimation using an invariant to rotations
Ponce et al. Analytical methods for uncalibrated stereo and motion reconstruction
CN110555880B (en) Focal length unknown P6P camera pose estimation method
Horaud et al. Object pose: Links between paraperspective and perspective
CN110428457A (en) A kind of point set affine transform algorithm in vision positioning
Stevenson et al. Nonparametric correction of distortion
CN109059761B (en) EIV model-based handheld target measuring head calibration method
Shibata et al. Absolute scale structure from motion using a refractive plate
Yan et al. A decoupled calibration method for camera intrinsic parameters and distortion coefficients
Hou et al. High-precision visual imaging model and calibration method for multi-depth-of-field targets
Koch et al. Evolutionary-based 3D reconstruction using an uncalibrated stereovision system: application of building a panoramic object view
Zhang et al. A method for calibrating the central catadioptric camera via homographic matrix
Boutteau et al. Circular laser/camera-based attitude and altitude estimation: minimal and robust solutions

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240325

Address after: 215000 office area, 4th floor, No. 6, Yuping Road, science and Technology City, high tech Zone, Suzhou, Jiangsu Province

Applicant after: Kuanrui Intelligent Technology (Suzhou) Co.,Ltd.

Country or region after: China

Address before: 211800 no.22-23, Dangui Road, Pukou District, Nanjing City, Jiangsu Province

Applicant before: Nanjing Linghua Microelectronics Technology Co.,Ltd.

Country or region before: China

TA01 Transfer of patent application right