Disclosure of Invention
The main objective of the present application is to provide a 3D triangulation method based on binocular cameras, so as to solve the technical problem of adverse effects on 3D triangulation caused by inaccuracy of stereoscopic external parameters and parallax.
To achieve the above object, according to one aspect of the present application, a 3D triangulation method and apparatus based on binocular cameras are provided.
In a first aspect, the present application provides a binocular camera-based 3D triangulation method.
The binocular camera-based 3D triangulation method according to the present application includes:
establishing a coordinate system of the binocular camera, and acquiring parameters of the coordinate system;
acquiring an end effector speed, a joint speed and a joint Jacobian matrix through visual fixation;
reducing the deviation between the actual pixels and the expected pixels of the spatial points through the motion of the binocular camera;
and acquiring optical axes of the left camera and the right camera to obtain a fixed point p.
Further, establishing a coordinate system of the binocular camera, and acquiring parameters of the coordinate system includes:
establishing a coordinate system based on a standard D-H method;
the parameters d_i, θ_i, a_i and α_i are the link offset, the joint angle, the link length and the link twist, respectively; the joint offset is the value of θ_i in the initial state;
the transformation matrix between the coordinate systems i and i-1 is:
further, acquiring the end effector speed, the joint speed and the joint Jacobian matrix through visual fixation includes:
the left and right cameras are the end effectors of the left and right eyes, respectively;
the speed of the end effector is V_n = [v_x v_y v_z ω_x ω_y ω_z]^T, where v_x, v_y and v_z are the linear velocity and ω_x, ω_y and ω_z are the angular velocity;
and the joint Jacobian matrix J_θ is calculated column by column by the formula J_θi = [Z_i × (O_n - O_i); Z_i], where Z_i and O_i are the direction vector of the Z axis and the origin of the coordinate system O_i-x_i y_i z_i, respectively;
J_θ is converted from the basic coordinate system to the camera coordinate system by the formula J_joint = diag(R, R)·J_θ, where J_joint is the transformed joint Jacobian matrix and R is the rotation matrix of the basic coordinate system relative to the camera coordinate system;
then the relation between V_n and the joint speed θ̇ is V_n = J_joint·θ̇; the pixel coordinates of the 3D spatial point are p = (u, v);
then the relation between the pixel speed ṗ and the end effector speed V_n is ṗ = J_image·V_n, where J_image is the image Jacobian matrix;
and the image Jacobian matrix J_image is determined by the camera intrinsic parameters and the depth of the spatial point, where f_u and f_v are the focal lengths in the column and row directions of the camera, and Z_c is the depth of the spatial point p in the camera coordinate system.
Further, reducing the deviation between the actual pixels and the expected pixels of the spatial point through the motion of the binocular camera includes:
the actual pixel coordinates and the expected pixel coordinates of the spatial point are p and p*, respectively; then the deviation between p and p* is e = p* - p;
Then the first time period of the first time period,
wherein ,/>
Is the pixel speed;
wherein K is a constant matrix affecting visual fixation performance;
a constant matrix K = diag(k_1, k_2) is constructed, where k_1 and k_2 are the gains in each channel;
then the pixel speed of the spatial point is ṗ = K(p* - p);
a coordinate system O_N-x_N y_N z_N is established at the joint of the left camera and the right camera, and the left eye joint velocity vector is θ̇_l = [θ̇_4 θ̇_5]^T;
then ṗ_l = K(p_l* - p_l), where p_l and p_l* are the actual pixel coordinates and the expected pixel coordinates of the spatial point in the left camera;
then the joint velocity of the left eye is θ̇_l = J_l^(-1)·ṗ_l, where (J_l)_ij is the element on the i-th row and j-th column of the matrix J_l;
similarly, the right eye joint velocity θ̇_r is obtained.
Preferably, when the optical axes of the left and right cameras are not on one plane, the midpoint of the common perpendicular segment of the two skewed optical axes is taken as the fixed point p.
Specifically, when the optical axes of the left and right cameras are not on one plane, the midpoint of the common perpendicular line segment of the two oblique optical axes is taken as a fixed point p including:
L l and Lr Is the ideal optical axis passing through the fixed point p at the same time, L' l and L′r Is the actual optical axis, points A and B are at L' l On the L ', points C and D' r Applying;
in the initial state of the binocular camera, L′_l and L′_r are respectively in the same direction as the Z axes of the left and right camera coordinate systems;
the homogeneous coordinates of points A, B, C, D are (x) a ,y a ,z a ,1),(x b ,y b ,z b ,1),(x c ,y c ,z c,1) and (xd ,y d ,z d 1) a step of; the initial homogeneous coordinates of points A, B, C, D are (0, z), respectively a ′,1),(0,0,z b ′,1),(0,0,z c ', 1) and (0, z d ′,1);
Then, the homogeneous coordinates of a are:
homogeneous coordinates of points B, C and D can be obtained by the same method;
let X_1 = x_b - x_a, Y_1 = y_b - y_a, Z_1 = z_b - z_a, X_2 = x_d - x_c, Y_2 = y_d - y_c and Z_2 = z_d - z_c;
then L′_l and L′_r are respectively (x - x_a)/X_1 = (y - y_a)/Y_1 = (z - z_a)/Z_1 and (x - x_c)/X_2 = (y - y_c)/Y_2 = (z - z_c)/Z_2; then the direction vector (X_cp, Y_cp, Z_cp) of the common perpendicular of L′_l and L′_r is the cross product (X_1, Y_1, Z_1) × (X_2, Y_2, Z_2);
then the plane defined by L′_l and the common perpendicular is:
X_cp(X - x_a) + Y_cp(Y - y_a) + Z_cp(Z - z_a) = 0
then the coordinate p_1 = (u_1, v_1) is solved, and the coordinate p_2 = (u_2, v_2) is obtained by the same method; the fixed point coordinates are determined as the midpoint of p_1 and p_2 in the coordinate system.
In a second aspect, the present application provides a binocular camera-based 3D triangulation apparatus, the apparatus comprising:
a coordinate system establishment module, configured to establish a coordinate system of the binocular camera and acquire the coordinate system parameters;
an information acquisition module, configured to acquire the end effector speed, the joint speed and the joint Jacobian matrix through visual fixation;
a deviation reduction module, configured to reduce the deviation between the actual pixels and the expected pixels of the spatial point through the motion of the binocular camera;
a fixed point acquisition module, configured to acquire the optical axes of the left camera and the right camera to obtain the fixed point p.
In a third aspect, the present application provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the binocular camera based 3D triangulation method provided in the first aspect when the program is executed.
In a fourth aspect, the present application provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the binocular camera based 3D triangulation method provided in the first aspect.
In the embodiment of the application, the 3D triangulation is performed by calculating the intersection point of two optical axes of the camera, so that the defect of using image parallax or stereoscopic external parameters is avoided, the aim of better performance and smaller uncertainty is fulfilled, and the technical problem of adverse effect on the 3D triangulation caused by inaccuracy of the stereoscopic external parameters and parallax is solved.
Detailed Description
In order to make the present application solution better understood by those skilled in the art, the following description will be made in detail and with reference to the accompanying drawings in the embodiments of the present application, it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, shall fall within the scope of the present application.
It should be noted that the terms "comprises" and "comprising," and any variations thereof, in the description and claims of the present application and in the foregoing figures, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
In the present application, the terms "upper", "lower", "left", "right", "front", "rear", "top", "bottom", "inner", "outer", "middle", "vertical", "horizontal", "lateral", "longitudinal" and the like indicate an azimuth or a positional relationship based on that shown in the drawings. These terms are only used to better describe the present utility model and its embodiments and are not intended to limit the scope of the indicated devices, elements or components to the particular orientations or to configure and operate in the particular orientations.
Also, some of the terms described above may be used to indicate other meanings in addition to orientation or positional relationships, for example, the term "upper" may also be used to indicate some sort of attachment or connection in some cases. The specific meaning of these terms in the present utility model will be understood by those of ordinary skill in the art according to the specific circumstances.
It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
As shown in fig. 1, the method includes the following steps S1 to S4:
s1: and establishing a coordinate system of the binocular camera, and acquiring parameters of the coordinate system.
Further, a coordinate system is established based on a standard D-H method.
By way of example, the coordinate system of the binocular camera in the preferred embodiment of the present utility model is shown in FIG. 2.
Illustratively, L_1 is 64.27 mm, L_2 is 11.00 mm, L_3l is 44.80 mm, L_3r is 47.20 mm, L_4 is 13.80 mm, and L_5 is 30.33 mm.
Further, the parameters d_i, θ_i, a_i and α_i are the link offset, the joint angle, the link length and the link twist, respectively.
Specifically, the joint offset is the value of θ_i in the initial state.
Illustratively, d_i, θ_i, a_i and α_i in the preferred embodiment of the utility model are shown in the following table:
specifically, the transformation matrix between coordinate systems i and i-1 is:
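The matrix itself did not survive in the text; under the standard D-H convention named above, it presumably takes the well-known form:

```latex
{}^{i-1}T_i =
\begin{bmatrix}
\cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\
\sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\
0 & \sin\alpha_i & \cos\alpha_i & d_i \\
0 & 0 & 0 & 1
\end{bmatrix}
```

Chaining these transforms over all joints yields the forward kinematics used later to compute the two optical axes.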
the optical axis of the camera is perpendicular to the image plane and passes through the principal point of the image. If the 2D image of the 3D spatial point p can be held on the principal image points of the left and right cameras by visual gaze or visual servoing, the 3D spatial point p will be located on the optical axes of the left and right cameras at the same time. Thus, p is the intersection of the two camera optical axes, referred to as the fixed point. By calculating the intersection of the two optical axes, the 3D coordinates of the fixed point p can be estimated in real time. In practical applications, the two optical axes may not be on the same plane due to visual fixation errors and model errors, so the midpoint of the common perpendicular to the two skewed optical axes is taken as a representation of the fixed point.
S2: the end effector speed, joint jacobian matrix are obtained by visual fixation.
Further, the left and right cameras are the end effectors of the left and right eyes, respectively.
Further, the end effector speed is defined as V_n = [v_x v_y v_z ω_x ω_y ω_z]^T;
The joint velocity is defined as θ̇, and the joint Jacobian matrix is defined as J_θ.
Further, v_x, v_y and v_z are the linear velocity, and ω_x, ω_y and ω_z are the angular velocity;
Specifically, the joint Jacobian matrix is calculated column by column by the formula J_θi = [Z_i × (O_n - O_i); Z_i].
Further, Z_i and O_i are the direction vector of the Z axis and the origin of the coordinate system O_i-x_i y_i z_i, respectively.
Specifically, J_θ is converted from the basic coordinate system to the camera coordinate system by the formula J_joint = diag(R, R)·J_θ.
Further, J_joint is the transformed joint Jacobian matrix, and R is the rotation matrix of the basic coordinate system with respect to the camera coordinate system.
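The two steps above (column-wise assembly from Z axes and origins, then rotation into the camera frame) can be sketched as follows, under the assumption that all joints are revolute; the function and variable names are illustrative, not from the patent:

```python
import numpy as np

def joint_jacobian(z_axes, origins, o_n):
    """Assemble the 6xN joint Jacobian column by column:
    column i = [Z_i x (O_n - O_i); Z_i] (revolute joints assumed)."""
    cols = []
    for z, o in zip(z_axes, origins):
        cols.append(np.concatenate([np.cross(z, o_n - o), z]))
    return np.stack(cols, axis=1)

def to_camera_frame(J_theta, R):
    """Rotate a base-frame Jacobian into the camera frame:
    J_joint = diag(R, R) @ J_theta."""
    blk = np.block([[R, np.zeros((3, 3))], [np.zeros((3, 3)), R]])
    return blk @ J_theta

# One revolute joint at the origin, Z axis up, end effector at (1, 0, 0);
# the single column is then [0, 1, 0, 0, 0, 1]^T.
J = joint_jacobian([np.array([0.0, 0.0, 1.0])],
                   [np.zeros(3)],
                   np.array([1.0, 0.0, 0.0]))
```

With R = I the camera-frame Jacobian equals the base-frame one, which is a convenient sanity check.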
Specifically, the relation between V_n and the joint speed θ̇ is V_n = J_joint·θ̇.
Further, the coordinates of the 3D spatial point are defined as p= (u, v).
Specifically, the relation between the pixel speed ṗ and the end effector speed V_n is ṗ = J_image·V_n.
Further, J_image is the image Jacobian matrix.
Specifically, the image Jacobian matrix is calculated from the camera intrinsic parameters and the depth of the spatial point.
Further, f_u and f_v are the focal lengths in the column and row directions of the camera, and Z_c is the depth of the spatial point p in the camera coordinate system.
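The concrete matrix is not reproduced in the text; a commonly used form of the point-feature image Jacobian (interaction matrix) can be sketched as follows, under the simplifying assumptions of a single focal length f = f_u = f_v and pixel coordinates (u, v) measured relative to the principal point. Sign conventions vary between formulations, so this is illustrative rather than the patent's exact matrix:

```python
import numpy as np

def image_jacobian(u, v, f, Z_c):
    """Point-feature interaction matrix relating the pixel velocity
    [du/dt, dv/dt] to the camera twist [vx, vy, vz, wx, wy, wz].
    Assumes f = f_u = f_v and (u, v) relative to the principal point."""
    return np.array([
        [-f / Z_c, 0.0,      u / Z_c, u * v / f,     -(f + u * u / f),  v],
        [0.0,     -f / Z_c,  v / Z_c, f + v * v / f, -u * v / f,       -u],
    ])

# At the principal point (u = v = 0) only X/Y translation and
# rotation about the X/Y axes move the pixel:
J = image_jacobian(0.0, 0.0, 800.0, 1000.0)
```

Note the 1/Z_c dependence of the translational columns, which is why the depth of the spatial point appears in the definition above.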
S3: the deviation of the actual pixels and the expected pixels of the spatial point is reduced by the motion of the binocular camera.
Further, the actual pixel coordinates and the expected pixel coordinates of the spatial point are defined as p and p*, respectively.
Further, the deviation between p and p* is defined as e = p* - p.
Specifically, ṗ = Ke, where ṗ is the pixel speed.
further, K is a constant matrix that affects visual fixation performance.
Specifically, a constant matrix K = diag(k_1, k_2) is constructed.
Further, k_1 and k_2 are the gains in each channel.
Preferably, increasing k_1 and k_2 can shorten the time required for visual fixation.
Specifically, the pixel speed of the spatial point is ṗ = K(p* - p).
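A minimal discrete-time sketch of the proportional fixation law above, ṗ = K(p* - p), illustrating that larger gains k_1, k_2 shorten convergence toward the expected pixel; the time step and iteration count are illustrative assumptions, not values from the patent:

```python
import numpy as np

def fixate(p, p_star, k1, k2, dt=0.01, steps=600):
    """Drive the pixel coordinates p toward the expected pixel p*
    with the proportional law p_dot = K (p* - p), K = diag(k1, k2)."""
    K = np.diag([k1, k2])
    p = np.asarray(p, dtype=float).copy()
    p_star = np.asarray(p_star, dtype=float)
    for _ in range(steps):
        p += dt * (K @ (p_star - p))  # Euler step of the fixation law
    return p

# Move from an initial detection toward the principal point:
p_end = fixate([551.15, 374.35], [520.50, 430.50], k1=1.8, k2=1.8)
```

Each channel decays geometrically by (1 - k·dt) per step, so the pixel error shrinks exponentially and faster gains trade convergence time against sensitivity to noise.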
preferably, as shown in FIG. 2, a coordinate system O is established at the junction of the left and right cameras N -x N y N z N 。
Further, the left eye joint velocity vector is θ̇_l = [θ̇_4 θ̇_5]^T.
Specifically, p_l and p_l* are the actual pixel coordinates and the expected pixel coordinates of the spatial point in the left camera.
Specifically, the joint velocity of the left eye is θ̇_l = J_l^(-1)·ṗ_l.
Further, (J_l)_ij is the element on the i-th row and j-th column of the matrix J_l.
In the same way, the joint velocity of the right eye θ̇_r is obtained.
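The left-eye update implied above can be sketched as follows, assuming a known 2×2 composite Jacobian J_l mapping the two eye joint rates to pixel velocity; the numeric Jacobian and gains here are invented for illustration. The resulting joint velocity reproduces the commanded pixel speed K(p_l* - p_l):

```python
import numpy as np

# Illustrative 2x2 composite Jacobian of the left eye (pixels/rad) and gains:
J_l = np.array([[800.0, 5.0],
                [3.0, 780.0]])
K = np.diag([1.8, 1.8])

p_l = np.array([551.15, 374.35])       # actual pixel coordinates
p_l_star = np.array([520.50, 430.50])  # expected pixel coordinates

p_dot = K @ (p_l_star - p_l)           # commanded pixel speed
theta_dot_l = np.linalg.solve(J_l, p_dot)  # joint velocity of the left eye
```

Solving the 2×2 system directly is equivalent to applying J_l^(-1) element-wise, which is what the explicit (J_l)_ij formulation expresses.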
S4: and acquiring optical axes of the left camera and the right camera to obtain a fixed point p.
For example, in the 3D triangulation method based on visual fixation according to the preferred embodiment of the present utility model, as shown in fig. 3, a left eye coordinate system O_6-x_6 y_6 z_6 is established at the connection of the left and right cameras, the right eye coordinate system is O_9-x_9 y_9 z_9, and the left and right camera coordinate systems are defined accordingly.
Further, L_l and L_r are the optical axes of the left and right cameras, respectively.
Specifically, transformation matrices of the left camera and the right camera coordinate systems with respect to the basic coordinate system are calculated as:
further, the method comprises the steps of,
and />
The head-eye parameters of the left and right cameras respectively,
N T
6 and
N T
9 the transformation matrices of the left eye coordinate system and the right eye coordinate system with respect to the base coordinate system are respectively.
For example, in the actual situation of the embodiment of the present utility model, due to the motor servo error and the feature point extraction error, the fixed point p cannot be located on L_l and L_r at the same time, as shown in fig. 4. In this case, the expected pixel coordinates of point p will still be fixed on the principal point.
Further, L_l and L_r are the ideal optical axes that pass through the fixed point p at the same time. L′_l and L′_r are the actual optical axes, which are two skew lines. Points A and B are on L′_l. Points C and D are on L′_r.
Further, in the initial state of the binocular camera, L′_l and L′_r are respectively in the same direction as the Z axes of the left and right camera coordinate systems.
Specifically, the homogeneous coordinates of points A, B, C, D are set to (x_a, y_a, z_a, 1), (x_b, y_b, z_b, 1), (x_c, y_c, z_c, 1) and (x_d, y_d, z_d, 1).
Specifically, the initial homogeneous coordinates of points A, B, C, D are set to (0, 0, z_a′, 1), (0, 0, z_b′, 1), (0, 0, z_c′, 1) and (0, 0, z_d′, 1), respectively.
For example, the initial homogeneous coordinates of points A, B, C, D are (0, 0, 10, 1), (0, 0, 20, 1), (0, 0, 10, 1) and (0, 0, 20, 1), respectively.
For example, the homogeneous coordinates of A are obtained by applying the transformation matrix of the left camera coordinate system relative to the basic coordinate system to the initial coordinates of A.
Further, the homogeneous coordinates of points B, C and D are similarly obtained.
Specifically, x_b - x_a, y_b - y_a, z_b - z_a, x_d - x_c, y_d - y_c and z_d - z_c are defined as X_1, Y_1, Z_1, X_2, Y_2 and Z_2, that is, X_1 = x_b - x_a, Y_1 = y_b - y_a, Z_1 = z_b - z_a, X_2 = x_d - x_c, Y_2 = y_d - y_c and Z_2 = z_d - z_c.
Specifically, L′_l and L′_r are respectively defined as (x - x_a)/X_1 = (y - y_a)/Y_1 = (z - z_a)/Z_1 and (x - x_c)/X_2 = (y - y_c)/Y_2 = (z - z_c)/Z_2.
Specifically, the direction vector (X_cp, Y_cp, Z_cp) of the common perpendicular of L′_l and L′_r is defined as the cross product (X_1, Y_1, Z_1) × (X_2, Y_2, Z_2).
from L' l The plane defined by the common vertical line is:
X cp (X-x a )+Y cp (Y-y a )+Z cp (Z-z a )=0
by way of example, solve for the coordinate p 1 =(u 1 ,v 1 ) The coordinate p is obtained by the same method 2 =(u 2 ,v 2 )。
Specifically, by determining the point p in the coordinate system
1 and p
2 Is to determine the fixed point coordinates
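The construction above (feet of the common perpendicular on the two actual optical axes, then their midpoint) can be sketched with the closed-form closest-point solution for two skew lines; the function name is illustrative:

```python
import numpy as np

def fixed_point_midpoint(a, b, c, d):
    """Midpoint of the common perpendicular segment of two skew lines:
    line 1 through points a, b; line 2 through points c, d."""
    a, b, c, d = (np.asarray(v, dtype=float) for v in (a, b, c, d))
    d1, d2 = b - a, d - c          # (X1, Y1, Z1) and (X2, Y2, Z2)
    n = np.cross(d1, d2)           # common perpendicular direction
    # Line parameters of the perpendicular feet (standard closest-point
    # formulas for skew lines):
    t1 = np.dot(np.cross(d2, n), c - a) / np.dot(n, n)
    t2 = np.dot(np.cross(d1, n), c - a) / np.dot(n, n)
    p1 = a + t1 * d1               # foot on line 1
    p2 = c + t2 * d2               # foot on line 2
    return 0.5 * (p1 + p2)

# Two skew lines: the X axis, and a line parallel to Y at height z = 1.
m = fixed_point_midpoint([0, 0, 0], [1, 0, 0], [0, 0, 1], [0, 1, 1])
# The common perpendicular runs along Z, so the midpoint is (0, 0, 0.5).
```

When the two axes actually intersect, p_1 and p_2 coincide and the midpoint reduces to the exact intersection, so the same routine covers both the ideal and the skewed case.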
From the above description, it can be seen that in the method proposed by the present patent, image parallax or stereoscopic external parameters will not be used any more. Instead, two cameras are driven by visual fixation to focus on a 3D spatial point p in the optical center. Thus, p is located on the optical axes of both cameras at the same time, or p is a fixed point of both cameras. The 3D coordinates of p can be obtained by the intersection of the two optical axes of the two cameras. If joint position feedback and head-eye parameters are provided, both optical axes can be derived by forward kinematics.
In practice, the two optical axes may not be on the same plane due to visual fixation errors and model errors, so the midpoint of the common perpendicular segment of the two skewed optical axes is taken as a representation of the fixed point.
An advantage of 3D triangulation based on visual fixation is that no direct stereoscopic external parameters are required anymore. On the other hand, since the fixed point is calculated by the intersection of two skew lines, the error tolerance of image detection is higher than that of the parallax-based triangulation method.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order other than that illustrated herein.
According to an embodiment of the present utility model, there is provided a binocular camera-based 3D triangulation apparatus, the apparatus including:
a coordinate system establishment module, configured to establish a coordinate system of the binocular camera and acquire the coordinate system parameters;
an information acquisition module, configured to acquire the end effector speed, the joint speed and the joint Jacobian matrix through visual fixation;
a deviation reduction module, configured to reduce the deviation between the actual pixels and the expected pixels of the spatial point through the motion of the binocular camera;
a fixed point acquisition module, configured to acquire the optical axes of the left camera and the right camera to obtain the fixed point p.
The following describes in detail, by way of specific example, simulation experiments of a binocular camera-based 3D triangulation method:
for the simulation experiments, static experiments and dynamic experiments were performed.
The mechanical model of the robotic bionic eye is imported into GAZEBO. Parameters such as the joints and links are set to be the same as those of the real robotic bionic eye. A joint controller is constructed to send joint speed commands to the simulated robotic bionic eye and obtain joint angle feedback.
The image size of the left and right simulated cameras is 1040×860 pixels. As shown in the table below, the intrinsic parameters of the two simulated cameras, including the focal lengths and principal points, are calibrated using the Bouguet toolbox. The simulated checkerboard corners are detected with sub-pixel accuracy. The first upper-right corner of the simulated checkerboard is chosen as the spatial point p to be gazed at, which is called the fixed point. The expected pixel coordinates of the point p in both cameras are fixed on the principal point (520.50, 430.50) by visual fixation. For the head-eye parameters of the two simulated cameras, the Z axes of the coordinate systems O_6-x_6 y_6 z_6 and O_9-x_9 y_9 z_9 each deviate by only 24.30 mm. The X and Y axes of the left and right camera coordinate systems are the same as those of the coordinate systems O_6-x_6 y_6 z_6 and O_9-x_9 y_9 z_9, respectively. In the constant matrix K, k_1 = 1.8 and k_2 = 1.8 are set.
In the simulation experiment, a simulated checkerboard is placed, and the fixed point is set at (-0.00, -145.00, 1500.00) with respect to the basic coordinate system O_N-x_N y_N z_N. The simulated robotic bionic eye is set to an initial state. The pixel coordinates of the fixed point p are moved by visual fixation from (551.15, 374.35) to (520.50, 430.50) in the left camera and from (489.74, 374.16) to (520.50, 430.50) in the right camera, taking a total of 600 ms. In the simulation experiment, 55 μs is required to calculate the 3D coordinates of the fixed point using the method proposed by this patent.
In the static experiments, the validity of the method proposed by this patent is verified with static fixed points.
By placing the simulated checkerboard at 21 different positions, 21 different static fixed points are obtained. At each position, the 3D coordinates of the static fixed point with respect to the basic coordinate system O_N-x_N y_N z_N are recorded at the same time intervals using the method disclosed by this patent. The reference 3D coordinates of the static fixed points with respect to the basic coordinate system O_N-x_N y_N z_N, obtained from GAZEBO, are shown in the following table.
Using the method proposed in this patent, the absolute error between the reference true value of the 3D coordinates and the average of the estimated 3D coordinates is calculated. The absolute errors in the simulation experiment mainly come from the accumulated errors of forward kinematics, the calibration errors of the Bouguet toolbox, and the head-eye calibration errors. The absolute error of the method on the X axis is between 0.73 mm and 3.96 mm; the absolute error on the Y axis is between 1.14 mm and 6.18 mm. The minimum absolute error on the Z axis is 0.02 mm when the Z reference true value is 1100 mm. The maximum absolute error of the method on the Z axis is only 0.5% of the depth reference true value.
The uncertainty of the 3D coordinates estimated using the method proposed in this patent can be calculated by applying the propagation theorem of uncertainty. The uncertainty of the estimated 3D coordinates mainly comes from the uncertainty of image detection, which is reflected by the uncertainty of the pixel coordinates of the detection point, defined as u_u and u_v. u_u and u_v can be calculated as the standard uncertainties of the means of n independent repeated observations, where u_i, v_i (i = 1, 2, ..., n) are the observed pixel coordinates, and ū, v̄ are the averages of the n observations of u and v, respectively.
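The formula itself is not reproduced in the text; the standard Type A evaluation for the uncertainty of the mean of n repeated observations, which the description appears to rely on, can be sketched as follows:

```python
import numpy as np

def type_a_uncertainty(samples):
    """Standard uncertainty of the mean of n independent repeated
    observations: u = sqrt(sum((x_i - mean)^2) / (n * (n - 1)))."""
    x = np.asarray(samples, dtype=float)
    n = x.size
    return np.sqrt(np.sum((x - x.mean()) ** 2) / (n * (n - 1)))

# Uncertainty of repeatedly detected pixel coordinates u_i (illustrative):
u_u = type_a_uncertainty([520.4, 520.6, 520.5, 520.7, 520.3])
```

With the large n used in the experiments (n = 1150 or more), this uncertainty of the mean becomes much smaller than the scatter of individual detections.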
It is assumed that the pixel coordinates of the detection point coincide with the principal point within the same time t. After time t, in the method proposed in this patent, the bionic eye joint angles θ_i (i = 4, 5, 7, 8) vary with the pixel coordinates of the detection point in the two cameras through visual fixation. For the left eye, the joint angles θ_4, θ_5 can be expressed as functions of u_l and v_l, the pixel coordinates of the detection point in the left camera, together with the principal point and the focal lengths of the left camera in the column and row directions. The right eye joint angles θ_7, θ_8 are expressed similarly to θ_4, θ_5. The uncertainty of the bionic eye joint angles θ_i (i = 4, 5, 7, 8) can be calculated by propagating the uncertainty of the detection point pixel coordinates in the two cameras: the uncertainties of θ_4, θ_5 are obtained from the partial derivatives of the joint angles with respect to the pixel coordinates, evaluated at the averages of the observations. The uncertainty calculation method for the right eye joint angles θ_7, θ_8 is similar to that for θ_4, θ_5.
The optical axes of the two cameras are calculated using forward kinematics and the head-eye parameters; the estimated 3D coordinates of the detection point p, defined as (x_p, y_p, z_p), are calculated using the 3D triangulation algorithm based on visual gaze. Using the method proposed by this patent, the uncertainties of the estimated 3D coordinates of the detection point p can be calculated from the uncertainties of the bionic eye joint angles θ_i (i = 4, 5, 7, 8). The uncertainties of x_p, y_p, z_p are obtained from the partial derivatives of the estimated coordinates with respect to the joint angles, evaluated at θ̄_i, the average of n independent repeated observations of the i-th (i = 4, 5, 7, 8) joint angle feedback.
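The first-order propagation used above can be sketched numerically; the specific function f and the uncertainty values here are illustrative assumptions, not values from the patent:

```python
import numpy as np

def propagate_uncertainty(f, theta, u_theta, h=1e-6):
    """First-order uncertainty propagation:
    u_y = sqrt(sum_i (df/dtheta_i)^2 * u_theta_i^2),
    with partial derivatives taken by central differences at theta."""
    theta = np.asarray(theta, dtype=float)
    grads = []
    for i in range(theta.size):
        step = np.zeros_like(theta)
        step[i] = h
        grads.append((f(theta + step) - f(theta - step)) / (2 * h))
    grads = np.asarray(grads)
    return np.sqrt(np.sum(grads ** 2 * np.asarray(u_theta) ** 2))

# Illustrative: y = 2*theta_1 + 3*theta_2 with joint-angle
# uncertainties u = (0.1, 0.2).
u_y = propagate_uncertainty(lambda t: 2 * t[0] + 3 * t[1],
                            [0.5, 0.5], [0.1, 0.2])
```

In the patent's setting, f would be each coordinate of the triangulation output as a function of the four joint angles, and u_theta the joint-angle uncertainties obtained from the pixel-coordinate uncertainties above.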
As previously described, a board is placed in 21 different positions. At each position, n=1150. The absolute uncertainty in the X-axis is between 0.009mm and 0.130 mm; the absolute uncertainties on the Y-axis are all between 0.028mm and 0.188 mm; the absolute uncertainty in the Z-axis is between 0.088mm and 4.545 mm.
In the dynamic experiments, the validity of the method proposed by this patent is verified with a moving fixed point.
The checkerboard is placed at 1000 mm on the Z axis of the basic coordinate system O_N-x_N y_N z_N.
The trajectories of the fixed point on the X and Y axes with respect to the basic coordinate system O_N-x_N y_N z_N are x = r·sin(t/1000) and y = -100 + r·cos(t/1000), respectively, where r = 100 mm. Using the method proposed by this patent, the 3D coordinates of the estimated moving fixed point with respect to the basic coordinate system O_N-x_N y_N z_N are recorded at equal time intervals.
The method calculates the absolute error between the reference true value and the estimation result, and the average absolute error on the X axis is 1.76mm; the average absolute error on the Y axis is 2.03mm; the average absolute error on the Z axis is 1.69mm, reaching only 0.17% of the depth reference true value. The uncertainty in the Z-axis of the method proposed in this patent is calculated using the standard deviation. In the dynamic simulation experiment, n=3780. The absolute uncertainty of the method proposed by the patent on the Z axis is 0.30mm.
The following describes in detail, with specific examples, physical experiments of a binocular camera-based 3D triangulation method:
for the physical experiment, static experiments and dynamic experiments were performed on design number robotic biomimetic eyes. The image size of the left and right cameras is 640×480 pixels.
As shown in the table below, the intrinsic parameters of two cameras, including focal length and principal point, are shown calibrated using the Bouguet toolbox.
The distortion coefficients of the left camera and the right camera are [-0.0437, 0.1425, 0.0005, -0.0012, 0.0000] and [-0.0425, 0.1080, 0.0001, -0.0015, 0.0000], respectively. The AprilTag information is obtained through the ViSP library. The center of the AprilTag (point p) is chosen as the fixed point. The expected pixel coordinates of the point p in the left and right cameras are set to the principal points (362.94, 222.53) and (388.09, 220.82), respectively. In the constant matrix K, k_1 = 3.6 and k_2 = 3.6 are set. The head-eye parameters after calibration of the left and right cameras are as follows:
setting K in a constant matrix K 1 =4.0,k 2 =4.0. In physical experiments, april tag was placed for setting the fixation point in relation to the base coordinate system O N -x N y N z N (-0.00, -145.00, 1500). The robot-simulated eye is set to an initial state. The pixel coordinates of the fixed point p are moved by visual fixation from (385.98, 196.60) to (362.94, 222.53) in the left camera and from (361.09, 191.82) to (388.09, 220.82) in the right camera, taking a total of 650ms. In a physical experiment, 15 μs is required to calculate the 3D coordinates of the fixed point using the method proposed by this patent.
In the static experiment, the AprilTag is placed at 21 different locations to obtain 21 different static fixed points. At each location, the 3D coordinates of the static fixed point estimated using the method proposed in this patent are recorded at the same time intervals. The 3D coordinates of the AprilTag center are chosen as the reference true values of the static fixed points, as shown in the table below.
The absolute error between the reference true value and the mean of the estimated 3D coordinates is calculated using the method presented in this patent. The absolute errors in the physical experiments mainly come from the accumulated errors of forward kinematics and the errors of intrinsic parameter calibration, head-eye calibration and joint offset.
The absolute error of the method on the Y axis is between 0.55 mm and 12.28 mm. The minimum absolute error of the method on the Z axis is 1.42 mm, which is 0.23% of the reference true value. Especially when the Z reference true value is larger than 2000 mm, the maximum absolute error on the Z axis is 124.49 mm, reaching 4.97% of the depth reference true value.
The AprilTag is placed at 21 different locations. At each position, n = 1200. The absolute uncertainty of the 3D coordinates estimated by the method proposed by this patent on the X axis is between 0.119 mm and 3.905 mm; the absolute uncertainties on the Y axis are all between 0.091 mm and 0.640 mm; and the absolute uncertainties on the Z axis are all between 0.268 mm and 7.975 mm.
In the dynamic experiment, the AprilTag is moved so that the reference true value of the 3D coordinates of the fixed point moves from (-278.60, -23.13, 957.84) to (-390.02, -30.39, 1111.91) with respect to the basic coordinate system O_N-x_N y_N z_N, at an average velocity of 0.01 m/s. Using the method provided by this patent, the 3D coordinates of the estimated moving gaze point with respect to the basic coordinate system O_N-x_N y_N z_N are recorded at the same time intervals.
The absolute error between the reference truth and the estimation result is calculated using the method proposed by the present patent. The average absolute error on the X axis is 11.81mm; the average absolute error on the Y-axis is 14.89mm; the average absolute error on the Z axis is 23.74mm.
It will be apparent to those skilled in the art that the modules or steps of the utility model described above may be implemented in a general purpose computing device, they may be concentrated on a single computing device, or distributed across a network of computing devices, or they may alternatively be implemented in program code executable by computing devices, such that they may be stored in a memory device for execution by the computing devices, or they may be separately fabricated into individual integrated circuit modules, or multiple modules or steps within them may be fabricated into a single integrated circuit module. Thus, the present utility model is not limited to any specific combination of hardware and software.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the same, but rather, various modifications and variations may be made by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application should be included in the protection scope of the present application.