CN111784771B - Binocular camera-based 3D triangulation method and device - Google Patents


Info

Publication number
CN111784771B
CN111784771B (application CN202010600958.XA)
Authority
CN
China
Prior art keywords
camera
coordinate system
joint
optical axes
points
Prior art date
Legal status: Active
Application number
CN202010600958.XA
Other languages
Chinese (zh)
Other versions
CN111784771A (en)
Inventor
陈晓鹏
黄强
王启航
徐德
黄华
樊迪
高峻峣
余张国
陈学超
Current Assignee
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN202010600958.XA
Publication of CN111784771A
Application granted
Publication of CN111784771B

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/70 — Determining position or orientation of objects or cameras
    • G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
    • G01 — MEASURING; TESTING
    • G01B — MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 — Measuring arrangements characterised by the use of optical techniques
    • G01B11/002 — Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 — Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04 — Interpretation of pictures
    • G01C11/30 — Interpretation of pictures by triangulation


Abstract

The application discloses a binocular camera-based 3D triangulation method. The method comprises: establishing a coordinate system of the binocular camera and acquiring its parameters; acquiring the end effector speed, joint speed and joint Jacobian matrix through visual fixation; reducing the deviation between the actual and expected pixels of a spatial point by the motion of the binocular camera; and acquiring the optical axes of the left and right cameras to obtain a fixed point p. The method and device address the technical problem of adverse effects on 3D triangulation caused by inaccurate stereoscopic external parameters and parallax.

Description

Binocular camera-based 3D triangulation method and device
Technical Field
The application relates to the field of binocular active vision, in particular to a 3D triangulation method based on binocular cameras.
Background
3D triangulation based on binocular active vision is increasingly applied in computer vision and robotics, and binocular active vision benefits applications such as manipulation, 3D reconstruction, navigation and 3D mapping. 3D coordinate estimation based on binocular active vision has attracted extensive research interest due to its non-contact, low-cost and high-accuracy advantages.
Binocular active vision systems can be divided into two categories: the first has fixed cameras and the second has non-fixed cameras. The second category increases flexibility and expands the field of view, which is more similar to the human visual system. In general, 3D triangulation may be performed using parallax and stereoscopic external parameters. The stereoscopic external parameters may be calibrated offline or online. The first type of active vision system requires only offline calibration, since one camera is static with respect to the other and the stereoscopic external parameters are fixed. The second type requires online calibration, since the external parameters may change constantly.
In the related art, an integrated double correction (ITPC) method is adopted: stereoscopic external parameters are calculated online through forward kinematics and calibrated head-eye parameters, and parallax-based 3D triangulation is performed with these parameters to estimate the coordinates of 3D spatial points. However, in parallax-based triangulation, the accuracy of the stereoscopic external parameters and of the parallax has a significant influence on the 3D result.
For the problem of adverse effects on 3D triangulation due to inaccuracy of stereo external parameters and parallax in the parallax-based triangulation method, no effective solution has been proposed at present.
Disclosure of Invention
The main objective of the present application is to provide a 3D triangulation method based on binocular cameras, so as to solve the technical problem of adverse effects on 3D triangulation caused by inaccuracy of stereoscopic external parameters and parallax.
To achieve the above object, according to one aspect of the present application, a 3D triangulation method and apparatus based on binocular cameras are provided.
In a first aspect, the present application provides a binocular camera-based 3D triangulation method.
The binocular camera-based 3D triangulation method according to the present application includes:
establishing a coordinate system of the binocular camera, and acquiring parameters of the coordinate system;
acquiring an end effector speed, a joint speed and a joint Jacobian matrix through visual fixation;
reducing the deviation of the actual pixels and the expected pixels of the spatial points by the motion of the binocular camera;
and acquiring optical axes of the left camera and the right camera to obtain a fixed point p.
Further, establishing the coordinate system of the binocular camera and acquiring the coordinate system parameters includes:
establishing the coordinate system based on the standard D-H method;
the parameters $d_i, \theta_i, a_i, \alpha_i$ are the link offset, joint angle, link length and link twist, respectively; the joint offset is the value of $\theta_i$ in the initial state;
the transformation matrix between coordinate systems $i$ and $i-1$ is:
$${}^{i-1}T_i=\begin{bmatrix}\cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i\\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i\\ 0 & \sin\alpha_i & \cos\alpha_i & d_i\\ 0 & 0 & 0 & 1\end{bmatrix}$$
Further, acquiring the end effector speed, joint speed and joint Jacobian matrix through visual fixation includes:
the left and right cameras are the end effectors of the left and right eyes, respectively;
the end effector speed is $V_n=[v_x\ v_y\ v_z\ \omega_x\ \omega_y\ \omega_z]^T$, where $v_x, v_y, v_z$ are the linear velocity and $\omega_x, \omega_y, \omega_z$ the angular velocity;
the joint velocity is
$$\dot{\theta}=[\dot{\theta}_1\ \dot{\theta}_2\ \cdots\ \dot{\theta}_n]^T$$
and the joint Jacobian matrix is
$$J_\theta=[J_1\ J_2\ \cdots\ J_n]$$
whose columns are calculated by the formula
$$J_i=\begin{bmatrix}Z_i\times(O_n-O_i)\\ Z_i\end{bmatrix}$$
where $Z_i$ and $O_i$ are the direction vector of the $z$ axis and the origin of coordinate system $O_i\text{-}x_iy_iz_i$;
$J_\theta$ is converted from the base coordinate system to the camera coordinate system by the formula
$$J_{joint}=\begin{bmatrix}R & 0\\ 0 & R\end{bmatrix}J_\theta$$
where $J_{joint}$ is the transformed joint Jacobian matrix and $R$ is the rotation matrix of the base coordinate system relative to the camera coordinate system;
the relation between $V_n$ and $\dot{\theta}$ is then
$$V_n=J_{joint}\dot{\theta}$$
the pixel coordinates of the 3D spatial point are $p=(u,v)$;
the relation between the pixel velocity $\dot{p}$ and the end effector speed $V_n$ is then
$$\dot{p}=J_{image}V_n$$
where $J_{image}$ is the image Jacobian matrix; and the image Jacobian matrix (with $(u,v)$ measured relative to the principal point) is
$$J_{image}=\begin{bmatrix}-\dfrac{f_u}{Z_c} & 0 & \dfrac{u}{Z_c} & \dfrac{uv}{f_v} & -\left(f_u+\dfrac{u^2}{f_u}\right) & \dfrac{f_u}{f_v}v\\[2mm] 0 & -\dfrac{f_v}{Z_c} & \dfrac{v}{Z_c} & f_v+\dfrac{v^2}{f_v} & -\dfrac{uv}{f_u} & -\dfrac{f_v}{f_u}u\end{bmatrix}$$
where $f_u$ and $f_v$ are the focal lengths in the column and row directions of the camera, and $Z_c$ is the depth of the spatial point $p$ in the camera coordinate system.
Further, reducing the deviation between the actual and expected pixels of the spatial point by the motion of the binocular camera includes:
the actual and expected pixel coordinates of the spatial point are $p$ and $p^*$, respectively;
the deviation between $p$ and $p^*$ is $e=p^*-p$;
then, since $p^*$ is constant,
$$\dot{e}=-\dot{p}$$
where $\dot{p}$ is the pixel velocity;
the deviation is driven to zero by
$$\dot{e}=-Ke$$
where $K$ is a constant matrix affecting visual fixation performance, constructed as
$$K=\begin{bmatrix}k_1 & 0\\ 0 & k_2\end{bmatrix}$$
where $k_1$ and $k_2$ are the gains in each channel;
then, the pixel velocity of the spatial point is
$$\dot{p}=Ke$$
and therefore
$$Ke=J_{image}J_{joint}\dot{\theta}$$
a coordinate system $O_N\text{-}x_Ny_Nz_N$ is established at the joint of the left and right cameras, and the left eye joint velocity vector is
$$\dot{\theta}_l=[\dot{\theta}_4\ \dot{\theta}_5]^T$$
then,
$$Ke_l=J_l\dot{\theta}_l$$
where $p_l$ and $p_l^*$ are the actual and expected pixel coordinates of the spatial point in the left camera, $e_l=p_l^*-p_l$, and $J_l$ is the resulting $2\times 2$ composite Jacobian;
the joint velocity of the left eye is then
$$\dot{\theta}_l=\frac{1}{J_l^{11}J_l^{22}-J_l^{12}J_l^{21}}\begin{bmatrix}J_l^{22} & -J_l^{12}\\ -J_l^{21} & J_l^{11}\end{bmatrix}Ke_l$$
where $J_l^{ij}$ is the element of matrix $J_l$ on the $i$th row and $j$th column;
similarly, the right eye joint velocity is
$$\dot{\theta}_r=J_r^{-1}Ke_r$$
Preferably, when the optical axes of the left and right cameras are not on one plane, the midpoint of the common perpendicular segment of the two skewed optical axes is taken as the fixed point p.
Specifically, when the optical axes of the left and right cameras are not on one plane, taking the midpoint of the common perpendicular segment of the two skew optical axes as the fixed point p includes:
$L_l$ and $L_r$ are the ideal optical axes passing through the fixed point p simultaneously, $L'_l$ and $L'_r$ are the actual optical axes, points A and B lie on $L'_l$, and points C and D lie on $L'_r$;
in the initial state of the binocular camera, $L'_l$ and $L'_r$ have the same directions as the $z$ axes of the left and right camera coordinate systems, respectively;
the homogeneous coordinates of points A, B, C, D are $(x_a,y_a,z_a,1)$, $(x_b,y_b,z_b,1)$, $(x_c,y_c,z_c,1)$ and $(x_d,y_d,z_d,1)$; the initial homogeneous coordinates of points A, B, C, D are $(0,0,z_a',1)$, $(0,0,z_b',1)$, $(0,0,z_c',1)$ and $(0,0,z_d',1)$, respectively;
Then, the homogeneous coordinates of a are:
Figure BDA0002558663230000048
homogeneous coordinates of points B, C and D can be obtained by the same method;
X 1 =x b -x a ,Y 1 =y b -y a ,Z 1 =z b -z a ,X 2 =x d -x c ,Y 2 =y d -y c and Z2 =z d -z c
Then, L' l and L′r Respectively is
Figure BDA0002558663230000051
Then, L' l and L′r The direction vector of the common perpendicular of (a) is:
Figure BDA0002558663230000052
then, from L' l The plane defined by the common vertical line is:
X cp (X-x a )+Y cp (Y-y a )+Z cp (Z-z a )=0
then, the coordinate p is solved 1 =(u 1 ,v 1 ) The coordinate p is obtained by the same method 2 =(u 2 ,v 2 );
By determining the point p in the coordinate system 1 and p2 Is to determine the fixed point coordinates
Figure BDA0002558663230000053
In a second aspect, the present application provides a binocular camera-based 3D triangulation apparatus, the apparatus comprising:
and a coordinate system establishment module: the method comprises the steps of establishing a coordinate system of a binocular camera and acquiring coordinate system parameters;
an information acquisition module: the method comprises the steps of obtaining end effector speed, joint speed and joint jacobian matrix through visual fixation;
a deviation reducing module: for reducing the deviation between the actual and expected pixels of the spatial point by the motion of the binocular camera;
a fixed point acquisition module: and the optical axes of the left camera and the right camera are acquired to obtain a fixed point p.
In a third aspect, the present application provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the binocular camera based 3D triangulation method provided in the first aspect when the program is executed.
In a fourth aspect, the present application provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the binocular camera based 3D triangulation method provided in the first aspect.
In the embodiments of the present application, 3D triangulation is performed by calculating the intersection point of the two camera optical axes. This avoids the drawbacks of using image parallax or stereoscopic external parameters, achieves better performance with smaller uncertainty, and thereby solves the technical problem of adverse effects on 3D triangulation caused by inaccurate stereoscopic external parameters and parallax.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, are included to provide a further understanding of the application and of its other features, objects and advantages. The drawings of the illustrative embodiments and their descriptions serve to illustrate the present application and are not to be construed as unduly limiting it. In the drawings:
FIG. 1 is a flow diagram of a binocular camera-based 3D triangulation method according to an embodiment of the present application;
FIG. 2 is a coordinate system of a binocular camera according to an embodiment of the present application;
FIG. 3 illustrates the visual-fixation-based 3D triangulation model according to an embodiment of the present application; and
FIG. 4 shows the practical (non-ideal) situation model according to an embodiment of the present application.
Detailed Description
In order to make the solution of the present application better understood by those skilled in the art, the technical solutions in the embodiments of the present application are described below in detail with reference to the accompanying drawings. The described embodiments are apparently only some, not all, embodiments of the present application. All other embodiments obtained by one of ordinary skill in the art based on the embodiments herein without inventive effort shall fall within the scope of the present application.
It should be noted that the terms "comprises" and "comprising," and any variations thereof, in the description and claims of the present application and in the foregoing figures, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the present application, the terms "upper", "lower", "left", "right", "front", "rear", "top", "bottom", "inner", "outer", "middle", "vertical", "horizontal", "lateral", "longitudinal" and the like indicate an azimuth or a positional relationship based on that shown in the drawings. These terms are only used to better describe the present application and its embodiments and are not intended to limit the indicated devices, elements or components to the particular orientations or to being configured and operated in those orientations.
Also, some of the above terms may be used to indicate other meanings in addition to orientation or positional relationships; for example, the term "upper" may also be used to indicate some sort of attachment or connection in some cases. The specific meaning of these terms in the present application will be understood by those of ordinary skill in the art according to the specific circumstances.
It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
As shown in fig. 1, the method includes the following steps S1 to S4:
s1: and establishing a coordinate system of the binocular camera, and acquiring parameters of the coordinate system.
Further, a coordinate system is established based on a standard D-H method.
By way of example, the coordinate system of the binocular camera in a preferred embodiment of the present application is shown in FIG. 2.
For example, $L_1 = 64.27$ mm, $L_2 = 11.00$ mm, $L_{3l} = 44.80$ mm, $L_{3r} = 47.20$ mm, $L_4 = 13.80$ mm and $L_5 = 30.33$ mm.
Further, the parameters $d_i, \theta_i, a_i, \alpha_i$ are the link offset, joint angle, link length and link twist, respectively; the joint offset is the value of $\theta_i$ in the initial state.
Illustratively, the values of $d_i, \theta_i, a_i, \alpha_i$ in the preferred embodiment are given in the following table:
[Table images: the D-H parameters $d_i, \theta_i, a_i, \alpha_i$ of each joint; not reproduced in this extraction.]
Specifically, the transformation matrix between coordinate systems $i$ and $i-1$ is:
$${}^{i-1}T_i=\begin{bmatrix}\cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i\\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i\\ 0 & \sin\alpha_i & \cos\alpha_i & d_i\\ 0 & 0 & 0 & 1\end{bmatrix}$$
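As an illustrative sketch (not part of the patent text), the standard D-H transform above can be implemented as follows; the function name and parameter order are our own choices:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard D-H transform of coordinate system i relative to i-1.

    theta: joint angle, d: link offset, a: link length, alpha: link twist.
    """
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Chaining the per-joint transforms gives the pose of frame n in the base frame:
# T_0n = dh_transform(th1, d1, a1, al1) @ dh_transform(th2, d2, a2, al2) @ ...
```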
the optical axis of the camera is perpendicular to the image plane and passes through the principal point of the image. If the 2D image of the 3D spatial point p can be held on the principal image points of the left and right cameras by visual gaze or visual servoing, the 3D spatial point p will be located on the optical axes of the left and right cameras at the same time. Thus, p is the intersection of the two camera optical axes, referred to as the fixed point. By calculating the intersection of the two optical axes, the 3D coordinates of the fixed point p can be estimated in real time. In practical applications, the two optical axes may not be on the same plane due to visual fixation errors and model errors, so the midpoint of the common perpendicular to the two skewed optical axes is taken as a representation of the fixed point.
S2: the end effector speed, joint jacobian matrix are obtained by visual fixation.
Further, the left and right cameras are the end effectors of the left and right eyes, respectively.
Further, the end effector speed is defined as $V_n=[v_x\ v_y\ v_z\ \omega_x\ \omega_y\ \omega_z]^T$, where $v_x, v_y, v_z$ are the linear velocity and $\omega_x, \omega_y, \omega_z$ the angular velocity.
The joint velocity is defined as
$$\dot{\theta}=[\dot{\theta}_1\ \dot{\theta}_2\ \cdots\ \dot{\theta}_n]^T$$
The joint Jacobian matrix is defined as
$$J_\theta=[J_1\ J_2\ \cdots\ J_n]$$
Specifically, each column is calculated by the formula
$$J_i=\begin{bmatrix}Z_i\times(O_n-O_i)\\ Z_i\end{bmatrix}$$
Further, $Z_i$ and $O_i$ are the direction vector of the $z$ axis and the origin of coordinate system $O_i\text{-}x_iy_iz_i$.
Specifically, $J_\theta$ is converted from the base coordinate system to the camera coordinate system by the formula
$$J_{joint}=\begin{bmatrix}R & 0\\ 0 & R\end{bmatrix}J_\theta$$
Further, $J_{joint}$ is the transformed joint Jacobian matrix, and $R$ is the rotation matrix of the base coordinate system with respect to the camera coordinate system.
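A minimal numpy sketch of the Jacobian construction just described, assuming all joints are revolute and that the z-axis directions and origins are taken from the chained D-H transforms; the helper names are our own:

```python
import numpy as np

def joint_jacobian(T_list):
    """Build J_theta = [J_1 ... J_n] with columns J_i = [Z_i x (O_n - O_i); Z_i].

    T_list: 4x4 poses of each joint frame in the base frame, the last entry
    being the end effector frame. Z_i is a frame's z axis, O_i its origin.
    """
    O_n = T_list[-1][:3, 3]
    cols = [np.hstack([np.cross(T[:3, 2], O_n - T[:3, 3]), T[:3, 2]])
            for T in T_list[:-1]]
    return np.column_stack(cols)

def to_camera_frame(J_theta, R):
    """J_joint = blkdiag(R, R) @ J_theta, rotating both the linear and the
    angular rows into the camera coordinate system."""
    B = np.zeros((6, 6))
    B[:3, :3] = R
    B[3:, 3:] = R
    return B @ J_theta
```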
Specifically, the relation between $V_n$ and $\dot{\theta}$ is
$$V_n=J_{joint}\dot{\theta}$$
Further, the pixel coordinates of the 3D spatial point are defined as $p=(u,v)$.
Specifically, the relation between the pixel velocity $\dot{p}$ and the end effector speed $V_n$ is
$$\dot{p}=J_{image}V_n$$
Further, $J_{image}$ is the image Jacobian matrix.
Specifically, the image Jacobian matrix (with $(u,v)$ measured relative to the principal point) is calculated by the formula
$$J_{image}=\begin{bmatrix}-\dfrac{f_u}{Z_c} & 0 & \dfrac{u}{Z_c} & \dfrac{uv}{f_v} & -\left(f_u+\dfrac{u^2}{f_u}\right) & \dfrac{f_u}{f_v}v\\[2mm] 0 & -\dfrac{f_v}{Z_c} & \dfrac{v}{Z_c} & f_v+\dfrac{v^2}{f_v} & -\dfrac{uv}{f_u} & -\dfrac{f_v}{f_u}u\end{bmatrix}$$
Further, $f_u$ and $f_v$ are the focal lengths in the column and row directions of the camera, and $Z_c$ is the depth of the spatial point $p$ in the camera coordinate system.
S3: the deviation of the actual pixels and the expected pixels of the spatial point is reduced by the motion of the binocular camera.
Further, the actual and expected pixel coordinates of the spatial point are defined as $p$ and $p^*$, respectively.
Further, the deviation between $p$ and $p^*$ is defined as $e=p^*-p$.
Specifically, since $p^*$ is constant,
$$\dot{e}=-\dot{p}$$
where $\dot{p}$ is the pixel velocity.
Specifically, the deviation is driven to zero by
$$\dot{e}=-Ke$$
further, K is a constant matrix that affects visual fixation performance.
Specifically, a constant matrix is constructed
Figure BDA0002558663230000101
Further, k 1 and k2 Is the gain in each channel.
Preferably, k is increased 1 and k2 The time required for visual fixation can be shortened.
Specifically, the pixel speed of the spatial point is
Figure BDA0002558663230000102
/>
In particular, the method comprises the steps of,
Figure BDA0002558663230000103
Preferably, as shown in FIG. 2, a coordinate system $O_N\text{-}x_Ny_Nz_N$ is established at the junction of the left and right cameras.
Further, the left eye joint velocity vector is
$$\dot{\theta}_l=[\dot{\theta}_4\ \dot{\theta}_5]^T$$
Specifically,
$$Ke_l=J_l\dot{\theta}_l$$
Further, $p_l$ and $p_l^*$ are the actual and expected pixel coordinates of the spatial point in the left camera, $e_l=p_l^*-p_l$, and $J_l$ is the resulting $2\times 2$ composite Jacobian.
Specifically, the joint velocity of the left eye is
$$\dot{\theta}_l=\frac{1}{J_l^{11}J_l^{22}-J_l^{12}J_l^{21}}\begin{bmatrix}J_l^{22} & -J_l^{12}\\ -J_l^{21} & J_l^{11}\end{bmatrix}Ke_l$$
Further, $J_l^{ij}$ is the element of matrix $J_l$ on the $i$th row and $j$th column.
In the same way, the joint velocity of the right eye is
$$\dot{\theta}_r=J_r^{-1}Ke_r$$
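Putting the control law together, one fixation step per eye reduces to a 2×2 linear solve. The sketch below is our own assembly of the quantities defined above (J_eye is the composite of the image Jacobian and that eye's two columns of the joint Jacobian):

```python
import numpy as np

def eye_joint_velocity(p, p_star, J_eye, k1, k2):
    """One visual-fixation step: command joint velocities that drive the
    pixel error e = p* - p to zero.

    p, p_star: actual and expected pixel coordinates (2-vectors).
    J_eye: 2x2 composite Jacobian for this eye's two joints.
    """
    K = np.diag([k1, k2])
    e = np.asarray(p_star, float) - np.asarray(p, float)
    # theta_dot = J_eye^{-1} K e (the explicit 2x2 inverse used in the text)
    return np.linalg.solve(J_eye, K @ e)

# Example: theta_dot_l = eye_joint_velocity(p_l, p_l_star, J_l, 1.8, 1.8)
```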
S4: and acquiring optical axes of the left camera and the right camera to obtain a fixed point p.
For example, in the visual-fixation-based 3D triangulation method of the preferred embodiment, as shown in FIG. 3, the left eye coordinate system established at the junction of the left and right cameras is $O_6\text{-}x_6y_6z_6$, the right eye coordinate system is $O_9\text{-}x_9y_9z_9$, the left camera coordinate system is $O_{c_l}\text{-}x_{c_l}y_{c_l}z_{c_l}$, and the right camera coordinate system is $O_{c_r}\text{-}x_{c_r}y_{c_r}z_{c_r}$.
Further, $L_l$ and $L_r$ are the optical axes of the left and right cameras, respectively.
Specifically, the transformation matrices of the left and right camera coordinate systems with respect to the base coordinate system are calculated as:
$${}^{N}T_{c_l}={}^{N}T_6\,{}^{6}T_{c_l},\qquad {}^{N}T_{c_r}={}^{N}T_9\,{}^{9}T_{c_r}$$
Further, ${}^{6}T_{c_l}$ and ${}^{9}T_{c_r}$ are the head-eye parameters of the left and right cameras, respectively, and ${}^{N}T_6$ and ${}^{N}T_9$ are the transformation matrices of the left and right eye coordinate systems with respect to the base coordinate system, respectively.
For example, in the practical situation of the embodiment, the fixed point p cannot be located on $L_l$ and $L_r$ simultaneously due to motor servo errors and feature point extraction errors, as shown in FIG. 4. In this case, the expected pixel coordinates of point p are still fixed on the principal points.
Further, $L_l$ and $L_r$ are the ideal optical axes that pass through the fixed point p simultaneously. $L'_l$ and $L'_r$ are the actual optical axes, which are two skew lines. Points A and B lie on $L'_l$; points C and D lie on $L'_r$.
Further, in the initial state of the binocular camera, $L'_l$ and $L'_r$ have the same directions as the $z$ axes of the left and right camera coordinate systems, respectively.
Specifically, the homogeneous coordinates of points A, B, C, D are set to $(x_a,y_a,z_a,1)$, $(x_b,y_b,z_b,1)$, $(x_c,y_c,z_c,1)$ and $(x_d,y_d,z_d,1)$.
Specifically, the initial homogeneous coordinates of points A, B, C, D are set to $(0,0,z_a',1)$, $(0,0,z_b',1)$, $(0,0,z_c',1)$ and $(0,0,z_d',1)$, respectively.
For example, the initial homogeneous coordinates of points A, B, C, D are $(0,0,10,1)$, $(0,0,20,1)$, $(0,0,10,1)$ and $(0,0,20,1)$, respectively.
For example, the homogeneous coordinates of A are obtained by transforming its initial coordinates with the left camera pose:
$$(x_a,y_a,z_a,1)^T={}^{N}T_{c_l}\,(0,0,z_a',1)^T$$
Further, the homogeneous coordinates of points B, C and D are obtained similarly.
Specifically, $x_b-x_a$, $y_b-y_a$, $z_b-z_a$, $x_d-x_c$, $y_d-y_c$ and $z_d-z_c$ are defined as $X_1$, $Y_1$, $Z_1$, $X_2$, $Y_2$ and $Z_2$; that is, $X_1=x_b-x_a$, $Y_1=y_b-y_a$, $Z_1=z_b-z_a$, $X_2=x_d-x_c$, $Y_2=y_d-y_c$ and $Z_2=z_d-z_c$.
Specifically, $L'_l$ and $L'_r$ are defined respectively as
$$\frac{X-x_a}{X_1}=\frac{Y-y_a}{Y_1}=\frac{Z-z_a}{Z_1},\qquad \frac{X-x_c}{X_2}=\frac{Y-y_c}{Y_2}=\frac{Z-z_c}{Z_2}$$
Specifically, the direction vector of the common perpendicular of $L'_l$ and $L'_r$ is defined as
$$(X_{cp},Y_{cp},Z_{cp})=(X_1,Y_1,Z_1)\times(X_2,Y_2,Z_2)$$
The plane defined by $L'_l$ and the common perpendicular is
$$X_{cp}(X-x_a)+Y_{cp}(Y-y_a)+Z_{cp}(Z-z_a)=0$$
By way of example, the coordinate $p_1=(u_1,v_1)$ is solved, and the coordinate $p_2=(u_2,v_2)$ is obtained by the same method.
Specifically, the fixed point coordinates are determined as the midpoint of $p_1$ and $p_2$ in the coordinate system:
$$p=\frac{p_1+p_2}{2}$$
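The geometric core above — the midpoint of the common perpendicular segment of two skew optical axes — can be sketched directly in numpy. The least-squares closest-point formulation below is an equivalent alternative to the plane construction in the text, and the names are our own:

```python
import numpy as np

def fixation_point(A, B, C, D):
    """Midpoint of the common perpendicular segment of two skew lines.

    L'_l passes through points A and B; L'_r passes through points C and D.
    Returns the estimated fixed point p = (p1 + p2) / 2.
    """
    A, B, C, D = (np.asarray(x, float) for x in (A, B, C, D))
    d1, d2 = B - A, D - C                      # line directions
    # Closest points: minimize |(A + s d1) - (C + t d2)|^2 over (s, t).
    M = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(C - A) @ d1, (C - A) @ d2])
    if abs(np.linalg.det(M)) < 1e-12:          # axes (nearly) parallel
        raise ValueError("optical axes are nearly parallel")
    s, t = np.linalg.solve(M, b)
    p1, p2 = A + s * d1, C + t * d2            # feet of the common perpendicular
    return (p1 + p2) / 2.0
```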
From the above description, it can be seen that the proposed method no longer uses image parallax or stereoscopic external parameters. Instead, the two cameras are driven by visual fixation so that the image of a 3D spatial point p stays at their principal points. Thus p is located on the optical axes of both cameras simultaneously; that is, p is a fixed point of both cameras. The 3D coordinates of p can be obtained from the intersection of the two camera optical axes. If joint position feedback and head-eye parameters are provided, both optical axes can be derived by forward kinematics.
In practice, the two optical axes may not be on the same plane due to visual fixation errors and model errors, so the midpoint of the common perpendicular segment of the two skewed optical axes is taken as a representation of the fixed point.
An advantage of 3D triangulation based on visual fixation is that stereoscopic external parameters are no longer directly required. Furthermore, since the fixed point is calculated from the intersection of two skew lines, the tolerance to image detection errors is higher than in parallax-based triangulation.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order other than that illustrated herein.
According to an embodiment of the present application, there is provided a binocular camera-based 3D triangulation apparatus, the apparatus including:
and a coordinate system establishment module: the method comprises the steps of establishing a coordinate system of a binocular camera and acquiring coordinate system parameters;
an information acquisition module: the method comprises the steps of obtaining end effector speed, joint speed and joint jacobian matrix through visual fixation;
a deviation reducing module: for reducing the deviation between the actual and expected pixels of the spatial point by the motion of the binocular camera;
a fixed point acquisition module: and the optical axes of the left camera and the right camera are acquired to obtain a fixed point p.
The following describes in detail, by way of specific example, simulation experiments of a binocular camera-based 3D triangulation method:
for the simulation experiments, static experiments and dynamic experiments were performed.
The mechanical model of the robotic bionic eye is imported into GAZEBO. Parameters such as joints and links are set to be the same as the real robotic bionic eye. A joint controller is constructed to send joint velocity commands to the simulated bionic eye and obtain joint angle feedback.
The image size of the left and right simulated cameras is 1040×860 pixels. The intrinsic parameters of the two simulated cameras, including the focal lengths and principal points, are calibrated using the Bouguet toolbox and shown in the table below. The simulated checkerboard corners are detected with sub-pixel accuracy, and the first upper-right corner of the simulated checkerboard is chosen as the spatial point p to be gazed at, called the fixed point. The expected pixel coordinates of the p point in both cameras are fixed on the principal point (520.50, 430.50) by visual fixation. For the head-eye parameters of the two simulated cameras, the camera coordinate systems deviate from the coordinate systems $O_6\text{-}x_6y_6z_6$ and $O_9\text{-}x_9y_9z_9$ by only 24.30 mm along the $z$ axes, and their $x$ and $y$ axes are the same as those of $O_6\text{-}x_6y_6z_6$ and $O_9\text{-}x_9y_9z_9$. In the constant matrix K, $k_1=1.8$ and $k_2=1.8$.
In the simulation experiment, a simulated checkerboard is placed so that the fixed point is at (−0.00, −145.00, 1500) with respect to the base coordinate system $O_N\text{-}x_Ny_Nz_N$. The simulated robotic bionic eye is set to an initial state. The pixel coordinates of the fixed point p are moved by visual fixation from (551.15, 374.35) to (520.50, 430.50) in the left camera and from (489.74, 374.16) to (520.50, 430.50) in the right camera, taking a total of 600 ms. In the simulation experiments, 55 μs is required to calculate the 3D coordinates of the fixed point using the proposed method.
[Table image: intrinsic parameters (focal lengths and principal points) of the two simulated cameras; not reproduced in this extraction.]
In the static experiments, the validity of the proposed method is verified on stationary fixed points.
By placing the simulated checkerboard at 21 different positions, 21 different static fixed points are obtained. At each position, the 3D coordinates of the static fixed point with respect to the base coordinate system $O_N\text{-}x_Ny_Nz_N$, estimated using the proposed method, are recorded at the same time intervals. The reference 3D coordinates of the static fixed points with respect to $O_N\text{-}x_Ny_Nz_N$ can be obtained from GAZEBO and are shown in the following table.
[Table images: reference 3D coordinates of the 21 static fixed points; not reproduced in this extraction.]
Using the proposed method, the absolute error between the reference 3D coordinates and the mean of the estimated 3D coordinates is calculated. The absolute errors in the simulation experiments mainly come from accumulated errors in forward kinematics, errors of the Bouguet calibration and errors of the head-eye calibration. The absolute error of the method on the X axis is between 0.73 mm and 3.96 mm; the absolute error on the Y axis is between 1.14 mm and 6.18 mm. The minimum absolute error on the Z axis is 0.02 mm when the Z reference value is 1100 mm. The maximum absolute error of the method on the Z axis is only 0.5% of the depth reference value.
The uncertainty of the 3D coordinates estimated by the proposed method can be calculated by applying the law of propagation of uncertainty. The uncertainty of the estimated 3D coordinates mainly comes from the uncertainty of image detection, which is reflected by the uncertainty of the pixel coordinates of the detection point, defined as $u_u$ and $u_v$. They can be calculated as:
$$u_u=\sqrt{\frac{\sum_{i=1}^{n}(u_i-\bar{u})^2}{n(n-1)}},\qquad u_v=\sqrt{\frac{\sum_{i=1}^{n}(v_i-\bar{v})^2}{n(n-1)}}$$
where $u_i, v_i$ $(i=1,2,\ldots,n)$ are the observed pixel coordinates and $\bar{u}, \bar{v}$ are the averages of n independent repeated observations of the pixel coordinates $u, v$, respectively.
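A one-function sketch of this type-A uncertainty evaluation (assuming, as the reconstructed formula above does, the standard uncertainty of the mean of n repeated observations):

```python
import numpy as np

def type_a_uncertainty(samples):
    """Standard uncertainty of the mean of repeated observations:
    sqrt(sum((x_i - x_bar)^2) / (n * (n - 1)))."""
    x = np.asarray(samples, float)
    n = x.size
    return np.sqrt(np.sum((x - x.mean()) ** 2) / (n * (n - 1)))

# u_u = type_a_uncertainty(u_observations); u_v = type_a_uncertainty(v_observations)
```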
It is assumed that the pixel coordinates of the detection point coincide with the principal point within the same time t. After time t, under visual fixation the simulated eye joint angles $\theta_i$ $(i=4,5,7,8)$ vary with the pixel coordinates of the detection point in the two cameras. For the left eye, the joint angles $\theta_4, \theta_5$ can be expressed as functions of $u_l, v_l$, the pixel coordinates of the detection point in the left camera, together with the principal point of the left camera and its focal lengths in the column and row directions. [Equation images: the expressions for $\theta_4$ and $\theta_5$; not reproduced in this extraction.] The right eye joint angles $\theta_7, \theta_8$ are expressed similarly to $\theta_4, \theta_5$.
The uncertainty of the bionic eye joint angles $\theta_i$ $(i=4,5,7,8)$ can be calculated by propagating the uncertainty of the detection-point pixel coordinates in the two cameras. The uncertainties of $\theta_4, \theta_5$ can be calculated as:
$$u_{\theta_4}=\sqrt{\left(\frac{\partial\theta_4}{\partial u_l}\right)^2 u_{u_l}^2+\left(\frac{\partial\theta_4}{\partial v_l}\right)^2 u_{v_l}^2},\qquad u_{\theta_5}=\sqrt{\left(\frac{\partial\theta_5}{\partial u_l}\right)^2 u_{u_l}^2+\left(\frac{\partial\theta_5}{\partial v_l}\right)^2 u_{v_l}^2}$$
The uncertainties of the right eye joint angles $\theta_7, \theta_8$ are calculated in the same way as those of $\theta_4, \theta_5$.
The optical axes of the two cameras are calculated using forward kinematics and the head-eye parameters, and the estimated 3D coordinates of the detection point p, defined as $(x_p,y_p,z_p)$, are computed using the visual-fixation-based 3D triangulation algorithm. With the proposed method, the uncertainty of the estimated 3D coordinates of the detection point p can be calculated from the bionic eye joint angles $\theta_i$ $(i=4,5,7,8)$. The uncertainties of $x_p, y_p, z_p$ can be expressed as:
$$u_{x_p}=\sqrt{\sum_{i\in\{4,5,7,8\}}\left(\frac{\partial x_p}{\partial\theta_i}\right)^2 u_{\theta_i}^2},\qquad u_{y_p}=\sqrt{\sum_{i\in\{4,5,7,8\}}\left(\frac{\partial y_p}{\partial\theta_i}\right)^2 u_{\theta_i}^2},\qquad u_{z_p}=\sqrt{\sum_{i\in\{4,5,7,8\}}\left(\frac{\partial z_p}{\partial\theta_i}\right)^2 u_{\theta_i}^2}$$
where the partial derivatives are evaluated at $\bar{\theta}_i$, the average of n independent repeated observations of the $i$th $(i=4,5,7,8)$ joint angle feedback.
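Where the partial derivatives above have no convenient closed form, the propagation can be sketched numerically; the finite-difference helper below is our own illustration, with `triangulate` standing for the joint-angles-to-3D-point mapping of the proposed method (assumed to return a numpy 3-vector):

```python
import numpy as np

def propagate_uncertainty(triangulate, theta, u_theta, h=1e-6):
    """Propagate joint-angle uncertainties u_theta through the mapping
    triangulate: theta -> (x_p, y_p, z_p), using
    u_x = sqrt(sum_i (dx_p/dtheta_i)^2 u_theta_i^2)
    with central-difference derivatives evaluated at theta."""
    theta = np.asarray(theta, float)
    u_theta = np.asarray(u_theta, float)
    J = np.zeros((3, theta.size))
    for i in range(theta.size):
        step = np.zeros_like(theta)
        step[i] = h
        J[:, i] = (triangulate(theta + step) - triangulate(theta - step)) / (2 * h)
    return np.sqrt((J ** 2) @ (u_theta ** 2))  # (u_x, u_y, u_z)
```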
As previously described, a board is placed in 21 different positions. At each position, n=1150. The absolute uncertainty in the X-axis is between 0.009mm and 0.130 mm; the absolute uncertainties on the Y-axis are all between 0.028mm and 0.188 mm; the absolute uncertainty in the Z-axis is between 0.088mm and 4.545 mm.
In the dynamic experiments, the validity of the proposed method is verified on a moving fixed point.
The checkerboard is placed at 1000 mm on the Z axis of the base coordinate system $O_N\text{-}x_Ny_Nz_N$. The trajectories of the fixed point on the X and Y axes with respect to $O_N\text{-}x_Ny_Nz_N$ are $x=r\sin(t/1000)$ and $y=-100+r\cos(t/1000)$, respectively, where $r=100$ mm. The 3D coordinates of the estimated moving fixed point with respect to $O_N\text{-}x_Ny_Nz_N$ are recorded at equal time intervals using the proposed method.
The absolute error between the reference values and the estimation results is calculated: the average absolute error is 1.76 mm on the X axis, 2.03 mm on the Y axis, and 1.69 mm on the Z axis, reaching only 0.17% of the depth reference value. The uncertainty of the proposed method on the Z axis is calculated using the standard deviation. In the dynamic simulation experiment, n = 3780, and the absolute uncertainty of the proposed method on the Z axis is 0.30 mm.
The following describes in detail, with specific examples, physical experiments of a binocular camera-based 3D triangulation method:
for the physical experiment, static experiments and dynamic experiments were performed on design number robotic biomimetic eyes. The image size of the left and right cameras is 640×480 pixels.
The intrinsic parameters of the two cameras, including the focal lengths and principal points, are calibrated using the Bouguet toolbox and shown in the table below.
[Table image: intrinsic parameters of the two physical cameras; not reproduced in this extraction.]
The distortion coefficients of the left and right cameras are [−0.0437, 0.1425, 0.0005, −0.0012, 0.0000] and [−0.0425, 0.1080, 0.0001, −0.0015, 0.0000], respectively. AprilTag information is obtained through the ViSP library. The center of the AprilTag (point p) is chosen as the fixed point. The expected pixel coordinates of the point p in the left and right cameras are set to the principal points (362.94, 222.53) and (388.09, 220.82), respectively. In the constant matrix K, $k_1=3.6$ and $k_2=3.6$. The calibrated head-eye parameters of the left and right cameras are as follows:
[Matrix images: the calibrated head-eye transformation matrices of the left and right cameras; not reproduced in this extraction.]
setting K in a constant matrix K 1 =4.0,k 2 =4.0. In physical experiments, april tag was placed for setting the fixation point in relation to the base coordinate system O N -x N y N z N (-0.00, -145.00, 1500). The robot-simulated eye is set to an initial state. The pixel coordinates of the fixed point p are moved by visual fixation from (385.98, 196.60) to (362.94, 222.53) in the left camera and from (361.09, 191.82) to (388.09, 220.82) in the right camera, taking a total of 650ms. In a physical experiment, 15 μs is required to calculate the 3D coordinates of the fixed point using the method proposed by this patent.
In the static experiments, the AprilTag is placed at 21 different locations to obtain 21 different static fixed points. At each location, the 3D coordinates of the static fixed point estimated using the proposed method are recorded at the same time intervals. The 3D coordinates of the AprilTag center are chosen as the reference values for the static fixed points, as shown in the table below.
[Table images: reference 3D coordinates of the 21 static fixed points; not reproduced in this extraction.]
The absolute error between the reference values and the mean of the estimated 3D coordinates is calculated using the proposed method. The absolute errors in the physical experiments mainly come from the accumulated errors of forward kinematics and the errors of intrinsic parameter calibration, head-eye calibration and joint offset.
Especially when the Z reference value is larger than 2000 mm, the absolute error of the method on the Y axis is between 0.55 mm and 12.28 mm. The minimum absolute error of the method on the Z axis is 1.42 mm, which is 0.23% of the depth reference value; the maximum absolute error on the Z axis is 124.49 mm, reaching 4.97% of the depth reference value.
The AprilTag is placed at 21 different locations. At each position, n = 1200. The absolute uncertainty of the 3D coordinates estimated by the proposed method is between 0.119 mm and 3.905 mm on the X axis, between 0.091 mm and 0.640 mm on the Y axis, and between 0.268 mm and 7.975 mm on the Z axis.
In the dynamic experiments, the AprilTag is moved so that the reference 3D coordinates of the fixed point move from (−278.60, −23.13, 957.84) to (−390.02, −30.39, 1111.91) with respect to the base coordinate system $O_N\text{-}x_Ny_Nz_N$, at an average velocity of 0.01 m/s. The 3D coordinates of the estimated moving fixed point with respect to $O_N\text{-}x_Ny_Nz_N$ are recorded at the same time intervals using the proposed method.
The absolute error between the reference values and the estimation results is calculated using the proposed method: the average absolute error is 11.81 mm on the X axis, 14.89 mm on the Y axis, and 23.74 mm on the Z axis.
It will be apparent to those skilled in the art that the modules or steps of the present application described above may be implemented with a general-purpose computing device; they may be concentrated on a single computing device or distributed across a network of computing devices. Alternatively, they may be implemented in program code executable by computing devices, so that they may be stored in a memory device for execution by the computing devices, or they may be separately fabricated into individual integrated circuit modules, or multiple modules or steps among them may be fabricated into a single integrated circuit module. Thus, the present application is not limited to any specific combination of hardware and software.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the same, but rather, various modifications and variations may be made by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application should be included in the protection scope of the present application.

Claims (7)

1. A binocular camera-based 3D triangulation method, comprising:
establishing a coordinate system of the binocular camera, and acquiring parameters of the coordinate system;
acquiring an end effector speed, a joint speed and a joint Jacobian matrix through visual fixation;
reducing the deviation of actual pixels and expected pixels of the spatial points by the motion of the binocular camera;
acquiring optical axes of a left camera and a right camera to obtain a fixed point p;
the acquiring the optical axes of the left camera and the right camera to obtain the fixed point p includes:
establishing a left eye coordinate system $O_6\text{-}x_6y_6z_6$, a right eye coordinate system $O_9\text{-}x_9y_9z_9$, a left camera coordinate system $O_{c_l}\text{-}x_{c_l}y_{c_l}z_{c_l}$ and a right camera coordinate system $O_{c_r}\text{-}x_{c_r}y_{c_r}z_{c_r}$, wherein $L_l$ and $L_r$ are the optical axes of the left camera and the right camera, respectively;
the transformation matrices of the left camera and right camera coordinate systems with respect to a base coordinate system are, respectively:
$${}^{N}T_{c_l}={}^{N}T_6\,{}^{6}T_{c_l},\qquad {}^{N}T_{c_r}={}^{N}T_9\,{}^{9}T_{c_r}$$
wherein ${}^{6}T_{c_l}$ and ${}^{9}T_{c_r}$ are the head-eye parameters of the left camera and the right camera, respectively, and ${}^{N}T_6$ and ${}^{N}T_9$ are the transformation matrices of the left eye coordinate system and the right eye coordinate system with respect to the base coordinate system, respectively;
when the optical axes of the left camera and the right camera are not on one plane, taking the midpoint of the common perpendicular segment of the two skew optical axes as the fixed point p;
wherein taking the midpoint of the common perpendicular segment of the two skew optical axes as the fixed point p comprises:
$L_l$ and $L_r$ are the ideal optical axes simultaneously passing through the fixed point p, $L'_l$ and $L'_r$ are the actual optical axes, points A and B lie on $L'_l$, and points C and D lie on $L'_r$;
in the initial state of the binocular camera, $L'_l$ and $L'_r$ have the same directions as the $z$ axes of the left camera and right camera coordinate systems, respectively;
the homogeneous coordinates of points A, B, C, D are $(x_a,y_a,z_a,1)$, $(x_b,y_b,z_b,1)$, $(x_c,y_c,z_c,1)$ and $(x_d,y_d,z_d,1)$; the initial homogeneous coordinates of points A, B, C, D are $(0,0,z_a',1)$, $(0,0,z_b',1)$, $(0,0,z_c',1)$ and $(0,0,z_d',1)$, respectively;
then, the homogeneous coordinates of A are:
$$(x_a,y_a,z_a,1)^T={}^{N}T_{c_l}\,(0,0,z_a',1)^T$$
and the homogeneous coordinates of points B, C and D are obtained by the same method;
$X_1=x_b-x_a$, $Y_1=y_b-y_a$, $Z_1=z_b-z_a$, $X_2=x_d-x_c$, $Y_2=y_d-y_c$ and $Z_2=z_d-z_c$;
then, $L'_l$ and $L'_r$ are, respectively:
$$\frac{X-x_a}{X_1}=\frac{Y-y_a}{Y_1}=\frac{Z-z_a}{Z_1},\qquad \frac{X-x_c}{X_2}=\frac{Y-y_c}{Y_2}=\frac{Z-z_c}{Z_2}$$
then, the direction vector of the common perpendicular of $L'_l$ and $L'_r$ is:
$$(X_{cp},Y_{cp},Z_{cp})=(X_1,Y_1,Z_1)\times(X_2,Y_2,Z_2)$$
$$X_{cp}=Y_1Z_2-Z_1Y_2,\quad Y_{cp}=Z_1X_2-X_1Z_2,\quad Z_{cp}=X_1Y_2-Y_1X_2$$
then, the plane defined by $L'_l$ and the common perpendicular is:
$$X_{cp}(X-x_a)+Y_{cp}(Y-y_a)+Z_{cp}(Z-z_a)=0$$
then, the coordinate $p_1=(u_1,v_1)$ is solved, and the coordinate $p_2=(u_2,v_2)$ is obtained by the same method;
by determining the midpoint of the points $p_1$ and $p_2$ in the coordinate system, the fixed point coordinates are determined as:
$$p=\frac{p_1+p_2}{2}$$
2. The binocular camera-based 3D triangulation method of claim 1, wherein the establishing a coordinate system of the binocular camera and obtaining coordinate system parameters comprises:
establishing a coordinate system based on a standard D-H method;
the parameters $d_i, \theta_i, a_i, \alpha_i$ are the link offset, joint angle, link length and link twist, respectively; the joint offset is the value of $\theta_i$ in the initial state;
the transformation matrix between coordinate systems $i$ and $i-1$ is:
$${}^{i-1}T_i=\begin{bmatrix}\cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i\\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i\\ 0 & \sin\alpha_i & \cos\alpha_i & d_i\\ 0 & 0 & 0 & 1\end{bmatrix}$$
3. The binocular camera-based 3D triangulation method of claim 1, wherein the acquiring the end effector speed, joint speed and joint Jacobian matrix through visual fixation comprises:
the left and right cameras are end effectors for the left and right eyes, respectively;
the speed of the end effector is $V_n=[v_x\ v_y\ v_z\ \omega_x\ \omega_y\ \omega_z]^T$, wherein $v_x, v_y, v_z$ are the linear velocity and $\omega_x, \omega_y, \omega_z$ the angular velocity;
the joint velocity is
$$\dot{\theta}=[\dot{\theta}_1\ \dot{\theta}_2\ \cdots\ \dot{\theta}_n]^T$$
the joint Jacobian matrix is
$$J_\theta=[J_1\ J_2\ \cdots\ J_n]$$
and its columns are determined by the formula
$$J_i=\begin{bmatrix}Z_i\times(O_n-O_i)\\ Z_i\end{bmatrix}$$
wherein $Z_i$ and $O_i$ are the direction vector of the $z$ axis and the origin of coordinate system $O_i\text{-}x_iy_iz_i$;
said $J_\theta$ is converted from the base coordinate system to a camera coordinate system by the formula
$$J_{joint}=\begin{bmatrix}R & 0\\ 0 & R\end{bmatrix}J_\theta$$
wherein $J_{joint}$ is the transformed joint Jacobian matrix and $R$ is the rotation matrix of the base coordinate system relative to the camera coordinate system;
then, the relation between said $V_n$ and said $\dot{\theta}$ is
$$V_n=J_{joint}\dot{\theta}$$
the pixel coordinates of the 3D spatial point are $p=(u,v)$;
then, the relation between the pixel velocity $\dot{p}$ and said $V_n$ is
$$\dot{p}=J_{image}V_n$$
wherein $J_{image}$ is an image Jacobian matrix; and the image Jacobian matrix (with $(u,v)$ measured relative to the principal point) is
$$J_{image}=\begin{bmatrix}-\dfrac{f_u}{Z_c} & 0 & \dfrac{u}{Z_c} & \dfrac{uv}{f_v} & -\left(f_u+\dfrac{u^2}{f_u}\right) & \dfrac{f_u}{f_v}v\\[2mm] 0 & -\dfrac{f_v}{Z_c} & \dfrac{v}{Z_c} & f_v+\dfrac{v^2}{f_v} & -\dfrac{uv}{f_u} & -\dfrac{f_v}{f_u}u\end{bmatrix}$$
wherein $f_u$ and $f_v$ are the focal lengths in the column and row directions of the camera, and $Z_c$ is the depth of the spatial point p in the camera coordinate system.
4. The binocular camera-based 3D triangulation method according to claim 1, wherein said reducing the deviation of the actual pixels and the expected pixels of the spatial points by the motion of the binocular camera comprises:
the actual pixel coordinates and the expected pixel coordinates of the spatial point are $p$ and $p^*$, respectively;
then, the deviation of said $p$ and said $p^*$ is $e=p^*-p$;
then, since $p^*$ is constant,
$$\dot{e}=-\dot{p}$$
wherein $\dot{p}$ is the pixel velocity;
the deviation is driven to zero by
$$\dot{e}=-Ke$$
wherein $K$ is a constant matrix affecting visual fixation performance;
said constant matrix is constructed as
$$K=\begin{bmatrix}k_1 & 0\\ 0 & k_2\end{bmatrix}$$
wherein $k_1$ and $k_2$ are the gains in each channel;
then, the spatial point pixel velocity is
$$\dot{p}=Ke$$
then,
$$Ke=J_{image}J_{joint}\dot{\theta}$$
a coordinate system $O_N\text{-}x_Ny_Nz_N$ is established at the joint of the left camera and the right camera, and the left eye joint velocity vector is
$$\dot{\theta}_l=[\dot{\theta}_4\ \dot{\theta}_5]^T$$
then,
$$Ke_l=J_l\dot{\theta}_l$$
wherein $p_l$ and $p_l^*$ are the actual pixel coordinates and the expected pixel coordinates of the spatial point in the left camera, with $e_l=p_l^*-p_l$;
the left eye joint velocity is then
$$\dot{\theta}_l=\frac{1}{J_l^{11}J_l^{22}-J_l^{12}J_l^{21}}\begin{bmatrix}J_l^{22} & -J_l^{12}\\ -J_l^{21} & J_l^{11}\end{bmatrix}Ke_l$$
wherein $J_l^{ij}$ is the element of matrix $J_l$ on the $i$th row and $j$th column;
similarly, the right eye joint velocity is
$$\dot{\theta}_r=J_r^{-1}Ke_r$$
5. A binocular camera-based 3D triangulation apparatus, comprising:
and a coordinate system establishment module: the method comprises the steps of establishing a coordinate system of a binocular camera and acquiring coordinate system parameters;
an information acquisition module: the method comprises the steps of obtaining end effector speed, joint speed and joint jacobian matrix through visual fixation;
a deviation reducing module: for reducing the deviation of the actual pixels and the expected pixels of the spatial points by the motion of the binocular camera;
a fixed point acquisition module: the optical axes of the left camera and the right camera are acquired to obtain the fixed point p;
the acquiring the optical axes of the left camera and the right camera to obtain the fixed point p includes:
establishing a left eye coordinate system $O_6\text{-}x_6y_6z_6$, a right eye coordinate system $O_9\text{-}x_9y_9z_9$, a left camera coordinate system $O_{c_l}\text{-}x_{c_l}y_{c_l}z_{c_l}$ and a right camera coordinate system $O_{c_r}\text{-}x_{c_r}y_{c_r}z_{c_r}$, wherein $L_l$ and $L_r$ are the optical axes of the left camera and the right camera, respectively;
the transformation matrices of the left camera and right camera coordinate systems with respect to a base coordinate system are, respectively:
$${}^{N}T_{c_l}={}^{N}T_6\,{}^{6}T_{c_l},\qquad {}^{N}T_{c_r}={}^{N}T_9\,{}^{9}T_{c_r}$$
wherein ${}^{6}T_{c_l}$ and ${}^{9}T_{c_r}$ are the head-eye parameters of the left camera and the right camera, respectively, and ${}^{N}T_6$ and ${}^{N}T_9$ are the transformation matrices of the left eye coordinate system and the right eye coordinate system with respect to the base coordinate system, respectively;
when the optical axes of the left camera and the right camera are not on one plane, taking the midpoint of the common perpendicular segment of the two skew optical axes as the fixed point p;
wherein taking the midpoint of the common perpendicular segment of the two skew optical axes as the fixed point p comprises:
$L_l$ and $L_r$ are the ideal optical axes simultaneously passing through the fixed point p, $L'_l$ and $L'_r$ are the actual optical axes, points A and B lie on $L'_l$, and points C and D lie on $L'_r$;
in the initial state of the binocular camera, $L'_l$ and $L'_r$ have the same directions as the $z$ axes of the left camera and right camera coordinate systems, respectively;
the homogeneous coordinates of points A, B, C, D are $(x_a,y_a,z_a,1)$, $(x_b,y_b,z_b,1)$, $(x_c,y_c,z_c,1)$ and $(x_d,y_d,z_d,1)$; the initial homogeneous coordinates of points A, B, C, D are $(0,0,z_a',1)$, $(0,0,z_b',1)$, $(0,0,z_c',1)$ and $(0,0,z_d',1)$, respectively;
then, the homogeneous coordinates of A are:
$$(x_a,y_a,z_a,1)^T={}^{N}T_{c_l}\,(0,0,z_a',1)^T$$
and the homogeneous coordinates of points B, C and D can be obtained by the same method;
$X_1=x_b-x_a$, $Y_1=y_b-y_a$, $Z_1=z_b-z_a$, $X_2=x_d-x_c$, $Y_2=y_d-y_c$ and $Z_2=z_d-z_c$;
then, $L'_l$ and $L'_r$ are, respectively:
$$\frac{X-x_a}{X_1}=\frac{Y-y_a}{Y_1}=\frac{Z-z_a}{Z_1},\qquad \frac{X-x_c}{X_2}=\frac{Y-y_c}{Y_2}=\frac{Z-z_c}{Z_2}$$
then, the direction vector of the common perpendicular of $L'_l$ and $L'_r$ is:
$$(X_{cp},Y_{cp},Z_{cp})=(X_1,Y_1,Z_1)\times(X_2,Y_2,Z_2)$$
$$X_{cp}=Y_1Z_2-Z_1Y_2,\quad Y_{cp}=Z_1X_2-X_1Z_2,\quad Z_{cp}=X_1Y_2-Y_1X_2$$
then, the plane defined by $L'_l$ and the common perpendicular is:
$$X_{cp}(X-x_a)+Y_{cp}(Y-y_a)+Z_{cp}(Z-z_a)=0$$
then, the coordinate $p_1=(u_1,v_1)$ is solved, and the coordinate $p_2=(u_2,v_2)$ is obtained by the same method;
by determining the midpoint of the points $p_1$ and $p_2$ in the coordinate system, the fixed point coordinates are determined as:
$$p=\frac{p_1+p_2}{2}$$
6. A computer readable storage medium having stored thereon computer instructions for causing the computer to perform a binocular camera based 3D triangulation method according to any of claims 1-4.
7. An electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores a computer program executable by the at least one processor to cause the at least one processor to perform a binocular camera based 3D triangulation method of any of claims 1-4.
CN202010600958.XA 2020-06-28 2020-06-28 Binocular camera-based 3D triangulation method and device Active CN111784771B (en)

Priority Applications / Applications Claiming Priority (1)

Application Number: CN202010600958.XA — CN111784771B (en): Binocular camera-based 3D triangulation method and device

Publications (2)

Publication Number — Publication Date
CN111784771A (en) — 2020-10-16
CN111784771B (en) — 2023-05-23

Family

ID: 72761650

Family Applications (1): CN202010600958.XA — Active — Binocular camera-based 3D triangulation method and device

Country Status (1): CN — CN111784771B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party

CN107256569A * — priority 2017-06-08, published 2017-10-17 — 爱佩仪中测(成都)精密仪器有限公司 — Three-dimensional measurement double-camera calibrating method based on binocular visual angle
CN109163657A * — priority 2018-06-26, published 2019-01-08 — 浙江大学 — Circular target position and posture detection method based on binocular vision 3D reconstruction
CN109345542A * — priority 2018-09-18, published 2019-02-15 — 重庆大学 — Wearable visual fixation target locating device and method
CN109522935A * — priority 2018-10-22, published 2019-03-26 — 易思维(杭州)科技有限公司 — Method for evaluating the calibration result of a two-CCD-camera measurement system
CN110751691A * — priority 2019-09-24, published 2020-02-04 — 同济大学 — Automatic pipe fitting grabbing method based on binocular vision

Family Cites Families (1)

CN107747941B * — priority 2017-09-29, published 2020-05-15 — 歌尔股份有限公司 — Binocular vision positioning method, device and system

Also Published As

Publication number Publication date
CN111784771A (en) 2020-10-16


Legal Events

PB01 — Publication
SE01 — Entry into force of request for substantive examination
CB03 — Change of inventor or designer information
Inventors after change: Chen Xiaopeng, Huang Qiang, Wang Qihang, Xu De, Huang Hua, Fan Di, Gao Junyao, Yu Zhangguo, Chen Xuechao
Inventors before change: Chen Xiaopeng, Huang Qiang, Wang Qihang, Xu De, Fan Di, Gao Junyao, Yu Zhangguo, Chen Xuechao
GR01 — Patent grant