CN117197241A - Robot end-effector absolute pose high-precision tracking method based on multi-view vision - Google Patents

Robot end-effector absolute pose high-precision tracking method based on multi-view vision

Info

Publication number: CN117197241A
Application number: CN202311189123.XA
Authority: CN (China)
Prior art keywords: pose, target, robot, precision, point
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventors: 钟芳宠, 吴海涛, 张洪辉
Current and original assignee: Shanghai Platform For Smart Manufacturing Co Ltd (the listed assignee may be inaccurate)
Priority date: 2023-09-14
Filing date: 2023-09-14
Publication date: 2023-12-08

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a high-precision method for tracking the absolute pose of a robot end-effector based on multi-view vision, comprising the following steps: acquiring low-precision pose information of the robot; calibrating the multi-camera system to obtain calibration parameters; self-calibrating the artificial targets based on the low-precision pose information to obtain the relative positional relationship between target points; and performing pose tracking based on the low-precision pose information, the calibration parameters, and the relative positions to acquire the target pose of the robot, thereby completing high-precision tracking of the absolute pose of the robot end-effector based on multi-view vision. The application provides an automatic target-group calibration method that conveniently acquires a relatively high-precision positional relationship between target groups and provides a reliable initial value for the subsequent tracking process. By mounting measuring equipment at the robot end-effector, the method can compensate the end-effector pose and realize higher-precision feature measurement in an absolute coordinate system.

Description

Robot end-effector absolute pose high-precision tracking method based on multi-view vision
Technical Field
The application belongs to the field of visual measurement and in particular relates to a high-precision method for tracking the absolute pose of a robot end-effector based on multi-view vision.
Background
When visual measurement is used to inspect automobile, ship, and aircraft parts, a global visual measurement system must be established whenever the pose relations among local elements, or the deviation between the measured object and the design dimensions, are involved. The common solution is to arrange a high-precision pose tracking system outside the mobile measuring-head system, separating the local measurement task from the global one so that the external pose tracking system guarantees global measurement accuracy.
Most existing external pose tracking systems consist of a binocular vision system: the structure is simple, and triangulation meets the basic requirement of acquiring depth information. However, a binocular system can give only one measurement result for a given world point and is very sensitive to measurement noise, so its accuracy leaves room for improvement; it also suffers from few common-view points and poor matching robustness. Current research on multi-view pose tracking, meanwhile, generally performs three-dimensional reconstruction of the identification points with a weighted least-squares method; its main problem is that the weight-distribution strategies remain simplistic and are not derived from a scientific quantitative model.
In production and processing, measurement accuracy requirements are high, the field environment is complex, and interference factors are numerous, so the existing binocular pose tracking approach performs poorly. The application therefore provides a high-precision method for tracking the absolute pose of a robot end-effector based on multi-view vision, aiming to overcome these shortcomings of the prior art.
Disclosure of Invention
The application aims to provide a high-precision method for tracking the absolute pose of a robot end-effector based on multi-view vision, so as to solve the problem that the existing binocular pose tracking approach performs poorly.
To achieve the above purpose, the application provides a high-precision method for tracking the absolute pose of a robot end-effector based on multi-view vision, comprising the following steps:
acquiring low-precision pose information of the robot;
calibrating the multi-camera system to obtain calibration parameters;
self-calibrating the artificial targets based on the low-precision pose information to obtain the relative positional relationship between target points;
and performing pose tracking based on the low-precision pose information, the calibration parameters, and the relative positions to acquire the target pose of the robot, completing high-precision tracking of the absolute pose of the robot end-effector based on multi-view vision.
Optionally, calibrating the multi-camera system, and obtaining calibration parameters includes:
calibrating the multi-camera system with the Zhang Zhengyou calibration method to obtain initial calibration parameters;
optimizing the initial calibration parameters with the nonlinear Levenberg-Marquardt (LM) algorithm, taking the minimized reconstruction error of each calibration point as the optimization target, to obtain the calibration parameters;
the calibration parameters include internal parameters, external parameters and distortion parameters.
Optionally, self-calibrating the artificial target based on the low-precision pose information and acquiring the relative positional relationship between the target points includes:
S1, acquiring the initial state of the artificial target, shooting the artificial target with the multi-camera system, determining the sub-pixel coordinates of each ellipse center with the Zernike moment method, and obtaining the three-dimensional information of each target point by three-dimensional reconstruction with a binocular system, wherein a binocular system is formed by two cameras;
S2, based on the low-precision pose information and the three-dimensional information of each target point, performing a nearest-neighbor search with a KD-Tree to determine the index number of the current target point;
S3, the robot drives the artificial target to rotate and move, the multi-camera system captures the rigidly transformed artificial target, and the three-dimensional reconstruction process of S1 and S2 is repeated to obtain the index number and depth information of each target point;
S4, repeating step S3 until every artificial target has been shot several times;
S5, taking the point cloud of the initial image as the reference, calculating the pose transformation between consecutive frames of point clouds and multiplying these transformations cumulatively to obtain a rotation matrix and translation vector from each image's point cloud to the reference point cloud;
S6, based on the rotation matrices and translation vectors, forming the optimization target of the nonlinear Levenberg-Marquardt (LM) algorithm, thereby obtaining the transformation from the point cloud in each image to the reference point cloud;
S7, unifying the partial point cloud of each image onto the reference point cloud using these transformations, and acquiring the relative positional relationship between the target points.
Optionally, shooting the artificial target with the multi-camera system includes:
while the robot drives the artificial target along a preset trajectory, the multi-camera system shoots the artificial target.
Optionally, obtaining the three-dimensional information of each target point by three-dimensional reconstruction with a binocular system includes:
constructing a measurement error model;
and carrying out three-dimensional reconstruction according to the measurement error model to obtain the three-dimensional information of each target point.
Optionally, the measurement error model includes the X-direction, Y-direction, and Z-direction measurement error probabilities;
the X-direction measurement error probability is given for the two cases D ≥ v and D < v, wherein v is the vertical image coordinate of the point on the left camera;
the Y-direction measurement error probability is given for the two cases 0 ≤ h < D and h > D, wherein h = J_r and R = f/dv;
the Z-direction measurement error probability is a function of τ_z, the given error limit, and D, the parallax.
Optionally, performing three-dimensional reconstruction according to the measurement error model to obtain the three-dimensional information of each target point includes:
setting an error limit and determining the weight of each binocular system's three-dimensional reconstruction based on the measurement error model;
normalizing the weights of the binocular systems' reconstructions and obtaining the three-dimensional information of each target point by weighted least squares.
Optionally, the optimization target of the nonlinear Levenberg-Marquardt (LM) algorithm is:

E = Σ_{m=1}^{K} Σ_{n=m+1}^{K} Σ_{i=1}^{N_mn} ‖(R_m·P_mi + t_m) − (R_n·P_ni + t_n)‖²

wherein E is the reprojection error; m and n are point-cloud frame indices and K is the number of point-cloud frames; N_mn is the number of common-view points between frames m and n; R_m, R_n and t_m, t_n are the rotation matrices and translation vectors converting the m-th and n-th frame point clouds into the reference coordinate system; P_mi is the three-dimensional coordinate of the i-th point in the m-th frame point cloud, and P_ni that of the i-th point in the n-th frame point cloud.
Optionally, performing pose tracking based on the low-precision pose information, the calibration parameters, and the relative positions to acquire the target pose of the robot includes:
determining the index number corresponding to each circular light spot based on the low-precision pose information and the relative positional relationship between targets;
performing three-dimensional reconstruction with the multi-camera system and tracking the robot's pose using the index number of each circular light spot, to obtain the three-dimensional coordinates of part of the target points;
optimizing the three-dimensional coordinates of the partial target points to obtain their optimized three-dimensional coordinates;
and acquiring the target pose of the robot from the optimized three-dimensional coordinates of the partial target points and the relative positional relationship between targets.
Optionally, optimizing the three-dimensional coordinates of the partial target points to obtain their optimized three-dimensional coordinates includes:
optimizing the rotation matrix R and translation vector t from the current pose to the artificial-target measurement pose with the LM algorithm, the specific optimization target being:

E = Σ_{i=1}^{N} ‖(R·P_i^0 + t) − P_i‖²

wherein E is the reprojection error, N is the total number of feature points successfully reconstructed in three dimensions under the current pose, the rotation matrix R and translation vector t are the output absolute pose, P_i is the three-dimensional coordinate of the i-th point, and P_i^0 is its initial three-dimensional coordinate.
The application has the following beneficial effects:
(1) By relying on reasonable weighting, the method effectively exploits the redundant information of multiple views to improve the accuracy of three-dimensional reconstruction: accuracy improves by 11% over the latest centroid-based multi-view three-dimensional reconstruction algorithm and by 4.5% over the least-squares method common in industry;
(2) The application provides an automatic target-group calibration method that conveniently acquires a relatively high-precision positional relationship between target groups and provides a reliable initial value for the subsequent tracking process; the pose tracking method is suitable for measuring the 6-degree-of-freedom absolute pose of a moving object in a large scene;
(3) By mounting measuring equipment at the robot end-effector, the method can compensate the end-effector pose and realize higher-precision feature measurement in an absolute coordinate system.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application. In the drawings:
fig. 1 is a flow chart of the high-precision method for tracking the absolute pose of a robot end-effector based on multi-view vision according to an embodiment of the application;
fig. 2 is a graph of the probability model of the measurement error in the camera xz-plane according to an embodiment of the present application.
Detailed Description
It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other. The application will be described in detail below with reference to the drawings in connection with embodiments.
It should be noted that the steps illustrated in the flowcharts of the figures may be executed in a computer system, such as a set of computer-executable instructions, and that although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in a different order.
As shown in fig. 1, this embodiment provides a high-precision method for tracking the absolute pose of a robot end-effector based on multi-view vision, comprising the following steps:
The multi-camera system is calibrated with infrared calibration technology, covering internal parameters, external parameters, and distortion parameters. Concretely, the Zhang Zhengyou calibration method yields initial values of the calibration parameters, after which the nonlinear Levenberg-Marquardt (LM) algorithm, taking the minimized reconstruction error of each calibration point as the optimization target, further refines the calibration parameters of the multi-camera system. After calibration, the robot drives the artificial target along a specific trajectory, during which the multi-camera system completes the self-calibration of the artificial target, i.e., obtains the relative positional relationship between target points. To further reduce the impact of accumulated errors on the self-calibration process, a global ICP algorithm may be used for optimization. During pose tracking, every two cameras form a binocular system; three-dimensional reconstruction of the target points by weighted least squares yields the three-dimensional coordinates of part of the target points, and registering this partial point cloud against the earlier self-calibration result of the artificial target yields the current absolute pose of the robot.
The artificial-target self-calibration process with the global ICP optimization algorithm is as follows:
First, the initial state is selected as the reference state. The artificial target is shot with the multi-camera system, the sub-pixel coordinates of each ellipse center are determined with the Zernike moment method, and the three-dimensional information of each target point is then obtained by three-dimensional reconstruction with a binocular system. On this basis, nonlinear optimization taking the minimized pairwise reconstruction error of the target points as its objective improves the three-dimensional reconstruction accuracy. Meanwhile, the current low-precision pose of the robot is input and a KD-Tree nearest-neighbor search is performed, which determines the index number of the current target point so that each reconstructed point corresponds to the correct physical target point.
Sub-pixel edge detection based on Zernike moments is an image-processing method that extracts sub-pixel position information of object edges using Zernike moments. Its basic steps are:
(1) Image preprocessing: preprocess the input image (e.g., smoothing and noise removal) to ensure a high-quality edge signal;
(2) Edge detection: apply a suitable edge-detection algorithm (e.g., Canny) to the image to generate a binarized edge image;
(3) Edge-segment extraction: select the edge region or target object of interest as needed, obtaining edge segments through image segmentation, ROI extraction, or similar techniques;
(4) Zernike moment calculation: compute the Zernike moments of the edge segments from the Zernike polynomial generating functions. Zernike moments are invariant to rotation, scale, and translation when describing edge shape;
(5) Sub-pixel fitting: based on the properties of the Zernike moments, compute the sub-pixel position of the edge through interpolation and fitting. Common fitting methods include least squares and maximum-likelihood estimation;
(6) Edge-position display: with the sub-pixel location information, more accurate edge positions can be marked on the original image for visual presentation or subsequent analysis.
It should be noted that the Zernike-moment sub-pixel edge detection method may be sensitive to noise in some cases and has limitations for complex edge shapes. In practice, factors such as algorithm complexity, data quality, and processing efficiency must be traded off, with adjustments made for the particular application. A minimal numerical sketch of steps (4)-(5) follows.
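The sketch below integrates the Zernike moments Z11 and Z20 directly over a unit disk laid on an odd-sized patch and applies the classic step-edge relations (edge-normal angle from Z11, edge distance l = Z20/Z11'); the sampling, normalization, and image-axis conventions are simplified assumptions rather than the patent's exact procedure.

```python
import numpy as np

def zernike_edge_point(patch):
    """Sub-pixel edge location in a square patch from Zernike moments,
    following the step-edge model: phi from Z11, distance l = Z20 / Z11'.
    Returns (x, y) in patch pixel coordinates."""
    n = patch.shape[0]
    ax = np.linspace(-1.0, 1.0, n)
    x, y = np.meshgrid(ax, ax)
    r = np.hypot(x, y)
    f = patch.astype(np.float64) * (r <= 1.0)   # restrict to the unit disk
    dA = (2.0 / n) ** 2                         # area element on the disk
    # Z_nm = (n+1)/pi * sum f * conj(V_nm); V11 = x + jy, V20 = 2r^2 - 1
    z11 = (2.0 / np.pi) * np.sum(f * (x - 1j * y)) * dA
    z20 = (3.0 / np.pi) * np.sum(f * (2.0 * r**2 - 1.0)) * dA
    phi = np.arctan2(z11.imag, z11.real)        # edge normal direction
    z11p = (z11 * np.exp(-1j * phi)).real       # rotate normal onto +x axis
    l = z20 / z11p                              # signed distance from centre
    cx = (n - 1) / 2.0 + (n / 2.0) * l * np.cos(phi)
    cy = (n - 1) / 2.0 + (n / 2.0) * l * np.sin(phi)
    return cx, cy
```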
Second, the robot drives the artificial target through small-angle rotations and translations, the rigidly transformed artificial target is shot with the multi-camera system, and the three-dimensional reconstruction process of the first step is repeated, giving the index number and high-precision three-dimensional information of each target point under the different poses of the artificial target.
Third, the second step is repeated until every artificial target point has been shot several times.
Fourth, taking the point cloud of the first image as the reference, the pose transformation between consecutive frames of point clouds is computed, and cumulative multiplication gives the rotation matrix R and translation vector t from each image's point cloud to the reference point cloud. The R and t obtained in this step carry accumulated error and therefore must be reduced by the subsequent global optimization; a sketch of the frame-to-frame alignment and chaining follows.
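The sketch below assumes the point clouds have already been put into correspondence via the index numbers; the SVD-based closed form (Kabsch) is a standard choice for the pairwise alignment, though the patent does not prescribe one.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid motion with dst ≈ R @ src + t for corresponding
    (N,3) point arrays, via the SVD (Kabsch) closed form."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def chain_to_reference(frames):
    """Accumulate frame-to-frame motions into frame-to-reference ones.

    frames: list of (N,3) clouds with consistent point ordering; frame 0
    is the reference. Returns R_list, t_list with
    p_ref ≈ R_list[k] @ p_k + t_list[k]. The accumulated drift is removed
    later by the global optimization."""
    R_acc, t_acc = np.eye(3), np.zeros(3)
    R_list, t_list = [np.eye(3)], [np.zeros(3)]
    for k in range(1, len(frames)):
        R_k, t_k = rigid_transform(frames[k], frames[k - 1])
        R_acc, t_acc = R_acc @ R_k, R_acc @ t_k + t_acc
        R_list.append(R_acc)
        t_list.append(t_acc)
    return R_list, t_list
```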
Fifth, the optimization target of the nonlinear Levenberg-Marquardt (LM) algorithm is written as:

E = Σ_{m=1}^{K} Σ_{n=m+1}^{K} Σ_{i=1}^{N_mn} ‖(R_m·P_mi + t_m) − (R_n·P_ni + t_n)‖²

wherein E is the reprojection error; m and n are point-cloud frame indices and K is the number of point-cloud frames; N_mn is the number of common-view points between frames m and n; R_m, R_n and t_m, t_n are the rotation matrices and translation vectors converting the m-th and n-th frame point clouds into the reference coordinate system; P_mi is the three-dimensional coordinate of the i-th point in the m-th frame point cloud, and P_ni that of the i-th point in the n-th frame point cloud.
The optimization finally yields the transformation from the point cloud in each image to the reference point cloud, namely R_i and t_i (i = 1 … K), so that the partial point cloud of each image can be unified onto the reference point cloud, the high-precision relative positions of all target points under the same reference are obtained, and the self-calibration of the artificial target is completed. A sketch of this joint refinement follows.
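A joint refinement in the spirit of this objective could be sketched with SciPy's Levenberg-Marquardt solver as below. The rotation-vector parameterization and the data layout (per-frame clouds plus their target index numbers) are assumptions; frame 0 is held fixed as the reference.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def global_refine(frames, indices, R_init, t_init):
    """Minimize the sum over frame pairs (m, n) and common-view points i of
    ||(R_m P_mi + t_m) - (R_n P_ni + t_n)||^2, with frame 0 fixed."""
    K = len(frames)

    def unpack(x):
        Rs, ts = [np.eye(3)], [np.zeros(3)]
        for k in range(K - 1):
            Rs.append(Rotation.from_rotvec(x[6*k:6*k+3]).as_matrix())
            ts.append(x[6*k+3:6*k+6])
        return Rs, ts

    def residuals(x):
        Rs, ts = unpack(x)
        res = []
        for m in range(K):
            # points of frame m in reference coordinates, keyed by index
            pm = {i: Rs[m] @ p + ts[m]
                  for i, p in zip(indices[m], frames[m])}
            for n in range(m + 1, K):
                for i, p in zip(indices[n], frames[n]):
                    if i in pm:                      # common-view point
                        res.append(pm[i] - (Rs[n] @ p + ts[n]))
        return np.concatenate(res)

    x0 = np.concatenate(
        [np.r_[Rotation.from_matrix(R_init[k]).as_rotvec(), t_init[k]]
         for k in range(1, K)])
    sol = least_squares(residuals, x0, method='lm')  # Levenberg-Marquardt
    return unpack(sol.x)
```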
Flow of the high-precision multi-view three-dimensional reconstruction algorithm for target points:
The first part builds a mathematical model of the coordinate measurement error.
As shown in fig. 2, P1-P8 are the intersections of several rays. Specifically, the rays through P1 and P2 are emitted from the right boundary of the left pixel through the optical center of the left camera, and the rays through P3 and P4 from the left boundary of the left pixel through the optical center of the left camera; the line through P5 and P6 is the line connecting the measured point and the left optical center under the current measurement uncertainty n_l. Similarly, the rays through P1 and P4 are emitted from the right boundary of the right pixel through the right optical center, and those through P2 and P3 from the left boundary of the right pixel through the right optical center. Considering the influence of camera pixel discretization on the measurement result, the quadrilateral region enclosed by P1, P2, P3, and P4 corresponds to the matched left and right pixels, so matched pixels in the left and right cameras correspond not to a single point in space but to a region. Intuitively, the longer the segment P5P6, the greater the probability that the measured point lies on the line through P5 and P6. The length of segment P5P6 is written as an expression in the left-camera measurement uncertainty n_l:
wherein δ is the baseline distance, J_l is the column coordinate of the left pixel, J_r the column coordinate of the right pixel, f the camera focal length, and d the parallax.
For the right camera's measurement uncertainty, taking an infinitesimal δn_r, the length of segment P7P8 is obtained as an expression in δn_r:
Normalizing the area of the quadrilateral P1P2P3P4 and the length of segment P5P6 then gives the probability density distribution of the left-image uncertainty N_l with respect to n_l, and the conditional probability density of the right-image uncertainty N_r with respect to n_l and n_r.
Since n_l and n_r are uncertain measurement quantities, the probability densities are converted, via a given threshold and integration, into the probability that the measurement error lies within the given threshold. The Z-direction measurement error probability distribution is:
wherein ε_z denotes the actual error in the Z direction, τ_z is the given error limit in the Z direction, and D is the parallax. The Z-direction measurement error probability is related only to the parallax: the larger the parallax, the smaller the error probability, which accords with the facts. The measurement error probabilities in the X and Y directions can be given similarly.
The X-direction measurement error probability is given for the two cases D ≥ v and D < v, wherein τ_x is the given error limit in the X direction, ε_x denotes the actual error in the X direction, and v is the vertical image coordinate of the point on the left camera.
The Y-direction measurement error probability is given for the two cases 0 ≤ h < D and h > D, wherein h = J_r, R = f/dv, τ_y is the given error limit in the Y direction, and ε_y denotes the actual error in the Y direction.
It can be seen that the Y-direction measurement error is related not only to the parallax but also to how far the pixel coordinates deviate from the principal point: the larger the deviation from the principal point, i.e., the closer to the image edge, the lower the probability that the measured point stays within the error limit. This also accords with measurement practice.
The second part performs weighted least-squares three-dimensional reconstruction from the measurement error model.
Images of the robot end-effector, to which the artificial target is fixed, are shot with the multi-camera system. Every two cameras are combined into a binocular system for three-dimensional reconstruction; 0.05 mm is selected as the given error limit (the given absolute error limit can be changed according to the field of view of the application scene); the weight of each binocular system's reconstruction is determined from the measurement error model of the first part; and weight normalization followed by weighted least squares yields the three-dimensional information of a single target point. A sketch of this fusion follows.
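The sketch below covers the per-pair triangulation and the weighted fusion. The per-pair weights are treated as inputs here, standing in for the error-probability model above, and the linear DLT triangulation is a common choice that the patent text does not spell out.

```python
import numpy as np

def triangulate_pair(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation for one binocular pair.
    P1, P2: 3x4 projection matrices; uv1, uv2: undistorted pixel coords."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                      # dehomogenize

def fuse_pairs(pair_points, pair_weights):
    """Fuse the per-pair reconstructions of one target point.
    pair_points: (P,3) estimates; pair_weights: (P,) weights from the
    measurement error model. Weights are normalized, then the estimates
    are combined by weighted least squares (the weighted mean)."""
    w = np.asarray(pair_weights, dtype=np.float64)
    w = w / w.sum()                          # weight normalization
    pts = np.asarray(pair_points, dtype=np.float64)
    return (w[:, None] * pts).sum(axis=0)
```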
The rotation matrix R and translation vector t from the current pose to the artificial-target measurement pose are then optimized with the LM algorithm; the specific optimization target is:

E = Σ_{i=1}^{N} ‖(R·P_i^0 + t) − P_i‖²

wherein P_i is the three-dimensional coordinate of the i-th point, P_i^0 is its initial three-dimensional coordinate, N is the total number of feature points successfully reconstructed in three dimensions under the current pose, and the rotation matrix R and translation vector t are the output absolute pose. A sketch of this refinement follows.
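This final refinement might look like the sketch below, using SciPy's Levenberg-Marquardt solver; the rotation-vector parameterization and initial pose are illustrative assumptions. (The same objective also admits the closed-form SVD solution used earlier, but LM matches the text.)

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def refine_pose(P0, P, R_init=np.eye(3), t_init=np.zeros(3)):
    """LM refinement of the absolute pose (R, t) mapping the self-
    calibrated target coordinates P0 (N,3) onto the currently
    reconstructed points P (N,3), i.e. minimizing
    E = sum_i ||R P0_i + t - P_i||^2."""
    def residuals(x):
        R = Rotation.from_rotvec(x[:3]).as_matrix()
        return ((P0 @ R.T + x[3:]) - P).ravel()   # per-point 3D residuals

    x0 = np.r_[Rotation.from_matrix(R_init).as_rotvec(), t_init]
    sol = least_squares(residuals, x0, method='lm')
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```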
The present application is not limited to the above embodiments; any change or substitution that can readily be conceived by those skilled in the art within the technical scope disclosed herein is intended to fall within the scope of the application. The protection scope of the present application is therefore defined by the claims.

Claims (10)

1. A high-precision method for tracking the absolute pose of a robot end-effector based on multi-view vision, characterized by comprising the following steps:
acquiring low-precision pose information of the robot;
calibrating the multi-camera system to obtain calibration parameters;
self-calibrating the artificial targets based on the low-precision pose information to obtain the relative positional relationship between target points;
and performing pose tracking based on the low-precision pose information, the calibration parameters, and the relative positions to acquire the target pose of the robot, completing high-precision tracking of the absolute pose of the robot end-effector based on multi-view vision.
2. The multi-view-vision-based high-precision method for tracking the absolute pose of a robot end-effector according to claim 1, wherein calibrating the multi-camera system comprises:
calibrating the multi-camera system with the Zhang Zhengyou calibration method to obtain initial calibration parameters;
optimizing the initial calibration parameters with the nonlinear Levenberg-Marquardt (LM) algorithm, taking the minimized reconstruction error of each calibration point as the optimization target, to obtain the calibration parameters;
the calibration parameters include internal parameters, external parameters and distortion parameters.
3. The multi-view-vision-based high-precision method for tracking the absolute pose of a robot end-effector according to claim 1, wherein self-calibrating an artificial target based on the low-precision pose information and obtaining the relative positional relationship between the target points comprises:
S1, acquiring the initial state of the artificial target, shooting the artificial target with the multi-camera system, determining the sub-pixel coordinates of each ellipse center with the Zernike moment method, and obtaining the three-dimensional information of each target point by three-dimensional reconstruction with a binocular system, wherein a binocular system is formed by two cameras;
S2, based on the low-precision pose information and the three-dimensional information of each target point, performing a nearest-neighbor search with a KD-Tree to determine the index number of the current target point;
S3, the robot drives the artificial target to rotate and move, the multi-camera system captures the rigidly transformed artificial target, and the three-dimensional reconstruction process of S1 and S2 is repeated to obtain the index number and depth information of each target point;
S4, repeating step S3 until every artificial target has been shot several times;
S5, taking the point cloud of the initial image as the reference, calculating the pose transformation between consecutive frames of point clouds and multiplying these transformations cumulatively to obtain a rotation matrix and translation vector from each image's point cloud to the reference point cloud;
S6, based on the rotation matrices and translation vectors, forming the optimization target of the nonlinear Levenberg-Marquardt (LM) algorithm, thereby obtaining the transformation from the point cloud in each image to the reference point cloud;
S7, unifying the partial point cloud of each image onto the reference point cloud using these transformations, and acquiring the relative positional relationship between the target points.
4. The multi-view-vision-based high-precision method for tracking the absolute pose of a robot end-effector according to claim 3, wherein shooting the artificial target with the multi-camera system comprises:
while the robot drives the artificial target along a preset trajectory, the multi-camera system shoots the artificial target.
5. The multi-view-vision-based high-precision method for tracking the absolute pose of a robot end-effector according to claim 3, wherein obtaining the three-dimensional information of each target point by three-dimensional reconstruction with a binocular system comprises:
constructing a measurement error model;
and carrying out three-dimensional reconstruction according to the measurement error model to obtain the three-dimensional information of each target point.
6. The multi-view-vision-based high-precision method for tracking the absolute pose of a robot end-effector according to claim 5, wherein the measurement error model comprises the X-direction, Y-direction, and Z-direction measurement error probabilities;
the X-direction measurement error probability is given for the two cases D ≥ v and D < v, wherein v is the vertical image coordinate of the point on the left camera;
the Y-direction measurement error probability is given for the two cases 0 ≤ h < D and h > D, wherein h = J_r and R = f/dv;
the Z-direction measurement error probability is a function of τ_z, the given error limit, and D, the parallax.
7. The multi-view-vision-based high-precision method for tracking the absolute pose of a robot end-effector according to claim 5, wherein performing three-dimensional reconstruction according to the measurement error model to obtain the three-dimensional information of each target point comprises:
setting an error limit and determining the weight of each binocular system's three-dimensional reconstruction based on the measurement error model;
normalizing the weights of the binocular systems' reconstructions and obtaining the three-dimensional information of each target point by weighted least squares.
8. The multi-view-vision-based high-precision method for tracking the absolute pose of a robot end-effector according to claim 3, wherein the optimization target of the nonlinear Levenberg-Marquardt (LM) algorithm is:

E = Σ_{m=1}^{K} Σ_{n=m+1}^{K} Σ_{i=1}^{N_mn} ‖(R_m·P_mi + t_m) − (R_n·P_ni + t_n)‖²

wherein E is the reprojection error; m and n are point-cloud frame indices and K is the number of point-cloud frames; N_mn is the number of common-view points between frames m and n; R_m, R_n and t_m, t_n are the rotation matrices and translation vectors converting the m-th and n-th frame point clouds into the reference coordinate system; P_mi is the three-dimensional coordinate of the i-th point in the m-th frame point cloud, and P_ni that of the i-th point in the n-th frame point cloud.
9. The multi-view-vision-based high-precision method for tracking the absolute pose of a robot end-effector according to claim 1, wherein performing pose tracking based on the low-precision pose information, the calibration parameters, and the relative positions to acquire the target pose of the robot comprises:
determining the index number corresponding to each circular light spot based on the low-precision pose information and the relative positional relationship between targets;
performing three-dimensional reconstruction with the multi-camera system and tracking the robot's pose using the index number of each circular light spot, to obtain the three-dimensional coordinates of part of the target points;
optimizing the three-dimensional coordinates of the partial target points to obtain their optimized three-dimensional coordinates;
and acquiring the target pose of the robot from the optimized three-dimensional coordinates of the partial target points and the relative positional relationship between targets.
10. The multi-view-vision-based high-precision method for tracking the absolute pose of a robot end-effector according to claim 9, wherein optimizing the three-dimensional coordinates of the partial target points to obtain their optimized three-dimensional coordinates comprises:
optimizing the rotation matrix R and translation vector t from the current pose to the artificial-target measurement pose with the LM algorithm, the specific optimization target being:

E = Σ_{i=1}^{N} ‖(R·P_i^0 + t) − P_i‖²

wherein E is the reprojection error, N is the total number of feature points successfully reconstructed in three dimensions under the current pose, the rotation matrix R and translation vector t are the output absolute pose, P_i is the three-dimensional coordinate of the i-th point, and P_i^0 is its initial three-dimensional coordinate.
CN202311189123.XA 2023-09-14 2023-09-14 Robot end-effector absolute pose high-precision tracking method based on multi-view vision Pending CN117197241A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311189123.XA 2023-09-14 2023-09-14 Robot end-effector absolute pose high-precision tracking method based on multi-view vision

Publications (1)

Publication Number Publication Date
CN117197241A true CN117197241A (en) 2023-12-08

Family

ID=89001322

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311189123.XA Pending CN117197241A (en) 2023-09-14 2023-09-14 Robot tail end absolute pose high-precision tracking method based on multi-eye vision

Country Status (1)

Country Link
CN (1) CN117197241A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111220126A (en) * 2019-11-19 2020-06-02 中国科学院光电技术研究所 Space object pose measurement method based on point features and monocular camera
CN111415391A (en) * 2020-02-28 2020-07-14 中国民航大学 Multi-view camera external orientation parameter calibration method adopting inter-shooting method
CN111768448A (en) * 2019-03-30 2020-10-13 北京伟景智能科技有限公司 Spatial coordinate system calibration method based on multi-camera detection
WO2021063127A1 (en) * 2019-09-30 2021-04-08 深圳市瑞立视多媒体科技有限公司 Pose positioning method and related equipment of active rigid body in multi-camera environment
CN112907679A (en) * 2021-01-28 2021-06-04 烟台大学 Robot repeated positioning precision measuring method based on vision

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
XU, Yafan et al., "Error Analysis of Calibration Parameters Estimation for Binocular Stereo Vision System", 2013 IEEE International Conference on Imaging Systems and Techniques (IST 2013), 1 January 2013
吴贤权 et al., "Pose measurement of a directional antenna with multi-view vision" (多目视觉定向天线位姿测量), Automation & Instrumentation (自动化与仪器仪表), 25 May 2019, pages 1-6
葛庆如 et al., "Research on a global calibration method for multi-view line-structured-light measurement systems based on auxiliary cameras" (基于辅助相机的多目视觉线结构光测量系统全局标定方法研究), Scientia Sinica Technologica (中国科学:技术科学), 20 August 2022

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination