CN113223050B - Robot motion track real-time acquisition method based on Aruco code - Google Patents

Robot motion track real-time acquisition method based on Aruco code

Info

Publication number
CN113223050B
CN113223050B (application CN202110515741.3A)
Authority
CN
China
Prior art keywords
robot
camera
coordinate system
aruco code
aruco
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110515741.3A
Other languages
Chinese (zh)
Other versions
CN113223050A (en)
Inventor
何伟
李月华
朱世强
谢天
李小倩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Lab
Original Assignee
Zhejiang Lab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Lab filed Critical Zhejiang Lab
Priority to CN202110515741.3A priority Critical patent/CN113223050B/en
Publication of CN113223050A publication Critical patent/CN113223050A/en
Application granted granted Critical
Publication of CN113223050B publication Critical patent/CN113223050B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/248: Analysis of motion using feature-based methods involving reference images or patches
    • G06T 7/60: Analysis of geometric attributes
    • G06T 7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/74: Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30241: Trajectory

Abstract

The invention discloses a real-time robot motion trajectory acquisition method based on ArUco codes, comprising the following steps: determine the camera model and installation height according to the constraints of the experimental site, the height of the robot, and the required trajectory precision, calculate the measurable range of the cameras, and install the cameras and the ArUco code marker plate; calibrate the intrinsics of each camera and the extrinsics between the ArUco code marker plate and the robot coordinate system; acquire image data of the robot's motion in real time with the cameras and compute the robot's 6D pose from the image data in real time; evaluate the visible area of the ArUco code marker plate in the image data, compute the difference between the 6D poses of two consecutive frames, and assess the 6D pose results from the visible area and the difference; solve the extrinsics between adjacent cameras, compute the transformation matrix between them, transform the 6D poses computed by all cameras into a world coordinate system, and output them as a motion trajectory in the TUM dataset format.

Description

Robot motion track real-time acquisition method based on ArUco code
Technical Field
The invention relates to the technical field of robot motion trajectory acquisition, and in particular to a real-time robot motion trajectory acquisition method based on ArUco codes.
Background
In research on robot-related technologies, especially in the field of robot vision represented by SLAM, ground-truth robot motion trajectories are essential for evaluating the performance of related research. However, existing motion-capture equipment for collecting trajectory ground truth is very expensive and covers a very limited working space: the maximum working range of OptiTrack's motion-capture system is 15 m × 6 m at a price of roughly 1.6 million RMB, and domestic products such as NOKOV still cost hundreds of thousands of RMB for a comparatively smaller working range. Meanwhile, some methods proposed in research for tracking object motion with a monocular camera can only compute the current pose of the object, cannot assess the accuracy of the computed pose, and cannot be used in large-scale scenes.
Disclosure of Invention
Aiming at the deficiencies of the prior art, the invention provides a real-time robot motion trajectory acquisition method based on ArUco codes, intended to solve problems of the prior art such as the limited range of motion trajectory acquisition and the high cost of acquisition equipment.
The purpose of the invention is realized by the following technical scheme: a robot motion track real-time acquisition method based on ArUco codes comprises the following steps:
step S101, determining the camera model and installation height according to the constraints of the experimental site, the height of the robot, and the required precision of the motion trajectory, and calculating the measurable range of the camera;
step S102, calculating the number and the arrangement mode of cameras according to an experimental site and the measurable range of the cameras, installing an ArUco code mark plate on the top of the robot, and finely adjusting the focal lengths of all the cameras according to the imaging effect to ensure that the ArUco code mark plate can clearly image in the measurable range of all the cameras;
step S103, calibrating internal reference of each camera, and calculating a transformation matrix from a robot coordinate system to a sign board coordinate system;
step S104, acquiring robot motion image data in real time by using a camera, introducing the transformation matrix from the robot coordinate system to the marker plate coordinate system, and calculating the 6D pose of the robot in the camera coordinate system in real time;
step S105, evaluating the visible area of the ArUco code marker plate in the image data, calculating the difference between the 6D poses of two consecutive image frames, and evaluating the 6D pose calculation result according to the visible area and the difference;
step S106, external parameters between two adjacent cameras are solved, a transformation matrix between the two adjacent cameras is calculated by the external parameters, the 6D poses calculated by all the cameras are transformed to a world coordinate system by the transformation matrix between the two cameras, and the 6D poses are output as motion tracks in a TUM data set format.
Further, the step S104 specifically includes the following sub-steps:
step S1041: acquiring image data of robot motion in real time by using a camera;
step S1042: detecting and extracting the ArUco code on the ArUco code marker plate in real time, extracting the ID, contour, and corner coordinates of the ArUco code, and handling occlusion of the ArUco code during robot motion based on template matching and trajectory interpolation;
step S1043: solving the transformation matrix from the ArUco code marker plate coordinate system to the camera coordinate system from the contour and corner coordinates of the ArUco code and the camera intrinsics, introducing the transformation matrix from the robot coordinate system to the ArUco code marker plate coordinate system, and calculating the transformation matrix from the robot coordinate system to the camera coordinate system, i.e. the 6D pose of the robot in the camera coordinate system.
Further, in step S1042, when the ArUco code is occluded, partial occlusion is handled based on template matching and complete occlusion is handled based on trajectory interpolation.
The template-matching method for partial occlusion is as follows: first acquire the last frame before the ArUco code became partially occluded and extract the ArUco code from that frame. Set thresholds for rotation and translation according to the motion capability of the robot, apply rotations and translations to the extracted image to generate an image sequence, and perform template matching between the occluded ArUco code in the current frame and the image sequence to recover a complete ArUco code for the current frame, thereby achieving target tracking with the ArUco code as the target. The robot's 6D pose is then calculated according to step S104 using the image in the sequence for which template matching succeeded.
The trajectory-interpolation method for complete occlusion is as follows: when the ArUco code is completely occluded, acquire the two frames immediately before the occlusion and the two frames after the occlusion ends, calculate the robot's 6D poses for these four frames according to step S104, and from these poses calculate the robot's velocity and acceleration at the two times. Then, assuming the acceleration varies continuously during the robot's motion, introduce fifth-order polynomial interpolation to predict the robot's trajectory:
x(t) = a_0 + a_1(t - t_0) + a_2(t - t_0)^2 + a_3(t - t_0)^3 + a_4(t - t_0)^4 + a_5(t - t_0)^5

where x(t) is the position of the robot at time t and a_0, a_1, a_2, a_3, a_4, a_5 are the undetermined coefficients of the fifth-order polynomial. These coefficients are determined from the robot's position, velocity, and acceleration at times t_0 and t_1, namely:

x(t_0) = x_0,  x(t_1) = x_1
ẋ(t_0) = v_0,  ẋ(t_1) = v_1
ẍ(t_0) = α_0,  ẍ(t_1) = α_1

Substituting these known quantities into the fifth-order polynomial and setting T = t_1 - t_0, X = x_1 - x_0 yields:

a_0 = x_0
a_1 = v_0
a_2 = α_0 / 2
a_3 = [20X - (8v_1 + 12v_0)T - (3α_0 - α_1)T^2] / (2T^3)
a_4 = [-30X + (14v_1 + 16v_0)T + (3α_0 - 2α_1)T^2] / (2T^4)
a_5 = [12X - 6(v_1 + v_0)T + (α_1 - α_0)T^2] / (2T^5)
From the above, a quintic spline between times t_0 and t_1 is obtained; this spline approximates the robot's motion while the ArUco code is completely occluded, and the robot's 6D pose can be extracted from it. On this basis, the pose estimation result is optimized with the g2o algorithm: a pose interpolation offset is computed between the robot's pose when the occlusion ends and the pose in the quintic spline closest to that frame time, and this offset is distributed evenly over each predicted pose during the complete occlusion, so that the predicted motion trajectory of the robot is smoother.
Further, step S1043 specifically includes: with the camera intrinsics known, obtain the geometric dimensions of the ArUco code marker plate from the ArUco code contour and corner coordinates output in step S1042, and solve the transformation matrix from the ArUco code marker plate coordinate system to the camera coordinate system with a PnP algorithm; the solution is performed with the solvePnP() function in OpenCV. Then introduce the transformation matrix from the robot coordinate system to the ArUco code marker plate coordinate system and calculate the transformation matrix from the robot coordinate system to the camera coordinate system, i.e. the 6D pose of the robot in the camera coordinate system:
T_robot^cam = T_marker^cam · T_robot^marker

where T_robot^cam is the transformation matrix from the robot coordinate system to the camera coordinate system, T_marker^cam is the transformation matrix from the ArUco code marker plate coordinate system to the camera coordinate system, and T_robot^marker is the transformation matrix from the robot coordinate system to the ArUco code marker plate coordinate system.
Further, the step S105 specifically includes the following sub-steps:
step S1051, calculating the proportion of the image occupied by the ArUco code marker plate from the detection and segmentation results, setting a proportion threshold, and rejecting 6D pose results whose proportion is unqualified;
step S1052, calculating the pose difference between the 6D pose result of the current image frame and that of the previous frame, setting a threshold, and rejecting results whose difference is too large.
Further, the specific method in step S101 is:
Estimate the camera installation height as approximately H ± L from the height of the experimental site, where L is the difference between the camera mounting position and the ceiling height; the vertical distance between the robot and the camera is then H - h ± L. Estimating the camera's field of view as 90° on average, the visible range of the camera is approximately 4(H - h ± L)^2. The unit pixel of the image data is calculated from the required trajectory precision; the camera pixel count p required for trajectory acquisition is:

p = 4(H - h ± L)^2 / μ^2

where p is the pixel count required for acquisition and μ is the precision required for trajectory acquisition. After determining p, select a camera model according to the pixel count; denote the resolution of the selected camera as p', its sensor size as s, and its focal length as f. Install the camera according to the optimal imaging range of the selected model, at height H'; the measurable range S of a single camera is then:

S = [s(H' - h) / f]^2

From the measurable range of a single camera and its pixel count, the trajectory acquisition precision σ is:

σ = √(S / p')

Compare the calculated acquisition precision σ with the required precision μ; if the requirement is not met, reselect the camera model and repeat the above process.
According to the technical scheme, the invention has the beneficial effects that:
(1) The AR-technology-based system for real-time acquisition, calculation, and evaluation of robot motion trajectories improves the universality of trajectory acquisition.
(2) The proposed acquisition system can capture robot motion trajectories in large-scale scenes with only a limited number of monocular cameras; the cost of the whole system lies in the cameras and their installation, which lowers the overall cost and improves the system's general applicability.
(3) Cameras can be installed and arranged according to the scale and shape of the experimental site, breaking the site restrictions of motion-capture equipment and improving the scene adaptability of trajectory acquisition.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
Fig. 1 is a flowchart of the ArUco-code-based real-time robot motion trajectory acquisition method of the present invention. As shown in Fig. 1, the method may include the following steps:
In this embodiment, the robot includes, but is not limited to, ground mobile robots such as wheeled, tracked, and multi-legged robots, as well as various types of aircraft.
Step S101, determining the type and the installation height of a camera according to the restriction of an experimental site, the height of a robot and the precision requirement of a motion track, and calculating the measurable range of the camera;
This embodiment takes a wheeled mobile robot as an example, with robot height h. The experimental site is generally an indoor scene with various structures; outdoor scenes are also possible, but the influence of illumination on camera performance must then be considered. This embodiment takes an indoor scene as an example, with floor-to-ceiling height H.
Specifically, the installation height of the camera is estimated to be about H ± L, where L is easy camera, according to the height of the experimental siteAnd if the difference between the installation position and the floor height is larger, the vertical distance between the robot and the camera is H-H +/-L, the visible range of the camera is estimated according to the average field angle of 90 degrees of the camera, and the visible range of the camera is about 4(H-H +/-L) 2 Calculating the unit pixel of the image data according to the motion trail precision shot by the camera, wherein the camera pixel p required by motion trail acquisition is as follows:
Figure BDA0003061611840000051
in the formula, p represents a camera pixel required for acquisition, and μ is the precision required for acquisition of a motion trajectory. And after determining p, carrying out camera model selection according to the camera pixels, and recording the resolution of the selected camera as p', the size of the photosensitive device as s and the focal length as f. And installing a camera according to the optimal imaging range of the selected camera model, wherein the height of the camera is H', and the measurable range of the single camera is calculated as S:
Figure BDA0003061611840000052
calculating the acquisition precision sigma of the motion trail according to the measurable range of the unit single camera and the camera pixel:
Figure BDA0003061611840000053
Compare the calculated acquisition precision σ with the required precision μ; if the requirement is not met, reselect the camera model and repeat the above process.
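To make the selection loop concrete, the following is a minimal Python sketch of the above calculation under the stated 90° field-of-view assumption; every numeric value (ceiling height, mounting tolerance, sensor size, focal length, target precision) is an illustrative assumption, not a value prescribed by the invention.

```python
import math

def required_pixels(H, h, L, mu):
    # Pixel count p so that one pixel covers mu x mu on the marker plane,
    # assuming a 90-degree field of view (ground footprint side = 2 * distance).
    d = H - h + L                      # worst-case robot-to-camera distance
    return 4.0 * d * d / (mu * mu)

def measurable_range(s, f, H_prime, h):
    # Ground-plane area S covered by one camera (pinhole model):
    # footprint side = sensor size * object distance / focal length.
    side = s * (H_prime - h) / f
    return side * side

def achieved_precision(S, p_prime):
    # Acquisition precision sigma = sqrt(S / p') in metres per pixel.
    return math.sqrt(S / p_prime)

mu = 0.002                                          # required precision: 2 mm
p = required_pixels(H=4.0, h=0.4, L=0.2, mu=mu)
S = measurable_range(s=0.0067, f=0.004, H_prime=3.9, h=0.4)
sigma = achieved_precision(S, p_prime=1920 * 1080)  # candidate camera resolution
print(f"required pixels {p:.0f}, coverage {S:.1f} m^2, sigma {sigma * 1000:.2f} mm")
if sigma > mu:
    print("precision not met: reselect a higher-resolution camera and repeat")
```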
Step S102, calculating the number and the arrangement mode of cameras according to an experimental site and the measurable range of the cameras, installing an ArUco code sign board at the top of the robot, and finely adjusting the focal lengths of all the cameras according to the imaging effect to ensure that the ArUco code sign board can clearly image in the visual fields of all the cameras;
Specifically, arrange the cameras according to the size and shape of the experimental site and the measurable range of the cameras; the arrangement must ensure a certain overlap between the measurable ranges of the cameras. To guarantee the accuracy of stitching multiple motion trajectories, the overlap between every two cameras should exceed 20% of a single camera's measurable range. To keep the measurement accuracy consistent over all positions within a single camera's measurable range, the camera's imaging plane should be adjusted to be as horizontal as possible. Then install the ArUco code marker plate on top of the robot, ensure the robot is not occluded within the measurable range of the cameras, and keep the marker plate as parallel to the camera imaging plane as possible. Finally, fine-tune the focal lengths of all cameras so that the ArUco code marker plate images clearly within the measurable range of every camera.
Step S103, calibrating internal parameters of each camera, and calculating a transformation matrix from a robot coordinate system to a sign board coordinate system;
Specifically, camera intrinsic calibration is performed based on the ROS system, which, compared with other methods, offers relatively good visualization and is easy to operate. First, print the checkerboard required for calibration, start the camera to be calibrated with clear imaging as the criterion, and acquire image data in real time; then launch the camera_calibration package in ROS and input the checkerboard parameters and the ROS topic name of the image data to obtain a visualization interface; then move the checkerboard slowly so that it appears in every region of the camera's field of view, watching the calibration progress in the interface, until the progress completes and the program automatically computes the camera intrinsics.
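For readers who prefer a script to the ROS interface, the following is an equivalent minimal sketch using OpenCV's chessboard calibration; the board geometry (9 × 6 inner corners, 25 mm squares) and the image directory are assumptions for illustration, not values from the patent.

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)    # inner corners per row/column (assumed board)
square = 0.025      # square edge in metres (assumed)

# 3D corner positions of the chessboard in its own frame (z = 0 plane)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points, img_size = [], [], None
for path in sorted(glob.glob("calib_images/*.png")):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# K: 3x3 intrinsic matrix; dist: distortion coefficients
rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, img_size, None, None)
print("reprojection RMS:", rms)
```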
For the transformation matrix from the robot coordinate system to the marker plate coordinate system: since the ArUco code marker plate is mounted on top of the robot, if the robot coordinate system is placed at the robot's geometric center, one only needs to measure the translation of the marker plate relative to the geometric center in the x, y, and z directions; there is no rotational component, so the transformation is a pure translation, i.e. the transformation matrix T from the robot coordinate system to the marker plate coordinate system is:
T = [[1, 0, 0, x],
     [0, 1, 0, y],
     [0, 0, 1, z],
     [0, 0, 0, 1]]

where (x, y, z) is the measured translation of the marker plate relative to the robot's geometric center.
In this embodiment the acquired motion trajectory is mainly used for positioning and tracking the robot, so the origin of the robot coordinate system is placed directly at the origin of the marker plate coordinate system, which removes unnecessary measurements and the errors they introduce.
Step S104, acquiring robot motion image data in real time by using a camera, introducing a transformation matrix from a robot coordinate system to a marking plate coordinate system, and calculating the 6D pose of the robot in the camera coordinate system in real time; the step comprises the following substeps:
step S1041, acquiring image data of robot motion in real time by using a camera;
Specifically, robot motion data are collected by all cameras installed on the laboratory ceiling; during an experiment all cameras are started simultaneously so that the timestamps of all pictures in the image data are consistent. The more clearly the ArUco code marker plate is imaged by the camera, the more accurate the computed 6D pose of the robot. The acquired image data of robot motion therefore mainly consist of images of the moving ArUco code marker plate, rather than pictures of the whole robot's pose at each moment. In this embodiment, image acquisition and robot pose calculation run simultaneously; step S1041 mainly acquires images and outputs them in real time.
Step S1042, detecting and extracting the ArUco code on the ArUco code marker plate in real time, extracting the ID, contour, and corner coordinates of the ArUco code, and handling occlusion of the ArUco code during robot motion based on template matching and trajectory interpolation;
Specifically, the input image data of the ArUco code marker plate are first converted to a grayscale image, and the dictionary size and marker size of the ArUco code are set via the Dictionary facility in OpenCV. Candidate-frame detection and quadrilateral extraction are then performed to detect and extract the ArUco code on the marker plate in real time. Candidate-frame detection comprises candidate detection, corner sorting, and removal of similar quadrilaterals: the grayscale image is segmented with an adaptive threshold, and its contours are extracted and filtered to discard contours that do not match the expected size, achieving coarse identification of the ArUco code. Quadrilateral extraction applies a perspective projection to the image to obtain a front view, classifies the front view with a binary threshold using Otsu's method, obtains the ArUco code ID by analyzing the threshold classification result, completes precise identification, and outputs the ID, contour, and corner coordinates of the ArUco code. These steps are implemented with the detectMarkers function in OpenCV. Finally, to improve detection precision and avoid coordinate-transformation errors caused by wrong ArUco IDs, the ArUco code IDs are filtered with the filterDetectedMarkers function in OpenCV.
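A minimal detection sketch using the OpenCV ArUco module is shown below (it assumes the ArucoDetector interface of OpenCV 4.7 or later); the dictionary choice and camera index are illustrative assumptions.

```python
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

cap = cv2.VideoCapture(0)            # assumed camera index
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # corners: one 4x2 corner array per marker; ids: the marker IDs
    corners, ids, rejected = detector.detectMarkers(gray)
    if ids is not None:
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imshow("aruco", frame)
    if cv2.waitKey(1) == 27:         # Esc quits
        break
cap.release()
```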
Occlusion of the ArUco code during robot motion is handled by template matching and trajectory interpolation. Specifically, to deal with unclear markers caused by occlusion, motion blur, and similar environmental problems, the conventional approach prints multiple marks on the same marker plate, relying on the marks never being occluded simultaneously to keep localization robust. In practice, however, which part of the marker plate gets occluded is unpredictable. The invention therefore proposes a motion-estimation method based on target tracking and motion prediction to estimate the robot's motion while the marker is occluded. This step comprises the following substeps:
the method for solving the problem that the ArUco code is partially shielded based on template matching specifically comprises the following steps: when the ArUco code has partial occlusion, firstly acquiring a previous frame image of the ArUco code which is partially occluded, and extracting the ArUco code in the frame image. Setting thresholds of rotation transformation and translation transformation according to the motion performance of the robot, simultaneously performing rotation transformation and translation transformation on the image to generate an image sequence, and performing template matching on the ArUco code in the current frame which is shielded and the image sequence to generate a complete ArUco code of the current frame, so as to realize target tracking by taking the ArUco code as a target. At this time, the robot 6D pose is calculated according to step S104 using the ArUco code image in the image sequence for which template matching is successful.
The trajectory-interpolation method for complete occlusion is as follows: when the ArUco code is completely occluded, acquire the two frames immediately before the occlusion and the two frames after the occlusion ends, calculate the robot's 6D poses for these four frames according to step S104, and from these poses calculate the robot's velocity and acceleration at the two times. Then, assuming the acceleration varies continuously during the robot's motion, introduce fifth-order polynomial interpolation to predict the robot's trajectory:
x(t) = a_0 + a_1(t - t_0) + a_2(t - t_0)^2 + a_3(t - t_0)^3 + a_4(t - t_0)^4 + a_5(t - t_0)^5

where x(t) is the position of the robot at time t and a_0, a_1, a_2, a_3, a_4, a_5 are the undetermined coefficients of the fifth-order polynomial. These coefficients are determined from the robot's position, velocity, and acceleration at times t_0 and t_1, namely:

x(t_0) = x_0,  x(t_1) = x_1
ẋ(t_0) = v_0,  ẋ(t_1) = v_1
ẍ(t_0) = α_0,  ẍ(t_1) = α_1

Substituting these known quantities into the fifth-order polynomial and setting T = t_1 - t_0, X = x_1 - x_0 yields:

a_0 = x_0
a_1 = v_0
a_2 = α_0 / 2
a_3 = [20X - (8v_1 + 12v_0)T - (3α_0 - α_1)T^2] / (2T^3)
a_4 = [-30X + (14v_1 + 16v_0)T + (3α_0 - 2α_1)T^2] / (2T^4)
a_5 = [12X - 6(v_1 + v_0)T + (α_1 - α_0)T^2] / (2T^5)
From the above, a quintic spline between times t_0 and t_1 is obtained; this spline approximates the robot's motion while the ArUco code is completely occluded, and the robot's 6D pose can be extracted from it. On this basis, the pose estimation result is optimized with the g2o algorithm: a pose interpolation offset is computed between the robot's pose when the occlusion ends and the pose in the quintic spline closest to that frame time, and this offset is distributed evenly over each predicted pose during the complete occlusion, so that the predicted motion trajectory of the robot is smoother.
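The coefficient solution above can be computed directly; the following sketch applies it to a single pose component, with all numeric values in the example purely illustrative.

```python
import numpy as np

def quintic_coefficients(x0, v0, acc0, x1, v1, acc1, T):
    """Solve for a0..a5 of x(t) = sum a_k (t - t0)^k from the six
    boundary conditions on position, velocity, and acceleration."""
    X = x1 - x0
    a0, a1, a2 = x0, v0, acc0 / 2.0
    a3 = (20*X - (8*v1 + 12*v0)*T - (3*acc0 - acc1)*T**2) / (2*T**3)
    a4 = (-30*X + (14*v1 + 16*v0)*T + (3*acc0 - 2*acc1)*T**2) / (2*T**4)
    a5 = (12*X - 6*(v1 + v0)*T + (acc1 - acc0)*T**2) / (2*T**5)
    return np.array([a0, a1, a2, a3, a4, a5])

def evaluate(coeffs, tau):
    """Position at time offset tau = t - t0."""
    return sum(c * tau**k for k, c in enumerate(coeffs))

# Example: bridge one translation component across an assumed 0.5 s occlusion
c = quintic_coefficients(x0=0.0, v0=0.4, acc0=0.0,
                         x1=0.25, v1=0.45, acc1=0.1, T=0.5)
print(evaluate(c, 0.25))   # predicted position mid-occlusion
```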
Step S1043, solving the transformation matrix from the ArUco code marker plate coordinate system to the camera coordinate system from the contour and corner coordinates of the ArUco code and the camera intrinsics, introducing the transformation matrix from the robot coordinate system to the ArUco code marker plate coordinate system, and calculating the transformation matrix from the robot coordinate system to the camera coordinate system, i.e. the 6D pose of the robot in the camera coordinate system;
Specifically, given the coordinates of n 3D points in the world coordinate system, the coordinates of the corresponding 2D points in the image coordinate system, and the camera intrinsics, the pose transformation between the camera coordinate system and the world coordinate system can be solved from the perspective projection relationship; this is the PnP problem of computer vision. Likewise, in this embodiment the camera intrinsics are known, step S1042 outputs the contour and corner coordinates of the ArUco code, and the geometric dimensions of the marker plate are easily obtained, so the transformation matrix from the ArUco code marker plate coordinate system to the camera coordinate system can be solved with a PnP algorithm; this embodiment uses the solvePnP() function in OpenCV. Then the transformation matrix from the robot coordinate system to the ArUco code marker plate coordinate system is introduced, and the transformation matrix from the robot coordinate system to the camera coordinate system, i.e. the 6D pose of the robot in the camera coordinate system, is calculated:
T_robot^cam = T_marker^cam · T_robot^marker

where T_robot^cam is the transformation matrix from the robot coordinate system to the camera coordinate system, T_marker^cam is the transformation matrix from the ArUco code marker plate coordinate system to the camera coordinate system, and T_robot^marker is the transformation matrix from the robot coordinate system to the ArUco code marker plate coordinate system.
Step S105, evaluating the visible area of the marker plate in the image data, calculating the difference between the 6D poses of two consecutive image frames, and evaluating the 6D pose calculation result according to the visible area and the difference; this comprises the following substeps:
Step S1051, calculating the proportion of the image occupied by the ArUco marker from the detection and segmentation results, setting a proportion threshold, and rejecting 6D pose results whose proportion is unqualified;
Specifically, the imaging size and imaging angle of the ArUco code are each judged from the ArUco detection and segmentation result, thresholds are set, and image data with unqualified imaging are rejected. First, the four corner coordinates of the marker are obtained from the segmentation result, the marker's area before perspective projection is computed from these corners and compared with the image size, and images in which the marker is too small are rejected. Next, the distances between adjacent corners give the lengths of the four sides of the ArUco code contour; their ratios are computed, a ratio threshold is set, and images in which the marker is too distorted are rejected. Finally, the angle between the camera plane and the marker plane is computed from the four corner coordinates, an angle threshold is set, and image data with too small an angle are rejected.
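The following sketch illustrates the first two checks (relative area and side-length ratio); both thresholds are assumptions, since the patent leaves their values open.

```python
import numpy as np

def marker_ok(corners, image_area,
              min_area_ratio=0.001, max_side_ratio=1.3):   # assumed thresholds
    """corners: 4x2 array of marker corner pixel coordinates."""
    # Quadrilateral area via the shoelace formula, compared to the whole image
    x, y = corners[:, 0], corners[:, 1]
    area = 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
    if area / image_area < min_area_ratio:
        return False   # marker images too small
    # Side lengths of the contour; strong disparity means heavy distortion
    sides = np.linalg.norm(corners - np.roll(corners, 1, axis=0), axis=1)
    if sides.max() / sides.min() > max_side_ratio:
        return False   # marker too distorted (camera and marker planes misaligned)
    return True
```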
Step S1052, calculating the pose difference between the 6D pose result of the current image frame and that of the previous frame, setting a threshold, and rejecting results whose difference is too large;
Specifically, the robot's motion comprises translation and rotation. For convenience and to reduce computation, the differences between the translation and rotation matrices of the current and previous frames are calculated separately to obtain the robot's motion speed; the actual motion is then computed from the motion speed and elapsed time, the two are compared, a threshold is set, and motions whose difference is too large are rejected.
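A minimal sketch of this consistency check on 4 × 4 pose matrices follows; the velocity and angular-rate limits are illustrative assumptions standing in for the thresholds derived from the robot's motion performance.

```python
import numpy as np

def pose_jump_ok(T_prev, T_curr, dt, v_max=2.0, w_max=2.0):
    """Reject a 6D pose whose change since the previous frame exceeds what
    the robot could have travelled in dt seconds (v_max m/s, w_max rad/s)."""
    # Translation difference between consecutive frames
    dp = np.linalg.norm(T_curr[:3, 3] - T_prev[:3, 3])
    # Rotation difference as the angle of the relative rotation
    R_rel = T_prev[:3, :3].T @ T_curr[:3, :3]
    angle = np.arccos(np.clip((np.trace(R_rel) - 1) / 2, -1.0, 1.0))
    return dp <= v_max * dt and angle <= w_max * dt
```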
Step S106, solving external parameters between two adjacent cameras, calculating a transformation matrix between the two adjacent cameras by using the external parameters, transforming the 6D poses calculated by all the cameras to a world coordinate system by using the transformation matrix between the two cameras, and outputting the 6D poses as a motion trail in a TUM data set format;
Specifically, the qualified 6D pose results calculated by all cameras are output, and the transformation matrices between cameras are introduced to transform them into the world coordinate system. The usual method for joint extrinsic calibration of two cameras is to have both cameras photograph the same checkerboard at the same time and simultaneously compute each camera's pose relative to the checkerboard, from which the pose transformation between the two cameras is solved. In this embodiment the pose transformation is computed directly with the ArUco marker plate: the transformation matrix between two cameras is calculated from the robot trajectory observed in the cameras' common viewing area, and the trajectories are transformed into the same camera frame. The coordinate system of one chosen camera is set as the world coordinate system, and the robot trajectories acquired by all cameras are transformed into that camera coordinate system, i.e. the world coordinate system. The rotation matrices in the transformed 6D poses are then converted with the Eigen library into the quaternion form required by the TUM dataset, corresponding timestamps and translations are generated for each pose, and the result is written to a txt file, which is the robot's motion trajectory.
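A minimal export sketch follows; since the embodiment's Eigen-based conversion is C++, scipy's Rotation is substituted here for the rotation-matrix-to-quaternion step.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def write_tum(path, stamped_poses):
    """stamped_poses: iterable of (timestamp, 4x4 world-frame pose).
    TUM line format: timestamp tx ty tz qx qy qz qw"""
    with open(path, "w") as f:
        for t, T in stamped_poses:
            tx, ty, tz = T[:3, 3]
            # scipy returns quaternions in (x, y, z, w) order, as TUM expects
            qx, qy, qz, qw = Rotation.from_matrix(T[:3, :3]).as_quat()
            f.write(f"{t:.6f} {tx:.6f} {ty:.6f} {tz:.6f} "
                    f"{qx:.6f} {qy:.6f} {qz:.6f} {qw:.6f}\n")
```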
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the present disclosure. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (4)

1. A real-time robot motion trajectory acquisition method based on ArUco codes, characterized by comprising the following steps:
step S101, determining the model and the installation height of a camera according to the restriction of an experimental site, the height of a robot and the precision requirement of a motion trail, and calculating the measurable range of the camera;
step S102, calculating the number and the arrangement mode of cameras according to an experimental site and the measurable range of the cameras, installing an ArUco code mark plate on the top of the robot, and finely adjusting the focal lengths of all the cameras according to the imaging effect to ensure that the ArUco code mark plate can clearly image in the measurable range of all the cameras;
step S103, calibrating internal parameters of each camera, and calculating a transformation matrix from a robot coordinate system to a sign board coordinate system;
step S104, acquiring robot motion image data in real time by using a camera, introducing a transformation matrix from a robot coordinate system to a marker plate coordinate system, and calculating the 6D pose of the robot in the camera coordinate system in real time;
the step S104 specifically includes the following substeps:
step S1041: acquiring image data of robot motion in real time by using a camera;
step S1042: detecting and extracting the Aruco code on the Aruco code mark plate in real time, extracting the ID, the contour and the angular point coordinate of the Aruco code, and solving the problem that the Aruco code is shielded in the motion process of the robot based on template matching and track interpolation;
in the step S1042, when the ArUco code is occluded, the problem that the ArUco code is partially occluded is solved based on template matching, and the problem that the ArUco code is completely occluded is solved based on trajectory interpolation;
the method for solving the problem that the Aruco code is partially shielded based on template matching specifically comprises the following steps: firstly, acquiring a previous frame image with an Aruco code partially shielded, and extracting the Aruco code in the frame image; setting thresholds of rotation transformation and translation transformation according to the motion performance of the robot, simultaneously performing rotation transformation and translation transformation on the image to generate an image sequence, and performing template matching on the ArUco code in the shielded current frame and the image sequence to generate a complete ArUco code of the current frame, so as to realize target tracking by taking the ArUco code as a target; at the moment, the robot 6D pose is calculated according to the step S104 by utilizing the ArUco code image successfully matched with the template in the image sequence;
the method for solving the problem that the Aruco code is completely shielded based on the track interpolation specifically comprises the following steps: when the Aruco code is completely shielded, acquiring the first two frames of images with shielding and the two frames of images when the complete shielding disappears, calculating the 6D poses of the robot corresponding to the four frames of images according to the step S104, and calculating the speed and the acceleration of the robot at two moments according to the poses; then, assuming that the acceleration is continuously changed in the motion process of the robot, introducing fifth-order polynomial interpolation to predict the motion track of the robot:
x(t) = a_0 + a_1(t - t_0) + a_2(t - t_0)^2 + a_3(t - t_0)^3 + a_4(t - t_0)^4 + a_5(t - t_0)^5

where x(t) is the position of the robot at time t and a_0, a_1, a_2, a_3, a_4, a_5 are the undetermined coefficients of the fifth-order polynomial, the coefficients being determined from the robot's position, velocity, and acceleration at times t_0 and t_1, namely:

x(t_0) = x_0,  x(t_1) = x_1
ẋ(t_0) = v_0,  ẋ(t_1) = v_1
ẍ(t_0) = α_0,  ẍ(t_1) = α_1

substituting the known quantities into the fifth-order polynomial and setting T = t_1 - t_0, X = x_1 - x_0 yields:

a_0 = x_0
a_1 = v_0
a_2 = α_0 / 2
a_3 = [20X - (8v_1 + 12v_0)T - (3α_0 - α_1)T^2] / (2T^3)
a_4 = [-30X + (14v_1 + 16v_0)T + (3α_0 - 2α_1)T^2] / (2T^4)
a_5 = [12X - 6(v_1 + v_0)T + (α_1 - α_0)T^2] / (2T^5)

obtaining from the above a quintic spline between times t_0 and t_1 that approximates the robot's motion while the ArUco code is completely occluded, from which the robot's 6D pose is extracted; on this basis, optimizing the pose estimation result with the g2o algorithm, i.e. computing a pose interpolation offset between the robot's pose when the occlusion ends and the pose in the quintic spline closest to that frame time, and distributing this offset evenly over each predicted pose during the complete occlusion, so that the predicted motion trajectory of the robot is smoother;
step S1043: according to the contour, the angular point coordinates and the camera internal reference of the ArUco code, solving a transformation matrix from an ArUco code mark plate coordinate system to a camera coordinate system, introducing the transformation matrix from the robot coordinate system to the ArUco code mark plate coordinate system, and calculating the transformation matrix from the robot coordinate system to the camera coordinate system, namely the 6D pose of the robot under the camera coordinate system;
step S105, evaluating the visible area of the Aruco code sign in image data, calculating the difference value of the 6D poses of two continuous frames of images, and evaluating the 6D pose calculation result according to the visible area and the difference value;
step S106, external parameters between two adjacent cameras are solved, a transformation matrix between the two adjacent cameras is calculated by the external parameters, the 6D poses calculated by all the cameras are transformed to a world coordinate system by the transformation matrix between the two cameras, and the 6D poses are output as motion tracks in a TUM data set format.
2. The method for acquiring the motion trail of the robot in real time based on the Aruco code as claimed in claim 1, wherein the step S1043 specifically comprises: knowing camera internal parameters, acquiring the geometric size of the Aruco code mark plate by utilizing the Aruco code outline and the angular point coordinates output in the step S1042, and solving a transformation matrix from the Aruco code mark plate coordinate system to the camera coordinate system by utilizing a PnP algorithm; solving by using a solvePnP () function in OpenCV; then introducing a transformation matrix from the robot coordinate system to an Aruco code sign board coordinate system, and calculating the transformation matrix from the robot coordinate system to a camera coordinate system, namely the 6D pose of the robot under the camera coordinate system, namely:
T_robot^cam = T_marker^cam · T_robot^marker

where T_robot^cam is the transformation matrix from the robot coordinate system to the camera coordinate system, T_marker^cam is the transformation matrix from the ArUco code marker plate coordinate system to the camera coordinate system, and T_robot^marker is the transformation matrix from the robot coordinate system to the ArUco code marker plate coordinate system.
3. The method for acquiring the motion trail of the robot based on the ArUco code as claimed in claim 1, wherein the step S105 specifically comprises the following substeps:
step S1051, calculating the picture proportion of the Aruco code mark in the image according to the results of detecting and extracting the Aruco code mark, setting a proportion threshold value, and rejecting the 6D pose result with unqualified proportion;
step S1052 calculates the pose difference between the 6D pose result output by the current image frame and the previous frame calculation result, sets a threshold value, and rejects the calculation result with an excessive difference value.
4. The real-time robot motion trail acquisition method based on Aruco code as claimed in claim 1, wherein the concrete method of step S101 is as follows:
estimating the camera installation height as approximately H ± L from the height of the experimental site, where L is the difference between the camera mounting position and the ceiling height, the vertical distance between the robot and the camera then being H - h ± L; estimating the camera's visible range from an average field of view of 90°, the visible range being approximately 4(H - h ± L)^2; calculating the unit pixel of the image data from the required trajectory precision, the camera pixel count p required for trajectory acquisition being:

p = 4(H - h ± L)^2 / μ^2

where p is the pixel count required for acquisition and μ is the precision required for trajectory acquisition; after determining p, selecting a camera model according to the pixel count, and denoting the resolution of the selected camera as p', its sensor size as s, and its focal length as f; installing the camera according to the optimal imaging range of the selected model at height H', the measurable range S of a single camera being:

S = [s(H' - h) / f]^2

calculating the trajectory acquisition precision σ from the measurable range of a single camera and its pixel count:

σ = √(S / p')
and comparing the motion trail acquisition precision sigma obtained by calculation with the precision mu required by motion trail acquisition, if the precision sigma does not meet the precision requirement, reselecting the model of the camera, and repeating the process.
CN202110515741.3A 2021-05-12 2021-05-12 Robot motion track real-time acquisition method based on Aruco code Active CN113223050B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110515741.3A CN113223050B (en) 2021-05-12 2021-05-12 Robot motion track real-time acquisition method based on Aruco code

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110515741.3A CN113223050B (en) 2021-05-12 2021-05-12 Robot motion track real-time acquisition method based on Aruco code

Publications (2)

Publication Number Publication Date
CN113223050A CN113223050A (en) 2021-08-06
CN113223050B true CN113223050B (en) 2022-07-26

Family

ID=77094929

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110515741.3A Active CN113223050B (en) 2021-05-12 2021-05-12 Robot motion track real-time acquisition method based on Aruco code

Country Status (1)

Country Link
CN (1) CN113223050B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114216482A (en) * 2021-12-14 2022-03-22 Oppo广东移动通信有限公司 Method and device for determining external trace parameter value, storage medium and electronic equipment
CN115289982A (en) * 2022-09-28 2022-11-04 天津大学建筑设计规划研究总院有限公司 Aruco code-based structural plane displacement visual monitoring method
CN117226853B (en) * 2023-11-13 2024-02-06 之江实验室 Robot kinematics calibration method, device, storage medium and equipment

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10339389B2 (en) * 2014-09-03 2019-07-02 Sharp Laboratories Of America, Inc. Methods and systems for vision-based motion estimation
CN109374003A (en) * 2018-11-06 2019-02-22 山东科技大学 A kind of mobile robot visual positioning and air navigation aid based on ArUco code
CN109993798B (en) * 2019-04-09 2021-05-28 上海肇观电子科技有限公司 Method and equipment for detecting motion trail by multiple cameras and storage medium
CN112304302B (en) * 2019-07-26 2023-05-12 北京魔门塔科技有限公司 Multi-scene high-precision vehicle positioning method and device and vehicle-mounted terminal
CN113643378B (en) * 2019-09-30 2023-06-09 深圳市瑞立视多媒体科技有限公司 Active rigid body pose positioning method in multi-camera environment and related equipment
CN111179356A (en) * 2019-12-25 2020-05-19 北京中科慧眼科技有限公司 Binocular camera calibration method, device and system based on Aruco code and calibration board
CN112734844B (en) * 2021-01-08 2022-11-08 河北工业大学 Monocular 6D pose estimation method based on octahedron

Also Published As

Publication number Publication date
CN113223050A (en) 2021-08-06

Similar Documents

Publication Publication Date Title
CN113223050B (en) Robot motion track real-time acquisition method based on Aruco code
JP6095018B2 (en) Detection and tracking of moving objects
US8189051B2 (en) Moving object detection apparatus and method by using optical flow analysis
CN110580723B (en) Method for carrying out accurate positioning by utilizing deep learning and computer vision
CN107038683B (en) Panoramic imaging method for moving object
JP6261266B2 (en) Moving body detection device
CN110139031B (en) Video anti-shake system based on inertial sensing and working method thereof
CN110991360B (en) Robot inspection point position intelligent configuration method based on visual algorithm
CN108362205B (en) Space distance measuring method based on fringe projection
CN109934873B (en) Method, device and equipment for acquiring marked image
Linger et al. Aerial image registration for tracking
CN111996883B (en) Method for detecting width of road surface
JP2018205870A (en) Object tracking method and device
CN116844147A (en) Pointer instrument identification and abnormal alarm method based on deep learning
CN109410272B (en) Transformer nut recognition and positioning device and method
CN110969135A (en) Vehicle logo recognition method in natural scene
CN116310263A (en) Pointer type aviation horizon instrument indication automatic reading implementation method
CN113689365B (en) Target tracking and positioning method based on Azure Kinect
Gandhi et al. Application of planar motion segmentation for scene text extraction
Ginzburg et al. A cheap system for vehicle speed detection
CN107292932B (en) Head-on video speed measurement method based on image expansion rate
van de Wouw et al. Hierarchical 2.5-d scene alignment for change detection with large viewpoint differences
US10776928B1 (en) Efficient egomotion estimation using patch-based projected correlation
CN113191239A (en) Vehicle overall dimension dynamic detection system based on computer vision
CN112800890A (en) Road obstacle detection method based on surface normal vector

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant