CN113223050A - Robot motion track real-time acquisition method based on Aruco code - Google Patents


Info

Publication number
CN113223050A
CN113223050A (application CN202110515741.3A)
Authority
CN
China
Prior art keywords
robot
camera
coordinate system
aruco
aruco code
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110515741.3A
Other languages
Chinese (zh)
Other versions
CN113223050B (en)
Inventor
He Wei
Li Yuehua
Zhu Shiqiang
Xie Tian
Li Xiaoqian
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Lab
Original Assignee
Zhejiang Lab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Lab filed Critical Zhejiang Lab
Priority to CN202110515741.3A priority Critical patent/CN113223050B/en
Publication of CN113223050A publication Critical patent/CN113223050A/en
Application granted granted Critical
Publication of CN113223050B publication Critical patent/CN113223050B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30241Trajectory

Abstract

The invention discloses a real-time robot motion trajectory acquisition method based on ArUco codes, which comprises the following steps: determining the camera model and installation height according to the constraints of the experimental site, the height of the robot, and the required trajectory precision, calculating the measurable range of the camera, and installing the cameras and an ArUco code marker plate; calibrating each camera's intrinsic parameters and the extrinsic parameters between the ArUco code marker plate and the robot coordinate system; acquiring image data of the robot's motion in real time with the cameras and calculating the robot's 6D pose from the image data in real time; evaluating the visible area of the ArUco code marker plate in the image data, calculating the difference between the 6D poses of two consecutive frames, and evaluating the 6D pose results according to the visible area and the difference; and solving the extrinsic parameters between adjacent cameras, calculating the transformation matrix between them, transforming the 6D poses calculated by all cameras into a world coordinate system, and outputting them as a motion trajectory in the TUM dataset format.

Description

Robot motion track real-time acquisition method based on Aruco code
Technical Field
The invention relates to the technical field of robot motion trajectory acquisition, and in particular to a real-time robot motion trajectory acquisition method based on ArUco codes.
Background
In robotics research, and particularly in the field of robot vision research represented by SLAM technology, ground-truth motion trajectories are essential for evaluating the performance of the methods under study. However, the existing motion capture equipment used to collect trajectory ground truth is very expensive and covers a very limited workspace: the maximum working range of OptiTrack's motion capture system is 15 m × 6 m at a price of roughly 1.6 million RMB, and domestic products such as NOKOV still cost several hundred thousand RMB for an even smaller working range. Meanwhile, existing methods that track object motion with a monocular camera can only compute the object's current pose; they can neither evaluate the accuracy of the computed pose nor be used in large-scale scenes.
Disclosure of Invention
Aiming at the deficiencies of the prior art, the invention provides a real-time robot motion trajectory acquisition method based on ArUco codes, so as to solve problems of the prior art such as the limited range of motion trajectory acquisition and the high cost of acquisition equipment.
The purpose of the invention is realized by the following technical scheme: a robot motion track real-time acquisition method based on Aruco codes comprises the following steps:
step S101, determining the model and the installation height of a camera according to the constraints of the experimental site, the height of the robot and the required precision of the motion trajectory, and calculating the measurable range of the camera;
step S102, calculating the number and the arrangement mode of cameras according to an experimental site and the measurable range of the cameras, installing an ArUco code mark plate on the top of the robot, and finely adjusting the focal lengths of all the cameras according to the imaging effect to ensure that the ArUco code mark plate can clearly image in the measurable range of all the cameras;
step S103, calibrating internal parameters of each camera, and calculating a transformation matrix from a robot coordinate system to a sign board coordinate system;
step S104, acquiring robot motion image data in real time by using a camera, introducing a transformation matrix from a robot coordinate system to a sign board coordinate system, and calculating the 6D pose of the robot in the camera coordinate system in real time;
step S105, evaluating the visible area of the Aruco code sign in image data, calculating the difference value of the 6D poses of two continuous frames of images, and evaluating the 6D pose calculation result according to the visible area and the difference value;
step S106, external parameters between two adjacent cameras are solved, a transformation matrix between the two adjacent cameras is calculated by the external parameters, the 6D poses calculated by all the cameras are transformed to a world coordinate system by the transformation matrix between the two cameras, and the 6D poses are output as motion tracks in a TUM data set format.
Further, the step S104 specifically includes the following sub-steps:
step S1041: acquiring image data of robot motion in real time by using a camera;
step S1042: detecting and extracting the ArUco code on the ArUco code marker plate in real time, extracting the ID, contour, and corner coordinates of the ArUco code, and solving the occlusion of the ArUco code during robot motion based on template matching and trajectory interpolation;
step S1043: and solving a transformation matrix from the ArUco code mark plate coordinate system to the camera coordinate system according to the contour, the corner point coordinates and the camera internal reference of the ArUco code, introducing the transformation matrix from the robot coordinate system to the ArUco code mark plate coordinate system, and calculating the transformation matrix from the robot coordinate system to the camera coordinate system, namely the 6D pose of the robot under the camera coordinate system.
Further, in step S1042, when the ArUco code is occluded, partial occlusion of the ArUco code is handled based on template matching, and complete occlusion of the ArUco code is handled based on trajectory interpolation;
The method for solving partial occlusion of the ArUco code based on template matching is specifically as follows: first, the image of the frame before the ArUco code became partially occluded is obtained, and the ArUco code in that frame is extracted. Thresholds for rotation and translation are set according to the robot's motion capability, and the extracted ArUco code image is subjected to combined rotations and translations to generate an image sequence; template matching is then performed between the occluded ArUco code in the current frame and this image sequence to recover a complete ArUco code for the current frame, thereby achieving target tracking with the ArUco code as the target. The robot 6D pose is then calculated according to step S104 using the ArUco code image from the sequence for which template matching succeeded.
The method for solving complete occlusion of the ArUco code based on trajectory interpolation is specifically as follows: when the ArUco code is completely occluded, the two frames immediately preceding the occlusion and the two frames at which the occlusion disappears are obtained, the robot 6D poses corresponding to these four frames are calculated according to step S104, and the robot's velocity and acceleration at the two corresponding times are computed from these poses. Then, assuming that the robot's acceleration varies continuously during motion, fifth-order polynomial interpolation is introduced to predict the robot's trajectory:

x(t) = a0 + a1(t-t0) + a2(t-t0)^2 + a3(t-t0)^3 + a4(t-t0)^4 + a5(t-t0)^5

where x(t) is the position of the robot at time t, and a0, a1, a2, a3, a4, a5 are the undetermined coefficients of the fifth-order polynomial. These coefficients are determined by the robot's position, velocity, and acceleration at times t0 and t1, namely:

x(t0) = x0
x(t1) = x1
x'(t0) = v0
x'(t1) = v1
x''(t0) = α0
x''(t1) = α1

Substituting these known quantities into the fifth-order polynomial and setting T = t1 - t0, X = x1 - x0 gives:

a0 = x0
a1 = v0
a2 = α0 / 2
a3 = [20X - (12v0 + 8v1)T - (3α0 - α1)T^2] / (2T^3)
a4 = [-30X + (16v0 + 14v1)T + (3α0 - 2α1)T^2] / (2T^4)
a5 = [12X - 6(v0 + v1)T + (α1 - α0)T^2] / (2T^5)

From the above, the quintic spline between times t0 and t1 is obtained; it approximates the robot's motion while the ArUco code is completely occluded, and the robot's 6D pose can be extracted from it. On this basis, the pose estimation result is optimized with the g2o algorithm: the pose difference is calculated between the robot pose when the occlusion disappears and the closest-in-time pose on the quintic spline, and this difference is apportioned evenly across each predicted pose during the complete occlusion, making the predicted motion trajectory smoother.
Further, step S1043 is specifically: with the camera intrinsic parameters known, the geometric dimensions of the ArUco code marker plate are obtained from the ArUco code contour and corner coordinates output in step S1042, and the transformation matrix from the ArUco code marker plate coordinate system to the camera coordinate system is solved with a PnP algorithm; the solution is performed using the solvePnP() function in OpenCV. The transformation matrix from the robot coordinate system to the ArUco code marker plate coordinate system is then introduced, and the transformation matrix from the robot coordinate system to the camera coordinate system, i.e., the 6D pose of the robot in the camera coordinate system, is calculated as:

T^c_r = T^c_m · T^m_r

where T^c_r is the transformation matrix from the robot coordinate system to the camera coordinate system, T^c_m is the transformation matrix from the ArUco code marker plate coordinate system to the camera coordinate system, and T^m_r is the transformation matrix from the robot coordinate system to the ArUco code marker plate coordinate system.
Further, the step S105 specifically includes the following sub-steps:
step S1051, calculating the proportion of the image occupied by the ArUco code marker according to the ArUco code marker detection and segmentation results, setting a proportion threshold, and rejecting 6D pose results whose proportion is unqualified;
step S1052, calculating the pose difference between the 6D pose result output for the current image frame and the result calculated for the previous frame, setting a threshold, and rejecting calculation results whose difference is too large.
Further, the specific method in step S101 is as follows:
the camera installation height is estimated as approximately H ± L from the height of the experimental site, where L is the deviation of the feasible camera mounting position from the story height H, so the vertical distance between the robot and the camera is H - h ± L. Assuming an average camera field of view of 90°, the camera's visible range is estimated as approximately 4(H - h ± L)^2. The pixel count of the image data is determined by the required trajectory precision: the camera pixel count p required for motion trajectory acquisition is

p = 4(H - h ± L)^2 / μ^2

where p denotes the required camera pixel count and μ is the precision required for motion trajectory acquisition. After p is determined, a camera model is selected according to this pixel count; denote the selected camera's resolution by p', its sensor size by s, and its focal length by f. The camera is installed at height H' according to the optimal imaging range of the selected model, and the measurable range S of a single camera is calculated as

S = (s · H' / f)^2

The acquisition precision σ of the motion trajectory is then calculated from the single camera's measurable range and resolution:

σ = √(S / p')

The calculated motion trajectory acquisition precision σ is compared with the required precision μ; if σ does not meet the precision requirement, another camera model is selected and the above process is repeated.
According to the above technical scheme, the invention has the following beneficial effects:
(1) The AR-technology-based system for real-time acquisition, calculation, and evaluation of robot motion trajectories improves the universality of robot motion trajectory acquisition.
(2) The motion trajectory acquisition system of the invention can accomplish robot trajectory acquisition in large-scale scenes with only a limited number of monocular cameras; since the cost of the whole system lies in the cameras and their installation alone, the overall cost is reduced and the universality of the system is improved.
(3) The invention allows cameras to be installed and arranged according to the scale and shape of the experimental site, breaking the site limitations of motion capture equipment and improving the scene adaptability of motion trajectory acquisition.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
FIG. 1 is a flowchart of the method for acquiring a robot motion trajectory in real time based on ArUco codes. As shown in FIG. 1, the method applied to robot motion trajectory acquisition may include the following steps:
in this embodiment, the robot includes, but is not limited to, a contour mobile robot, including wheeled, tracked, multi-legged robots, including various types of aircraft.
Step S101, determining the model and the installation height of a camera according to the constraints of the experimental site, the height of the robot and the required precision of the motion trajectory, and calculating the measurable range of the camera;
in the embodiment, a wheeled mobile robot is taken as an example, and the height of the robot is h. The experimental site is generally an indoor scene with various structures and can also be used for an outdoor scene, but the influence of illumination on the performance of the camera needs to be considered in the outdoor scene, the indoor scene is taken as an example in the embodiment, and the height of the floor is H.
Specifically, the camera installation height is estimated as approximately H ± L from the height of the experimental site, where L is the deviation of the feasible camera mounting position from the story height H, so the vertical distance between the robot and the camera is H - h ± L. Assuming an average camera field of view of 90°, the camera's visible range is estimated as approximately 4(H - h ± L)^2. The pixel count of the image data is determined by the required trajectory precision: the camera pixel count p required for motion trajectory acquisition is

p = 4(H - h ± L)^2 / μ^2

where p denotes the required camera pixel count and μ is the precision required for motion trajectory acquisition. After p is determined, a camera model is selected according to this pixel count; denote the selected camera's resolution by p', its sensor size by s, and its focal length by f. The camera is installed at height H' according to the optimal imaging range of the selected model, and the measurable range S of a single camera is calculated as

S = (s · H' / f)^2

The acquisition precision σ of the motion trajectory is then calculated from the single camera's measurable range and resolution:

σ = √(S / p')

The calculated motion trajectory acquisition precision σ is compared with the required precision μ; if σ does not meet the precision requirement, another camera model is selected and the above process is repeated.
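For illustration only (not part of the claimed method), the camera-selection check above can be scripted. The following Python sketch is a minimal example under the formulas as reconstructed here; the function name, parameter names, and sample values are assumptions:

```python
import math

def camera_selection_check(H, h, L, mu, p_prime, s, f, H_prime):
    """Check whether a candidate camera meets the trajectory-precision target.

    H: story height (m); h: robot height (m); L: mounting-height deviation (m)
    mu: required acquisition precision (m per pixel)
    p_prime: candidate camera resolution (total pixels)
    s: sensor size (m); f: focal length (m); H_prime: actual mounting height (m)
    """
    # Visible area for an assumed 90-degree field of view: side = 2 * (H - h +/- L)
    visible_area = 4.0 * (H - h + L) ** 2      # take the worst-case (largest) footprint
    p_required = visible_area / mu ** 2        # pixels needed to reach precision mu

    S = (s * H_prime / f) ** 2                 # measurable ground area of one camera
    sigma = math.sqrt(S / p_prime)             # achieved precision (m per pixel)
    return p_required, sigma, sigma <= mu

# Hypothetical example: 5 m ceiling, 0.5 m robot, 2-megapixel camera
p_req, sigma, ok = camera_selection_check(H=5.0, h=0.5, L=0.2, mu=0.005,
                                          p_prime=1920 * 1080, s=0.0064,
                                          f=0.008, H_prime=4.8)
print(f"required pixels: {p_req:.0f}, achieved precision: {sigma * 1000:.2f} mm, ok: {ok}")
```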
Step S102, calculating the number and the arrangement mode of cameras according to an experimental site and the measurable range of the cameras, installing an ArUco code mark plate on the top of the robot, and finely adjusting the focal lengths of all the cameras according to the imaging effect to ensure that the ArUco code mark plate is imaged clearly in the visual fields of all the cameras;
Specifically, the cameras are arranged according to the size and shape of the experimental site and the measurable range of each camera, and the arrangement must ensure a certain overlap between the cameras' measurable ranges. To guarantee the accuracy of stitching multiple motion trajectories, the overlap between every two adjacent cameras should exceed 20% of a single camera's measurable range. To keep the measurement accuracy consistent across all positions in a single camera's measurable range, each camera's imaging plane is adjusted to be as horizontal as possible. The ArUco code marker plate is then installed on top of the robot, ensuring that it is not occluded within the cameras' measurable ranges and keeping it as parallel as possible to the camera imaging planes. Finally, the focal lengths of all cameras are fine-tuned so that the ArUco code marker plate images clearly within the measurable range of every camera.
Step S103, calibrating internal parameters of each camera, and calculating a transformation matrix from a robot coordinate system to a sign board coordinate system;
Specifically, camera intrinsic calibration is performed based on the ROS system, which offers better visualization and easier operation than other methods. First, the checkerboard required for calibration is printed, the camera to be calibrated is started with clear imaging as the criterion, and image data is acquired in real time. The camera_calibration package in ROS is then launched, and the checkerboard parameters and the ROS topic name of the image data are entered to obtain a visual interface. The checkerboard is then moved slowly so that it appears in every region of the camera's field of view, while the calibration progress shown in the interface is monitored; once the progress completes, the program automatically computes the camera intrinsics.
For solving the transformation matrix from the robot coordinate system to the marker plate coordinate system: since the ArUco code marker plate is installed on top of the robot, if the robot coordinate system is placed at the robot's geometric center, only the translations of the ArUco code marker plate relative to that center in the x, y, and z directions need to be measured, and there is no rotational component. The transformation is therefore a pure translation, i.e., the transformation matrix T from the robot coordinate system to the marker plate coordinate system is:

        [ 1  0  0  x ]
    T = [ 0  1  0  y ]
        [ 0  0  1  z ]
        [ 0  0  0  1 ]

where x, y, and z are the measured translations of the marker plate relative to the robot's geometric center.
in this embodiment, the collected motion trajectory is mainly used for positioning and tracking the robot, so that the origin of the coordinate system of the robot is directly placed at the origin of the coordinate system of the calibration board, thereby reducing unnecessary measurement and errors generated by measurement.
Step S104, acquiring robot motion image data in real time by using a camera, introducing the transformation matrix from the robot coordinate system to the sign board coordinate system, and calculating the 6D pose of the robot in the camera coordinate system in real time; the step comprises the following substeps:
step S1041, collecting image data of robot motion in real time by using a camera;
Specifically, this embodiment uses all cameras installed near the ceiling of the laboratory to collect robot motion data; during an experiment, all cameras are turned on simultaneously to ensure that the timestamps of all frames in the image data are consistent. The more clearly the ArUco code marker plate is imaged in a camera, the more accurate the calculated robot 6D pose. The acquired image data of the robot's motion therefore mainly consists of images of the moving ArUco code marker plate, rather than pictures of the whole robot's pose at each moment. In this embodiment, image acquisition and robot pose calculation proceed simultaneously; step S1041 mainly acquires images and outputs them in real time.
Step S1042, detecting and extracting the ArUco codes on the ArUco code mark plate in real time, extracting the ID, the outline and the angular point coordinates of the ArUco codes, and solving the problem that the ArUco codes are shielded in the motion process of the robot based on template matching and track interpolation;
Specifically, the input image data of the ArUco code marker plate is first converted to a grayscale image, and the dictionary size and marker size of the ArUco code are set via the Dictionary in OpenCV. Candidate-frame detection and quadrilateral extraction are then performed to detect and extract the ArUco codes on the marker plate in real time. Candidate-frame detection comprises candidate detection, corner sorting, and removal of similar quadrilaterals: the grayscale image is segmented with an adaptive threshold, and its contours are extracted and filtered to discard contours of unsuitable size, achieving coarse identification of the ArUco code. Quadrilateral extraction applies a perspective projection to obtain a front view of the marker, classifies the front view with a binary threshold using Otsu's method, and obtains the ID of the ArUco code by analyzing the classification result, completing precise identification and outputting the ID, contour, and corner coordinates of the ArUco code. These steps are implemented with the detectMarkers function in OpenCV. Finally, to improve detection precision and avoid coordinate-transform errors caused by wrong ArUco IDs, the detected IDs are filtered with the filterDetectedMarkers function in OpenCV.
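For illustration, a detection loop of this kind can be written with OpenCV's aruco module. The sketch below assumes the pre-4.7 OpenCV aruco API and an arbitrarily chosen dictionary; neither is fixed by the embodiment:

```python
import cv2
import cv2.aruco as aruco

# Assumed dictionary; the embodiment sets dictionary and marker size per its own needs.
dictionary = aruco.getPredefinedDictionary(aruco.DICT_4X4_50)
params = aruco.DetectorParameters_create()  # pre-4.7 API; OpenCV 4.7+ uses aruco.ArucoDetector

def detect_marker(frame):
    """Return (ids, corners) of ArUco markers found in a BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Adaptive thresholding, contour filtering, and perspective rectification
    # happen inside detectMarkers, mirroring the coarse/precise
    # identification stages described above.
    corners, ids, _rejected = aruco.detectMarkers(gray, dictionary, parameters=params)
    return ids, corners
```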
Solving the occlusion of the ArUco code during robot motion based on template matching and trajectory interpolation: specifically, to address unclear markers caused by occlusion and motion blur in the environment, the conventional approach is to print multiple markers on the same marker plate, relying on the fact that not all markers can be occluded simultaneously to ensure robust localization. In practice, however, which area of the marker plate will be occluded is unpredictable. The invention therefore proposes a motion estimation method based on target tracking and motion prediction, to solve the robot motion estimation problem when the marker is occluded. This step comprises the following substeps:
the method for solving the problem that the Aruco code is partially shielded based on template matching specifically comprises the following steps: when the Aruco code is partially occluded, firstly, a previous frame image of the Aruco code partially occluded is obtained, and the Aruco code in the frame image is extracted. Setting thresholds of rotation transformation and translation transformation according to the motion performance of the robot, simultaneously performing rotation transformation and translation transformation on the image to generate an image sequence, and performing template matching on the ArUco code in the shielded current frame and the image sequence to generate a complete ArUco code of the current frame, thereby realizing target tracking with the ArUco code as a target. At this time, the robot 6D pose is calculated according to step S104 using the ArUco code image in the image sequence for which template matching is successful.
The method for solving complete occlusion of the ArUco code based on trajectory interpolation is specifically as follows: when the ArUco code is completely occluded, the two frames immediately preceding the occlusion and the two frames at which the occlusion disappears are obtained, the robot 6D poses corresponding to these four frames are calculated according to step S104, and the robot's velocity and acceleration at the two corresponding times are computed from these poses. Then, assuming that the robot's acceleration varies continuously during motion, fifth-order polynomial interpolation is introduced to predict the robot's trajectory:

x(t) = a0 + a1(t-t0) + a2(t-t0)^2 + a3(t-t0)^3 + a4(t-t0)^4 + a5(t-t0)^5

where x(t) is the position of the robot at time t, and a0, a1, a2, a3, a4, a5 are the undetermined coefficients of the fifth-order polynomial. These coefficients are determined by the robot's position, velocity, and acceleration at times t0 and t1, namely:

x(t0) = x0
x(t1) = x1
x'(t0) = v0
x'(t1) = v1
x''(t0) = α0
x''(t1) = α1

Substituting these known quantities into the fifth-order polynomial and setting T = t1 - t0, X = x1 - x0 gives:

a0 = x0
a1 = v0
a2 = α0 / 2
a3 = [20X - (12v0 + 8v1)T - (3α0 - α1)T^2] / (2T^3)
a4 = [-30X + (16v0 + 14v1)T + (3α0 - 2α1)T^2] / (2T^4)
a5 = [12X - 6(v0 + v1)T + (α1 - α0)T^2] / (2T^5)

From the above, the quintic spline between times t0 and t1 is obtained; it approximates the robot's motion while the ArUco code is completely occluded, and the robot's 6D pose can be extracted from it. On this basis, the pose estimation result is optimized with the g2o algorithm: the pose difference is calculated between the robot pose when the occlusion disappears and the closest-in-time pose on the quintic spline, and this difference is apportioned evenly across each predicted pose during the complete occlusion, making the predicted motion trajectory smoother.
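A small numeric sketch of this interpolation for one scalar coordinate (function and variable names are illustrative):

```python
import numpy as np

def quintic_coeffs(x0, x1, v0, v1, acc0, acc1, t0, t1):
    """Coefficients a0..a5 of the quintic satisfying the six boundary conditions."""
    T = t1 - t0
    X = x1 - x0
    a0 = x0
    a1 = v0
    a2 = acc0 / 2.0
    a3 = (20 * X - (12 * v0 + 8 * v1) * T - (3 * acc0 - acc1) * T**2) / (2 * T**3)
    a4 = (-30 * X + (16 * v0 + 14 * v1) * T + (3 * acc0 - 2 * acc1) * T**2) / (2 * T**4)
    a5 = (12 * X - 6 * (v0 + v1) * T + (acc1 - acc0) * T**2) / (2 * T**5)
    return np.array([a0, a1, a2, a3, a4, a5])

def quintic_eval(coeffs, t, t0):
    """Evaluate x(t) = sum_k a_k (t - t0)^k to predict the occluded position."""
    tau = t - t0
    return sum(a * tau**k for k, a in enumerate(coeffs))

# Sanity check: the spline reproduces the end position
c = quintic_coeffs(x0=0.0, x1=1.0, v0=0.2, v1=0.1, acc0=0.0, acc1=0.0, t0=0.0, t1=2.0)
assert abs(quintic_eval(c, 2.0, 0.0) - 1.0) < 1e-9
```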
Step S1043, according to the contour, the corner point coordinates and the camera internal reference of the Aruco code, solving a transformation matrix from an Aruco code mark plate coordinate system to a camera coordinate system, introducing the transformation matrix from the robot coordinate system to the Aruco code mark plate coordinate system, and calculating the transformation matrix from the robot coordinate system to the camera coordinate system, namely the 6D pose of the robot under the camera coordinate system;
Specifically, if the coordinates of n 3D points in the world coordinate system, the coordinates of the corresponding 2D points in the image coordinate system, and the camera intrinsic parameters are known, the pose transformation between the camera coordinate system and the world coordinate system can be solved from the perspective projection relationship; this is the PnP problem of computer vision. Likewise, in this embodiment the camera intrinsics are known, the contour and corner coordinates of the ArUco code are output by step S1042, and the geometric dimensions of the ArUco code marker plate are easily obtained, so the transformation matrix from the ArUco code marker plate coordinate system to the camera coordinate system can be solved with a PnP algorithm. This embodiment solves it using the solvePnP() function in OpenCV. The transformation matrix from the robot coordinate system to the ArUco code marker plate coordinate system is then introduced, and the transformation matrix from the robot coordinate system to the camera coordinate system, i.e., the 6D pose of the robot in the camera coordinate system, is calculated as:

T^c_r = T^c_m · T^m_r

where T^c_r is the transformation matrix from the robot coordinate system to the camera coordinate system, T^c_m is the transformation matrix from the ArUco code marker plate coordinate system to the camera coordinate system, and T^m_r is the transformation matrix from the robot coordinate system to the ArUco code marker plate coordinate system.
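A hedged Python sketch of this step with OpenCV (the marker side length, intrinsics, and plate offsets are placeholder inputs obtained from calibration and measurement):

```python
import cv2
import numpy as np

def robot_pose_in_camera(corners_2d, marker_len, K, dist, t_marker_robot):
    """Chain T^c_r = T^c_m . T^m_r from one detected marker.

    corners_2d: 4x2 image corners of the marker (from detectMarkers)
    marker_len: physical side length of the marker (m)
    K, dist: camera intrinsic matrix and distortion coefficients
    t_marker_robot: measured (x, y, z) offset of the plate from the robot center
    """
    half = marker_len / 2.0
    # 3D corners of the marker in the marker-plate coordinate system
    obj_pts = np.array([[-half, half, 0], [half, half, 0],
                        [half, -half, 0], [-half, -half, 0]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(obj_pts, corners_2d.astype(np.float64), K, dist)
    assert ok

    T_cm = np.eye(4)                    # marker -> camera (T^c_m)
    T_cm[:3, :3], _ = cv2.Rodrigues(rvec)
    T_cm[:3, 3] = tvec.ravel()

    T_mr = np.eye(4)                    # robot -> marker (T^m_r): pure translation
    T_mr[:3, 3] = t_marker_robot

    return T_cm @ T_mr                  # robot -> camera (T^c_r), the 6D pose
```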
Step S105, evaluating the visible area of the ArUco code marker plate in the image data, calculating the difference between the 6D poses of two consecutive frames, and evaluating the 6D pose calculation results according to the visible area and the difference; the step comprises the following substeps:
step S1051, calculating the proportion of the image occupied by the ArUco code marker according to the ArUco marker detection and segmentation results, setting a proportion threshold, and rejecting 6D pose results whose proportion is unqualified;
Specifically, according to the ArUco detection and segmentation results, the imaging size and imaging angle of the ArUco code are evaluated separately, thresholds are set, and image data with unqualified imaging is rejected. First, the four corner coordinates of the marker are obtained from the segmentation result, the marker's image area prior to perspective rectification is computed from these corners, compared with the image size, and images in which the marker is too small are rejected. Next, the distances between adjacent corners are computed to obtain the lengths of the four sides of the ArUco code contour; the ratios of the four sides are calculated, a ratio threshold is set, and images in which the marker is excessively distorted are rejected. Finally, the angle between the camera plane and the marker plane is calculated from the four corner coordinates, an angle threshold is set, and image data with too small an angle is rejected.
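These geometric checks amount to simple corner arithmetic; a minimal sketch follows (the threshold values are illustrative, not the embodiment's):

```python
import numpy as np

def marker_quality_ok(corners, img_w, img_h,
                      min_area_ratio=0.001, max_side_ratio=1.5):
    """Reject frames where the marker images too small or too distorted.

    corners: 4x2 array of marker corner pixels, ordered around the contour.
    """
    c = np.asarray(corners, dtype=np.float64)
    x, y = c[:, 0], c[:, 1]
    # Shoelace formula for the quadrilateral's image area
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    if area / (img_w * img_h) < min_area_ratio:
        return False                    # marker occupies too little of the frame
    # Side lengths of the contour; a large max/min ratio indicates heavy distortion
    sides = np.linalg.norm(c - np.roll(c, -1, axis=0), axis=1)
    return float(sides.max() / sides.min()) <= max_side_ratio
```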
Step S1052, calculating a pose difference value between a 6D pose result output by a current image frame and a previous frame calculation result, setting a certain threshold value, and eliminating a calculation result with an overlarge difference value;
Specifically, the robot's motion comprises translation and rotation. For convenience and to reduce computation, the differences between the translation vectors and between the rotation matrices of the current frame and the previous frame are calculated separately, the robot's motion speed is derived, the physically achievable motion is calculated from the speed and the elapsed time, the two are compared, a threshold is set, and motions whose difference is too large are rejected.
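A sketch of this consistency check (the velocity and angular-rate limits are assumed robot parameters):

```python
import numpy as np

def pose_jump_ok(T_prev, T_curr, dt, v_max=1.0, w_max=1.0):
    """Reject a 6D pose whose change since the previous frame exceeds motion limits.

    T_prev, T_curr: 4x4 robot-to-camera transforms of consecutive frames
    dt: time between the frames (s); v_max (m/s), w_max (rad/s): robot limits
    """
    d_trans = np.linalg.norm(T_curr[:3, 3] - T_prev[:3, 3])
    # Relative rotation angle from the trace of R_prev^T @ R_curr
    R_rel = T_prev[:3, :3].T @ T_curr[:3, :3]
    cos_a = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    d_rot = float(np.arccos(cos_a))
    return d_trans <= v_max * dt and d_rot <= w_max * dt
```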
Step S106, solving external parameters between two adjacent cameras, calculating a transformation matrix between the two adjacent cameras by using the external parameters, transforming the 6D poses calculated by all the cameras to a world coordinate system by using the transformation matrix between the two cameras, and outputting the 6D poses as motion tracks in a TUM data set format;
Specifically, the qualified 6D pose results calculated by all cameras are output, and the transformation matrices between cameras are introduced to transform the qualified 6D poses into the world coordinate system. The common method for joint extrinsic calibration of two cameras is to photograph the same checkerboard with both cameras simultaneously and compute each camera's pose relative to the checkerboard, from which the pose transformation between the two cameras is obtained. In this embodiment, the pose transformation is computed directly with the ArUco marker plate: the transformation matrix between two cameras is calculated from the robot trajectory observed in the cameras' common viewing area, and the trajectories are transformed into the same camera frame. The coordinate system of one chosen camera is set as the world coordinate system, and the robot trajectories acquired by all cameras are transformed into that camera's coordinate system, i.e., the world coordinate system. Then, using the Eigen library, the rotation matrix of each transformed 6D pose is converted into the quaternion form required by the TUM dataset, and the corresponding timestamp, the translation vector of the transformed pose, and the quaternion are written to a txt file; these records constitute the robot's motion trajectory.
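For illustration, the TUM-format export can be sketched in Python as follows (the description above uses the Eigen library in C++; this equivalent sketch and its function names are assumptions):

```python
import numpy as np

def rotmat_to_quat(R):
    """Rotation matrix -> unit quaternion (qx, qy, qz, qw), the order TUM expects."""
    w = np.sqrt(max(0.0, 1.0 + R[0, 0] + R[1, 1] + R[2, 2])) / 2.0
    x = np.copysign(np.sqrt(max(0.0, 1.0 + R[0, 0] - R[1, 1] - R[2, 2])) / 2.0,
                    R[2, 1] - R[1, 2])
    y = np.copysign(np.sqrt(max(0.0, 1.0 - R[0, 0] + R[1, 1] - R[2, 2])) / 2.0,
                    R[0, 2] - R[2, 0])
    z = np.copysign(np.sqrt(max(0.0, 1.0 - R[0, 0] - R[1, 1] + R[2, 2])) / 2.0,
                    R[1, 0] - R[0, 1])
    return x, y, z, w

def write_tum(path, stamped_poses):
    """Write (timestamp, 4x4 world-frame pose) pairs as TUM lines:
    'timestamp tx ty tz qx qy qz qw'."""
    with open(path, "w") as f:
        for t, T in stamped_poses:
            tx, ty, tz = T[:3, 3]
            qx, qy, qz, qw = rotmat_to_quat(T[:3, :3])
            f.write(f"{t:.6f} {tx:.6f} {ty:.6f} {tz:.6f} "
                    f"{qx:.6f} {qy:.6f} {qz:.6f} {qw:.6f}\n")
```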
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (6)

1. A robot motion track real-time acquisition method based on Aruco codes is characterized by comprising the following steps:
step S101, determining the model and the installation height of a camera according to the constraints of the experimental site, the height of the robot and the required precision of the motion trajectory, and calculating the measurable range of the camera;
step S102, calculating the number and the arrangement mode of cameras according to an experimental site and the measurable range of the cameras, installing an ArUco code mark plate on the top of the robot, and finely adjusting the focal lengths of all the cameras according to the imaging effect to ensure that the ArUco code mark plate can clearly image in the measurable range of all the cameras;
step S103, calibrating internal parameters of each camera, and calculating a transformation matrix from a robot coordinate system to a sign board coordinate system;
step S104, acquiring robot motion image data in real time by using a camera, introducing a transformation matrix from a robot coordinate system to a sign board coordinate system, and calculating the 6D pose of the robot in the camera coordinate system in real time;
step S105, evaluating the visible area of the Aruco code sign in image data, calculating the difference value of the 6D poses of two continuous frames of images, and evaluating the 6D pose calculation result according to the visible area and the difference value;
step S106, external parameters between two adjacent cameras are solved, a transformation matrix between the two adjacent cameras is calculated by the external parameters, the 6D poses calculated by all the cameras are transformed to a world coordinate system by the transformation matrix between the two cameras, and the 6D poses are output as motion tracks in a TUM data set format.
2. The method for acquiring the motion trail of the robot based on the ArUco code as claimed in claim 1, wherein the step S104 comprises the following substeps:
step S1041: acquiring image data of robot motion in real time by using a camera;
step S1042: detecting and extracting the ArUco code on the ArUco code marker plate in real time, extracting the ID, contour, and corner coordinates of the ArUco code, and solving the occlusion of the ArUco code during robot motion based on template matching and trajectory interpolation;
step S1043: and solving a transformation matrix from the ArUco code mark plate coordinate system to the camera coordinate system according to the contour, the corner point coordinates and the camera internal reference of the ArUco code, introducing the transformation matrix from the robot coordinate system to the ArUco code mark plate coordinate system, and calculating the transformation matrix from the robot coordinate system to the camera coordinate system, namely the 6D pose of the robot under the camera coordinate system.
3. The method for acquiring the motion trajectory of the robot in real time based on the ArUco code as claimed in claim 2, wherein in step S1042, when the ArUco code is occluded, partial occlusion of the ArUco code is handled based on template matching, and complete occlusion of the ArUco code is handled based on trajectory interpolation;
The method for solving partial occlusion of the ArUco code based on template matching is specifically as follows: first, the image of the frame before the ArUco code became partially occluded is obtained, and the ArUco code in that frame is extracted. Thresholds for rotation and translation are set according to the robot's motion capability, and the extracted ArUco code image is subjected to combined rotations and translations to generate an image sequence; template matching is then performed between the occluded ArUco code in the current frame and this image sequence to recover a complete ArUco code for the current frame, thereby achieving target tracking with the ArUco code as the target. The robot 6D pose is then calculated according to step S104 using the ArUco code image from the sequence for which template matching succeeded.
The method for solving complete occlusion of the ArUco code based on trajectory interpolation is specifically as follows: when the ArUco code is completely occluded, the two frames immediately preceding the occlusion and the two frames at which the occlusion disappears are obtained, the robot 6D poses corresponding to these four frames are calculated according to step S104, and the robot's velocity and acceleration at the two corresponding times are computed from these poses. Then, assuming that the robot's acceleration varies continuously during motion, fifth-order polynomial interpolation is introduced to predict the robot's trajectory:

x(t) = a0 + a1(t-t0) + a2(t-t0)^2 + a3(t-t0)^3 + a4(t-t0)^4 + a5(t-t0)^5

where x(t) is the position of the robot at time t, and a0, a1, a2, a3, a4, a5 are the undetermined coefficients of the fifth-order polynomial. These coefficients are determined by the robot's position, velocity, and acceleration at times t0 and t1, namely:

x(t0) = x0
x(t1) = x1
x'(t0) = v0
x'(t1) = v1
x''(t0) = α0
x''(t1) = α1

Substituting these known quantities into the fifth-order polynomial and setting T = t1 - t0, X = x1 - x0 gives:

a0 = x0
a1 = v0
a2 = α0 / 2
a3 = [20X - (12v0 + 8v1)T - (3α0 - α1)T^2] / (2T^3)
a4 = [-30X + (16v0 + 14v1)T + (3α0 - 2α1)T^2] / (2T^4)
a5 = [12X - 6(v0 + v1)T + (α1 - α0)T^2] / (2T^5)

From the above, the quintic spline between times t0 and t1 is obtained; it approximates the robot's motion while the ArUco code is completely occluded, and the robot's 6D pose is extracted from it. On this basis, the pose estimation result is optimized with the g2o algorithm: the pose difference is calculated between the robot pose when the occlusion disappears and the closest-in-time pose on the quintic spline, and this difference is apportioned evenly across each predicted pose during the complete occlusion, making the predicted motion trajectory smoother.
4. The method for acquiring the motion trajectory of the robot in real time based on the ArUco code as claimed in claim 2, wherein step S1043 specifically comprises: with the camera intrinsic parameters known, obtaining the geometric dimensions of the ArUco code marker plate from the ArUco code contour and corner coordinates output in step S1042, and solving the transformation matrix from the ArUco code marker plate coordinate system to the camera coordinate system with a PnP algorithm, the solution being performed using the solvePnP() function in OpenCV; then introducing the transformation matrix from the robot coordinate system to the ArUco code marker plate coordinate system, and calculating the transformation matrix from the robot coordinate system to the camera coordinate system, i.e., the 6D pose of the robot in the camera coordinate system, as:

T^c_r = T^c_m · T^m_r

where T^c_r is the transformation matrix from the robot coordinate system to the camera coordinate system, T^c_m is the transformation matrix from the ArUco code marker plate coordinate system to the camera coordinate system, and T^m_r is the transformation matrix from the robot coordinate system to the ArUco code marker plate coordinate system.
5. The method for acquiring the motion trail of the robot based on the ArUco code as claimed in claim 1, wherein the step S105 specifically comprises the following substeps:
step S1051, calculating the proportion of the image occupied by the ArUco code marker according to the ArUco code marker detection and segmentation results, setting a proportion threshold, and rejecting 6D pose results whose proportion is unqualified;
step S1052, calculating the pose difference between the 6D pose result output for the current image frame and the result calculated for the previous frame, setting a threshold, and rejecting calculation results whose difference is too large.
6. The method for acquiring the motion trail of the robot based on the Aruco code in real time as claimed in claim 1, wherein the step S101 comprises the following specific steps:
estimating the camera installation height as approximately H ± L from the height of the experimental site, where L is the deviation of the feasible camera mounting position from the story height H, so that the vertical distance between the robot and the camera is H - h ± L; estimating the camera's visible range, assuming an average field of view of 90°, as approximately 4(H - h ± L)^2; and calculating the camera pixel count p required for motion trajectory acquisition from the required trajectory precision as

p = 4(H - h ± L)^2 / μ^2

where p denotes the required camera pixel count and μ is the precision required for motion trajectory acquisition; after p is determined, selecting a camera model according to this pixel count, denoting the selected camera's resolution by p', its sensor size by s, and its focal length by f; installing the camera at height H' according to the optimal imaging range of the selected model, and calculating the measurable range S of a single camera as

S = (s · H' / f)^2

then calculating the acquisition precision σ of the motion trajectory from the single camera's measurable range and resolution:

σ = √(S / p')

and comparing the calculated acquisition precision σ with the required precision μ; if σ does not meet the precision requirement, reselecting the camera model and repeating the above process.
CN202110515741.3A 2021-05-12 2021-05-12 Robot motion track real-time acquisition method based on Aruco code Active CN113223050B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110515741.3A CN113223050B (en) 2021-05-12 2021-05-12 Robot motion track real-time acquisition method based on Aruco code

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110515741.3A CN113223050B (en) 2021-05-12 2021-05-12 Robot motion track real-time acquisition method based on Aruco code

Publications (2)

Publication Number Publication Date
CN113223050A true CN113223050A (en) 2021-08-06
CN113223050B CN113223050B (en) 2022-07-26

Family

ID=77094929

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110515741.3A Active CN113223050B (en) 2021-05-12 2021-05-12 Robot motion track real-time acquisition method based on Aruco code

Country Status (1)

Country Link
CN (1) CN113223050B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114216482A (en) * 2021-12-14 2022-03-22 Oppo广东移动通信有限公司 Method and device for determining external trace parameter value, storage medium and electronic equipment
CN115289982A (en) * 2022-09-28 2022-11-04 天津大学建筑设计规划研究总院有限公司 Aruco code-based structural plane displacement visual monitoring method
CN117226853A (en) * 2023-11-13 2023-12-15 之江实验室 Robot kinematics calibration method, device, storage medium and equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160063330A1 (en) * 2014-09-03 2016-03-03 Sharp Laboratories Of America, Inc. Methods and Systems for Vision-Based Motion Estimation
CN109374003A (en) * 2018-11-06 2019-02-22 山东科技大学 A kind of mobile robot visual positioning and air navigation aid based on ArUco code
CN109993798A (en) * 2019-04-09 2019-07-09 上海肇观电子科技有限公司 Method, equipment and the storage medium of multi-cam detection motion profile
CN111179356A (en) * 2019-12-25 2020-05-19 北京中科慧眼科技有限公司 Binocular camera calibration method, device and system based on Aruco code and calibration board
WO2021017212A1 (en) * 2019-07-26 2021-02-04 魔门塔(苏州)科技有限公司 Multi-scene high-precision vehicle positioning method and apparatus, and vehicle-mounted terminal
WO2021063127A1 (en) * 2019-09-30 2021-04-08 深圳市瑞立视多媒体科技有限公司 Pose positioning method and related equipment of active rigid body in multi-camera environment
CN112734844A (en) * 2021-01-08 2021-04-30 河北工业大学 Monocular 6D pose estimation method based on octahedron

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160063330A1 (en) * 2014-09-03 2016-03-03 Sharp Laboratories Of America, Inc. Methods and Systems for Vision-Based Motion Estimation
CN109374003A (en) * 2018-11-06 2019-02-22 山东科技大学 A kind of mobile robot visual positioning and air navigation aid based on ArUco code
CN109993798A (en) * 2019-04-09 2019-07-09 上海肇观电子科技有限公司 Method, equipment and the storage medium of multi-cam detection motion profile
WO2021017212A1 (en) * 2019-07-26 2021-02-04 魔门塔(苏州)科技有限公司 Multi-scene high-precision vehicle positioning method and apparatus, and vehicle-mounted terminal
WO2021063127A1 (en) * 2019-09-30 2021-04-08 深圳市瑞立视多媒体科技有限公司 Pose positioning method and related equipment of active rigid body in multi-camera environment
CN111179356A (en) * 2019-12-25 2020-05-19 北京中科慧眼科技有限公司 Binocular camera calibration method, device and system based on Aruco code and calibration board
CN112734844A (en) * 2021-01-08 2021-04-30 河北工业大学 Monocular 6D pose estimation method based on octahedron

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HO CHUEN KAM ET AL.: "An Improvement on ArUco Marker for Pose Tracking Using Kalman Filter", SNPD 2018 *
HUANG HAIHUI: "Research on Key Technologies of Indoor Positioning for Spherical Robots Based on Multi-Sensor Fusion", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114216482A (en) * 2021-12-14 2022-03-22 Oppo广东移动通信有限公司 Method and device for determining external trace parameter value, storage medium and electronic equipment
CN115289982A (en) * 2022-09-28 2022-11-04 天津大学建筑设计规划研究总院有限公司 Aruco code-based structural plane displacement visual monitoring method
CN117226853A (en) * 2023-11-13 2023-12-15 之江实验室 Robot kinematics calibration method, device, storage medium and equipment
CN117226853B (en) * 2023-11-13 2024-02-06 之江实验室 Robot kinematics calibration method, device, storage medium and equipment

Also Published As

Publication number Publication date
CN113223050B (en) 2022-07-26

Similar Documents

Publication Publication Date Title
CN113223050B (en) Robot motion track real-time acquisition method based on Aruco code
US8189051B2 (en) Moving object detection apparatus and method by using optical flow analysis
JP6095018B2 (en) Detection and tracking of moving objects
JP6363863B2 (en) Information processing apparatus and information processing method
US6181345B1 (en) Method and apparatus for replacing target zones in a video sequence
CN107038683B (en) Panoramic imaging method for moving object
CN110139031B (en) Video anti-shake system based on inertial sensing and working method thereof
CN108470356B (en) Target object rapid ranging method based on binocular vision
JP6261266B2 (en) Moving body detection device
CN108362205B (en) Space distance measuring method based on fringe projection
CN109934873B (en) Method, device and equipment for acquiring marked image
JP2014112055A (en) Estimation method for camera attitude and estimation system for camera attitude
CN112348775B (en) Vehicle-mounted looking-around-based pavement pit detection system and method
JP2018205870A (en) Object tracking method and device
KR101469099B1 (en) Auto-Camera Calibration Method Based on Human Object Tracking
CN116704048B (en) Double-light registration method
KR102065337B1 (en) Apparatus and method for measuring movement information of an object using a cross-ratio
CN111260725A (en) Dynamic environment-oriented wheel speed meter-assisted visual odometer method
CN110969135A (en) Vehicle logo recognition method in natural scene
CN107292932B (en) Head-on video speed measurement method based on image expansion rate
US10776928B1 (en) Efficient egomotion estimation using patch-based projected correlation
CN110473229B (en) Moving object detection method based on independent motion characteristic clustering
CN113191239A (en) Vehicle overall dimension dynamic detection system based on computer vision
CN112800890A (en) Road obstacle detection method based on surface normal vector
CN111291609A (en) Method for detecting dynamic background target of airport enclosure inspection robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant