CN115471530A - Robot self-positioning precision evaluation method based on laser radar - Google Patents

Robot self-positioning precision evaluation method based on laser radar

Publication number: CN115471530A
Authority: CN (China)
Prior art keywords: registration, data, point cloud, algorithm, point
Legal status: Pending
Application number: CN202211058259.2A (filed by Guangdong University of Technology)
Other languages: Chinese (zh)
Inventors: 管贻生, 潘雅灵, 张宏, 何力
Current and original assignee: Guangdong University of Technology


Classifications

    • G06T7/33: Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T2207/10028: Range image; depth image; 3D point clouds
    • G06T2207/10044: Radar image
    • G06T2207/30241: Trajectory


Abstract

The invention discloses a robot self-positioning accuracy evaluation method based on a laser radar, which specifically comprises the following steps: S1, collecting and recording data in an unknown environment using a mobile robot carrying a laser radar; S2, performing feature processing on the recorded data and eliminating unstable points caused by motion distortion to obtain point cloud information; S3, converting the point cloud information into a global descriptor and finding, through a search algorithm, the set of sample pairs at revisited locations; S4, collecting the local displacement vectors of the sample pair data; S5, running the recorded data through a SLAM algorithm and collecting the estimated trajectory vector, combining the local displacement vectors and the trajectory vector to obtain offline registration pairs, randomly selecting registration pairs from all the offline registration pairs, and calculating a statistical error. Compared with traditional robot navigation performance evaluation techniques, the method, by exploiting natural landmarks, can reach accuracy of the same order of magnitude as a motion capture system.

Description

Robot self-positioning precision evaluation method based on laser radar
Technical Field
The invention relates to the field of robot algorithm positioning accuracy performance evaluation, in particular to a robot self-positioning accuracy evaluation method based on a laser radar.
Background
Evaluating the localization accuracy of robot SLAM algorithms is an important topic in mobile robotics, as robots are increasingly used in medical education, industrial manufacturing, and logistics and warehousing. SLAM algorithms are typically evaluated with the aid of reference datasets, artificial landmarks, or expensive equipment such as motion capture systems. However, dataset-based approaches are environment-specific; motion capture systems limit the coverage and affordability of the space; and artificial landmarks are prone to contamination and must be deployed extensively. Compared with artificial landmarks, objects in nature carry natural landmarks. By revisiting the same place two or more times, high-precision local displacement measurements can be obtained from all the observations acquired across the repeated visits; these local displacement measurements are then used to evaluate the global error of the trajectory of a given SLAM localization algorithm, that is, the error parameters are estimated through a statistical model, thereby realizing self-positioning accuracy evaluation of the robot in motion. However, acquiring high-precision local displacement measurements is not easy.
In the prior art of robot navigation, there are three performance evaluation approaches, based respectively on reference datasets, motion capture systems, and artificial landmarks. The dataset approach usually uses a set of carefully calibrated sensors (GPS, IMU, lidar, and camera) and extensive offline processing to create ground-truth trajectory poses; it is convenient to use and well suited to comparing different algorithms, but its evaluation results are specific to the environment. The motion capture system is the most accurate and most expensive method and can determine poses with millimeter accuracy; although it provides an accurate ground-truth pose that does not drift over time, the object of interest must remain within view of a sufficient number of cameras, only a limited area can be covered, and the cost is high. Artificial landmarks provide a relatively cheap and simple evaluation method, but they must be deployed extensively in the environment, are prone to contamination and occlusion, and cannot provide a confidence measure for the robot's relative pose. Natural landmarks, in contrast, already exist on objects in the environment; using them in place of artificial landmarks allows robust and natural self-positioning accuracy evaluation of the robot.
Therefore, in view of the above requirements and the shortcomings of the prior art, such as high cost and evaluation results that are specific to the environment, a robot self-positioning precision evaluation method based on a laser radar is provided.
Disclosure of Invention
The invention provides a robot self-positioning precision evaluation method based on a laser radar which, relying on the high precision of the laser radar combined with natural landmarks, can reach accuracy of the same order of magnitude as a motion capture system and can realize robot self-positioning precision evaluation simply and reliably.
The primary objective of the present invention is to solve the above technical problems, and the technical solution of the present invention is as follows:
the invention provides a robot self-positioning accuracy evaluation method based on a laser radar, which comprises the following steps:
S1, collecting recorded data in ROS bag format in an unknown environment using a mobile robot carrying a laser radar.
S2, exporting the recorded data in ROS bag format, performing feature processing, and eliminating unstable points caused by motion distortion to obtain the feature-processed point cloud information.
S3, converting the feature-processed point cloud information into a global descriptor, and finding, through a search algorithm, the set of sample pairs that revisit the same position.
S4, collecting the local displacement vectors of the sample pair data by traversing the sample pair set: obtain candidate point cloud registration correspondences for each sample pair's point clouds with a point cloud feature method; search with a maximum-clique algorithm for the largest set of registration pairs whose geometric relations are preserved before and after registration; calculate the transformation matrix between closest points in the two point clouds with the ICP (Iterative Closest Point) algorithm, taking the corresponding points that satisfy the transformation matrix with minimum mean square error as inner points; if the inner-point distance is greater than a set threshold, judge the registration erroneous and re-register; and if the registration is still erroneous after repeating the process a set number of times, abandon the collection of that sample pair's data.
S5, repeating step S4 until the local displacement vectors of all revisit sample pairs in the data set are collected; running the recorded data in ROS bag format online or offline with a SLAM algorithm and collecting the estimated trajectory vector; combining each local displacement vector with the trajectory vector to obtain a number of offline registration pairs; randomly selecting registration pairs from all the offline registration pairs; and calculating a statistical error, which is the error of the trajectory.
Further, in step S1, the mobile robot is equipped with 2D and 3D laser radar and a visual navigation sensor. When collecting data in an unknown environment, the mobile robot moves by random walking or fixed-point navigation, collects the data, and records it in ROS bag format. Methods for finding repeatedly visited locations include: global descriptor methods, the Scan Context algorithm, the Intensity Scan Context algorithm, and deep learning algorithms.
The sensors commonly used for acquiring point clouds include laser radar and RGB-D cameras; 2D and 3D laser radars are adopted because laser radar adapts to a wide range of environments with high precision, which underwrites the acquisition of high-precision local displacement measurements.
Furthermore, when collecting in an unknown environment, the mobile robot needs to visit the same position at least twice, so that the independence of the sample data is satisfied.
Further, the collected data comprises laser data and image data, and the mobile robot executes straight-line and steering motions in the environment while performing automatic obstacle avoidance and random exploration.
Further, performing feature processing on the recorded data in ROS bag format in step S2 specifically comprises: extracting the recorded data from the ROS bag; for each frame of point cloud data, either computing the feature matrix of each point's neighborhood, extracting the eigenvalues of the feature matrix, and keeping the point sets whose features are locally maximal over several neighborhoods, or directly down-sampling the recorded data; dividing each frame evenly into several parts according to the laser scan beams; retaining points whose smoothness index is greater than the set threshold as corner points and points whose smoothness index is smaller than the set threshold as surface points; and eliminating unstable points caused by motion distortion to obtain the feature-processed point cloud information.
The method for processing the point cloud features is used for obtaining stable and reliable feature points.
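The corner/surface split above can be sketched as follows; the patent does not give the exact smoothness index, so the curvature formula, the thresholds, and the function name here are illustrative assumptions (in the spirit of LOAM's scan-line curvature):

```python
import numpy as np

def extract_feature_points(scan, k=5, corner_thresh=0.05, surface_thresh=0.01):
    """Split one scan line into corner and surface points using a
    simple LOAM-style curvature index over 2k neighbours per point."""
    n = len(scan)
    curvature = np.zeros(n)
    for i in range(k, n - k):
        # Sum of difference vectors to the 2k neighbours: near zero on a
        # flat/straight stretch, large at a sharp corner.
        diff = np.sum(scan[i - k:i + k + 1] - scan[i], axis=0)
        curvature[i] = np.linalg.norm(diff) / (2 * k * np.linalg.norm(scan[i]) + 1e-9)
    interior = np.arange(k, n - k)
    corners = interior[curvature[interior] > corner_thresh]
    surfaces = interior[curvature[interior] < surface_thresh]
    return corners, surfaces
```

Points whose curvature falls between the two thresholds are simply discarded, which is one way of dropping the unstable points mentioned above.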
Further, step S3 specifically comprises: describing each frame of point cloud data with the Scan Context algorithm; obtaining, through search, repeatedly visited regions whose descriptor distance is smaller than a certain threshold; and combining five or more repeatedly visited regions according to the first-visit timestamp to obtain the sample pair set.
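A minimal sketch of such a global descriptor in the style of Scan Context is given below; the bin counts, maximum range, and the rotation-invariant column distance are illustrative defaults rather than values from this text (the published Scan Context additionally uses a ring key for fast candidate search):

```python
import numpy as np

def scan_context(points, num_rings=20, num_sectors=60, max_range=80.0):
    """Encode an (N, 3) point cloud as a ring-by-sector polar grid whose
    cells store the maximum point height, as in the Scan Context idea."""
    desc = np.zeros((num_rings, num_sectors))
    r = np.hypot(points[:, 0], points[:, 1])
    theta = np.mod(np.arctan2(points[:, 1], points[:, 0]), 2 * np.pi)
    ring = np.minimum((r / max_range * num_rings).astype(int), num_rings - 1)
    sector = np.minimum((theta / (2 * np.pi) * num_sectors).astype(int), num_sectors - 1)
    for ri, si, z in zip(ring, sector, points[:, 2]):
        desc[ri, si] = max(desc[ri, si], z)
    return desc

def sc_distance(d1, d2):
    """Distance between two descriptors: minimum mean column-wise cosine
    distance over all circular sector shifts (yaw-rotation invariant)."""
    best = np.inf
    for shift in range(d2.shape[1]):
        d2s = np.roll(d2, shift, axis=1)
        n1 = np.linalg.norm(d1, axis=0)
        n2 = np.linalg.norm(d2s, axis=0)
        valid = (n1 > 0) & (n2 > 0)
        cos = np.sum(d1[:, valid] * d2s[:, valid], axis=0) / (n1[valid] * n2[valid])
        best = min(best, float(np.mean(1.0 - cos)))
    return best
```

Revisit candidates are then the frame pairs whose descriptor distance falls below the threshold mentioned above.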
Further, in step S4, the process of collecting the local displacement vector of a sample pair's data specifically comprises:
S41, using a point cloud feature extraction method as the feature extraction model, and using each frame of feature-processed point cloud data as the input of the feature extraction model for primary feature extraction;
S42, performing nearest-neighbor search on the feature-processed point cloud information to find coarse registration point pairs; further processing the coarse registration point pairs with a maximum-clique search method to extract the largest set of registration pairs with consistent geometric relations, and generating an initial registration matrix;
S43, performing further registration with a point cloud registration algorithm, optimizing from the initial registration matrix to obtain the accurate registration relation between the two point clouds, and calculating the average distance of the inner point pairs; if the average distance is larger than the set threshold, matching again or choosing to abandon the collection; if the registration is still erroneous after repeating the process three times, abandoning the collection of that sample pair's data, which improves the robustness and accuracy of the local translation vector.
For each point in the source point cloud, the closest point in the target point cloud is found to form an initial corresponding point pair. Not every correspondence in the initial set is correct, and wrong correspondences affect the final registration result; the transformation matrix between the two point clouds is therefore computed with the ICP (Iterative Closest Point) algorithm so that the mean square error between the point sets is minimized, and the corresponding point pairs that satisfy the transformation matrix are the inner points. The iteration is repeated until the convergence condition is met, yielding the average inner-point distance between the two point clouds.
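The inner-point computation just described can be sketched as a point-to-point ICP; this brute-force, SVD-based version is an illustrative assumption of the general scheme (a production pipeline would typically use a KD-tree and a library such as PCL or Open3D):

```python
import numpy as np

def icp(source, target, iters=30):
    """Point-to-point ICP: alternate nearest-neighbour correspondences
    with a least-squares (Kabsch/SVD) rigid transform; return the
    accumulated transform and the mean inner-point distance."""
    R, t = np.eye(3), np.zeros(3)
    src = source.copy()
    for _ in range(iters):
        # nearest neighbour in the target for every source point
        d2 = np.sum((src[:, None, :] - target[None, :, :]) ** 2, axis=2)
        corr = target[np.argmin(d2, axis=1)]
        mu_s, mu_t = src.mean(axis=0), corr.mean(axis=0)
        H = (src - mu_s).T @ (corr - mu_t)
        U, _, Vt = np.linalg.svd(H)
        Ri = Vt.T @ U.T
        if np.linalg.det(Ri) < 0:        # guard against a reflection
            Vt[-1] *= -1
            Ri = Vt.T @ U.T
        ti = mu_t - Ri @ mu_s
        src = src @ Ri.T + ti            # apply the increment
        R, t = Ri @ R, Ri @ t + ti       # accumulate the total transform
    d2 = np.sum((src[:, None, :] - target[None, :, :]) ** 2, axis=2)
    mean_inner_dist = float(np.mean(np.sqrt(d2.min(axis=1))))
    return R, t, mean_inner_dist
```

The returned mean inner-point distance is the quantity compared against the threshold in the mis-registration test above.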
Further, the point cloud feature extraction methods are FPFH (Fast Point Feature Histograms) and the 3DSmoothNet method, and the point cloud registration algorithm is an algorithm of the ICP (Iterative Closest Point) family.
Further, for each sample pair, feature extraction is performed on the target point cloud and the source point cloud with the FPFH algorithm, and coarse registration pairs are obtained through nearest-neighbor search; using the geometric property that the distance between two points in one frame of point cloud keeps the same length in the other frame, the maximum registration set and a preliminary transformation matrix are obtained through a maximum-clique algorithm; finally, the maximum registration set is further registered with the ICP (Iterative Closest Point) algorithm to obtain the accurate transformation matrix.
Under this transformation matrix, if the average distance between inner point pairs is larger than the set threshold, re-registration is performed; if the process has been repeated three times without success, the sample pair is abandoned.
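The length-preservation idea behind the maximum registration set can be sketched as a consistency graph over candidate correspondences; since the text does not specify the clique search, the greedy growth and the threshold below are assumptions (an exact maximum-clique search is NP-hard):

```python
import numpy as np
from itertools import combinations

def consistent_correspondences(src_pts, tgt_pts, eps=0.05):
    """Greedily approximate the largest geometrically consistent set of
    correspondences: two correspondences are compatible when the rigid
    motion preserves the distance between their points."""
    n = len(src_pts)
    adj = np.zeros((n, n), dtype=bool)
    for i, j in combinations(range(n), 2):
        ds = np.linalg.norm(src_pts[i] - src_pts[j])
        dt = np.linalg.norm(tgt_pts[i] - tgt_pts[j])
        adj[i, j] = adj[j, i] = abs(ds - dt) < eps
    # grow a clique greedily, starting from the best-connected vertex
    clique = []
    for v in np.argsort(-adj.sum(axis=1)):
        if all(adj[v, u] for u in clique):
            clique.append(int(v))
    return sorted(clique)
```

The surviving correspondences initialize the registration matrix that ICP then refines.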
Further, the process of collecting the estimated trajectory vector in step S5 specifically includes:
S51, running the recorded data in ROS bag format with a SLAM algorithm, and saving the algorithm's estimated trajectory;
S52, defining a certain local translation vector as V_x, finding the two poses of the SLAM-estimated trajectory at the corresponding timestamps, and defining the translation vector between them as V_y, forming an offline registration pair (V_x, V_y);
S53, repeating steps S51 and S52 until all offline registration pairs are collected; several pairs are randomly selected from the set of all offline registration pairs, and the parameter σ of the sample set is estimated with the GLO algorithm and used to calculate the final error σ'.
The method can be used to evaluate different types of SLAM algorithms, can help test and select the optimal parameters for a given SLAM algorithm, and effectively reduces the number of parameters required for the computation and the consumption of computing resources.
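Since the GLO estimator is only named, not specified, in this text, the statistical step of S5 can be illustrated with a stand-in that samples offline registration pairs (V_x, V_y) at random and summarizes the per-pair residuals by their RMS; the function name, the sampling count, and the RMS choice are all assumptions:

```python
import numpy as np

def estimate_error_scale(local_vecs, traj_vecs, n_samples=1000, seed=0):
    """Randomly sample offline registration pairs (V_x, V_y) and return
    the RMS of the displacement residuals as an error scale sigma'."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(local_vecs), size=n_samples)
    residuals = np.linalg.norm(local_vecs[idx] - traj_vecs[idx], axis=1)
    return float(np.sqrt(np.mean(residuals ** 2)))
```

With independent per-axis Gaussian noise of scale s on the trajectory, the expected RMS residual is about s times the square root of 3, so the statistic tracks the trajectory's error level.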
The invention further provides a laser-radar-based robot self-positioning accuracy evaluation system comprising a memory and a processor, wherein the memory contains a laser-radar-based robot self-positioning accuracy evaluation program which, when executed by the processor, realizes the following steps:
S1, collecting recorded data in ROS bag format in an unknown environment using a mobile robot carrying a laser radar.
S2, exporting the recorded data in ROS bag format, performing feature processing, and eliminating unstable points caused by motion distortion to obtain the feature-processed point cloud information.
S3, converting the feature-processed point cloud information into a global descriptor, and finding, through a search algorithm, the set of sample pairs that revisit the same position.
S4, collecting the local displacement vectors of the sample pair data by traversing the sample pair set: obtain candidate point cloud registration correspondences for each sample pair's point clouds with a point cloud feature method; search with a maximum-clique algorithm for the largest set of registration pairs whose geometric relations are preserved before and after registration; calculate the transformation matrix between closest points in the two point clouds with the ICP (Iterative Closest Point) algorithm, taking the corresponding points that satisfy the transformation matrix with minimum mean square error as inner points; if the inner-point distance is greater than a set threshold, judge the registration erroneous and re-register; and if the registration is still erroneous after repeating the process a set number of times, abandon the collection of that sample pair's data.
S5, repeating step S4 until the local displacement vectors of all revisit sample pairs in the data set are collected; running the recorded data in ROS bag format online or offline with a SLAM algorithm and collecting the estimated trajectory vector; combining each local displacement vector with the trajectory vector to obtain a number of offline registration pairs; randomly selecting registration pairs from all the offline registration pairs; and calculating a statistical error, which is the error of the trajectory.
Compared with the prior art, the technical scheme of the invention has the beneficial effects that:
the invention provides a robot self-positioning accuracy evaluation method based on a laser radar.
Drawings
Fig. 1 is a schematic diagram of a method for evaluating the self-positioning accuracy of a robot based on a laser radar according to the present invention.
Fig. 2 is a schematic flow chart of a robot self-positioning accuracy evaluation method based on a laser radar according to the present invention.
Fig. 3 is a schematic diagram of a system for evaluating the self-positioning accuracy of a robot based on a lidar.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a more particular description of the invention, taken in conjunction with the accompanying drawings and detailed description, is set forth below. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those specifically described herein, and therefore the scope of the present invention is not limited by the specific embodiments disclosed below.
Example 1
As shown in fig. 1, the invention provides a robot self-positioning accuracy evaluation method based on laser radar, which comprises the following steps:
S1, collecting recorded data in ROS bag format in an unknown environment using a mobile robot carrying a laser radar.
S2, exporting the recorded data in ROS bag format, performing feature processing, and eliminating unstable points caused by motion distortion to obtain the feature-processed point cloud information.
S3, converting the feature-processed point cloud information into a global descriptor, and finding, through a search algorithm, the set of sample pairs that revisit the same position.
S4, collecting the local displacement vectors of the sample pair data by traversing the sample pair set: obtain candidate point cloud registration correspondences for each sample pair's point clouds with a point cloud feature method; search with a maximum-clique algorithm for the largest set of registration pairs whose geometric relations are preserved before and after registration; calculate the transformation matrix between closest points in the two point clouds with the ICP (Iterative Closest Point) algorithm, taking the corresponding points that satisfy the transformation matrix with minimum mean square error as inner points; if the inner-point distance is greater than a set threshold, judge the registration erroneous and re-register; and if the registration is still erroneous after repeating the process a set number of times, abandon the collection of that sample pair's data.
S5, repeating step S4 until the local displacement vectors of all revisit sample pairs in the data set are collected; running the recorded data in ROS bag format online or offline with a SLAM algorithm and collecting the estimated trajectory vector; combining each local displacement vector with the trajectory vector to obtain a number of offline registration pairs; randomly selecting registration pairs from all the offline registration pairs; and calculating a statistical error, which is the error of the trajectory.
Further, in step S1, the mobile robot is equipped with 2D and 3D laser radar and a visual navigation sensor. When collecting data in an unknown environment, the mobile robot moves by random walking or fixed-point navigation, collects the data, and records it in ROS bag format. Methods for finding repeatedly visited locations include: global descriptor methods, the Scan Context algorithm, the Intensity Scan Context algorithm, and deep learning algorithms.
The sensors commonly used for acquiring point clouds include laser radar and RGB-D cameras; 2D and 3D laser radars are adopted because laser radar adapts to a wide range of environments with high precision, which underwrites the acquisition of high-precision local displacement measurements.
In a specific embodiment, the sensor suite carried by the mobile platform or mobile robot requires at least one lidar sensor, and the sensors are installed according to the algorithm to be evaluated.
Furthermore, when collecting in an unknown environment, the mobile robot needs to visit the same position at least twice, so that the independence of the sample data is satisfied.
In a specific embodiment, the method for finding repeatedly visited locations is not limited to global descriptor methods or the Scan Context algorithm; the Intensity Scan Context algorithm may be used, as may other loop-closure detection methods such as deep learning algorithms.
Further, the collected data comprises laser data and image data, and the mobile robot executes straight-line and steering motions in the environment while performing automatic obstacle avoidance and random exploration.
Further, performing feature processing on the recorded data in ROS bag format in step S2 specifically comprises: extracting the recorded data from the ROS bag; for each frame of point cloud data, either computing the feature matrix of each point's neighborhood, extracting the eigenvalues of the feature matrix, and keeping the point sets whose features are locally maximal over several neighborhoods, or directly down-sampling the recorded data; dividing each frame evenly into several parts according to the laser scan beams; retaining points whose smoothness index is greater than the set threshold as corner points and points whose smoothness index is smaller than the set threshold as surface points; and eliminating unstable points caused by motion distortion to obtain the feature-processed point cloud information.
The method for processing the point cloud features is used for obtaining stable and reliable feature points.
Further, step S3 specifically comprises: describing each frame of point cloud data with the Scan Context algorithm; obtaining, through search, repeatedly visited regions whose descriptor distance is smaller than a certain threshold; and combining five or more repeatedly visited regions according to the first-visit timestamp to obtain the sample pair set.
Further, collecting the local displacement vector of a sample pair's data in step S4 specifically comprises:
S41, using a point cloud feature extraction method as the feature extraction model, and using each frame of feature-processed point cloud data as the input of the feature extraction model for primary feature extraction;
S42, performing nearest-neighbor search on the feature-processed point cloud information to find coarse registration point pairs; further processing the coarse registration point pairs with a maximum-clique search method to extract the largest set of registration pairs with consistent geometric relations, and generating an initial registration matrix;
S43, performing further registration with a point cloud registration algorithm, optimizing from the initial registration matrix to obtain the accurate registration relation between the two point clouds, and calculating the average distance of the inner point pairs; if the average distance is larger than the set threshold, matching again or choosing to abandon the collection; if the registration is still erroneous after repeating the process three times, abandoning the collection of that sample pair's data, which improves the robustness and accuracy of the local translation vector.
Further, the point cloud feature extraction methods are FPFH (Fast Point Feature Histograms) and the 3DSmoothNet method, and the point cloud registration algorithm is an algorithm of the ICP (Iterative Closest Point) family.
Further, for each sample pair, feature extraction is performed on the target point cloud and the source point cloud with the FPFH algorithm, and coarse registration pairs are obtained through nearest-neighbor search; using the geometric property that the distance between two points in one frame of point cloud keeps the same length in the other frame, the maximum registration set and a preliminary transformation matrix are obtained through a maximum-clique algorithm; finally, the maximum registration set is further registered with the ICP (Iterative Closest Point) algorithm to obtain the accurate transformation matrix.
Under this transformation matrix, if the average distance between inner point pairs is larger than the set threshold, re-registration is performed; if the process has been repeated three times without success, the sample pair is abandoned.
Further, the process of acquiring the estimated trajectory vector in step S5 specifically includes:
S51, running the recorded data in ROS bag format with a SLAM algorithm, and saving the algorithm's estimated trajectory;
S52, defining a certain local translation vector as V_x, finding the two poses of the SLAM-estimated trajectory at the corresponding timestamps, and defining the translation vector between them as V_y, forming an offline registration pair (V_x, V_y);
S53, repeating steps S51 and S52 until all offline registration pairs are collected; several pairs are randomly selected from the set of all offline registration pairs, and the parameter σ of the sample set is estimated with the GLO algorithm and used to calculate the final error σ'.
The method can be used to evaluate different types of SLAM algorithms, can help test and select the optimal parameters for a given SLAM algorithm, and effectively reduces the number of parameters required for the computation and the consumption of computing resources.
Example 2
Based on embodiment 1 above, and with reference to fig. 2, this embodiment elaborates the evaluation process of the LOAM algorithm, one of the SLAM algorithms.
In a specific embodiment, the recorded ROS bag is run offline through the LOAM algorithm and the algorithm's estimated trajectory is saved. A certain local translation vector is defined as V_x; the two poses of the LOAM trajectory at the corresponding timestamps are found, and the translation vector between them is defined as V_y, forming an offline registration pair (V_x, V_y). The above process is repeated until all offline registration pairs are collected. From the set of all pairs, 50000 pairs are randomly selected for computation with the GLO algorithm, and the resulting average σ is calculated and used to compute the final error σ'.
Example 3
As shown in fig. 3, the present invention further provides a lidar-based robot self-positioning accuracy evaluation system, which includes a memory and a processor, wherein the memory includes a lidar-based robot self-positioning accuracy evaluation program, and when executed by the processor, the lidar-based robot self-positioning accuracy evaluation program implements the following steps:
S1, collecting recorded data in ROS bag format in an unknown environment using a mobile robot carrying a laser radar.
S2, exporting the recorded data in ROS bag format, performing feature processing, and eliminating unstable points caused by motion distortion to obtain the feature-processed point cloud information.
S3, converting the feature-processed point cloud information into a global descriptor, and finding, through a search algorithm, the set of sample pairs that revisit the same position.
S4, collecting the local displacement vectors of the sample pair data by traversing the sample pair set: obtain candidate point cloud registration correspondences for each sample pair's point clouds with a point cloud feature method; search with a maximum-clique algorithm for the largest set of registration pairs whose geometric relations are preserved before and after registration; calculate the transformation matrix between closest points in the two point clouds with the ICP (Iterative Closest Point) algorithm, taking the corresponding points that satisfy the transformation matrix with minimum mean square error as inner points; if the inner-point distance is greater than a set threshold, judge the registration erroneous and re-register; and if the registration is still erroneous after repeating the process a set number of times, abandon the collection of that sample pair's data.
S5, repeating step S4 until the local displacement vectors of all revisit sample pairs in the data set are collected; running the recorded data in ROS bag format online or offline with a SLAM algorithm and collecting the estimated trajectory vector; combining each local displacement vector with the trajectory vector to obtain a number of offline registration pairs; randomly selecting registration pairs from all the offline registration pairs; and calculating a statistical error, which is the error of the trajectory.
Further, in step S1, the mobile robot is equipped with 2D and 3D lidars and a vision sensor for navigation. When collecting data in an unknown environment, the mobile robot moves by random walking or fixed-point navigation, collects the data, and records it in ROS bag format.
The sensors commonly used to acquire point clouds are lidar and RGB-D cameras; 2D and 3D lidars are adopted here because lidar adapts to a wide range of environments with high accuracy, which guarantees high-precision local displacement measurements.
Furthermore, when collecting data in an unknown environment, the mobile robot needs to visit the same position at least twice, so that the independence of the sample data is satisfied.
Further, the collected data comprise laser data and image data; the mobile robot performs straight-line and turning motions in the environment, with automatic obstacle avoidance and random exploration.
Further, the feature processing of the recorded data in ROS bag format in step S2 is specifically: extracting the recorded data from the ROS bag; for each frame of point cloud data, either computing the feature matrix of each point's neighborhood, extracting its eigenvalues, and extracting the set of points whose features are locally maximal over several neighborhoods, or directly down-sampling the recorded data, dividing each laser scan beam evenly into several segments, retaining points whose curvature index exceeds a set threshold as corner points and points whose planarity index falls below a set threshold as surface points; unstable points caused by motion distortion are eliminated, yielding the feature-processed point cloud information.
The method for processing the point cloud features is used for obtaining stable and reliable feature points.
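The corner/surface split described above can be sketched as a LOAM-style smoothness test on an ordered scan. This is an illustrative sketch only: the window size and the corner/surface thresholds below are assumed values for demonstration, not parameters taken from the patent.

```python
import numpy as np

def curvature(scan, k=5):
    """scan: (N, 3) points ordered along the laser sweep.
    LOAM-style smoothness: for each point, the norm of the summed
    differences to its k neighbours on each side, normalised by range.
    Large values indicate corner-like points, small values flat surfaces."""
    n = len(scan)
    c = np.full(n, np.nan)          # edges of the sweep are left unlabelled
    for i in range(k, n - k):
        diff = (scan[i - k:i + k + 1] - scan[i]).sum(axis=0)
        c[i] = np.linalg.norm(diff) / (2 * k * np.linalg.norm(scan[i]) + 1e-12)
    return c

def classify(scan, k=5, corner_thr=0.05, surf_thr=0.01):
    """Keep high-curvature points as corner candidates and low-curvature
    points as surface candidates (thresholds are assumed, for illustration)."""
    c = curvature(scan, k)
    corners = np.where(c > corner_thr)[0]
    surfaces = np.where(c < surf_thr)[0]
    return corners, surfaces
```

On a synthetic L-shaped scan (two perpendicular walls), the bend shows high curvature while mid-wall points show near-zero curvature, which is the behaviour the corner/surface split relies on.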
Further, step S3 is specifically: describing each frame of point cloud data with the Scan Context algorithm; finding, through search, revisited regions whose descriptor distance is below a set threshold; and combining five or more revisited regions according to their first-visit timestamps to obtain the sample pair set.
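A minimal sketch of a Scan-Context-like descriptor and its distance follows. The ring/sector counts, the max-height cell encoding, and the column-wise cosine distance minimised over sector shifts are assumptions modelled on the published Scan Context idea, not necessarily the exact descriptor used here.

```python
import numpy as np

def scan_context(points, n_rings=20, n_sectors=60, r_max=80.0):
    """Scan-Context-style descriptor: a polar (ring x sector) grid storing
    the maximum point height per cell."""
    pts = np.asarray(points, float)
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    r = np.hypot(x, y)
    theta = np.mod(np.arctan2(y, x), 2 * np.pi)
    desc = np.zeros((n_rings, n_sectors))
    keep = r < r_max
    ri = np.minimum((r[keep] / r_max * n_rings).astype(int), n_rings - 1)
    si = np.minimum((theta[keep] / (2 * np.pi) * n_sectors).astype(int), n_sectors - 1)
    for a, b, h in zip(ri, si, z[keep]):
        desc[a, b] = max(desc[a, b], h)
    return desc

def sc_distance(d1, d2):
    """Column-wise cosine distance, minimised over sector shifts so the
    descriptor is (approximately) invariant to the robot's heading."""
    best = np.inf
    for s in range(d1.shape[1]):
        d2s = np.roll(d2, s, axis=1)
        num = (d1 * d2s).sum(axis=0)
        den = np.linalg.norm(d1, axis=0) * np.linalg.norm(d2s, axis=0) + 1e-12
        best = min(best, 1.0 - float(np.mean(num / den)))
    return best
```

Because the distance is minimised over all sector shifts, a pure rotation of the robot at the same place yields a near-zero distance, which is what makes the descriptor usable for revisit detection.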
Further, in step S4, the process of acquiring the local displacement vector of a sample pair specifically comprises:
S41, using a point cloud feature extraction method as the feature extraction model, and feeding each frame of feature-processed point cloud data into the model as input for primary feature extraction;
S42, performing nearest-neighbor search on the feature-processed point cloud information to find coarse registration point pairs; further processing the coarse registration point pairs with a maximum clique search method to extract the maximal set of registration pairs with consistent geometric relations, and generating an initial registration matrix;
S43, performing fine registration with a point cloud registration algorithm, optimizing from the initial registration matrix to obtain the accurate registration relation between the two point clouds; calculating the mean distance of the inlier pairs; if it exceeds a set threshold, re-matching or choosing to abandon the acquisition; if the registration still fails after repeating the process three times, abandoning the acquisition of that sample pair, thereby improving the robustness and accuracy of the local translation vector.
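Steps S41 to S43 culminate in ICP refinement with an inlier-distance acceptance test. A compact sketch of point-to-point ICP (SVD-based) with the accept/retry/abandon logic is shown below; the brute-force nearest-neighbour search and the threshold value are simplifications for illustration, and a real system would perturb the initialization between retries.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) mapping P onto Q via SVD."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # fix an improper (reflective) solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def icp(source, target, R0=None, t0=None, iters=30):
    """Point-to-point ICP refined from an initial registration matrix.
    Returns (R, t, mean inlier distance)."""
    R = np.eye(3) if R0 is None else np.asarray(R0, float)
    t = np.zeros(3) if t0 is None else np.asarray(t0, float)
    for _ in range(iters):
        moved = source @ R.T + t
        # nearest-neighbour correspondences (brute force, for clarity)
        idx = ((moved[:, None] - target[None]) ** 2).sum(-1).argmin(1)
        R, t = best_rigid_transform(source, target[idx])
    moved = source @ R.T + t
    idx = ((moved[:, None] - target[None]) ** 2).sum(-1).argmin(1)
    d = np.linalg.norm(moved - target[idx], axis=1)
    return R, t, float(d.mean())

def register_with_retry(source, target, thr=0.05, max_tries=3):
    """Accept the registration only if the mean inlier distance is below
    thr; otherwise retry, and give up after max_tries attempts."""
    for _ in range(max_tries):
        R, t, err = icp(source, target)
        if err <= thr:
            return R, t, err
    return None                    # sample pair is discarded
```

On a synthetic cloud under a small known rigid motion, the sketch recovers the exact transform within one or two iterations.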
Further, the point cloud feature extraction method is the FPFH (Fast Point Feature Histograms) or 3DSmoothNet method, and the point cloud registration algorithm is an ICP (Iterative Closest Point) family algorithm.
Further, for each sample pair, feature extraction is performed on the target point cloud and the source point cloud by the FPFH algorithm, and coarse registration pairs are obtained through nearest-neighbor search; using the geometric property that the distance between two points in one frame of point cloud must be preserved between their counterparts in the other frame (rigid motions keep lengths unchanged), a maximal registration pair set and a preliminary transformation matrix are obtained through a maximum clique algorithm; finally, the maximal registration pair set is further refined by the ICP algorithm to obtain an accurate transformation matrix.
Under this transformation matrix, if the mean distance between inlier pairs exceeds a set threshold, re-registration is performed; after repeating this process three times, the sample pair is abandoned.
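The length-preservation property used above can be sketched as a consistency graph over candidate correspondences, from which a large mutually consistent set is extracted. The greedy routine below is a heuristic stand-in for an exact maximum-clique search, and the eps tolerance is an assumed value for illustration.

```python
import numpy as np

def consistency_graph(src_pts, dst_pts, eps=0.05):
    """Correspondence i pairs src_pts[i] with dst_pts[i].  An edge (i, j)
    exists when the inter-point distance is preserved by the pair of
    correspondences: | ||p_i - p_j|| - ||q_i - q_j|| | < eps, since a
    rigid motion keeps all pairwise lengths unchanged."""
    n = len(src_pts)
    A = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            dp = np.linalg.norm(src_pts[i] - src_pts[j])
            dq = np.linalg.norm(dst_pts[i] - dst_pts[j])
            if abs(dp - dq) < eps:
                A[i, j] = A[j, i] = True
    return A

def greedy_max_clique(A):
    """Greedy heuristic: seed with the highest-degree vertex and keep
    adding vertices adjacent to every current member.  Not an exact
    maximum-clique search, but sufficient to illustrate the idea."""
    order = np.argsort(-A.sum(1))
    clique = [int(order[0])]
    for v in order[1:]:
        if all(A[v, u] for u in clique):
            clique.append(int(v))
    return sorted(clique)
```

With correct correspondences plus a few gross outliers, the outliers violate the length test against every inlier and are excluded from the extracted set.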
Further, the process of acquiring the estimated trajectory vector in step S5 specifically includes:
s51, running an SLAM algorithm on the recorded data in ROS bag format, and storing the algorithm's estimated trajectory;
S52, defining a given local translation vector as V_x, finding the two poses of the SLAM-estimated trajectory corresponding to its timestamps, and defining the translation vector between them as V_y, forming an offline registration pair (V_x, V_y);
S53, repeating steps S51 and S52 until all offline registration pairs have been collected; randomly selecting several pairs from the set of all offline registration pairs, and estimating the parameter σ of the sample set with the GLO algorithm, for calculating the final error σ'.
The drawings depicting the positional relationship of the structures are for illustrative purposes only and are not to be construed as limiting the present patent.
It should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention and are not intended to limit its embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to enumerate all embodiments exhaustively. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the claims of the present invention.

Claims (10)

1. A robot self-positioning precision evaluation method based on laser radar is characterized by comprising the following steps:
s1, collecting recorded data in ROS (Robot Operating System) bag format in an unknown environment by using a mobile robot equipped with a laser radar;
S2, exporting the recorded data from the ROS bag, carrying out feature processing, and eliminating unstable points caused by motion distortion to obtain feature-processed point cloud information;
S3, converting the feature-processed point cloud information into a global descriptor, and finding, through a search algorithm, a set of sample pairs that revisit the same position;
S4, accessing the sample pair set to acquire the local displacement vector of each sample pair: obtaining candidate point cloud registration relations for each sample pair by using a point cloud feature method; searching with a maximum clique search algorithm to obtain the maximal set of registration pairs whose geometric relations are preserved before and after registration; calculating the transformation matrix between nearest point pairs in the point clouds with the ICP (Iterative Closest Point) algorithm, taking the corresponding points that satisfy the transformation matrix with minimum mean square error as inliers; if the inlier distance exceeds a set threshold, judging the result as a mis-registration and re-registering; and if the registration still fails after a set number of repetitions, abandoning the acquisition of that sample pair;
S5, repeating step S4 until the local displacement vectors of all sample pairs in the data set have been collected; running an SLAM algorithm online or offline on the recorded ROS bag data and collecting the estimated trajectory vector; combining each local displacement vector with the trajectory vector to obtain a plurality of offline registration pairs, randomly selecting registration pairs from all offline registration pairs, and calculating a statistical error, which is the error of the trajectory.
2. The lidar-based robot self-positioning accuracy evaluation method according to claim 1, wherein in step S1 the mobile robot is equipped with 2D and 3D lidars and a vision sensor for navigation; when collecting data in an unknown environment, the mobile robot moves by random walking or fixed-point navigation, collects the data, and records it in ROS bag format; methods for finding a revisited place include: global descriptor methods, the Scan Context algorithm, the Intensity Scan Context algorithm, and deep learning algorithms.
3. The method for evaluating the self-positioning accuracy of the robot based on the laser radar as claimed in claim 2, wherein, when collecting data in an unknown environment, the mobile robot needs to visit the same position at least twice, so that the independence of the sample data is satisfied; the collected data comprise laser data and image data, and the mobile robot performs straight-line and turning motions in the environment, with automatic obstacle avoidance and random exploration.
4. The method for evaluating the self-positioning accuracy of the robot based on the lidar according to claim 1, wherein the feature processing of the recorded data in ROS bag format in step S2 is specifically:
extracting the recorded data from the ROS bag; for each frame of point cloud data, either computing the feature matrix of each point's neighborhood, extracting its eigenvalues, and extracting the set of points whose features are locally maximal over several neighborhoods, or directly down-sampling the recorded data, dividing each laser scan beam evenly into several segments, retaining points whose curvature index exceeds a set threshold as corner points and points whose planarity index falls below a set threshold as surface points; unstable points caused by motion distortion are eliminated, yielding the feature-processed point cloud information.
5. The lidar-based robot self-positioning accuracy evaluation method according to claim 4, wherein step S3 specifically comprises: describing each frame of point cloud data with the Scan Context algorithm; finding, through search, revisited regions whose descriptor distance is below a set threshold; and combining five or more revisited regions according to their first-visit timestamps to obtain the sample pair set.
6. The lidar-based robot self-positioning accuracy evaluation method according to claim 1, wherein the process of acquiring the local displacement vector of a sample pair in step S4 specifically comprises:
S41, using a point cloud feature extraction method as the feature extraction model, and feeding each frame of feature-processed point cloud data into the model as input for primary feature extraction;
S42, performing nearest-neighbor search on the feature-processed point cloud information to find coarse registration point pairs; further processing the coarse registration point pairs with a maximum clique search method to extract the maximal set of registration pairs with consistent geometric relations, and generating an initial registration matrix;
S43, performing fine registration with a point cloud registration algorithm, optimizing from the initial registration matrix to obtain the accurate registration relation between the two point clouds; calculating the mean distance of the inlier pairs; if it exceeds a set threshold, re-matching or choosing to abandon the acquisition; if the registration still fails after repeating the process three times, abandoning the acquisition of that sample pair, thereby improving the robustness and accuracy of the local translation vector.
7. The lidar-based robot self-positioning accuracy evaluation method according to claim 6, wherein the point cloud feature extraction method is the FPFH (Fast Point Feature Histograms) or 3DSmoothNet method, and the point cloud registration algorithm is an ICP (Iterative Closest Point) family algorithm.
8. The method for evaluating the self-positioning accuracy of the laser radar-based robot according to claim 7, wherein, for each sample pair, feature extraction is performed on the target point cloud and the source point cloud by the FPFH algorithm, and coarse registration pairs are obtained through nearest-neighbor search; using the geometric property that the distance between two points in one frame of point cloud is preserved between their counterparts in the other frame, a maximal registration pair set and a preliminary transformation matrix are obtained through a maximum clique algorithm; finally, the maximal registration pair set is further refined by the ICP algorithm to obtain an accurate transformation matrix; under this transformation matrix, if the mean distance between inlier pairs exceeds a set threshold, re-registration is performed, and the sample pair is discarded after the process has been repeated three times.
9. The method for evaluating the self-positioning accuracy of the laser radar-based robot according to claim 1, wherein the process of acquiring the estimated trajectory vector in step S5 specifically comprises:
S51, running an SLAM algorithm on the recorded data in ROS bag format, and storing the algorithm's estimated trajectory;
S52, defining a given local translation vector as V_x, finding the two poses of the SLAM-estimated trajectory corresponding to its timestamps, and defining the translation vector between them as V_y, forming an offline registration pair (V_x, V_y);
S53, repeating steps S51 and S52 until all offline registration pairs have been collected; randomly selecting several pairs from the set of all offline registration pairs, and estimating the parameter σ of the sample set with the GLO algorithm, for calculating the final error σ'.
10. A robot self-positioning accuracy evaluation system based on laser radar comprises a memory and a processor, wherein the memory comprises a robot self-positioning accuracy evaluation program based on the laser radar, and the robot self-positioning accuracy evaluation program based on the laser radar realizes the following steps when being executed by the processor:
s1, collecting recorded data in ROS (Robot Operating System) bag format in an unknown environment by using a mobile robot equipped with a laser radar;
S2, exporting the recorded data from the ROS bag, carrying out feature processing, and eliminating unstable points caused by motion distortion to obtain feature-processed point cloud information;
S3, converting the feature-processed point cloud information into a global descriptor, and finding, through a search algorithm, a set of sample pairs that revisit the same position;
S4, accessing the sample pair set to acquire the local displacement vector of each sample pair: obtaining candidate point cloud registration relations for each sample pair by using a point cloud feature method; searching with a maximum clique search algorithm to obtain the maximal set of registration pairs whose geometric relations are preserved before and after registration; calculating the transformation matrix between nearest point pairs in the point clouds with the ICP (Iterative Closest Point) algorithm, taking the corresponding points that satisfy the transformation matrix with minimum mean square error as inliers; if the inlier distance exceeds a set threshold, judging the result as a mis-registration and re-registering; and if the registration still fails after a set number of repetitions, abandoning the acquisition of that sample pair;
S5, repeating step S4 until the local displacement vectors of all sample pairs in the data set have been collected; running an SLAM algorithm online or offline on the recorded ROS bag data and collecting the estimated trajectory vector; combining each local displacement vector with the trajectory vector to obtain a plurality of offline registration pairs, randomly selecting registration pairs from all offline registration pairs, and calculating a statistical error, which is the error of the trajectory.
CN202211058259.2A 2022-08-31 2022-08-31 Robot self-positioning precision evaluation method based on laser radar Pending CN115471530A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211058259.2A CN115471530A (en) 2022-08-31 2022-08-31 Robot self-positioning precision evaluation method based on laser radar

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211058259.2A CN115471530A (en) 2022-08-31 2022-08-31 Robot self-positioning precision evaluation method based on laser radar

Publications (1)

Publication Number Publication Date
CN115471530A true CN115471530A (en) 2022-12-13

Family

ID=84368459

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211058259.2A Pending CN115471530A (en) 2022-08-31 2022-08-31 Robot self-positioning precision evaluation method based on laser radar

Country Status (1)

Country Link
CN (1) CN115471530A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117095007A (en) * 2023-10-20 2023-11-21 深圳市兴东泰电子有限公司 PFA encapsulation flying lever clamping seat assembly monitoring system based on image processing
CN117095007B (en) * 2023-10-20 2024-01-30 深圳市兴东泰电子有限公司 PFA encapsulation flying lever clamping seat assembly monitoring system based on image processing

Similar Documents

Publication Publication Date Title
Nieto et al. Recursive scan-matching SLAM
Jeong et al. Visual SLAM with line and corner features
Montesano et al. Probabilistic scan matching for motion estimation in unstructured environments
CN112014857A (en) Three-dimensional laser radar positioning and navigation method for intelligent inspection and inspection robot
Liang et al. Visual laser-SLAM in large-scale indoor environments
Vysotska et al. Improving SLAM by exploiting building information from publicly available maps and localization priors
CN108332752B (en) Indoor robot positioning method and device
Skrzypczynski Simultaneous localization and mapping: A feature-based probabilistic approach
Iocchi et al. Hough localization for mobile robots in polygonal environments
CN115471530A (en) Robot self-positioning precision evaluation method based on laser radar
KR102490521B1 (en) Automatic calibration through vector matching of the LiDAR coordinate system and the camera coordinate system
Donoso-Aguirre et al. Mobile robot localization using the Hausdorff distance
CN115471531A (en) Robot self-positioning precision evaluation method and system based on visual sensor
EP1307705A1 (en) Height measurement apparatus
CN115962773A (en) Method, device and equipment for synchronous positioning and map construction of mobile robot
Chen et al. 3d map building based on stereo vision
Lin et al. Vision-based mobile robot localization and mapping using the PLOT features
Llofriu et al. Mapping under changing trajectory estimates
Li et al. 2d lidar and camera fusion using motion cues for indoor layout estimation
Park et al. Localization of an unmanned ground vehicle based on hybrid 3D registration of 360 degree range data and DSM
Visser et al. Robust weighted scan matching with quadtrees
CN117433511B (en) Multi-sensor fusion positioning method
Bruns Lidar-based vehicle localization in an autonomous valet parking scenario
Hou et al. Visual Positioning System of Intelligent Robot Based on Improved Kalman Filter
Carlson et al. Robust real-time local laser scanner registration with uncertainty estimation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination