CN118310531A - Cross-scene positioning method and system for robot with coarse-to-fine point cloud registration - Google Patents

Info

Publication number
CN118310531A
Authority
CN
China
Prior art keywords
point cloud
registration
data
robot
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410463909.4A
Other languages
Chinese (zh)
Inventor
张伟
谢寅铎
张涛
赵越男
宋然
丛润民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN202410463909.4A priority Critical patent/CN118310531A/en
Publication of CN118310531A publication Critical patent/CN118310531A/en
Pending legal-status Critical Current

Landscapes

  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention provides a cross-scene robot positioning method and system based on coarse-to-fine point cloud registration. The method comprises the following steps: acquiring data of two modalities measured by a laser radar (LiDAR) and an inertial measurement unit (IMU), wherein the laser radar data are calibrated and fused to generate unified point cloud data; coarse point cloud registration: fusing the point cloud data and the inertial measurement data by means of a multi-modal SLAM technique, and establishing a local transformation from the initial pose to the robot body so as to obtain a coarse pose estimate; fine registration: deriving a coarse guess matrix from the coarse pose estimate, taking the fused point cloud data as the on-board point cloud and the map data as the reference point cloud, and performing normal distributions transform (NDT) registration based on the coarse guess matrix, the on-board point cloud and the reference point cloud, the accurate position of the robot being obtained after the registration.

Description

Cross-scene positioning method and system for robot with coarse-to-fine point cloud registration
Technical Field
The invention belongs to the technical field of robot positioning, and particularly relates to a cross-scene positioning method and system for a robot with coarse-to-fine point cloud registration.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
Mobile robotics is one of the most important research branches in the robotics field. Such robots need to position themselves accurately in a variety of different scenarios and must retain this positioning capability across scenes. Accurate positioning is critical for monitoring the position and running state of the robot in real time, and it also allows navigation instructions to be matched so that the robot can complete its tasks efficiently. In the robotics field, methods based on coarse-to-fine point cloud registration enable robots to achieve accurate positioning in different scenes.
Camera-based positioning is a common method that determines the position and pose of a robot by acquiring and processing environmental image information. It comprises steps such as extracting image features, matching the features, estimating the position and pose, and continuously updating the position. The method is low-cost and easy to deploy, and is suitable for indoor environments and high-precision positioning requirements, for example visual SLAM (Simultaneous Localization and Mapping) systems. However, cameras are susceptible to factors such as brightness and weather, and their applicability across different scenes is therefore limited.
Currently, in outdoor environments, the dominant positioning approach combines real-time kinematic global positioning (RTK-GPS) with IMU data. However, these methods cannot be used indoors because GPS signals are strongly attenuated by building occlusion. Methods based on point cloud registration, in contrast, are applicable to different scenes, both indoor and outdoor. The Iterative Closest Point (ICP) algorithm is widely known; it establishes correspondences between point clouds so as to register the perception data against a geometric model. Kubelka et al. introduced an ICP-based front end that accelerates the convergence of global map optimization by providing pose estimates aligned with the gravity vector. The method aligns the current point cloud acquired from the on-board LiDAR with the reference point cloud and estimates the pose of the robot in the map coordinate system. Although such methods can achieve accurate point cloud registration, their iteration speed is low, which makes real-time use across different scenarios difficult. With the development of deep learning, point cloud registration methods based on deep learning have also attracted considerable attention. For example, Yew et al. propose an end-to-end framework for predicting point probabilities and corresponding locations in the point cloud overlap region. Other studies propose graph-based frameworks for estimating correct correspondences, which play a key role in many robotic tasks. However, deep-learning-based point cloud registration requires substantial computational resources, so its speed may be limited by the computing platform.
Mobile robots need to work in different scenarios, yet these scenarios often differ greatly in spatial structure, which places higher requirements on the generalization ability of the positioning method. Current robot positioning methods mainly focus on a specific scene; when the robot moves from one scene to another, the positioning is easily lost, i.e., continuous positioning is difficult.
Point cloud registration refers to establishing a relationship between two or more point cloud data sets so that they are aligned in the same coordinate system. It is typically used to fuse point cloud data from different sensors or different times, enabling applications such as environmental modeling, localization and navigation, and object recognition. However, point cloud data are often affected by noise and occlusion, which can make the registration result unstable and less accurate. Traditional point cloud registration is also characterized by a slow iteration speed, which makes it difficult to meet scenarios with strict real-time positioning requirements. In the robotics field in particular, robots often have to move and operate in complex environments while executing tasks. These environments may include dynamic obstacles, irregular terrain and other unpredictable factors. The quality and stability of the point cloud data may be degraded by jolting and vibration caused by the motion of the robot itself. Under such conditions, the registration result is prone to misalignment or loss, which leads to inaccurate positioning.
Therefore, because the differences between task scenes are usually large and sensors such as cameras have inherent limitations, the positioning of current mobile robot systems lacks generality and cannot be achieved accurately across multiple scenes.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention provides a cross-scene positioning method for a robot based on coarse-to-fine point cloud registration, which can obtain accurate positioning of a mobile robot system.
To achieve the above object, one or more embodiments of the present invention provide the following technical solutions:
In a first aspect, a method for cross-scene positioning of a robot by coarse-to-fine point cloud registration is disclosed, comprising:
Acquiring data of two modalities measured by a laser radar and an inertial measurement unit, wherein the laser radar data are calibrated and fused to generate unified point cloud data;
coarse point cloud registration: fusing the point cloud data and the inertial measurement data by means of a multi-modal SLAM technique, and establishing a local transformation from the initial position to the robot body so as to obtain a coarse pose estimate;
fine registration: deriving a coarse guess matrix from the coarse pose estimate, taking the fused point cloud data as the on-board point cloud and the map data as the reference point cloud, and performing normal distributions transform registration based on the coarse guess matrix, the on-board point cloud and the reference point cloud, the accurate position of the robot being obtained after the registration.
As a further technical scheme, there are a plurality of laser radars; the data measured by the plurality of laser radars are first converted into a common coordinate system, the data collected by each laser radar are then preprocessed, features are extracted from the laser radar data and matched, and finally a registration algorithm is used to estimate the transformation relationships among the plurality of laser radars from the feature matches, so that their point clouds are aligned into one integral point cloud.
As a further technical solution, the coarse point cloud registration specifically includes:
In step t, the fused point cloud obtained from the lidar is denoted as S_t = {p_1, p_2, …, p_n};
the reference point cloud is denoted as Map = {x_1, x_2, …, x_n};
let the coarse guess matrix at time t be denoted as T_t^guess.
The operation of the coarse point cloud registration is as follows:
formula (1) represents the conversion from the map coordinate system to the robot body coordinate system, converting the reference point cloud Map into the robot body coordinate system body:
T_map→body = REG(S_t, Map)      (1)
where T_map→body denotes the conversion matrix from the map coordinate system to the robot body coordinate system, and REG() denotes the point cloud registration algorithm;
after the first registration step is completed, formula (2) gives the transformation from the map to the robot starting point:
T_map→init = (T_init→body)^{-1} · T_map→body      (2)
wherein T_map→init denotes the transformation matrix from the map coordinate system to the initial pose and (T_init→body)^{-1} denotes the inverse transformation matrix from the initial pose to the robot body coordinate system; this transformation relation is kept unchanged until the end of the task, so that the coarse guess matrix T_t^guess can be calculated at time t, as shown in formula (3):
T_t^guess = T_init→body(t) · T_map→init      (3)
As a further technical solution, the coarse guess matrix is used as a reference matrix in the fine registration process, providing a basis for the registration between the on-board point cloud and the reference point cloud; the on-board point cloud is aligned with the reference point cloud on the basis of the coarse guess matrix.
As a further technical scheme, each point is regarded as a Gaussian distribution, the whole point cloud is expressed as a combination of multiple Gaussian distributions, and the probability density function of the Gaussian distribution is expressed as formula (4):
p(x) = exp(−(x − μ)^T Σ^{−1} (x − μ) / 2)      (4)
where the exp() function denotes the natural exponential function, μ denotes the mean and Σ denotes the covariance matrix, calculated as shown in formula (5):
μ = (1/n) ∑_{i=1}^{n} x_i,   Σ = (1/n) ∑_{i=1}^{n} (x_i − μ)(x_i − μ)^T      (5)
When the point cloud S_t = {p_1, p_2, …, p_n} is received from the lidar, each point is transformed into the reference point cloud space using x'_i = T_body→map · p_i, where T_body→map is the inverse of T_map→body and represents the transformation from the robot body coordinate system to the map coordinate system; when the objective function score converges, the optimal registration estimate is obtained.
As a further embodiment, the objective function is represented by formula (6):
score = ∑_{i=1}^{n} exp(−(x'_i − μ_i)^T Σ_i^{−1} (x'_i − μ_i) / 2)      (6)
As a further technical solution, the specific position obtained by the optimal registration estimate is given by formula (7), which represents the conversion from the map coordinate system to the robot body coordinate system using the NDT algorithm, thereby obtaining the accurate position of the robot:
Loc_t = NDT(S_t, Map, T_t^guess)      (7)
where Loc_t denotes the position coordinates at time t and T_t^guess is the coarse estimation matrix input to the NDT algorithm at time t; the NDT() registration corresponds to the REG() method described above.
In a second aspect, a robotic cross-scene positioning system for coarse-to-fine point cloud registration is disclosed, comprising:
A data acquisition module configured to: acquire data of two modalities measured by a laser radar and an inertial measurement unit, wherein the laser radar data are calibrated and fused to generate unified point cloud data;
a coarse point cloud registration module configured to: fuse the point cloud data and the inertial measurement data by means of a multi-modal SLAM technique, and establish a local transformation from the initial position to the robot body so as to obtain a coarse pose estimate;
a fine registration module configured to: derive a coarse guess matrix from the coarse pose estimate, take the fused point cloud data as the on-board point cloud and the map data as the reference point cloud, and perform normal distributions transform registration based on the coarse guess matrix, the on-board point cloud and the reference point cloud, the accurate position of the robot being obtained after the registration.
The one or more of the above technical solutions have the following beneficial effects:
The technical scheme of this embodiment provides a cross-scene positioning method for a robot based on coarse-to-fine point cloud registration. The coarse registration establishes a local transformation from the initial position to the robot body and provides a coarse pose estimate. The fine registration uses this pose estimate as a guess matrix (Guess Matrix), aligns the on-board point cloud with the reference point cloud, and generates an accurate global pose. By fusing LiDAR and IMU data, this registration scheme avoids the shortcomings of sensors such as GPS. The method is suitable for indoor and outdoor cross-scene positioning, provides accurate positioning results, and is not prone to localization loss. At the same time, in order to give the robot omnidirectional perception, the method can support the simultaneous registration of multiple laser radars.
Additional aspects of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention.
FIG. 1 is a schematic diagram of a positioning method based on coarse-to-fine point cloud registration according to an embodiment of the present invention;
FIG. 2 shows the multi-scenario environment of an embodiment of the present invention: from left to right, a warehouse scene, a factory scene and an orchard scene;
FIG. 3 is a diagram showing a laser radar data fusion architecture according to an embodiment of the present invention;
fig. 4 is a graph of the positioning result of the coarse-to-fine point cloud registration corresponding to fig. 2 according to the embodiment of the present invention.
Detailed Description
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the invention. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the present invention.
Embodiments of the invention and features of the embodiments may be combined with each other without conflict.
The general idea of the invention is as follows:
the robot is positioned step by step, and accurate positioning is completed through the successive steps. This embodiment applies coarse-to-fine point cloud registration for localization.
First, the method fuses LiDAR (laser radar) and IMU data and establishes a local transformation from the initial position to the robot body, thereby providing a coarse pose estimate. Then, this pose estimate is used as a guess matrix (Guess Matrix), and the on-board point cloud is aligned with the reference point cloud to generate an accurate global pose. Through these steps, accurate positioning of the mobile robot system can be obtained.
Example 1
This embodiment discloses a cross-scene positioning method for a robot based on coarse-to-fine point cloud registration. The coarse registration establishes a local transformation from the initial position to the robot body, i.e., the position and direction of the robot relative to the starting point are converted into the position and direction relative to the robot body coordinate system, thereby providing a coarse pose estimate. The fine registration uses this pose estimate as a guess matrix (Guess Matrix), aligns the on-board point cloud with the reference point cloud, and generates an accurate global pose. By fusing LiDAR and IMU data, this registration scheme avoids the shortcomings of sensors such as GPS. The method is suitable for indoor and outdoor cross-scene positioning, provides accurate positioning results, and is not prone to localization loss. At the same time, in order to give the robot omnidirectional perception, the technical scheme of this embodiment can support the simultaneous registration of multiple laser radars.
As shown in fig. 1, in the technical scheme of this embodiment, the point cloud data of multiple lidars such as LiDAR 1 and LiDAR 2 are fused, and a multi-modal SLAM technique is used to fuse the LiDAR feature points and the IMU data, providing a guess matrix (Guess Matrix) for the fine registration process; the fused point cloud data are then taken as the on-board point cloud and finely registered against the reference point cloud provided by the map data stored in the robot system, yielding a fine pose estimate. In this way, accurate positioning of the robot in the map can be achieved.
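By way of illustration, a minimal sketch of one localization step in this pipeline is given below (Python/NumPy). The 4x4 homogeneous transforms, the argument names and the `fine_register` callback are assumptions made for this sketch and are not prescribed by the patent; `fine_register` stands for any fine registration routine, e.g. an NDT implementation.

```python
import numpy as np

def localize_step(fused_cloud, map_cloud, T_map_init, T_init_body, fine_register):
    """One coarse-to-fine localization step at time t (illustrative sketch).

    fused_cloud   : (N, 3) on-board point cloud fused from all lidars
    map_cloud     : (M, 3) reference point cloud taken from the stored map
    T_map_init    : 4x4 transform, map frame -> initial pose (from the first registration)
    T_init_body   : 4x4 transform, initial pose -> current body frame (from LiDAR-IMU SLAM)
    fine_register : callable(cloud, map_cloud, T_guess) -> refined 4x4 map->body transform
    """
    T_guess = T_init_body @ T_map_init                 # coarse guess, map -> body
    T_map_body = fine_register(fused_cloud, map_cloud, T_guess)
    T_body_map = np.linalg.inv(T_map_body)             # body -> map
    return T_body_map[:3, 3]                           # robot position in the map frame
```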
The plurality of lidars are specifically installed around the robot, for example, when two lidars are used, they are installed at the front and rear ends of the robot, respectively. When a plurality of lidars are used, the radar is uniformly installed around the robot to reduce blind areas. And acquiring point cloud data of each laser radar. The IMU is mounted inside the robot and provides data such as acceleration, angular velocity.
According to the technical scheme of the embodiment, the multi-source sensor data are processed and fused, and the capability of the robot for identifying and representing different scene structures is improved through the point cloud registration technology from thick to thin, so that the positioning capability of the mobile robot in multiple scenes is finally improved.
The different scene structures are characterized mainly by the fused point cloud data, and the position of the robot is determined by comparing the on-board point cloud data with the map data.
In one or more embodiments, regarding multi-scenario environment settings:
Three common working scenes are set for the orchard transportation mobile robot: warehouse scenes, factory scenes, and orchard scenes, as shown in fig. 2. These scenes have significant differences in both spatial structure and type of obstacle. In a warehouse scene, objects such as boxes, tables, mechanical devices and the like with different sizes exist; in a factory scene, the roads are more regular, but there are moving vehicles and pedestrians; in an orchard setting, common obstacles include weeds, fruit trees, and the like. In view of the need for a transport robot to continuously perform tasks in these three different scenarios, a cross-scenario continuous positioning method is needed that can achieve accurate and reliable positioning in various different work environments.
Sensor input:
In one or more embodiments, a coarse-to-fine point cloud registration method is adopted, and the sensor input comprises two data modalities, LiDAR and IMU. This embodiment uses multiple lidar sensors to reduce blind areas, and the lidar data are externally calibrated to generate a unified point cloud. The lidar data fusion shown in fig. 3 is constructed as follows: first, a coordinate system conversion transforms the data of all laser radars into the same coordinate system, so that the data of the multiple laser radars lie in a unified frame. The data collected by the lidars are then preprocessed, e.g., noise removal and filtering, to reduce interference in the subsequent registration. Next, the features extracted from the laser radar data are matched to find the correspondences between them. Finally, according to the matching relations between the features, a registration algorithm is used to estimate the transformation relationships among the multiple laser radars so as to align them into one integral point cloud.
More specifically, the features extracted from the laser radar data are mainly representations of the surrounding environment contained in the point cloud data, including the shape and distances of the point cloud; feature points in the lidar data are compared and the correspondences between them are found, matching being achieved by computing a similarity measure between the feature points (a distance measure between point clouds).
An initial transformation relationship between two laser radars is estimated from the matched feature pairs. Using this estimate as the initial value, an iterative optimization algorithm that minimizes the squared error refines the transformation so as to minimize the matching error. The optimized transformation is then applied to the point cloud data to be registered, aligning them into the same coordinate system.
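As an illustration of this fusion step, the following minimal sketch (Python/NumPy) transforms each lidar's cloud into the robot body frame with its calibrated extrinsic and concatenates the results; the extrinsic transforms themselves are assumed to come from the feature-matching and registration procedure described above, and the function names are illustrative only.

```python
import numpy as np

def transform_points(points, T):
    """Apply a 4x4 homogeneous transform to an (N, 3) point array."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homo @ T.T)[:, :3]

def fuse_lidars(clouds, extrinsics):
    """Fuse the point clouds of several lidars into one cloud in a common frame.

    clouds     : list of (N_i, 3) arrays, one per lidar
    extrinsics : list of 4x4 transforms, lidar_i frame -> robot body frame,
                 obtained from the external calibration / registration step above
    """
    aligned = [transform_points(c, T) for c, T in zip(clouds, extrinsics)]
    return np.vstack(aligned)
```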
In the above embodiment, global consistency can be achieved by the above manner: by registering point clouds obtained by scanning a plurality of laser radars, a globally consistent environment map can be obtained, and accurate positioning and navigation can be realized.
The applicability is wide: the point cloud registration method based on the laser radar and the IMU is suitable for various environments and scenes, including indoor, outdoor, complex terrains and the like, and has strong universality.
In one or more embodiments, the lidar sensor provides highly accurate distance and direction information; by measuring the time of flight and angle of the reflected laser beam, the exact position of a target object can be obtained. These data can be used to construct a high-resolution map of the environment, helping the robot or vehicle to localize accurately and plan its path. The IMU sensor provides information such as acceleration, angular velocity and attitude angle; these parameters can be used to measure and analyze the motion, direction and environmental conditions of an object, and are widely applied to navigation, attitude estimation, motion control and other fields. Fusing the fused lidar data (point cloud data) with the IMU data improves the accuracy and robustness of positioning: incorporating the attitude information of the IMU allows more accurate registration of the lidar scan data, and thus more reliable navigation and positioning.
In one or more embodiments, regarding the coarse registration process:
In the process of point cloud registration, providing a reasonable guess matrix can significantly improve the speed and accuracy of registration. Typically, this guess matrix is provided by the mobile robot's on-board odometer. However, the odometer of a robot usually relies on the rotation of the wheels to estimate the distance and direction of movement. On rough terrain or gravel in the orchard, the wheels of the robot may vibrate or slip, which increases the accumulated error of the odometer data. In view of this, using a virtual odometer is a solution.
According to the technical scheme of this embodiment, SLAM technology is introduced to realize the local transformation T_init→body from the initial pose to the robot body, thereby replacing the on-board odometer. SLAM techniques are commonly used to simultaneously estimate the position and pose of a robot and to construct an environment map. The robot can use sensor data (such as lidar, cameras, IMU, etc.) to infer its own position and the structure of the surrounding environment in real time, thereby achieving autonomous positioning. In this embodiment, the point cloud of the laser radar is fused with the IMU data using a multi-modal SLAM technique, which provides more accurate local odometry information than the original on-board odometer. This virtual odometer provides an initial coarse estimate for the subsequent point cloud registration.
The IMU data are input to a forward propagation module, which computes the forward-propagated state and the propagation covariance; the propagation covariance represents the error between the true state and the propagated state. The fused laser radar data and the output of the forward propagation module are input together to the backward propagation motion compensation module; the backward propagation generates relative poses that project the local measurements to the final measurement time of the scan. The motion-compensated scan is then input to an iterated Kalman filter.
The iterated Kalman filter comprises a residual calculation module, a state update module and a convergence judgment module. The residual is defined as the distance between the estimated global coordinates of a feature point and the nearest plane (or edge) in the map. The forward-propagated state is updated iteratively until the residual converges. After convergence, the optimal state estimate and its covariance are computed; the optimal state estimate is the local odometry information.
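For illustration, a minimal sketch of the IMU forward-propagation step is given below (Python/NumPy). It propagates only position, velocity and orientation with a single IMU sample under assumed conventions (body-frame specific force, gravity along -z); the covariance propagation and the iterated Kalman update described above are omitted.

```python
import numpy as np

def propagate_imu(p, v, R, acc, gyro, dt, g=np.array([0.0, 0.0, -9.81])):
    """Forward-propagate the body state with one IMU sample (illustrative only).

    p, v : position and velocity in the odometry (initial-pose) frame
    R    : 3x3 rotation, body frame -> odometry frame
    acc  : accelerometer reading (specific force) in the body frame
    gyro : gyroscope reading (rad/s) in the body frame
    """
    # Orientation update via the rotation-vector (Rodrigues) formula.
    w = gyro * dt
    angle = np.linalg.norm(w)
    if angle > 1e-9:
        axis = w / angle
        K = np.array([[0.0, -axis[2], axis[1]],
                      [axis[2], 0.0, -axis[0]],
                      [-axis[1], axis[0], 0.0]])
        dR = np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
    else:
        dR = np.eye(3)
    R_new = R @ dR
    a_world = R @ acc + g                       # gravity-compensated acceleration
    p_new = p + v * dt + 0.5 * a_world * dt**2  # constant-acceleration integration
    v_new = v + a_world * dt
    return p_new, v_new, R_new
```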
In a specific implementation, at step t the fused point cloud obtained from the lidar is denoted as S_t = {p_1, p_2, …, p_n} and the reference point cloud is denoted as Map = {x_1, x_2, …, x_n}, where p_1 … p_n denote the individual points of the fused point cloud data and x_1 … x_n denote the points of the map point cloud data; the coarse guess matrix at time t is denoted as T_t^guess. The operation of the coarse point cloud registration is as follows.
Formula (1) below represents the conversion from the map coordinate system to the robot body coordinate system: the reference point cloud (Map) is converted into the robot body coordinate system (body), aligning the point cloud of the map coordinate system with the robot body coordinate system so that the position and posture of the robot in the map can be accurately located; it is used in the calculation of the following procedure:
T_map→body = REG(S_t, Map)      (1)
where T_map→body denotes the conversion matrix from the map coordinate system to the robot body coordinate system, and the REG() method denotes the point cloud registration algorithm.
After the first registration step is completed, formula (2) represents the transformation from the map to the robot starting point; through this formula, a position in the map coordinate system is converted to the initial pose, which is used for calculating the coarse guess matrix:
T_map→init = (T_init→body)^{-1} · T_map→body      (2)
where T_map→init denotes the transformation matrix from the map coordinate system to the initial pose and (T_init→body)^{-1} denotes the inverse transformation matrix from the initial pose to the robot body coordinate system. This transformation relation remains unchanged until the end of the task. The coarse guess matrix T_t^guess can therefore be calculated at time t, as shown in formula (3):
T_t^guess = T_init→body(t) · T_map→init      (3)
This coarse guess matrix will serve as a reference matrix in the fine registration process, which provides a basis for registration between the on-board point cloud and the reference point cloud. By utilizing the matrix, the on-board point cloud and the reference point cloud can be effectively aligned, so that more accurate attitude estimation is realized.
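The composition expressed by formulas (1)-(3) can be sketched as follows (Python/NumPy). Here `reg` stands for any registration routine, and `T_init_body_0` / `T_init_body_t` denote the SLAM outputs at the first step and at time t; these names are illustrative assumptions, not terms from the patent.

```python
import numpy as np

def map_to_init(first_cloud, map_cloud, T_init_body_0, reg):
    """First registration (formula (1)) and fixed map->init transform (formula (2))."""
    T_map_body_0 = reg(first_cloud, map_cloud)            # formula (1): REG(S_t, Map)
    return np.linalg.inv(T_init_body_0) @ T_map_body_0    # formula (2)

def coarse_guess(T_map_init, T_init_body_t):
    """Formula (3): coarse guess of the map->body transform at time t."""
    return T_init_body_t @ T_map_init
```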
In one or more embodiments, with respect to the fine registration process:
The technique of this example uses the Normal Distributions Transform (NDT) algorithm to obtain a fine pose estimate at time t. NDT is an efficient point cloud registration method: each point is regarded as belonging to a Gaussian distribution, and the whole point cloud can be represented as a combination of multiple Gaussian distributions. The probability density function of the Gaussian distribution can be expressed as shown in formula (4):
p(x) = exp(−(x − μ)^T Σ^{−1} (x − μ) / 2)      (4)
where the exp() function denotes the natural exponential function, x is a point in the point cloud data, i.e., a position in three-dimensional space, μ denotes the mean and Σ denotes the covariance matrix, whose calculation is given by formula (5):
μ = (1/n) ∑_{i=1}^{n} x_i,   Σ = (1/n) ∑_{i=1}^{n} (x_i − μ)(x_i − μ)^T      (5)
where x_i denotes the i-th point in the point cloud data and n denotes the total number of points in the point cloud data.
When the point cloud S_t = {p_1, p_2, …, p_n} is received from the lidar, each point is transformed into the reference point cloud space using x'_i = T_body→map · p_i, where T_body→map is the inverse matrix of T_map→body and represents the transformation from the robot body coordinate system to the map coordinate system. Once the objective function score converges, the optimal registration estimate can be obtained. Let μ_i be the reference point, i.e., the corresponding point (cell mean) in the map data; the objective function is expressed as shown in formula (6):
score = ∑_{i=1}^{n} exp(−(x'_i − μ_i)^T Σ_i^{−1} (x'_i − μ_i) / 2)      (6)
The specific position given by the optimal registration estimate can be obtained from formula (7), which represents the conversion from the map coordinate system to the robot body coordinate system using the NDT algorithm, thereby yielding the precise position of the robot:
Loc_t = NDT(S_t, Map, T_t^guess)      (7)
where Loc_t denotes the position coordinates at time t and T_t^guess is the coarse estimation matrix that is input to the NDT algorithm at time t; the NDT() registration corresponds to the REG() method described above.
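A minimal sketch of the NDT scoring described by formulas (4)-(6) is shown below (Python/NumPy). The voxel size, the minimum number of points per cell and the covariance regularization term are illustrative assumptions; the search over candidate transforms (e.g. Newton iterations starting from the coarse guess matrix) is only indicated in a comment.

```python
import numpy as np

def build_ndt_cells(map_cloud, voxel_size=1.0):
    """Group map points into voxels and fit a Gaussian (mu, Sigma) per cell, formulas (4)-(5)."""
    cells = {}
    keys = np.floor(map_cloud / voxel_size).astype(int)
    for key, pt in zip(map(tuple, keys), map_cloud):
        cells.setdefault(key, []).append(pt)
    gaussians = {}
    for key, pts in cells.items():
        pts = np.asarray(pts)
        if len(pts) < 5:                     # skip poorly populated cells
            continue
        mu = pts.mean(axis=0)
        diff = pts - mu
        sigma = diff.T @ diff / len(pts) + 1e-6 * np.eye(3)   # regularized covariance
        gaussians[key] = (mu, np.linalg.inv(sigma))
    return gaussians

def ndt_score(cloud, gaussians, T_map_body, voxel_size=1.0):
    """Evaluate the NDT objective, formula (6), for an on-board cloud under a candidate transform."""
    # Points are mapped into the reference (map) frame: x'_i = T_body->map * p_i.
    T_body_map = np.linalg.inv(T_map_body)
    homo = np.hstack([cloud, np.ones((len(cloud), 1))])
    pts_map = (homo @ T_body_map.T)[:, :3]
    score = 0.0
    for x in pts_map:
        key = tuple(np.floor(x / voxel_size).astype(int))
        if key in gaussians:
            mu, sigma_inv = gaussians[key]
            d = x - mu
            score += np.exp(-0.5 * d @ sigma_inv @ d)
    # A full NDT registration would maximize this score over T_map_body,
    # starting from the coarse guess matrix T_t^guess.
    return score
```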
Through the coarse-to-fine point cloud registration process described above, this patent builds the chain of transformations from the map coordinate system to the initial pose and then to the robot body coordinate system. The steps and flow of the coarse-to-fine point cloud registration are illustrated in fig. 1. The positioning result of the coarse-to-fine point cloud registration, corresponding to the scenes in fig. 2, is shown in fig. 4.
To verify the validity of the above solution, evaluation indices for robot positioning are defined:
the evaluation indices are designed to measure the accuracy of positioning in actual scenes and thereby reveal the superiority of the positioning scheme based on coarse-to-fine point cloud registration. In this embodiment, three key indices are selected to evaluate the performance of the cross-scene positioning method based on coarse-to-fine point cloud registration; these indices cover the key aspects of positioning accuracy, pose estimation and environmental adaptability, and provide an important basis for a comprehensive evaluation of the method.
Number of positioning losses: this index measures how often the robot loses its localization during the execution of a task. The fewer the positioning losses, the stronger the positioning capability of the robot in different environments, and the better its stability and reliability.
Pose error: the pose error indicates the degree of difference between the actual pose and the estimated pose of the robot. It is used to measure how accurately the mobile robot reaches a designated task point and to quantify the position deviation on arrival. A lower pose error indicates that the method can provide a more accurate pose estimate.
Consider a contextual task in which the robot navigates to a target point. Assume that the coordinates of the predefined task point are Loc_goal = (x_goal, y_goal) and that the actual arrival position of the mobile robot is Loc_real = (x_real, y_real), where x and y represent the two-dimensional position coordinates Loc(x, y) in the map frame. The pose error can then be calculated as shown in formula (8):
PE = sqrt((x_goal − x_real)^2 + (y_goal − y_real)^2)      (8)
which is the Euclidean distance between the task point and the arrival point, i.e., the distance between the two points. When the position deviation PE exceeds d_max = 120 cm, this patent considers the navigation positioning task to have failed.
Environmental suitability: the applicability and the robustness of the positioning system based on the method under different environments are evaluated. The method comprises the steps of positioning performance in various environments such as indoor, outdoor, complex terrain and the like, and can adapt to positioning requirements in various scenes.
In the technical scheme of this embodiment, environmental adaptability is measured by the variance of the pose error, calculated as shown in formula (9):
Var(PE) = (1/N) ∑_{i=1}^{N} (PE_i − PE_mean)^2      (9)
where PE_i denotes the pose error each time a task point is reached, N denotes the number of task points reached, and PE_mean denotes the average pose error for the task under test. When the variance of the pose error is large, the environmental adaptability of the positioning is considered weak.
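The two metrics in formulas (8) and (9) can be computed as in the following sketch (Python/NumPy); d_max = 1.2 m corresponds to the 120 cm failure threshold defined above, and the function names are illustrative only.

```python
import numpy as np

def pose_error(goal_xy, reached_xy):
    """Formula (8): Euclidean distance between the task point and the reached point (meters)."""
    return float(np.linalg.norm(np.asarray(goal_xy) - np.asarray(reached_xy)))

def task_failed(pe, d_max=1.2):
    """A navigation positioning task is counted as failed when the deviation exceeds d_max."""
    return pe > d_max

def pose_error_variance(pose_errors):
    """Formula (9): variance of the pose errors over the task points reached in a test."""
    pe = np.asarray(pose_errors, dtype=float)
    return float(np.mean((pe - pe.mean()) ** 2))
```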
Example two
It is an object of the present embodiment to provide a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, which processor implements the steps of the above method when executing the program.
Example III
An object of the present embodiment is to provide a computer-readable storage medium.
A computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of the above method.
Example IV
An object of this embodiment is to provide a cross-scene positioning system for a robot based on coarse-to-fine point cloud registration, comprising:
A data acquisition module configured to: acquire data of two modalities measured by a laser radar and an inertial measurement unit, wherein the laser radar data are calibrated and fused to generate unified point cloud data;
a coarse point cloud registration module configured to: fuse the point cloud data and the inertial measurement data by means of a multi-modal SLAM technique, and establish a local transformation from the initial position to the robot body so as to obtain a coarse pose estimate;
a fine registration module configured to: derive a coarse guess matrix from the coarse pose estimate, take the fused point cloud data as the on-board point cloud and the map data as the reference point cloud, and perform normal distributions transform registration based on the coarse guess matrix, the on-board point cloud and the reference point cloud, the accurate position of the robot being obtained after the registration.
The steps involved in the devices of the second, third and fourth embodiments correspond to those of the first embodiment of the method, and the detailed description of the embodiments can be found in the related description section of the first embodiment. The term "computer-readable storage medium" should be taken to include a single medium or multiple media including one or more sets of instructions; it should also be understood to include any medium capable of storing, encoding or carrying a set of instructions for execution by a processor and that cause the processor to perform any one of the methods of the present invention.
It will be appreciated by those skilled in the art that the modules or steps of the invention described above may be implemented by general-purpose computer means, alternatively they may be implemented by program code executable by computing means, whereby they may be stored in storage means for execution by computing means, or they may be made into individual integrated circuit modules separately, or a plurality of modules or steps in them may be made into a single integrated circuit module. The present invention is not limited to any specific combination of hardware and software.
While the foregoing description of the embodiments of the present invention has been presented in conjunction with the drawings, it should be understood that it is not intended to limit the scope of the invention, but rather, it is intended to cover all modifications or variations within the scope of the invention as defined by the claims of the present invention.

Claims (10)

1. A cross-scene positioning method for a robot based on coarse-to-fine point cloud registration, characterized by comprising:
acquiring data of two modalities measured by a laser radar and an inertial measurement unit, wherein the laser radar data are calibrated and fused to generate unified point cloud data;
coarse point cloud registration: fusing the point cloud data and the inertial measurement data by means of a multi-modal SLAM technique, and establishing a local transformation from the initial position to the robot body so as to obtain a coarse pose estimate;
fine registration: deriving a coarse guess matrix from the coarse pose estimate, taking the fused point cloud data as the on-board point cloud and the map data as the reference point cloud, and performing normal distributions transform registration based on the coarse guess matrix, the on-board point cloud and the reference point cloud, the accurate position of the robot being obtained after the registration.
2. The cross-scene positioning method for a robot based on coarse-to-fine point cloud registration according to claim 1, wherein there are a plurality of laser radars; the data measured by the plurality of laser radars are subjected to coordinate system conversion, the data collected by the laser radars are then preprocessed, the features extracted from the laser radar data are matched, and finally a registration algorithm is used to estimate the transformation relationships among the plurality of laser radars according to the matching relations between the features, so that they are aligned into one integral point cloud.
3. The cross-scene positioning method for a robot based on coarse-to-fine point cloud registration according to claim 1, wherein the coarse point cloud registration specifically comprises:
in step t, the fused point cloud obtained from the lidar is denoted as S_t = {p_1, p_2, …, p_n};
the reference point cloud is denoted as Map = {x_1, x_2, …, x_n};
let the coarse guess matrix at time t be denoted as T_t^guess;
the operation of the coarse point cloud registration is as follows:
formula (1) represents the conversion from the map coordinate system to the robot body coordinate system, converting the reference point cloud Map into the robot body coordinate system body:
T_map→body = REG(S_t, Map)      (1)
where T_map→body denotes the conversion matrix from the map coordinate system to the robot body coordinate system, and REG() denotes the point cloud registration algorithm;
after the first registration step is completed, formula (2) gives the transformation from the map to the robot starting point:
T_map→init = (T_init→body)^{-1} · T_map→body      (2)
wherein T_map→init denotes the transformation matrix from the map coordinate system to the initial pose and (T_init→body)^{-1} denotes the inverse transformation matrix from the initial pose to the robot body coordinate system; this transformation relation remains unchanged until the end of the task, so the coarse guess matrix T_t^guess can be calculated at time t, as shown in formula (3):
T_t^guess = T_init→body(t) · T_map→init      (3)
4. The cross-scene positioning method for a robot based on coarse-to-fine point cloud registration according to claim 1, wherein the coarse guess matrix is used as a reference matrix in the fine registration process, providing a basis for the registration between the on-board point cloud and the reference point cloud, and the on-board point cloud is aligned with the reference point cloud on the basis of the coarse guess matrix.
5. The cross-scene positioning method for a robot based on coarse-to-fine point cloud registration according to claim 1, wherein each point is regarded as a Gaussian distribution, the whole point cloud is expressed as a combination of multiple Gaussian distributions, and the probability density function of the Gaussian distribution is expressed as formula (4):
p(x) = exp(−(x − μ)^T Σ^{−1} (x − μ) / 2)      (4)
where the exp() function denotes the natural exponential function, μ denotes the mean and Σ denotes the covariance matrix, calculated as shown in formula (5):
μ = (1/n) ∑_{i=1}^{n} x_i,   Σ = (1/n) ∑_{i=1}^{n} (x_i − μ)(x_i − μ)^T      (5)
when the point cloud S_t = {p_1, p_2, …, p_n} is received from the lidar, each point is transformed into the reference point cloud space using x'_i = T_body→map · p_i, where T_body→map is the inverse of T_map→body and represents the transformation from the robot body coordinate system to the map coordinate system; when the objective function score converges, the optimal registration estimate is obtained.
6. The cross-scene positioning method for a robot based on coarse-to-fine point cloud registration according to claim 1, wherein the objective function is represented by formula (6):
score = ∑_{i=1}^{n} exp(−(x'_i − μ_i)^T Σ_i^{−1} (x'_i − μ_i) / 2)      (6)
7. The cross-scene positioning method for a robot based on coarse-to-fine point cloud registration according to claim 1, wherein the specific position obtained by the optimal registration estimate is given by formula (7), which represents the conversion from the map coordinate system to the robot body coordinate system using the NDT algorithm, so as to obtain the accurate position of the robot:
Loc_t = NDT(S_t, Map, T_t^guess)      (7)
where Loc_t denotes the position coordinates at time t and T_t^guess is the coarse estimation matrix input to the NDT algorithm at time t; the NDT() registration corresponds to the REG() method described above.
8. A cross-scene positioning system for a robot based on coarse-to-fine point cloud registration, comprising:
a data acquisition module configured to: acquire data of two modalities measured by a laser radar and an inertial measurement unit, wherein the laser radar data are calibrated and fused to generate unified point cloud data;
a coarse point cloud registration module configured to: fuse the point cloud data and the inertial measurement data by means of a multi-modal SLAM technique, and establish a local transformation from the initial position to the robot body so as to obtain a coarse pose estimate;
a fine registration module configured to: derive a coarse guess matrix from the coarse pose estimate, take the fused point cloud data as the on-board point cloud and the map data as the reference point cloud, and perform normal distributions transform registration based on the coarse guess matrix, the on-board point cloud and the reference point cloud, the accurate position of the robot being obtained after the registration.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method of any of the preceding claims 1-7 when the program is executed.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, performs the steps of the method of any of the preceding claims 1-7.
CN202410463909.4A 2024-04-17 2024-04-17 Cross-scene positioning method and system for robot with coarse-to-fine point cloud registration Pending CN118310531A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410463909.4A CN118310531A (en) 2024-04-17 2024-04-17 Cross-scene positioning method and system for robot with coarse-to-fine point cloud registration

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410463909.4A CN118310531A (en) 2024-04-17 2024-04-17 Cross-scene positioning method and system for robot with coarse-to-fine point cloud registration

Publications (1)

Publication Number Publication Date
CN118310531A true CN118310531A (en) 2024-07-09

Family

ID=91730779

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410463909.4A Pending CN118310531A (en) 2024-04-17 2024-04-17 Cross-scene positioning method and system for robot with coarse-to-fine point cloud registration

Country Status (1)

Country Link
CN (1) CN118310531A (en)

Similar Documents

Publication Publication Date Title
CN110178048B (en) Method and system for generating and updating vehicle environment map
CN112347840B (en) Vision sensor laser radar integrated unmanned aerial vehicle positioning and image building device and method
Yagfarov et al. Map comparison of lidar-based 2d slam algorithms using precise ground truth
JP7179110B2 (en) Positioning method, device, computing device, computer-readable storage medium and computer program
CN109211251B (en) Instant positioning and map construction method based on laser and two-dimensional code fusion
Segal et al. Generalized-icp.
CN111220153B (en) Positioning method based on visual topological node and inertial navigation
CN110889808B (en) Positioning method, device, equipment and storage medium
JP4984659B2 (en) Own vehicle position estimation device
CN114459467B (en) VI-SLAM-based target positioning method in unknown rescue environment
WO2018091685A1 (en) Self-calibrating sensor system for a wheeled vehicle
Pang et al. Low-cost and high-accuracy LIDAR SLAM for large outdoor scenarios
Wen et al. GNSS/LiDAR integration aided by self-adaptive Gaussian mixture models in urban scenarios: An approach robust to non-Gaussian noise
Lee et al. Robust 3-dimension point cloud mapping in dynamic environment using point-wise static probability-based NDT scan-matching
CN117387604A (en) Positioning and mapping method and system based on 4D millimeter wave radar and IMU fusion
Aggarwal Machine vision based SelfPosition estimation of mobile robots
CN118310531A (en) Cross-scene positioning method and system for robot with coarse-to-fine point cloud registration
CN114721377A (en) Improved Cartogrier based SLAM indoor blind guiding robot control method
de Haag et al. Laser‐Based Navigation
Escourrou et al. Ndt localization with 2d vector maps and filtered lidar scans
Heiden et al. Heterogeneous sensor fusion via confidence-rich 3d grid mapping: Application to physical robots
Das et al. Sensor fusion in autonomous vehicle using LiDAR and camera Sensor
Akai et al. Teaching-Playback Navigation Without a Consistent Map
Park et al. Localization of an unmanned ground vehicle based on hybrid 3D registration of 360 degree range data and DSM
Taylor et al. Parameterless automatic extrinsic calibration of vehicle mounted lidar-camera systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination