CN115639570A - Robot positioning and mapping method integrating laser intensity and point cloud geometric features - Google Patents
- Publication number: CN115639570A
- Application number: CN202211304813.0A
- Authority: CN (China)
- Legal status: Pending (an assumption, not a legal conclusion)
Abstract
The invention discloses a robot positioning and mapping method that fuses laser intensity and point cloud geometric features, comprising the following steps: step 1, fusing the point cloud geometric features and laser intensity of the point cloud data acquired by the mobile robot's lidar to obtain fused point cloud data, and classifying the fused point cloud data through a point cloud machine learning classification model; step 2, establishing an environment map with intensity from the point cloud geometric features and laser intensity of the classified fused point cloud data, and jointly optimizing the optimal estimated pose of the mobile robot using the geometric and intensity residuals obtained from the map; and step 3, performing real-time loop detection on the environment map with intensity, and updating the map and the pose of the mobile robot. By fusing laser intensity with point cloud features the method reduces the robot drift problem, and because the machine learning algorithm is used only for feature classification of the lidar point cloud and loop detection runs in real time, the computational load is reduced while the robot's real-time positioning accuracy is preserved.
Description
Technical Field
The invention relates to the field of mobile robot navigation, in particular to a robot positioning and mapping method fusing laser intensity and point cloud geometric characteristics.
Background
A mobile robot must acquire its own pose and sense the surrounding environment in real time during navigation. To do so, it first records fixed reference objects in a predefined activity area and then computes its current pose from the distances between itself and several pre-recorded reference objects. Recording the reference objects is called mapping, computing the pose is called positioning, and the two processes depend on each other. In practical applications, Simultaneous Localization and Mapping (SLAM) is usually adopted to estimate the robot's pose. SLAM is a passive positioning scheme that does not depend on external tags deployed in the environment (such as GPS or UWB base stations); it relies only on the robot's own sensors (such as a lidar or camera) to compute the robot's position in the environment (i.e. localization) and the environment map needed in the subsequent navigation process.
The robot needs SLAM during autonomous movement, and both mapping and navigation require the robot to know its own position; accurate positioning is therefore a key problem that SLAM must solve. Current laser SLAM computes the robot's pose from a kinematic model and the registration of two consecutive lidar scans, while loop closure detection optimizes and adjusts the globally estimated poses to obtain accurate pose and map information. In structured environments with poor scene diversity (such as long corridors), low-level geometric features degrade severely because of their many similarities, and a lidar that relies solely on registering the geometric features of the environment incurs large errors. Meanwhile, as the environment grows, the computation time of loop detection also increases, degrading the real-time performance of positioning. Therefore, to improve real-time positioning accuracy in environments with uniform geometric structure, existing laser SLAM techniques mostly resort to multi-sensor fusion or end-to-end deep neural network training.
The most common form of multi-sensor fusion combines lidar with a camera. A camera can provide more accurate feature detection, and a high-frequency camera can also reduce the robot's odometry error to some extent. However, cameras are susceptible to ambient light and may fail in dark or texture-less places, and the calibration and information alignment between lidar and camera present additional challenges.
Fusion schemes combining lidar with active sensors such as UWB or GPS rely on the straight-line propagation of electromagnetic waves and suffer large ranging errors in wide-area non-line-of-sight environments, which in turn causes large fusion positioning errors. They are therefore generally suited only to small-scale, relatively open environments, and deployment requires multiple base stations.
End-to-end approaches mainly use neural networks to fit raw laser scan data without any explicit intensity analysis or formulated computation. This makes it relatively difficult to associate new target objects with objects in an existing map, and doing so indirectly requires a large amount of training data. In practical applications, collecting, labeling and training on such data is difficult and time-consuming, may yield inconsistent performance across environments, and demands more computing resources.
Therefore, a method is needed that reduces the accumulated error of real-time mobile robot positioning in structured environments, improves laser point cloud registration, reduces the computational cost of loop detection, provides more accurate real-time positioning, and relies only on a lidar sensor without fusing external active sensors.
In view of the above, the present invention is particularly proposed.
Disclosure of Invention
The invention aims to provide a robot positioning and mapping method fusing laser intensity and point cloud geometric features, which can improve the efficiency of laser point cloud registration in a structured environment with poor scene diversity and rich intensity features, thereby improving the real-time positioning and mapping accuracy of a robot and further solving the technical problems in the prior art.
The purpose of the invention is realized by the following technical scheme:
A method for robot positioning and mapping fusing laser intensity and point cloud geometric features comprises the following steps:
Step 1, acquiring point cloud data from the mobile robot's lidar, fusing its point cloud geometric features and laser intensity to obtain fused point cloud data, and classifying the fused point cloud data by feature through a point cloud machine learning classification model;
Step 2, establishing an environment map with intensity from the point cloud geometric features and laser intensity of the classified fused point cloud data, and jointly optimizing the optimal estimated pose of the mobile robot using the geometric and intensity residuals obtained from the established map;
Step 3, performing real-time loop detection on the environment map with intensity, and updating the map and the pose of the mobile robot according to the loop detection result.
Compared with the prior art, the robot positioning and mapping method fusing the laser intensity and the point cloud geometric characteristics, provided by the invention, has the beneficial effects that:
the method comprises the steps of firstly fusing point cloud geometric features and laser intensity of point cloud data of a laser radar of the mobile robot to obtain fused point cloud data, classifying the fused point cloud data according to features through a point cloud machine learning classification model, constructing an environment map with intensity, optimizing the optimal estimation pose of the mobile robot by using geometric residual errors and intensity residual errors obtained by using the environment map with intensity in a combined manner, performing real-time loop detection on the environment map with intensity, updating the environment map with intensity and the pose of the mobile robot according to a loop detection result, and reducing the robot drifting problem by using the intensity of laser point cloud which can well reflect the features of the surrounding environment of the robot and the fused laser intensity and point cloud features; and the real-time loop detection is carried out in real time, so that the real-time positioning precision of the robot is ensured. The method only needs the laser radar sensor, and does not need to be fused with other sensors, so that the challenge of external calibration of a plurality of sensors is reduced, and the calculation pressure is also reduced. And due to the advantages of the laser radar, the system can be well suitable for different environments and is also suitable for positioning and mapping of large-scale environment robots.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on the drawings without creative efforts.
Fig. 1 is a flowchart of a robot positioning and mapping method fusing laser intensity and point cloud geometric features according to an embodiment of the present invention.
Fig. 2 is a technical route map of the robot positioning and mapping method with the laser intensity and the point cloud geometric features fused provided by the embodiment of the invention.
Fig. 3 is a schematic diagram illustrating a point cloud machine learning classification model in the method according to the embodiment of the present invention.
Fig. 4 is a schematic diagram of a scanning context containing geometric and intensity information in the method according to the embodiment of the present invention.
Detailed Description
The technical scheme in the embodiment of the invention is clearly and completely described in combination with the specific content of the invention; it should be understood that the described embodiments are only some of the embodiments of the present invention, not all of the embodiments, and are not intended to limit the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
The terms that may be used herein are first described as follows:
the term "and/or" means that either or both can be achieved, for example, X and/or Y means that both cases include "X" or "Y" as well as three cases including "X and Y".
The terms "comprising," "including," "containing," "having," and similar terms should be construed as non-exclusive inclusion. For example, a statement that something includes a recited feature (e.g., a material, component, ingredient, carrier, formulation, dimension, part, mechanism, device, step, process, method, reaction condition, processing condition, parameter, algorithm, signal, data, product, or article) should be interpreted to cover not only the specifically recited feature but also other features, not specifically recited, that are known in the art.
The term "consisting of" excludes any technical feature not expressly listed. If used in a claim, it closes the claim to technical features other than those expressly listed, apart from the usual impurities associated with them. If the term appears in only one clause of a claim, it limits only the elements recited in that clause; elements recited in other clauses are not excluded from the claim as a whole.
Unless expressly stated or limited otherwise, the terms "mounted," "connected," and "secured," etc., are to be construed broadly, as for example: can be fixedly connected, can also be detachably connected or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meaning of the above terms herein can be understood by those of ordinary skill in the art as appropriate.
The terms "central," "longitudinal," "lateral," "length," "width," "thickness," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," "clockwise," "counterclockwise," and the like are used in an orientation or positional relationship that is indicated based on the orientation or positional relationship shown in the drawings for ease of description and simplicity of description only, and are not intended to imply or imply that the referenced device or element must have a particular orientation, be constructed and operated in a particular orientation, and are therefore not to be considered limiting herein.
The robot positioning and mapping method fusing laser intensity and point cloud geometric features provided by the invention is described in detail below. Details not described in the embodiments belong to the prior art known to the person skilled in the art. Where conditions are not specified, the embodiments follow the conventional conditions in the art or the conditions suggested by the manufacturer; reagents or instruments whose manufacturer is not specified are conventional products available commercially.
As shown in fig. 1, an embodiment of the present invention provides a method for robot positioning and mapping fusing laser intensity and point cloud geometric features, comprising:
Step 1, acquiring point cloud data from the mobile robot's lidar, fusing its point cloud geometric features and laser intensity to obtain fused point cloud data, and classifying the fused point cloud data by feature through a point cloud machine learning classification model;
Step 2, establishing an environment map with intensity from the point cloud geometric features and laser intensity of the classified fused point cloud data, and jointly optimizing the optimal estimated pose of the mobile robot using the geometric and intensity residuals obtained from the established map;
Step 3, performing real-time loop detection on the environment map with intensity, and updating the map and the pose of the mobile robot according to the loop detection result.
In step 1 of the method, classifying the fused point cloud data by feature through the point cloud machine learning classification model comprises the following steps:
Projecting the fused point cloud data of the mobile robot's lidar onto a reference plane, dividing the reference plane into two-dimensional grid cells of equal size, and recording the height of each two-dimensional grid cell together with the minimum height of its adjacent grid cells; each grid cell is then divided into non-ground and ground points by dual-threshold filtering with the rule:
c = NG, if max_k(h_k) − min_k(h_k) > Δh₁ or min_k(h_k) − h_adj > Δh₂; otherwise c = G,
where c denotes a two-dimensional grid cell; NG indicates the cell holds non-ground points and G that it holds ground points; h_k denotes the heights of the different point cloud layers projected onto the same cell; h_adj denotes the minimum height among the adjacent cells; Δh₁ is the height threshold for partitioning point clouds within the same cell; and Δh₂ is the height threshold for partitioning point clouds across adjacent cells.
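As a minimal sketch of the dual-threshold filter, a per-cell classification might look like the following; the function name and the threshold values (Δh₁ = 0.15 m, Δh₂ = 0.30 m) are illustrative assumptions, not values taken from the patent.

```python
def classify_cell(h_max, h_min, neigh_min, dh1=0.15, dh2=0.30):
    """Dual-threshold ground test for one 2-D grid cell (illustrative thresholds).

    h_max / h_min: extreme heights of the point-cloud layers projected into this cell.
    neigh_min:     minimum height among the cell's adjacent grid cells.
    Returns 'NG' (non-ground) if the in-cell height span exceeds dh1, or if the
    cell rises more than dh2 above its lowest neighbour; otherwise 'G' (ground).
    """
    if (h_max - h_min) > dh1 or (h_min - neigh_min) > dh2:
        return 'NG'
    return 'G'
```

A flat floor cell (small span, level with its neighbours) comes back as ground, while a wall or curb cell trips one of the two thresholds.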
For each non-ground point, the geometric features and laser intensity of the fused point cloud are combined, and the information matrix of the point and its spherical neighborhood N is computed as
M = Σ_{p_j ∈ N} w_j (p_j − p̄)(p_j − p̄)ᵀ,
where p̄ is the weighted center of gravity of the neighborhood N, ī is the mean laser intensity over N, and the weights w_j give higher weight to points whose laser intensity is similar to that of the non-ground point.
The information matrix is processed by principal component analysis to obtain, for each non-ground point, the eigenvalues λ₁ ≥ λ₂ ≥ λ₃;
From these eigenvalues the local linearity, planarity and curvature of each non-ground point are computed as linearity = (λ₁ − λ₂)/λ₁, planarity = (λ₂ − λ₃)/λ₁ and curvature = λ₃/(λ₁ + λ₂ + λ₃), and the non-ground points are divided into linear points, plane points and vertices accordingly.
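The neighborhood statistics above can be sketched as follows. The Gaussian form of the intensity weighting, its bandwidth sigma, and all names are assumptions, since the patent does not reproduce its exact weighting formula; only the eigenvalue-ratio features match the text.

```python
import numpy as np

def local_shape_features(neigh_pts, neigh_intens, p_intens, sigma=0.1):
    """Intensity-weighted PCA features for one non-ground point (a sketch).

    neigh_pts:    (N, 3) coordinates of the spherical neighbourhood.
    neigh_intens: (N,) laser intensities of those points.
    p_intens:     intensity of the query point; closer intensities get more weight.
    Returns (linearity, planarity, curvature) from sorted eigenvalues l1>=l2>=l3.
    """
    # assumed Gaussian intensity weighting, normalised to sum to 1
    w = np.exp(-((neigh_intens - p_intens) ** 2) / (2 * sigma ** 2))
    w /= w.sum()
    mean = w @ neigh_pts                           # weighted centre of gravity
    d = neigh_pts - mean
    cov = (w[:, None] * d).T @ d                   # weighted information matrix
    l3, l2, l1 = np.sort(np.linalg.eigvalsh(cov))  # ascending, relabelled descending
    linearity = (l1 - l2) / l1
    planarity = (l2 - l3) / l1
    curvature = l3 / (l1 + l2 + l3)
    return linearity, planarity, curvature
```

A neighbourhood sampled along a straight edge should score linearity near 1, while a wall patch scores high planarity and a corner point high curvature.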
In step 2 of the method, an intensity environment map is established according to the point cloud geometric features and the laser intensity of the classified fused point cloud data, and the optimal estimated pose of the mobile robot is obtained by jointly optimizing a geometric residual error and an intensity residual error obtained according to the established intensity environment map, wherein the method comprises the following steps:
Dividing three-dimensional space into grid cells, each grid cell m_i is jointly represented by an intensity measurement I and a geometric occupancy grid probability. The intensity of grid cell m_i observed at the current time t is obtained by the running average
I_t(m_i) = ( I_{t−1}(m_i) · (n(m_i) − 1) + ĩ_t ) / n(m_i),
where ĩ_t is the intensity observation of grid cell m_i at the current time t (an intensity observation of 0 indicates no reflected signal, i.e. the grid cell contains no obstacle object) and n(m_i) is the total number of observations of grid cell m_i.
The geometric occupancy probability value of a grid cell observed at the current time t is obtained by
M_t(m_i) = clamp( M_{t−1}(m_i) + log odds(p), l_min, l_max ), with odds(p) = p / (1 − p),
where odds(p) expresses the geometric occupancy probability of the grid cell observed at the current time t, p denotes whether the grid cell is occupied, and clamp() is an interval-limiting function that keeps the value within [l_min, l_max].
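A hedged sketch of one map cell's joint representation, assuming a standard running-mean intensity update and a clamped log-odds occupancy update; the clamp interval and the per-observation log-odds increments are illustrative values, not the patent's.

```python
import math

class GridCell:
    """One map cell holding a running intensity average and occupancy log-odds.

    A sketch of the joint (intensity, occupancy) representation in step 2;
    the numeric constants below are assumptions.
    """
    L_MIN, L_MAX = -2.0, 3.5      # clamp interval for the log-odds (illustrative)
    L_OCC, L_FREE = 0.85, -0.4    # per-observation log-odds increments (illustrative)

    def __init__(self):
        self.intensity = 0.0      # running mean of intensity observations
        self.n_obs = 0            # total number of observations of this cell
        self.log_odds = 0.0       # geometric occupancy in log-odds form

    def update(self, intensity_obs, hit):
        # an intensity observation of 0 means no reflected signal (no obstacle)
        self.n_obs += 1
        self.intensity += (intensity_obs - self.intensity) / self.n_obs
        delta = self.L_OCC if hit else self.L_FREE
        # clamp() keeps the log-odds inside a fixed interval
        self.log_odds = max(self.L_MIN, min(self.L_MAX, self.log_odds + delta))

    def occupancy_prob(self):
        return 1.0 / (1.0 + math.exp(-self.log_odds))
```

Two hits with intensities 0.8 and 0.4 leave the cell's running mean at 0.6 and push its occupancy probability above 0.5; repeated hits saturate at the clamp bound instead of growing without limit.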
The optimal robot pose is estimated from the geometric residuals computed by matching the current, differently classified fused point cloud data against the global map, as follows.
If a fused point is classified as a linear feature point p̃_i, the two linear points p₁ and p₂ nearest to it are searched in the global map, and the geometric residual from p̃_i to the line through p₁ and p₂ is defined as the point-to-line distance
r_L = | (p̃_i − p₁) × (p̃_i − p₂) | / | p₁ − p₂ |,
where p_i is the current point and p̃_i is the current point p_i transformed into the intensity environment map.
If a fused point is classified as a plane feature point p̃_i, the three plane points p₁, p₂ and p₃ nearest to it are searched in the global map, and the geometric residual from p̃_i to the plane through p₁, p₂ and p₃ is defined as the point-to-plane distance
r_P = | (p̃_i − p₁) · ( (p₂ − p₁) × (p₃ − p₁) ) | / | (p₂ − p₁) × (p₃ − p₁) |.
If a fused point is classified as a vertex feature point p̃_i, its geometric residual is defined as the distance to the nearest vertex p_v in the global map, r_V = | p̃_i − p_v |.
The features are matched against the constructed intensity environment map by minimizing the intensity residual between the current point p_i and its transformed point p̃_i in the intensity environment map,
r_I = η_i − I( m(p̃_i) ),
where η_i is the measured intensity of p_i and I(m(p̃_i)) is the intensity stored in the map cell containing p̃_i.
Finally, the geometric and intensity residuals are jointly optimized to obtain the optimal estimated pose of the mobile robot:
T* = argmin_T { Σ ‖r_G‖² + w Σ ‖r_I‖² },
where T is the robot pose, r_G ranges over the linear, planar and vertex geometric residuals, r_I over the intensity residuals, and w balances the two terms.
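The residual classes and the joint objective can be sketched as below. The point-to-line and point-to-plane forms are the standard ones consistent with the feature classes described in the text; the weight `w_int` and all function names are assumed for illustration.

```python
import numpy as np

def point_to_line_residual(p, p1, p2):
    """Distance from transformed point p to the line through map points p1, p2."""
    v = p2 - p1
    return np.linalg.norm(np.cross(p - p1, v)) / np.linalg.norm(v)

def point_to_plane_residual(p, p1, p2, p3):
    """Distance from p to the plane through map points p1, p2, p3."""
    n = np.cross(p2 - p1, p3 - p1)
    n /= np.linalg.norm(n)
    return abs(np.dot(p - p1, n))

def joint_cost(geo_residuals, int_residuals, w_int=0.5):
    """Joint objective over geometric and intensity residuals; the relative
    weight w_int is an assumption, the patent does not give its value."""
    return sum(r * r for r in geo_residuals) + w_int * sum(r * r for r in int_residuals)
```

For instance, a point hovering one unit above a line segment yields a point-to-line residual of exactly 1, and the joint cost simply sums the squared residuals of both kinds under the chosen weight.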
in step 3 of the method, the real-time loop detection is performed on the band-intensity environment map in the following manner, including:
The similarity between the loop frame and a candidate frame is judged by whether the identity difference between their corresponding cells is less than a given threshold; if it is, the loop is confirmed, and the constraint between the loop frame and the candidate frame is added to the back-end global optimization to correct the drift of the mobile robot.
An embodiment of the present invention further provides a processing device, including:
at least one memory for storing one or more programs;
at least one processor capable of executing one or more programs stored in the memory, the one or more programs, when executed by the processor, causing the processor to implement the method described above.
Embodiments of the present invention further provide a readable storage medium storing a computer program, which when executed by a processor can implement the method described above.
In conclusion, in structured environments with poor scene diversity but rich intensity information, the intensity of the laser point cloud reflects the features of the robot's surroundings well, and fusing laser intensity with point cloud features reduces the robot drift problem. The point cloud machine learning classification model in the method is used only for feature classification of the lidar point cloud, which reduces computation and improves front-end real-time performance, while classification also improves point cloud registration; loop detection adopts key frame election and scan context matching, which reduces computational load and preserves the robot's real-time positioning accuracy.
In order to more clearly show the technical solutions and the technical effects provided by the present invention, the method for positioning and mapping a robot that integrates laser intensity and geometrical characteristics of a point cloud provided by the embodiment of the present invention is described in detail below with specific embodiments.
Example 1
As shown in fig. 1, an embodiment of the present invention provides a method for robot positioning and mapping fusing laser intensity and point cloud geometric features. In structured environments with poor scene diversity but rich intensity features, the method requires no external sensors; relying only on a lidar, it improves point cloud registration, reduces the robot's accumulated error, shortens the computation time of loop detection, and improves the robot's real-time positioning accuracy.
In the embodiment of the invention, the mapping and positioning system receives point cloud data returned by the laser radar, and autonomous pose estimation and environment mapping construction of the robot are realized. The whole system is divided into 3 parts: 1) Fusing laser intensity information and point cloud geometric feature information, and performing feature extraction; 2) Registering the geometric and strength characteristics of adjacent frames, and constructing an environment map with strength; 3) And performing loop detection, and updating the map and the pose. According to the method, point cloud classification is carried out by a machine learning method according to the geometric information and the intensity information of the laser point cloud, and the classified point cloud is used for point cloud registration and loop detection.
As shown in fig. 1 and fig. 2, the technical route chart of the method of the present invention mainly includes: and fusing the geometrical characteristics and the laser intensity characteristics of the point cloud, extracting the characteristics by using a machine learning method, estimating the pose of the robot, constructing an intensity map and performing real-time loop detection.
Fig. 3 shows the machine learning classification framework for the point cloud, which classifies laser point cloud data into different feature classes through linear transformations. First, the lidar point cloud data is projected onto a reference plane. The reference plane is then divided into two-dimensional grid cells of equal size, and for each cell the point heights and the minimum height of its adjacent cells are recorded; dual-threshold filtering then divides the cells into Non-Ground (NG) and Ground (G) points. The specific division rule is: a cell is non-ground if the height span of the point cloud layers within it exceeds Δh₁, or if it rises more than Δh₂ above the minimum height of its adjacent cells; otherwise it is ground.
For the non-ground points, the geometric and intensity information is fused, and the information matrix of each point and its spherical neighborhood N is computed, where p̄ is the center of gravity of the neighborhood and ī its intensity mean; points with intensity similar to the feature point are given higher weight. Principal component analysis is then applied to obtain the eigenvalues λ₁, λ₂, λ₃ of each point, the local linearity, planarity and curvature of the point are computed from them, and the point cloud is divided into plane points, linear points and vertices accordingly.
The intensity map described in the embodiment is maintained and updated through an occupancy grid: three-dimensional space is divided into grid cells, and each grid cell m_i is jointly represented by an intensity measurement I and a geometric occupancy grid probability. For a grid cell observed at time t, the intensity measurement is updated as a running average of the current intensity observation over the total number of observations of cell m_i; if the grid cell contains no object, the intensity value is 0, indicating no reflected signal.
For a time t observation grid cell, its geometric occupancy probability value may be determined by:
where p denotes whether the grid is occupied, and clamp is a range-defining function.
To estimate the optimal robot pose, the geometric residuals are computed by matching the currently classified feature point clouds against the global map: given a point p_i and its transformed point p̃_i, the corresponding feature points are searched in the global map and the residual is computed from them.
For linear feature points, two nearest linear points p1 and p are searched 2 The residual error from this point to the linear feature point is defined as:
for the plane feature point, searching three nearest plane points p 1 、p 2 、p 3 The residual error from this point to the plane feature point is defined as:
the residual for the vertex feature is defined as:
Meanwhile, the intensity residual is computed by matching the features against the constructed intensity map, which is achieved by minimizing the intensity residual between the current point p_i and its transformed point p̃_i in the intensity map.
Finally, the geometric and intensity residuals are jointly optimized to obtain the optimal pose estimate.
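As a toy stand-in for the joint residual optimization (which Example 2 states is solved with Levenberg-Marquardt), the sketch below aligns matched 2-D point pairs with a damped Gauss-Newton loop. The full problem optimizes a 3-D pose over line, plane, vertex and intensity residuals; this only illustrates the damped normal-equation structure, and all names and constants are assumptions.

```python
import numpy as np

def lm_align_2d(src, dst, iters=20, lam=1e-3):
    """Levenberg-Marquardt-style alignment of matched 2-D point pairs.

    Estimates (theta, tx, ty) minimising sum ||R(theta) @ s_i + t - d_i||^2
    over the matched pairs (s_i, d_i). lam is the damping factor.
    """
    theta, tx, ty = 0.0, 0.0, 0.0
    for _ in range(iters):
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        res = (src @ R.T + [tx, ty]) - dst       # (N, 2) residual vectors
        # Jacobian of each residual component w.r.t. (theta, tx, ty)
        dR = np.array([[-s, -c], [c, -s]])       # dR/dtheta
        J = np.zeros((src.shape[0] * 2, 3))
        J[:, 0] = (src @ dR.T).ravel()           # rotation column
        J[0::2, 1] = 1.0                         # x-components w.r.t. tx
        J[1::2, 2] = 1.0                         # y-components w.r.t. ty
        r = res.ravel()
        H = J.T @ J + lam * np.eye(3)            # damped normal equations
        delta = np.linalg.solve(H, -J.T @ r)
        theta += delta[0]; tx += delta[1]; ty += delta[2]
    return theta, tx, ty
```

Feeding it a point set and the same set rotated by 0.3 rad and shifted by (1, −2) recovers that transform to high precision within a few iterations.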
In the embodiment of the invention, real-time loop detection reduces retrieval time through key frame election and the scan context. Key frames are elected mainly on the following criteria: the robot's displacement exceeds a preset distance, its rotation angle exceeds a preset angle, or the elapsed time exceeds a preset duration. The global descriptor obtained from the scan context of the point cloud data is a two-dimensional matrix (see fig. 4), computed by dividing the scan's polar coordinates equally into sectors in azimuth and rings in radius; the identity of each cell is obtained by weighting its geometric and intensity information. Because the map grows over time and the computational load with it, a quick logical test, based on whether a cell's identity is 0, is used to judge the similarity between the loop frame and a candidate frame. Once a loop is confirmed, the constraint between the two frames is added to the optimization to correct the robot's drift and improve positioning accuracy.
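A hedged sketch of building the ring-by-sector scan context descriptor follows; the bin counts, maximum range, and the linear geometric/intensity mixing weight `alpha` are illustrative assumptions, not the patent's values.

```python
import math
import numpy as np

def scan_context(points, intensities, n_rings=20, n_sectors=60,
                 r_max=80.0, alpha=0.5):
    """Ring x sector scan-context matrix for one lidar scan (a sketch).

    Each cell's identity combines the maximum point height (geometry) and the
    intensity observed in that polar bin, via an assumed linear weighting.
    points:      iterable of (x, y, z); intensities: matching iterable of floats.
    """
    desc = np.zeros((n_rings, n_sectors))
    for (x, y, z), inten in zip(points, intensities):
        r = math.hypot(x, y)
        if r >= r_max:
            continue                                  # outside the descriptor's range
        ring = int(r / r_max * n_rings)               # radial bin
        sector = int((math.atan2(y, x) + math.pi)
                     / (2 * math.pi) * n_sectors) % n_sectors  # azimuth bin
        cell_id = alpha * z + (1 - alpha) * inten     # geometric + intensity identity
        desc[ring, sector] = max(desc[ring, sector], cell_id)
    return desc
```

A single point at (1, 0, 2) with intensity 0.8 lands in exactly one bin, whose identity is the weighted mix 0.5·2.0 + 0.5·0.8; all other cells stay 0, which is what the fast zero-identity similarity test exploits.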
Example 2
For a structured environment with poor scene diversity but rich intensity information, lidar data is acquired with a 16-line LiDAR: output frequency 10 Hz, vertical field of view [+15°, −15°], horizontal field of view 360°, vertical resolution 2°, horizontal resolution 0.2°, and minimum and maximum range 0.2 m and 100 m respectively, with output of the three-dimensional coordinates and intensity of each point. In a three-dimensional simulation environment, a long straight tunnel about 400 meters long is simulated, with reflective markers staggered every 20 meters along both walls. During data acquisition the robot moves at 15 to 20 km/h, and 1000 frames of point cloud data including geometric and intensity information are collected.
The algorithm of the point cloud machine learning classification model is principal component analysis; it fuses the laser point cloud intensity and geometric feature information and classifies its output by feature into ground points, plane points, linear points and vertices.
The algorithm for estimating the optimal pose by jointly optimizing the geometric and intensity residuals is the Levenberg-Marquardt optimization algorithm. As shown in fig. 4, the current loop candidate is judged by a similarity score obtained as the average cosine similarity over the sectors; if the score exceeds a threshold, the result is confirmed as a loop, the pose is corrected by global optimization, and the map is updated.
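The per-sector average-cosine similarity score can be sketched as follows; skipping all-zero sector columns is an assumption for numerical safety, not a detail stated in the patent.

```python
import numpy as np

def context_similarity(desc_a, desc_b):
    """Mean column-wise (per-sector) cosine similarity between two scan
    contexts, as used here for the loop-closure decision (a sketch)."""
    sims = []
    for col_a, col_b in zip(desc_a.T, desc_b.T):   # iterate over sector columns
        na, nb = np.linalg.norm(col_a), np.linalg.norm(col_b)
        if na == 0 or nb == 0:
            continue                               # skip empty sectors
        sims.append(np.dot(col_a, col_b) / (na * nb))
    return float(np.mean(sims)) if sims else 0.0
```

Identical descriptors score 1.0 and orthogonal ones score 0; a loop is declared when the score exceeds the chosen threshold.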
In summary, for structured environments with poor scene diversity but rich intensity information, the intensity of the laser point cloud reflects the features of the robot's surroundings well, and the method of the embodiment reduces the robot drift problem by fusing laser intensity with point cloud features. Moreover, the machine learning algorithm in the method is used only for feature classification of the lidar point cloud, and loop detection adopts key frame election and scan context matching, which reduces the computational load while preserving the robot's real-time positioning accuracy. In addition, the method needs only a lidar sensor and no fusion with other sensors, which removes the challenge of externally calibrating multiple sensors and further reduces computation. Thanks to the strengths of lidar, the system adapts well to different environments and is also suitable for robot positioning and mapping in large-scale environments.
It follows that, by applying the method of the invention in environments poor in geometric structure but rich in intensity information, more geometric spatial features (such as vertices, lines, cylinders and walls) can be extracted with a statistics-based method, while a degradation factor algorithm evaluates the degree of pose estimation degradation and determines the degradation direction.
Those of ordinary skill in the art will understand that: all or part of the processes of the methods for implementing the embodiments may be implemented by a program, which may be stored in a computer-readable storage medium, and when executed, may include the processes of the embodiments of the methods as described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims. The information disclosed in this background section is only for enhancement of understanding of the general background of the invention and should not be taken as an acknowledgement or any form of suggestion that this information forms the prior art already known to a person skilled in the art.
Claims (4)
1. A robot positioning and mapping method fusing laser intensity and point cloud geometric features is characterized by comprising the following steps:
step 1, point cloud data of a laser radar of a mobile robot are obtained, point cloud geometric features and laser intensity of the point cloud data are fused to obtain fused point cloud data, and the fused point cloud data are classified according to features through a point cloud machine learning classification model;
step 2, establishing an intensity environment map according to the point cloud geometric features and the laser intensity of the classified fused point cloud data, and jointly optimizing the optimal estimation pose of the mobile robot according to a geometric residual error and an intensity residual error which are obtained by the established intensity environment map;
and 3, carrying out real-time loop detection on the environment map with the intensity, and updating the environment map with the intensity and the estimated pose of the mobile robot according to a loop detection result.
2. The method for robot positioning and mapping fusing laser intensity and point cloud geometric features according to claim 1, wherein in the step 1, classifying the fused point cloud data by feature through the point cloud machine learning classification model comprises:
projecting the fused point cloud data of the mobile robot's lidar onto a reference plane, dividing the reference plane into two-dimensional grid cells of equal size, and recording the height of each two-dimensional grid cell together with the minimum height of its adjacent two-dimensional grid cells; double-threshold filtering is then applied to each two-dimensional grid cell by the following filtering formula, dividing each grid cell into non-ground points and ground points, the filtering formula being:
wherein the parameters have the following meanings: the grid symbol represents a two-dimensional grid point; NG denotes that the two-dimensional grid point is a non-ground point; G denotes that the two-dimensional grid point is a ground point; h_k denotes the different point cloud heights within the same grid layer; Δh_1 denotes the threshold on point cloud height within the same grid; Δh_2 denotes the threshold on point cloud height in adjacent grids;
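The double-threshold ground filter can be sketched as follows; the use of an 8-neighborhood for the adjacent-cell minimum height is an assumption about the formula the text references:

```python
def classify_grid_cells(heights, dh1, dh2):
    """Double-threshold ground filtering on a 2D grid of cells.

    heights: dict mapping (i, j) -> list of point heights falling in that cell.
    A cell is labelled non-ground ('NG') if the height spread of its own
    points exceeds dh1, or if its minimum height differs from the minimum
    height of its 8-neighborhood by more than dh2; otherwise ground ('G').
    """
    labels = {}
    for (i, j), hs in heights.items():
        spread = max(hs) - min(hs)
        # minimum height over the existing 8-neighborhood cells
        neigh_min = min(
            (min(heights[(i + di, j + dj)])
             for di in (-1, 0, 1) for dj in (-1, 0, 1)
             if (di, dj) != (0, 0) and (i + di, j + dj) in heights),
            default=min(hs),
        )
        if spread > dh1 or abs(min(hs) - neigh_min) > dh2:
            labels[(i, j)] = "NG"
        else:
            labels[(i, j)] = "G"
    return labels
```

Cells with a large internal spread (e.g. a wall) or sitting well above their neighbors fail one of the two thresholds and are passed on as non-ground candidates for feature extraction.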
the information matrix of the fused point cloud's geometric features and laser intensity for a non-ground point and its spherical neighborhood is calculated as:
wherein the first term is the center of gravity of the neighborhood and the second term is the mean laser intensity of the neighborhood; points with a laser intensity similar to that of the non-ground point are given higher weight;
processing through the point cloud machine learning classification model according to the principal component analysis method to obtain the eigenvalues λ_1, λ_2, λ_3 corresponding to each non-ground point;
calculating the local linearity, planarity and curvature of each non-ground point from its eigenvalues λ_1, λ_2, λ_3, where linearity is (λ_1 - λ_2)/λ_1, planarity is (λ_2 - λ_3)/λ_1, and curvature is λ_3/(λ_1 + λ_2 + λ_3), and dividing each non-ground point into linear points, plane points and vertices according to its local linearity, planarity and curvature.
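These eigenvalue ratios can be computed directly from the covariance of a local neighborhood; the following sketch computes them (neighborhood selection and the classification thresholds are omitted, as the claim does not fix them):

```python
import numpy as np

def shape_features(neigh):
    """Linearity, planarity and curvature of a local point neighborhood.

    neigh: iterable of (x, y, z) points. With eigenvalues l1 >= l2 >= l3 of
    the neighborhood covariance, the features follow the ratios in the claim:
    linearity (l1-l2)/l1, planarity (l2-l3)/l1, curvature l3/(l1+l2+l3).
    """
    pts = np.asarray(neigh, dtype=float)
    cov = np.cov(pts.T)
    # eigvalsh returns eigenvalues in ascending order; relabel descending
    l3, l2, l1 = np.linalg.eigvalsh(cov)
    linearity = (l1 - l2) / l1
    planarity = (l2 - l3) / l1
    curvature = l3 / (l1 + l2 + l3)
    return linearity, planarity, curvature
```

A neighborhood stretched along one axis scores high linearity (an edge candidate), a flat patch scores high planarity, and a neighborhood spread in all three directions scores high curvature (a vertex candidate).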
3. The method for robot localization and mapping according to claim 2, wherein in step 2, an environment map with intensity is created according to the classified point cloud geometric features and laser intensity of the fused point cloud data, and the optimal estimated pose of the mobile robot is obtained by jointly optimizing geometric residuals and intensity residuals obtained according to the created environment map with intensity, including:
dividing the three-dimensional space into grid cells, and jointly representing each grid cell m_i by an intensity measurement I and a geometric occupancy grid probability; the intensity observation of grid cell m_i at the current time t is obtained by the following formula:
wherein the first term is the intensity observation value of grid cell m_i at the current time t (an intensity observation of 0 indicates that there is no reflected signal, i.e. the grid cell contains no obstacle), and the second term is the total number of observations of grid cell m_i;
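The per-cell intensity bookkeeping described above (a stored intensity value plus an observation count) can be sketched as an incremental running mean; treating a zero return as "no reflection, leave the cell unchanged" follows the text, while the exact averaging rule is an assumption:

```python
def update_intensity(mean_i, n_obs, new_i):
    """Incremental update of a grid cell's stored intensity.

    mean_i: current intensity estimate for the cell,
    n_obs:  number of intensity observations accumulated so far,
    new_i:  newly observed intensity (0 means no return: cell unchanged).
    Returns the updated (mean, count) pair.
    """
    if new_i == 0:
        return mean_i, n_obs
    n = n_obs + 1
    # running mean: old mean nudged toward the new sample by 1/n
    return mean_i + (new_i - mean_i) / n, n
```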
the geometric occupation probability value of the grid cell observed at the current moment t is obtained by the following formula:
wherein odds(p) represents the geometric occupancy probability value of the grid cell observed at the current time t; p represents whether the grid cell is occupied; clamp() is an interval-limiting function;
calculating the geometric residuals by matching the differently classified current fused point cloud data with the global map to estimate the optimal pose of the robot, wherein:
if the fused point cloud data is classified as a linear feature point, the two linear points p_1 and p_2 nearest to it are searched from the global map, and the geometric residual from the linear feature point to the linear points p_1 and p_2 is defined as:
wherein p_i is the current feature point, and the transformed point is p_i transformed into the intensity environment map;
if the fused point cloud data is classified as a plane feature point, the three plane points p_1, p_2 and p_3 nearest to it are searched from the global map, and the geometric residual from the plane feature point to the three plane points p_1, p_2 and p_3 is defined as:
if the fused point cloud data is classified as a vertex feature point, its geometric residual is defined as:
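The point-to-line and point-to-plane geometric residuals defined above reduce to standard distance formulas; a plain-Python sketch (the vertex residual, whose formula is not reproduced in this text, is omitted):

```python
def sub(a, b):
    return [a[i] - b[i] for i in range(3)]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def dot(a, b):
    return sum(a[i] * b[i] for i in range(3))

def norm(a):
    return dot(a, a) ** 0.5

def point_to_line(p, p1, p2):
    """Distance from point p to the line through p1 and p2 (edge residual)."""
    return norm(cross(sub(p, p1), sub(p, p2))) / norm(sub(p1, p2))

def point_to_plane(p, p1, p2, p3):
    """Distance from point p to the plane through p1, p2, p3 (plane residual)."""
    n = cross(sub(p2, p1), sub(p3, p1))  # plane normal
    return abs(dot(sub(p, p1), n)) / norm(n)
```

In the pipeline, p would be the current feature point transformed by the pose estimate into the map frame, and the optimizer drives these distances toward zero.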
matching features with the constructed intensity environment map by minimizing the intensity residual between the current point p_i and its transformed point in the intensity environment map, the intensity residual being:
and finally, jointly optimizing the geometric residual and the intensity residual according to the following formula to obtain the optimal estimated pose of the mobile robot:
4. The method for positioning and mapping a robot fusing laser intensity and point cloud geometric features according to claim 1 or 2, wherein in the step 3, the real-time loop detection is performed on the intensity environment map, and the intensity environment map and the pose of the mobile robot are updated according to the loop detection result, comprising the following steps:
step 1, selecting keyframes according to the following criteria: the robot's displacement exceeds a preset distance of 20.0 meters, the robot's rotation angle exceeds a preset angle of 0.67 radians, or the elapsed time since the last keyframe exceeds a preset time of 30 seconds;
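A sketch of the keyframe test with the three thresholds quoted above; whether the criteria combine with a logical OR is an assumption the claim does not settle:

```python
def is_keyframe(dist_moved, angle_turned, dt,
                d_thresh=20.0, a_thresh=0.67, t_thresh=30.0):
    """Select a new keyframe when any criterion is exceeded.

    dist_moved:  displacement since the last keyframe (meters),
    angle_turned: accumulated rotation since the last keyframe (radians),
    dt:          elapsed time since the last keyframe (seconds).
    """
    return dist_moved > d_thresh or angle_turned > a_thresh or dt > t_thresh
```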
step 2, for each keyframe, extracting the scan context of the point cloud data into a global descriptor; the descriptor forms a two-dimensional matrix, computed by equally dividing polar coordinates into cells along sectors (azimuth) and rings (radius), the signature of each cell being obtained by geometric and intensity weighting;
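A minimal scan-context-style descriptor following this description; the cell signature here uses a fixed height/intensity mix w, an assumed concrete choice for the "geometric and intensity weighting" the claim leaves unspecified, as are the grid sizes and range limit:

```python
import math

def scan_context(points, intensities,
                 n_sectors=60, n_rings=20, max_range=80.0, w=0.5):
    """Polar-grid descriptor: one geometry/intensity-weighted value per cell.

    points:      iterable of (x, y, z) lidar points,
    intensities: matching laser intensities.
    Returns a list of n_sectors columns, each with n_rings cell values;
    each cell keeps the maximum weighted value it has seen.
    """
    desc = [[0.0] * n_rings for _ in range(n_sectors)]
    for (x, y, z), inten in zip(points, intensities):
        r = math.hypot(x, y)
        if r >= max_range:
            continue
        # sector index from azimuth, ring index from radius
        s = int((math.atan2(y, x) + math.pi) / (2 * math.pi) * n_sectors) % n_sectors
        ring = int(r / max_range * n_rings)
        val = w * z + (1 - w) * inten
        desc[s][ring] = max(desc[s][ring], val)
    return desc
```

Two such matrices can then be compared sector by sector with the averaged cosine similarity described for the loop-closure score.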
and 3, judging the similarity between the loop frame and a candidate frame according to whether the difference between their cell signatures is smaller than a given threshold; if it is, a loop closure is confirmed, and the constraint between the loop frame and the candidate frame is added to the back-end global optimization to correct the drift of the mobile robot.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211304813.0A CN115639570A (en) | 2022-10-24 | 2022-10-24 | Robot positioning and mapping method integrating laser intensity and point cloud geometric features |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115639570A true CN115639570A (en) | 2023-01-24 |
Family
ID=84945004
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211304813.0A Pending CN115639570A (en) | 2022-10-24 | 2022-10-24 | Robot positioning and mapping method integrating laser intensity and point cloud geometric features |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115639570A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117095061A (en) * | 2023-10-20 | 2023-11-21 | 山东大学 | Robot pose optimization method and system based on point cloud strength salient points |
CN117095061B (en) * | 2023-10-20 | 2024-02-09 | 山东大学 | Robot pose optimization method and system based on point cloud strength salient points |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||