CN111583369B - Laser SLAM method based on facial line angular point feature extraction - Google Patents

Laser SLAM method based on facial line angular point feature extraction

Info

Publication number
CN111583369B
CN111583369B CN202010317771.9A
Authority
CN
China
Prior art keywords
point
points
information
point cloud
line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010317771.9A
Other languages
Chinese (zh)
Other versions
CN111583369A (en)
Inventor
胡超芳 (Hu Chaofang)
张帅鹏 (Zhang Shuaipeng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN202010317771.9A priority Critical patent/CN111583369B/en
Publication of CN111583369A publication Critical patent/CN111583369A/en
Application granted granted Critical
Publication of CN111583369B publication Critical patent/CN111583369B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10012 Stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G06T2207/10044 Radar image
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a laser SLAM method based on facial line angular point feature extraction, which comprises the following steps: (1) initializing a map and various parameters, and allocating memory; (2) acquiring the point clouds collected by the 3D laser radar; (3) carrying out data preprocessing on the collected point cloud; (4) carrying out projection; (5) removing the blocking points; (6) extracting surface, line and corner features; (7) carrying out surface normal vector estimation on the surface and line features; (8) performing front-end registration calculation on the corner feature information; (9) removing redundant parts of the point cloud, and storing the key frame information, pose information and track information of the carrier; (10) performing loop detection on the key frame information and the motion track information; (11) carrying out back-end optimization; (12) updating the pose, fusing and building the map, and publishing the map information and track information; (13) jumping to step (2) and continuing to run until the map building is completed or the process is closed.

Description

Laser SLAM method based on facial line angular point feature extraction
Technical Field
The invention relates to the field of simultaneous localization and mapping (SLAM) with three-dimensional laser radars, and in particular to a laser SLAM method based on facial line corner feature extraction, i.e., the extraction of surface, line and corner features.
Background
In the research field of mobile robots, Simultaneous Localization and Mapping (SLAM) has long been a popular research topic. SLAM provides a robot with a navigation map and a real-time position, which are prerequisites for path planning and path tracking, so it occupies a very important position in mobile robot navigation. Since the SLAM problem was first posed in 1986, SLAM technology has developed significantly and is now applied across many industries, for example: autonomous vehicles, service robots, unmanned aerial vehicles, unmanned underwater vehicles, virtual reality, augmented reality and the like. The environment sensing sensors commonly used in SLAM include cameras, LiDAR (Light Detection and Ranging), radar, sonar, and the like.
Laser sensors offer high ranging accuracy, little sensitivity to illumination, and resistance to electromagnetic interference, but they also have drawbacks: they are easily affected by rain, snow and fog, and each scan produces a large volume of data. Currently, a laser SLAM framework is generally divided into four key modules: front-end scan matching, back-end optimization, loop detection and map construction. Front-end scan matching can give a pose and a map over a short time span, but error accumulation is inevitable; back-end optimization therefore optimizes the odometry and map information after long periods of incremental scan matching. Loop detection reduces drift in the global map by detecting closed loops, so as to produce a globally consistent map, and the map construction module generates and maintains the global map. Front-end scan matching is the core step of laser SLAM: given the pose of the previous frame, it estimates the pose of the current frame from the relation between adjacent frames. The scan matching algorithms currently prevailing in laser SLAM include: Iterative Closest Point (ICP) and its variants, Correlative Scan Matching, optimization-based methods, the Normal Distributions Transform (NDT), and the like. These conventional methods often place high demands on hardware; although their accuracy is high, the amount of data processed during front-end matching is very large. In the paper "LOAM: Lidar Odometry and Mapping in Real-time", Ji Zhang proposed a SLAM method based on line and surface feature extraction; the method has no loop detection function, and the extracted line and surface features are used for front-end inter-frame matching, which gives a faster matching speed.
Disclosure of Invention
The invention aims to overcome the defects in the prior art, and provides a laser SLAM method based on facial line angular point feature extraction.
The purpose of the invention is realized by the following technical scheme:
a laser SLAM method based on facial line angular point feature extraction comprises the following steps:
(1) Initializing a map and various parameters, and allocating memory;
(2) Collecting point clouds collected by the 3D laser radar;
(3) Carrying out data preprocessing on the collected point cloud, and removing point cloud points exceeding the effective range of the laser radar;
(4) Projecting the point cloud preprocessed in step (3) into a 16 x 1800 spherical point cloud coordinate system;
(5) Removing blocking points of the point cloud obtained in the step (4);
(6) Extracting the characteristics of the point cloud in the step (5), and extracting the characteristics of a surface, a line and an angular point according to the calculation of the smoothness;
(7) Performing surface normal vector estimation on the surface and line features extracted in the step (6), calculating the curvature of points according to the surface normal vector of the point cloud, and classifying the point cloud according to different curvatures of the points to extract more accurate surface and line features and improve the accuracy of pose estimation;
(8) Performing front-end registration calculation according to the surface and line features extracted in the step (7) and the angular point feature information extracted in the step (6), and calculating pose information and odometer information of the carrier;
(9) Removing redundant parts in the point cloud, and storing key frame information, pose information and track information of a carrier;
(10) Performing loop detection according to the key frame information and the motion track information in the step (9);
(11) Performing back-end optimization on the extracted features and the existing pose according to the existing map and the latest point cloud;
(12) Updating the pose, fusing and building the map, and publishing the map information and the track information;
(13) Jumping to step (2) and continuing to run until the map building is completed or the process is closed.
Further, the preprocessing of the point cloud in step (3) is specifically as follows:
(301) Traversing all original point clouds, setting an effective distance, and using a filter to remove point clouds outside the effective distance;
(302) Taking the point cloud processed in step (301) as input, calculating the smallest cube that just wraps the point cloud, dividing this big cube into different small cubes, calculating the centroid of the points in each small cube, and approximating all points in each small cube by that centroid, thereby down-sampling the point cloud;
(303) Traversing all point clouds processed in step (302), performing a k-nearest-neighbor search on each point and calculating the average distance between the point and its k nearest neighbors; if the average distance of a point is greater than a threshold d, the point is regarded as an outlier, and after the calculation all outliers are removed;
d = mean + stddev * n
where mean is the average distance from the k searched points to this point, stddev is the standard deviation, and n is 1.0.
Further, the point cloud is a 3D point cloud whose information includes the three-dimensional coordinates (x, y, z) of each scanned point, and the specific method for projecting a point from the three-dimensional coordinate system into the spherical coordinate system is as follows:
(401) Calculating the angle α in the vertical direction and finding the line index row of the point in the vertical direction;
(402) Calculating the angle β in the horizontal direction and finding the column index col of the point in the horizontal direction;
(403) Calculating the depth distance of the point from the laser radar:
r = √(x² + y² + z²)
A point originally expressed by three parameters (x, y, z) can thus be represented by the two values (row, col).
Further, in step (5), blocking point removal refers to removing points in the point cloud that occlude one another and are close to each other.
Further, in the surface, line and corner feature extraction of step (6), a smoothness calculation is first carried out, with the following smoothness formula:
c_i = | Σ_{j∈S, j≠i} (r_j − r_i) | / (s · r_i)
The smoothness of the i-th point is calculated from the set S of s points around it, typically s = 10, where r_i is the depth distance of the i-th point and r_j is the depth distance of point j. All points in the point cloud are then sorted by smoothness, and threshold parameters b_1, b_2, b_3 are set, where b_1 > b_2 > b_3. Points with smoothness greater than b_1 are screened out and treated as corner points; points with smoothness greater than b_2 but less than b_1 are treated as line features; and points with smoothness less than b_3 are treated as surface features.
Further, in step (7), surface normal vector estimation is carried out on the line and surface features extracted in step (6), and the curvature of a point p_i is calculated from the surface normal vectors of the point cloud as follows:
(701) Selecting the current point p_i;
(702) Searching for the k nearest neighbor points of p_i using the kd-tree algorithm;
(703) Forming the covariance matrix A of these neighbors:
A = (1/k) Σ_{j=1}^{k} (p_j − p̄)(p_j − p̄)^T,  A·v_j = λ_j·v_j
where k is the number of neighboring points of p_i, p̄ = (1/k) Σ_{j=1}^{k} p_j is the three-dimensional centroid of the nearest neighbors, λ_j is the j-th eigenvalue of the covariance matrix, and v_j is the j-th eigenvector;
(704) Solving for the eigenvalues λ1, λ2, λ3 of the covariance matrix obtained in step (703);
(705) Taking the minimum eigenvalue λmin among λ1, λ2, λ3;
(706) Calculating the curvature of p_i as λmin / (λ1 + λ2 + λ3).
Further, the pose information of the carrier in step (8) is represented by a transformation matrix T, where T consists of a rotation matrix R and a translation vector t; in Euler angles the rotation matrix is R = f(θ_yaw, θ_pitch, θ_roll) and the translation vector is t = (t_x, t_y, t_z). Using the feature correspondences between the point clouds of two adjacent frames obtained in steps (7) and (6), the corresponding planes are found: for each point on the screened surface features, the closest corresponding plane in the previous frame is found, and the point-to-plane distance d_p is calculated. After the corresponding corner and line features are found, for each point on the screened line features, the closest corresponding line feature in the previous frame is found, and the point-to-line distance d_e is calculated. The transformation matrix T is then solved with the Gauss-Newton method from the following nonlinear equations:
F_p(t_z, θ_pitch, θ_roll) = d_p
F_e(t_x, t_y, θ_yaw) = d_e
After the transformation matrix is solved, the pose information and track information of the carrier are obtained from the initial values.
Further, in step (10), during loop detection the currently acquired key frame is compared with the stored historical key frame information; if a similar key frame exists, a k-nearest-neighbor search is simultaneously performed on the current track point, and if a historical track point exists within a range of r meters and the constraint conditions are satisfied, a loop is judged to exist. If a loop exists, the information coinciding with the historical key frame is removed, and the map is built using the historically stored feature information and key frame information.
Furthermore, the constraints added to loop detection in step (10) include a time constraint and a distance constraint: if the current frame and the coinciding historical key frame differ by less than t seconds, the loop is judged invalid; and if the odometry distance between the current track point and the coinciding historical track point is within s meters, the loop is judged invalid.
Compared with the prior art, the technical scheme of the invention has the following beneficial effects:
in a noisy field environment containing trees and grasses, unreliable plane and straight line features are often extracted by a classical LOAM algorithm, and the pose estimation error is increased. When the unmanned vehicle moves rapidly in the closed environment, the classic LOAM algorithm cannot achieve a reliable map building effect.
Compared with traditional laser SLAM methods, the present method reduces the amount of data to be processed and the hardware requirements while still achieving real-time localization and mapping; compared with the classical LOAM method based on line-plane feature extraction, it adds a loop detection function and improves the localization and mapping accuracy.
Drawings
FIG. 1 is a flow chart showing the specific operation of the method of the present invention.
FIG. 2 is an explanatory diagram of the blocking points that need to be culled in the present invention.
FIG. 3 illustrates the loop detection distance constraint of the present invention.
FIG. 4 shows a node and messaging diagram of the invention operating in a ROS system, where the ellipses represent node information and the rectangles represent topic message information.
FIGS. 5a and 5b compare the effect of the algorithm of the present invention with that of the conventional LOAM algorithm on the same indoor data set, where FIG. 5a shows the test result of the conventional LOAM algorithm and FIG. 5b shows the test result of the algorithm of the present invention.
Detailed Description
The invention is described in further detail below with reference to the figures and the specific embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The specific framework of the laser SLAM method based on facial line corner feature extraction is shown in FIG. 1.
The key steps are as follows:
1. initializing a map and various parameters, and allocating memory;
2. collecting point clouds collected by the 3D laser radar;
3. Carrying out data preprocessing on the collected point cloud and removing point cloud points beyond the effective range of the laser radar; the point cloud preprocessing specifically comprises the following steps:
(301) Traversing all original point clouds, setting an effective distance, and using a filter to remove point clouds outside the effective distance;
(302) Taking the point cloud processed in step (301) as input, calculating the smallest cube that just wraps the point cloud, dividing this big cube into different small cubes, calculating the centroid of the points in each small cube, and approximating all points in each cube by that centroid, thereby down-sampling the point cloud;
(303) Traversing all point clouds processed in step (302), performing a k-nearest-neighbor search on each point and calculating the average distance between the point and its k nearest neighbors; if the average distance of a point is greater than the threshold d (computed by the formula below), the point is considered an outlier, where n is usually 1.0. After the calculation is finished, the outliers are removed.
d = mean + stddev * n
where mean is the average distance from the k searched points to this point and stddev is the standard deviation. A sketch of these three steps follows.
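By way of illustration, the following is a minimal NumPy/SciPy sketch of steps (301)-(303); these three steps correspond to the familiar pass-through, voxel-grid and statistical-outlier-removal filters. The parameter values (max_range, voxel, k, n) are illustrative assumptions rather than values fixed by the patent, and the threshold d = mean + stddev * n is read here as a global threshold over the per-point mean k-nearest-neighbor distances.

```python
import numpy as np
from scipy.spatial import cKDTree

def preprocess(points, max_range=80.0, voxel=0.2, k=10, n=1.0):
    """points: (N, 3) array of lidar returns; returns the filtered cloud."""
    # (301) keep only points within the effective range of the lidar
    points = points[np.linalg.norm(points, axis=1) < max_range]

    # (302) voxel-grid down-sampling: replace all points inside each
    # small cube by their centroid
    idx = np.floor((points - points.min(axis=0)) / voxel).astype(np.int64)
    _, inv, cnt = np.unique(idx, axis=0, return_inverse=True, return_counts=True)
    centroids = np.zeros((cnt.size, 3))
    np.add.at(centroids, inv, points)
    points = centroids / cnt[:, None]

    # (303) statistical outlier removal: a point is an outlier when its
    # mean distance to its k nearest neighbors exceeds d = mean + stddev * n
    dists, _ = cKDTree(points).query(points, k=k + 1)  # column 0 is the point itself
    mean_knn = dists[:, 1:].mean(axis=1)
    d = mean_knn.mean() + mean_knn.std() * n
    return points[mean_knn < d]
```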
4. Projecting the point cloud preprocessed in step 3 into a 16 x 1800 spherical point cloud coordinate system. This embodiment is described taking the VLP-16 lidar (16 scan lines) as an example, but the invention is not limited to this lidar model; the parameters can be modified according to the radar model and still fall within the scope of the invention. The data collected by the 3D laser radar is a 3D point cloud whose information includes the three-dimensional coordinates (x, y, z) of each scanned point, and the specific method for projecting a point from the three-dimensional coordinate system into the spherical coordinate system is as follows:
(401) Calculate the angle α in the vertical direction. The laser radar is divided into 16 lines in the vertical direction with a scanning range of 32°, so the line index row of the point in the vertical direction is found as follows:
α = arctan( z / √(x² + y²) )
row = (α − ang_bottom) / ang_res_y
where ang_bottom is the starting angle in the vertical direction and the vertical resolution is ang_res_y = 2°;
(402) Calculate the angle β in the horizontal direction. The laser radar is divided into 1800 columns in the horizontal direction with a scanning range of 360°, and the column index col of the point in the horizontal direction is found as follows:
β = arctan(x / y)
col = β / ang_res_x
where the horizontal resolution is ang_res_x = 0.2°;
(403) Calculate the depth distance of the point from the laser radar:
r = √(x² + y² + z²)
In this step, a point that originally needed three parameters (x, y, z) can be represented by the two values (row, col), which lays the basis for subsequent feature extraction and parameter storage. For example, if the distance parameters of the point cloud are stored in a matrix M, the depth r is stored at row row and column col of M. Similarly, in addition to distance information, information such as position and odometry can be stored in the same way. A sketch of this projection follows.
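The following sketch builds the 16 x 1800 range image described above. The values ang_bottom = −15°, ang_res_y = 2° and ang_res_x = 0.2° are assumptions matching a VLP-16-style scanner; arctan2 is used so that the horizontal angle β = arctan(x/y) is resolved over the full 360° sweep.

```python
import numpy as np

ANG_BOTTOM, ANG_RES_Y, ANG_RES_X = -15.0, 2.0, 0.2  # assumed VLP-16-like values
N_ROWS, N_COLS = 16, 1800

def project(points):
    """Return the 16 x 1800 matrix M holding the depth r of each point."""
    M = np.zeros((N_ROWS, N_COLS))
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    alpha = np.degrees(np.arctan2(z, np.hypot(x, y)))  # (401) vertical angle
    beta = np.degrees(np.arctan2(x, y))                # (402) horizontal angle
    row = np.round((alpha - ANG_BOTTOM) / ANG_RES_Y).astype(int)
    col = np.round((beta % 360.0) / ANG_RES_X).astype(int) % N_COLS
    r = np.sqrt(x**2 + y**2 + z**2)                    # (403) depth distance
    keep = (row >= 0) & (row < N_ROWS)                 # drop out-of-range rows
    M[row[keep], col[keep]] = r[keep]
    return M
```

Storing r at (row, col) of M reproduces the matrix storage described above; parallel matrices can hold intensity or other per-point information.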
5. Removing blocking points from the point cloud of step 4. Blocking points are points in the point cloud that occlude one another and lie close together; they are illustrated in FIG. 2, where point O is the position of the laser radar. The points in the black filled area, i.e., the range belonging to the farther plane B, are deleted, while the points in the nearer range on plane A are retained. A sketch of one way to mark such points follows.
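The patent defines blocking points only through FIG. 2, so the following is a hedged sketch of one common way to mark them on a single row of the range image: wherever two adjacent columns show a large depth jump, the points on the farther surface (plane B in FIG. 2) are culled together with a small padding of neighbors. The thresholds depth_jump and pad are illustrative assumptions, not values from the patent.

```python
import numpy as np

def mark_blocking(r, depth_jump=0.3, pad=5):
    """r: depths of one scan row (e.g. length 1800, 0 = no return).
    Returns a boolean mask of points to cull."""
    cull = np.zeros(r.size, dtype=bool)
    for i in range(r.size - 1):
        if r[i] <= 0 or r[i + 1] <= 0:
            continue  # one of the two cells holds no return
        if abs(r[i + 1] - r[i]) > depth_jump:
            if r[i] > r[i + 1]:
                cull[max(0, i - pad):i + 1] = True   # i lies on the farther plane
            else:
                cull[i + 1:i + 2 + pad] = True       # i+1 lies on the farther plane
    return cull
```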
6. Extracting features from the point cloud of step 5, extracting surface, line and corner features based on a smoothness calculation. In the extraction, the smoothness is computed first, with the following formula:
c_i = | Σ_{j∈S, j≠i} (r_j − r_i) | / (s · r_i)
We calculate the smoothness of the i-th point from the set S of s points around it, usually s = 10, where r_i is the depth distance of the i-th point and r_j is the depth distance of point j. The points of the point cloud are then sorted by smoothness, and threshold parameters b_1, b_2, b_3 are set, where b_1 > b_2 > b_3. Points with smoothness greater than b_1 are screened out and treated as corner points; points with smoothness greater than b_2 but less than b_1 are treated as line features; and points with smoothness less than b_3 are treated as surface features. A sketch of this computation follows.
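A minimal sketch of the smoothness formula and the three-threshold screening, assuming the s neighbors are taken symmetrically along the same scan row; the threshold values b1, b2 and b3 are illustrative placeholders, since the patent leaves them as tunable parameters.

```python
import numpy as np

def smoothness(r, s=10):
    """c_i = |sum_{j in S, j != i} (r_j - r_i)| / (s * r_i) over the s
    points S around i (s/2 on each side along the scan row)."""
    half = s // 2
    c = np.full(r.size, np.nan)
    for i in range(half, r.size - half):
        neigh = np.r_[r[i - half:i], r[i + 1:i + 1 + half]]
        c[i] = abs(np.sum(neigh - r[i])) / (s * abs(r[i]))
    return c

def screen(c, b1=1.0, b2=0.1, b3=0.01):
    """Split point indices into corner, line and surface features."""
    corners  = np.where(c > b1)[0]                # smoothness > b1
    lines    = np.where((c > b2) & (c <= b1))[0]  # b2 < smoothness <= b1
    surfaces = np.where(c < b3)[0]                # smoothness < b3
    return corners, lines, surfaces
```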
7. Performing surface normal vector estimation on the surface and line features extracted in step 6, calculating the curvature of each point from the surface normal vectors of the point cloud, and classifying the point cloud according to the different curvatures of the points, so as to extract more accurate surface and line features and improve the accuracy of the pose estimation. The curvature of a point p_i is calculated from the surface normal vectors of the point cloud as follows:
(701) Select the current point p_i;
(702) Search for the k nearest neighbor points of p_i using the kd-tree algorithm;
(703) Form the covariance matrix A of these neighbors:
A = (1/k) Σ_{j=1}^{k} (p_j − p̄)(p_j − p̄)^T,  A·v_j = λ_j·v_j
where k is the number of neighboring points of p_i, p̄ = (1/k) Σ_{j=1}^{k} p_j is the three-dimensional centroid of the nearest neighbors, λ_j is the j-th eigenvalue of the covariance matrix, and v_j is the j-th eigenvector;
(704) Solve for the eigenvalues λ1, λ2, λ3 of the covariance matrix obtained in step (703);
(705) Take the minimum eigenvalue λmin among λ1, λ2, λ3;
(706) Calculate the curvature of p_i as λmin / (λ1 + λ2 + λ3).
After the curvature of p_i has been calculated, the extracted line and surface features are further screened using a point cloud segmentation method based on the normal vectors, as sketched below.
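A sketch of steps (701)-(706): each point's curvature λmin/(λ1+λ2+λ3) is estimated from the eigenvalues of the covariance matrix of its k nearest neighbors. The kd-tree comes from SciPy, and k = 10 is an illustrative choice; low-curvature points would then be kept as reliable surface features and the rest re-screened.

```python
import numpy as np
from scipy.spatial import cKDTree

def curvatures(points, k=10):
    """points: (N, 3) array; returns the curvature of every point."""
    tree = cKDTree(points)
    _, nn = tree.query(points, k=k)        # (702) k nearest neighbors of each p_i
    curv = np.zeros(len(points))
    for i, idx in enumerate(nn):
        q = points[idx]
        centroid = q.mean(axis=0)          # three-dimensional centroid p_bar
        A = (q - centroid).T @ (q - centroid) / k  # (703) covariance matrix A
        lam = np.linalg.eigvalsh(A)        # (704) eigenvalues, ascending order
        curv[i] = lam[0] / lam.sum()       # (705)-(706) lambda_min / (l1 + l2 + l3)
    return curv
```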
8. Performing front-end registration calculation according to the surface and line features extracted in the step 7 and the angular point feature information extracted in the step 6, and calculating pose information and odometer information of the carrier;
the pose information of the carrier is represented by a transformation matrix T, T can be represented by a rotation matrix R and a vector T, and the rotation matrix is represented by an Euler angle as R = f (theta) yaw ,θ pitch ,θ roll ) The translation vector may be represented as t = (t) x ,t y ,t z ) The feature corresponding relation of two adjacent frames of point clouds can be obtained by the surfaces, lines and angular points extracted in the steps 7 and 6, a corresponding plane is found, the point on the surface feature screened by people is found, the plane closest to the point in the previous frame data corresponding to the point is found, and the distance d from the point to the plane is calculated p . After finding out the corresponding corner point and line feature, for the points on the screened line feature, finding out the line feature which is closest to the point in the last frame data corresponding to the points, and calculating the distance d from the point to the line segment e Then, the transformation matrix T is solved by using the gauss-newton method according to the following nonlinear equation system.
F P (t z ,θ pitch ,θ roll )=d p
F e (t x ,t yyaw )=d e
After the transformation matrix is solved, the pose information and the track information of the carrier can be solved according to the initial value.
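The two residuals that the Gauss-Newton iteration drives to zero can be written compactly. The sketch below computes the point-to-plane distance d_p (for surface features, from three plane points) and the point-to-line distance d_e (for line features, from two line points) given hypothetical correspondences; the alternating solve over (t_z, θ_pitch, θ_roll) and (t_x, t_y, θ_yaw) is omitted here.

```python
import numpy as np

def point_to_plane(p, a, b, c):
    """d_p: distance from point p to the plane spanned by points a, b, c."""
    n = np.cross(b - a, c - a)             # plane normal
    return abs(np.dot(p - a, n)) / np.linalg.norm(n)

def point_to_line(p, a, b):
    """d_e: distance from point p to the line through points a and b."""
    return np.linalg.norm(np.cross(p - a, b - a)) / np.linalg.norm(b - a)
```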
9. Removing redundant parts of the point cloud, and storing the key frame information, pose information and track information of the carrier. Specifically: the extracted surface, line and corner features are screened so that their number always stays within a qualified range; the pose and track point information is screened so that two adjacent poses are never too close together; and the track points are down-sampled so that a minimum spacing between track points is maintained. This information is stored as the key frame information.
10. Performing loop detection based on the key frame information and motion track information of step 9.
During loop detection, the currently acquired key frame is compared with the stored historical key frame information. If a similar key frame exists, a k-nearest-neighbor search is simultaneously performed on the current track point, and if a historical track point exists within a range of r meters and the following constraints are satisfied, a loop is judged to exist. If a loop exists, the information coinciding with the historical key frame is removed, and the map is built using the historically stored feature information and key frame information. The invention adds a time constraint and a distance constraint to loop detection: if the current frame and the coinciding historical key frame differ by less than t seconds, the loop is judged invalid; and if the odometry distance between the current track point and the coinciding historical track point is within s meters, the loop is judged invalid. The loop detection distance constraint is shown in FIG. 3: if T1, T2 and Tn represent three track points within the neighbor search range, the method of the invention judges that a loop exists between T1 and Tn but not between T1 and T2. (r, t and s in this step are all adjustable parameters that can be tuned to the surrounding environment; in a typical structured environment, r = 5, t = 10 and s = 10 are usually taken.) A sketch of this candidate test follows.
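A sketch of the loop candidate test under the stated constraints, assuming per-keyframe timestamps and cumulative odometry distances are available; r = 5, t = 10 and s = 10 follow the typical values given above.

```python
import numpy as np
from scipy.spatial import cKDTree

def loop_candidate(track_xyz, stamps, odom_dist, r=5.0, t=10.0, s=10.0):
    """track_xyz: (N, 3) track points; stamps: acquisition times; odom_dist:
    cumulative odometry distance at each track point. Tests the latest point
    against the history and returns a valid loop partner index, or None."""
    cur = len(track_xyz) - 1
    tree = cKDTree(track_xyz[:cur])
    for j in tree.query_ball_point(track_xyz[cur], r):  # within r meters
        if stamps[cur] - stamps[j] <= t:       # too recent: loop invalid
            continue
        if odom_dist[cur] - odom_dist[j] <= s:  # too close along the path: invalid
            continue
        return j
    return None
```

This reproduces the behavior of FIG. 3: Tn (old in time, far along the odometry) is accepted as a loop partner, while T2 (recent and nearby along the path) is rejected.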
11. Performing back-end optimization on the extracted features and the existing pose according to the existing map and the latest point cloud;
12. Updating the pose, fusing and building the map, and publishing the map information and track information.
13. Jumping to step 2 and continuing to run until the map building is completed or the process is closed.
Specifically, under the ROS system in an Ubuntu 16.04 environment, the method of the present invention can be run as follows and achieves good results; the node and message graph is shown in FIG. 4.
(1) The data publishing node of the data set to be tested is opened with rosbag play, publishing the point cloud collected by the laser radar; the four nodes of the algorithm are then run.
(2) The first node is the point cloud projection node; it processes the point cloud published in step (1). The processing comprises point cloud preprocessing and point cloud projection, projecting the point cloud into the spherical coordinate system.
(3) After the published original point cloud has been processed, the first node publishes the projected point cloud and the preprocessed outlier points as two topics to the second node.
(4) The second node is the inter-frame matching node; through the ROS system it receives the messages in the topics published by the first node, performs coordinate transformation on the outlier points, performs dual feature extraction on the projected point cloud, and calculates the pose and odometry information of the carrier by registering the extracted surface, line and corner features.
(5) After the processing of step (4), the second node likewise publishes two topics to the third node: the first topic is the transformed outlier points, and the second is the calculated pose and odometry information.
(6) The third node is the loop detection and back-end optimization node; by receiving the information published by the second node it applies the Levenberg-Marquardt method to nonlinearly optimize the odometry and pose information of step (4), then extracts key frames from the received information and performs loop detection on the extracted key frames according to certain constraints.
(7) After the processing in the step (6), the third node issues the key frame information, the odometry information and the track point information to the fourth node.
(8) The fourth node is the fusion mapping node; by receiving the messages published by the third node it generates maps in different formats and displays the track points of the carrier's run on the map, so that point cloud maps, octree maps and other map forms can be generated as required.
As shown in FIGS. 5a and 5b, with only the laser point cloud as input, the algorithm of the present invention and the conventional LOAM algorithm were tested on the same indoor data set; it can be seen that the algorithm of the present invention produces smaller drift and finds the loop, performing significantly better than the conventional LOAM algorithm.
The present invention is not limited to the above-described embodiments. The foregoing description of the specific embodiments is intended to describe and illustrate the technical solutions of the present invention, and the above specific embodiments are merely illustrative and not restrictive. Those skilled in the art can make many changes and modifications to the invention without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (9)

1. A laser SLAM method based on facial line angular point feature extraction is characterized by comprising the following steps:
(1) Initializing a map and various parameters, and allocating memory;
(2) Collecting point clouds collected by the 3D laser radar;
(3) Carrying out data preprocessing on the collected point cloud, and removing point cloud points exceeding the effective range of the laser radar;
(4) Projecting the point cloud preprocessed in step (3) into a 16 x 1800 spherical point cloud coordinate system;
(5) Removing blocking points of the point cloud obtained in the step (4);
(6) Extracting the characteristics of the point cloud in the step (5), and extracting the characteristics of a surface, a line and an angular point according to the calculation of the smoothness;
(7) Performing surface normal vector estimation on the surface and line features extracted in the step (6), calculating the curvature of points according to the surface normal vector of the point cloud, and classifying the point cloud according to different curvatures of the points to extract more accurate surface and line features and improve the accuracy of pose estimation;
(8) Performing front-end registration calculation according to the surface and line characteristics extracted in the step (7) and the angular point characteristic information extracted in the step (6), and calculating pose information and odometer information of the carrier;
(9) Removing redundant parts in the point cloud, and storing key frame information, pose information and track information of a carrier;
(10) Performing loop detection according to the key frame information and the motion track information in the step (9);
(11) Performing back-end optimization on the extracted features and the existing pose according to the existing map and the latest point cloud;
(12) Updating the pose, fusing and building a map, and releasing map information and track information;
(13) Jumping to step (2) and continuing to run until the map building is completed or the process is closed.
2. The laser SLAM method based on facial line corner feature extraction as claimed in claim 1, wherein the pre-processing of the point cloud in step (3) is as follows:
(301) Traversing all original point clouds, setting effective distances, and using a filter to remove point clouds outside the effective distances;
(302) Taking the point cloud processed in step (301) as input, calculating the smallest cube that just wraps the point cloud, dividing this big cube into different small cubes, calculating the centroid of the points in each small cube, and approximating all points in each small cube by that centroid to realize the down-sampling of the point cloud;
(303) Traversing all point clouds processed in step (302), performing a k-nearest-neighbor search on each point and calculating the average distance between the point and its k nearest neighbors; if the average distance of a point is greater than a threshold d, the point is considered an outlier, and after the calculation all outliers are eliminated;
d = mean + stddev * n
where mean is the average distance from the k searched points to this point, stddev is the standard deviation, and n is 1.0.
3. The laser SLAM method based on facial line corner feature extraction as claimed in claim 1, wherein in step (4) the point cloud is a 3D point cloud whose information includes the three-dimensional coordinates (x, y, z) of each scanned point, and the specific method for projecting a point from the three-dimensional coordinate system into the spherical coordinate system is as follows:
(401) Calculating the angle α in the vertical direction and finding the line index row of the point in the vertical direction;
(402) Calculating the angle β in the horizontal direction and finding the column index col of the point in the horizontal direction;
(403) Calculating the depth distance of the point from the laser radar:
r = √(x² + y² + z²)
A point originally expressed by three parameters (x, y, z) can thus be represented by the two values (row, col).
4. The laser SLAM method based on facial line corner feature extraction as claimed in claim 1, wherein in step (5) blocking point removal refers to removing points in the point cloud that occlude one another and are close to each other.
5. The laser SLAM method based on facial line corner feature extraction as claimed in claim 1, wherein in the surface, line and corner feature extraction of step (6), a smoothness calculation is first carried out, with the following smoothness formula:
c_i = | Σ_{j∈S, j≠i} (r_j − r_i) | / (s · r_i)
wherein the smoothness of the i-th point is calculated from the set S of s points around it, typically s = 10, where r_i is the depth distance of the i-th point and r_j is the depth distance of point j; all points in the point cloud are then sorted by smoothness, and threshold parameters b_1, b_2, b_3 are set, where b_1 > b_2 > b_3; points with smoothness greater than b_1 are screened out and treated as corner points; points with smoothness greater than b_2 but less than b_1 are treated as line features; and points with smoothness less than b_3 are treated as surface features.
6. The laser SLAM method based on facial line corner feature extraction as claimed in claim 1, wherein in step (7) surface normal vector estimation is performed on the line and surface features extracted in step (6), and the curvature of a point p_i is calculated from the surface normal vectors of the point cloud as follows:
(701) Selecting the current point p_i;
(702) Searching for the k nearest neighbor points of p_i using the kd-tree algorithm;
(703) Forming the covariance matrix A of these neighbors:
A = (1/k) Σ_{j=1}^{k} (p_j − p̄)(p_j − p̄)^T,  A·v_j = λ_j·v_j
where k is the number of neighboring points of p_i, p̄ = (1/k) Σ_{j=1}^{k} p_j is the three-dimensional centroid of the nearest neighbors, λ_j is the j-th eigenvalue of the covariance matrix, and v_j is the j-th eigenvector;
(704) Solving for the eigenvalues λ1, λ2, λ3 of the covariance matrix obtained in step (703);
(705) Taking the minimum eigenvalue λmin among λ1, λ2, λ3;
(706) Calculating the curvature of p_i as λmin / (λ1 + λ2 + λ3).
7. The laser SLAM method based on facial line corner feature extraction as claimed in claim 1, wherein the pose information of the carrier in step (8) is represented by a transformation matrix T, where T consists of a rotation matrix R and a translation vector t; in Euler angles the rotation matrix is R = f(θ_yaw, θ_pitch, θ_roll) and the translation vector is t = (t_x, t_y, t_z); using the feature correspondences between the point clouds of two adjacent frames obtained in steps (7) and (6), the corresponding planes are found: for each point on the screened surface features, the closest corresponding plane in the previous frame is found, and the point-to-plane distance d_p is calculated; after the corresponding corner and line features are found, for each point on the screened line features, the closest corresponding line feature in the previous frame is found, and the point-to-line distance d_e is calculated; the transformation matrix T is then solved with the Gauss-Newton method from the following nonlinear equations:
F_p(t_z, θ_pitch, θ_roll) = d_p
F_e(t_x, t_y, θ_yaw) = d_e
and after the transformation matrix is solved, the pose information and track information of the carrier are obtained from the initial values.
8. The laser SLAM method based on facial line corner feature extraction as claimed in claim 1, wherein in step (10), during loop detection, the currently acquired key frame is compared with the stored historical key frame information; if a similar key frame exists, a k-nearest-neighbor search is simultaneously performed on the current track point, and if a historical track point exists within a range of r meters and the constraint conditions are satisfied, a loop is judged to exist; if a loop exists, the information coinciding with the historical key frame is removed, and the map is built using the historically stored feature information and key frame information.
9. The laser SLAM method based on facial line corner feature extraction as claimed in claim 8, wherein the constraints added to loop detection in step (10) include a time constraint and a distance constraint: if the current frame and the coinciding historical key frame differ by less than t seconds, the loop is judged invalid; and if the odometry distance between the current track point and the coinciding historical track point is within s meters, the loop is judged invalid.
CN202010317771.9A 2020-04-21 2020-04-21 Laser SLAM method based on facial line angular point feature extraction Active CN111583369B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010317771.9A CN111583369B (en) 2020-04-21 2020-04-21 Laser SLAM method based on facial line angular point feature extraction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010317771.9A CN111583369B (en) 2020-04-21 2020-04-21 Laser SLAM method based on facial line angular point feature extraction

Publications (2)

Publication Number Publication Date
CN111583369A CN111583369A (en) 2020-08-25
CN111583369B true CN111583369B (en) 2023-04-18

Family

ID=72116787

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010317771.9A Active CN111583369B (en) 2020-04-21 2020-04-21 Laser SLAM method based on facial line angular point feature extraction

Country Status (1)

Country Link
CN (1) CN111583369B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111966109B (en) * 2020-09-07 2021-08-17 中国南方电网有限责任公司超高压输电公司天生桥局 Inspection robot positioning method and device based on flexible direct current converter station valve hall
CN112241002B (en) * 2020-10-11 2022-10-18 西北工业大学 Novel robust closed-loop detection method based on Karto SLAM
CN112305558B (en) * 2020-10-22 2023-08-01 中国人民解放军战略支援部队信息工程大学 Mobile robot track determination method and device using laser point cloud data
CN112543859B (en) * 2020-10-28 2022-07-15 华为技术有限公司 Positioning method, positioning device, electronic equipment and storage medium
CN112665575B (en) * 2020-11-27 2023-12-29 重庆大学 SLAM loop detection method based on mobile robot
CN112348897A (en) * 2020-11-30 2021-02-09 上海商汤临港智能科技有限公司 Pose determination method and device, electronic equipment and computer readable storage medium
CN112684430A (en) * 2020-12-23 2021-04-20 哈尔滨工业大学(威海) Indoor old person walking health detection method and system, storage medium and terminal
CN112767490B (en) * 2021-01-29 2022-06-21 福州大学 Outdoor three-dimensional synchronous positioning and mapping method based on laser radar
CN112837241A (en) * 2021-02-09 2021-05-25 贵州京邦达供应链科技有限公司 Method and device for removing image-building ghost and storage medium
CN112907491B (en) * 2021-03-18 2023-08-22 中煤科工集团上海有限公司 Laser point cloud loop detection method and system suitable for underground roadway
CN113379841B (en) * 2021-06-21 2024-04-30 上海仙工智能科技有限公司 Laser SLAM method based on phase correlation method and factor graph and readable storage medium thereof
WO2023000294A1 (en) * 2021-07-23 2023-01-26 深圳市大疆创新科技有限公司 Pose estimation method, laser-radar-inertial odometer, movable platform and storage medium
CN113298875B (en) * 2021-07-28 2021-10-15 浙江华睿科技股份有限公司 Laser positioning data verification method and device, electronic equipment and storage medium
CN113503875B (en) * 2021-09-10 2021-12-14 江苏霆升科技有限公司 Data association graph establishing method based on extended features
CN113744317B (en) * 2021-09-13 2024-03-15 浙江大学湖州研究院 Ackerman chassis track generation method only depending on point cloud under unstructured road surface
CN116030212B (en) * 2023-03-28 2023-06-02 北京集度科技有限公司 Picture construction method, equipment, vehicle and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016162568A1 (en) * 2015-04-10 2016-10-13 The European Atomic Energy Community (Euratom), Represented By The European Commission Method and device for real-time mapping and localization
CN109522832A (en) * 2018-11-06 2019-03-26 浙江工业大学 It is a kind of based on order cloud sheet section matching constraint and track drift optimization winding detection method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016162568A1 (en) * 2015-04-10 2016-10-13 The European Atomic Energy Community (Euratom), Represented By The European Commission Method and device for real-time mapping and localization
CN109522832A (en) * 2018-11-06 2019-03-26 浙江工业大学 It is a kind of based on order cloud sheet section matching constraint and track drift optimization winding detection method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Sun Mingwei et al., "Trajectory Planning and Interactive Simulation Software Design for Flight Vehicle with Specified Terminal Time", Journal of System Simulation, 2018, full text. *
Hu Chaofang; Yang Na; Wang Na, "Fuzzy multi-objective distributed cooperative tracking of ground targets by multiple UAVs", Control Theory & Applications, 2018, No. 8, full text. *

Also Published As

Publication number Publication date
CN111583369A (en) 2020-08-25

Similar Documents

Publication Publication Date Title
CN111583369B (en) Laser SLAM method based on facial line angular point feature extraction
CN108152831B (en) Laser radar obstacle identification method and system
CN111126269B (en) Three-dimensional target detection method, device and storage medium
Huang Review on LiDAR-based SLAM techniques
CN113168717B (en) Point cloud matching method and device, navigation method and equipment, positioning method and laser radar
CN111429514A (en) Laser radar 3D real-time target detection method fusing multi-frame time sequence point clouds
CN111337941A (en) Dynamic obstacle tracking method based on sparse laser radar data
Zhou et al. T-LOAM: Truncated least squares LiDAR-only odometry and mapping in real time
Weon et al. Object Recognition based interpolation with 3d lidar and vision for autonomous driving of an intelligent vehicle
CN114526745B (en) Drawing construction method and system for tightly coupled laser radar and inertial odometer
CN113432600A (en) Robot instant positioning and map construction method and system based on multiple information sources
Ding et al. Vehicle pose and shape estimation through multiple monocular vision
CN113506318B (en) Three-dimensional target perception method under vehicle-mounted edge scene
CN110427797B (en) Three-dimensional vehicle detection method based on geometric condition limitation
CN113985445A (en) 3D target detection algorithm based on data fusion of camera and laser radar
US20230083965A1 (en) Map construction method, apparatus and storage medium
CN114120075A (en) Three-dimensional target detection method integrating monocular camera and laser radar
CN114325634A (en) Method for extracting passable area in high-robustness field environment based on laser radar
CN116449384A (en) Radar inertial tight coupling positioning mapping method based on solid-state laser radar
CN114648584B (en) Robustness control method and system for multi-source fusion positioning
CN114217665A (en) Camera and laser radar time synchronization method, device and storage medium
CN115908539A (en) Target volume automatic measurement method and device and storage medium
Cai et al. A lightweight feature map creation method for intelligent vehicle localization in urban road environments
Ma et al. RoLM: Radar on LiDAR map localization
Zhang et al. A LiDAR-intensity SLAM and loop closure detection method using an intensity cylindrical-projection shape context descriptor

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant