CN114299386A - Laser SLAM method integrating laser odometer and loop detection - Google Patents
- Publication number
- CN114299386A (application CN202111552395.2A)
- Authority
- CN
- China
- Prior art keywords
- pose
- frame
- feature
- loop
- map
- Prior art date: 2021-12-17
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
Abstract
The invention relates to a laser SLAM method integrating a laser odometer and loop detection, and belongs to the technical field of automatic driving. The method comprises the following steps: extracting features from the input point cloud using the smoothness between each point and the points in its neighborhood; matching the feature points against a feature sub-map to obtain the pose of the current-frame vehicle in the world coordinate system; meanwhile, passing the feature points to a loop detection module that constructs a global descriptor for loop detection and, if a loop is detected, outputs the point cloud of the loop frame; obtaining the pose transformation between the current-frame vehicle pose and the loop-frame vehicle pose with a feature-point-based inter-frame matching method; and adding the pose transformation between the current frame and the previous frame and the pose transformation between the current frame and the loop frame to a pose graph, performing global pose optimization, and finally outputting the new poses. The invention reduces the computational cost and improves the effect of global pose optimization.
Description
Technical Field
The invention belongs to the technical field of automatic driving, and relates to a laser SLAM method integrating a laser odometer and loop detection.
Background
Lidar-based simultaneous localization and mapping (laser SLAM) has evolved into a three-part framework: front-end odometry, loop detection, and global optimization. Most existing 3D laser SLAM algorithms already achieve good results on public datasets, but some limitations remain. The biggest problem is that many laser SLAM algorithms contain only a front-end odometer without loop detection, so vehicle pose estimation errors accumulate as the scene grows. Although many laser SLAM methods do adopt loop detection, most of them build pose constraints directly on the output of a particular loop-closure method, so SLAM positioning accuracy cannot reach an ideal level.
In summary, the existing laser SLAM methods mainly have the following problems and shortcomings:
(1) The front-end odometer cannot eliminate accumulated error, so positioning accuracy in large scenes is poor.
(2) Most laser SLAM methods that fuse loop detection use raw point-cloud matching to construct the vehicle pose constraints between loop frames, so their computational efficiency is low.
Disclosure of Invention
In view of this, the present invention aims to provide a laser SLAM method integrating a laser odometer and loop detection, which addresses the unsatisfactory positioning accuracy of existing laser SLAM methods. The method combines the existing F-LOAM and Scan Context methods, adds an adaptive distance check on the detected loops, and computes the vehicle pose constraint between a pair of loop frames with a feature-point-based method, thereby reducing the computational cost and improving the effect of global pose optimization.
In order to achieve the purpose, the invention provides the following technical scheme:
a laser SLAM method for fusing laser odometer and loop detection comprises the following steps:
S1: extracting features from the raw point cloud of the current frame using the smoothness between each point and the points in its neighborhood;
S2: matching the current-frame feature point cloud (planar points and boundary points) against the previously built feature sub-map to obtain the pose of the current-frame vehicle in the world coordinate system;
meanwhile, passing the feature points to a loop detection module, which constructs a global descriptor for loop detection and, if a loop is detected, outputs the point cloud of the loop frame;
S3: obtaining the pose transformation between the current-frame vehicle pose and the loop-frame vehicle pose with a feature-point-based inter-frame matching method;
S4: adding the pose transformation between the current-frame and previous-frame vehicle poses and the pose transformation between the current-frame and loop-frame vehicle poses to a pose graph, performing global pose optimization, and finally outputting the new poses.
Further, step S1 specifically includes the following steps:
S11: the point and its neighborhood points are defined as follows: the number of points collected by the lidar in the horizontal direction is larger than in the vertical direction, so the horizontal point cloud acquired by each laser beam carries rich features. Let P^m_(k,i) denote the i-th point in the m-th laser beam of the k-th lidar frame; its neighborhood points are the adjacent points on the same laser beam;
S12: the smoothness calculation is as follows: the point cloud of each laser beam in frame k is sorted by the angle between each point and the lidar X axis, forming ring data connected end to end; the neighborhood S of P^m_(k,i) then contains the same number of points on its clockwise and counterclockwise sides; for each point, a smoothness value c is computed from the points in its neighborhood, where c represents the smoothness of the current point.
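A minimal Python sketch of this smoothness computation, assuming the standard LOAM/F-LOAM definition c = ||Σ_{j∈S}(P_j − P_i)|| / (|S|·||P_i||) on which the method builds; the half-window size of 5 is an illustrative choice, not specified by this text.

```python
import numpy as np

def smoothness(ring_points: np.ndarray, i: int, half_window: int = 5) -> float:
    """Smoothness c of the i-th point on one laser ring (points sorted by horizontal angle)."""
    n = ring_points.shape[0]
    p_i = ring_points[i]
    # the ring is closed end to end, so neighbor indices wrap around
    neighbors = np.array([ring_points[(i + k) % n]
                          for k in range(-half_window, half_window + 1) if k != 0])
    diff_sum = np.sum(neighbors - p_i, axis=0)
    # large c -> boundary (edge) feature, small c -> planar feature
    return float(np.linalg.norm(diff_sum) / (len(neighbors) * np.linalg.norm(p_i)))
```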
Further, in step S2, obtaining the pose of the current-frame vehicle in the world coordinate system specifically includes the following steps:
S201: constructing the feature sub-map, specifically: assuming the current frame is k, the feature point cloud of frame k-1 is projected into the world coordinate system using the pose of the frame k-1 vehicle in the world coordinate system and added to the feature map of frame k-2 to obtain the frame k-1 feature map; the point cloud data of the first frame is used directly as the feature map; the frame k-1 feature sub-map is obtained by cropping from the frame k-1 feature map the feature points that lie within a set threshold range;
S202: finding the feature line/plane corresponding to each current-frame feature point in the feature sub-map, specifically: the planar feature points and the boundary feature points of the feature sub-map are stored in two separate 3D KD-trees; each planar feature point and each boundary feature point of the current frame finds its five nearest points in the corresponding feature point cloud with the KD-tree, and the covariance matrix formed by these five points is computed to obtain the feature line/plane corresponding to each current-frame feature point in the feature sub-map; a feature line is represented by a point on the line and the direction vector of the line segment, and a feature plane is represented by a point on the plane and its normal vector;
S203: obtaining the pose of the current-frame vehicle in the world coordinate system by feature point cloud matching, specifically: first, the distances from the feature points to their feature lines/planes in the sub-map are constructed, where d_e(P_e) denotes the distance from a boundary feature point to its feature line in the sub-map, d_s(P_s) denotes the distance from a planar feature point to its feature plane in the sub-map, T^w_k denotes the pose of the current-frame vehicle in the world coordinate system, P_e denotes a boundary point in the vehicle coordinate system, and P_s denotes a planar point in the vehicle coordinate system;
S204: optimizing the pose transformation between the current frame and the sub-map by minimizing the distances from the feature points to their feature lines/planes.
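A sketch of the distances d_e(P_e) and d_s(P_s) used in S203-S204, assuming the usual point-to-line and point-to-plane forms adopted by F-LOAM, with the line represented by a point and a direction vector and the plane by a point and a normal as defined in S202; variable names are illustrative.

```python
import numpy as np

def to_world(pose_w: np.ndarray, p: np.ndarray) -> np.ndarray:
    """Map a point from the vehicle frame into the world frame (pose_w is a 4x4 matrix)."""
    return pose_w[:3, :3] @ p + pose_w[:3, 3]

def d_e(pose_w, p_e, line_point, line_dir):
    """Distance of a boundary point P_e to its matched feature line in the sub-map."""
    u = line_dir / np.linalg.norm(line_dir)
    return np.linalg.norm(np.cross(to_world(pose_w, p_e) - line_point, u))

def d_s(pose_w, p_s, plane_point, plane_normal):
    """Distance of a planar point P_s to its matched feature plane in the sub-map."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return abs(np.dot(to_world(pose_w, p_s) - plane_point, n))

# S204: the pose pose_w (T^w_k) is obtained by minimizing the sum of all d_e and d_s
# residuals over the feature correspondences, e.g. with Gauss-Newton or Levenberg-Marquardt.
```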
Further, in step S2, the feature points are passed to a loop detection module to construct a global descriptor for loop detection, and if a loop is detected, the point cloud of the loop frame is output; this specifically includes the following steps:
S211: passing the current-frame feature point cloud into the Scan Context framework to obtain a preliminary loop detection result;
S212: using the odometry pose estimates of the two frames detected as a loop, computing the odometry estimate of the pose transformation matrix between them;
S213: computing the distance between the two frames from the odometry estimate of the pose transformation matrix, where x, y and z are the translation components taken from the pose transformation matrix;
S214: discarding as a false detection any loop whose distance is greater than the threshold.
The threshold is computed adaptively as a function of k, where k denotes the number of frames between the current frame and the last loop detection frame.
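A sketch of the loop verification in S212-S214, assuming poses are 4x4 homogeneous matrices and the distance is the Euclidean norm of the x, y, z translation components; the threshold function below is only a hypothetical placeholder, since the exact adaptive formula is not reproduced in this text.

```python
import numpy as np

def loop_is_valid(T_a_w: np.ndarray, T_b_w: np.ndarray, k: int,
                  base: float = 3.0, per_frame: float = 0.05) -> bool:
    """Keep a Scan Context loop candidate only if the odometry-estimated
    distance between the two frames is below an adaptive threshold."""
    # S212: odometry estimate of the pose transformation matrix between the two frames
    T_ab = np.linalg.inv(T_a_w) @ T_b_w
    # S213: distance from the x, y, z components of the transformation matrix
    x, y, z = T_ab[:3, 3]
    dist = (x * x + y * y + z * z) ** 0.5
    # S214: hypothetical adaptive threshold growing with the frame gap k
    # (placeholder only; the patent's actual threshold formula differs)
    threshold = base + per_frame * k
    return dist < threshold
```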
Further, step S3 specifically includes the following steps:
S31: the frame that forms a loop with the current frame is referred to as the loop frame; the loop frame and its surrounding frames are projected into the world coordinate system to form a feature sub-map;
S32: computing the pose transformation matrix between the current frame and the loop frame by the same method as in S2.
Further, step S4 specifically includes the following steps:
S41: using the pose T^w_(k-1) of the vehicle at frame k-1 in the world coordinate system and the pose T^w_k of the vehicle at frame k, computing the pose transformation of the vehicle from frame k-1 to frame k and adding it to the pose graph;
S42: adding the pose transformation matrix between the current frame and the loop frame to the pose graph;
S43: performing global optimization on the pose graph to obtain a new pose for each frame.
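A minimal sketch of the pose-graph bookkeeping in S41-S43, assuming 4x4 homogeneous poses; the optimizer itself is only indicated by a comment, and edge noise models are omitted.

```python
import numpy as np

def relative_pose(T_prev_w: np.ndarray, T_curr_w: np.ndarray) -> np.ndarray:
    """Pose transformation of the vehicle from frame k-1 to frame k (4x4 matrices)."""
    return np.linalg.inv(T_prev_w) @ T_curr_w

# the pose graph is kept as a list of (from_index, to_index, relative transform) edges
pose_graph_edges = []

def add_odometry_edge(k: int, T_prev_w: np.ndarray, T_curr_w: np.ndarray) -> None:
    """S41: constraint between consecutive frames k-1 and k."""
    pose_graph_edges.append((k - 1, k, relative_pose(T_prev_w, T_curr_w)))

def add_loop_edge(k: int, loop_index: int, T_loop_to_curr: np.ndarray) -> None:
    """S42: constraint between the current frame and its loop frame."""
    pose_graph_edges.append((loop_index, k, T_loop_to_curr))

# S43: the edges are handed to a pose-graph optimizer (e.g. g2o or GTSAM),
# which returns a globally consistent new pose for every frame.
```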
The invention has the beneficial effects that:
(1) The invention checks the loops detected by the Scan Context method with an adaptive distance judgment, which reduces false loop detections, shortens the computation time of global optimization, and improves SLAM positioning accuracy.
(2) The invention constructs the constraints between loop frames with a feature-matching-based method, thereby reducing the computational cost.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.
Drawings
For the purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made to the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a flow chart of a laser SLAM method of the present invention;
FIG. 2 is a sub-map of the current frame and loop frame features in the loop pose constructed by the present invention;
FIG. 3 is a map constructed by using the global optimized pose according to the present invention;
FIG. 4 is a comparison graph of absolute pose errors for the method of the present invention and the compared method.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention in a schematic way, and the features in the following embodiments and examples may be combined with each other without conflict.
Referring to fig. 1 to 4, fig. 1 is a flowchart of a laser SLAM method for integrating a laser odometer and loop detection, the method comprising the steps of:
Step 1: extract features from the raw point cloud of the current frame using the smoothness between each point and the points in its neighborhood; this specifically comprises the following steps:
Step 1.1: the point and its neighborhood points: the number of points collected by the lidar in the horizontal direction is larger than in the vertical direction, so the horizontal point cloud acquired by each laser beam carries rich features. Let P^m_(k,i) denote the i-th point in the m-th laser beam of the k-th lidar frame; its neighborhood points are the adjacent points on the same laser beam.
Step 1.2: the smoothness calculation specifically includes: sorting the point cloud data of a laser beam in k frames according to the angle between the point cloud data and the X axis of the laser radar to form annular data which is connected end to end, and thenComprises aThe same amount of point clouds in clockwise and counterclockwise directionsAsThe number of neighboring points. For each point, the smoothness is calculated by the following method:
Step 2: match the current-frame feature point cloud (planar points and boundary points) against the previously built feature sub-map to obtain the pose of the current-frame vehicle in the world coordinate system; this specifically comprises the following steps:
Step 201: construct the feature sub-map: assuming the current frame is k, the feature point cloud of frame k-1 is projected into the world coordinate system using the pose of the frame k-1 vehicle in the world coordinate system and added to the feature map of frame k-2 to obtain the frame k-1 feature map. The point cloud data of the first frame is used directly as the feature map. The frame k-1 feature sub-map is obtained by cropping from the frame k-1 feature map the feature points that lie within the set threshold range.
Step 202: finding out the corresponding characteristic line/surface of the current frame characteristic point cloud in the characteristic sub-map specifically comprises the following steps: and respectively storing the plane feature point cloud and the boundary feature point cloud in the feature sub-map by using two 3D KD-trees. And respectively finding the nearest 5 points in the corresponding feature point cloud by each plane feature point and each boundary feature point of the current frame by using a KD-tree method, and calculating a covariance matrix formed by the five points to obtain a feature line/surface corresponding to each feature point of the current frame in the feature sub-map. Wherein the characteristic line is represented as: points on the lineAnd line segment direction vectorThe feature plane is expressed as: position of planeSum normal vector
Step 203: and matching the feature point cloud to obtain the pose of the vehicle pose of the current frame under a world coordinate system: firstly, constructing the distance from the feature point to the feature line/surface of the sub-map: whereinAnd representing the pose of the current frame vehicle under the world coordinate system. Then, the pose transformation between the current frame and the sub-map can be optimized by minimizing the distance between the feature points to the feature line/plane
And step 3: simultaneously with the step 2, the feature points are transmitted into a loop detection module to construct a global descriptor for loop detection, and if loop is detected, the point cloud of a loop frame is output; comprises the following steps:
Step 3.1: pass the current-frame feature point cloud into the Scan Context framework to obtain a preliminary loop detection result (a sketch of this descriptor is given after step 3.4 below);
Step 3.2: using the odometry pose estimates of the two frames detected as a loop, compute the odometry estimate of the pose transformation matrix between them.
Step 3.3: compute the distance between the two frames from the odometry estimate of the pose transformation matrix, where x, y and z are the translation components taken from the pose transformation matrix.
Step 3.4: discard as a false detection any loop whose distance is greater than the threshold. The threshold is computed adaptively as a function of k, where k denotes the number of frames between the current frame and the last loop detection frame.
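For context, a minimal sketch of the Scan Context global descriptor used in step 3.1: the point cloud is binned in polar coordinates into rings and sectors, and each bin stores the maximum point height. The bin counts and maximum range follow the published Scan Context method and are assumptions here, not values specified by this text.

```python
import numpy as np

def scan_context(points: np.ndarray, num_rings: int = 20,
                 num_sectors: int = 60, max_range: float = 80.0) -> np.ndarray:
    """Ring x sector descriptor storing the maximum point height per polar bin."""
    desc = np.zeros((num_rings, num_sectors))
    r = np.linalg.norm(points[:, :2], axis=1)
    theta = np.arctan2(points[:, 1], points[:, 0])              # angle in [-pi, pi)
    ring = np.minimum((r / max_range * num_rings).astype(int), num_rings - 1)
    sector = np.minimum(((theta + np.pi) / (2 * np.pi) * num_sectors).astype(int),
                        num_sectors - 1)
    for ri, si, z in zip(ring, sector, points[:, 2]):
        desc[ri, si] = max(desc[ri, si], z)
    return desc
```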
And 4, step 4: and obtaining pose transformation between the vehicle pose of the current frame and the vehicle pose of the loop frame by using an interframe matching method based on the feature points. The method specifically comprises the following steps:
Step 4.1: the frame that forms a loop with the current frame is referred to as the loop frame. The loop frame and its surrounding frames are projected into the world coordinate system to form a feature sub-map; fig. 2 shows the feature sub-maps of the current frame and the loop frame.
Step 4.2: compute the pose transformation matrix between the current frame and the loop frame by the same method as in step 2.
And 5: and adding the pose transformation between the vehicle pose of the current frame and the vehicle pose of the previous frame and the pose transformation between the vehicle pose of the current frame and the vehicle pose of the loop frame into a pose graph, carrying out global pose optimization, and finally outputting a new pose. The method specifically comprises the following steps:
Step 5.1: using the pose T^w_(k-1) of the vehicle at frame k-1 in the world coordinate system and the pose T^w_k of the vehicle at frame k, compute the pose transformation of the vehicle from frame k-1 to frame k and add it to the pose graph.
Step 5.2: add the pose transformation matrix between the current frame and the loop frame to the pose graph.
Step 5.3: perform global optimization on the pose graph to obtain a new pose for each frame.
As shown in FIG. 3, a map is constructed with the globally optimized poses of the invention. Compared with a Simple-SC-F-LOAM method that directly combines the F-LOAM and Scan Context methods, the proposed method achieves both higher positioning accuracy and higher computational efficiency. The Simple-SC-F-LOAM method detects a number of false loops, whereas the adaptive distance check adopted here filters them out, so the SLAM positioning accuracy of the invention is higher. FIG. 4 shows the absolute pose errors on KITTI sequences 00 and 05 for the invention and the compared method, confirming the higher SLAM positioning accuracy of the invention. Moreover, since excessive false loops increase global optimization time and the invention constructs the vehicle pose constraints between each pair of loop frames with a feature-point-based method, the computational cost is also reduced.
Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all of them should be covered by the claims of the present invention.
Claims (7)
1. A laser SLAM method for integrating a laser odometer and loop detection is characterized by comprising the following steps:
S1: performing feature extraction on the raw point cloud of the current frame using the smoothness between each point and the points in its neighborhood;
S2: matching the current-frame feature point cloud against the previously built feature sub-map to obtain the pose of the current-frame vehicle in the world coordinate system;
meanwhile, passing the feature points to a loop detection module, which constructs a global descriptor for loop detection and, if a loop is detected, outputs the point cloud of the loop frame;
S3: obtaining the pose transformation between the current-frame vehicle pose and the loop-frame vehicle pose with a feature-point-based inter-frame matching method;
S4: adding the pose transformation between the current-frame and previous-frame vehicle poses and the pose transformation between the current-frame and loop-frame vehicle poses to a pose graph, performing global pose optimization, and finally outputting the new poses.
2. The laser SLAM method of claim 1, wherein step S1 specifically comprises the steps of:
S11: the point and its neighborhood points are defined as follows: let P^m_(k,i) denote the i-th point in the m-th laser beam of the k-th lidar frame; its neighborhood points are the adjacent points on the same laser beam;
S12: the smoothness calculation is as follows: the point cloud of each laser beam in frame k is sorted by the angle between each point and the lidar X axis, forming ring data connected end to end; the neighborhood S of P^m_(k,i) then contains the same number of points on its clockwise and counterclockwise sides; for each point, a smoothness value c is computed from the points in its neighborhood, where c represents the smoothness of the current point.
3. The laser SLAM method according to claim 2, wherein in step S2, obtaining the pose of the current-frame vehicle in the world coordinate system specifically includes the following steps:
S201: constructing the feature sub-map, specifically: assuming the current frame is k, the feature point cloud of frame k-1 is projected into the world coordinate system using the pose of the frame k-1 vehicle in the world coordinate system and added to the feature map of frame k-2 to obtain the frame k-1 feature map; the point cloud data of the first frame is used directly as the feature map; the frame k-1 feature sub-map is obtained by cropping from the frame k-1 feature map the feature points that lie within the set threshold range;
S202: finding the feature line/plane corresponding to each current-frame feature point in the feature sub-map, specifically: the planar feature points and the boundary feature points of the feature sub-map are stored in two separate 3D KD-trees; each planar feature point and each boundary feature point of the current frame finds its five nearest points in the corresponding feature point cloud with the KD-tree, and the covariance matrix formed by these five points is computed to obtain the feature line/plane corresponding to each current-frame feature point in the feature sub-map; a feature line is represented by a point on the line and the direction vector of the line segment, and a feature plane is represented by a point on the plane and its normal vector;
S203: obtaining the pose of the current-frame vehicle in the world coordinate system by feature point cloud matching, specifically: first, the distances from the feature points to their feature lines/planes in the sub-map are constructed, where d_e(P_e) denotes the distance from a boundary feature point to its feature line in the sub-map, d_s(P_s) denotes the distance from a planar feature point to its feature plane in the sub-map, T^w_k denotes the pose of the current-frame vehicle in the world coordinate system, P_e denotes a boundary point in the vehicle coordinate system, and P_s denotes a planar point in the vehicle coordinate system;
4. The laser SLAM method as claimed in claim 3, wherein in step S2, the feature points are passed to a loop detection module to construct a global descriptor for loop detection, and if a loop is detected, the point cloud of the loop frame is output; this specifically includes the following steps:
S211: passing the current-frame feature point cloud into the Scan Context framework to obtain a preliminary loop detection result;
S212: using the odometry pose estimates of the two frames detected as a loop, computing the odometry estimate of the pose transformation matrix between them;
S213: computing the distance between the two frames from the odometry estimate of the pose transformation matrix, where x, y and z are the translation components taken from the pose transformation matrix;
S214: discarding as a false detection any loop whose distance is greater than the threshold.
6. The laser SLAM method of claim 4, wherein step S3 specifically includes the steps of:
S31: the frame that forms a loop with the current frame is referred to as the loop frame; the loop frame and its surrounding frames are projected into the world coordinate system to form a feature sub-map;
7. The laser SLAM method of claim 6, wherein step S4 specifically includes the steps of:
S41: using the pose T^w_(k-1) of the vehicle at frame k-1 in the world coordinate system and the pose T^w_k of the vehicle at frame k, computing the pose transformation of the vehicle from frame k-1 to frame k and adding it to the pose graph;
S42: adding the pose transformation matrix between the current frame and the loop frame to the pose graph;
S43: performing global optimization on the pose graph to obtain a new pose for each frame.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111552395.2A CN114299386A (en) | 2021-12-17 | 2021-12-17 | Laser SLAM method integrating laser odometer and loop detection |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111552395.2A CN114299386A (en) | 2021-12-17 | 2021-12-17 | Laser SLAM method integrating laser odometer and loop detection |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114299386A (en) | 2022-04-08
Family
ID=80967354
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111552395.2A Pending CN114299386A (en) | 2021-12-17 | 2021-12-17 | Laser SLAM method integrating laser odometer and loop detection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114299386A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115451942A (en) * | 2022-09-26 | 2022-12-09 | 西南交通大学 | Vehicle bottom inspection robot SLAM loop detection method based on wheel characteristics |
CN115661255A (en) * | 2022-12-15 | 2023-01-31 | 中国科学技术大学 | Laser SLAM loop detection and correction method |
CN116358532A (en) * | 2023-05-31 | 2023-06-30 | 小米汽车科技有限公司 | Loop detection method and device, storage medium and vehicle |
CN116358532B (en) * | 2023-05-31 | 2023-09-26 | 小米汽车科技有限公司 | Loop detection method and device, storage medium and vehicle |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |