CN116429084A - Dynamic environment 3D synchronous positioning and mapping method - Google Patents

Dynamic environment 3D synchronous positioning and mapping method

Info

Publication number
CN116429084A
CN116429084A (application CN202310257034.8A)
Authority
CN
China
Prior art keywords
point cloud
laser radar
static background
map
state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310257034.8A
Other languages
Chinese (zh)
Inventor
王忠立
王颖博
陆腾飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jiaotong University
Original Assignee
Beijing Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jiaotong University filed Critical Beijing Jiaotong University
Priority to CN202310257034.8A priority Critical patent/CN116429084A/en
Publication of CN116429084A publication Critical patent/CN116429084A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/277Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804Creation or updating of map data
    • G01C21/3807Creation or updating of map data characterised by the type of data
    • G01C21/383Indoor data
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804Creation or updating of map data
    • G01C21/3833Creation or updating of map data characterised by the source of data
    • G01C21/3841Data obtained from two or more sources, e.g. probe vehicles
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/80Geometric correction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/70Labelling scene content, e.g. deriving syntactic or semantic representations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Automation & Control Theory (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The application provides a dynamic environment 3D synchronous positioning and mapping method, which comprises the following steps: acquiring the original point cloud of the laser radar and inertial measurement data of an inertial measurement unit sensor; fusing the original point cloud and the inertial measurement data to obtain fused data; processing the fused data by adopting an end-to-end full convolution semantic segmentation network to obtain a potential motion target point cloud and a static background point cloud; executing an interactive multi-model unscented Kalman filtering method based on probability association on the potential motion target point cloud to realize multi-target tracking; and inputting the static background point cloud into a synchronous positioning and mapping module to finish synchronous positioning and mapping. With this scheme, the multi-target tracking and synchronous positioning and mapping tasks can be completed simultaneously, and the robustness and accuracy of SLAM in a dynamic environment are improved.

Description

Dynamic environment 3D synchronous positioning and mapping method
Technical Field
The invention belongs to the technical field of artificial intelligence, and particularly relates to a dynamic environment 3D synchronous positioning and mapping method.
Background
LiDAR (Light Detection and Ranging)-based 3D SLAM (Simultaneous Localization And Mapping, synchronous positioning and mapping) has been one of the research hotspots in the field of autonomous driving and mobile robots in recent years. SLAM technology plays an important role in realizing real-time navigation and mapping of an autonomous mobile robot in an unknown environment. Although SLAM technology has made breakthroughs over many years of research, its performance tends to be limited in dynamic, large-scale environments. For example, most SLAM systems assume that the surrounding environment is static, whereas the real world inevitably contains dynamic objects, which makes SLAM algorithms prone to failure in practical dynamic application scenarios. How to construct a robust, accurate map of a complex dynamic environment remains a challenging task and is attracting increasing attention from industry.
SLAM involves a series of complex calculations and algorithms that use sensors to construct a map of an unknown environment and to localize position and orientation within it. However, most SLAM algorithms adopt a feature extraction method to match consecutive frames of the LiDAR scan and assume that the actual application scene is a static environment, which means the algorithms cannot be applied to emerging LiDARs with different scanning patterns, are easily affected by the environment, and perform poorly in practice.
Disclosure of Invention
The embodiment of the present disclosure aims to provide a dynamic environment 3D synchronous positioning and mapping method.
In order to solve the technical problems, the embodiments of the present application are implemented in the following manner:
the application provides a dynamic environment 3D synchronous positioning and mapping method, which comprises the following steps:
acquiring the original point cloud of the laser radar and inertial measurement data of an inertial measurement unit sensor;
fusing the original point cloud and the inertial measurement data to obtain fused data;
processing the fusion data by adopting an end-to-end full convolution semantic segmentation network to obtain a potential motion target point cloud and a static background point cloud;
executing an interactive multi-model unscented Kalman filtering method based on probability association on the potential motion target point cloud to realize multi-target tracking;
and inputting the static background point cloud into a synchronous positioning and mapping module to finish synchronous positioning and mapping.
In one embodiment, the method further comprises:
distortion correction is carried out on the fusion data to obtain corrected fusion data;
processing the fusion data by adopting an end-to-end full convolution semantic segmentation network to obtain a potential motion target point cloud and a static background point cloud, wherein the method comprises the following steps:
and processing corrected fusion data by adopting an end-to-end full convolution semantic segmentation network to obtain potential motion target point cloud and static background point cloud.
In one embodiment, distortion correction is performed on the fusion data to obtain corrected fusion data, including:
and according to the time interval of the start and the end of the laser radar frame scanning, utilizing the IMU to pre-integrate and estimate the motion in the time period corresponding to the time interval, adjusting the one-to-one correspondence between the laser radar frame and the vehicle motion through the geometric relationship between the laser radar and the inertial measurement unit sensor, and carrying out distortion correction on the fusion data to obtain corrected fusion data.
In one embodiment, the static background point cloud inputs a synchronous positioning and mapping module to complete synchronous positioning and mapping, including:
performing accumulated registration on the static background point cloud and the map points by adopting a direct registration method to obtain a static map;
the initial map points are obtained by transforming the first frame of static background point cloud according to the initial pose estimate, and static background point clouds of subsequent frames update the map according to the direct registration method.
In one embodiment, a direct registration method is used to perform cumulative registration on a static background point cloud and a map point to obtain a static map, including:
carrying out state prediction by adopting measurement information of an inertial measurement unit sensor to obtain a predicted state; correcting motion distortion of the same frame of points sampled by the laser radar in the continuous motion process under different postures according to the prediction state, and projecting all points in each frame of static background point cloud to the end of each scanning to obtain corrected point cloud;
the state estimation in the iterative motion process obtains the optimal state and covariance estimation;
converting the corrected point cloud into a world coordinate system according to the optimal state and covariance estimation to obtain the corrected point cloud under the world coordinate system;
and inserting the corrected point cloud under the world coordinate system into the existing map until all the static background point clouds are inserted into the existing map, and obtaining the static map.
In one embodiment, the state estimation in the iterative motion process finds the optimal state and covariance estimation, including:
according to the optimal state estimation and covariance output by the previous odometry step and the currently input measured value of the inertial measurement unit sensor, obtaining the predicted state of the inertial measurement unit sensor, projecting the laser radar points in each measured static background point cloud into the world coordinate system through the predicted state, searching a preset number of nearest neighbors in the existing map, and fitting a local facet block to the nearest neighbors; determining a measurement equation according to the local facet block;
and obtaining state distribution according to a measurement equation, combining prior Gaussian distribution, obtaining state posterior probability distribution by adopting a maximum posterior probability estimation method, and repeating iteration until the optimal state and covariance estimation are obtained.
In one embodiment, determining the measurement equation from the local facet block includes:
the fitted local facet block has a normal vector ${}^{G}\mathbf{u}_j$ and a centroid ${}^{G}\mathbf{q}_j$ used in the measurement model; the measurement equation is determined according to the normal vector and the centroid as:

$$0 = {}^{G}\mathbf{u}_j^{T}\left({}^{G}\mathbf{T}_{I_k}\,{}^{I}\mathbf{T}_{L}\left({}^{L}\mathbf{p}_j + {}^{L}\mathbf{n}_j\right) - {}^{G}\mathbf{q}_j\right)$$

wherein I is the inertial measurement unit sensor coordinate system, G is the world coordinate system, L is the laser radar coordinate system, ${}^{G}\mathbf{T}_{I_k}$ is the laser radar pose, ${}^{I}\mathbf{T}_{L}$ is the extrinsic parameter between the laser radar and the inertial measurement unit sensor, k is the laser radar scan index, ${}^{L}\mathbf{p}_j$ represents a point obtained by the k-th scan expressed in the local laser radar coordinate system L at the end of the scan, and ${}^{L}\mathbf{n}_j$ is the noise contaminating the laser radar measurement, consisting of ranging noise and beam-direction noise.
In one embodiment, the problem of the maximum a posteriori probability estimation method is solved by an iterated Kalman filter, wherein the Kalman gain used in the Kalman filter is calculated by inverting a matrix of the state dimension.
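For reference, a standard equivalent form of the Kalman gain that only requires inverting a matrix of the state dimension (using the generic notation H for the measurement Jacobian, R for the measurement noise covariance and P for the prior state covariance; these symbols are not defined in the original text) is:

$$\mathbf{K} = \left(\mathbf{H}^{T}\mathbf{R}^{-1}\mathbf{H} + \mathbf{P}^{-1}\right)^{-1}\mathbf{H}^{T}\mathbf{R}^{-1}$$

This is algebraically equivalent to the textbook form $\mathbf{K} = \mathbf{P}\mathbf{H}^{T}(\mathbf{H}\mathbf{P}\mathbf{H}^{T}+\mathbf{R})^{-1}$, which instead inverts a matrix of the (usually much larger) measurement dimension.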
In one embodiment, the method further comprises: and eliminating accumulated errors of the synchronous positioning and mapping module by adopting a closed loop detection module based on a factor graph.
In one embodiment, the method for eliminating the accumulated error of the synchronous positioning and mapping module by adopting the closed loop detection module based on the factor graph comprises the following steps:
when the optimal state estimate obtained from a new laser radar scan is added to the factor graph, the Euclidean distance is first used to find the prior states closest to it; the current key frame and the existing map are projected into the world coordinate system, the key frames are matched with the current frame to obtain the relative transformation between them, and the relative transformation between the key frames and the current frame is added to the factor graph as a loop closure factor.
As can be seen from the technical scheme provided by the embodiments of the present specification: the method is efficient and produces accurate results; the robustness of SLAM in a dynamic environment is improved through moving target detection and removal and a factor-graph-based closed loop detection module; point cloud registration based on a direct method is designed; and real-time updating, efficient searching and management of the 3D map are realized in combination with the ikd-Tree data structure.
Drawings
In order to more clearly illustrate the embodiments of the present description or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some of the embodiments described in the present description, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a dynamic environment 3D synchronous positioning and mapping method provided by the application;
FIG. 2 is a schematic illustration of a measurement model provided herein;
fig. 3 is a schematic diagram of an implementation process of the closed loop detection module provided in the present application.
Detailed Description
In order to make the technical solutions in the present specification better understood by those skilled in the art, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is obvious that the described embodiments are only some embodiments of the present specification, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are intended to be within the scope of the present disclosure.
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be apparent to those skilled in the art that various modifications and variations can be made in the specific embodiments of the present disclosure without departing from the scope or spirit of the disclosure. Other embodiments will be apparent to the skilled person from the description of the present application. The specification and examples are exemplary only.
As used herein, the terms "comprising," "including," "having," "containing," and the like are intended to be inclusive and mean an inclusion, but not limited to.
The "parts" in the present application are all parts by mass unless otherwise specified.
In terms of the current research trends in dynamic-environment SLAM at home and abroad, from the theoretical point of view, one class of methods targets indoor low-dynamic environments and considers the time dimension in the map construction process, or adopts an EKF (Extended Kalman Filter) framework combined with multi-sensor fusion to guarantee mapping accuracy in a dynamic environment; another class, aimed at large outdoor environments with moving objects, uses Bayesian models to infer whether each 3D point is static or dynamic through repeated observation, or distinguishes moving from stationary objects based on an improved RANSAC (Random Sample Consensus) algorithm. Researchers at home and abroad have done a great deal of work on improving the robustness of SLAM algorithms in dynamic environments, including adopting point cloud segmentation or registration methods to realize 6-degree-of-freedom pose estimation, using semantic information provided by semantic segmentation to filter dynamic objects, adding semantic constraints in scan registration, and so on. However, most of these methods treat moving objects as outliers and remove them by separation and filtering, instead of performing SLAM and MOT (Multiple Object Tracking) simultaneously.
LCD (loop closure detection) is usually an essential component of SLAM and can be described as the process of trying to find a match between the current observation and a previously visited place. Some excellent algorithms achieve very high performance even without an LCD, but as the environment scale increases, LCD performance can decrease, especially in complex dynamic environments. Most existing methods adopt global descriptors and measure the similarity between scenes by computing similarity scores, use the intensity information of points combined with a hybrid distance metric to improve place recognition accuracy, or adopt different strategies for low-similarity environments and complex environments to improve LCD speed.
In practical applications, SLAM and MOT in a dynamic environment are two inseparable scene understanding tasks; completing them simultaneously is key to the robot truly achieving autonomy, and a robust LCD method can greatly reduce the drift errors of the estimated trajectory and map. In theory, SLAM and MOT are mutually beneficial: MOT can eliminate the influence of dynamic objects on SLAM and improve SLAM precision and robustness, while an accurate SLAM result helps solve the MOT task. The feature detection and matching method based on Euclidean distance improves the matching rate by selecting a set of sub-key frames for matching, and is comparable to prior LCD methods based on point cloud descriptors. Different module tasks are accurately subdivided, redundant dynamic interference is removed, and an efficient and accurate 3D SLAM result is realized.
Aiming at the problem that SLAM performance is easily limited in a complex dynamic environment, the application provides a dynamic environment 3D SLAM method: an end-to-end full convolution semantic segmentation network (Fully Convolutional Networks for semantic Segmentation, FCNN) is adopted to segment potential moving targets, the remaining static background point cloud is transmitted to a static SLAM module based on a direct point cloud registration method, real-time updating, efficient searching and management of the 3D map are realized by utilizing the ikd-Tree data structure, and a multi-factor-graph closed loop detection thread is added, so that detection precision and robustness are further improved.
The invention is described in further detail below with reference to the drawings and examples.
Referring to fig. 1, a flow diagram of a dynamic environment 3D synchronization positioning and mapping method suitable for use in the embodiments of the present application is shown.
As shown in fig. 1, a method for 3D synchronous positioning and mapping in a dynamic environment may include:
s110, acquiring the original point cloud of the laser radar and inertial measurement data of an inertial measurement unit sensor.
Specifically, for the original point cloud of the laser radar and the inertial measurement data of the inertial measurement unit (Inertial Measurement Unit, IMU) sensor, an open-source data set can be used: the raw data corresponding to the odometry data set in the KITTI data set are downloaded, the raw point cloud files are converted into two bag files of different forms by utilizing the kitti2bag conversion tool, and the required topics are extracted with a Python program and fused into a new bag file as the input data of the experiment. Data collected in real time by the respective sensors may also be utilized.
For example, the raw point cloud data and calibration files corresponding to sequences 00-10 of the KITTI odometry data set are downloaded, content of the same length is intercepted according to the corresponding start and end frame numbers, the raw point cloud files are converted into two bag files of different forms by using the kitti2bag conversion tool, and the required topics are extracted with a Python program and fused into a new bag file, which is used as the input data of the experiment.
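As a concrete illustration, a minimal Python sketch of the topic extraction and bag merging step might look as follows; the file names and topic names are assumptions, not the actual experiment configuration.

```python
# Extract the required topics from bag files produced by kitti2bag and merge them
# into a new bag used as the experiment input. File/topic names are hypothetical.
import rosbag

SRC_BAGS = ["kitti_velodyne.bag", "kitti_imu.bag"]          # hypothetical file names
KEEP_TOPICS = {"/kitti/velo/pointcloud", "/kitti/oxts/imu"}  # assumed topic names

with rosbag.Bag("experiment_input.bag", "w") as out_bag:
    for src in SRC_BAGS:
        with rosbag.Bag(src, "r") as in_bag:
            for topic, msg, t in in_bag.read_messages():
                if topic in KEEP_TOPICS:
                    out_bag.write(topic, msg, t)   # keep the original timestamps
```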
S120, fusing the original point cloud and the inertial measurement data to obtain fused data.
Because a large number of moving objects often exist in the urban road environment in which an automatically driven vehicle operates, directly using LiDAR sensor data would affect the accuracy of pose estimation and map building due to the point cloud matching noise caused by moving targets such as vehicles; therefore, an IMU is adopted as an auxiliary sensor for correcting motion distortion.
In one embodiment, the method further comprises: and carrying out distortion correction on the fusion data to obtain corrected fusion data.
Specifically, an IMU is adopted as the auxiliary sensor for distortion correction. According to the time interval between the start and the end of a laser radar frame scan, the motion within this time period is estimated by utilizing IMU pre-integration; the one-to-one correspondence between the laser radar frame and the vehicle motion is adjusted through the geometric relationship between the laser radar and the IMU sensor; distortion correction (i.e. de-skewing) is carried out on the fused LiDAR and IMU data (i.e. the fusion data) to obtain corrected fusion data, and the initial pose estimation is completed on the basis of the corrected fusion data.
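The following is a simplified sketch of this de-skewing idea: the pose at each point's sample time is interpolated between the poses at the start and end of the sweep (which in the method above come from IMU pre-integration), and every point is re-expressed in the frame at the end of the sweep. The function name and the linear/spherical interpolation scheme are illustrative assumptions.

```python
# Minimal motion-distortion correction sketch: interpolate the sensor pose per point
# and re-project every point to the pose at the end of the sweep.
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def deskew_sweep(points, point_times, t_start, t_end, R_start, p_start, R_end, p_end):
    """points: (N,3) LiDAR points; point_times: (N,) timestamps;
    R_*: scipy Rotation and p_*: (3,) position at the sweep start/end."""
    key_rots = Rotation.from_quat(np.vstack([R_start.as_quat(), R_end.as_quat()]))
    slerp = Slerp([t_start, t_end], key_rots)
    R_end_inv = R_end.inv()
    corrected = np.empty_like(points, dtype=float)
    for i, (pt, ti) in enumerate(zip(points, point_times)):
        alpha = (ti - t_start) / (t_end - t_start)
        R_i = slerp([ti])[0]                                  # orientation at sample time
        p_i = (1.0 - alpha) * p_start + alpha * p_end         # interpolated position
        p_common = R_i.apply(pt) + p_i                        # point in the common frame
        corrected[i] = R_end_inv.apply(p_common - p_end)      # re-express at sweep end
    return corrected
```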
S130, processing the fusion data by adopting an end-to-end full convolution semantic segmentation network to obtain a potential motion target point cloud and a static background point cloud.
It can be understood that, when distortion correction has been carried out on the fusion data, adopting the FCNN to process the fusion data in S130 means adopting the FCNN to process the corrected fusion data. Unless otherwise emphasized, the fusion data below refers to the corrected fusion data.
S140, executing an interactive multimode trackless Kalman filtering method based on probability association on the potential motion target point cloud to achieve multi-target tracking.
Specifically, S130 and S140 constitute the moving object detection stage. In this stage, in order to improve the robustness of SLAM in a dynamic environment, a detection-based moving object tracking method is adopted: the detection part accurately detects and segments potential moving objects such as cars, pedestrians and cyclists based on the FCNN, the points belonging to these objects are regarded as outliers, and the point cloud remaining after removing these dynamic objects is the static background point cloud; the tracking part then takes the detection and segmentation result (i.e. the potential motion target point cloud) as input and executes the interactive multi-model unscented Kalman filtering method based on probability association to realize the MOT function. The remaining static background point cloud is transmitted to the SLAM module to complete the SLAM function.
In this step, FCNN is used to detect and segment potential moving objects, which are discarded in the mapping process, including possible moving objects that may cause interference in subsequent map positioning.
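A minimal sketch of this split is shown below; the label IDs of the movable classes are assumptions, and the segmentation network itself is not shown.

```python
# Split a semantically labelled scan into a potential moving-target point cloud
# (sent to the tracker) and a static background point cloud (sent to the SLAM module).
import numpy as np

MOVABLE_CLASSES = {1, 2, 3}   # e.g. car, pedestrian, cyclist -- assumed label IDs

def split_scan(points, labels):
    """points: (N,3) array; labels: (N,) per-point semantic class IDs."""
    movable_mask = np.isin(labels, list(MOVABLE_CLASSES))
    dynamic_cloud = points[movable_mask]      # potential moving targets -> MOT
    static_cloud = points[~movable_mask]      # static background -> SLAM
    return dynamic_cloud, static_cloud
```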
S150, inputting the static background point cloud into the synchronous positioning and mapping module to finish synchronous positioning and mapping, which comprises:
performing accumulated registration on the static background point cloud and the map points by adopting a direct registration method to obtain a static map;
the initial map points are obtained by transforming the first frame of static background point cloud according to the initial pose estimate, and static background point clouds of subsequent frames update the map points according to the direct registration method.
Specifically, in the initial state there is no static background point cloud (also called LiDAR point cloud). As the LiDAR scans for a period of time, the sequentially sampled static background points are accumulated into the first frame of static background point cloud, and the initial pose estimate is then obtained (the initial pose estimate is the motion of the LiDAR over this period of time, including rotation and translation); the first frame of static background point cloud is then converted from the LiDAR coordinate system to the world coordinate system (also called the global coordinate system) according to the initial pose estimate, and the point cloud of the first frame in the world coordinate system serves as the initial map.
Frames obtained by subsequent LiDAR scans then update the map according to the direct registration method. Specifically, the static background point cloud of each frame after the first frame may be converted into the world coordinate system, and the point cloud in the world coordinate system is then inserted (registered) into the existing map.
Directly registering all LiDAR points increases the computational load and reduces efficiency. In order to realize real-time updating, efficient searching and management of the 3D map, an ikd-Tree data structure can be adopted to efficiently represent the large-scale dense point cloud map; to prevent the map size from growing without bound, only map points within a large local area of side length L around the current LiDAR position are retained on the tree. Point insertion, deletion or re-insertion can be realized by point-wise or frame-wise incremental update operations; by actively monitoring the balance property after incremental updates and rebuilding the affected subtrees, the attribute ranges and distance information on the tree nodes are dynamically rebalanced, and the search process is accelerated in combination with the K-nearest-neighbour (KNN) method, thereby maintaining hard real-time performance, supporting a multi-threaded parallel computing architecture, improving algorithm efficiency and guaranteeing real-time operation.
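The following is a greatly simplified stand-in for this idea, using a static KD-tree that is rebuilt on every insertion and a local box of side L around the current LiDAR position; a real ikd-Tree performs these updates incrementally with lazy deletion and re-balancing, which is what gives it its efficiency.

```python
# Simplified local map: crop to a box around the LiDAR and answer KNN queries.
import numpy as np
from scipy.spatial import cKDTree

class LocalMap:
    def __init__(self, side_length=200.0):
        self.L = side_length
        self.points = np.empty((0, 3))
        self.tree = None

    def insert(self, new_points, lidar_position):
        self.points = np.vstack([self.points, new_points])
        keep = np.all(np.abs(self.points - lidar_position) <= self.L / 2.0, axis=1)
        self.points = self.points[keep]      # drop points outside the local area
        self.tree = cKDTree(self.points)     # full rebuild (ikd-Tree updates incrementally)

    def knn(self, query_points, k=5):
        dists, idx = self.tree.query(query_points, k=k)
        return dists, self.points[idx]
```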
In one embodiment, a direct registration method is used to perform cumulative registration on a static background point cloud and a map point to obtain a static map, including:
carrying out state prediction by adopting measurement information of an inertial measurement unit sensor to obtain a predicted state; correcting motion distortion of the same frame of points sampled by the laser radar in the continuous motion process under different postures according to the prediction state, and projecting all points in each frame of static background point cloud to the end of each scanning to obtain corrected point cloud;
the state estimation in the iterative motion process obtains the optimal state and covariance estimation;
converting the corrected point cloud into a world coordinate system according to the optimal state and covariance estimation to obtain the corrected point cloud under the world coordinate system;
and inserting the corrected point cloud under the world coordinate system into the existing map until all the static background point clouds are inserted into the existing map, and obtaining the static map.
The method for calculating the optimal state and covariance estimation by the state estimation in the iterative motion process comprises the following steps:
according to the optimal state estimation and covariance output by the previous odometry step and the currently input measured value of the inertial measurement unit sensor, obtaining the predicted state of the inertial measurement unit sensor, projecting the laser radar points in each measured static background point cloud into the world coordinate system through the predicted state, searching a preset number of nearest neighbors in the existing map, and fitting a local facet block to the nearest neighbors; determining a measurement equation according to the local facet block;
and obtaining state distribution according to a measurement equation, combining prior Gaussian distribution, obtaining state posterior probability distribution by adopting a maximum posterior probability estimation method, and repeating iteration until the optimal state and covariance estimation are obtained.
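To make the structure of this iterated update concrete, the following self-contained toy reduces the state to a 3-D translation (rotation and the manifold structure of the full state are omitted) and combines point-to-plane residuals with a Gaussian prior from the predicted state. It illustrates the iteration pattern only and is not the patent's actual implementation.

```python
# Toy iterated MAP update: Gauss-Newton on point-to-plane residuals plus a Gaussian prior.
import numpy as np

def iterated_map_update(x_pred, P_pred, points, normals, centroids, meas_var,
                        max_iters=10, eps=1e-8):
    x = x_pred.copy()
    P_inv = np.linalg.inv(P_pred)
    for _ in range(max_iters):
        # residual r_j = n_j^T (p_j + x - q_j); Jacobian w.r.t. the translation is n_j^T
        r = np.einsum('ij,ij->i', normals, points + x - centroids)
        H = normals
        S = H.T @ H / meas_var + P_inv                     # state-dimension matrix
        g = H.T @ r / meas_var + P_inv @ (x - x_pred)
        dx = np.linalg.solve(S, g)
        x = x - dx
        if np.linalg.norm(dx) < eps:
            break
    return x, np.linalg.inv(S)                             # posterior state and covariance

# Toy usage: recover a known offset from synthetic plane constraints.
rng = np.random.default_rng(1)
normals = rng.standard_normal((50, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
centroids = rng.uniform(-5, 5, (50, 3))
true_x = np.array([0.1, -0.2, 0.05])
points = centroids - true_x + 1e-3 * rng.standard_normal((50, 3))
x_hat, P_hat = iterated_map_update(np.zeros(3), 0.1 * np.eye(3), points, normals,
                                   centroids, meas_var=1e-6)
print(x_hat)   # close to (0.1, -0.2, 0.05)
```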
Wherein determining a measurement equation from the local facet blocks comprises:
the fitted local facet block has a normal vector ${}^{G}\mathbf{u}_j$ and a centroid ${}^{G}\mathbf{q}_j$ used in the measurement model; the measurement equation is determined according to the normal vector and the centroid as:

$$0 = {}^{G}\mathbf{u}_j^{T}\left({}^{G}\mathbf{T}_{I_k}\,{}^{I}\mathbf{T}_{L}\left({}^{L}\mathbf{p}_j + {}^{L}\mathbf{n}_j\right) - {}^{G}\mathbf{q}_j\right)$$

wherein I is the inertial measurement unit sensor coordinate system, G is the world coordinate system, L is the laser radar coordinate system, ${}^{G}\mathbf{T}_{I_k}$ is the laser radar pose, ${}^{I}\mathbf{T}_{L}$ is the extrinsic parameter between the laser radar and the IMU, k is the laser radar scan index, ${}^{L}\mathbf{p}_j$ represents a point obtained by the k-th scan expressed in the local laser radar coordinate system L at the end of the scan, and ${}^{L}\mathbf{n}_j$ is the noise contaminating the laser radar measurement, consisting of ranging noise and beam-direction noise.
Specifically, according to the optimal state estimate and covariance output by the previous odometry step and the current IMU input, the IMU predicted state $\hat{\mathbf{x}}_k$ is obtained. Through the predicted state and other relevant parameters, each measured LiDAR point is projected into the global coordinate system, the nearest 5 points are searched in the map represented by the ikd-Tree using the KNN algorithm, and a local facet block is then fitted to the found nearest neighbours. The fitted local facet block has a normal vector ${}^{G}\mathbf{u}_j$ and a centroid ${}^{G}\mathbf{q}_j$ used in the measurement model, from which the measurement equation can be derived:

$$0 = {}^{G}\mathbf{u}_j^{T}\left({}^{G}\mathbf{T}_{I_k}\,{}^{I}\mathbf{T}_{L}\left({}^{L}\mathbf{p}_j + {}^{L}\mathbf{n}_j\right) - {}^{G}\mathbf{q}_j\right)$$

wherein I is the IMU coordinate system, G is the global coordinate system, L is the LiDAR coordinate system, ${}^{G}\mathbf{T}_{I_k}$ is the LiDAR pose, ${}^{I}\mathbf{T}_{L}$ is the extrinsic parameter between the LiDAR and the IMU, k is the LiDAR scan index, ${}^{L}\mathbf{p}_j$ denotes a point obtained in the k-th scan expressed in the local LiDAR coordinate system L at the end of the scan, and ${}^{L}\mathbf{n}_j$ is the noise contaminating the LiDAR measurement, consisting of ranging noise and beam-direction noise. The measurement model is shown in fig. 2. It will be appreciated that fig. 2 is also the direct registration model of LiDAR points, i.e. the model shows how the direct registration method registers points to a surface. The state distribution is obtained according to the approximate measurement equation; combined with the prior Gaussian distribution, the state posterior probability distribution, i.e. the maximum a posteriori estimation result, is obtained by the MAP (maximum a posteriori probability) estimation method. This process is repeated until the optimal state and covariance estimate are found, and the directly registered LiDAR points transformed into the global coordinate system are then inserted into the map represented by the ikd-Tree. The maximum a posteriori estimation problem is solved by an iterated Kalman filter, in which the Kalman gain calculation requires inverting a matrix of the state dimension instead of the measurement dimension used in previous work.
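A minimal, self-contained sketch of the facet fitting and the evaluation of the above measurement equation (with the noise term omitted and the poses passed in as 4x4 homogeneous transforms) is given below; it is illustrative only.

```python
# Fit a local plane patch to k nearest map points and evaluate the point-to-plane residual.
import numpy as np

def fit_facet(neighbors):
    """neighbors: (k,3) nearest map points; returns (unit normal, centroid)."""
    centroid = neighbors.mean(axis=0)
    _, _, vt = np.linalg.svd(neighbors - centroid)   # smallest singular vector = normal
    return vt[-1], centroid

def point_to_plane_residual(p_L, T_GI, T_IL, normal, centroid):
    """Evaluate 0 = n^T ( T_GI * T_IL * p_L - q ) for one LiDAR point p_L (3,)."""
    p_h = np.append(p_L, 1.0)                        # homogeneous LiDAR point
    p_world = (T_GI @ T_IL @ p_h)[:3]                # project to the world frame
    return float(normal @ (p_world - centroid))

# Example with synthetic data: 5 noisy points near the plane z = 1
rng = np.random.default_rng(0)
neighbors = np.column_stack([rng.uniform(-1, 1, 5), rng.uniform(-1, 1, 5),
                             1.0 + 0.01 * rng.standard_normal(5)])
normal, centroid = fit_facet(neighbors)
res = point_to_plane_residual(np.array([0.2, 0.1, 1.0]), np.eye(4), np.eye(4), normal, centroid)
print(normal, res)   # residual should be close to zero
```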
The method registers the accumulated original point cloud points with the map points by adopting a direct method, does not need operations such as feature extraction for matching consecutive frames, and is suitable for 3D LiDAR data of any kind.
In one embodiment, the dynamic environment 3D synchronous positioning and mapping method provided by the present application further includes: the method for eliminating the accumulated error of the synchronous positioning and mapping module by adopting the closed loop detection module based on the factor graph can comprise the following steps:
when the optimal state estimate obtained from a new laser radar scan is added to the factor graph, the Euclidean distance is first used to find the prior states closest to it; the current key frame and the existing map are projected into the world coordinate system, the key frames are matched with the current frame to obtain the relative transformation between them, and the relative transformation between the key frames and the current frame is added to the factor graph as a loop closure factor.
Specifically, the closed loop detection module adopts closed loop detection based on Euclidean distance and frame-to-local-map iterative closest point (ICP) matching, unified in a factor-graph-based closed loop detection framework; the factor graph realizes recognition and matching of frames from the same place according to the state relationship. When a new LiDAR scan yields a state $x_{k+1}$ that is added to the factor graph, the Euclidean distance is first used to find its nearby prior states (within 15 m); for example, $x_5$ is one of the candidates found, and a match is then attempted between the key frames $\{F_{5-j},\ldots,F_5,\ldots,F_{5+j}\}$ and the current frame $F_{k+1}$ (with $j=12$). Before matching, the current key frame and the local map are projected into the world coordinate system, and after the relative transformation $\Delta T_{5,k+1}$ is obtained, it is added to the graph as a loop closure factor. The implementation is shown in fig. 3.
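A simplified sketch of the candidate search and frame-to-local-map matching is given below; Open3D's point-to-point ICP is used as a stand-in for the matcher, the 15 m search radius follows the text, and the margin that excludes recent keyframes is an assumption.

```python
# Loop-candidate search by Euclidean distance plus frame-to-local-map ICP matching.
import numpy as np
import open3d as o3d

def find_loop_candidate(current_position, keyframe_positions, skip_recent=30, radius=15.0):
    """Return the index of the closest prior keyframe within `radius`, or None."""
    if len(keyframe_positions) <= skip_recent:
        return None
    past = np.asarray(keyframe_positions[:-skip_recent])   # ignore the most recent frames
    d = np.linalg.norm(past - np.asarray(current_position), axis=1)
    idx = int(np.argmin(d))
    return idx if d[idx] < radius else None

def match_to_local_map(current_points, local_map_points, init=np.eye(4), threshold=1.0):
    """ICP between the current frame and a local map built from nearby keyframes;
    the resulting relative transform is what is added as a loop closure factor."""
    src = o3d.geometry.PointCloud()
    src.points = o3d.utility.Vector3dVector(current_points)
    tgt = o3d.geometry.PointCloud()
    tgt.points = o3d.utility.Vector3dVector(local_map_points)
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, threshold, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation
```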
And the long-term accumulated error of the SLAM system is eliminated through a closed-loop detection module, so that the map building accuracy is improved.
In the dynamic environment 3D synchronous positioning and mapping method provided by the application, the trajectory and map results can be saved through the ROS communication service, and the estimated trajectory is compared with the ground truth and visualized by using the EVO tool.
According to the dynamic environment 3D synchronous positioning and mapping method provided by the application, aiming at the problem that SLAM performance is easily limited in a complex dynamic environment, an end-to-end full convolution semantic segmentation network (FCNN) is adopted to segment potential moving targets, the remaining static background point cloud is transmitted to a static SLAM module based on a direct point cloud registration method, real-time updating, efficient searching and management of the 3D map are realized through the ikd-Tree data structure, and a multi-factor-graph closed loop detection module is added, so that detection precision and robustness are further improved.
It should be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises an element.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for system embodiments, since they are substantially similar to method embodiments, the description is relatively simple, as relevant to see a section of the description of method embodiments.

Claims (10)

1. A method for 3D synchronous positioning and mapping in a dynamic environment, the method comprising:
acquiring the original point cloud of the laser radar and inertial measurement data of an inertial measurement unit sensor;
fusing the original point cloud and the inertial measurement data to obtain fused data;
processing the fusion data by adopting an end-to-end full convolution semantic segmentation network to obtain potential motion target point cloud and static background point cloud;
executing an interactive multi-model unscented Kalman filtering method based on probability association on the potential motion target point cloud to realize multi-target tracking;
and the static background point cloud inputs a synchronous positioning and mapping module to finish synchronous positioning and mapping.
2. The method according to claim 1, wherein the method further comprises:
carrying out distortion correction on the fusion data to obtain corrected fusion data;
processing the fusion data by adopting an end-to-end full convolution semantic segmentation network to obtain a potential motion target point cloud and a static background point cloud, wherein the method comprises the following steps of:
and processing the corrected fusion data by adopting an end-to-end full convolution semantic segmentation network to obtain potential motion target point cloud and static background point cloud.
3. The method of claim 2, wherein the distortion correcting the fused data to obtain corrected fused data comprises:
and according to the time interval of the start and the end of the laser radar frame scanning, estimating the motion in the time period corresponding to the time interval by utilizing IMU pre-integration, adjusting the one-to-one correspondence between the laser radar frame and the vehicle motion through the geometric relationship between the laser radar and the inertial measurement unit sensor, and carrying out distortion correction on the fusion data to obtain corrected fusion data.
4. The method of claim 1, wherein the static background point cloud input synchronization positioning and mapping module performs synchronization positioning and mapping, comprising:
performing accumulated registration on the static background point cloud and map points by adopting a direct registration method to obtain a static map;
the initial map points are obtained by transforming the first frame of static background point cloud according to the initial pose estimate, and static background point clouds of subsequent frames update the map according to the direct registration method.
5. The method of claim 4, wherein the performing cumulative registration of the static background point cloud and map points using a direct registration method to obtain a static map comprises:
carrying out state prediction by adopting measurement information of an inertial measurement unit sensor to obtain a predicted state; correcting motion distortion of the same frame of points sampled by the laser radar in the continuous motion process under different postures according to the prediction state, and projecting all points in each frame of static background point cloud to the end of each scanning to obtain corrected point cloud;
the state estimation in the iterative motion process obtains the optimal state and covariance estimation;
converting the corrected point cloud into a world coordinate system according to the optimal state and covariance estimation to obtain the corrected point cloud under the world coordinate system;
and inserting the corrected point cloud under the world coordinate system into the existing map until all the static background point clouds are inserted into the existing map, so as to obtain the static map.
6. The method of claim 5, wherein the state estimation during the iterative motion process finds an optimal state and covariance estimate, comprising:
according to the optimal state estimation and covariance output by the previous odometry step and the currently input measured value of the inertial measurement unit sensor, obtaining the predicted state of the inertial measurement unit sensor, projecting the laser radar point in each measured static background point cloud into a world coordinate system through the predicted state, searching a preset number of nearest neighbors in the existing map, and fitting a local facet block to the nearest neighbors; determining a measurement equation from the local facet block;
and obtaining state distribution according to the measurement equation, combining prior Gaussian distribution, obtaining state posterior probability distribution by adopting a maximum posterior probability estimation method, and repeating iteration until the optimal state and covariance estimation are obtained.
7. The method of claim 6, wherein said determining a measurement equation from said local facet block comprises:
the fitted local facet block has a normal vector ${}^{G}\mathbf{u}_j$ and a centroid ${}^{G}\mathbf{q}_j$ used in the measurement model; the measurement equation is determined according to the normal vector and the centroid as follows:

$$0 = {}^{G}\mathbf{u}_j^{T}\left({}^{G}\mathbf{T}_{I_k}\,{}^{I}\mathbf{T}_{L}\left({}^{L}\mathbf{p}_j + {}^{L}\mathbf{n}_j\right) - {}^{G}\mathbf{q}_j\right)$$

wherein I is the inertial measurement unit sensor coordinate system, G is the world coordinate system, L is the laser radar coordinate system, ${}^{G}\mathbf{T}_{I_k}$ is the laser radar pose, ${}^{I}\mathbf{T}_{L}$ is the extrinsic parameter between the laser radar and the inertial measurement unit sensor, k is the laser radar scan index, ${}^{L}\mathbf{p}_j$ represents a point obtained by the k-th scan expressed in the local laser radar coordinate system L at the end of the scan, and ${}^{L}\mathbf{n}_j$ is the noise contaminating the laser radar measurement, consisting of ranging noise and beam-direction noise.
8. The method of claim 6, wherein the problem of the maximum a posteriori probability estimation method is solved by an iterated Kalman filter, wherein the Kalman gain used in the Kalman filter is calculated by inverting a matrix of the state dimension.
9. The method according to any one of claims 1-8, further comprising: and eliminating the accumulated error of the synchronous positioning and mapping module by adopting a closed loop detection module based on a factor graph.
10. The method of claim 9, wherein said employing a factor-based closed loop detection module to eliminate accumulated errors of the synchronous positioning and mapping module comprises:
when the optimal state estimate obtained from a new laser radar scan is added to the factor graph, the Euclidean distance is first used to find the prior states closest to it; the current key frame and the existing map are projected into a world coordinate system, the key frames are matched with the current frame to obtain the relative transformation between them, and the relative transformation between the key frames and the current frame is added to the factor graph as a loop closure factor.
CN202310257034.8A 2023-03-08 2023-03-08 Dynamic environment 3D synchronous positioning and mapping method Pending CN116429084A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310257034.8A CN116429084A (en) 2023-03-08 2023-03-08 Dynamic environment 3D synchronous positioning and mapping method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310257034.8A CN116429084A (en) 2023-03-08 2023-03-08 Dynamic environment 3D synchronous positioning and mapping method

Publications (1)

Publication Number Publication Date
CN116429084A true CN116429084A (en) 2023-07-14

Family

ID=87078709

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310257034.8A Pending CN116429084A (en) 2023-03-08 2023-03-08 Dynamic environment 3D synchronous positioning and mapping method

Country Status (1)

Country Link
CN (1) CN116429084A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117215309A (en) * 2023-09-27 2023-12-12 河南工业大学 Positioning and mapping method and positioning and mapping device for unmanned vehicle grain depot cleaning system
CN117612070A (en) * 2024-01-19 2024-02-27 福思(杭州)智能科技有限公司 Static truth value data correction method and device and storage medium
CN117612070B (en) * 2024-01-19 2024-05-03 福思(杭州)智能科技有限公司 Static truth value data correction method and device and storage medium
CN118014850A (en) * 2024-04-09 2024-05-10 绘见科技(深圳)有限公司 SLAM precision enhancement method based on plane characteristics

Similar Documents

Publication Publication Date Title
Zhao et al. A robust laser-inertial odometry and mapping method for large-scale highway environments
CN112268559B (en) Mobile measurement method for fusing SLAM technology in complex environment
CN116429084A (en) Dynamic environment 3D synchronous positioning and mapping method
Anderson et al. Towards relative continuous-time SLAM
Chiu et al. Robust vision-aided navigation using sliding-window factor graphs
CN112965063B (en) Robot mapping and positioning method
Niu et al. IC-GVINS: A robust, real-time, INS-centric GNSS-visual-inertial navigation system
Zhou et al. π-LSAM: LiDAR smoothing and mapping with planes
CN114018248B (en) Mileage metering method and image building method integrating code wheel and laser radar
CN113920198B (en) Coarse-to-fine multi-sensor fusion positioning method based on semantic edge alignment
CN110751123B (en) Monocular vision inertial odometer system and method
CN116429116A (en) Robot positioning method and equipment
Huai et al. Stereo-inertial odometry using nonlinear optimization
Bai et al. Perception-aided visual-inertial integrated positioning in dynamic urban areas
Yuan et al. Sr-lio: Lidar-inertial odometry with sweep reconstruction
CN115950414A (en) Adaptive multi-fusion SLAM method for different sensor data
Zhu et al. Camera, LiDAR, and IMU based multi-sensor fusion SLAM: A survey
Zhong et al. A factor graph optimization mapping based on normaldistributions transform
Wang et al. SW-LIO: A Sliding Window Based Tightly Coupled LiDAR-Inertial Odometry
CN115164887A (en) Pedestrian navigation positioning method and device based on laser radar and inertia combination
Burnett et al. Continuous-Time Radar-Inertial and Lidar-Inertial Odometry using a Gaussian Process Motion Prior
CN114323038B (en) Outdoor positioning method integrating binocular vision and 2D laser radar
Yuan et al. LIWO: LiDAR-Inertial-Wheel Odometry
Wang et al. Lidar-Inertial SLAM Method for Accurate and Robust Mapping
Cao et al. A Multi-sensor Deep Fusion SLAM Algorithm based on TSDF map

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination