CN111983639B - Multi-sensor SLAM method based on Multi-Camera/Lidar/IMU - Google Patents


Info

Publication number
CN111983639B
CN111983639B (application CN202010860418.5A)
Authority
CN
China
Prior art keywords
frame
data
pose
laser radar
imu
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010860418.5A
Other languages
Chinese (zh)
Other versions
CN111983639A (en)
Inventor
杨建松
余小欢
陈嵩
白云峰
张合勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Yixun Technology Co ltd
Original Assignee
Zhejiang Guangpo Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Guangpo Intelligent Technology Co ltd filed Critical Zhejiang Guangpo Intelligent Technology Co ltd
Priority to CN202010860418.5A
Publication of CN111983639A
Application granted
Publication of CN111983639B
Legal status: Active (Current)
Anticipated expiration

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C22/00Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers, using pedometers
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S17/894 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention relates to a Multi-sensor SLAM method based on Multi-Camera/Lidar/IMU, which comprises the steps of: carrying out tight coupling joint initialization on a plurality of pieces of image data obtained by a multi-view camera and data obtained by an IMU inertial measurement unit to obtain an initial pose of the system; acquiring point cloud data of a laser radar frame by a laser radar sensor, preprocessing the point cloud data, and dividing the point cloud into strong corner points, weak corner points, strong plane points and weak plane points; optimizing the frame pose of the laser radar through the initial pose of the system; further optimizing the frame pose of the laser radar through closed loop detection; and performing map splicing by using the optimized laser radar frame pose. By fusing the data obtained by the three sensors, the invention makes the frame pose of the laser radar more accurate and effectively improves the robustness and stability of the positioning system.

Description

Multi-sensor SLAM method based on Multi-Camera/Lidar/IMU
Technical Field
The invention relates to the technical fields of computer vision, robots and the like, in particular to a Multi-sensor SLAM method based on Multi-Camera/Lidar/IMU.
Background
In recent years, the unmanned driving industry has gradually entered the public's field of view. SLAM enables a mobile vehicle to localize itself in environments where GPS cannot work normally, and to track and recognize dynamic vehicles and pedestrians, which makes intelligent obstacle avoidance, assisted driving and autonomous navigation possible.
In localization and mapping, SLAM mainly obtains raw data through sensors, with the assistance of an IMU. At the current stage of development, SLAM has two implementation forms: one is laser SLAM centered on a laser radar; the other is visual SLAM based on cameras. Laser SLAM is at present mainly used indoors and in environments of limited extent. Visual SLAM covers a wide range and is better suited to outdoor environments, but it depends strongly on illumination and cannot work in dark or texture-less areas.
With the rapid development of unmanned driving and in order to adapt to the market environment, the usage scenarios of unmanned driving are becoming more and more diverse, covering both indoor and outdoor environments. The indoor structured environment is comparatively ideal, whereas outdoor illumination varies severely, there are many dynamic targets such as vehicles and pedestrians, and many scenes with repetitive or cluttered textures, which poses great challenges to the mass production of unmanned vehicles. Therefore, positioning and mapping with a single sensor cannot meet the requirements of building high-precision maps in complex scenes.
It is therefore necessary to provide a new scheme that fuses data from multiple sensors, so as to solve the problem of building an environment map in a complex outdoor environment.
Disclosure of Invention
In view of the fact that a single existing sensor cannot cope with complex outdoor environments, the invention provides a Multi-sensor SLAM method based on Multi-Camera/Lidar/IMU.
The invention provides a Multi-sensor SLAM method based on Multi-Camera/Lidar/IMU, which comprises the following steps:
s1: carrying out tight coupling joint initialization on a plurality of pieces of image data obtained by the multi-view camera and data obtained by the IMU inertial measurement unit to obtain an initial pose of the system;
s2, acquiring point cloud data of a laser radar frame by a laser radar sensor, preprocessing the point cloud data, and dividing the point cloud into strong angular points, weak angular points, jiang Ping plane points and weak plane points;
s3, optimizing the frame pose of the laser radar through the initial pose of the system;
s4: the frame pose of the laser radar is further optimized through closed loop detection;
s5: and performing map splicing by using the optimized laser radar frame pose.
Preferably, obtaining the initialization pose of the system comprises the following steps:
s11: performing hard synchronous triggering of the multi-camera through the raspberry group;
s12: obtaining multi-eye image and IMU data from a multi-eye camera, and aligning the multi-eye image data with the IMU data through a time stamp;
s13: multithreading is used for tracking visual characteristics of the multi-view images, and the tracked results are stored;
s14: preprocessing the multi-view image data and IMU data;
s15: carrying out joint initialization on the multi-view image data and the IMU data, respectively carrying out tight coupling processing on the multi-view image information obtained by the multi-view camera and the IMU data, and carrying out tight coupling processing on the processed data and the IMU data again to obtain a multi-view tight coupling visual inertial odometer;
s16: optimizing the multi-purpose tightly-coupled visual inertial odometer and performing marginalization treatment at the same time;
s17: and obtaining the optimized multi-vision inertial odometer, obtaining the initialized pose of the system and preparing for the next step.
Preferably, multithreading is used for visual feature tracking of the multi-view images, mainly using the optical flow tracking method in OpenCV.
Preferably, preprocessing the multi-view image data and IMU data includes the steps of:
s141: firstly, IMU data between two adjacent frames are taken out, system initialization pose estimation is carried out, and an initial yaw angle is set to be zero;
s142: pre-integrating IMU data among frames mainly in IMU data preprocessing, and simultaneously estimating a Jacobian matrix of a state variable of the integrated measurement variance and the pre-integrated value;
s143: preprocessing the multi-view image data: firstly, judging a key frame, judging whether to press the feature into a feature linked list according to the ID of the feature point, judging whether to set the frame as the key frame according to the average parallax of the front frame and the rear frame, and determining the optimized position of the rear sliding window.
Preferably, the step of optimizing the multi-view tightly coupled visual-inertial odometer comprises:
S161, back-end optimization based on a sliding window: the state quantities are optimized by using the open-source optimization library Ceres, a cost function is constructed from the image reprojection errors and the IMU measurement residuals, and it is optimized by a nonlinear least squares method;
S162: marginalization, namely determining the data frame to be marginalized out of the sliding window according to whether the current frame is a key frame or not.
Preferably, the method for preprocessing the laser radar point cloud data comprises the following steps:
s21: receiving original 16-line point cloud data of a laser radar sensor;
s22: calculating a scanning start angle and a scanning end angle of each Scan, calculating a vertical angle and a line number of each point, and a horizontal angle and a column number of each point according to coordinate values of each point, and storing the vertical angle and the line number into a complete Scan;
s23: calculating the flatness between two adjacent upper and lower points according to the coordinates of the same column, so as to determine whether the point is a ground point;
s24: releasing the point cloud after segmentation and segmentation information;
s25: calculating the curvature of each point according to the segmented point cloud;
s26: and sorting the point clouds of each line by using curvature, dividing the point clouds into strong angular points, weak angular points, jiang Ping plane points and weak plane points according to the value of the curvature, and issuing.
Preferably, the step of optimizing the frame pose of the laser radar through the initial pose of the system comprises the following steps:
s31: receiving the optimized visual inertial odometer, the pose of a key frame, the preprocessed point cloud data and the IMU data, and storing the data into corresponding queues;
s32: aligning each data frame through each data time stamp, constructing a closed-loop detection key frame, processing the segmented point cloud data, aligning the data, changing a coordinate system, converting the coordinate system of the laser radar into a coordinate system of a plurality of cameras, and storing the coordinate system into a corresponding queuing container; meanwhile, index of a key frame is pressed in the opposite column of the laser radar data;
s33: and processing the laser radar frame data to obtain the pose of the optimized laser radar frame.
Preferably, the step of processing the lidar frame data includes:
s331: taking out the index of the key frame, and initializing the gesture of the key frame of the first frame, namely setting the Euler angles of the gesture to be zero;
s332: when the index is larger than 1, a previous frame of the current frame is taken out, and corresponding laser radar frame data and IMU frame data are obtained between time stamps of the current frame and the previous frame;
s333: acquiring the pose of a visual inertial odometer of a previous frame;
s334: only one frame of laser radar data participates in constructing a cloud map between the time stamps of the current frame and the previous frame, and because the time interval exists between the laser radar frame and the current frame, the pose is different, the IMU frame data is utilized for pushing the position, and the point cloud is converted into a world coordinate system by utilizing the more accurate pose obtained by pushing the position;
s335: and further optimizing the frame pose of the laser radar obtained by the position pushing.
Preferably, the method for further optimizing the pose of the laser radar frame comprises the following steps:
s3351: converting the point cloud data under the world coordinate system into the coordinate system of the previous frame, and simultaneously storing the point cloud data into a point cloud container;
s3352: when the capacity of the point cloud container is larger than 1, the point cloud of the laser radar of the current frame is taken out, meanwhile, the visual inertial odometer pose of the current frame is taken out, the pose is primarily corrected, and a relatively accurate initial value is provided for optimizing the follow-up current frame to the local map;
s3353: constructing a local map by utilizing point cloud data of 20 frames adjacent to the current frame;
s3354: and optimizing the current frame to the local map, constructing a cost function of the minimum distance between the points by utilizing two types of points in the current frame and the local map according to the corner points and the face points obtained by preprocessing the point cloud data of the laser radar frame, and performing Gaussian Newton iterative optimization to obtain the pose of the laser radar frame after optimization.
Preferably, the step of further optimizing the laser radar frame pose by closed loop detection includes:
s41: and detecting a closed-loop candidate frame which is more similar to the current frame by using a DBoW visual dictionary, calculating pose transformation from the current frame to the closed-loop candidate frame by using a RANSAC algorithm, simultaneously calculating a common view point between the two, and storing closed-loop information for closed-loop optimization when the number of the common view points is larger than a threshold value.
And S42, for the detected closed-loop information, performing pose adjustment in a pose chart optimization mode, and updating the optimized pose into the laser radar pose.
The beneficial effects of the invention are as follows:
the multi-view camera can obtain a plurality of pieces of image data, the image data are respectively and tightly coupled with the IMU data to obtain initial pose of a plurality of systems, then the initial pose of each system and the IMU data are tightly coupled again to obtain an initial pose of a more accurate system, the pose of a laser Lei Zhen frame is optimized by using the initial pose to obtain a pose of a more accurate laser radar frame, finally the pose of the laser radar frame is further optimized by using closed loop detection to obtain a final laser radar frame pose, and a high-precision map is constructed by using the pose of the laser radar frame; according to the invention, the data obtained by the three sensors are fused, so that the frame pose of the laser radar is more accurate, and the robustness and stability of a positioning system are effectively improved.
Drawings
FIG. 1 is a flow chart of a Multi-sensor SLAM method based on Multi-Camera/Lidar/IMU;
Detailed Description
The present invention will be described in detail below with reference to the specific embodiments shown in the drawings; however, these embodiments do not limit the present invention, and structural, methodological or functional modifications made by those skilled in the art on the basis of these embodiments are all included in the scope of protection of the present invention.
As shown in fig. 1, a Multi-sensor SLAM method based on Multi-Camera/Lidar/IMU includes the steps of:
s1: carrying out tight coupling joint initialization on a plurality of pieces of image data obtained by the multi-view camera and data obtained by the IMU inertial measurement unit to obtain an initial pose of the system;
the method for obtaining the system initialization pose comprises the following steps:
s11: performing hard synchronous triggering of the multi-camera through the raspberry group;
s12: obtaining multi-eye image data and IMU data from a multi-eye camera, and aligning the multi-eye image data and the IMU data through a time stamp;
in a specific embodiment, a plurality of cameras are arranged on the front, the back, the left and the right of the system, binocular cameras are arranged on the front, the back and the two sides, and monocular cameras are arranged on the left and the right sides; six pieces of different position image information can be obtained simultaneously when the system shoots, and surrounding environments can be shot in an omnibearing manner.
S13: multithreading is used for visual feature tracking of the multi-view images, mainly using the optical flow tracking method in OpenCV.
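By way of non-limiting illustration, the per-camera tracking thread of step S13 may call the pyramidal Lucas-Kanade optical flow of OpenCV roughly as sketched below; the function name trackFeatures, the 21x21 search window and the pyramid depth are assumptions made for illustration only and are not part of the invention.

    // Sketch of KLT optical flow tracking for one camera stream (C++, OpenCV).
    #include <opencv2/opencv.hpp>
    #include <vector>

    // Track features from the previous image into the current image and keep
    // only the points that were followed successfully.
    void trackFeatures(const cv::Mat& prevImg, const cv::Mat& curImg,
                       std::vector<cv::Point2f>& prevPts,
                       std::vector<cv::Point2f>& curPts) {
        std::vector<uchar> status;
        std::vector<float> err;
        cv::calcOpticalFlowPyrLK(prevImg, curImg, prevPts, curPts, status, err,
                                 cv::Size(21, 21), 3 /* pyramid levels */);
        size_t k = 0;                       // compact the arrays in place
        for (size_t i = 0; i < status.size(); ++i) {
            if (status[i]) { prevPts[k] = prevPts[i]; curPts[k] = curPts[i]; ++k; }
        }
        prevPts.resize(k);
        curPts.resize(k);
    }

The tracked results of each camera thread would then be stored as described in step S13, for example keyed by feature ID.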
S14: preprocessing the multi-view image data and IMU data;
preprocessing the multi-view image data and IMU data comprises the following steps:
s141: firstly, IMU data between two adjacent frames are taken out, system initialization pose estimation is carried out, and an initial yaw angle is set to be zero;
s142: pre-integrating IMU data among frames mainly in IMU data preprocessing, and simultaneously estimating a Jacobian matrix of a state variable of the integrated measurement variance and the pre-integrated value;
s143: preprocessing a multi-view image: firstly, judging a key frame, judging whether to press the feature into a feature linked list according to the ID of the feature point, judging whether to set the frame as the key frame according to the average parallax of the front frame and the rear frame, and determining the optimized position of the rear sliding window.
Because the IMU visual inertial measurement unit and the multi-camera acquire different types of signals and have large differences in sampling rates, the IMU data generally needs to be integrated between two frames of images in order to synchronize the data of the two. And simultaneously estimating the Jacobian matrix of the integrated measurement variance and the pre-integrated value to the state variable.
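A minimal sketch of this inter-frame pre-integration is given below for illustration only; the structure names and the simple Euler-style integration are assumptions, and the propagation of the measurement variance and of the Jacobian with respect to the state variables mentioned in step S142 is omitted for brevity.

    // Sketch: accumulate relative rotation, velocity and position from the IMU
    // samples that fall between two image frames (C++, Eigen).
    #include <Eigen/Dense>
    #include <vector>

    struct ImuSample { double dt; Eigen::Vector3d acc, gyr; };

    struct Preintegration {
        Eigen::Quaterniond dq = Eigen::Quaterniond::Identity(); // relative rotation
        Eigen::Vector3d dv = Eigen::Vector3d::Zero();           // relative velocity
        Eigen::Vector3d dp = Eigen::Vector3d::Zero();           // relative position
    };

    Preintegration preintegrate(const std::vector<ImuSample>& samples,
                                const Eigen::Vector3d& ba,   // accelerometer bias
                                const Eigen::Vector3d& bg) { // gyroscope bias
        Preintegration out;
        for (const auto& s : samples) {
            Eigen::Vector3d w = s.gyr - bg;                  // bias-corrected rate
            Eigen::Vector3d a = out.dq * (s.acc - ba);       // accel in start frame
            out.dp += out.dv * s.dt + 0.5 * a * s.dt * s.dt;
            out.dv += a * s.dt;
            Eigen::Quaterniond dq(1.0, 0.5 * w.x() * s.dt,
                                       0.5 * w.y() * s.dt,
                                       0.5 * w.z() * s.dt);
            out.dq = (out.dq * dq).normalized();
        }
        return out;
    }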
S15: carrying out joint initialization on the multi-view images and the IMU data: each piece of image information obtained by the multi-view camera is first tightly coupled with the IMU data respectively, and the processed data are then tightly coupled with the IMU data again to obtain a multi-view tightly coupled visual-inertial odometer;
s16: optimizing the multi-purpose tightly-coupled visual inertial odometer and performing marginalization treatment at the same time;
based on the back-end optimization of the sliding window, optimizing the state quantity by using an open-source optimization library Ceres, constructing a cost function by calculating the reprojection error of the image and the IMU measurement residual error, and optimizing by using a nonlinear overall least squares method; the marginalizing process determines the data frames within the marginalized sliding window based on whether the current frame is a key frame.
S17: obtaining the optimized multi-view visual-inertial odometer, obtaining the initialized pose of the system, and preparing for the next step.
S2, acquiring point cloud data of a laser radar frame by a laser radar sensor, preprocessing the point cloud data, and dividing the point cloud into strong corner points, weak corner points, strong plane points and weak plane points;
the method for preprocessing the laser radar point cloud data comprises the following steps:
s21: receiving original 16-line point cloud data of a laser radar sensor;
s22: calculating a scanning start angle and a scanning end angle of each Scan, calculating a vertical angle and a line number of each point, and a horizontal angle and a column number of each point according to coordinate values of each point, and storing the vertical angle and the line number into a complete Scan;
s23: calculating the flatness between two adjacent upper and lower points according to the coordinates of the same column, so as to determine whether the point is a ground point;
s24: releasing the point cloud after segmentation and segmentation information;
s25: calculating the curvature of each point according to the segmented point cloud;
s26: and sorting the point clouds of each line by using curvature, dividing the point clouds into strong angular points, weak angular points, jiang Ping plane points and weak plane points according to the value of the curvature, and issuing.
S3, optimizing the frame pose of the laser radar through the initial pose of the system;
s31: receiving the optimized visual inertial odometer, the pose of a key frame, the preprocessed point cloud data and the IMU data, and storing the data into corresponding queues;
s32: aligning each data frame through each data time stamp, constructing a closed loop detection frame, processing the segmented point cloud data, aligning the data, changing a coordinate system, converting the coordinate system of the laser radar into a coordinate system of a plurality of cameras, and storing the coordinate system into a corresponding queuing container; meanwhile, pressing in the index of the key frame in the column of the laser radar data;
s33: and processing the laser radar frame data to obtain the pose of the optimized laser radar frame.
The step of processing the laser radar frame data comprises the following steps:
s331: taking out the index of the key frame, and initializing the gesture of the key frame of the first close frame, namely setting the Euler angles of the gesture to be zero;
s332: when the index is larger than 1, a previous frame of the current frame is taken out, and corresponding laser radar frame and IMU frame data are acquired between time stamps of the current frame and the previous frame;
s333: acquiring the pose of a visual inertial odometer of a previous frame;
s334: only one frame of laser radar data participates in constructing a cloud map between the time stamps of the current frame and the previous frame, and because the time interval exists between the laser radar frame and the current frame, the pose is different, the IMU frame data is utilized for pushing the position, and the point cloud is converted into a world coordinate system by utilizing the more accurate pose obtained by pushing the position;
s335: and further optimizing the frame pose of the laser radar obtained by the position pushing.
The method for further optimizing the pose of the laser radar frame comprises the following steps:
s3351: converting the point cloud data under the world coordinate system into the coordinate system of the previous frame, and simultaneously storing the point cloud data into a point cloud container;
s3352: when the capacity of the point cloud container is larger than 1, the point cloud of the laser radar of the current frame is taken out, meanwhile, the position and the posture of the visual inertial odometer of the current frame are corrected preliminarily, and a relatively accurate initial value is provided for optimizing the follow-up current frame to the local map;
s3353: constructing a local map by utilizing point cloud data of 20 frames adjacent to the current frame;
s3354: and optimizing the current frame to the local map, constructing a cost function of the minimum distance between the points by utilizing two types of points in the current frame and the local map according to the corner points and the face points obtained by preprocessing the point cloud data of the laser radar frame, and performing Gaussian Newton iterative optimization to obtain the pose of the laser radar frame after optimization.
S4: the frame pose of the laser radar is further optimized through closed loop detection;
the step of further optimizing the frame pose of the laser radar comprises the following steps:
s41: detecting a closed-loop candidate frame which is similar to the current frame by using a DBoW visual dictionary, calculating pose transformation from the current frame to the closed-loop candidate frame by using a RANSAC algorithm, and simultaneously calculating a common view point between the two, and storing closed-loop information for closed-loop optimization when the number of the common view points is larger than a threshold value;
and S42, for the detected closed-loop information, performing pose adjustment in a pose chart optimization mode, and updating the optimized pose into the laser radar pose.
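A non-limiting sketch of the geometric verification in step S41 is given below; it assumes that the co-visible map points of the closed-loop candidate have known 3-D positions and uses OpenCV's RANSAC PnP solver, with an illustrative inlier threshold standing in for the threshold on the number of co-visible points.

    // Sketch: RANSAC pose estimation between the current frame (2-D features)
    // and the closed-loop candidate (3-D map points) (C++, OpenCV).
    #include <opencv2/opencv.hpp>
    #include <vector>

    bool verifyLoop(const std::vector<cv::Point3f>& mapPts,   // candidate's 3-D points
                    const std::vector<cv::Point2f>& imgPts,   // matched 2-D features
                    const cv::Mat& K, int minInliers,
                    cv::Mat& rvec, cv::Mat& tvec) {
        std::vector<int> inliers;
        bool ok = cv::solvePnPRansac(mapPts, imgPts, K, cv::Mat(), rvec, tvec,
                                     false, 100, 4.0f, 0.99, inliers);
        // Accept the loop only if enough co-visible points survive as inliers.
        return ok && static_cast<int>(inliers.size()) >= minInliers;
    }

If the loop is accepted, the relative pose and the frame pair are stored as the closed-loop information used in step S42.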
S5: and performing map splicing by using the optimized laser radar frame pose.
The multi-view camera obtains a plurality of image data; each image data is tightly coupled with the IMU data to obtain a plurality of initial system poses, and these initial poses are then tightly coupled with the IMU data again to obtain a more accurate initial system pose; the laser radar frame pose is optimized with this initial pose to obtain a more accurate laser radar frame pose; finally, the laser radar frame pose is further optimized by closed-loop detection to obtain the final laser radar frame pose, with which a high-precision map is constructed. By fusing the data obtained by the three sensors, the invention makes the frame pose of the laser radar more accurate and effectively improves the robustness and stability of the positioning system.
Although the preferred embodiments of the present invention have been disclosed for illustrative purposes, those skilled in the art will appreciate that various modifications, additions and substitutions are possible, without departing from the scope and spirit of the invention as disclosed in the accompanying claims.

Claims (8)

1. A Multi-sensor SLAM method based on Multi-Camera/Lidar/IMU is characterized in that: the method comprises the following steps:
s1: carrying out tight coupling joint initialization on a plurality of pieces of image data obtained by the multi-view camera and data obtained by the IMU inertial measurement unit to obtain an initial pose of the system;
s2, acquiring point cloud data of a laser radar frame by a laser radar sensor, preprocessing the point cloud data, and dividing the point cloud into strong angular points, weak angular points, jiang Ping plane points and weak plane points;
s3, optimizing the frame pose of the laser radar through the initial pose of the system;
s4: the frame pose of the laser radar is further optimized through closed loop detection;
s5: map splicing is carried out by utilizing the optimized laser radar frame pose;
the step S3 of optimizing the frame pose of the laser radar through the initial pose of the system comprises the following steps:
s31: receiving the optimized visual inertial odometer, the pose of a key frame, the preprocessed point cloud data and the IMU data, and storing the data into corresponding queues;
s32: aligning each data frame through each data time stamp, constructing a closed-loop detection key frame, processing the segmented point cloud data, aligning the data, changing a coordinate system, converting the coordinate system of the laser radar into a coordinate system of a plurality of cameras, and storing the coordinate system into a corresponding queuing container; meanwhile, index of a key frame is pressed in the opposite column of the laser radar data;
s33: processing the laser radar frame data to obtain the pose of the optimized laser radar frame;
the step of processing the laser radar frame data in step S33 includes:
s331: taking out the index of the key frame, and initializing the gesture of the key frame of the first frame, namely setting the Euler angles of the gesture to be zero;
s332: when the index is larger than 1, a previous frame of the current frame is taken out, and corresponding laser radar frame data and IMU frame data are obtained between time stamps of the current frame and the previous frame;
s333: acquiring the pose of a visual inertial odometer of a previous frame;
s334: only one frame of laser radar data participates in constructing a cloud map between the time stamps of the current frame and the previous frame, and because the time interval exists between the laser radar frame and the current frame, the pose is different, the IMU frame data is utilized for pushing the position, and the point cloud is converted into a world coordinate system by utilizing the more accurate pose obtained by pushing the position;
s335: and further optimizing the frame pose of the laser radar obtained by the position pushing.
2. The Multi-sensor SLAM method based on Multi-Camera/Lidar/IMU of claim 1, wherein: the method for obtaining the initialization pose of the system comprises the following steps:
s11: performing hard synchronous triggering of the multi-camera through the raspberry group;
s12: obtaining multi-eye image and IMU data from a multi-eye camera, and aligning the multi-eye image data with the IMU data through a time stamp;
s13: multithreading is used for tracking visual characteristics of the multi-view images, and the tracked results are stored;
s14: preprocessing the multi-view image data and IMU data;
s15: carrying out joint initialization on the multi-view image data and the IMU data, respectively carrying out tight coupling processing on the multi-view image information obtained by the multi-view camera and the IMU data, and carrying out tight coupling processing on the processed data and the IMU data again to obtain a multi-view tight coupling visual inertial odometer;
s16: optimizing the multi-purpose tightly-coupled visual inertial odometer and performing marginalization treatment at the same time;
s17: and obtaining the optimized multi-vision inertial odometer, obtaining the initialized pose of the system and preparing for the next step.
3. The Multi-sensor SLAM method based on Multi-Camera/Lidar/IMU of claim 2, wherein: multithreading is used for visual feature tracking of the multi-view images, mainly using the optical flow tracking method in OpenCV.
4. The Multi-sensor SLAM method based on Multi-Camera/Lidar/IMU of claim 2, wherein: preprocessing the multi-view image data and IMU data comprises the following steps:
s141: firstly, IMU data between two adjacent frames are taken out, system initialization pose estimation is carried out, and an initial yaw angle is set to be zero;
s142: pre-integrating IMU data among frames mainly in IMU data preprocessing, and simultaneously estimating a Jacobian matrix of a state variable of the integrated measurement variance and the pre-integrated value;
s143: preprocessing the multi-view image data: firstly, judging a key frame, judging whether to press the feature into a feature linked list according to the ID of the feature point, judging whether to set the frame as the key frame according to the average parallax of the front frame and the rear frame, and determining the optimized position of the rear sliding window.
5. The Multi-sensor SLAM method based on Multi-Camera/Lidar/IMU of claim 2, wherein: the step of optimizing the multi-view tightly coupled visual-inertial odometer includes:
S161, back-end optimization based on a sliding window: the state quantities are optimized by using the open-source optimization library Ceres, a cost function is constructed from the image reprojection errors and the IMU measurement residuals, and it is optimized by a nonlinear least squares method;
S162: marginalization, namely determining the data frame to be marginalized out of the sliding window according to whether the current frame is a key frame or not.
6. The Multi-sensor SLAM method based on Multi-Camera/Lidar/IMU of claim 1, wherein: the method for preprocessing the laser radar point cloud data comprises the following steps:
s21: receiving original 16-line point cloud data of a laser radar sensor;
s22: calculating a scanning start angle and a scanning end angle of each Scan, calculating a vertical angle and a line number of each point, and a horizontal angle and a column number of each point according to coordinate values of each point, and storing the vertical angle and the line number into a complete Scan;
s23: calculating the flatness between two adjacent upper and lower points according to the coordinates of the same column, so as to determine whether the point is a ground point;
s24: releasing the point cloud after segmentation and segmentation information;
s25: calculating the curvature of each point according to the segmented point cloud;
s26: and sorting the point clouds of each line by using curvature, dividing the point clouds into strong angular points, weak angular points, jiang Ping plane points and weak plane points according to the value of the curvature, and issuing.
7. The Multi-sensor SLAM method based on Multi-Camera/Lidar/IMU of claim 1, wherein: the method for further optimizing the pose of the laser radar frame comprises the following steps:
s3351: converting the point cloud data under the world coordinate system into the coordinate system of the previous frame, and simultaneously storing the point cloud data into a point cloud container;
s3352: when the capacity of the point cloud container is larger than 1, the point cloud of the laser radar of the current frame is taken out, meanwhile, the visual inertial odometer pose of the current frame is taken out, the pose is primarily corrected, and a relatively accurate initial value is provided for optimizing the follow-up current frame to the local map;
s3353: constructing a local map by utilizing point cloud data of 20 frames adjacent to the current frame;
s3354: and optimizing the current frame to the local map, constructing a cost function of the minimum distance between the points by utilizing two types of points in the current frame and the local map according to the corner points and the face points obtained by preprocessing the point cloud data of the laser radar frame, and performing Gaussian Newton iterative optimization to obtain the pose of the laser radar frame after optimization.
8. The Multi-sensor SLAM method based on Multi-Camera/Lidar/IMU of claim 1, wherein: the step of further optimizing the laser radar frame pose by closed loop detection comprises:
s41: detecting a closed-loop candidate frame which is more similar to the current frame by using a DBoW visual dictionary, calculating pose transformation from the current frame to the closed-loop candidate frame by using a RANSAC algorithm, simultaneously calculating a common view point between the two, storing closed-loop information for closed-loop optimization when the common view point number is larger than a threshold value,
and S42, carrying out pose adjustment on the detected closed-loop information by using a pose chart optimizing mode, and updating the optimized pose into each laser radar pose.
CN202010860418.5A 2020-08-25 2020-08-25 Multi-sensor SLAM method based on Multi-Camera/Lidar/IMU Active CN111983639B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010860418.5A CN111983639B (en) 2020-08-25 2020-08-25 Multi-sensor SLAM method based on Multi-Camera/Lidar/IMU

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010860418.5A CN111983639B (en) 2020-08-25 2020-08-25 Multi-sensor SLAM method based on Multi-Camera/Lidar/IMU

Publications (2)

Publication Number Publication Date
CN111983639A CN111983639A (en) 2020-11-24
CN111983639B true CN111983639B (en) 2023-06-02

Family

ID=73443183

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010860418.5A Active CN111983639B (en) 2020-08-25 2020-08-25 Multi-sensor SLAM method based on Multi-Camera/Lidar/IMU

Country Status (1)

Country Link
CN (1) CN111983639B (en)

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112197770B (en) * 2020-12-02 2021-03-12 北京欣奕华数字科技有限公司 Robot positioning method and positioning device thereof
CN112461230B (en) * 2020-12-07 2023-05-09 优必康(青岛)科技有限公司 Robot repositioning method, apparatus, robot, and readable storage medium
CN112649016B (en) * 2020-12-09 2023-10-03 南昌大学 Visual inertial odometer method based on dotted line initialization
CN112785702B (en) * 2020-12-31 2023-06-20 华南理工大学 SLAM method based on tight coupling of 2D laser radar and binocular camera
CN112965063B (en) * 2021-02-11 2022-04-01 深圳市安泽智能机器人有限公司 Robot mapping and positioning method
CN112991515B (en) * 2021-02-26 2022-08-19 山东英信计算机技术有限公司 Three-dimensional reconstruction method, device and related equipment
CN112907491B (en) * 2021-03-18 2023-08-22 中煤科工集团上海有限公司 Laser point cloud loop detection method and system suitable for underground roadway
CN113066105B (en) * 2021-04-02 2022-10-21 北京理工大学 Positioning and mapping method and system based on fusion of laser radar and inertial measurement unit
CN113432600B (en) * 2021-06-09 2022-08-16 北京科技大学 Robot instant positioning and map construction method and system based on multiple information sources
CN113432595A (en) * 2021-07-07 2021-09-24 北京三快在线科技有限公司 Equipment state acquisition method and device, computer equipment and storage medium
CN114088104B (en) * 2021-07-23 2023-09-29 武汉理工大学 Map generation method under automatic driving scene
CN113721260B (en) * 2021-08-26 2023-12-12 南京邮电大学 Online combined calibration method for laser radar, binocular camera and inertial navigation
CN113776517A (en) * 2021-09-03 2021-12-10 Oppo广东移动通信有限公司 Map generation method, device, system, storage medium and electronic equipment
CN113776519B (en) * 2021-09-14 2022-10-21 西南科技大学 AGV vehicle mapping and autonomous navigation obstacle avoidance method under lightless dynamic open environment
CN115371699B (en) * 2021-09-30 2024-03-15 达闼科技(北京)有限公司 Visual inertial odometer method and device and electronic equipment
CN114199235B (en) * 2021-11-29 2023-11-03 珠海一微半导体股份有限公司 Positioning system and positioning method based on sector depth camera
CN114170280B (en) * 2021-12-09 2023-11-28 北京能创科技有限公司 Laser odometer method, system and device based on double windows
CN114234967B (en) * 2021-12-16 2023-10-20 浙江大学 Six-foot robot positioning method based on multi-sensor fusion
CN114370871A (en) * 2022-01-13 2022-04-19 华南理工大学 Close coupling optimization method for visible light positioning and laser radar inertial odometer
CN114088087B (en) * 2022-01-21 2022-04-15 深圳大学 High-reliability high-precision navigation positioning method and system under unmanned aerial vehicle GPS-DENIED
CN114648584B (en) * 2022-05-23 2022-08-30 北京理工大学前沿技术研究院 Robustness control method and system for multi-source fusion positioning
CN115280960A (en) * 2022-07-08 2022-11-04 江苏大学 Combine harvester steering control method based on field vision SLAM
CN115560744A (en) * 2022-08-22 2023-01-03 深圳市普渡科技有限公司 Robot, multi-sensor-based three-dimensional mapping method and storage medium
CN115311353B (en) * 2022-08-29 2023-10-10 玩出梦想(上海)科技有限公司 Multi-sensor multi-handle controller graph optimization tight coupling tracking method and system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108401461A (en) * 2017-12-29 2018-08-14 深圳前海达闼云端智能科技有限公司 Three-dimensional mapping method, device and system, cloud platform, electronic equipment and computer program product
CN108827315A (en) * 2018-08-17 2018-11-16 华南理工大学 Vision inertia odometer position and orientation estimation method and device based on manifold pre-integration
CN110261870A (en) * 2019-04-15 2019-09-20 浙江工业大学 It is a kind of to synchronize positioning for vision-inertia-laser fusion and build drawing method
CN111337947A (en) * 2020-05-18 2020-06-26 深圳市智绘科技有限公司 Instant mapping and positioning method, device, system and storage medium
CN111561923A (en) * 2020-05-19 2020-08-21 北京数字绿土科技有限公司 SLAM (simultaneous localization and mapping) mapping method and system based on multi-sensor fusion
CN111402339A (en) * 2020-06-01 2020-07-10 深圳市智绘科技有限公司 Real-time positioning method, device, system and storage medium

Also Published As

Publication number Publication date
CN111983639A (en) 2020-11-24

Similar Documents

Publication Publication Date Title
CN111983639B (en) Multi-sensor SLAM method based on Multi-Camera/Lidar/IMU
US20230260151A1 (en) Simultaneous Localization and Mapping Method, Device, System and Storage Medium
CN110009739B (en) Method for extracting and coding motion characteristics of digital retina of mobile camera
CN110033489B (en) Method, device and equipment for evaluating vehicle positioning accuracy
CN109029433B (en) Method for calibrating external parameters and time sequence based on vision and inertial navigation fusion SLAM on mobile platform
CN111882612B (en) Vehicle multi-scale positioning method based on three-dimensional laser detection lane line
CN111024066B (en) Unmanned aerial vehicle vision-inertia fusion indoor positioning method
CN109509230A (en) A kind of SLAM method applied to more camera lens combined type panorama cameras
CN103886107B (en) Robot localization and map structuring system based on ceiling image information
CN110044354A (en) A kind of binocular vision indoor positioning and build drawing method and device
CN111862673B (en) Parking lot vehicle self-positioning and map construction method based on top view
CN113706626B (en) Positioning and mapping method based on multi-sensor fusion and two-dimensional code correction
CN111830953A (en) Vehicle self-positioning method, device and system
CN113658337B (en) Multi-mode odometer method based on rut lines
CN112734765A (en) Mobile robot positioning method, system and medium based on example segmentation and multi-sensor fusion
CN112669354A (en) Multi-camera motion state estimation method based on vehicle incomplete constraint
CN110827353A (en) Robot positioning method based on monocular camera assistance
CN115936029A (en) SLAM positioning method and device based on two-dimensional code
CN114413958A (en) Monocular vision distance and speed measurement method of unmanned logistics vehicle
CN112945233A (en) Global drift-free autonomous robot simultaneous positioning and map building method
CN116804553A (en) Odometer system and method based on event camera/IMU/natural road sign
CN116468786A (en) Semantic SLAM method based on point-line combination and oriented to dynamic environment
CN112432653B (en) Monocular vision inertial odometer method based on dotted line characteristics
CN114234967A (en) Hexapod robot positioning method based on multi-sensor fusion
CN113011212A (en) Image recognition method and device and vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240409

Address after: Room 509-2, 5th Floor, Building 1, Dingchuang Wealth Center, Cangqian Street, Yuhang District, Hangzhou City, Zhejiang Province, 311100

Patentee after: Hangzhou Yixun Technology Co.,Ltd.

Country or region after: China

Address before: Room 303-5, block B, building 1, 268 Shiniu Road, nanmingshan street, Liandu District, Lishui City, Zhejiang Province 323000

Patentee before: Zhejiang Guangpo Intelligent Technology Co.,Ltd.

Country or region before: China