CN113763549A - Method, device and storage medium for simultaneous positioning and mapping by fusing laser radar and IMU - Google Patents

Method, device and storage medium for simultaneous positioning and mapping by fusing laser radar and IMU

Info

Publication number
CN113763549A
CN113763549A (application CN202110953344.4A)
Authority
CN
China
Prior art keywords
imu
map
sub
initial
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110953344.4A
Other languages
Chinese (zh)
Other versions
CN113763549B (en)
Inventor
张林
王忠
沈莹
赵生捷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
Original Assignee
Tongji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University filed Critical Tongji University
Priority to CN202110953344.4A priority Critical patent/CN113763549B/en
Publication of CN113763549A publication Critical patent/CN113763549A/en
Application granted granted Critical
Publication of CN113763549B publication Critical patent/CN113763549B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/06Topological mapping of higher dimensional structures onto lower dimensional surfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Navigation (AREA)

Abstract

The invention relates to a method, a device and a storage medium for simultaneous positioning and mapping by fusing a laser radar and an IMU. After the laser radar point cloud data and the IMU data are preprocessed, the system is initialized and the preprocessed point cloud is registered with the existing map by a direct method; the relevant parameters are calculated to construct a factor graph, yielding an initial 3D sub-map. The 3D sub-map is projected to a 2D sub-map, feature points are extracted, 2D rigid-body transformation parameters are calculated, and an initial 3D coordinate value is derived; the 3D coordinates are then continuously updated by iterative optimization until the 3D coordinates and the 3D map meeting the accuracy requirement are output. Compared with the prior art, the method registers the point cloud directly against the existing map and refines the result through subsequent iterations, and has the advantages of adaptability to multi-radar systems, high detection efficiency, and the like.

Description

Method, device and storage medium for simultaneous positioning and mapping by fusing laser radar and IMU
Technical Field
The invention relates to the field of simultaneous positioning and mapping, in particular to a method, a device and a storage medium for simultaneous positioning and mapping by fusing a laser radar and an IMU.
Background
Simultaneous localization and mapping (SLAM) is a fundamental technology in autonomy-related fields such as autonomous driving and robotics: it constructs an environmental map while providing real-time pose estimation for the carrier, enabling higher-level applications such as autonomous navigation and path planning. In complex environments, however, obstructions such as tall buildings and trees often render GPS and similar systems unusable, so SLAM systems rely primarily on active sensors such as laser radar (LiDAR) and an IMU. Existing LiDAR-IMU fusion schemes use the raw IMU measurements directly in the computation, which sacrifices real-time performance and accuracy; moreover, feature-point-based point cloud registration is tightly coupled to the point acquisition pattern of a specific LiDAR sensor and is hard to extend to multi-radar systems. Current simultaneous positioning and mapping methods therefore still suffer from limited accuracy and adaptability.
Disclosure of Invention
The present invention aims to overcome the above defects of the prior art by providing a method, an apparatus and a storage medium for simultaneous positioning and mapping fusing a laser radar and an IMU, so as to solve the problems of low accuracy and poor adaptability of current positioning and mapping methods.
The purpose of the invention can be realized by the following technical scheme:
a simultaneous positioning and mapping method fusing a laser radar and an IMU comprises the following steps:
and S1, acquiring the point cloud data and the IMU data, and preprocessing the point cloud data and the IMU data.
S2, initializing a positioning and mapping system by using the preprocessed point cloud data and IMU data, registering the preprocessed point cloud data with the existing map according to the initialized positioning and mapping system to obtain an initial registration value, and calculating the current gravity vector, the initial attitude and an IMU integral value.
And S3, constructing a factor graph model according to the gravity vector, the registration value and the IMU pre-integration.
And S4, inputting the preprocessed point cloud data into the factor graph model to obtain real-time 3D carrier coordinates, and generating an initial 3D sub-map according to the 3D carrier coordinates and the preprocessed point cloud data.
And S5, projecting the initial 3D sub-map in the horizontal direction according to the initial posture to obtain a test 2D sub-map.
And S6, extracting the feature points of the test 2D sub-map, matching the feature points of the test 2D sub-map with the feature points of the existing 2D sub-map through a positioning map building system, and calculating 2D rigid body transformation parameters according to the matched feature points when the number of the matched feature points exceeds a number threshold.
S7, calculating to obtain a 3D coordinate initial value according to the preprocessed point cloud data, the initial 3D sub-map and the 2D rigid body transformation parameters, performing sliding registration in the Z direction by taking the 3D coordinate initial value as a starting point to obtain a progressive registration value, and calculating 3D coordinate conversion parameters according to the progressive registration value; and establishing a pose graph according to the 3D coordinate conversion parameters, and obtaining a further attitude. (ii) a
S8, judging that the difference of the two adjacent 3D coordinate conversion parameters is smaller than a parameter threshold value, if so, executing a step S10; if not, step S9 is executed.
And S9, keeping the original 3D sub-map unchanged, remapping the original 3D sub-map according to the advanced attitude to obtain a new 2D sub-map, and executing the step S6.
And S10, taking the 3D coordinates corresponding to the current 3D coordinate conversion parameters as final 3D coordinates, importing the final 3D coordinates into the initial 3D sub-map, and generating the final 3D sub-map.
Further, the initialization in step S2 includes static initialization or dynamic initialization.
Further, the static initialization includes: calculating a rotation matrix between the average of the IMU acceleration measurements and the gravity vector of the world coordinate system to determine the initial attitude, setting the average of the IMU angular velocity measurements as the initial IMU bias value, and setting the IMU velocity and coordinates to 0.
Further, the dynamic initialization includes: establishing a starting-frame coordinate system, and calculating the point cloud relative rotation values and pre-integrated relative rotation values to obtain the initial IMU bias value; calculating the point cloud relative displacement values and pre-integrated relative displacement values to obtain the IMU velocity, coordinates, initial attitude and initial gravity vector, and converting all values into the world coordinate system according to the initial gravity vector.
Further, the feature points of the test 2D sub-map in step S6 are SURF feature points.
Further, the preprocessing in step S1 includes: pre-integrating the IMU measurements whose timestamps are earlier than the main radar timestamp; taking the main radar timestamp as reference, collecting all laser points in the auxiliary radars whose sampling timestamps fall within the interval and reordering them by timestamp; and converting each laser point into the IMU coordinate system, calculating the relative motion and performing motion distortion correction to obtain the preprocessed point cloud data.
Further, the number threshold in step S6 is in the range of 4 to 6.
Further, in step S1 the point cloud data and the IMU data are acquired by a plurality of sixteen-line laser radars and an IMU.
A simultaneous positioning and mapping device fusing a laser radar and an IMU (Inertial Measurement Unit) comprises a memory and a processor; the memory is configured to store a computer program; the processor, when executing the computer program, implements the method for simultaneous positioning and mapping fusing a laser radar and an IMU described above.
A computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the method for simultaneous positioning and mapping fusing a laser radar and an IMU described above.
Compared with the prior art, the invention has the following advantages:
1. The preprocessed point cloud is registered directly with the existing map, so the system is more adaptable, reflects parameter changes in real time, and is easy to extend to multi-radar input. The 3D sub-map is projected to a 2D sub-map before the feature points are extracted and the 2D rigid-body transformation parameters are calculated, which improves detection efficiency and yields the initial coordinates; the coordinates are finally refined by cyclic iteration, making the result more accurate.
2. Both static initialization and dynamic initialization are supported, giving a wide application range; a world coordinate system is introduced during initialization, which allows integration with applications that use that coordinate system.
Drawings
FIG. 1 is a schematic flow chart of the present invention.
FIG. 2 is a schematic diagram of processing feature points of a 2D sub-map according to the present invention.
FIG. 3 is a schematic diagram illustrating the spatial relationship conversion from a 2D sub-map to a 3D sub-map.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation manner and a specific operation process are given, but the scope of the present invention is not limited to the following embodiments.
This embodiment provides a method for simultaneous positioning and mapping fusing a laser radar and an IMU; the flow is shown in FIG. 1 and detailed as follows:
s1, point cloud data and IMU data of the multi-radar system are obtained and preprocessed: and pre-integrating IMU readings with timestamps less than the time stamp of the main radar to obtain pre-integrated measurement values and initial estimation values of relative motion. And taking the time stamps of the laser points in all the auxiliary radars as reference, taking all the points of which the sampling time stamps of the laser points in all the auxiliary radars fall in the interval, and reordering the points according to the time stamps to obtain the point cloud which grows according to the time sequence. And (4) carrying out seat standard conversion on each radar point to an IMU coordinate system. And calculating relative motion of each radar point from the IMU integral value according to the timestamp of the radar point, and performing motion distortion correction on the radar point to obtain the point cloud data subjected to distortion correction under the main radar timestamp.
Step S2, the positioning and mapping system is initialized with the preprocessed point cloud data and IMU data; the initialization can be either static or dynamic. Static initialization: the average of the IMU acceleration measurements is aligned with the gravity vector of the world coordinate system, the rotation matrix between the two vectors is calculated to determine the IMU attitude at the initial moment, and the average of the IMU angular velocity measurements is taken as the initial value of the IMU gyroscope bias; the position and velocity of the IMU are initialized to 0. Dynamic initialization: the multi-frame point clouds within a window are registered with the Normal Distributions Transform (NDT) algorithm and the relative motion between them is calculated, and the IMU pre-integration values between adjacent frames are computed in the coordinate system of the window's initial frame. The IMU gyroscope bias is solved jointly from the point cloud relative rotations and the pre-integrated relative rotations, and the IMU position, velocity, attitude and gravity vector within the initial time window are solved jointly from the point cloud relative displacements and the pre-integrated relative displacements. The state values of all frames in the window are then converted into the world coordinate system according to the solved gravity vector and states. Finally, the preprocessed point cloud data are registered with the existing map according to the initialized system to obtain an initial registration value, and the current gravity vector, the initial attitude and the IMU integral value are calculated using data from at least the past 3 frames.
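As a hedged illustration of the static branch, the sketch below recovers the initial attitude from the averaged accelerometer reading and the world gravity vector with SciPy; variable names and the gravity constant are assumptions, and heading about the gravity axis remains unobservable, as with any accelerometer-only alignment.

```python
import numpy as np
from scipy.spatial.transform import Rotation

GRAVITY_W = np.array([0.0, 0.0, -9.81])  # world-frame gravity (assumed -Z)

def static_init(accels: np.ndarray, gyros: np.ndarray):
    """accels, gyros: (N, 3) stacked IMU measurements taken at rest."""
    a_mean = accels.mean(axis=0)
    # At rest the accelerometer measures -gravity in the body frame, so the
    # rotation mapping -a_mean onto GRAVITY_W is the initial attitude R_wb.
    R_wb, _ = Rotation.align_vectors(GRAVITY_W[None, :], -a_mean[None, :])
    gyro_bias = gyros.mean(axis=0)  # at rest, the mean gyro reading = bias
    velocity = np.zeros(3)          # per the patent, velocity starts at 0
    position = np.zeros(3)          # ... and so do the coordinates
    return R_wb, gyro_bias, velocity, position
```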
Step S3, a factor graph model is constructed from the gravity vector, the registration value and the IMU pre-integration, and parameters such as the IMU bias are updated.
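One way to realize such a factor graph is sketched below with GTSAM, an assumed library choice since the patent names none. The registration value enters as a relative-pose factor and the IMU pre-integration as an ImuFactor through a bias state, which is what lets the optimizer update parameters such as the IMU bias; all noise values and placeholder inputs are illustrative.

```python
import numpy as np
import gtsam
from gtsam.symbol_shorthand import B, V, X  # bias, velocity, pose keys

# Placeholder inputs: 10 stationary IMU samples at 100 Hz, and an identity
# relative pose standing in for the direct-method registration value.
imu_window = [(np.array([0.0, 0.0, 9.81]), np.zeros(3), 0.01)] * 10
T_registration = gtsam.Pose3()

params = gtsam.PreintegrationParams.MakeSharedU(9.81)  # gravity along -Z
params.setAccelerometerCovariance(np.eye(3) * 1e-3)
params.setGyroscopeCovariance(np.eye(3) * 1e-4)
params.setIntegrationCovariance(np.eye(3) * 1e-8)

bias0 = gtsam.imuBias.ConstantBias()
pim = gtsam.PreintegratedImuMeasurements(params, bias0)
for accel, gyro, dt in imu_window:
    pim.integrateMeasurement(accel, gyro, dt)

graph = gtsam.NonlinearFactorGraph()
# Priors anchor the first state so the problem is well-posed.
graph.add(gtsam.PriorFactorPose3(
    X(0), gtsam.Pose3(), gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 1e-3))))
graph.add(gtsam.PriorFactorVector(
    V(0), np.zeros(3), gtsam.noiseModel.Diagonal.Sigmas(np.full(3, 1e-3))))
graph.add(gtsam.PriorFactorConstantBias(
    B(0), bias0, gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 1e-2))))
# IMU pre-integration factor linking consecutive pose/velocity states.
graph.add(gtsam.ImuFactor(X(0), V(0), X(1), V(1), B(0), pim))
# Scan-to-map registration value as a relative-pose constraint.
graph.add(gtsam.BetweenFactorPose3(
    X(0), X(1), T_registration,
    gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 0.05))))

values = gtsam.Values()
values.insert(X(0), gtsam.Pose3()); values.insert(X(1), gtsam.Pose3())
values.insert(V(0), np.zeros(3));   values.insert(V(1), np.zeros(3))
values.insert(B(0), bias0)
result = gtsam.LevenbergMarquardtOptimizer(graph, values).optimize()
print(result.atPose3(X(1)))  # optimized 3D carrier pose
```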
Step S4, the preprocessed point cloud data are input into the factor graph model to obtain real-time 3D carrier coordinates, and an initial 3D sub-map is generated from the 3D carrier coordinates and the preprocessed point cloud data.
Step S5, the initial 3D sub-map is projected in the horizontal direction according to the initial attitude to obtain a test 2D sub-map.
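A minimal sketch of the horizontal projection: the sub-map is gravity-aligned with the initial attitude and flattened along Z into an occupancy image from which 2D features can later be extracted. Resolution and image size are illustrative assumptions.

```python
import numpy as np

def project_submap(points_3d: np.ndarray, R_wb: np.ndarray,
                   res: float = 0.1, size: int = 512) -> np.ndarray:
    """points_3d: (N, 3) sub-map points; R_wb: 3x3 initial attitude."""
    pts = points_3d @ R_wb.T  # rotate so Z coincides with the gravity axis
    img = np.zeros((size, size), dtype=np.uint8)
    ij = np.floor(pts[:, :2] / res).astype(int) + size // 2
    keep = (ij >= 0).all(axis=1) & (ij < size).all(axis=1)
    img[ij[keep, 1], ij[keep, 0]] = 255  # height is discarded by projection
    return img
```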
Step S6, the SURF feature points of the test 2D sub-map are extracted and their feature descriptors are computed, then matched with the feature points of the existing 2D sub-maps, as shown in FIG. 2. When the number of matched feature points exceeds 5, a loop closure is determined to exist between the two sub-maps, and the 2D rigid-body transformation parameters are calculated from the feature points of the loop-closed sub-maps.
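The matching step might look like the following OpenCV sketch. Two caveats: SURF lives in opencv-contrib's xfeatures2d module and requires a non-free build, and estimateAffinePartial2D returns a 4-DoF similarity whose scale stays near 1 when both sub-map images share the same resolution, so here it stands in for the 2D rigid-body transform. The threshold of 5 matches follows this embodiment.

```python
import cv2
import numpy as np

def match_submaps(img_test: np.ndarray, img_ref: np.ndarray):
    """Return a 2x3 [R | t] transform (pixel units), or None if no loop."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp1, des1 = surf.detectAndCompute(img_test, None)
    kp2, des2 = surf.detectAndCompute(img_ref, None)
    if des1 is None or des2 is None:
        return None
    pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]  # ratio
    if len(good) <= 5:  # number threshold: no loop closure detected
        return None
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    T, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return T
```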
Step S7, as shown in FIG. 3, an initial 3D coordinate value is calculated from the spatial relationship between the original frame and the initial 3D sub-map together with the 2D rigid-body transformation parameters; sliding registration is performed in the Z direction starting from the initial 3D coordinate value to obtain a refined registration value, and the 3D coordinate conversion parameters are calculated from it by local iterative optimization; a pose graph is established from the 3D coordinate conversion parameters and jointly optimized to obtain a refined attitude.
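The Z-direction sliding registration can be sketched with Open3D ICP, an assumed library choice: seed the registration at several vertical offsets around the 2D-derived initial value and keep the best-fitting result, which recovers the height component that the 2D loop match cannot observe.

```python
import numpy as np
import open3d as o3d

def slide_register_z(src: o3d.geometry.PointCloud,
                     tgt: o3d.geometry.PointCloud,
                     T_init: np.ndarray,
                     z_range: float = 2.0, step: float = 0.25) -> np.ndarray:
    """T_init: 4x4 initial transform built from the 2D rigid parameters."""
    best_T, best_fit = T_init, -1.0
    for dz in np.arange(-z_range, z_range + step, step):
        T0 = T_init.copy()
        T0[2, 3] += dz  # slide the ICP seed along Z
        reg = o3d.pipelines.registration.registration_icp(src, tgt, 0.5, T0)
        if reg.fitness > best_fit:
            best_fit, best_T = reg.fitness, reg.transformation
    return best_T  # refined registration value
```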
Step S8, judge whether the difference between two consecutive sets of 3D coordinate conversion parameters is smaller than the parameter threshold; if so, execute step S10; if not, execute step S9.
Step S9, the original 3D sub-map is kept unchanged and remapped according to the refined attitude to obtain a new 2D sub-map, and step S6 is executed again (updating the values of the 3D conversion coordinates and the refined attitude).
Step S10, the 3D coordinates corresponding to the current 3D coordinate conversion parameters are taken as the final 3D coordinates and imported into the initial 3D sub-map to generate the final 3D sub-map.
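Finally, a compact sketch tying steps S6 to S10 together, reusing the illustrative helpers from the sketches above (so every name here is an assumption): iterate 2D matching and Z-sliding registration until two consecutive sets of 3D coordinate conversion parameters differ by less than the threshold, then accept the coordinates as final.

```python
import numpy as np

def refine_until_converged(submap_3d, map_3d, img_ref, R_init, T_init,
                           eps: float = 1e-3, max_iter: int = 20):
    """submap_3d, map_3d: Open3D point clouds; img_ref: existing 2D sub-map."""
    R_wb, T_prev = R_init, T_init
    for _ in range(max_iter):
        img = project_submap(np.asarray(submap_3d.points), R_wb)  # S5 / S9
        T2d = match_submaps(img, img_ref)                         # S6
        if T2d is None:
            break  # too few matched features: no loop closure this round
        # In a full version, T2d would seed the XY/yaw part of T_prev (S7).
        T_cur = slide_register_z(submap_3d, map_3d, T_prev)       # S7
        if np.linalg.norm(T_cur - T_prev) < eps:                  # S8
            return T_cur  # S10: accept as the final 3D coordinates
        T_prev = T_cur
        R_wb = T_cur[:3, :3] @ R_wb  # S9: remap with the refined attitude
    return T_prev
```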
This embodiment also provides a simultaneous positioning and mapping device fusing a laser radar and an IMU, comprising a memory and a processor; the memory is configured to store a computer program; the processor, when executing the computer program, implements the method for simultaneous positioning and mapping fusing a laser radar and an IMU described above.
This embodiment also provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps of the method for simultaneous positioning and mapping fusing a laser radar and an IMU described above.
The foregoing describes the preferred embodiments of the invention in detail. It should be understood that numerous modifications and variations can be devised by those skilled in the art in light of the present teachings without departing from the inventive concept. Therefore, technical solutions obtainable by those skilled in the art through logical analysis, reasoning or limited experimentation based on the prior art and the concept of the present invention shall fall within the scope of protection defined by the claims.

Claims (10)

1. A simultaneous positioning and mapping method fusing a laser radar and an IMU, characterized by comprising the following steps:
S1, acquiring point cloud data and IMU data, and preprocessing the point cloud data and the IMU data;
S2, initializing a positioning and mapping system with the preprocessed point cloud data and IMU data, registering the preprocessed point cloud data with the existing map according to the initialized positioning and mapping system to obtain an initial registration value, and calculating a current gravity vector, an initial attitude and an IMU integral value;
S3, constructing a factor graph model from the gravity vector, the registration value and the IMU pre-integration;
S4, inputting the preprocessed point cloud data into the factor graph model to obtain real-time 3D carrier coordinates, and generating an initial 3D sub-map from the 3D carrier coordinates and the preprocessed point cloud data;
S5, projecting the initial 3D sub-map in the horizontal direction according to the initial attitude to obtain a test 2D sub-map;
S6, extracting feature points of the test 2D sub-map, matching them with the feature points of the existing 2D sub-maps through the positioning and mapping system, and, when the number of matched feature points exceeds a number threshold, calculating 2D rigid-body transformation parameters from the matched feature points;
S7, calculating an initial 3D coordinate value from the preprocessed point cloud data, the initial 3D sub-map and the 2D rigid-body transformation parameters; performing sliding registration in the Z direction starting from the initial 3D coordinate value to obtain a refined registration value, and calculating 3D coordinate conversion parameters from the refined registration value; and establishing a pose graph from the 3D coordinate conversion parameters to obtain a refined attitude;
S8, judging whether the difference between two consecutive sets of 3D coordinate conversion parameters is smaller than a parameter threshold; if so, executing step S10; if not, executing step S9;
S9, keeping the original 3D sub-map unchanged, remapping it according to the refined attitude to obtain a new 2D sub-map, and returning to step S6;
S10, taking the 3D coordinates corresponding to the current 3D coordinate conversion parameters as the final 3D coordinates, importing them into the initial 3D sub-map, and generating the final 3D sub-map.
2. The method of claim 1, wherein the initialization in step S2 comprises static initialization or dynamic initialization.
3. The method of claim 2, wherein the static initialization comprises: calculating a rotation matrix between the average of the IMU acceleration measurements and the gravity vector of the world coordinate system to determine the initial attitude, setting the average of the IMU angular velocity measurements as the initial IMU bias value, and setting the IMU velocity and coordinates to 0.
4. The method of claim 2, wherein the dynamic initialization comprises: establishing a starting-frame coordinate system, and calculating the point cloud relative rotation values and pre-integrated relative rotation values to obtain the initial IMU bias value; and calculating the point cloud relative displacement values and pre-integrated relative displacement values to obtain the IMU velocity, coordinates, initial attitude and initial gravity vector, and converting all values into the world coordinate system according to the initial gravity vector.
5. The method for simultaneous positioning and mapping fusing a laser radar and an IMU according to claim 1, wherein the feature points of the test 2D sub-map in step S6 are SURF feature points.
6. The method for simultaneous positioning and mapping fusing a laser radar and an IMU according to claim 1, wherein the preprocessing in step S1 comprises: pre-integrating the IMU measurements whose timestamps are earlier than the main radar timestamp; taking the main radar timestamp as reference, collecting all laser points in the auxiliary radars whose sampling timestamps fall within the interval and reordering them by timestamp; and converting each laser point into the IMU coordinate system, calculating the relative motion and performing motion distortion correction to obtain the preprocessed point cloud data.
7. The method for simultaneous positioning and mapping fusing a laser radar and an IMU according to claim 1, wherein the number threshold in step S6 ranges from 4 to 6.
8. The method for simultaneous positioning and mapping fusing a laser radar and an IMU according to claim 1, wherein in step S1 the point cloud data and the IMU data are acquired by a plurality of sixteen-line laser radars and an IMU.
9. A simultaneous positioning and mapping device fusing a laser radar and an IMU, characterized by comprising a memory and a processor; the memory is configured to store a computer program; the processor, when executing the computer program, is configured to implement the method for simultaneous positioning and mapping fusing a laser radar and an IMU according to any one of claims 1 to 8.
10. A computer-readable storage medium having stored thereon a computer program, characterized in that the computer program, when executed by a processor, carries out the steps of the method for simultaneous positioning and mapping fusing a laser radar and an IMU according to any one of claims 1 to 8.
CN202110953344.4A 2021-08-19 2021-08-19 Simultaneous positioning and mapping method and device integrating laser radar and IMU and storage medium Active CN113763549B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110953344.4A CN113763549B (en) 2021-08-19 2021-08-19 Simultaneous positioning and mapping method and device integrating laser radar and IMU and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110953344.4A CN113763549B (en) 2021-08-19 2021-08-19 Simultaneous positioning and mapping method and device integrating laser radar and IMU and storage medium

Publications (2)

Publication Number Publication Date
CN113763549A true CN113763549A (en) 2021-12-07
CN113763549B CN113763549B (en) 2023-07-07

Family

ID=78790411

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110953344.4A Active CN113763549B (en) 2021-08-19 2021-08-19 Simultaneous positioning and mapping method and device integrating laser radar and IMU and storage medium

Country Status (1)

Country Link
CN (1) CN113763549B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114046792A (en) * 2022-01-07 2022-02-15 陕西欧卡电子智能科技有限公司 Unmanned ship water surface positioning and mapping method, device and related components
CN114593724A (en) * 2022-01-21 2022-06-07 北京邮电大学 Cluster fusion positioning method and device
CN116878488A (en) * 2023-09-07 2023-10-13 湘江实验室 Picture construction method and device, storage medium and electronic device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190235083A1 (en) * 2016-03-11 2019-08-01 Kaarta, Inc. Laser scanner with real-time, online ego-motion estimation
CN111207774A (en) * 2020-01-17 2020-05-29 山东大学 Method and system for laser-IMU external reference calibration
WO2020155616A1 (en) * 2019-01-29 2020-08-06 浙江省北大信息技术高等研究院 Digital retina-based photographing device positioning method
CN113066105A (en) * 2021-04-02 2021-07-02 北京理工大学 Positioning and mapping method and system based on fusion of laser radar and inertial measurement unit

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190235083A1 (en) * 2016-03-11 2019-08-01 Kaarta, Inc. Laser scanner with real-time, online ego-motion estimation
WO2020155616A1 (en) * 2019-01-29 2020-08-06 浙江省北大信息技术高等研究院 Digital retina-based photographing device positioning method
CN111207774A (en) * 2020-01-17 2020-05-29 山东大学 Method and system for laser-IMU external reference calibration
CN113066105A (en) * 2021-04-02 2021-07-02 北京理工大学 Positioning and mapping method and system based on fusion of laser radar and inertial measurement unit

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHANG Jie; ZHOU Jun: "Research on laser and visual mapping of mobile robots with an improved ICP algorithm", Journal of Mechanical & Electrical Engineering (机电工程), no. 12 *
ZHANG Lizhi; CHEN Diansheng; LIU Weihui: "Indoor navigation method for nursing robots based on hybrid maps", Journal of Beijing University of Aeronautics and Astronautics (北京航空航天大学学报), no. 05 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114046792A (en) * 2022-01-07 2022-02-15 陕西欧卡电子智能科技有限公司 Unmanned ship water surface positioning and mapping method, device and related components
CN114593724A (en) * 2022-01-21 2022-06-07 北京邮电大学 Cluster fusion positioning method and device
CN116878488A (en) * 2023-09-07 2023-10-13 湘江实验室 Picture construction method and device, storage medium and electronic device
CN116878488B (en) * 2023-09-07 2023-11-28 湘江实验室 Picture construction method and device, storage medium and electronic device

Also Published As

Publication number Publication date
CN113763549B (en) 2023-07-07

Similar Documents

Publication Publication Date Title
CN112347840B (en) Vision sensor laser radar integrated unmanned aerial vehicle positioning and image building device and method
CN110070615B (en) Multi-camera cooperation-based panoramic vision SLAM method
CN109307508B (en) Panoramic inertial navigation SLAM method based on multiple key frames
CN112634451B (en) Outdoor large-scene three-dimensional mapping method integrating multiple sensors
CN113066105A (en) Positioning and mapping method and system based on fusion of laser radar and inertial measurement unit
CN113763549B (en) Simultaneous positioning and mapping method and device integrating laser radar and IMU and storage medium
CN109522832B (en) Loop detection method based on point cloud segment matching constraint and track drift optimization
CN112987065B (en) Multi-sensor-integrated handheld SLAM device and control method thereof
CN114526745B (en) Drawing construction method and system for tightly coupled laser radar and inertial odometer
WO2020063878A1 (en) Data processing method and apparatus
CN111366153B (en) Positioning method for tight coupling of laser radar and IMU
CN112781594A (en) Laser radar iteration closest point improvement algorithm based on IMU coupling
CN112967392A (en) Large-scale park mapping and positioning method based on multi-sensor contact
CN114013449B (en) Data processing method and device for automatic driving vehicle and automatic driving vehicle
CN115272596A (en) Multi-sensor fusion SLAM method oriented to monotonous texture-free large scene
CN114323033B (en) Positioning method and equipment based on lane lines and feature points and automatic driving vehicle
CN113188557B (en) Visual inertial integrated navigation method integrating semantic features
CN113933818A (en) Method, device, storage medium and program product for calibrating laser radar external parameter
CN116429116A (en) Robot positioning method and equipment
CN116577801A (en) Positioning and mapping method and system based on laser radar and IMU
CN115950414A (en) Adaptive multi-fusion SLAM method for different sensor data
CN113379915B (en) Driving scene construction method based on point cloud fusion
Hu et al. Efficient Visual-Inertial navigation with point-plane map
CN113960614A (en) Elevation map construction method based on frame-map matching
Liu et al. Laser 3D tightly coupled mapping method based on visual information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant