CN117554984A - Single-line laser radar indoor SLAM positioning method and system based on image understanding - Google Patents


Info

Publication number
CN117554984A
Authority
CN
China
Prior art keywords
slam
data
laser radar
module
indoor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311474781.3A
Other languages
Chinese (zh)
Inventor
Wang Bo (王波)
Chen Zongren (陈宗仁)
Wang Ruijie (王瑞杰)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Institute of Science and Technology
Original Assignee
Guangdong Institute of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Institute of Science and Technology filed Critical Guangdong Institute of Science and Technology
Priority to CN202311474781.3A
Publication of CN117554984A
Legal status: Pending

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/206 Instruments for performing navigational calculations specially adapted for indoor navigation
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a single-line laser radar indoor SLAM positioning method and system based on image understanding, belonging to the technical field of autonomous robot navigation and comprising the following steps: S1, selecting an experimental platform and designing the robot hardware; S2, collecting and preprocessing the sensor input data; S3, implementing local SLAM mapping; S4, implementing global SLAM mapping to complete indoor SLAM positioning. By capturing images and depth information with an RGB-D camera, the invention extracts semantic features from the images and constructs a visual map of the environment, which aids the registration of successive frames acquired by the single-line laser radar. Accurate and stable positioning and mapping are thereby achieved in indoor environments; the method outperforms conventional single-line laser radar SLAM methods that lack image interpretation, fuses a laser SLAM algorithm with image-derived understanding of environment depth information, and offers broader applicability.

Description

Single-line laser radar indoor SLAM positioning method and system based on image understanding
Technical Field
The invention relates to the technical field of autonomous navigation of robots, in particular to a single-line laser radar indoor SLAM positioning method and system based on image understanding.
Background
Simultaneous localization and mapping (SLAM) has been widely used in fields such as robotics, autonomous driving, and virtual reality, and indoor SLAM is particularly important for applications such as indoor navigation, building inspection, and rescue tasks. The single-line laser radar has become a commonly used sensor in indoor SLAM because of its low cost and high precision. However, its limited field of view makes it difficult to obtain a comprehensive three-dimensional point cloud. In addition, conventional single-line lidar SLAM methods are prone to drift and require frequent loop-closure correction.
SLAM algorithms are diverse, and different solutions exist for different problems. Nonlinear optimization and multithreading have been applied in SLAM and successfully solve the real-time pose estimation problem, but such algorithms lack loop detection and are suitable only for building small-scale maps. A low-drift SLAM method based on point-line-plane pose estimation for indoor scenes solves the mapping problem of low-texture indoor scenes, but its recognition rate remains low in complex scenes. The LiLi-OM solid-state lidar-inertial odometry SLAM scheme achieves tightly coupled mapping of solid-state and mechanical lidars and designs a novel feature-extraction method for the irregular, unique scanning pattern of the Livox Horizon, improving feature extraction, tracking, and mapping accuracy in complex scenes. However, the Livox Horizon is not inexpensive.
Analysis of the existing lidar SLAM positioning algorithms and techniques shows that semantic SLAM methods give a SLAM system a higher level of scene understanding, but they still face certain limitations in practical applications.
To address these problems, the invention provides a single-line laser radar indoor SLAM positioning method and system based on image understanding.
Disclosure of Invention
The invention provides a single-line laser radar indoor SLAM positioning method and system based on image understanding that solve the above problems.
The invention provides the following technical scheme:
An indoor SLAM positioning method for a single-line laser radar based on image understanding comprises the following steps:
S1, selecting an experimental platform and designing the robot hardware;
S2, collecting and preprocessing the sensor input data;
S3, implementing local SLAM mapping;
S4, implementing global SLAM mapping to complete indoor SLAM positioning.
In one possible design, step S2 specifically comprises the following (an illustrative sketch of the voxel filtering follows these steps):
S2.1, preprocessing the laser ranging data acquired by the single-line laser radar with a voxel filter and an adaptive voxel filter, and feeding the result into the local SLAM module;
S2.2, feeding the image frames captured by an RGB-D camera into a YOLOv5 target detection network, detecting the a priori dynamic targets frame by frame, extracting semantic features, and outputting an accurate camera pose estimate;
S2.3, preprocessing the IMU data with an inertial tracker, and sending the result, together with the odometer pose data and the camera pose estimate, into a pose extrapolator.
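For illustration only, the following minimal Python sketch shows one way the voxel filtering of step S2.1 could behave. It is a simplified stand-in for the voxel and adaptive voxel filters named above, not the claimed implementation; all function names and parameter values are illustrative assumptions.

```python
import numpy as np

def voxel_filter(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Keep one representative point per occupied voxel; points is an (N, 2) array."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, first_idx = np.unique(keys, axis=0, return_index=True)  # first hit per voxel
    return points[np.sort(first_idx)]

def adaptive_voxel_filter(points: np.ndarray, min_points: int = 200,
                          max_voxel: float = 2.0) -> np.ndarray:
    """Shrink the voxel size until at least min_points survive, loosely modelled
    on Cartographer's adaptive voxel filter (illustrative re-implementation)."""
    voxel = max_voxel
    filtered = voxel_filter(points, voxel)
    while len(filtered) < min_points and voxel > 0.01:
        voxel *= 0.5                      # finer grid keeps more structure
        filtered = voxel_filter(points, voxel)
    return filtered
```

The fixed-size pass bounds the point count fed to scan matching, while the adaptive pass preserves enough structure when a scan is sparse.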
In one possible design, step S3 specifically comprises the following (a minimal extrapolation sketch follows these steps):
S3.1, computing a pose estimate from the odometer and IMU data;
S3.2, using the pose estimate as an initial guess, matching the laser radar data and updating the pose estimator;
S3.3, after each frame of laser radar data passes the motion filter, accumulating and combining the frames into a sub-map.
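As a sketch of how steps S3.1 and S3.2 could obtain their initial guess, the fragment below dead-reckons a 2D pose from the odometer's linear speed and the IMU's yaw rate. It is a hypothetical, simplified stand-in for the pose extrapolator; the names and the constant-velocity assumption are illustrative.

```python
import math
from dataclasses import dataclass

@dataclass
class Pose2D:
    x: float
    y: float
    theta: float  # heading in radians

def extrapolate_pose(last: Pose2D, odom_v: float,
                     imu_yaw_rate: float, dt: float) -> Pose2D:
    """Constant-velocity extrapolation: odometry supplies linear speed,
    the IMU supplies yaw rate; the result seeds the scan matcher."""
    theta = last.theta + imu_yaw_rate * dt
    x = last.x + odom_v * dt * math.cos(theta)
    y = last.y + odom_v * dt * math.sin(theta)
    return Pose2D(x, y, theta)
```

The returned pose only seeds the scan matcher; per S3.2, the matched pose then overwrites it.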
In one possible design, step S4 comprises the following (a toy pose-graph sketch follows these steps):
S4.1, inserting the new scan frame, together with all previous scan frames, into the completed sub-maps;
S4.2, performing loop detection with the branch-and-bound optimization algorithm;
S4.3, computing constraint relations among the poses based on the detected loop constraints;
S4.4, finally, optimizing all constraints with a pose optimization algorithm to obtain a more accurate pose estimate.
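Steps S4.3 and S4.4 can be pictured as a small 2D pose-graph problem. The toy sketch below optimizes three poses under sequential odometry constraints plus one loop constraint, with SciPy's least_squares standing in for the Ceres-based solver named later in the description; all numbers and names are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def relative_pose(xi, xj):
    """Pose of node j expressed in the frame of node i; poses are [x, y, theta]."""
    dx, dy = xj[0] - xi[0], xj[1] - xi[1]
    c, s = np.cos(xi[2]), np.sin(xi[2])
    return np.array([c * dx + s * dy,
                     -s * dx + c * dy,
                     np.arctan2(np.sin(xj[2] - xi[2]), np.cos(xj[2] - xi[2]))])

def residuals(flat_poses, constraints):
    poses = flat_poses.reshape(-1, 3)
    res = [poses[0]]                    # gauge: softly pin the first pose at the origin
    for i, j, z in constraints:         # z = measured relative pose (odometry or loop)
        err = relative_pose(poses[i], poses[j]) - z
        err[2] = np.arctan2(np.sin(err[2]), np.cos(err[2]))  # wrap the angle error
        res.append(err)
    return np.concatenate(res)

# Three poses with sequential constraints plus a loop constraint closing 2 -> 0.
constraints = [(0, 1, np.array([1.0, 0.0, np.pi / 2])),
               (1, 2, np.array([1.0, 0.0, np.pi / 2])),
               (2, 0, np.array([1.0, 0.0, np.pi / 2]))]    # loop closure
sol = least_squares(residuals, np.zeros(9), args=(constraints,))
print(sol.x.reshape(-1, 3))             # optimized poses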
A system for use in the image-understanding-based single-line laser radar indoor SLAM positioning method, the system comprising: a sensor data input module, a semantic detection module, a local SLAM module, and a global SLAM module;
the sensor data input module is used for feeding data to the semantic detection module, the local SLAM module, and the global SLAM module;
the semantic detection module is used for extracting semantic information and detecting dynamic targets in the scene;
the local SLAM module is used for constructing a local map and updating the pose estimate;
the global SLAM module is used for loop detection and global optimization to eliminate accumulated errors.
In one possible design, the input data of the sensor data input module includes laser scan data, odometer pose data, IMU measurement data, fixed frame pose data, and image frames.
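A hypothetical container for one synchronized bundle of these five inputs might look as follows; the field names and array shapes are assumptions for illustration, not a prescribed interface.

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class SensorInput:
    """One time-stamped bundle of the input module's five data sources."""
    laser_scan: np.ndarray                  # (N, 2) lidar returns projected to 2D points
    odom_pose: np.ndarray                   # (3,)  x, y, theta from wheel odometry
    imu_measurement: np.ndarray             # (6,)  angular velocity + linear acceleration
    fixed_frame_pose: Optional[np.ndarray]  # optional external pose prior
    image_frame: np.ndarray                 # (H, W, 4) RGB-D frame from the camera
    stamp: float                            # acquisition time in seconds
```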
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
By capturing images and depth information with the RGB-D camera, the invention extracts semantic features from the images and constructs a visual map of the environment, which aids the registration of successive frames acquired by the single-line laser radar. The method thus achieves accurate and stable positioning and mapping in indoor environments, outperforms conventional single-line laser radar SLAM methods that lack image interpretation, fuses a laser SLAM algorithm with image-derived understanding of environment depth information, and offers broader applicability.
Drawings
Fig. 1 is a schematic flow chart of a single-line laser radar indoor SLAM positioning method based on image understanding provided by an embodiment of the invention;
FIG. 2 is a schematic diagram of a hardware design of a robot according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of a mapping effect evaluation comparison result provided by an embodiment of the present invention.
Detailed Description
Embodiments of the present invention will be described below with reference to the accompanying drawings.
In order that the above objects, features, and advantages of the invention may be readily understood, a more particular description of the invention is given below with reference to the accompanying drawings. In the following description, numerous specific details are set forth to provide a thorough understanding of the present invention. The invention may, however, be embodied in many forms other than those described herein, and those skilled in the art can make similar modifications without departing from the spirit of the invention; the invention is therefore not limited to the specific embodiments disclosed below.
Example 1
Referring to fig. 1, the system of this embodiment is applied to a single-line laser radar indoor SLAM positioning method based on image understanding. The system is built on the Cartographer framework and comprises a sensor data input module, a semantic detection module, a local SLAM module, and a global SLAM module.
The sensor data input module feeds data to the semantic detection module, the local SLAM module, and the global SLAM module. It takes laser scan data, odometer poses, IMU measurements, and fixed-frame poses as sensor inputs, and additionally accepts the image frames captured by an RGB-D camera, from which semantic features and depth information are extracted.
In processing the laser scan data, the entire positioning and mapping pipeline rests on optimizing the robot's pose estimate and the occupancy probability of the map grid from the scan data. The Cartographer system first preprocesses the scan data with two voxel filters and then feeds it to the local SLAM module, which builds a local map with the SLAM algorithm and generates sub-maps. In this way the system can map and localize an indoor environment efficiently and accurately, providing reliable technical support for applications such as indoor navigation and mobile robots.
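The grid occupation probability mentioned above is commonly maintained as log-odds with hit/miss updates. The sketch below is a generic, simplified version of such bookkeeping (Cartographer's actual update uses precomputed odds tables); the probabilities 0.55 and 0.49 are illustrative values.

```python
import numpy as np

class OccupancyGrid:
    """Minimal log-odds occupancy grid; hit/miss updates approximate the
    probability-grid bookkeeping described above."""
    LOG_HIT = np.log(0.55 / 0.45)    # evidence added where a beam endpoint lands
    LOG_MISS = np.log(0.49 / 0.51)   # evidence removed along cells the beam crossed

    def __init__(self, size: int = 200):
        self.log_odds = np.zeros((size, size))

    def update(self, hit_cells, miss_cells):
        for r, c in hit_cells:
            self.log_odds[r, c] += self.LOG_HIT
        for r, c in miss_cells:
            self.log_odds[r, c] += self.LOG_MISS

    def probability(self) -> np.ndarray:
        """Convert log-odds back to occupancy probabilities in [0, 1]."""
        return 1.0 - 1.0 / (1.0 + np.exp(self.log_odds))
```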
The odometer pose data, IMU measurement data, and fixed-frame pose data are mainly used to provide better initial values for the optimization, with the IMU data as the dominant part: an IMU tracker and a pose extrapolator provide initial pose estimates for local SLAM, and the raw data from these sensors is also used in the sparse pose adjustment of global SLAM.
The semantic detection module extracts semantic information and detects dynamic targets in the scene; the extracted semantic information is used to identify dynamic targets, improving the robustness of SLAM in dynamic environments. The module uses YOLOv5 as the semantic detection network: it improves accuracy while still detecting small targets and running in real time, and its more than 80 training classes of targets suit indoor SLAM scenes well.
The input image frames undergo semantic detection by the YOLOv5 target detection network; only the a priori dynamic targets are detected, and other targets are ignored. The detection results are passed to the tracking thread for preprocessing, where feature points on dynamic targets are removed, yielding an accurate camera pose estimate.
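The following sketch shows how such prior-dynamic filtering could be wired up with the public YOLOv5 release loaded through torch.hub. The class list, and the assumption that feature points inside a dynamic bounding box are simply dropped, are illustrative simplifications of the tracking-thread preprocessing described above.

```python
import numpy as np
import torch

# Load a pretrained YOLOv5 model from the Ultralytics hub
# (requires network access and the pandas package for .pandas()).
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
PRIOR_DYNAMIC = {'person', 'dog', 'cat', 'bicycle'}   # illustrative prior-dynamic classes

def dynamic_boxes(image: np.ndarray) -> np.ndarray:
    """Return bounding boxes of prior dynamic targets only, as (x1, y1, x2, y2) rows."""
    det = model(image).pandas().xyxy[0]               # one DataFrame per input image
    det = det[det['name'].isin(PRIOR_DYNAMIC)]        # ignore all other classes
    return det[['xmin', 'ymin', 'xmax', 'ymax']].to_numpy()

def keep_static_features(keypoints: np.ndarray, boxes: np.ndarray) -> np.ndarray:
    """Drop feature points falling inside any dynamic box, so only static
    scene features feed the camera pose estimate; keypoints is (N, 2)."""
    keep = np.ones(len(keypoints), dtype=bool)
    for x1, y1, x2, y2 in boxes:
        inside = ((keypoints[:, 0] >= x1) & (keypoints[:, 0] <= x2) &
                  (keypoints[:, 1] >= y1) & (keypoints[:, 1] <= y2))
        keep &= ~inside
    return keypoints[keep]
```

In this picture, the surviving static keypoints would then feed the visual pose estimator of the tracking thread.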
The local SLAM module constructs the local map and updates the pose estimate. Its input is the initial pose generated by the pose extrapolator and its output is a local sub-map. The implementation comprises three components, described below; illustrative sketches follow the list:
scan matching, a motion filter that discards near-stationary and redundant frames, and sub-map construction;
Scan matching: computes the optimal pose at the current moment relative to the previous period. A least-squares problem is set up and solved by nonlinear optimization, with the initial pose from the extrapolator as input and the optimal scan-matching pose as output; Cartographer uses Google's Ceres library to solve this least-squares problem.
Motion filter: aims to reduce the number of scan frames inserted into each sub-map. Once the scan matcher generates a new pose, the change from the last pose is computed and the motion filter is invoked; if the pose change is insignificant, the scan is discarded. A scan is inserted into the current sub-map only when the distance, angle, or time between poses changes significantly, which keeps the number of scans per sub-map bounded and reduces noise and errors in the map.
Sub-map: each time scan data arrives, it is matched against the most recently created sub-map so that the frame is inserted at the best position in the sub-map. The sub-map is updated as new data frames are inserted, and a certain amount of data is combined into one sub-map; once no new scans are being inserted, the sub-map is considered complete and the algorithm creates the next one.
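As a sketch of the scan-matching component, the fragment below refines an initial pose by nonlinear least squares over point-to-nearest-point residuals. SciPy stands in for the Ceres solver, and scan-to-scan matching stands in for Cartographer's scan-to-grid formulation; it is illustrative only.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial import cKDTree

def match_scan(scan: np.ndarray, map_points: np.ndarray,
               init: np.ndarray) -> np.ndarray:
    """Refine the extrapolator's initial pose [x, y, theta] by minimizing
    distances between the transformed scan and its nearest map points."""
    tree = cKDTree(map_points)

    def residual(pose):
        c, s = np.cos(pose[2]), np.sin(pose[2])
        world = scan @ np.array([[c, s], [-s, c]]) + pose[:2]  # rotate, then shift
        _, idx = tree.query(world)                             # nearest map point
        return (world - map_points[idx]).ravel()

    # Robust loss limits the influence of bad correspondences.
    return least_squares(residual, init, loss='huber', f_scale=0.1).x
```

The motion-filter component can be pictured as the small gate below; the distance, angle, and time thresholds are illustrative values.

```python
import numpy as np

class MotionFilter:
    """Accept a scan only if the pose moved enough or enough time passed."""
    def __init__(self, min_dist=0.2, min_angle=np.radians(1.0), min_dt=5.0):
        self.min_dist, self.min_angle, self.min_dt = min_dist, min_angle, min_dt
        self.last = None   # (x, y, theta, t) of the last accepted scan

    def accept(self, x, y, theta, t) -> bool:
        if self.last is None:
            self.last = (x, y, theta, t)
            return True
        lx, ly, lth, lt = self.last
        dtheta = np.arctan2(np.sin(theta - lth), np.cos(theta - lth))
        moved = (np.hypot(x - lx, y - ly) > self.min_dist or
                 abs(dtheta) > self.min_angle or
                 t - lt > self.min_dt)
        if moved:
            self.last = (x, y, theta, t)
        return moved
```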
The global SLAM module performs loop detection and global optimization to eliminate accumulated error. Its inputs are the new scan frames and the completed sub-maps, and its output is an optimized pose estimate. The implementation comprises three components, described below; an illustrative sketch follows the list:
constraint computation, sparse pose adjustment, and loop detection;
Computing constraints: relations between adjacent frames are established from several forms of information, such as pose differences, visual feature matching, and IMU data; these constraints serve as the optimizer's inputs for estimating the camera and robot trajectories and adjusting the map.
Sparse pose adjustment: if a good match is obtained, the loop detection process ends and the existence of a loop is confirmed. Then, based on the current scan pose and the pose in the sub-map that best matches it, all poses in the sub-map are optimized with the goal of minimizing the residual. This loop optimization is likewise a nonlinear least-squares problem, solved in Cartographer with Google's Ceres library.
Loop detection: during loop detection, all created sub-maps are matched against the current laser scan. If the current scan is close enough to a completed sub-map, a suitable matching strategy can find the loop closure; to reduce computational complexity and keep loop detection efficient in real time, a branch-and-bound optimization method is applied for an efficient search.
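A toy flavour of that branch-and-bound search: the sketch below finds the best translation offset in a match-score grid by coarse-to-fine search, pruning branches whose coarse upper bound cannot beat the best score found so far. It is a translation-only simplification of the full search over x, y, and theta, and illustrative only.

```python
import numpy as np

def max_pool(grid: np.ndarray, k: int = 2) -> np.ndarray:
    """Coarsen a score grid; each coarse cell upper-bounds its k*k fine cells."""
    g = np.pad(grid, ((0, -grid.shape[0] % k), (0, -grid.shape[1] % k)),
               constant_values=-np.inf)
    return g.reshape(g.shape[0] // k, k, g.shape[1] // k, k).max(axis=(1, 3))

def branch_and_bound(score: np.ndarray, depth: int = 3):
    """Exact argmax of a 2D score grid via coarse-to-fine search with pruning;
    score[dy, dx] rates how well the current scan matches a sub-map at that offset."""
    levels = [score]
    for _ in range(depth):
        levels.append(max_pool(levels[-1]))
    best, best_cell = -np.inf, None
    top = levels[-1]
    stack = [(len(levels) - 1, r, c)
             for r in range(top.shape[0]) for c in range(top.shape[1])]
    stack.sort(key=lambda n: levels[n[0]][n[1], n[2]])   # highest bound popped first
    while stack:
        lvl, r, c = stack.pop()
        if levels[lvl][r, c] <= best:
            continue                                     # prune: cannot beat the best
        if lvl == 0:
            best, best_cell = levels[0][r, c], (r, c)    # leaf: exact score
        else:
            kids = [(lvl - 1, 2 * r + i, 2 * c + j)
                    for i in (0, 1) for j in (0, 1)
                    if 2 * r + i < levels[lvl - 1].shape[0]
                    and 2 * c + j < levels[lvl - 1].shape[1]]
            kids.sort(key=lambda n: levels[n[0]][n[1], n[2]])
            stack.extend(kids)
    return best_cell, best

# Toy usage: the search must recover the grid's true maximum.
scores = np.random.default_rng(0).random((40, 40))
cell, value = branch_and_bound(scores)
assert value == scores.max()
```

Because each coarse cell stores the maximum of the cells beneath it, pruning never discards the true optimum.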
Example 2
Referring to figs. 1-3, an indoor SLAM positioning method for a single-line laser radar based on image understanding comprises the following steps:
S1, selecting an experimental platform and designing the robot hardware:
the 64-bit Ubuntu system is selected, and a hardware sensor of the robot comprises: the upper main board is raspberry pie 4B; the bottom main board is selected from English WeidajetsonNanoB 01; the main control adopts STM32F103RET6 chip; the laser radar is integrated with a Silan A1 single-line laser radar, a nine-axis IMU (MPU 9250) and an odometer; meanwhile, an ackerman steering chassis is adopted, and the rear wheels of the chassis are driven by 2 direct current motors with encoders.
S2, collecting and preprocessing the sensor input data:
S2.1, preprocessing the laser ranging data acquired by the single-line laser radar with the voxel filter and the adaptive voxel filter, and feeding the result into the local SLAM module;
S2.2, feeding the image frames captured by the RGB-D camera into the YOLOv5 target detection network, detecting the a priori dynamic targets frame by frame, extracting semantic features, and outputting an accurate camera pose estimate;
S2.3, preprocessing the IMU data with the inertial tracker, and sending the result, together with the odometer pose data and the camera pose estimate, into the pose extrapolator.
S3, implementing local SLAM mapping:
S3.1, computing a pose estimate from the odometer and IMU data;
S3.2, using the pose estimate as an initial guess, matching the laser radar data and updating the pose estimator;
S3.3, after each frame of laser radar data passes the motion filter, accumulating and combining the frames into a sub-map.
S4, implementing global SLAM mapping to complete indoor SLAM positioning:
S4.1, inserting the new scan frame, together with all previous scan frames, into the completed sub-maps;
S4.2, performing loop detection with the branch-and-bound optimization algorithm;
S4.3, computing constraint relations among the poses based on the detected loop constraints;
S4.4, finally, optimizing all constraints with a pose optimization algorithm to obtain a more accurate pose estimate.
Through the above method and system, the RGB-D camera captures images, semantic features and depth information are extracted from them, and a visual map of the environment is constructed. This aids the registration of successive frames acquired by the single-line laser radar, ultimately achieving accurate and stable positioning and mapping in indoor environments and enabling the robot to autonomously localize and map complex indoor scenes.
The present invention is not limited to the above embodiments; any changes or substitutions that a person skilled in the art could readily conceive within the technical scope of the invention are intended to be covered by its scope of protection, and embodiments of the invention and features of the embodiments may be combined with one another in the absence of conflict. Therefore, the protection scope of the invention is subject to the protection scope of the claims.

Claims (6)

1. An indoor SLAM positioning method based on image understanding for a single-line laser radar, characterized by comprising the following steps:
S1, selecting an experimental platform and designing the robot hardware;
S2, collecting and preprocessing the sensor input data;
S3, implementing local SLAM mapping;
S4, implementing global SLAM mapping to complete indoor SLAM positioning.
2. The single-line laser radar indoor SLAM positioning method based on image understanding according to claim 1, wherein step S2 specifically comprises:
S2.1, preprocessing the laser ranging data acquired by the single-line laser radar with a voxel filter and an adaptive voxel filter, and feeding the result into a local SLAM module;
S2.2, feeding the image frames captured by an RGB-D camera into a YOLOv5 target detection network, detecting the a priori dynamic targets frame by frame, extracting semantic features, and outputting an accurate camera pose estimate;
S2.3, preprocessing the IMU data with an inertial tracker, and sending the result, together with the odometer pose data and the camera pose estimate, into a pose extrapolator.
3. The single-line laser radar indoor SLAM positioning method based on image understanding according to claim 1, wherein step S3 specifically comprises:
S3.1, computing a pose estimate from the odometer and IMU data;
S3.2, using the pose estimate as an initial guess, matching the laser radar data and updating the pose estimator;
S3.3, after each frame of laser radar data passes the motion filter, accumulating and combining the frames into a sub-map.
4. The single-line laser radar indoor SLAM positioning method based on image understanding according to claim 1, wherein step S4 comprises:
S4.1, inserting the new scan frame, together with all previous scan frames, into the completed sub-maps;
S4.2, performing loop detection with the branch-and-bound optimization algorithm;
S4.3, computing constraint relations among the poses based on the detected loop constraints;
S4.4, finally, optimizing all constraints with a pose optimization algorithm to obtain a more accurate pose estimate.
5. A system for the image-understanding-based single-line laser radar indoor SLAM positioning method, characterized in that the system comprises: a sensor data input module, a semantic detection module, a local SLAM module, and a global SLAM module;
the sensor data input module is used for feeding data to the semantic detection module, the local SLAM module, and the global SLAM module;
the semantic detection module is used for extracting semantic information and detecting dynamic targets in the scene;
the local SLAM module is used for constructing a local map and updating the pose estimate;
the global SLAM module is used for loop detection and global optimization to eliminate accumulated errors.
6. The system according to claim 5, wherein the input data of the sensor data input module comprises laser scan data, odometer pose data, IMU measurement data, fixed-frame pose data, and image frames.
CN202311474781.3A 2023-11-08 2023-11-08 Single-line laser radar indoor SLAM positioning method and system based on image understanding Pending CN117554984A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311474781.3A CN117554984A (en) 2023-11-08 2023-11-08 Single-line laser radar indoor SLAM positioning method and system based on image understanding


Publications (1)

Publication Number Publication Date
CN117554984A 2024-02-13

Family

ID=89819582

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311474781.3A Pending CN117554984A (en) 2023-11-08 2023-11-08 Single-line laser radar indoor SLAM positioning method and system based on image understanding

Country Status (1)

Country Link
CN (1) CN117554984A (en)


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180075643A1 (en) * 2015-04-10 2018-03-15 The European Atomic Energy Community (Euratom), Represented By The European Commission Method and device for real-time mapping and localization
US20190329407A1 (en) * 2018-04-30 2019-10-31 Beijing Jingdong Shangke Information Technology Co., Ltd. System and method for multimodal mapping and localization
US20210078174A1 (en) * 2019-09-17 2021-03-18 Wuyi University Intelligent medical material supply robot based on internet of things and slam technology
CN111983639A (en) * 2020-08-25 2020-11-24 浙江光珀智能科技有限公司 Multi-sensor SLAM method based on Multi-Camera/Lidar/IMU
CN113238554A (en) * 2021-05-08 2021-08-10 武汉科技大学 Indoor navigation method and system based on SLAM technology integrating laser and vision
CN113674412A (en) * 2021-08-12 2021-11-19 浙江工商大学 Pose fusion optimization-based indoor map construction method and system and storage medium
CN114092714A (en) * 2021-11-19 2022-02-25 江苏科技大学 Household mowing robot positioning method based on enhanced loop detection and repositioning
CN114721377A (en) * 2022-03-22 2022-07-08 盐城工学院 Improved Cartogrier based SLAM indoor blind guiding robot control method
CN115015956A (en) * 2022-04-12 2022-09-06 南京邮电大学 Laser and vision SLAM system of indoor unmanned vehicle
CN114782626A (en) * 2022-04-14 2022-07-22 国网河南省电力公司电力科学研究院 Transformer substation scene mapping and positioning optimization method based on laser and vision fusion
CN116929335A (en) * 2023-07-26 2023-10-24 西南科技大学 Radiation field map construction method based on multisource information perception

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Du Bowen et al., "Progress in Multi-Sensor Fusion LiDAR SLAM and Its Applications", Journal of Tianjin University of Technology and Education, 30 September 2023 (2023-09-30), pages 1-5 *

Similar Documents

Publication Publication Date Title
CN112634451B (en) Outdoor large-scene three-dimensional mapping method integrating multiple sensors
CN112734852B (en) Robot mapping method and device and computing equipment
CN112304307B (en) Positioning method and device based on multi-sensor fusion and storage medium
CN111275763B (en) Closed loop detection system, multi-sensor fusion SLAM system and robot
US10546387B2 (en) Pose determination with semantic segmentation
US9990726B2 (en) Method of determining a position and orientation of a device associated with a capturing device for capturing at least one image
CN110587597B (en) SLAM closed loop detection method and detection system based on laser radar
CN113706626B (en) Positioning and mapping method based on multi-sensor fusion and two-dimensional code correction
US12008785B2 (en) Detection, 3D reconstruction and tracking of multiple rigid objects moving in relation to one another
CN115049700A (en) Target detection method and device
CN112419497A (en) Monocular vision-based SLAM method combining feature method and direct method
CN116879870A (en) Dynamic obstacle removing method suitable for low-wire-harness 3D laser radar
CN116188417A (en) Slit detection and three-dimensional positioning method based on SLAM and image processing
Wei et al. Novel robust simultaneous localization and mapping for long-term autonomous robots
Guerra et al. New validation algorithm for data association in SLAM
Thomas et al. Delio: Decoupled lidar odometry
CN117419719A (en) IMU-fused three-dimensional laser radar positioning and mapping method
CN116045965A (en) Multi-sensor-integrated environment map construction method
CN117554984A (en) Single-line laser radar indoor SLAM positioning method and system based on image understanding
Li-Chee-Ming et al. Augmenting visp’s 3d model-based tracker with rgb-d slam for 3d pose estimation in indoor environments
Pan et al. LiDAR-IMU Tightly-Coupled SLAM Method Based on IEKF and Loop Closure Detection
CN114419155B (en) Visual image construction method based on laser radar assistance
CN117611677B (en) Robot positioning method based on target detection and structural characteristics
Tamjidi et al. A pose estimation method for unmanned ground vehicles in GPS denied environments
CN117191051A (en) Method and equipment for realizing autonomous navigation and target identification of lunar surface detector

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination