CN117554984A - A single-line lidar indoor SLAM positioning method and system based on image understanding - Google Patents
- Publication number: CN117554984A
- Application number: CN202311474781.3A
- Authority: CN (China)
- Prior art keywords: slam, data, indoor, module, line lidar
- Legal status: Pending
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
- G01C21/206—Instruments for performing navigational calculations specially adapted for indoor navigation
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention discloses a single-line lidar indoor SLAM positioning method and system based on image understanding, belonging to the technical field of autonomous robot navigation and comprising the following steps: S1, selecting an experimental platform and designing the robot hardware; S2, acquiring and preprocessing the input sensor data; S3, performing local SLAM mapping; S4, performing global SLAM mapping to complete indoor SLAM positioning. By using an RGB-D camera to capture images and depth information, the invention can extract semantic features from the images and build a visual map of the environment, which assists the registration of consecutive frames obtained by the single-line lidar. The method achieves accurate and robust positioning and mapping in indoor environments and outperforms traditional single-line lidar SLAM methods that lack image interpretation: by fusing a graph-optimized laser SLAM algorithm with an understanding of environmental depth information, it attains accurate and robust indoor positioning and mapping with stronger applicability.
Description
Technical field
The present invention relates to the technical field of autonomous robot navigation, and in particular to a single-line lidar indoor SLAM positioning method and system based on image understanding.
Background
Simultaneous localization and mapping (SLAM) has been widely applied in various fields such as robotics, autonomous driving, and virtual reality, and indoor SLAM is particularly important for applications such as indoor navigation, building inspection, and rescue missions. Owing to its low cost and high accuracy, single-line lidar has become a commonly used sensor for indoor SLAM. However, its limited field of view makes it difficult to obtain a comprehensive three-dimensional point cloud. In addition, traditional single-line lidar SLAM methods are prone to drift and require frequent loop-closure correction.
There are many kinds of SLAM algorithms, with different solutions for different problems. Some methods apply multi-threaded nonlinear optimization in SLAM; such algorithms successfully solve the real-time pose estimation problem, but they lack loop detection and are only suitable for building small-scale maps. Some low-drift SLAM methods based on point-line-plane pose estimation for indoor scenes have successfully solved the SLAM mapping problem for low-texture indoor scenes, but their recognition rate remains low in complex scenes. The lili-om solid-state lidar-inertial odometry SLAM solution implements tightly coupled mapping for solid-state and mechanical lidar; it designs a new feature extraction method for the irregular and unique scanning pattern of the Livox Horizon, improving feature extraction, tracking, and mapping accuracy in complex scenes. However, because it relies on the Livox Horizon, it is not cheap.
An analysis of existing lidar SLAM positioning algorithms and technologies shows that semantic SLAM methods allow a SLAM system to obtain higher-level scene understanding, but they are subject to certain limitations in practical applications.
In response to the above problems, this document proposes a single-line lidar indoor SLAM positioning method and system based on image understanding.
Summary of the invention
The present invention provides a single-line lidar indoor SLAM positioning method and system based on image understanding, which solves the above problems.
The present invention provides the following technical solutions:
A single-line lidar indoor SLAM positioning method based on image understanding, comprising the following steps:
S1. Select an experimental platform and design the robot hardware;
S2. Acquire and preprocess the input sensor data;
S3. Perform local SLAM mapping;
S4. Perform global SLAM mapping to complete indoor SLAM positioning.
In one possible design, step S2 specifically comprises:
S2.1. Preprocessing the laser ranging data collected by the single-line lidar through a voxel filter and an adaptive voxel filter, and feeding the results into the local SLAM module;
S2.2. Feeding the image frames captured by the RGB-D camera into the YOLOv5 object detection network to detect a priori dynamic targets, obtaining the next frame, extracting semantic features, and outputting an accurate camera pose estimate;
S2.3. Preprocessing the IMU data with an inertial tracker, and feeding the results, together with the odometry pose data and the camera pose estimate, into the pose extrapolator.
In one possible design, step S3 specifically comprises:
S3.1. Computing a pose estimate from the odometry and IMU data;
S3.2. Using the pose estimate as the initial guess, matching the lidar data and updating the pose estimator;
S3.3. After motion-filtering each frame of lidar data, accumulating the frames into a submap.
In one possible design, step S4 comprises:
S4.1. Inserting the new scan frame and all previous scan frames into the completed submaps;
S4.2. Performing loop detection with the branch-and-bound optimization algorithm;
S4.3. Computing the constraint relationships between poses based on the detected loop constraints;
S4.4. Finally, optimizing all constraints with a pose optimization algorithm to obtain more accurate pose estimation results.
A system applied to the single-line lidar indoor SLAM positioning method based on image understanding, the system comprising a sensor data input module, a semantic detection module, a local SLAM module, and a global SLAM module;
the sensor data input module is used to input data to the semantic detection module, the local SLAM module, and the global SLAM module;
the semantic detection module is used to extract semantic information and detect dynamic targets in the scene;
the local SLAM module is used to construct a local map and update the pose estimate;
the global SLAM module is used to detect loops and perform global optimization to eliminate accumulated errors.
In one possible design, the input data of the sensor data input module includes laser scan data, odometry pose data, IMU measurement data, fixed-frame pose data, and image frames.
It should be understood that the above general description and the following detailed description are exemplary only and do not limit the present invention.
By using an RGB-D camera to capture images and depth information, the present invention can extract semantic features from the images and build a visual map of the environment, which assists the registration of consecutive frames obtained by the single-line lidar. It achieves accurate and robust positioning and mapping in indoor environments, outperforming traditional single-line lidar SLAM methods that lack image interpretation: by fusing a graph-optimized laser SLAM algorithm with an understanding of environmental depth information, it attains accurate and robust indoor positioning and mapping with stronger applicability.
Description of the drawings
Figure 1 is a schematic flow chart of the single-line lidar indoor SLAM positioning method based on image understanding provided by an embodiment of the present invention;
Figure 2 is a schematic diagram of the robot hardware design provided by an embodiment of the present invention;
Figure 3 is a schematic diagram of the comparative evaluation of the mapping results provided by an embodiment of the present invention.
Detailed description of the embodiments
The embodiments of the present invention are described below with reference to the accompanying drawings.
To make the above objects, features, and advantages of the present invention more apparent and understandable, specific embodiments of the present invention are described in detail below with reference to the accompanying drawings. In the following description, numerous specific details are set forth to provide a thorough understanding of the invention. However, the present invention can be implemented in many ways other than those described here, and those skilled in the art can make similar improvements without departing from its essence; the present invention is therefore not limited to the specific embodiments disclosed below.
Embodiment 1
Referring to Figure 1, this embodiment provides a system applied to the single-line lidar indoor SLAM positioning method based on image understanding. The system is built on the Cartographer framework and includes a sensor data input module, a semantic detection module, a local SLAM module, and a global SLAM module.
The sensor data input module is used to input data to the semantic detection module, the local SLAM module, and the global SLAM module. It takes laser scan data, odometry poses, IMU measurement data, and fixed-frame poses as input sensor data, supplemented by image frames captured by the RGB-D camera, from which semantic features and depth information are extracted.
In processing the laser scan data, the entire localization and mapping process rests on optimizing the robot's pose estimate and the occupancy-grid probabilities of the map from the scan data. In this process, the Cartographer system first preprocesses the scan data through two voxel filters, as sketched below, and then feeds it into the local SLAM module, which uses the SLAM algorithm to build a local map and generate submaps. This allows the Cartographer system to map and localize indoor environments efficiently and accurately, providing reliable technical support for applications such as indoor navigation and mobile robots.
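As an illustration of this preprocessing stage, the following is a minimal sketch of voxel-grid downsampling in Python with NumPy. The centroid-per-voxel reduction and the halving strategy of the adaptive stage are assumptions chosen for clarity, not the patent's exact parameters; Cartographer's actual filters are configured differently.

```python
import numpy as np

def voxel_filter(points, voxel_size):
    """Keep one representative point (the centroid) per occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel index and average each group.
    _, inverse, counts = np.unique(keys, axis=0, return_inverse=True, return_counts=True)
    inverse = inverse.ravel()
    sums = np.zeros((counts.size, points.shape[1]))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

def adaptive_voxel_filter(points, max_voxel_size, min_num_points):
    """Shrink the voxel edge until enough points survive (illustrative only)."""
    size = max_voxel_size
    filtered = voxel_filter(points, size)
    while filtered.shape[0] < min_num_points and size > 1e-3:
        size *= 0.5  # halve the voxel edge and retry
        filtered = voxel_filter(points, size)
    return filtered

# Usage: scan is an (N, 2) array of 2D lidar points in the sensor frame.
scan = np.random.uniform(-5.0, 5.0, size=(2000, 2))
coarse = voxel_filter(scan, voxel_size=0.05)
result = adaptive_voxel_filter(coarse, max_voxel_size=0.5, min_num_points=200)
```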
The odometry pose data, IMU measurement data, and fixed-frame pose data are mainly used to provide good initial values for the optimization, with the IMU data playing the leading role. Through the IMU tracker and the pose extrapolator, they provide an initial pose estimate for local SLAM, and the raw data from these sensors are also used in the sparse pose adjustment of global SLAM.
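To make the role of the pose extrapolator concrete, here is a minimal 2D sketch that blends odometry translation with IMU yaw-rate integration to predict the pose used as the scan matcher's initial guess. The constant-velocity model and the interfaces are illustrative assumptions, not Cartographer's actual C++ API.

```python
from dataclasses import dataclass

@dataclass
class Pose2D:
    x: float
    y: float
    theta: float  # heading in radians

class PoseExtrapolator:
    """Predicts a future pose from the last optimized pose, odometry linear
    velocity, and IMU angular velocity (illustrative sketch)."""

    def __init__(self, initial_pose, initial_time):
        self.pose = initial_pose
        self.time = initial_time
        self.linear_velocity = (0.0, 0.0)  # from odometry, world frame
        self.angular_velocity = 0.0        # yaw rate from the IMU

    def add_odometry_velocity(self, vx, vy):
        self.linear_velocity = (vx, vy)

    def add_imu_yaw_rate(self, wz):
        self.angular_velocity = wz

    def extrapolate(self, t):
        """Constant-velocity prediction used as the scan matcher's initial guess."""
        dt = t - self.time
        return Pose2D(
            self.pose.x + self.linear_velocity[0] * dt,
            self.pose.y + self.linear_velocity[1] * dt,
            self.pose.theta + self.angular_velocity * dt,
        )

    def update(self, pose, t):
        """Called after scan matching returns the optimized pose."""
        self.pose, self.time = pose, t
```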
The semantic detection module is used to extract semantic information and detect dynamic targets in the scene. The extracted semantic information can be used to identify dynamic targets, thereby improving the robustness of SLAM in dynamic environments. The semantic detection module adopts YOLOv5 as the semantic detection network: it improves accuracy while still detecting small targets and maintaining real-time performance, and YOLOv5 covers more than 80 object categories, whose training classes are well suited to indoor SLAM scenes.
The input image frames undergo semantic detection with the YOLOv5 object detection network. During detection, only a priori dynamic targets are detected and other targets are ignored. The detection results are passed to the tracking thread for preprocessing, which removes the feature points lying on the dynamic targets to obtain an accurate camera pose estimate.
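A minimal sketch of that masking step: feature points falling inside YOLOv5 bounding boxes for a priori dynamic classes are discarded before pose estimation. The model loading call and result access follow the publicly documented ultralytics/yolov5 hub interface, but the class list and the integration are assumptions for illustration; the patent does not specify them.

```python
import torch

# Load a pretrained YOLOv5 model from the public hub (documented interface).
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
DYNAMIC_CLASSES = {'person', 'dog', 'cat'}  # assumed a priori dynamic targets

def filter_dynamic_features(image, keypoints):
    """Drop feature points that fall inside bounding boxes of dynamic targets."""
    detections = model(image).pandas().xyxy[0]
    boxes = detections[detections['name'].isin(DYNAMIC_CLASSES)]
    kept = []
    for (u, v) in keypoints:  # pixel coordinates of extracted feature points
        inside = ((boxes['xmin'] <= u) & (u <= boxes['xmax']) &
                  (boxes['ymin'] <= v) & (v <= boxes['ymax'])).any()
        if not inside:
            kept.append((u, v))
    return kept
```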
The local SLAM module is used to construct a local map and update the pose estimate. The input to this process is the initial pose generated by the pose extrapolator, and the output is a local submap. Its implementation comprises three components:
scan matching, motion filtering, and the discarding of static and useless data.
Scan matching: the goal of this component is to compute the optimal pose at the current moment relative to the recent past. A nonlinear optimization method is used to set up a least-squares problem, with the initial pose obtained from the extrapolator as input and the optimal scan-matched pose as output; Cartographer solves this least-squares problem with Google's Ceres library.
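Cartographer writes this problem in C++ against Ceres cost functors; as a language-neutral illustration, the sketch below poses the same kind of problem with scipy.optimize.least_squares, scoring a candidate pose by how well the transformed scan lands on occupied grid cells. The occupancy lookup, the anchor weight, and the residual design are simplified assumptions that only mirror the structure of the real solver.

```python
import numpy as np
from scipy.optimize import least_squares

def scan_match(initial_pose, scan, prob_lookup):
    """Refine a 2D pose (x, y, theta) so the transformed scan points land on
    high-probability map cells. `scan` is an (N, 2) array in the sensor frame;
    `prob_lookup(points)` returns the map occupancy probability (0..1) at each
    world point, e.g. via bilinear interpolation of the grid."""
    init = np.asarray(initial_pose, dtype=float)

    def residuals(pose):
        x, y, theta = pose
        c, s = np.cos(theta), np.sin(theta)
        world = scan @ np.array([[c, -s], [s, c]]).T + np.array([x, y])
        occupancy = 1.0 - prob_lookup(world)  # small when points hit occupied cells
        anchor = 0.1 * (pose - init)          # stay near the extrapolated guess
        return np.concatenate([occupancy, anchor])

    return least_squares(residuals, x0=init).x  # the scan-matched pose
```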
Motion filter: the goal of this component is to reduce the number of scan frames inserted into each submap. Once the scan matcher produces a new pose, the change between this pose and the last one is computed and the motion filter is invoked. If the pose change is insignificant or too small, the scan is discarded; the scan is inserted into the current submap only when the distance, angle, or elapsed time between poses changes significantly. This keeps the number of scans per submap from growing too large and also reduces noise and errors in the map.
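A minimal sketch of such a motion filter follows; the thresholds are illustrative assumptions (Cartographer exposes the analogous values as configuration parameters).

```python
import math

class MotionFilter:
    """Accepts a new scan only if the pose moved enough in distance, angle,
    or time since the last accepted scan (thresholds are illustrative)."""

    def __init__(self, max_time=5.0, max_distance=0.2, max_angle=math.radians(1.0)):
        self.max_time, self.max_distance, self.max_angle = max_time, max_distance, max_angle
        self.last = None  # (time, x, y, theta) of the last accepted scan

    def is_similar(self, t, x, y, theta):
        if self.last is None:
            self.last = (t, x, y, theta)
            return False  # the first scan is always inserted
        lt, lx, ly, ltheta = self.last
        moved_far = math.hypot(x - lx, y - ly) > self.max_distance
        turned = abs(theta - ltheta) > self.max_angle
        waited = (t - lt) > self.max_time
        if moved_far or turned or waited:
            self.last = (t, x, y, theta)
            return False  # pose changed significantly: insert this scan
        return True       # too similar: drop the scan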
Submaps: whenever a scan is obtained, it is matched against the most recently created submap so that the scan frame is inserted at the best position in that submap. While new data frames are continuously inserted, the submap is updated, and a certain amount of data is combined into one submap. When no new scans are being inserted into a submap, it is considered finished and the algorithm creates the next submap.
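This submap bookkeeping can be sketched as follows; the fixed scan budget per submap is an assumed parameter standing in for the role Cartographer's num_range_data setting plays.

```python
class Submap:
    """Accumulates a fixed number of scans, then is marked finished."""

    def __init__(self, origin_pose, num_range_data=90):
        self.origin_pose = origin_pose
        self.num_range_data = num_range_data
        self.scans = []       # (pose, scan) pairs inserted so far
        self.finished = False

    def insert(self, pose, scan):
        assert not self.finished
        self.scans.append((pose, scan))
        if len(self.scans) >= self.num_range_data:
            self.finished = True  # no more insertions; global SLAM may use it

class SubmapCollection:
    """Keeps the active submap and starts a new one when it finishes."""

    def __init__(self):
        self.submaps = []

    def insert(self, pose, scan):
        if not self.submaps or self.submaps[-1].finished:
            self.submaps.append(Submap(origin_pose=pose))
        self.submaps[-1].insert(pose, scan)
```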
The global SLAM module is used to detect loops and perform global optimization to eliminate accumulated errors. Its inputs are the new scan frames and the finished submaps, and its output is the optimized pose estimate. Its implementation comprises three components:
constraint computation, sparse pose adjustment, and loop detection.
Constraint computation: computing constraints means establishing relationships between adjacent frames based on various forms of information, such as pose differences, visual feature matches, and IMU data. These constraints serve as input to the optimizer for estimating the camera and robot trajectories and adjusting the map.
Sparse pose adjustment: if a good match is obtained, the loop detection process ends and the existence of a loop has been detected. Then, based on the current scan pose and the best-matching pose in the submaps, all poses in the submaps are optimized with the goal of minimizing the residual error. The loop optimization problem is again a nonlinear least-squares problem, which Cartographer solves with Google's Ceres library.
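The shape of this residual minimization can be illustrated with a small 2D pose-graph sketch in Python, where each constraint penalizes the difference between the predicted and the measured relative pose. Uniform information weights, the gauge fix, and the simple angle wrapping are assumptions made for brevity; the real problem adds one such constraint per scan-to-submap match and per loop closure.

```python
import numpy as np
from scipy.optimize import least_squares

def optimize_pose_graph(poses, constraints):
    """poses: (M, 3) array of (x, y, theta). constraints: list of
    (i, j, dx, dy, dtheta) meaning pose j as measured from pose i."""

    def residuals(flat):
        p = flat.reshape(-1, 3)
        res = [p[0] - poses[0]]  # gauge fix: anchor the first pose
        for i, j, dx, dy, dtheta in constraints:
            c, s = np.cos(p[i, 2]), np.sin(p[i, 2])
            # Relative translation of j expressed in i's frame.
            rel = np.array([ c * (p[j, 0] - p[i, 0]) + s * (p[j, 1] - p[i, 1]),
                            -s * (p[j, 0] - p[i, 0]) + c * (p[j, 1] - p[i, 1])])
            # Wrap the angular error into (-pi, pi].
            dth = (p[j, 2] - p[i, 2] - dtheta + np.pi) % (2 * np.pi) - np.pi
            res.append(np.array([rel[0] - dx, rel[1] - dy, dth]))
        return np.concatenate(res)

    result = least_squares(residuals, poses.ravel())
    return result.x.reshape(-1, 3)  # globally adjusted poses
```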
Loop detection: during loop detection, all created submaps and the current laser scan are used for matching. If the current scan is close enough to any finished submap, a suitable matching strategy can find the loop closure. To reduce computational complexity and improve real-time loop-detection efficiency, a branch-and-bound optimization method is applied for an efficient search.
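The branch-and-bound idea can be illustrated with the simplified sketch below, which searches integer translation offsets only (the real matcher also branches over rotations and works in metric units). Precomputed max-pooled grids make every coarse node an admissible upper bound on all finer offsets inside its window, so whole regions of candidates can be pruned; the grid contents, search window, depth, and boundary handling are illustrative assumptions.

```python
import heapq
import numpy as np

def max_pool_grids(grid, depth):
    """grids[d][y, x] upper-bounds grid over the window [y, y+2^d) x [x, x+2^d)."""
    grids = [grid.astype(float)]
    for d in range(depth):
        g, s = grids[-1], 1 << d
        padded = np.pad(g, ((0, s), (0, s)), constant_values=-np.inf)
        grids.append(np.maximum(
            np.maximum(padded[:-s, :-s], padded[s:, :-s]),
            np.maximum(padded[:-s, s:], padded[s:, s:])))
    return grids

def branch_and_bound_match(grid, scan_cells, search_size, depth=4):
    """Find the integer (dy, dx) offset maximizing the summed match score of
    scan_cells (an (N, 2) int array of map-cell coordinates) against grid
    (per-cell match scores), pruning whole windows via the pooled grids."""
    grids = max_pool_grids(grid, depth)
    ys, xs = scan_cells[:, 0], scan_cells[:, 1]

    def bound(d, dy, dx):
        g = grids[d]
        yy, xx = ys + dy, xs + dx
        ok = (yy >= 0) & (yy < g.shape[0]) & (xx >= 0) & (xx < g.shape[1])
        return g[yy[ok], xx[ok]].sum()  # upper bound for the whole window

    best_score, best_offset = -np.inf, None
    # Best-first queue of (negated bound, depth, dy, dx) over coarse windows.
    heap = [(-bound(depth, dy, dx), depth, dy, dx)
            for dy in range(-search_size, search_size + 1, 1 << depth)
            for dx in range(-search_size, search_size + 1, 1 << depth)]
    heapq.heapify(heap)
    while heap:
        neg_b, d, dy, dx = heapq.heappop(heap)
        if -neg_b <= best_score:
            break  # every remaining window is provably worse: prune them all
        if d == 0:
            best_score, best_offset = -neg_b, (dy, dx)  # leaf bound is exact
            continue
        s = 1 << (d - 1)  # split the window into four children
        for cy, cx in [(dy, dx), (dy + s, dx), (dy, dx + s), (dy + s, dx + s)]:
            heapq.heappush(heap, (-bound(d - 1, cy, cx), d - 1, cy, cx))
    return best_offset, best_score
```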
Embodiment 2
Referring to Figures 1 to 3, a single-line lidar indoor SLAM positioning method based on image understanding includes the following steps:
S1. Select an experimental platform and design the robot hardware:
A 64-bit Ubuntu system is used. The robot hardware comprises: a Raspberry Pi 4B as the upper-level mainboard; an NVIDIA Jetson Nano B01 as the lower-level mainboard; an STM32F103RET6 chip as the main controller; and an integrated Slamtec A1 single-line lidar, a nine-axis IMU (MPU9250), and wheel odometry. The robot uses an Ackermann-steering chassis whose rear wheels are driven by two DC motors with encoders.
S2. Acquire and preprocess the input sensor data:
S2.1. Preprocess the laser ranging data collected by the single-line lidar through a voxel filter and an adaptive voxel filter, and feed the results into the local SLAM module;
S2.2. Feed the image frames captured by the RGB-D camera into the YOLOv5 object detection network to detect a priori dynamic targets, obtain the next frame, extract semantic features, and output an accurate camera pose estimate;
S2.3. Preprocess the IMU data with an inertial tracker, and feed the results, together with the odometry pose data and the camera pose estimate, into the pose extrapolator.
S3. Perform local SLAM mapping:
S3.1. Compute a pose estimate from the odometry and IMU data;
S3.2. Using the pose estimate as the initial guess, match the lidar data and update the pose estimator;
S3.3. After motion-filtering each frame of lidar data, accumulate the frames into a submap.
S4. Perform global SLAM mapping to complete indoor SLAM positioning:
S4.1. Insert the new scan frame and all previous scan frames into the completed submaps;
S4.2. Perform loop detection with the branch-and-bound optimization algorithm;
S4.3. Compute the constraint relationships between poses based on the detected loop constraints;
S4.4. Finally, optimize all constraints with a pose optimization algorithm to obtain more accurate pose estimation results.
In this method and system, images are captured with an RGB-D camera, semantic features and depth information are extracted from the images, and a visual map of the environment is built to aid the registration of consecutive frames obtained by the single-line lidar, ultimately achieving accurate and robust positioning and mapping in indoor environments and enabling the robot to autonomously localize and map complex indoor scenes.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any changes or substitutions that a person skilled in the art could readily conceive within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention; where no conflict arises, the embodiments of the present invention and the features in the embodiments may be combined with one another. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (6)
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202311474781.3A | 2023-11-08 | 2023-11-08 | A single-line lidar indoor SLAM positioning method and system based on image understanding |

Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN117554984A | 2024-02-13 |

Family ID: 89819582

Country Status (1)

| Country | Link |
|---|---|
| CN | CN117554984A (en) |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |