WO2022257801A1 - Slam-based mobile robot mine scene reconstruction method and system - Google Patents
- Publication number
- WO2022257801A1 · PCT/CN2022/095982 · CN2022095982W
- Authority
- WO
- WIPO (PCT)
Classifications
- G06T17/05—Three dimensional [3D] modelling; geographic models
- G06T5/20—Image enhancement or restoration by the use of local operators
- G06T5/50—Image enhancement or restoration by the use of more than one image
- G06T5/80
- G06T7/85—Stereo camera calibration
- G06T2207/10028—Range image; depth image; 3D point clouds
- G06T2207/20221—Image fusion; image merging
- Y02T10/40—Engine management systems
Abstract
The present disclosure provides a SLAM-based mobile robot mine scene reconstruction method and system: acquiring synchronously calibrated laser point cloud data and visual point cloud data measured by a mobile robot; fusing the acquired laser point cloud data and visual point cloud data; performing motion distortion removal and point cloud filtering on the fused point cloud data; and, using a graph-optimization-based multi-constraint factor graph algorithm on the processed point cloud data, adding IMU pre-integration data, point cloud keyframe data, and GNSS data to a constraint subgraph and performing loop closure detection to obtain a reconstructed 3D map. The present disclosure can effectively achieve 3D reconstruction of a mine scene and ultimately obtain a colored point cloud map, improving the precision of mine scene reconstruction.
Description
The present disclosure relates to the technical field of scene reconstruction, and in particular to a SLAM-based mobile robot mine scene reconstruction method and system.
The statements in this section merely provide background information related to the present disclosure and do not necessarily constitute prior art.
With the advent of the 5G and autonomous-driving eras, Simultaneous Localization and Mapping (SLAM) technology has been widely applied in fields such as autonomous vehicle positioning, high-precision map acquisition, augmented reality (AR), and surveying and mapping.
The inventors found that in mine environments, heavy airborne dust and floating particles make the data collected by lidar and visual sensors inaccurate, which greatly restricts the application of SLAM in mine scenarios. In addition, lidar obtains only distance and light-intensity information and cannot capture the color of objects, so the resulting 3D reconstruction map is not vivid enough for a person to remotely command construction machinery in a virtual scene; a visual camera, by contrast, can produce rich colored point clouds, but its range measurements are not accurate enough. A single sensor therefore cannot meet the needs of SLAM mapping in a mine environment.
Summary of the Invention
To address the deficiencies of the prior art, the present disclosure provides a SLAM-based mobile robot mine scene reconstruction method and system that can effectively achieve three-dimensional reconstruction of a mine scene and ultimately produce a colored point cloud map, improving the accuracy of mine scene reconstruction.
To achieve the above purpose, the present disclosure adopts the following technical solutions.
A first aspect of the present disclosure provides a SLAM-based mobile robot mine scene reconstruction method.
A SLAM-based mobile robot mine scene reconstruction method includes the following steps:
acquiring synchronously calibrated laser point cloud data and visual point cloud data measured by a mobile robot;
fusing the acquired laser point cloud data and visual point cloud data;
performing motion distortion removal and point cloud filtering on the fused point cloud data;
using a graph-optimization-based multi-constraint factor graph algorithm on the processed point cloud data, adding IMU (Inertial Measurement Unit) pre-integration data, point cloud keyframe data, and GNSS (Global Navigation Satellite System) data to a constraint subgraph, and performing loop closure detection to obtain a reconstructed three-dimensional map.
Further, the feature information of the current laser frame is iteratively optimized against the feature information of the map; the IMU pre-integration data, point cloud keyframe data, and GNSS data are added to the factor graph as factors of the graph optimization algorithm, factor graph optimization is executed, all keyframe poses are updated, and the optimized poses and the reconstructed three-dimensional map are finally obtained.
Still further, keyframes are sought among the historical keyframes, the current frame is matched against several point cloud frames near each keyframe to obtain a transformation pose, closed-loop factor graph data are constructed, and the factors are added to the factor graph for optimization.
Still further, among the historical keyframes, frames whose distance is smaller than a preset value and whose time interval is larger than a preset value are used as keyframes.
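By way of illustration only, the keyframe screening rule above (spatially close, temporally distant) can be sketched as follows; the function name and the distance and time thresholds are hypothetical placeholders, not values from the disclosure:

```python
import numpy as np

def find_loop_candidates(keyframes, cur_pos, cur_time, d_max=5.0, t_min=20.0):
    """keyframes: list of (position, timestamp) pairs for historical keyframes.
    Return the indices of keyframes that are spatially close to the current
    pose (distance below d_max) but temporally distant (gap above t_min),
    i.e. plausible loop-closure candidates."""
    cur_pos = np.asarray(cur_pos, dtype=float)
    return [i for i, (p, t) in enumerate(keyframes)
            if np.linalg.norm(np.asarray(p, dtype=float) - cur_pos) < d_max
            and cur_time - t > t_min]
```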
Further, fusing the acquired laser point cloud data and visual point cloud data includes:
using a timestamp interpolation algorithm to obtain matched laser point cloud data and visual point cloud data; projecting the laser point cloud onto the camera plane according to the pinhole imaging principle; and matching the laser points projected onto the camera plane with the visual point cloud according to their XYZ coordinates to obtain the fused point cloud PointCloud: XYZRIRGB;
where X, Y, and Z are the x-, y-, and z-axis coordinates of the laser point cloud, I is the light intensity of the laser point cloud, R is the ring (scan line) number of each laser beam, and RGB is the color information of the visual point cloud.
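By way of illustration only, the projection-and-coloring step can be sketched as below. The function name and the camera intrinsics are assumptions for the example, and the disclosure's matching of projected laser points against the visual point cloud by XYZ coordinates is simplified here to sampling the image color at the projected pixel:

```python
import numpy as np

def fuse_lidar_with_image(points_xyzir, image, K, T_cam_lidar):
    """Project lidar points into the camera and attach RGB where they land
    inside the image; points outside the view keep a sentinel color.

    points_xyzir : (N, 5) array of X, Y, Z, ring, intensity in the lidar frame
    image        : (H, W, 3) uint8 RGB image
    K            : (3, 3) camera intrinsic matrix
    T_cam_lidar  : (4, 4) extrinsic transform, lidar frame -> camera frame
    """
    xyz = points_xyzir[:, :3]
    # Transform into the camera frame using the calibrated extrinsics.
    xyz_h = np.hstack([xyz, np.ones((len(xyz), 1))])
    cam = (T_cam_lidar @ xyz_h.T).T[:, :3]

    rgb = np.full((len(xyz), 3), -1, dtype=np.int32)  # -1 marks "no color"
    in_front = cam[:, 2] > 0
    uv = (K @ cam[in_front].T).T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)  # pinhole projection to pixels
    h, w = image.shape[:2]
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    idx = np.flatnonzero(in_front)[valid]
    rgb[idx] = image[uv[valid, 1], uv[valid, 0]]
    # Fused cloud columns: X, Y, Z, ring, intensity, R, G, B
    return np.hstack([points_xyzir, rgb.astype(np.float64)])
```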
Further, the point cloud motion distortion removal includes:
assuming that time T is the moment a laser frame starts scanning and time T+ΔT is the moment the frame finishes scanning, pre-integrating the three-axis angular velocity and acceleration data of the IMU to obtain the relative motion of the robot during ΔT, and then transforming all laser points within ΔT into the frame of the first laser point, thereby removing motion distortion from the laser point cloud.
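A minimal sketch of the de-skewing idea, under the simplifying assumption that angular velocity and linear velocity are constant over the sweep (the disclosure obtains the relative motion by IMU pre-integration; `so3_exp` is a standard Rodrigues-formula helper, and the function names are illustrative):

```python
import numpy as np

def so3_exp(w):
    """Rodrigues formula: rotation matrix for rotation vector w (rad)."""
    theta = np.linalg.norm(w)
    if theta < 1e-9:
        return np.eye(3)
    k = w / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K

def deskew(points, times, omega, vel):
    """Transform every laser point into the sensor frame at the start of the
    sweep. times are per-point offsets (s) from the first point; omega (rad/s)
    and vel (m/s) are the sensor's assumed-constant motion over the sweep."""
    out = np.empty_like(points)
    for i, (p, t) in enumerate(zip(points, times)):
        R = so3_exp(omega * t)      # sensor orientation at t w.r.t. scan start
        out[i] = R @ p + vel * t    # express the point in the start-of-scan frame
    return out
```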
Further, the point cloud filtering includes:
projecting each frame of the laser point cloud into a depth image; with a multi-line lidar scanning r lines and each line yielding n laser points per frame, the depth image is an r×n matrix whose entries contain the coordinate and light-intensity information of the point cloud;
using the depth image to obtain the positional relationship between adjacent points, searching the nearest neighbors of each point with a kd-tree search algorithm, and filtering out ground points and noise points according to the geometric relationships among the points.
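The projection of one frame into an r×n depth (range) image can be sketched as follows; the ring count and column resolution are hypothetical example values, not parameters from the disclosure:

```python
import numpy as np

def to_range_image(points_xyzr, n_rings=16, n_cols=1800):
    """points_xyzr: (N, 4) array of X, Y, Z, ring index. Each point is placed
    at (ring, azimuth-bin) in an (n_rings, n_cols) matrix holding its range,
    so vertically and horizontally adjacent returns become matrix neighbors."""
    img = np.zeros((n_rings, n_cols))
    x, y, z, ring = points_xyzr.T
    rng = np.sqrt(x**2 + y**2 + z**2)
    # Map azimuth in (-pi, pi] to a column index in [0, n_cols).
    col = ((np.arctan2(y, x) + np.pi) / (2 * np.pi) * n_cols).astype(int) % n_cols
    img[ring.astype(int), col] = rng
    return img
```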
A second aspect of the present disclosure provides a SLAM-based mobile robot mine scene reconstruction system.
A SLAM-based mobile robot mine scene reconstruction system includes:
a data acquisition module configured to acquire synchronously calibrated laser point cloud data and visual point cloud data measured by a mobile robot;
a point cloud fusion module configured to fuse the acquired laser point cloud data and visual point cloud data;
a point cloud data processing module configured to perform motion distortion removal and point cloud filtering on the fused point cloud data;
a three-dimensional map reconstruction module configured to apply a graph-optimization-based multi-constraint factor graph algorithm to the processed point cloud data, add IMU pre-integration data, point cloud keyframe data, and GNSS data to a constraint subgraph, and perform loop closure detection to obtain a reconstructed three-dimensional map.
A third aspect of the present disclosure provides a computer-readable storage medium on which a program is stored; when the program is executed by a processor, the steps of the SLAM-based mobile robot mine scene reconstruction method according to the first aspect of the present disclosure are implemented.
A fourth aspect of the present disclosure provides an electronic device including a memory, a processor, and a program stored in the memory and executable on the processor; when the processor executes the program, the steps of the SLAM-based mobile robot mine scene reconstruction method according to the first aspect of the present disclosure are implemented.
Compared with the prior art, the beneficial effects of the present disclosure are as follows.
1. The method, system, medium, or electronic device described in the present disclosure can effectively achieve three-dimensional reconstruction of a mine scene and ultimately obtain a colored point cloud map, improving the accuracy of mine scene reconstruction.
2. The method, system, medium, or electronic device described in the present disclosure achieves multi-sensor fusion using lidar, an IMU, BeiDou positioning, and a visual camera, compensating for the inaccurate mapping of a single sensor in complex scenes such as mines and realizing reliable, stable SLAM mapping.
3. The method, system, medium, or electronic device described in the present disclosure fuses lidar and visual camera point clouds to build a three-dimensional reconstruction map with color information, meeting the needs of remote immersive construction.
4. For the mine environment, the method, system, medium, or electronic device described in the present disclosure projects the 3D laser point cloud into a depth map and extracts ground points, and, given the heavy dust in mines, uses a clustering algorithm to achieve accurate filtering of noise points.
5. The method, system, medium, or electronic device described in the present disclosure uses the GTSAM open-source library to add IMU pre-integration factors, laser point clouds, BeiDou positioning information, and visual point clouds as constraint factors, realizing real-time pose updating and accurate mapping.
Advantages of additional aspects of the disclosure will be set forth in part in the description that follows; in part they will become apparent from the description, or may be learned by practice of the disclosure.
The accompanying drawings, which constitute a part of the present disclosure, are provided for further understanding of the disclosure; the exemplary embodiments of the disclosure and their descriptions explain the disclosure and do not unduly limit it.
FIG. 1 is a schematic flowchart of the SLAM-based mobile robot mine scene reconstruction method provided in Embodiment 1 of the present disclosure.
FIG. 2 is a schematic diagram of the laser point cloud and visual point cloud fusion method provided in Embodiment 1 of the present disclosure.
FIG. 3 is a schematic diagram of the geometric relationship used to remove ground points in Embodiment 1 of the present disclosure.
FIG. 4 is a schematic diagram of the clustering-based noise removal method provided in Embodiment 1 of the present disclosure.
The present disclosure is further described below with reference to the accompanying drawings and embodiments.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the present disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
It should be noted that the terminology used herein is for describing specific embodiments only and is not intended to limit the exemplary embodiments of the present disclosure. As used herein, unless the context clearly dictates otherwise, the singular forms are intended to include the plural forms as well; furthermore, it should be understood that the terms "comprising" and/or "including", when used in this specification, indicate the presence of features, steps, operations, devices, components, and/or combinations thereof.
The embodiments of the present disclosure and the features of the embodiments may be combined with each other provided there is no conflict.
Embodiment 1:
As shown in FIG. 1, Embodiment 1 of the present disclosure provides a SLAM-based mobile robot mine scene reconstruction method that includes the following steps.
S1: synchronize and calibrate the laser point cloud and the visual point cloud using timestamps, and define a new data structure to realize point cloud fusion.
S2: fuse the multi-line lidar with the IMU to remove motion distortion from the laser point cloud, and perform point cloud filtering tailored to the mine scene.
S3: use a graph-optimization-based multi-constraint factor graph algorithm to add IMU, lidar, GNSS, and other constraint information to a constraint subgraph, realizing back-end loop closure detection and mapping.
S1 mainly includes the following.
The laser point cloud and the visual point cloud are synchronized and calibrated using timestamps, and a new data structure is defined to realize point cloud fusion, as shown in FIG. 2.
The GNSS timing system performs hard time synchronization of the lidar and the camera, and a fixed calibration board is used for spatial calibration of the two sensors; after the extrinsic matrix is obtained, the camera is projected into the lidar coordinate system. A new point cloud data structure is defined with the PCL point cloud library:
PointCloud: XYZRIRGB (the data structure of the laser point cloud is PointCloud: XYZRI).
Here X, Y, and Z are the x-, y-, and z-axis coordinates of the laser point cloud, I is the light intensity, R is the ring (scan line) number of each laser beam, and RGB is the color information of the visual point cloud.
After the spatiotemporal calibration of the lidar and camera is complete, each frame of laser data and each frame of camera data are saved in containers, and a timestamp interpolation algorithm is used to obtain matched laser and visual point clouds. Following the pinhole imaging principle, the laser point cloud is projected onto the camera plane, and the projected laser points are matched with the visual point cloud according to their XYZ coordinates to obtain the fused point cloud. The fused cloud takes its XYZIR fields from the laser point cloud and its RGB information from the visual point cloud; the RGB information participates only in subsequent mapping, so the fused cloud can still be processed as a laser point cloud. Owing to the camera's working principle, colored points remain a small fraction of the whole laser point cloud, so "laser point cloud" is used as the general term below.
S2 mainly includes the following.
Time T is the moment a laser frame starts scanning, and time T+ΔT is the moment the frame finishes. The three-axis angular velocity and acceleration data of the IMU are pre-integrated to obtain the relative motion of the robot during ΔT (including linear and nonlinear motion), and all laser points within ΔT are then transformed into the frame of the first laser point. This method effectively removes the motion distortion of the laser point cloud.
The PCL library is used to project each frame of the laser point cloud into a depth image. The multi-line lidar scans r lines, and each line yields n laser points per frame, so the depth image is an r×n matrix whose entries contain the coordinate and light-intensity information of the point cloud. The depth image gives the positional relationship between adjacent points, a kd-tree search algorithm is used to search each point's nearest neighbors, and ground points and noise points are filtered according to the geometric relationships among the points.
The geometric relationship for removing ground points is shown in FIG. 3; the origin of the coordinate system is at the geometric center of the lidar.
Here OA and OB are the distances from the lidar to the points where two adjacent laser beams strike the ground at the same instant, and α and β are the angles between the two beams and the horizontal plane. Because the mine floor is rugged, a pair of points is regarded as ground when α < 2.5° and the height difference between the two adjacent points satisfies Δz < 5 cm. α and Δz are computed for every pair, and points classified as ground are filtered out.
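A sketch of the ground test for one pair of vertically adjacent points, using the α < 2.5° and Δz < 5 cm thresholds from the disclosure (the function name and the exact angle construction are assumptions for the example):

```python
import numpy as np

def is_ground(p_lower, p_upper, angle_thresh_deg=2.5, dz_thresh=0.05):
    """Classify a pair of vertically adjacent laser points (lower ring, upper
    ring) as ground when the segment joining them is nearly horizontal:
    the elevation angle of the segment is below angle_thresh_deg and the
    height difference is below dz_thresh (meters)."""
    d = p_upper - p_lower
    dz = abs(d[2])
    alpha = np.degrees(np.arctan2(dz, np.hypot(d[0], d[1])))
    return alpha < angle_thresh_deg and dz < dz_thresh
```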
The geometric relationship used to remove noise points is shown in FIG. 4.
Here OA and OB are the depths of two laser beams. Considering the heavy dust in mine scenes, a threshold A is set for α: when α > A, the point is regarded as an outlier; when α < A, the two points are regarded as belonging to the same object. The computation is carried out separately along the rows and columns of the depth map, and a group of laser points is accepted as one object only when the number of points regarded as the same object exceeds 30.
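A simplified sketch of the cluster-based noise filter: the disclosure's angle criterion between beams OA and OB is approximated here by a relative range-difference test on adjacent depth-image cells (the rel_diff threshold is a hypothetical placeholder; the minimum cluster size of 30 points follows the disclosure):

```python
import numpy as np
from collections import deque

def remove_small_clusters(range_img, rel_diff=0.1, min_pts=30):
    """Label connected components of a range image (4-neighborhood, azimuth
    wraps around): two adjacent cells join the same cluster when their ranges
    are similar. Clusters smaller than min_pts are treated as dust/outliers
    and zeroed out in place."""
    rows, cols = range_img.shape
    labels = np.zeros((rows, cols), dtype=int)
    nxt = 0
    for r0 in range(rows):
        for c0 in range(cols):
            if range_img[r0, c0] == 0 or labels[r0, c0]:
                continue
            nxt += 1
            comp = [(r0, c0)]
            labels[r0, c0] = nxt
            q = deque(comp)
            while q:  # breadth-first flood fill of one cluster
                r, c = q.popleft()
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, (c + dc) % cols  # azimuth wraps around
                    if 0 <= rr < rows and range_img[rr, cc] > 0 and not labels[rr, cc]:
                        a, b = range_img[r, c], range_img[rr, cc]
                        if abs(a - b) / max(a, b) < rel_diff:
                            labels[rr, cc] = nxt
                            comp.append((rr, cc))
                            q.append((rr, cc))
            if len(comp) < min_pts:
                for r, c in comp:
                    range_img[r, c] = 0.0
    return range_img
```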
S3 mainly includes the following.
A graph-optimization-based multi-constraint factor graph algorithm adds IMU, laser point cloud, GNSS, and other constraint information to a constraint subgraph, realizing back-end loop closure detection and mapping.
Scan-to-map matching iteratively optimizes the feature information of the current laser frame against the feature information of the map and updates all keyframe poses.
Using the GTSAM open-source library, the IMU pre-integration, lidar point cloud keyframes, and GNSS information are added to the factor graph as factors of the graph optimization algorithm; factor graph optimization is executed and all keyframe poses are updated.
Among the historical keyframes of roughly the past 20 s, frames that are spatially close but temporally distant are sought as loop-closure keyframes; the current frame is matched against several point cloud frames near each such keyframe to obtain a transformation pose, closed-loop factor graph data are constructed, and the factors are added to the factor graph for optimization.
Using the GTSAM open-source library, the final optimized poses are obtained and the 3D point cloud map is constructed.
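The factor-graph back end can be illustrated with a deliberately tiny stand-in. This is not the GTSAM API: poses are reduced to scalars so that odometry factors and a loop-closure factor can be solved by plain linear least squares, which shows how the added factors jointly constrain all keyframe poses:

```python
import numpy as np

def optimize_pose_graph_1d(n_poses, factors):
    """Toy 1-D pose graph: each pose is a scalar position, each factor
    (i, j, z) states that pose j should sit z ahead of pose i. Pose 0 is
    anchored at the origin (a prior), and the remaining poses are found by
    linear least squares over all factors at once. This is the same
    optimize-over-a-graph-of-constraints idea that a GTSAM-based back end
    applies to full SE(3) keyframe poses."""
    A = np.zeros((len(factors), n_poses - 1))
    b = np.zeros(len(factors))
    for k, (i, j, z) in enumerate(factors):
        if j > 0:
            A[k, j - 1] += 1.0   # +x_j term of the residual x_j - x_i - z
        if i > 0:
            A[k, i - 1] -= 1.0   # -x_i term (x_0 is fixed, so it is dropped)
        b[k] = z
    x_rest, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.concatenate([[0.0], x_rest])
```

With two odometry factors and one inconsistent loop-closure factor, the optimizer spreads the loop-closure correction across all poses instead of trusting odometry alone.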
The fused point cloud combines the measurement accuracy of the laser point cloud with the rich color and texture of the visual point cloud, ultimately yielding a three-dimensional reconstruction map with color information.
Embodiment 2:
Embodiment 2 of the present disclosure provides a SLAM-based mobile robot mine scene reconstruction system including:
a data acquisition module configured to acquire synchronously calibrated laser point cloud data and visual point cloud data measured by a mobile robot;
a point cloud fusion module configured to fuse the acquired laser point cloud data and visual point cloud data;
a point cloud data processing module configured to perform motion distortion removal and point cloud filtering on the fused point cloud data;
a three-dimensional map reconstruction module configured to apply a graph-optimization-based multi-constraint factor graph algorithm to the processed point cloud data, add IMU pre-integration data, point cloud keyframe data, and GNSS data to a constraint subgraph, and perform loop closure detection to obtain a reconstructed three-dimensional map.
The working method of the system is the same as the SLAM-based mobile robot mine scene reconstruction method provided in Embodiment 1 and is not repeated here.
Embodiment 3:
Embodiment 3 of the present disclosure provides a computer-readable storage medium on which a program is stored; when the program is executed by a processor, the steps of the SLAM-based mobile robot mine scene reconstruction method described in Embodiment 1 of the present disclosure are implemented.
Embodiment 4:
Embodiment 4 of the present disclosure provides an electronic device including a memory, a processor, and a program stored in the memory and executable on the processor; when the processor executes the program, the steps of the SLAM-based mobile robot mine scene reconstruction method described in Embodiment 1 of the present disclosure are implemented.
Those skilled in the art should understand that the embodiments of the present disclosure may be provided as a method, a system, or a computer program product. Accordingly, the present disclosure may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to magnetic disk storage and optical storage) containing computer-usable program code.
The present disclosure is described with reference to flowcharts and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the present disclosure. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device create means for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, causing a series of operational steps to be performed on the computer or other programmable device to produce computer-implemented processing, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
Those of ordinary skill in the art will understand that all or part of the flows in the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the flows of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
The above descriptions are only preferred embodiments of the present disclosure and are not intended to limit it; for those skilled in the art, the present disclosure may have various modifications and variations. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present disclosure shall be included within its protection scope.
Claims (10)
- A SLAM-based mobile robot mine scene reconstruction method, characterized by comprising the following process:
acquiring synchronously calibrated laser point cloud data and visual point cloud data measured by a mobile robot;
fusing the acquired laser point cloud data and visual point cloud data;
performing point cloud motion distortion removal processing and point cloud filtering processing on the fused point cloud data; and
combining the processed point cloud data, using a multi-constraint factor graph algorithm based on graph optimization to add IMU pre-integration data, point cloud keyframe data, and GNSS data to the constraint subgraph, and obtaining a reconstructed three-dimensional map after loop closure detection.
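For orientation only (not part of the claims), the flow of claim 1 can be sketched as a minimal Python pipeline. Every function name, the tuple layouts, and the stand-in ground threshold below are illustrative assumptions; the factor graph back end is deliberately stubbed out.

```python
# Hypothetical sketch of the flow of claim 1; every name below is an
# illustrative stand-in, not language from the patent.

def fuse_point_clouds(laser, visual):
    # Attach a visual colour to each laser point by shared index here,
    # standing in for the XYZ matching described in claim 5.
    return [(*p, c) for p, c in zip(laser, visual)]

def remove_motion_distortion(cloud, relative_motion):
    # Stand-in: subtract each point's share of the frame's relative motion
    # along x, in the spirit of claim 6.
    n = len(cloud)
    return [(x - relative_motion * i / n, y, z, c)
            for i, (x, y, z, c) in enumerate(cloud)]

def filter_point_cloud(cloud, ground_z=-1.0):
    # Stand-in for the depth-image / kd-tree filtering of claim 7:
    # drop points at or below an assumed ground height.
    return [p for p in cloud if p[2] > ground_z]

def reconstruct(laser, visual, relative_motion, gnss):
    fused = fuse_point_clouds(laser, visual)
    undistorted = remove_motion_distortion(fused, relative_motion)
    filtered = filter_point_cloud(undistorted)
    # Factor-graph optimisation with IMU / keyframe / GNSS constraints and
    # loop-closure detection (claims 2-3) would run here; this sketch simply
    # returns the map points it would hand to that back end.
    return filtered

laser = [(1.0, 0.0, 0.5), (2.0, 0.0, 0.5), (3.0, 0.0, -2.0)]
visual = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]
cloud = reconstruct(laser, visual, relative_motion=0.3, gnss=None)
print(len(cloud))  # the ground-like point at z = -2.0 is dropped
```

The point of the sketch is the ordering: fusion precedes de-skewing and filtering, which precede the graph back end, exactly as the claim lists the steps.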
- The SLAM-based mobile robot mine scene reconstruction method according to claim 1, characterized in that: iterative optimization is performed using the feature information of the current laser frame and the feature information of the map; the IMU pre-integration data, point cloud keyframe data, and GNSS data are added to the factor graph as factors of the graph optimization algorithm; and factor graph optimization is performed to update all keyframe poses, finally obtaining the optimized poses and the reconstructed three-dimensional map.
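As a self-contained illustration of what "adding factors and optimizing the factor graph" means, under heavy simplifying assumptions (1-D poses, linear factors, unit weights, no real IMU or GNSS models), the optimization reduces to a least-squares solve. Each row of the matrix is one factor; the anchor, odometry, and loop-closure factors below are illustrative, not the patent's actual constraints.

```python
import numpy as np

# Each row of A is one factor (constraint) on the poses x0..x3; b holds
# the factor's measurement. Weights are taken as 1 for simplicity.
A = np.array([
    [ 1.0, 0.0, 0.0, 0.0],   # prior / GNSS-like anchor: x0 = 0
    [-1.0, 1.0, 0.0, 0.0],   # odometry (IMU-preintegration-like): x1 - x0 = 1
    [ 0.0, -1.0, 1.0, 0.0],  # odometry: x2 - x1 = 1
    [ 0.0, 0.0, -1.0, 1.0],  # odometry: x3 - x2 = 1
    [-1.0, 0.0, 0.0, 1.0],   # loop closure: x3 - x0 = 2.9 (reveals drift)
])
b = np.array([0.0, 1.0, 1.0, 1.0, 2.9])

# With linear 1-D factors, factor-graph optimisation is exactly
# the least-squares problem min ||A x - b||^2 over all keyframe poses.
poses, *_ = np.linalg.lstsq(A, b, rcond=None)
print(poses)  # the loop-closure residual is spread over all poses
```

The solve updates every keyframe pose at once, which is the behaviour the claim describes; a real system would use nonlinear SE(3) factors and an incremental solver instead of a dense least-squares call.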
- The SLAM-based mobile robot mine scene reconstruction method according to claim 2, characterized in that: a keyframe is sought among the historical keyframes, the current frame is matched against the point clouds of several frames near that keyframe to obtain a transformation pose, closed-loop factor graph data are constructed, and these are added to the factor graph for optimization.
- The SLAM-based mobile robot mine scene reconstruction method according to claim 3, characterized in that: among the historical keyframes, frames whose distance is less than a preset value and whose time interval is greater than a preset value are taken as keyframes.
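The selection rule of claim 4 ("distance below a preset value and time interval above a preset value") can be shown in a few lines. The 2-D pose record, the threshold values, and the function name are assumptions for illustration only.

```python
import math

def loop_candidates(history, current, max_dist=5.0, min_dt=30.0):
    """Pick historical keyframes that are spatially near the current pose
    but temporally far from it, per the rule in claim 4.

    history / current: (t, x, y) tuples -- an illustrative minimal pose record.
    """
    t, x, y = current
    return [kf for kf in history
            if math.hypot(kf[1] - x, kf[2] - y) < max_dist   # close in space
            and t - kf[0] > min_dt]                          # distant in time

history = [(0.0, 0.0, 0.0), (10.0, 50.0, 0.0), (40.0, 1.0, 1.0)]
cands = loop_candidates(history, (100.0, 0.0, 0.0))
print(cands)  # the far-away keyframe at x=50 is rejected
```

The time-interval condition prevents the immediately preceding frames, which are always nearby, from being mistaken for loop closures.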
- The SLAM-based mobile robot mine scene reconstruction method according to claim 1, characterized in that fusing the acquired laser point cloud data and visual point cloud data comprises:
using a timestamp interpolation algorithm to obtain matched laser point cloud data and visual point cloud data; using the pinhole imaging principle to project the laser point cloud onto the camera plane; and matching the laser points projected onto the camera plane with the visual point cloud according to their XYZ coordinates, obtaining the fused point cloud PointCloud: XYZRIRGB;
where X is the x-axis coordinate of the laser point cloud, Y is the y-axis coordinate of the laser point cloud, Z is the z-axis coordinate of the laser point cloud, I is the light intensity of the laser point cloud, R is the beam number of each laser beam, and RGB is the color information of the visual point cloud.
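A minimal sketch of the pinhole projection step in claim 5, assuming made-up camera intrinsics and colouring each projected laser point directly from the camera image (a simplification of the XYZ matching against a visual point cloud). All parameter values and names are illustrative.

```python
import numpy as np

# Illustrative intrinsics; fx, fy, cx, cy are assumed values, not from the patent.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

def project_to_pixel(pt_cam):
    """Pinhole model: (X, Y, Z) in the camera frame -> (u, v) pixel."""
    x, y, z = pt_cam
    u = K[0, 0] * x / z + K[0, 2]
    v = K[1, 1] * y / z + K[1, 2]
    return int(round(u)), int(round(v))

def fuse(laser_points, image):
    """Colour each laser point (X, Y, Z, I, ring) from the camera image,
    yielding the fused XYZ / intensity / ring / RGB record of claim 5."""
    h, w, _ = image.shape
    fused = []
    for x, y, z, intensity, ring in laser_points:
        if z <= 0:            # behind the camera: cannot be projected
            continue
        u, v = project_to_pixel((x, y, z))
        if 0 <= u < w and 0 <= v < h:
            fused.append((x, y, z, intensity, ring, tuple(image[v, u])))
    return fused

image = np.zeros((480, 640, 3), dtype=np.uint8)
image[240, 320] = (10, 20, 30)                 # colour at the principal point
pts = [(0.0, 0.0, 4.0, 0.7, 3), (0.0, 0.0, -1.0, 0.2, 1)]
fused = fuse(pts, image)
print(fused)
```

Points behind the camera or outside the image bounds keep no colour; a full system would instead fall back to the uncoloured laser point rather than discard it.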
- The SLAM-based mobile robot mine scene reconstruction method according to claim 1, characterized in that the point cloud motion distortion removal processing comprises:
assuming that time T is the moment a laser frame begins scanning and time T+ΔT is the moment that frame's scan ends, pre-integrating the three-axis angular velocity data and acceleration data of the IMU to obtain the relative motion of the robot within the time ΔT, and then transforming all laser points within ΔT to the frame of the first laser point, thereby removing the laser point cloud motion distortion.
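The de-skewing of claim 6 can be sketched for the translation-only case: interpolate the IMU-preintegrated frame motion by each point's timestamp and transform the point back to the pose at the first laser point. Rotation is omitted to keep the sketch short, and all names are illustrative.

```python
import numpy as np

def deskew(points, times, t_start, t_end, frame_motion):
    """Transform every point back to the sensor pose at t_start.

    frame_motion: the IMU-preintegrated displacement of the sensor over
    [t_start, t_end] (claim 6's relative motion within delta-T).
    Translation-only sketch; a full version would also interpolate rotation.
    """
    span = t_end - t_start
    out = []
    for p, t in zip(points, times):
        alpha = (t - t_start) / span              # fraction of the sweep elapsed
        out.append(np.asarray(p) - alpha * np.asarray(frame_motion))
    return np.array(out)

# The same wall point measured at the start and at the end of one sweep,
# while the sensor moved 1 m forward: after de-skewing, the late measurement
# is pulled back into the start-of-sweep frame.
pts = [(10.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
times = [0.0, 0.1]
deskewed = deskew(pts, times, 0.0, 0.1, frame_motion=(1.0, 0.0, 0.0))
print(deskewed)
```

The linear interpolation by alpha is what makes every point consistent with a single pose, which is exactly the distortion removal the claim describes.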
- The SLAM-based mobile robot mine scene reconstruction method according to claim 1, characterized in that the point cloud filtering processing comprises:
projecting each frame of the laser point cloud into a depth image, where the multi-line lidar scans with r lines and each line yields n laser points per frame, so that the depth image is an r×n matrix in which each element contains the coordinate information and light intensity information of a point; and
using the depth image to obtain the positional relationship between adjacent points, searching the immediate neighbors of each point with a kd-tree search algorithm, and filtering out ground points and noise points according to the geometric relationship between the points.
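As a sketch of filtering via the depth image's neighbour geometry: the claim pairs the r×n depth image with a kd-tree neighbour search, but the same geometric test can be shown more compactly on vertically adjacent rows of the depth image (the angle between neighbouring points is near-horizontal for ground). The 10-degree threshold and the array layout are assumptions.

```python
import numpy as np

def ground_mask(depth_xyz):
    """Mark ground points in an r x n x 3 'depth image' of XYZ coordinates.

    Sketch only: instead of the claim's kd-tree neighbour search, it uses
    the depth image's row adjacency directly -- if the segment between two
    vertically adjacent points is near-horizontal, both are flagged as ground.
    """
    r, n, _ = depth_xyz.shape
    mask = np.zeros((r, n), dtype=bool)
    for i in range(r - 1):
        d = depth_xyz[i + 1] - depth_xyz[i]           # (n, 3) neighbour offsets
        horiz = np.linalg.norm(d[:, :2], axis=1)
        angle = np.degrees(np.arctan2(np.abs(d[:, 2]), horiz + 1e-9))
        flat = angle < 10.0                           # near-horizontal => ground
        mask[i] |= flat
        mask[i + 1] |= flat
    return mask

# Two scan rows, three columns: flat ground on the left, a wall on the right.
cloud = np.array([
    [[1.0, 0.0, -1.5], [2.0, 0.0, -1.5], [3.0, 0.0, -1.5]],
    [[1.5, 0.0, -1.5], [2.5, 0.0, -1.5], [3.0, 0.0,  0.5]],
])
mask = ground_mask(cloud)
non_ground = cloud[~mask]      # ground removed before mapping
print(mask)
```

The depth image makes the neighbour lookup O(1) by construction, which is why both this sketch and the claimed kd-tree variant start from the r×n projection rather than the unordered cloud.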
- A SLAM-based mobile robot mine scene reconstruction system, characterized by comprising:
a data acquisition module configured to acquire synchronously calibrated laser point cloud data and visual point cloud data measured by a mobile robot;
a point cloud fusion module configured to fuse the acquired laser point cloud data and visual point cloud data;
a point cloud data processing module configured to perform point cloud motion distortion removal processing and point cloud filtering processing on the fused point cloud data; and
a three-dimensional map reconstruction module configured to, combining the processed point cloud data, use a multi-constraint factor graph algorithm based on graph optimization to add IMU pre-integration data, point cloud keyframe data, and GNSS data to the constraint subgraph, and obtain a reconstructed three-dimensional map after loop closure detection.
- A computer-readable storage medium on which a program is stored, characterized in that, when executed by a processor, the program implements the steps of the SLAM-based mobile robot mine scene reconstruction method according to any one of claims 1-7.
- An electronic device comprising a memory, a processor, and a program stored in the memory and executable on the processor, characterized in that the processor, when executing the program, implements the steps of the SLAM-based mobile robot mine scene reconstruction method according to any one of claims 1-7.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110644536.7A CN113379910B (en) | 2021-06-09 | 2021-06-09 | Mobile robot mine scene reconstruction method and system based on SLAM |
CN202110644536.7 | 2021-06-09 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022257801A1 true WO2022257801A1 (en) | 2022-12-15 |
Family
ID=77573315
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/095982 WO2022257801A1 (en) | 2021-06-09 | 2022-05-30 | Slam-based mobile robot mine scene reconstruction method and system |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113379910B (en) |
WO (1) | WO2022257801A1 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113379910B (en) * | 2021-06-09 | 2023-06-02 | 山东大学 | Mobile robot mine scene reconstruction method and system based on SLAM |
CN113804183B (en) * | 2021-09-17 | 2023-12-22 | 广东汇天航空航天科技有限公司 | Real-time topographic mapping method and system |
CN114383611A (en) * | 2021-12-30 | 2022-04-22 | 华南智能机器人创新研究院 | Multi-machine cooperative laser SLAM method, device and system for mobile robot |
CN114429432B (en) * | 2022-04-07 | 2022-06-21 | 科大天工智能装备技术(天津)有限公司 | Multi-source information layered fusion method and device and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9215382B1 (en) * | 2013-07-25 | 2015-12-15 | The United States Of America As Represented By The Secretary Of The Navy | Apparatus and method for data fusion and visualization of video and LADAR data |
CN110163968A (en) * | 2019-05-28 | 2019-08-23 | 山东大学 | RGBD camera large-scale three dimensional scenario building method and system |
CN112634451A (en) * | 2021-01-11 | 2021-04-09 | 福州大学 | Outdoor large-scene three-dimensional mapping method integrating multiple sensors |
CN113379910A (en) * | 2021-06-09 | 2021-09-10 | 山东大学 | Mobile robot mine scene reconstruction method and system based on SLAM |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105702151B (en) * | 2016-03-31 | 2019-06-11 | 百度在线网络技术(北京)有限公司 | A kind of indoor map construction method and device |
CN108038453A (en) * | 2017-12-15 | 2018-05-15 | 罗派智能控制技术(上海)有限公司 | A kind of driver's state-detection and identifying system based on RGBD |
CN109816774B (en) * | 2018-12-31 | 2023-11-17 | 江苏天策机器人科技有限公司 | Three-dimensional reconstruction system and three-dimensional reconstruction method based on unmanned aerial vehicle |
CN110764108B (en) * | 2019-11-05 | 2023-05-02 | 畅加风行(苏州)智能科技有限公司 | Obstacle detection method and device for port automatic driving scene |
CN111045017B (en) * | 2019-12-20 | 2023-03-31 | 成都理工大学 | Method for constructing transformer substation map of inspection robot by fusing laser and vision |
CN111598916A (en) * | 2020-05-19 | 2020-08-28 | 金华航大北斗应用技术有限公司 | Preparation method of indoor occupancy grid map based on RGB-D information |
CN111784835B (en) * | 2020-06-28 | 2024-04-12 | 北京百度网讯科技有限公司 | Drawing method, drawing device, electronic equipment and readable storage medium |
CN112083726B (en) * | 2020-09-04 | 2021-11-23 | 湖南大学 | Park-oriented automatic driving double-filter fusion positioning system |
CN112304307A (en) * | 2020-09-15 | 2021-02-02 | 浙江大华技术股份有限公司 | Positioning method and device based on multi-sensor fusion and storage medium |
CN112907491B (en) * | 2021-03-18 | 2023-08-22 | 中煤科工集团上海有限公司 | Laser point cloud loop detection method and system suitable for underground roadway |
- 2021-06-09: CN CN202110644536.7A patent/CN113379910B/en active Active
- 2022-05-30: WO PCT/CN2022/095982 patent/WO2022257801A1/en unknown
Cited By (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111311742A (en) * | 2020-03-27 | 2020-06-19 | 北京百度网讯科技有限公司 | Three-dimensional reconstruction method, three-dimensional reconstruction device and electronic equipment |
CN115690332A (en) * | 2022-12-30 | 2023-02-03 | 华东交通大学 | Point cloud data processing method and device, readable storage medium and electronic equipment |
CN116012613B (en) * | 2023-01-04 | 2024-01-16 | 北京数字绿土科技股份有限公司 | Method and system for measuring and calculating earthwork variation of strip mine based on laser point cloud |
CN116012613A (en) * | 2023-01-04 | 2023-04-25 | 北京数字绿土科技股份有限公司 | Method and system for measuring and calculating earthwork variation of strip mine based on laser point cloud |
CN115861957B (en) * | 2023-01-19 | 2023-06-16 | 中国科学技术大学 | Novel dynamic object segmentation method based on sensor fusion |
CN115861957A (en) * | 2023-01-19 | 2023-03-28 | 中国科学技术大学 | Novel dynamic object segmentation method based on sensor fusion |
CN115965756A (en) * | 2023-03-13 | 2023-04-14 | 安徽蔚来智驾科技有限公司 | Map construction method, map construction apparatus, driving apparatus, and medium |
CN115979248A (en) * | 2023-03-17 | 2023-04-18 | 上海仙工智能科技有限公司 | Map updating method and system based on positioning pose as constraint and storage medium |
CN115979248B (en) * | 2023-03-17 | 2023-08-18 | 上海仙工智能科技有限公司 | Map updating method and system based on positioning pose as constraint and storage medium |
CN116295021A (en) * | 2023-05-25 | 2023-06-23 | 齐鲁工业大学(山东省科学院) | Method for calculating position relation between camera and laser in monocular line structured light system |
CN116358573A (en) * | 2023-05-31 | 2023-06-30 | 小米汽车科技有限公司 | Map building method, map building device, storage medium and vehicle |
CN116358573B (en) * | 2023-05-31 | 2023-08-29 | 小米汽车科技有限公司 | Map building method, map building device, storage medium and vehicle |
CN116500638A (en) * | 2023-06-25 | 2023-07-28 | 江苏大学 | Automatic navigation method and system for harvester tillage channel based on SLAM technology |
CN116500638B (en) * | 2023-06-25 | 2023-10-10 | 江苏大学 | Automatic navigation method and system for harvesting machine tillage based on SLAM technology |
CN116518984A (en) * | 2023-07-05 | 2023-08-01 | 中国矿业大学 | Vehicle road co-location system and method for underground coal mine auxiliary transportation robot |
CN116518984B (en) * | 2023-07-05 | 2023-09-08 | 中国矿业大学 | Vehicle road co-location system and method for underground coal mine auxiliary transportation robot |
CN116539026B (en) * | 2023-07-06 | 2023-09-29 | 杭州华橙软件技术有限公司 | Map construction method, device, equipment and storage medium |
CN116539026A (en) * | 2023-07-06 | 2023-08-04 | 杭州华橙软件技术有限公司 | Map construction method, device, equipment and storage medium |
CN116630394A (en) * | 2023-07-25 | 2023-08-22 | 山东中科先进技术有限公司 | Multi-mode target object attitude estimation method and system based on three-dimensional modeling constraint |
CN116630394B (en) * | 2023-07-25 | 2023-10-20 | 山东中科先进技术有限公司 | Multi-mode target object attitude estimation method and system based on three-dimensional modeling constraint |
CN116817928A (en) * | 2023-08-28 | 2023-09-29 | 北京交通大学 | Method for multi-source fusion positioning of guard/inertial navigation train based on factor graph optimization |
CN116817928B (en) * | 2023-08-28 | 2023-12-01 | 北京交通大学 | Method for multi-source fusion positioning of guard/inertial navigation train based on factor graph optimization |
CN116878504B (en) * | 2023-09-07 | 2023-12-08 | 兰笺(苏州)科技有限公司 | Accurate positioning method for building outer wall operation unmanned aerial vehicle based on multi-sensor fusion |
CN116878504A (en) * | 2023-09-07 | 2023-10-13 | 兰笺(苏州)科技有限公司 | Accurate positioning method for building outer wall operation unmanned aerial vehicle based on multi-sensor fusion |
CN117237553A (en) * | 2023-09-14 | 2023-12-15 | 广东省核工业地质局测绘院 | Three-dimensional map mapping system based on point cloud image fusion |
CN116958842A (en) * | 2023-09-19 | 2023-10-27 | 北京格镭信息科技有限公司 | Underground pipeline inspection method and device based on laser-vision fusion |
CN116958842B (en) * | 2023-09-19 | 2024-01-05 | 北京格镭信息科技有限公司 | Underground pipeline inspection method and device based on laser-vision fusion |
CN117635719A (en) * | 2024-01-26 | 2024-03-01 | 浙江托普云农科技股份有限公司 | Weeding robot positioning method, system and device based on multi-sensor fusion |
CN117635719B (en) * | 2024-01-26 | 2024-04-16 | 浙江托普云农科技股份有限公司 | Weeding robot positioning method, system and device based on multi-sensor fusion |
CN117706563A (en) * | 2024-02-05 | 2024-03-15 | 中南大学 | Method, system, equipment and storage medium for positioning drilling holes in vertical section of mine |
CN117706563B (en) * | 2024-02-05 | 2024-05-07 | 中南大学 | Method, system, equipment and storage medium for positioning drilling holes in vertical section of mine |
CN117906595A (en) * | 2024-03-20 | 2024-04-19 | 常熟理工学院 | Scene understanding navigation method and system based on feature point method vision SLAM |
Also Published As
Publication number | Publication date |
---|---|
CN113379910B (en) | 2023-06-02 |
CN113379910A (en) | 2021-09-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022257801A1 (en) | Slam-based mobile robot mine scene reconstruction method and system | |
CN110070615B (en) | Multi-camera cooperation-based panoramic vision SLAM method | |
CN110163968B (en) | RGBD camera large three-dimensional scene construction method and system | |
WO2019127445A1 (en) | Three-dimensional mapping method, apparatus and system, cloud platform, electronic device, and computer program product | |
JP4185052B2 (en) | Enhanced virtual environment | |
CN109506642B (en) | Robot multi-camera visual inertia real-time positioning method and device | |
CN111156998A (en) | Mobile robot positioning method based on RGB-D camera and IMU information fusion | |
Gee et al. | Lidar guided stereo simultaneous localization and mapping (SLAM) for UAV outdoor 3-D scene reconstruction | |
CN113362247B (en) | Semantic real scene three-dimensional reconstruction method and system for laser fusion multi-view camera | |
KR20140049361A (en) | Multiple sensor system, and apparatus and method for three dimensional world modeling using the same | |
Agrawal et al. | PCE-SLAM: A real-time simultaneous localization and mapping using LiDAR data | |
Bai et al. | Perception-aided visual-inertial integrated positioning in dynamic urban areas | |
Wang et al. | An attitude estimation method based on monocular vision and inertial sensor fusion for indoor navigation | |
CN115797490A (en) | Drawing construction method and system based on laser vision fusion | |
Zhang et al. | A Three-dimensional (3-D) Reconstruction Approach Using Single Layer Lidar and an Inertial Measurement Unit. | |
Hu et al. | Efficient Visual-Inertial navigation with point-plane map | |
Xu et al. | A Multi-source Information Fusion Method for Mobile Robot Visual-inertial Navigation | |
Niu et al. | Research on the development of 3d laser slam technology | |
Peng et al. | A novel geo-localisation method using GPS, 3D-GIS and laser scanner for intelligent vehicle navigation in urban areas | |
Liu et al. | Research on Simultaneous localization and mapping Algorithm based on lidar and IMU | |
Hu et al. | Image projection onto flat LiDAR point cloud surfaces to create dense and smooth 3D color maps | |
Wang et al. | Indoor localization technology of slam based on binocular vision and imu | |
Yssa | Geometry model for marker-based localisation | |
CN117197246B (en) | Human-shaped robot position confirmation method based on three-dimensional point cloud and binocular vision | |
Huang | Overview of Visual SLAM Technology: From Traditional to Deep Learning Methods |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22819399; Country of ref document: EP; Kind code of ref document: A1 |
ENP | Entry into the national phase | Ref document number: 2023576197; Country of ref document: JP; Kind code of ref document: A |
NENP | Non-entry into the national phase | Ref country code: DE |