CN110163968A - Method and system for constructing large-scale three-dimensional scenes with an RGBD camera

Method and system for constructing large-scale three-dimensional scenes with an RGBD camera

Info

Publication number
CN110163968A
Authority
CN
China
Prior art keywords
point cloud
subgraph
algorithm
rgbd camera
dimensional
Prior art date
Legal status
Granted
Application number
CN201910452208.XA
Other languages
Chinese (zh)
Other versions
CN110163968B
Inventor
周风余
顾潘龙
万方
边钧健
于邦国
庄文密
杨志勇
Current Assignee
Shandong Xinchen Artificial Intelligence Technology Co ltd
Original Assignee
Shandong University
Priority date
Filing date
Publication date
Application filed by Shandong University
Priority to CN201910452208.XA
Publication of CN110163968A
Application granted
Publication of CN110163968B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models


Abstract

The invention discloses a method and system for constructing large-scale three-dimensional scenes with an RGBD camera. The method comprises the following steps: acquiring the depth image and color image of the RGBD camera, generating a three-dimensional point cloud, and registering the point cloud to the lidar reference frame; acquiring the robot pose data and the lidar scan points, and processing them with the local optimization algorithm of the Cartographer algorithm to generate submaps and robot odometry information; and, according to the robot odometry information, binding each generated submap to its corresponding point cloud and splicing the submaps to generate a three-dimensional map.

Description

Method and system for constructing large-scale three-dimensional scenes with an RGBD camera

Technical Field

The present disclosure relates to the technical field of three-dimensional modeling, and in particular to a method and system for constructing large-scale three-dimensional scenes with an RGBD camera based on the Cartographer algorithm.

Background

In intelligent applications of mobile robots, the degree of intelligence a robot can achieve depends largely on its positioning accuracy within the scene, so the robot must be provided with a high-precision environment map. Environment maps generally take two forms, two-dimensional and three-dimensional. A two-dimensional map built with a 2D lidar contains only planar data, so considerable information is lost in use, which hampers robot navigation and positioning. Three-dimensional maps therefore have extremely important application and research value in many respects.

Sensors capable of producing three-dimensional data include 3D lidar, depth (RGBD) cameras, and binocular cameras. RGBD cameras are limited by their measurement principle, accuracy, and range, and perform poorly when mapping large scenes: under strong interference or fast robot motion they easily lose track of their own position. Although position loss can be mitigated by changing the constraint relations between images, this inevitably increases the consumption of computing resources. Compared with RGBD cameras, 3D lidar and binocular cameras resist interference better, but 3D lidar is very expensive and produces sparse point clouds without color information, while binocular cameras require extra computing resources to derive depth by comparing images; neither offers sufficient speed and accuracy for indoor mapping. This disclosure therefore proposes a method that fuses a lidar and an RGBD camera, using the robot pose output by laser SLAM to superimpose the RGBD camera point clouds and build dense maps of large scenes.

Several currently popular laser SLAM algorithms are: (1) Gmapping, a mapping algorithm based on an improved particle filter, whose large particle count consumes considerable resources in large scenes; (2) Karto, which replaces the particle filter with graph optimization and adjusts the graph with SPA, but inserts a large number of landmarks when mapping large scenes and consumes a great deal of memory; (3) Hector, which uses the Gauss-Newton method to scan-match the scan point sets; it consumes little memory but places high demands on the sensor, requiring a high-precision lidar to build a good map; (4) Cartographer, which adopts pose optimization, matching lidar data frames against submaps and performing global optimization by finding loop-closure constraints. Compared with the first three methods, Cartographer can eliminate most of the accumulated error and does not require a particularly high-precision lidar.

Most currently popular RGBD-camera 3D reconstruction methods use a VSLAM (visual simultaneous localization and mapping) algorithm: the front end computes the constraint relations between feature points of RGBD images and superimposes point clouds accordingly, while the back end performs loop-closure optimization on the front-end data to build the 3D map. During development the inventors found that this approach is limited by the RGBD camera's measurement principle, accuracy, and range; in large scenes, complex environments, or under heavy sunlight interference, mapping accuracy is poor and the robot position is easily lost. It also consumes substantial computing resources, requiring a high-performance computer to maintain computation speed.

Summary of the Invention

To overcome the above deficiencies of the prior art, the present disclosure takes an indoor robot equipped with a lidar and an RGBD camera as the research object and proposes a method and system for constructing large-scale three-dimensional scenes with an RGBD camera based on the Cartographer algorithm. Exploiting the small data volume and strong interference resistance of lidar, the robot pose is computed with the Cartographer laser SLAM algorithm, and real-time point clouds are inserted into the submaps generated by Cartographer to achieve real-time mapping; meanwhile, Cartographer's loop-closure optimization is used to adjust previously inserted poses. Experiments in real-world scenes show higher mapping accuracy and faster mapping than using a lidar or an RGBD camera alone.

The technical solution of the method for constructing large-scale three-dimensional scenes with an RGBD camera based on the Cartographer algorithm provided by the present disclosure is:

A method for constructing large-scale three-dimensional scenes with an RGBD camera based on the Cartographer algorithm, comprising the following steps:

acquiring the depth image and color image of the RGBD camera, generating a three-dimensional point cloud, and registering the point cloud to the lidar reference frame;

acquiring the robot pose data and the lidar scan points, and processing them with the local optimization algorithm of the Cartographer algorithm to generate submaps and robot odometry information;

binding each generated submap to its corresponding point cloud according to the robot odometry information, and splicing the submaps to generate a three-dimensional map.

Further, the point cloud generation method is:

registering the RGBD camera, aligning the depth camera reference frame to the color camera reference frame;

computing the three-dimensional position in space of any point in the color image from the corresponding pixel value in the registered depth image;

traversing the entire depth image in turn to obtain the point cloud corresponding to the camera.

Further, the step of registering the point cloud to the lidar reference frame comprises:

obtaining the distance between the optical center of the color camera on the robot chassis and the center of the lidar, together with their deviation angle in yaw;

transforming the coordinates of every point in the camera point cloud with a transformation matrix, thereby registering the lidar scan point set with the point cloud.

Further, the step of binding a submap to its corresponding point cloud comprises:

recording the pose information and timestamp of each generated submap, comparing the timestamp produced when the submap is generated with the timestamps of the depth images captured by the RGBD camera, and selecting the RGBD depth image closest in time to the submap;

time-synchronizing the RGBD depth image closest in time to the submap with the robot pose information;

computing the horizontal displacement difference and rotation angle difference between the robot and the submap at the moment the submap is generated, forming a rotation matrix, and using the rotation matrix to transform the point cloud into the reference frame of its corresponding submap;

binding the point cloud transformed into the submap reference frame to the submap information, then displacing the point cloud so that the transformed point cloud is translated to the location of the submap space.

Further, the method comprises the step of, when a submap pose changes, re-translating the point cloud bound to that submap according to the submap change.

Another aspect of the present disclosure provides the technical solution of a system for constructing large-scale three-dimensional scenes with an RGBD camera based on the Cartographer algorithm:

A system for constructing large-scale three-dimensional scenes with an RGBD camera based on the Cartographer algorithm, the system comprising:

a point cloud registration module, configured to acquire the depth image and color image of the RGBD camera, generate a three-dimensional point cloud, and register the point cloud to the lidar reference frame;

a submap generation module, configured to acquire the robot pose data and the lidar scan points and process them with the local optimization algorithm of the Cartographer algorithm to generate submaps and robot odometry information;

a point cloud binding module, configured to bind each generated submap to its corresponding point cloud according to the robot odometry information;

a three-dimensional construction module, configured to splice the submaps to generate a three-dimensional map;

a loop-closure point cloud module, configured to, when a submap pose changes, re-translate the point cloud bound to that submap according to the submap change.

Another aspect of the present disclosure provides the technical solution of a computer-readable storage medium:

A computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the steps of the above method for constructing large-scale three-dimensional scenes with an RGBD camera based on the Cartographer algorithm.

Another aspect of the present disclosure provides the technical solution of a computer device:

A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor, when executing the program, implementing the steps of the above method for constructing large-scale three-dimensional scenes with an RGBD camera based on the Cartographer algorithm.

Through the above technical solutions, the beneficial effects of the present disclosure are:

(1) The present disclosure exploits the large field of view of lidar and the high robustness of the Cartographer algorithm, greatly improving the interference resistance and running speed of RGBD-camera 3D reconstruction in large scenes;

(2) The present disclosure uses the loop-closure optimization of the Cartographer algorithm to re-translate the point clouds corresponding to submaps whose poses have changed, eliminating the accumulated error produced during mapping and improving the accuracy of the 3D map reconstructed by the RGBD camera.

Brief Description of the Drawings

The accompanying drawings, which form a part of the present disclosure, are provided for further understanding of the disclosure; the exemplary embodiments of the disclosure and their descriptions explain the present application and do not unduly limit the disclosure.

Fig. 1 is the first flowchart of the method of Embodiment One for constructing large-scale three-dimensional scenes with an RGBD camera based on the Cartographer algorithm;

Fig. 2 is the second flowchart of the method of Embodiment One;

Fig. 3(a) shows the registration of the KinectV2 depth image and RGB image before correction in Embodiment One;

Fig. 3(b) shows the registration of the KinectV2 depth image and RGB image after correction in Embodiment One;

Fig. 4 shows the registration of the Kinect and lidar point clouds in Embodiment One;

Fig. 5 is a schematic diagram of binding a submap to point cloud data in Embodiment One;

Fig. 6(a) shows the Cartographer loop-closure effect before loop closure in Embodiment One;

Fig. 6(b) shows the Cartographer loop-closure effect after loop closure in Embodiment One;

Figs. 7(a) and 7(b) show the results of the method of Embodiment One;

Figs. 7(c) and 7(d) show the results of the RTAB algorithm;

Figs. 7(e) and 7(f) are the color image and depth image under sunlight interference;

Figs. 8(a) and 8(b) show the results of the method of Embodiment One;

Figs. 8(c) and 8(d) illustrate the point cloud superposition accuracy of the method of Embodiment One;

Figs. 8(e) and 8(f) show the results of the RTAB algorithm;

Fig. 9 illustrates the mapping accuracy of Cartographer;

Figs. 10(a) and 10(b) compare the experimental accuracy on the second floor;

Figs. 10(c) and 10(d) compare the experimental accuracy on the first floor.

Detailed Description

The present disclosure will be further described below with reference to the accompanying drawings and embodiments.

It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the present disclosure. Unless otherwise specified, all technical and scientific terms used herein have the same meaning as commonly understood by a person of ordinary skill in the art to which this disclosure belongs.

It should also be noted that the terminology used here only describes specific implementations and is not intended to limit the exemplary implementations of the present application. As used herein, unless the context clearly indicates otherwise, the singular forms are intended to include the plural forms; furthermore, when the terms "comprising" and/or "including" are used in this specification, they indicate the presence of features, steps, operations, devices, components, and/or combinations thereof.

Embodiment One

This embodiment takes an indoor robot equipped with a lidar and an RGBD camera as the research object and proposes a method for constructing large-scale three-dimensional scenes with an RGBD camera based on the Cartographer algorithm. Exploiting the small data volume and strong interference resistance of lidar, the robot pose is computed with the Cartographer laser SLAM algorithm, and real-time point clouds are inserted into the submaps generated by Cartographer to achieve real-time mapping. Meanwhile, Cartographer's loop-closure optimization adjusts previously inserted poses. Experiments in real-world scenes show higher mapping accuracy and faster mapping than using a lidar or an RGBD camera alone.

Referring to Fig. 1, the method for constructing large-scale three-dimensional scenes with an RGBD camera based on the Cartographer algorithm comprises the following steps:

S101: acquire the depth image and color image of the RGBD camera, generate a three-dimensional point cloud, and register the point cloud to the lidar reference frame.

Specifically, step S101 is implemented as follows:

S1011: acquire the depth image and color image of the RGBD camera.

Specifically, the depth image data and color image data captured by the RGBD camera KinectV2 are acquired.

In order to fuse the data captured by the KinectV2 with the data captured by the lidar, the data of the two sensors must be registered to the same coordinate system.

S1012: point cloud generation.

To obtain an accurate point cloud, the RGBD camera is first registered, aligning the depth camera reference frame to the RGB camera reference frame, as shown in Fig. 3(a) and Fig. 3(b).

After registration, the depth value of any point (x_img, y_img) in the RGB image can be computed from the corresponding pixel value in the registered depth image, and its position in space p = [p_x, p_y, p_z]^T can be computed with formulas (1), (2), and (3); traversing the entire depth image yields the point cloud corresponding to the camera.

Here, Z_0 is the distance from the camera to the reference plane used during calibration, f is the focal length of the depth camera, b is the distance from the depth camera to the RGB camera, and (c_x, c_y) are the optical center coordinates of the depth camera.
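The bodies of formulas (1)-(3) do not survive in this text. Given the parameters defined above, the back-projection they describe presumably follows the standard pinhole model; the sketch below is a hedged reconstruction for the reader's reference, not the patent's verbatim equations (the Z_0 and b terms, which enter the Kinect's disparity-to-depth calibration, are omitted here):

$$p_z = Z(x_{img}, y_{img}), \qquad p_x = \frac{(x_{img} - c_x)\,p_z}{f}, \qquad p_y = \frac{(y_{img} - c_y)\,p_z}{f}$$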

S1013: registration of the point cloud to the lidar coordinate system.

After the camera point cloud is obtained, the point cloud must be registered with the lidar scan points so that the point cloud can be inserted accurately into the grid map generated by Cartographer.

The distance between the optical center of the RGB camera on the robot chassis and the center of the lidar, together with their deviation angle about the yaw axis, is measured as ξ = (ξ_x, ξ_y, ξ_z, ξ_θ). Since the camera and lidar are mounted approximately level with each other, the offsets about the pitch and roll axes can be ignored. With these measurements, the coordinates of every point in the camera point cloud are transformed with the transformation matrix T_ξ:

where p is the position of the point in space.

This completes the registration of the lidar scan point set with the point cloud; the result is shown in Fig. 4.
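The body of T_ξ is likewise not reproduced in this text; with pitch and roll ignored it is presumably a yaw rotation about Z combined with the translation (ξ_x, ξ_y, ξ_z). The Eigen/PCL sketch below implements that reading; it is a reconstruction under that assumption, and the function name is illustrative:

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/common/transforms.h>
#include <Eigen/Geometry>

// Apply the measured camera-to-lidar extrinsics xi to a camera point cloud,
// expressing every point in the lidar frame (step S1013).
pcl::PointCloud<pcl::PointXYZRGB>::Ptr
registerToLidarFrame(const pcl::PointCloud<pcl::PointXYZRGB>::ConstPtr& camera_cloud,
                     float xi_x, float xi_y, float xi_z, float xi_theta)
{
  // T_xi: p' = t + R(xi_theta) * p, with the rotation about Z only,
  // since the pitch and roll offsets are ignored as stated above.
  Eigen::Affine3f T_xi = Eigen::Affine3f::Identity();
  T_xi.translate(Eigen::Vector3f(xi_x, xi_y, xi_z));
  T_xi.rotate(Eigen::AngleAxisf(xi_theta, Eigen::Vector3f::UnitZ()));

  pcl::PointCloud<pcl::PointXYZRGB>::Ptr lidar_cloud(
      new pcl::PointCloud<pcl::PointXYZRGB>);
  pcl::transformPointCloud(*camera_cloud, *lidar_cloud, T_xi);
  return lidar_cloud;
}
```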

S102: acquire the robot pose and lidar data, and process them with the local optimization algorithm of the Cartographer algorithm to generate submaps and robot odometry information.

Cartographer is a cross-platform, multi-sensor-fusion laser SLAM algorithm proposed by Google in 2016; it consists of two parts, local optimization (Local SLAM) and global optimization (Global SLAM).

Referring to Fig. 2, in the front-end local optimization, Cartographer fits the newly inserted lidar scan points together with the acceleration and angle data measured by the inertial measurement unit (IMU), and casts the position matching between the newly inserted data and the corresponding submap as a least-squares problem that maximizes the probability of a match between the scan point set and the submap. This finds the best insertion pose of the data in the submap, and the submap is updated in real time after the new scan points are inserted.
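The least-squares problem mentioned above is not written out in this text. In the original Cartographer paper (Hess et al., Real-Time Loop Closure in 2D LIDAR SLAM) it takes roughly the following form, where h_k is a scan point, T_ξ applies the candidate pose ξ, and M_smooth is a smoothed version of the submap's grid probabilities; this is quoted from the paper as a hedged reference, not from the patent:

$$\xi^{*} = \underset{\xi}{\arg\min} \sum_{k=1}^{K} \bigl(1 - M_{\mathrm{smooth}}(T_{\xi} h_{k})\bigr)^{2}$$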

The submap generated by the local optimization algorithm is a concept newly introduced by the Cartographer algorithm. It represents a portion of the global map and is presented in the map as a probability grid. Each grid cell holds a probability value representing the probability that the cell is occupied. After new scan points are inserted into the submap, a hits set and a misses set representing the grid cells are recomputed; by comparison with the submap's previous hits and misses sets, the probability values p_hits or p_miss of the grid cells represented by those sets are updated with probability formula (6). Once enough scan point sets have been inserted, the current submap is output and the update of a new submap begins.

M_new(x) = clamp(odds^{-1}(odds(M_old(x)) · odds(p_hits/misses)))  (6)
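For completeness, the odds function used in (6) is defined in the Cartographer paper as below, with clamp(·) restricting the updated probability to a configured interval [p_min, p_max]; this definition is taken from the paper rather than from the patent text itself:

$$\mathrm{odds}(p) = \frac{p}{1 - p}$$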

In the back-end global optimization, Cartographer records every submap generated by the local optimization together with the scan point data that compose it and their positions, for use in loop detection. While the robot moves, a windowed branch-and-bound search detects loop closures, comparing newly inserted data with previously recorded data to establish loop-closure constraints. Finally, the SPA (Sparse Pose Adjustment) algorithm re-optimizes the positions of all submaps. Figs. 6(a) and 6(b) show the effect before and after loop closure when mapping a real scene; loop-closure correction clearly eliminates most of the accumulated error.

In this embodiment, the robot odometry information is the robot's path information, including the coordinate changes along the X and Y axes and the angle change of the robot per unit time.

S103: bind the submaps obtained in step S102 to the corresponding point clouds, and splice the submaps to generate a three-dimensional map.

In this embodiment, mapping is accomplished by treating the submaps generated by the Cartographer algorithm as keyframes and inserting the corresponding point clouds into the space of each submap. Meanwhile, at the back end, Cartographer performs loop-closure optimization on the submaps and the scan point sets previously inserted into them, adjusting the point clouds inserted into a submap by adjusting that submap's pose information. Therefore, for the point clouds of different submaps to be superimposed accurately, the point cloud information must be bound to the submaps, and a method with low resource consumption is needed to update the adjusted point clouds.

Specifically, the steps of binding a submap to its corresponding point cloud are as follows:

As the robot advances and the lidar scan points are continually updated, Cartographer continually publishes the robot pose on the ROS topic "/tf". This pose is recorded as δ = [δ_x, δ_y, δ_z, δ_pitch, δ_roll, δ_yaw]^T and is time-synchronized with the real-time depth and RGB images of the RGBD camera by comparing the timestamps of the /tf topic and the image topics. Likewise, whenever a new submap is produced, its pose is recorded as φ = [φ_x, φ_y, φ_z, φ_pitch, φ_roll, φ_yaw]^T together with its timestamp, and by comparison with the image timestamps the depth image and robot pose closest in time to the submap are selected.
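As a concrete illustration of this nearest-timestamp selection, a minimal C++ sketch follows; the Frame struct, the buffer, and the function name are illustrative assumptions, not the patent's code:

```cpp
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

// One buffered sensor sample: the stamp of its image topic plus the
// synchronized depth image, RGB image, and robot pose delta (elided here).
struct Frame {
  double stamp;  // seconds
  // ... depth image, RGB image, robot pose ...
};

// Return the index of the frame whose timestamp is closest to the submap
// timestamp, i.e. the frame whose data gets bound to the new submap.
std::size_t findClosestFrame(const std::vector<Frame>& buffer, double submap_stamp)
{
  std::size_t best = 0;
  double best_diff = std::numeric_limits<double>::max();
  for (std::size_t i = 0; i < buffer.size(); ++i) {
    const double diff = std::fabs(buffer[i].stamp - submap_stamp);
    if (diff < best_diff) { best_diff = diff; best = i; }
  }
  return best;
}
```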

Since Cartographer keeps generating submaps while running, as the submaps are superimposed the registered point clouds bound to them are superimposed as well, yielding the global map. Fig. 5 shows the superposition of the point cloud bound to Submap 14 with the point cloud bound to Submap 15 during online operation.

During synchronization between the image information and the robot pose, images are published at 25 Hz and robot poses at 70 Hz, so time synchronization between the two keeps their time difference within ±14 ms. Unlike that synchronization, because the submap publication frequency is set to 5 Hz, the image data received when Cartographer outputs a newly generated submap theoretically carries a time error of ±40 ms; in practice, however, heavy computation, especially during loop closure, can block the program, and the time error between a submap and the matched point cloud data is generally around 120 ms. The pose δ of the received point cloud is therefore not necessarily equal to φ. In this embodiment the robot moves at 2 m/s, so a mismatch on that order corresponds to an offset of roughly 20 cm between a submap and the robot pose matched to it. Moreover, the attitude value of a newly generated Cartographer submap points in a default direction that represents the submap's orientation and is continually adjusted as new scan points are added later; it is a different variable from the attitude value in the robot pose that points in the robot's heading. In addition, during loop-closure optimization Cartographer does not adjust the pose of the loop-closure point cloud itself, but adjusts the pose of the submap corresponding to that point cloud.

Therefore the point cloud must be transformed. The horizontal displacement and rotation angle between the robot and the submap are computed with formulas (7) and (8), and the C++ Eigen library is used to initialize a rotation matrix R_Δ oriented along the direction indicated by the Euler angles r_Δ. Substituting the rotation matrix R_Δ and the translation t_Δ into the Euclidean transformation matrix Trans_Δ, formula (9) transforms the point cloud P into the reference frame of its corresponding submap.

t_Δ = [δ_x − φ_x, δ_y − φ_y, δ_z − φ_z]^T  (7)

r_Δ = [δ_pitch − φ_pitch, δ_roll − φ_roll, δ_yaw − φ_yaw]^T  (8)

The point cloud transformed into the submap reference frame is bound to the submap information and recorded; then the point cloud is displaced according to formula (11), translating the transformed point cloud to the location of the submap space.

t_Submap = [φ_x, φ_y, φ_z]^T  (10)

where t_Submap is the pose of the corresponding submap in space, and p is the pose information of a point in the point cloud bound to that submap.
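Formulas (9) and (11) are not reproduced in this text. Reading them from the surrounding description, (9) applies the Euclidean transform Trans_Δ built from R_Δ and t_Δ to every point of P, and (11) translates the bound cloud by t_Submap. The Eigen/PCL sketch below implements that reading; it is a reconstruction under those assumptions (including the Euler-angle convention), not the patent's verbatim code:

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/common/transforms.h>
#include <Eigen/Geometry>

using Cloud = pcl::PointCloud<pcl::PointXYZRGB>;

// Formulas (7)-(9): transform the cloud into the reference frame of its submap.
// delta_t = t_delta from (7); delta_r = (pitch, roll, yaw) differences from (8).
void bindCloudToSubmap(const Cloud& in, Cloud& out,
                       const Eigen::Vector3f& delta_t,
                       const Eigen::Vector3f& delta_r)
{
  // R_delta initialized to the direction given by the Euler angles r_delta
  // (yaw about Z, pitch about Y, roll about X -- an assumed convention).
  const Eigen::Matrix3f R_delta =
      (Eigen::AngleAxisf(delta_r.z(), Eigen::Vector3f::UnitZ()) *
       Eigen::AngleAxisf(delta_r.x(), Eigen::Vector3f::UnitY()) *
       Eigen::AngleAxisf(delta_r.y(), Eigen::Vector3f::UnitX())).toRotationMatrix();

  Eigen::Affine3f Trans_delta = Eigen::Affine3f::Identity();
  Trans_delta.translate(delta_t);
  Trans_delta.rotate(R_delta);
  pcl::transformPointCloud(in, out, Trans_delta);  // formula (9): P' = Trans_delta * P
}

// Formulas (10)-(11): translate the bound cloud to the submap's location in space.
void moveToSubmap(Cloud& cloud, const Eigen::Vector3f& t_submap)
{
  const Eigen::Affine3f shift(Eigen::Translation3f(t_submap));
  pcl::transformPointCloud(cloud, cloud, shift);   // assumed reading: p' = p + t_Submap
}
```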

This embodiment binds each submap to its corresponding point cloud, so that after a loop closure, when submap pose information needs to be adjusted, no further pose transformation has to be applied to the point clouds bound to the affected submaps, shortening the time taken by a loop closure.

As the Cartographer algorithm runs, the submaps are progressively spliced into the global three-dimensional map, and the point clouds bound to the corresponding submaps are spliced with them, thereby realizing the construction of the RGBD-camera three-dimensional scene map.

S104: loop-closure point clouds. When a submap pose changes, the point cloud bound to that submap is translated according to the submap change.

As submaps keep being added, Cartographer performs loop detection with a DFS branch-and-bound method, establishes loop-closure constraints, and adjusts the submap poses φ in real time with the SPA method. That is, the pose φ of a previously created submap is adjusted not only at loop closure but also during sparse pose adjustment, so submap poses are adjusted rather frequently.

In this embodiment, the point clouds are stored and superimposed using the PointXYZRGB type of the PCL library. If all previously stored point clouds were re-translated and re-superimposed every time a submap changed, the operation would take too long, making real-time point cloud superposition and display impossible.

To solve this problem, the transformation and superposition are partitioned by region when submap poses change. An array of region point clouds of type PointXYZRGB is defined with the C++ class template vector, and once a set threshold number of point clouds has been accumulated they are superimposed and inserted into the region point cloud array. When a submap pose changes, only the point cloud of its corresponding region is re-translated and re-superimposed, and a timer callback converts all region point clouds into a ROS topic for publication.

In the mapping experiment, re-superimposing the point clouds when the poses of 11 submaps changed took 2.1 s, and without point cloud re-superposition the RVIZ display rate remained stable above 5 frames per second (27 submaps), so real-time display was essentially unaffected.

Specifically, the implementation steps of region point cloud superposition are as follows (a code sketch is given after these steps):

obtain the submaps output by the Cartographer algorithm and the generated real-time point clouds;

determine whether a new submap has appeared; if so, find the real-time point cloud with the smallest time difference from that moment;

transform the real-time point cloud to its position relative to the new submap according to formula (9);

if, after the newly appeared submap is added to the latest residual point cloud, the number of submaps in the residual point cloud exceeds the set threshold, add the point cloud to the residual point cloud, record it in memory as a region point cloud, and clear the contents of the residual point cloud;

if, after the newly appeared submap is added to the latest residual point cloud, the number of submaps in the residual point cloud has not reached the set threshold, add the point cloud to the residual point cloud and judge again when the next new submap appears;

if the pose of a previously generated submap is found to have changed, determine which region point cloud that submap belongs to, re-translate the point clouds in that region point cloud according to formula (11), and output them.
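A minimal sketch of the region bookkeeping described in the steps above, assuming a fixed submap-count threshold per region and submaps committed in order; the container layout and names are illustrative, not the patent's code:

```cpp
#include <cstddef>
#include <vector>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

using Cloud = pcl::PointCloud<pcl::PointXYZRGB>;

class RegionCloudBuffer {
 public:
  explicit RegionCloudBuffer(std::size_t threshold) : threshold_(threshold) {}

  // Called when a new submap appears with its matched, already-transformed cloud.
  void addSubmapCloud(const Cloud& cloud) {
    residual_ += cloud;                  // superimpose into the residual cloud
    ++residual_count_;
    if (residual_count_ >= threshold_) { // threshold reached: commit as a region
      regions_.push_back(residual_);
      residual_.clear();
      residual_count_ = 0;
    }
  }

  // Called when a previously generated submap's pose changes: only the region
  // containing that submap is re-translated (cf. moveToSubmap above).
  Cloud& regionOfSubmap(std::size_t submap_index) {
    return regions_[submap_index / threshold_];
  }

 private:
  std::size_t threshold_;
  std::size_t residual_count_ = 0;
  Cloud residual_;                       // clouds not yet committed to a region
  std::vector<Cloud> regions_;           // one entry per committed region
};
```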

The large-scale three-dimensional scene map obtained in this embodiment can be used for intelligent spraying in vegetable greenhouses: the RGBD-camera three-dimensional scene construction method proposed here can build a three-dimensional environment map of the greenhouse, and a robot can be controlled to move within the greenhouse to spray or fertilize the plants.

Experimental Verification

The RGBD-camera three-dimensional scene construction method based on the Cartographer algorithm proposed in this embodiment was verified experimentally. Field tests used a Mecanum-wheel mobile platform equipped with a KinectV2 camera, a SICK TIM561 lidar, and an IMU. The computer executing the steps of the method was a MacBook Pro (Apple) with an i7-6700 2.6 GHz CPU, 16 GB of RAM, and 512 GB of storage. During mapping, point cloud data within the 0.2 m to 5.5 m interval, where the Kinect depth data are most accurate, were used for computation; a voxel filter with a 0.03 m grid size downsampled both the raw and the superimposed point clouds, and the point cloud data were published as a ROS topic and displayed in RVIZ. The experiment sites were the first-floor hall and the second-floor circular corridor of the campus main building, as shown in Figs. 7(a) through 7(f). The RTAB (Real-Time Appearance-Based Mapping) indoor mapping algorithm, which offers better accuracy and running speed than RGB-D SLAM V2 and DVO SLAM (Dense Visual Odometry and SLAM), was selected for the comparative experiments.

First, to verify the interference-suppression capability of the proposed method, a field experiment was conducted in the unevenly lit corridor hall on the second floor of the campus main building; the robot circled the corridor one and a half times at a constant 2 m/s. With the help of the corridor walls, the lidar and RGBD camera measured more data, and good odometry accuracy could be obtained even without loop-closure correction. The second-floor entrance is a glass wall admitting a great deal of sunlight; as the depth images in Figs. 7(e) and 7(f) show, the RGBD camera lost most of its depth information under sunlight interference. This strongly affects the RTAB algorithm, which uses only visual information, introducing large errors into its visual odometry. Even without restricting the point cloud measurement interval during computation, RTAB failed to complete the three-dimensional map in repeated mapping experiments, losing the robot position near the doorway every time. The three-dimensional map built in its single successful run also differed considerably from the real scene, with ghosting, misalignment, and distortion in several places, as shown in Figs. 7(c) and 7(d). Compared with the RGBD camera, the lidar resists noise better and has a longer effective measurement distance, and Cartographer's loop-closure optimization eliminates most of the error accumulated during mapping; the proposed method is therefore more clearly advantageous for three-dimensional reconstruction of large scenes and builds more accurate three-dimensional maps, as shown in Figs. 7(a) and 7(b).

To compare the point cloud superposition accuracy of the proposed method during large-scene three-dimensional reconstruction, a comparative experiment was run on the first floor of the campus main building at night, without sunlight and under fairly uniform indoor lighting. The hall is 27 m long and 16 m wide, about 300 m² in area, and the robot circled the large first-floor columns one and a half times at a constant 2 m/s. As the mapping results in Figs. 8(c) and 8(d) show, superimposing point cloud data with the submap poses computed by Cartographer also yields good maps. Compared with the results of aligning point clouds with the visual odometry poses in the RTAB algorithm (Figs. 8(e) and 8(f)), the point clouds computed with the proposed algorithm are aligned more accurately at 5 cm resolution, and the superimposed clouds are essentially free of ghosting.

Because the algorithm was not validated in a laboratory environment, optical tracking instruments could not be installed in the main building. To verify the mapping accuracy of the proposed method and of RTAB, the first floor of the main building was measured in the field, first validating Cartographer's mapping accuracy. As shown in Fig. 9, the mapped distance error between the two lateral support columns is 2 cm, that between the two longitudinal support columns is 1 cm, the distance error of the longest lateral section of the hall is about 4 cm, and that of the longest longitudinal section is 7 cm. These agree with the error ranges in the experimental results of Wolfgang Hess et al. in the paper Real-Time Loop Closure in 2D LIDAR SLAM.

Then the trajectories computed by the method of this embodiment and by the RTAB algorithm were plotted on the same figure, and the mapping accuracies of the two were compared through the distance error between the trajectories, as shown in Figs. 10(a) through 10(d). The RTAB algorithm based on visual odometry shows computation errors exceeding 1 m in open scenes or under wide-ranging interference, far larger than Cartographer's 5 cm error, so the three-dimensional map built by the method of this embodiment is more accurate.

Aiming at the problems that RGBD visual SLAM algorithms easily lose their own position and lack mapping accuracy in large scenes and under heavy interference, the method for constructing large-scale three-dimensional scenes with an RGBD camera based on the Cartographer algorithm proposed in this embodiment applies a laser SLAM algorithm to large-scene three-dimensional mapping. Field experiments in the campus main building verified online that the proposed algorithm is more real-time, more accurate, and more interference-resistant than traditional mapping algorithms that use only visual odometry data.

Embodiment Two

This embodiment provides a system for constructing large-scale three-dimensional scenes with an RGBD camera based on the Cartographer algorithm, the system comprising:

a point cloud registration module, configured to acquire the depth image and color image of the RGBD camera, generate a three-dimensional point cloud, and register the point cloud to the lidar reference frame;

a submap generation module, configured to acquire the robot pose data and the lidar scan points and process them with the local optimization algorithm of the Cartographer algorithm to generate submaps and robot odometry information;

a point cloud binding module, configured to bind each generated submap to its corresponding point cloud according to the robot odometry information;

a three-dimensional construction module, configured to splice the submaps to generate a three-dimensional map; the point clouds bound to the corresponding submaps are spliced with them, thereby realizing the construction of the RGBD-camera three-dimensional scene map;

a loop-closure point cloud module, configured to, when a submap pose changes, re-translate the point cloud bound to that submap according to the submap change.

Embodiment Three

This embodiment provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps of the method for constructing large-scale three-dimensional scenes with an RGBD camera based on the Cartographer algorithm shown in Fig. 1 or Fig. 2.

Embodiment Four

This embodiment provides a computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, it implements the steps of the method for constructing large-scale three-dimensional scenes with an RGBD camera based on the Cartographer algorithm shown in Fig. 1 or Fig. 2.

Those skilled in the art should understand that the embodiments of the present disclosure may be provided as a method, a system, or a computer program product. Accordingly, the present disclosure may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage and optical storage) containing computer-usable program code.

The present disclosure is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for realizing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.

These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that realizes the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.

These computer program instructions may also be loaded onto a computer or other programmable data processing device, causing a series of operational steps to be performed on the computer or other programmable device to produce computer-implemented processing, such that the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.

A person of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments may be completed by instructing the relevant hardware through a computer program; the program may be stored in a computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.

Although the specific implementations of the present disclosure have been described above with reference to the accompanying drawings, they do not limit the protection scope of the present disclosure. Those skilled in the art should understand that, on the basis of the technical solutions of the present disclosure, various modifications or variations that can be made without creative effort remain within the protection scope of the present disclosure.

Claims (10)

1. A method for constructing large-scale three-dimensional scenes with an RGBD camera based on the Cartographer algorithm, characterized in that the method comprises the following steps:
acquiring the depth image and color image of the RGBD camera, generating a three-dimensional point cloud, and registering the point cloud to the lidar reference frame;
acquiring the robot pose data and the lidar scan points, and processing them with the local optimization algorithm of the Cartographer algorithm to generate submaps and robot odometry information;
binding each generated submap to its corresponding point cloud according to the robot odometry information, and splicing the submaps to generate a three-dimensional map.
2. The method for constructing large-scale three-dimensional scenes with an RGBD camera based on the Cartographer algorithm according to claim 1, characterized in that the point cloud generation method is:
registering the RGBD camera, aligning the depth camera reference frame to the color camera reference frame;
computing the three-dimensional position in space of any point in the color image from the corresponding pixel value in the registered depth image;
traversing the entire depth image in turn to obtain the point cloud corresponding to the camera.
3. The RGBD camera large-scale three-dimensional scene construction method based on the Cartographer algorithm according to claim 1, characterized in that the step of registering the point cloud to the laser radar reference frame comprises:
obtaining the distance between the camera optical center on the robot chassis and the center of the laser radar, together with their deviation angle in yaw;
transforming the coordinates of each point in the camera point cloud with a transformation matrix, so that the scan point set of the laser radar is registered with the point cloud.
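For illustration, under the assumption that the camera-to-lidar extrinsic reduces to the measured offset plus a single yaw deviation angle, as claim 3 describes, the transformation can be sketched as below; the function name and argument layout are hypothetical.

```python
import numpy as np

def cloud_to_lidar_frame(points, offset_xyz, yaw):
    """Transform an N x 3 camera-frame point cloud into the laser radar
    reference frame: rotate by the yaw deviation angle about the vertical
    axis, then translate by the camera-to-lidar offset."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])                   # yaw-only rotation matrix
    return points @ R.T + np.asarray(offset_xyz)      # rotate, then translate
```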
4. The RGBD camera large-scale three-dimensional scene construction method based on the Cartographer algorithm according to claim 1, characterized in that the step of binding a generated submap to the corresponding point cloud comprises:
recording the pose information and timestamp of the generated submap, comparing the timestamp produced when the submap was generated with the timestamps of the depth images acquired by the RGBD camera, and selecting the RGBD camera depth image information closest in time to the submap;
time-synchronizing the RGBD camera depth image information closest in time to the submap with the robot pose information;
calculating the horizontal displacement difference and the rotation angle difference between the robot and the submap at the moment the submap was generated, forming a rotation matrix, and using the rotation matrix to transform the point cloud into the reference frame corresponding to the submap;
binding the point cloud transformed into the submap reference frame to the image information, and displacing the point cloud so that the transformed point cloud moves to the spatial position of the submap.
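Read operationally, claim 4 amounts to a nearest-timestamp lookup followed by a planar rigid transform into the submap frame. A sketch under those assumptions, with poses as (x, y, yaw) tuples and illustrative names, follows.

```python
import numpy as np

def nearest_in_time(stamped_clouds, t_submap):
    """Select the (timestamp, cloud) pair closest in time to the submap,
    i.e. the time-synchronization step of claim 4."""
    return min(stamped_clouds, key=lambda item: abs(item[0] - t_submap))

def bind_to_submap(points, robot_pose, submap_pose):
    """Express a robot-frame cloud in the submap frame using the horizontal
    displacement and yaw difference recorded when the submap was created."""
    dyaw = robot_pose[2] - submap_pose[2]
    c, s = np.cos(dyaw), np.sin(dyaw)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    shift = np.array([robot_pose[0] - submap_pose[0],
                      robot_pose[1] - submap_pose[1], 0.0])
    return points @ R.T + shift        # rotate into the submap frame, then displace
```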
5. The RGBD camera large-scale three-dimensional scene construction method based on the Cartographer algorithm according to claim 1, characterized in that the method further comprises the step of, when a submap pose changes, re-translating the point clouds bound to the changed submap according to the submap's variation.
6. The RGBD camera large-scale three-dimensional scene construction method based on the Cartographer algorithm according to claim 5, characterized in that the step of re-translating the point clouds bound to the changed submap according to the submap's variation comprises:
determining the region point cloud where the submap whose pose has changed is located, and moving the point clouds within that region to the new spatial position of the submap according to the submap's variation.
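When global optimization (for example a loop closure in Cartographer) corrects a submap pose, the same planar correction can be pushed onto every cloud bound to that submap. A minimal sketch, with poses again as illustrative (x, y, yaw) tuples:

```python
import numpy as np

def retranslate_bound_clouds(clouds, old_pose, new_pose):
    """Re-apply a submap's pose correction to all point clouds bound to it,
    as claims 5 and 6 describe."""
    dyaw = new_pose[2] - old_pose[2]
    c, s = np.cos(dyaw), np.sin(dyaw)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    shift = np.array([new_pose[0] - old_pose[0],
                      new_pose[1] - old_pose[1], 0.0])
    return [cloud @ R.T + shift for cloud in clouds]   # one correction per cloud
```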
7. An RGBD camera large-scale three-dimensional scene construction system based on the Cartographer algorithm, characterized in that the system comprises:
a point cloud registration module, configured to obtain the depth image and the color image of an RGBD camera, generate a three-dimensional point cloud, and register the point cloud to the laser radar reference frame;
a submap generation module, configured to obtain robot pose data and the scan point data of the laser radar, process them with the local optimization algorithm of the Cartographer algorithm, and generate submaps and robot odometry information;
a point cloud binding module, configured to bind the generated submaps to the corresponding point clouds according to the robot odometry information;
a three-dimensional construction module, configured to splice the submaps to generate a three-dimensional map.
8. The RGBD camera large-scale three-dimensional scene construction system based on the Cartographer algorithm according to claim 7, characterized in that the system further comprises:
a loop-closure point cloud module, configured to re-translate, when a submap pose changes, the point clouds bound to the changed submap according to the submap's variation.
9. A computer-readable storage medium on which a computer program is stored, characterized in that, when the program is executed by a processor, it implements the steps in the RGBD camera large-scale three-dimensional scene construction method based on the Cartographer algorithm according to any one of claims 1-6.
10. A computer device comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, characterized in that, when the processor executes the program, it implements the steps in the RGBD camera large-scale three-dimensional scene construction method based on the Cartographer algorithm according to any one of claims 1-6.
CN201910452208.XA 2019-05-28 2019-05-28 RGBD camera large three-dimensional scene construction method and system Active CN110163968B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910452208.XA CN110163968B (en) 2019-05-28 2019-05-28 RGBD camera large three-dimensional scene construction method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910452208.XA CN110163968B (en) 2019-05-28 2019-05-28 RGBD camera large three-dimensional scene construction method and system

Publications (2)

Publication Number Publication Date
CN110163968A true CN110163968A (en) 2019-08-23
CN110163968B CN110163968B (en) 2020-08-25

Family

ID=67629702

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910452208.XA Active CN110163968B (en) 2019-05-28 2019-05-28 RGBD camera large three-dimensional scene construction method and system

Country Status (1)

Country Link
CN (1) CN110163968B (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8224097B2 (en) * 2008-06-12 2012-07-17 Sri International Building segmentation for densely built urban regions using aerial LIDAR data
EP2302586A1 (en) * 2009-08-21 2011-03-30 Sony Corporation Information processing device, information processing method and program
US9390344B2 (en) * 2014-01-09 2016-07-12 Qualcomm Incorporated Sensor-based camera motion detection for unconstrained slam
CN105783913A (en) * 2016-03-08 2016-07-20 中山大学 SLAM device integrating multiple vehicle-mounted sensors and control method of device
CN105843223A (en) * 2016-03-23 2016-08-10 东南大学 Mobile robot three-dimensional mapping and obstacle avoidance method based on space bag of words model
US20180074176A1 (en) * 2016-09-14 2018-03-15 Beijing Baidu Netcom Science And Technology Co., Ltd. Motion compensation method and apparatus applicable to laser point cloud data
CN106595659A (en) * 2016-11-03 2017-04-26 南京航空航天大学 Map merging method of unmanned aerial vehicle visual SLAM under city complex environment
CN106598052A (en) * 2016-12-14 2017-04-26 南京阿凡达机器人科技有限公司 Robot security inspection method based on environment map and robot thereof
CN106910217A (en) * 2017-03-17 2017-06-30 驭势科技(北京)有限公司 Vision map method for building up, computing device, computer-readable storage medium and intelligent vehicle
CN107301654A (en) * 2017-06-12 2017-10-27 西北工业大学 A kind of positioning immediately of the high accuracy of multisensor is with building drawing method
CN109507677A (en) * 2018-11-05 2019-03-22 浙江工业大学 A kind of SLAM method of combination GPS and radar odometer
CN109541630A (en) * 2018-11-22 2019-03-29 武汉科技大学 A method of it is surveyed and drawn suitable for Indoor environment plane 2D SLAM
CN109633664A (en) * 2018-12-29 2019-04-16 南京理工大学工程技术研究院有限公司 Joint positioning method based on RGB-D Yu laser odometer

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YINFA YAN et al.: "Design and evaluation of visual SLAM method based on Realsense for mobile robots", 2018 Chinese Automation Congress (CAC) *
CHEN Shilang et al.: "Survey of SLAM technology based on RGB-D cameras", Computer Engineering and Applications *

Cited By (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112894832A (en) * 2019-11-19 2021-06-04 广东博智林机器人有限公司 Three-dimensional modeling method, three-dimensional modeling device, electronic equipment and storage medium
CN112894832B (en) * 2019-11-19 2022-06-03 广东博智林机器人有限公司 Three-dimensional modeling method, three-dimensional modeling device, electronic equipment and storage medium
CN110849352A (en) * 2019-11-21 2020-02-28 大连理工大学 A method for building raster maps using infrared, depth and binocular camera fusion
CN110849351A (en) * 2019-11-21 2020-02-28 大连理工大学 Method for constructing grid map by using depth camera and binocular camera
CN111429344A (en) * 2020-02-19 2020-07-17 上海交通大学 Laser SLAM closed-loop detection method and system based on perceptual hashing
CN111429344B (en) * 2020-02-19 2022-04-26 上海交通大学 Laser SLAM closed-loop detection method and system based on perceptual hashing
CN111340834A (en) * 2020-03-10 2020-06-26 山东大学 Lining plate assembly system and method based on data fusion of laser radar and binocular camera
CN111340834B (en) * 2020-03-10 2023-05-12 山东大学 Liner assembly system and method based on laser radar and binocular camera data fusion
WO2021189194A1 (en) * 2020-03-23 2021-09-30 罗伯特博世有限公司 Three-dimensional environment modeling method and device, computer storage medium, and industrial robot operating platform
CN111461982A (en) * 2020-03-30 2020-07-28 北京百度网讯科技有限公司 Method and device for splicing point clouds
CN111461982B (en) * 2020-03-30 2023-09-22 北京百度网讯科技有限公司 Method and apparatus for splice point cloud
CN113592989B (en) * 2020-04-14 2024-02-20 广东博智林机器人有限公司 Three-dimensional scene reconstruction system, method, equipment and storage medium
WO2021208442A1 (en) * 2020-04-14 2021-10-21 广东博智林机器人有限公司 Three-dimensional scene reconstruction system and method, device, and storage medium
CN113592989A (en) * 2020-04-14 2021-11-02 广东博智林机器人有限公司 Three-dimensional scene reconstruction system, method, equipment and storage medium
CN111795687A (en) * 2020-06-29 2020-10-20 深圳市优必选科技股份有限公司 Robot map updating method and device, readable storage medium and robot
CN111757021B (en) * 2020-07-06 2021-07-20 浙江大学 Multi-sensor real-time fusion method for mobile robot remote takeover scenarios
CN111757021A (en) * 2020-07-06 2020-10-09 浙江大学 Multi-sensor real-time fusion method for mobile robot remote takeover scenarios
CN111915723A (en) * 2020-08-14 2020-11-10 广东申义实业投资有限公司 Indoor three-dimensional panorama construction method and system
CN112462385A (en) * 2020-10-21 2021-03-09 南开大学 Map splicing and positioning method based on laser radar under outdoor large environment
CN112312113A (en) * 2020-10-29 2021-02-02 贝壳技术有限公司 Method, device and system for generating three-dimensional model
CN112799095A (en) * 2020-12-31 2021-05-14 深圳市普渡科技有限公司 Static map generation method and device, computer equipment and storage medium
CN112799095B (en) * 2020-12-31 2023-03-14 深圳市普渡科技有限公司 Static map generation method and device, computer equipment and storage medium
WO2022257801A1 (en) * 2021-06-09 2022-12-15 山东大学 Slam-based mobile robot mine scene reconstruction method and system
CN113379815A (en) * 2021-06-25 2021-09-10 中德(珠海)人工智能研究院有限公司 Three-dimensional reconstruction method and device based on RGB camera and laser sensor and server
CN113587933B (en) * 2021-07-29 2024-02-02 山东山速机器人科技有限公司 Indoor mobile robot positioning method based on branch-and-bound algorithm
CN113587933A (en) * 2021-07-29 2021-11-02 山东山速机器人科技有限公司 Indoor mobile robot positioning method based on branch-and-bound algorithm
CN113674399A (en) * 2021-08-16 2021-11-19 杭州图灵视频科技有限公司 Mobile robot indoor three-dimensional point cloud map construction method and system
CN113834479A (en) * 2021-09-03 2021-12-24 Oppo广东移动通信有限公司 Map generation method, device, system, storage medium and electronic device
CN113834479B (en) * 2021-09-03 2025-01-14 Oppo广东移动通信有限公司 Map generation method, device, system, storage medium and electronic device
CN113870358A (en) * 2021-09-17 2021-12-31 聚好看科技股份有限公司 Method and equipment for joint calibration of multiple 3D cameras
CN113870358B (en) * 2021-09-17 2024-05-24 聚好看科技股份有限公司 Method and equipment for jointly calibrating multiple 3D cameras
CN114063099A (en) * 2021-11-10 2022-02-18 厦门大学 RGBD-based positioning method and device
US11928830B2 (en) 2021-12-22 2024-03-12 Honeywell International Inc. Systems and methods for generating three-dimensional reconstructions of environments
CN114638909A (en) * 2022-03-24 2022-06-17 杭州电子科技大学 Construction method of substation semantic map based on laser SLAM and visual fusion

Also Published As

Publication number Publication date
CN110163968B (en) 2020-08-25

Similar Documents

Publication Publication Date Title
CN110163968A (en) RGBD camera large-scale three dimensional scenario building method and system
CN113379910B (en) Mine scene reconstruction method and system for mobile robot based on SLAM
CN103247075B (en) Based on the indoor environment three-dimensional rebuilding method of variation mechanism
CN104330074B (en) Intelligent surveying and mapping platform and realizing method thereof
CN111462135A (en) Semantic Mapping Method Based on Visual SLAM and 2D Semantic Segmentation
CN107179086A (en) A kind of drafting method based on laser radar, apparatus and system
CN107478214A (en) A kind of indoor orientation method and system based on Multi-sensor Fusion
CN110163963B (en) Mapping device and mapping method based on SLAM
CN106887037B (en) indoor three-dimensional reconstruction method based on GPU and depth camera
CN111968228B (en) Augmented reality self-positioning method based on aviation assembly
CN110275179A (en) A Map Construction Method Based on LiDAR and Vision Fusion
CN114663473A (en) Personnel target positioning and tracking method and system based on multi-view information fusion
CN113034681B (en) Three-dimensional reconstruction method and device for spatial plane relation constraint
Ranft et al. 3d perception for autonomous navigation of a low-cost mav using minimal landmarks
CN117308939A (en) AR navigation method, terminal and storage medium
Chen et al. Low cost and efficient 3D indoor mapping using multiple consumer RGB-D cameras
CN106959101A (en) A kind of indoor orientation method based on optical flow method
Gu et al. [Retracted] A 3D Reconstruction Method Using Multisensor Fusion in Large‐Scale Indoor Scenes
TWI760128B (en) Method and system for generating depth image and positioning system using the method
CN113674413A (en) BA (building Block based) image optimization-based visible light positioning and depth camera tight coupling method
Shuai et al. Multi-sensor Fusion for Autonomous Positioning of Indoor Robots
CN113963030B (en) A method to improve the stability of monocular vision initialization
CN117739996B (en) Autonomous positioning method based on event camera inertial tight coupling
Yuan et al. A real-time vision-based guided method for autonomous landing of a rotor-craft unmanned aerial vehicle
CN109754412A (en) Method for tracking target, target tracker and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230105

Address after: 1201-7, Floor 12, Hanyu Jingu Artificial Intelligence Building, Jingshi Road, Jinan Area, China (Shandong) Pilot Free Trade Zone, 250000 Shandong Province

Patentee after: Shandong Xinchen Artificial Intelligence Technology Co.,Ltd.

Address before: No. 17923, Jingshi Road, Lixia District, Ji'nan City, Shandong Province, 250061

Patentee before: SHANDONG University
