CN108052103B - Simultaneous localization and map construction method of inspection robot in underground space based on deep inertial odometry - Google Patents

Simultaneous localization and map construction method of inspection robot in underground space based on depth-inertial odometry

Info

Publication number
CN108052103B
CN108052103B (application CN201711334617.7A)
Authority
CN
China
Prior art keywords
depth
inspection robot
map
underground space
map construction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201711334617.7A
Other languages
Chinese (zh)
Other versions
CN108052103A (en)
Inventor
朱华
陈常
李振亚
汪雷
杨汪庆
李鹏
赵勇
由韶泽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Mining and Technology CUMT
Original Assignee
China University of Mining and Technology CUMT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Mining and Technology (CUMT)
Priority to CN201711334617.7A
Publication of CN108052103A
Application granted
Publication of CN108052103B
Legal status: Expired - Fee Related
Anticipated expiration

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0251: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0268: Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

A simultaneous localization and map construction method for an inspection robot in underground space based on depth-inertial odometry. A depth camera and an inertial measurement unit are loosely coupled: point-cloud information is obtained from the depth maps collected by the depth camera, and plane features are extracted; the RGB images collected by the depth camera are converted to grayscale and fused with the plane features, then optimized with the iterative closest point algorithm; the optimized data are loosely coupled with the inertial measurement unit data, and loop-closure detection is used to improve the accuracy of the pose graph, yielding the inspection robot's trajectory, a point-cloud map and a tree skip-list map, so that the robot localizes and builds a map indoors simultaneously. The method improves the simultaneous localization accuracy and robustness of the inspection robot in underground space, achieving simultaneous localization and map construction there, and remains robust under strong rotation when the robot operates in underground space.

Figure 201711334617

Description

Simultaneous localization and map construction method for an inspection robot in underground space based on depth-inertial odometry

Technical Field

The present invention relates to the field of simultaneous localization of inspection robots, and in particular to a method for simultaneous localization and map construction of an inspection robot in underground space based on depth-inertial odometry.

Background Art

With the advancement of science and technology, inspection robots are used ever more widely in industry, the military and other fields. In many cases, the information about the inspection robot's workspace is complex and unknown. For an inspection robot to achieve autonomous indoor navigation, target recognition and automatic obstacle avoidance, accurate simultaneous localization is essential. Traditional simultaneous localization methods mostly rely on global satellite systems such as GPS and BeiDou, but ordinary GPS sensors offer low positioning accuracy and cannot provide the precise localization an inspection robot requires.

Although differential GPS achieves high positioning accuracy outdoors, it is expensive and does not work in GPS-denied environments such as tunnels, roadways and basements. These underground spaces receive no sunlight all year round and are poorly lit. For visual localization, a camera alone, as commonly used today, offers accuracy too low for effective positioning of an inspection robot.

With the development of computer vision and image processing, machine-vision methods navigate by perceiving the environment and are widely used for real-time robot localization. The principle of visual simultaneous localization is that a camera mounted on the robot captures images in real time during motion; relevant information is extracted from the images to estimate the robot's pose and trajectory, finally achieving navigation and real-time localization. However, visual sensors are easily affected by lighting, and localization is easily lost under strong exposure or low brightness. Moreover, a monocular camera alone provides no scale information and cannot perceive the depth of the robot's surroundings, and features are lost when the robot turns in place, which readily causes real-time localization to fail.

Inertial measurement units were adopted early for inspection-robot positioning. Inertial positioning uses the linear acceleration and angular rate measured by the inertial measurement unit to compute the carrier's six-degree-of-freedom pose. The angular rate, measured by the gyroscope, is mainly used to compute the robot's rotation matrix and provides the transformation between the body frame and the navigation frame; the linear acceleration, measured by the accelerometer, is integrated to obtain the robot's velocity and displacement, and positioning is completed by transforming the robot's six-degree-of-freedom state into the navigation frame. However, an inertial measurement unit alone accumulates large errors over repeated paths and cannot perform effective loop-closure detection. In addition, owing to random walk and other characteristics of the inertial measurement unit, large hysteresis errors arise when the inspection robot starts moving or when its acceleration changes sharply.

Consumer depth cameras, represented by the ASUS Xtion and Microsoft Kinect, can capture RGB images and depth maps and are widely used in indoor robotics. However, such depth cameras generally have a narrow field of view, so the tracked target is easily lost, and the depth data often contain heavy noise, sometimes rendering parts of the data unusable. Traditional visual-feature extraction is based on pixel differences, but in the depth data measured by a depth camera, points lying at edges and corners are hard to identify. Furthermore, under large rotations a mobile robot localizing with a depth camera alone easily loses tracking.

Simultaneous localization and mapping (SLAM) originated in robotics. Methods using a single sensor are computationally cheap but lack accuracy and robustness. Simultaneous localization and map construction based on multi-sensor fusion has become the mainstream direction of development, yet an effective method fusing a depth camera with an inertial measurement unit is still lacking.

Summary of the Invention

In view of the deficiencies of the prior art, the present invention provides a simultaneous localization and map construction method for an inspection robot in underground space based on depth-inertial odometry. The method improves the localization accuracy and robustness of the inspection robot in underground space, achieving simultaneous localization and map construction there. When the inspection robot operates in underground space, the method remains robust even under strong rotation.

To achieve the above object, the present invention adopts the following technical solution:

A simultaneous localization and map construction method for an inspection robot in underground space based on depth-inertial odometry, the method being as follows:

A depth camera and an inertial measurement unit are loosely coupled. Point-cloud information is obtained from the depth maps collected by the depth camera, and plane features are extracted. The RGB images collected by the depth camera are converted to grayscale and fused with the plane features, then optimized with the iterative closest point algorithm. The optimized data are loosely coupled with the inertial measurement unit data, and loop-closure detection is used to improve the accuracy of the pose graph, yielding the inspection robot's trajectory, a point-cloud map and a tree skip-list map, so that the robot localizes and builds a map indoors simultaneously.
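The data flow just described can be sketched end to end. Everything below is an illustrative stub rather than the patent's implementation: the function names, the identity ICP pose and the naive pose-averaging stand-in for loose coupling are all assumptions.

```python
import numpy as np

def rgb_to_gray(rgb):
    """Convert an HxWx3 RGB image to grayscale (ITU-R BT.601 luma weights)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def depth_to_cloud(depth):
    """Turn a depth map into an N x 3 array of (row, col, depth) samples,
    skipping invalid (zero) depth readings."""
    rows, cols = np.nonzero(depth > 0)
    return np.stack([rows, cols, depth[rows, cols]], axis=1)

def pipeline_step(rgb, depth, imu_pose):
    """One step of the loosely coupled front end (stubbed)."""
    gray = rgb_to_gray(rgb)            # RGB -> grayscale for feature fusion
    cloud = depth_to_cloud(depth)      # depth map -> point cloud
    icp_pose = np.eye(4)               # stub: pose from plane features + ICP
    fused = 0.5 * icp_pose + 0.5 * imu_pose  # naive stand-in for loose coupling
    return gray, cloud, fused
```

In the actual method, the stubbed steps would be replaced by plane extraction, feature fusion, ICP optimization and a proper loosely coupled filter feeding a pose graph with loop closure.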

Preferably, the depth camera captures two adjacent frames as scene S and model M; the two sets of matching points are denoted P = {p_i | i = 1, ..., n} and Q = {q_i | i = 1, ..., n};

where p_i and q_i each denote a three-dimensional point and can be parameterized as

p_i = (x_i, y_i, z_i)^T,  q_i = (x'_i, y'_i, z'_i)^T

Preferably, the depth camera model is:

d · (u, v, 1)^T = C · (x, y, z)^T

where (u, v) is the pixel position corresponding to the spatial point (x, y, z)^T, d is the depth value, and C is the camera intrinsic matrix.
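The camera model above can be inverted to recover a 3D point from a pixel and its depth reading. A minimal numpy sketch; the intrinsic values and the depth scale s = 1000 (millimetre-valued depth) are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def back_project(u, v, d, C, s=1000.0):
    """Recover the 3D point (x, y, z) for pixel (u, v) with raw depth d.

    C is the 3x3 intrinsic matrix; s is the depth scaling factor
    (raw depth units per metre), so z = d / s.
    """
    z = d / s
    return z * np.linalg.inv(C) @ np.array([u, v, 1.0])

def project(xyz, C, s=1000.0):
    """Inverse mapping: 3D point -> pixel (u, v) and raw depth d."""
    uvw = C @ xyz                      # equals z * (u, v, 1)
    return uvw[0] / uvw[2], uvw[1] / uvw[2], xyz[2] * s

# Illustrative intrinsics (placeholders, not from the patent)
C = np.array([[525.0,   0.0, 319.5],
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])
p = back_project(320.0, 240.0, 1500.0, C)  # pixel near the centre, raw depth 1500
```

Round-tripping a point through `back_project` and `project` returns the original pixel and depth, which is a quick sanity check on any chosen intrinsics.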

Preferably, the motion from M to S is described by a rotation R and a translation t and is solved with the iterative closest point algorithm:

(R, t) = argmin_{R,t} Σ_{i=1}^{n} ‖ p_i − (R q_i + t) ‖²
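For a fixed set of correspondences, the minimization over R and t has a closed-form solution via SVD (the Kabsch/Umeyama alignment); full ICP alternates this step with nearest-neighbour re-matching until convergence. A sketch of the inner alignment step:

```python
import numpy as np

def align(P, Q):
    """Closed-form solution of min_{R,t} sum_i ||p_i - (R q_i + t)||^2
    for known point correspondences P[i] <-> Q[i] (n x 3 arrays).

    This is only the inner step of ICP; full ICP re-matches nearest
    neighbours and repeats."""
    p0, q0 = P.mean(axis=0), Q.mean(axis=0)
    H = (Q - q0).T @ (P - p0)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct a possible reflection so that det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                        # rotation mapping Q onto P
    t = p0 - R @ q0
    return R, t
```

Applying a known rotation and translation to a random cloud and recovering them with `align` verifies the step.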

Preferably, plane features are extracted from the point cloud obtained from the depth map, a plane in three-dimensional space being described by four parameters:

p = (a, b, c, d) = {x, y, z | ax + by + cz + d = 0};

Setting d = 0, each fitted plane is projected onto the imaging plane to obtain the imaging position (u, v) of its points, solved with the projection equations:

u = f_x · x / z + c_x,  v = f_y · y / z + c_y,  d = z · s

where f_x, f_y, c_x, c_y are the intrinsic parameters of the depth camera and s is the scaling factor of the depth data.
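Plane fitting and projection can be sketched with a least-squares plane fit via SVD (the plane normal is the singular vector of the centred points with the smallest singular value) and the pinhole projection above. The intrinsic values used below are placeholders:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane (a, b, c, d) with ax + by + cz + d = 0 through
    an n x 3 point cloud: the unit normal is the right singular vector of
    the centred cloud with the smallest singular value."""
    c0 = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - c0)
    n = Vt[-1]                       # unit normal (a, b, c)
    return np.append(n, -n @ c0)     # d = -normal . centroid

def project_points(points, fx, fy, cx, cy, s=1000.0):
    """Pinhole projection u = fx*x/z + cx, v = fy*y/z + cy, d = z*s
    for an n x 3 array of 3D points; returns n x 3 rows (u, v, d)."""
    x, y, z = points.T
    return np.stack([fx * x / z + cx, fy * y / z + cy, z * s], axis=1)
```

For points on the plane z = 2, the fit recovers a normal proportional to (0, 0, 1) with d = -2c, and the projection maps (0, 0, 2) to the principal point.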

Preferably, a grayscale histogram normalization is applied once to each plane image to enhance its contrast, after which feature points are extracted and their depths computed:

Figure BDA0001505816830000034
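A numpy-only sketch of these two steps. Two assumptions are made here, since the patent shows the depth formula only as an image: "histogram normalization" is implemented as a min-max contrast stretch (rather than histogram equalization), and the feature-point depth is taken as the standard z = d / s from the projection model above.

```python
import numpy as np

def normalize_gray(img):
    """Histogram-normalize (min-max stretch) a grayscale image to the
    full 0..255 range to enhance contrast before feature extraction."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                      # flat image: nothing to stretch
        return np.zeros_like(img, dtype=np.uint8)
    return ((img - lo) * 255.0 / (hi - lo)).astype(np.uint8)

def feature_depth(raw_depth, s=1000.0):
    """Metric depth of a feature point from its raw depth value,
    assuming z = d / s (s = depth scaling factor)."""
    return raw_depth / s
```

After stretching, the darkest pixel maps to 0 and the brightest to 255, which is what makes corner detectors on the plane images more repeatable in low-light underground scenes.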

Preferably, too many keyframes add extra computation for the back end and loop-closure detection, while too few keyframes cause too much motion between keyframes and too few feature matches, so tracking is easily lost. After extracting the planes of an image, the relative motion between it and the previous keyframe is computed; if it exceeds a certain threshold, the image is taken as a new keyframe.

Preferably, the threshold is computed by evaluating the translation and the Euler-angle rotation:

Figure BDA0001505816830000035

where (Δx, Δy, Δz) is the relative translation and (α, β, γ) are the relative Euler angles; w_1 = (m, m, m) with m ∈ (0.6, 0.7), and w_2 ∈ (0.95, 1.05).
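The keyframe decision can be sketched as a predicate. Because the patent's scoring formula appears only as an image, the weighted form below, E = ‖w1 ∘ (Δx, Δy, Δz)‖ + w2 · ‖(α, β, γ)‖, is an assumption, as are the default values m = 0.65 and w2 = 1.0 (taken from the stated intervals):

```python
import numpy as np

def is_new_keyframe(rel_t, rel_euler, thresh, m=0.65, w2=1.0):
    """Decide whether a frame becomes a keyframe from its motion relative
    to the last keyframe.

    Assumed score (the patent gives the formula only as an image):
        E = ||w1 * (dx, dy, dz)|| + w2 * ||(alpha, beta, gamma)||
    with w1 = (m, m, m), m in (0.6, 0.7) and w2 in (0.95, 1.05)."""
    w1 = np.full(3, m)
    E = np.linalg.norm(w1 * np.asarray(rel_t)) + w2 * np.linalg.norm(rel_euler)
    return E > thresh
```

Raising `thresh` yields fewer keyframes (less back-end work, riskier matching); lowering it yields more keyframes, which is the trade-off the paragraph above describes.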

Beneficial Effects of the Invention

This method improves the simultaneous localization accuracy and robustness of the inspection robot operating in underground space, achieving simultaneous localization and map construction in the underground environment. When the inspection robot operates in underground space, the method remains robust even under strong rotation.

Brief Description of the Drawings

Figure 1 is a schematic diagram of the framework of the method of the present invention;

Figure 2 is a grayscale image converted from an RGB image captured by the depth camera of the present invention;

Figure 3 is a depth image captured by the depth camera of the present invention;

Figure 4 is a three-dimensional point-cloud map of the environment constructed by the present invention;

Figure 5 is a three-dimensional tree skip-list map of the environment constructed by the present invention;

Figure 6 is the trajectory of the robot of the present invention.

Detailed Description of Embodiments

The present invention is further described below through specific embodiments with reference to the accompanying drawings.

As shown in Figures 1 to 6, a simultaneous localization and map construction method for an inspection robot in underground space based on depth-inertial odometry proceeds as follows: a depth camera and an inertial measurement unit are loosely coupled; point-cloud information is obtained from the depth maps collected by the depth camera, and plane features are extracted. The RGB images collected by the depth camera are fused with the plane features and optimized with the iterative closest point (ICP) algorithm; the ICP-optimized data are loosely coupled with the inertial measurement unit (IMU) data, and loop-closure detection is used to improve the accuracy of the pose graph, yielding the inspection robot's trajectory, a point-cloud map and a tree skip-list map.

The depth camera captures two adjacent frames as scene S and model M; the two sets of matching points are denoted P = {p_i | i = 1, ..., n} and Q = {q_i | i = 1, ..., n}, where p_i and q_i each denote a three-dimensional point and can be parameterized as

p_i = (x_i, y_i, z_i)^T,  q_i = (x'_i, y'_i, z'_i)^T

The depth camera model is:

d · (u, v, 1)^T = C · (x, y, z)^T

where (u, v) is the pixel position corresponding to the spatial point (x, y, z)^T, d is the depth value, and C is the camera intrinsic matrix.

The motion from M to S is described by a rotation R and a translation t and is solved with the ICP algorithm:

(R, t) = argmin_{R,t} Σ_{i=1}^{n} ‖ p_i − (R q_i + t) ‖²

Plane features are extracted from the point cloud obtained from the depth map, a plane in three-dimensional space being described by four parameters:

p = (a, b, c, d) = {x, y, z | ax + by + cz + d = 0}

Setting d = 0, each fitted plane is projected onto the imaging plane to obtain the imaging position (u, v) of its points, solved with the projection equations:

u = f_x · x / z + c_x,  v = f_y · y / z + c_y,  d = z · s

where f_x, f_y, c_x, c_y are the intrinsic parameters of the depth camera and s is the scaling factor of the depth data.

A grayscale histogram normalization is applied once to each plane image to enhance its contrast; feature points are then extracted and their depths computed:

Figure BDA0001505816830000053

The camera pose in three-dimensional space is expressed by a translation and a unit quaternion: x = {x, y, z, q_x, q_y, q_z, q_w};

The plane group P = {P_i} is extracted from the frame; each plane contains its plane parameters and the feature points belonging to it.

Too many keyframes add extra computation for the back end and loop-closure detection, while too few keyframes cause too much motion between keyframes and too few feature matches, so tracking is easily lost. After extracting the planes of an image, the relative motion between it and the previous keyframe is computed; if it exceeds a certain threshold, the image is taken as a new keyframe. The threshold is computed by evaluating the translation and the Euler-angle rotation:

Figure BDA0001505816830000054

where (Δx, Δy, Δz) is the relative translation and (α, β, γ) are the relative Euler angles; w_1 = (m, m, m) with m ∈ (0.6, 0.7), and w_2 ∈ (0.95, 1.05).

The above is merely a preferred embodiment of the present invention and is not intended to limit the invention to this form; any person skilled in the art may use the technical content disclosed above to derive equivalent embodiments through changes or modifications. However, any simple modification, equivalent change or adaptation of the above embodiments made according to the technical essence of the present invention, without departing from the content of the solution of the present invention, still falls within the protection scope of the present invention.

Claims (6)

1. A simultaneous localization and map construction method for an inspection robot in underground space based on depth-inertial odometry, characterized in that the method comprises:
loosely coupling a depth camera and an inertial measurement unit, obtaining point-cloud information from the depth map collected by the depth camera, and extracting plane features;
converting the RGB image collected by the depth camera into a grayscale image, fusing it with the plane features, and optimizing with the iterative closest point algorithm;
loosely coupling the data optimized by the iterative closest point algorithm with the inertial measurement unit data, and using loop-closure detection to improve the accuracy of the pose graph to obtain the inspection robot's trajectory, a point-cloud map and a tree skip-list map, thereby achieving simultaneous localization and map construction of the inspection robot in the underground space;
wherein the depth camera captures two adjacent frames as scene S and model M, the two sets of matching points being denoted P = {p_i | i = 1, ..., n} and Q = {q_i | i = 1, ..., n}, where p_i and q_i each denote a three-dimensional point and can be parameterized as

p_i = (x_i, y_i, z_i)^T,  q_i = (x'_i, y'_i, z'_i)^T;

and the motion from M to S is described by a rotation R and a translation t, solved with the iterative closest point algorithm:

(R, t) = argmin_{R,t} Σ_{i=1}^{n} ‖ p_i − (R q_i + t) ‖².
2. The method according to claim 1, characterized in that the depth camera model is:

d · (u, v, 1)^T = C · (x, y, z)^T

where (u, v) is the pixel position corresponding to the spatial point (x, y, z)^T, d is the depth value, and C is the camera intrinsic matrix.
3. The method according to claim 1, characterized in that plane features are extracted from the point cloud obtained from the depth map, a plane in three-dimensional space being described by four parameters:

p = (a, b, c, d) = {x, y, z | ax + by + cz + d = 0};

setting d = 0, each fitted plane is projected onto the imaging plane to obtain the imaging position (u, v) of its points, solved with the projection equations:

u = f_x · x / z + c_x

v = f_y · y / z + c_y

d = z · s

where f_x, f_y, c_x, c_y are the intrinsic parameters of the depth camera and s is the scaling factor of the depth data.
4. The method according to claim 3, characterized in that a grayscale histogram normalization is applied once to each plane image to enhance its contrast, after which feature points are extracted and their depths computed:

Figure FDA0002630736360000023
5. The method according to claim 1, characterized in that too many keyframes add extra computation for the back end and loop-closure detection, while too few keyframes cause too much motion between keyframes and too few feature matches, so tracking is easily lost; after extracting the planes of an image, the relative motion between it and the previous keyframe is computed, and if it exceeds a certain threshold the image is taken as a new keyframe.
6. The method according to claim 5, characterized in that the threshold is computed by evaluating the translation and the Euler-angle rotation:

Figure FDA0002630736360000024

where (Δx, Δy, Δz) is the relative translation and (α, β, γ) are the relative Euler angles; w_1 = (m, m, m) with m ∈ (0.6, 0.7), and w_2 ∈ (0.95, 1.05).
CN201711334617.7A 2017-12-13 2017-12-13 Simultaneous localization and map construction method of inspection robot in underground space based on deep inertial odometry Expired - Fee Related CN108052103B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711334617.7A CN108052103B (en) 2017-12-13 2017-12-13 Simultaneous localization and map construction method of inspection robot in underground space based on deep inertial odometry

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711334617.7A CN108052103B (en) 2017-12-13 2017-12-13 Simultaneous localization and map construction method of inspection robot in underground space based on deep inertial odometry

Publications (2)

Publication Number Publication Date
CN108052103A CN108052103A (en) 2018-05-18
CN108052103B true CN108052103B (en) 2020-12-04

Family

ID=62132123

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711334617.7A Expired - Fee Related CN108052103B (en) 2017-12-13 2017-12-13 Simultaneous localization and map construction method of inspection robot in underground space based on deep inertial odometry

Country Status (1)

Country Link
CN (1) CN108052103B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108776487A (en) * 2018-08-22 2018-11-09 中国矿业大学 A kind of mining rail formula crusing robot and its localization method
US11287826B2 (en) 2018-10-12 2022-03-29 Boston Dynamics, Inc. Terrain aware step planning system
US11747825B2 (en) * 2018-10-12 2023-09-05 Boston Dynamics, Inc. Autonomous map traversal with waypoint matching
CN110322511B (en) * 2019-06-28 2021-03-26 华中科技大学 Semantic SLAM method and system based on object and plane features
JP7502409B2 (en) 2019-08-06 2024-06-18 ボストン ダイナミクス,インコーポレイテッド Intermediate Waypoint Generator
CN110722559A (en) * 2019-10-25 2020-01-24 国网山东省电力公司信息通信公司 Auxiliary inspection positioning method for intelligent inspection robot
CN112258568B (en) * 2020-10-12 2022-07-01 武汉中海庭数据技术有限公司 High-precision map element extraction method and device
CN112697131B (en) * 2020-12-17 2024-07-23 中国矿业大学 Underground mobile equipment positioning method and system based on vision and inertial navigation system
CN114720978A (en) * 2021-01-06 2022-07-08 扬智科技股份有限公司 Method and mobile platform for simultaneous localization and mapping
CN117795444A (en) 2021-06-04 2024-03-29 波士顿动力公司 Directional exploration of navigation in dynamic environments
US12304082B2 (en) 2021-06-04 2025-05-20 Boston Dynamics, Inc. Alternate route finding for waypoint-based navigation maps
CN113378694B (en) * 2021-06-08 2023-04-07 北京百度网讯科技有限公司 Method and device for generating target detection and positioning system and target detection and positioning
CN113486854A (en) * 2021-07-29 2021-10-08 北京超维世纪科技有限公司 Recognition detection algorithm for realizing industrial inspection robot based on depth camera
CN114429432B (en) * 2022-04-07 2022-06-21 科大天工智能装备技术(天津)有限公司 A kind of multi-source information layered fusion method, device and storage medium
CN118644433B (en) * 2024-06-18 2024-12-17 中国矿业大学 Centralized collaborative SLAM map construction method and system

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013165391A1 (en) * 2012-05-01 2013-11-07 Intel Corporation Simultaneous localization and mapping using spatial and temporal coherence for indoor location
CN103411621B (en) * 2013-08-09 2016-02-10 东南大学 A kind of vision/INS Combinated navigation method of the optical flow field towards indoor mobile robot
US9759918B2 (en) * 2014-05-01 2017-09-12 Microsoft Technology Licensing, Llc 3D mapping with flexible camera rig
CN107085422A (en) * 2017-01-04 2017-08-22 北京航空航天大学 A remote control system for a multifunctional hexapod robot based on Xtion equipment
CN107063246A (en) * 2017-04-24 2017-08-18 齐鲁工业大学 A Loose Combination Navigation Method of Visual Navigation/Inertial Navigation
CN107160395B (en) * 2017-06-07 2020-10-16 中国人民解放军装甲兵工程学院 Map construction method and robot control system

Also Published As

Publication number Publication date
CN108052103A (en) 2018-05-18

Similar Documents

Publication Publication Date Title
CN108052103B (en) Simultaneous localization and map construction method of inspection robot in underground space based on deep inertial odometry
CN110070615B (en) Multi-camera cooperation-based panoramic vision SLAM method
CN109166149B (en) Positioning and three-dimensional line frame structure reconstruction method and system integrating binocular camera and IMU
WO2020155616A1 (en) Digital retina-based photographing device positioning method
CN106679648B (en) Visual inertia combination SLAM method based on genetic algorithm
CN113223161B (en) Robust panoramic SLAM system and method based on IMU and wheel speed meter tight coupling
CN107909614B (en) A positioning method of inspection robot under GPS failure environment
CN109520497A (en) The unmanned plane autonomic positioning method of view-based access control model and imu
CN114018236A (en) A strongly coupled SLAM method for laser vision based on adaptive factor graph
Momeni-k et al. Height estimation from a single camera view
CN114063099B (en) Positioning method and device based on RGBD
CN110827353B (en) Robot positioning method based on monocular camera assistance
CN112731503A (en) Pose estimation method and system based on front-end tight coupling
CN110766785A (en) A device and method for real-time positioning and three-dimensional reconstruction of underground pipelines
CN116222543A (en) Multi-sensor fusion map construction method and system for robot environment perception
US10977810B2 (en) Camera motion estimation
CN112767482A (en) Indoor and outdoor positioning method and system with multi-sensor fusion
Zhan et al. A slam map restoration algorithm based on submaps and an undirected connected graph
Huai et al. Stereo-inertial odometry using nonlinear optimization
CN117253003A (en) Indoor RGB-D SLAM method integrating direct method and point-plane characteristic method
Xian et al. Fusing stereo camera and low-cost inertial measurement unit for autonomous navigation in a tightly-coupled approach
CN112179373A (en) A kind of measurement method of visual odometer and visual odometer
Caldato et al. ORB-ODOM: Stereo and odometer sensor fusion for simultaneous localization and mapping
CN117635651A (en) A dynamic environment SLAM method based on YOLOv8 instance segmentation
CN117611673A (en) Image-assisted monocular vision SLAM initialization method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20201204