CN105700525A - Robot working environment uncertainty map construction method based on Kinect sensor depth map - Google Patents
- Publication number
- CN105700525A CN105700525A CN201510891318.8A CN201510891318A CN105700525A CN 105700525 A CN105700525 A CN 105700525A CN 201510891318 A CN201510891318 A CN 201510891318A CN 105700525 A CN105700525 A CN 105700525A
- Authority
- CN
- China
- Prior art keywords
- depth
- data
- ground
- map
- grid
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
Abstract
A method for constructing an uncertainty map of a robot's working environment from Kinect sensor depth maps, characterized in that the method comprises the following steps. Step (1): the robot collects depth data with a Kinect sensor. Step (2): the collected depth data are preprocessed to obtain a depth data map. Step (3): ground depth data are collected and a ground model is extracted. Step (4): the ground model is clipped out of the depth data map to obtain an obstacle depth map. Step (5): the obstacle depth map is scanned to identify obstacle regions. Step (6): the obstacle and free regions are analyzed to form an uncertainty grid map of the robot's working environment. The invention can accurately detect the surrounding environment and build an uncertainty grid map, providing the preconditions for the robot to complete further tasks such as obstacle avoidance, navigation, and path planning.
Description
Technical field: the present invention relates to a method for constructing an uncertainty map of a robot's working environment based on Kinect sensor depth maps. The invention enables a robot to detect its surroundings and form an uncertainty map, providing the preconditions for the robot to complete further tasks such as obstacle avoidance, navigation, and path planning.
Background: map construction is one of the core topics in mobile robotics. Its purpose is to represent the surrounding environment so that the robot can interpret environmental information for its subsequent work. Many methods exist for building environment maps. Laser-based methods suffer from expensive sensors and poor cost-effectiveness; ultrasonic-based methods yield coarse, low-accuracy environmental information; visual-sensor-based methods are computationally complex and difficult to implement. The sensor used in the present invention is the Kinect, released by Microsoft in 2010. It captures not only an optical image of the environment but also the position of objects in that image; it provides rich information, adapts well to the environment, has a simple structure, works in real time, and is inexpensive, making it a favorable tool for robot environment perception.
The Kinect sensor collects 3D information about the indoor environment through a color camera and a depth camera, outputting an RGB image and an infrared depth image that determine the color and depth of every point in the scene. The map built by the present invention is a grid map: the environment is discretized into a series of grid cells, each of which has a state. Because the depth error of the Kinect grows with distance, the obstacles detected in each cell carry uncertainty, and the resulting map is therefore an uncertainty grid map.
Summary of the invention:
Purpose: the present invention provides a method for constructing an uncertainty map of a robot's working environment from Kinect sensor depth maps. Its purpose is to detect the surrounding environment and construct a map that supports the robot's subsequent tasks.
Technical solution: the present invention is implemented through the following technical scheme:
1. A method for constructing an uncertainty map of a robot's working environment based on Kinect sensor depth maps, characterized in that the method comprises the following steps:
Step (1): the robot collects depth data with a Kinect sensor;
Step (2): the collected depth data are preprocessed to obtain a depth data map;
Step (3): ground depth data are collected and a ground model is extracted;
Step (4): the ground model is clipped out of the depth data map to obtain an obstacle depth map; the obstacle depth map is then clipped out of the original depth data map to obtain a ground depth map;
Step (5): the obstacle depth map is scanned to identify obstacle regions, the ground depth map is scanned to identify free regions, and both are analyzed to form an uncertainty grid map;
Step (6): the obstacle and free regions are analyzed to form the uncertainty grid map of the robot's working environment.
In step (3) the ground model is extracted by collecting a depth map in an open, obstacle-free environment. From the Kinect imaging principle, the depth image has the following properties: (1) it is independent of image features and depends only on distance; (2) the direction of gray-value change coincides with the z-axis of the depth camera's field of view, and the gray value grows with distance. Ground points at the same distance from the Kinect therefore have the same depth. With the Kinect's height above the ground and its pitch angle held fixed, a depth map is collected in an open, obstacle-free environment. Beyond a certain threshold distance the Kinect can no longer detect the ground ahead, so only the detectable ground data are kept; everything else is treated as invalid and recorded as 0. Because of the sensor's own limitations, nearby ground data are collected well while distant data are incomplete and noisy, so further processing is needed: each row of the depth image records the ground depth at one distance from the Kinect; the rows are scanned, invalid readings are removed, and the weighted average of the remaining readings gives the ground depth for that row. Processing every row in this way yields a ground model template, i.e. the ground model, whose data are saved in the program's root directory.
In step (5) the map is divided into free, obstacle, and unknown regions. The free region is the detected ground area, the obstacle region is the area where obstacles are detected, and the unknown region is everything else. A structure records each grid cell's information: the cell's state flag, its confidence, and its color. The specific operation comprises the following steps:
(1) Free-region detection algorithm: compare the depth data collected by the Kinect with the ground depth data. If the difference between them is smaller than a threshold, keep the ground depth value; otherwise set it to 0. The result is the ground depth information, which is mapped into the world coordinate system, and the grid cells it occupies are recorded.
(2) Obstacle-region detection algorithm: compare the depth data collected by the Kinect with the ground depth data. If the difference between them is smaller than a threshold, set the value to 0; otherwise keep the collected depth value. The result is the obstacle depth information, which is scanned and analyzed column by column, mapped into the world coordinate system, and the grid cells it occupies are recorded.
From the characteristics of the depth data collected by the Kinect sensor and an obstacle-confidence model, a formula for the obstacle confidence is derived, yielding the uncertainty grid map.
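For illustration, the per-cell record described in step (5) — a state flag, a confidence value, and a display color — might be sketched as follows; the field names and the state encoding are assumptions, not taken from the patent:

```python
from dataclasses import dataclass

# Possible cell states; the integer encoding is an assumption.
FREE, OBSTACLE, UNKNOWN = 0, 1, 2

@dataclass
class GridCell:
    state: int = UNKNOWN            # FREE, OBSTACLE, or UNKNOWN
    confidence: float = 0.0         # obstacle confidence in [0, 1]
    color: tuple = (128, 128, 128)  # display color of the cell

# Marking one cell as an obstacle with a given confidence:
cell = GridCell()
cell.state = OBSTACLE
cell.confidence = 0.83
cell.color = (0, 0, 0)
```

A plain structure like this keeps the state, confidence, and color of every cell together, which is all the later map-drawing steps need.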
Advantages and effects:
The present invention uses a Kinect sensor to build a local grid map, dividing the environment into three parts: free, obstacle, and unknown regions. The robot can move in the free region, cannot move in the obstacle region, and must re-detect the unknown region. Compared with a visual sensor, the invention obtains not only the color information of the environment but also distance information, so it can build a better map; compared with an ultrasonic sensor, the environmental information it obtains is finer and more accurate; compared with a laser sensor, its detection range is larger, it obtains three-dimensional information, and it is more cost-effective.
The invention clips the ground model out of the depth map collected by the Kinect sensor, removing the influence of the ground on obstacle detection, and achieves fast obstacle detection through column scanning of the obstacle depth map. To account for the Kinect's own limitations, an obstacle-confidence model for the grid determines each cell's obstacle confidence, establishing the grid's uncertainty and making the map more accurate. The invention can thus accurately detect the surrounding environment and build an uncertainty grid map, providing the preconditions for the robot to complete further tasks such as obstacle avoidance, navigation, and path planning.
Description of drawings:
Figure 1 shows the original ground depth map;
Figure 2 shows the processed ground depth map;
Figure 3 shows the original depth map;
Figure 4 shows the obstacle depth map after the ground model has been clipped out;
Figure 5 shows the ground depth map after the obstacles have been clipped out;
Figure 6 shows the obstacle-distribution coordinate system;
Figure 7 shows the uncertainty grid map;
Figure 8 shows the obstacle-confidence model.
Specific embodiments: the present invention is described in detail below with reference to the accompanying drawings.
The method of the present invention for constructing an uncertainty map of a robot's working environment from Kinect sensor depth maps comprises the following steps:
Step 1: the robot collects depth data with a Kinect sensor. The collected depth data are stored in a one-dimensional array for subsequent processing.
Step 2: preprocess the collected depth data to obtain a depth data map. The depth values must first be mapped to color values for display. Experimental tests show that the Kinect's effective detection range is within 10 meters, so distances from 0 to 10 m are mapped linearly to values from 0 to 255, i.e. distance is mapped to color, producing the depth color map shown in Figure 3.
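The linear distance-to-gray mapping described in this step might be sketched as follows (depths in millimetres; the clamping of out-of-range readings is an assumption):

```python
def depth_to_gray(depth_mm, max_range_mm=10_000):
    """Map a depth reading in mm to a gray value: 0-10 m -> 0-255.

    Readings beyond the valid range are clamped; 0 (invalid) stays 0.
    """
    d = max(0, min(depth_mm, max_range_mm))
    return round(d * 255 / max_range_mm)

# One row of raw depth data mapped to gray values for display:
row = [0, 2500, 5000, 10_000]
gray = [depth_to_gray(d) for d in row]   # -> [0, 64, 128, 255]
```

Applying the mapping per pixel turns the one-dimensional depth array into a displayable grayscale (or pseudo-color) image.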
Step 3: collect ground depth data and extract the ground model. Since the depth values collected by the Kinect depend only on distance, ground points at the same distance from the Kinect have the same depth, so the ground information can be extracted as a template.
In this step, a depth map is collected in an open, obstacle-free environment. From the Kinect imaging principle, the depth image has the following properties: (1) it is independent of image features and depends only on distance; (2) the direction of gray-value change coincides with the z-axis of the depth camera's field of view, and the gray value grows with distance. Ground points at the same distance from the Kinect therefore have the same depth. With the Kinect's height above the ground and its pitch angle held fixed, a depth map is collected in an open, obstacle-free environment. Beyond a certain threshold distance the Kinect can no longer detect the ground ahead, so only the detectable ground data are kept; everything else is treated as invalid and recorded as 0. Because of the sensor's own limitations, nearby ground data are complete while distant data are incomplete and noisy, so further processing is needed.
Each row of the depth image records the ground depth at one distance from the Kinect. The rows are scanned, readings of 0 are removed, and the weighted average of the remaining readings gives the ground depth for that row. Processing every row in this way yields a ground model template, i.e. the ground model; its data are saved in the program's root directory. The original ground depth map is shown in Figure 1 and the processed ground model depth map in Figure 2.
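The row-scan step above might be sketched like this; the patent specifies a weighted average but does not give the weights, so uniform weights are assumed here:

```python
def ground_model_row(row):
    """Collapse one depth-image row into one ground depth value:
    drop invalid (0) readings and average the rest."""
    valid = [d for d in row if d > 0]
    if not valid:
        return 0  # the whole row is invalid: no ground information
    return sum(valid) / len(valid)

def build_ground_model(depth_image):
    # One value per image row: the ground depth at that row.
    return [ground_model_row(row) for row in depth_image]

model = build_ground_model([[0, 1200, 1210, 0],
                            [0, 0, 0, 0]])      # second row: no data
```

The resulting list of per-row ground depths is the template that later steps compare incoming depth frames against.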
Step 4: clip the obstacles from the depth map to obtain the ground depth map; scan the obstacle depth map to identify obstacle regions; scan the ground depth map to identify free regions; and analyze the obstacle and free regions to form the uncertainty grid map.
Obstacle-region detection algorithm. The concrete implementation steps are as follows:
(1) Compare the depth data collected by the Kinect with the ground depth data. If the difference between them is smaller than a threshold, set the value to 0; otherwise keep the collected depth value. The resulting obstacle depth map is shown in Figure 4.
(2) Scan the resulting depth map column by column. Taking the first column as an example: when the first non-zero value is reached, record it as the seed point of the first obstacle. When the second non-zero value is reached, compare it with the first: if their difference is smaller than a threshold, merge the two into one seed point whose value is their average; if their difference exceeds the threshold, record the latter as a new seed point. Continue until the column has been scanned. A structure records the obstacle information of each column: the number of obstacles, each obstacle's distance, the number of pixels it contains, and the coordinates of its top and bottom.
(3) Repeat step (2) for every column to obtain the information of all distinct obstacles, then discard every obstacle whose pixel count is below a threshold.
(4) Step (3) yields a coordinate system whose abscissa is the pixel position in the image and whose ordinate is the actual distance; each point in it represents an obstacle. The result is shown in Figure 6.
(5) Convert the coordinate system obtained in step (4) into an obstacle display in actual distance coordinates. This requires transforming from the image coordinate system to the camera coordinate system and then to the world coordinate system; formula (1) converts the obstacle data from image coordinates to world coordinates:
dx = (u − u0) · dz / fx,  dz = depth(u, v)    (1)
In formula (1), dx is the offset of pixel (u, v) from the image centre (u0, v0) along the X direction, dz is the depth distance at that pixel, and fx is the camera intrinsic giving the focal length in the X direction, taken as a constant.
(6) Determine which grid cell in the world coordinate system each obstacle point belongs to, and record that cell.
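A minimal Python sketch of the column scan of steps (2)–(3); the two threshold values are assumptions, since the patent does not give concrete numbers:

```python
def scan_column(col, merge_thresh=100, min_pixels=3):
    """Column-scan obstacle extraction as in steps (2)-(3):
    group non-zero depths into seed points, merging readings within
    merge_thresh of the current seed (the seed becomes the running
    average) and discarding obstacles smaller than min_pixels."""
    obstacles = []   # each: depth (seed), pixel count, top/bottom rows
    cur = None
    for i, d in enumerate(col):
        if d == 0:
            continue                                 # clipped-out pixel
        if cur is not None and abs(d - cur['depth']) < merge_thresh:
            cur['depth'] = (cur['depth'] + d) / 2    # merge into seed
            cur['pixels'] += 1
            cur['bottom'] = i
        else:                                        # new obstacle
            cur = {'depth': d, 'pixels': 1, 'top': i, 'bottom': i}
            obstacles.append(cur)
    return [o for o in obstacles if o['pixels'] >= min_pixels]

# One column: a 3-pixel obstacle around 1.5 m and a 1-pixel noise spike.
obs = scan_column([0, 0, 1500, 1510, 1505, 0, 3000, 0])
```

Running this over every column yields the per-column obstacle records that step (4) plots as the pixel-position vs. distance coordinate system.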
Free-region detection algorithm. The concrete implementation steps are as follows:
(1) Compare the depth data collected by the Kinect with the ground depth data. If the difference between them is smaller than a threshold, keep the ground depth value; otherwise set it to 0. The resulting ground depth map is shown in Figure 5.
(2) Use formula (1) to convert the ground data from the image coordinate system to coordinates in the world coordinate system.
(3) Determine which grid cell in the world coordinate system each ground point belongs to, and record that cell.
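As an illustration, formula (1) and the grid-assignment step might look as follows; the intrinsic values u0 and fx are placeholders for a 640×480 Kinect depth image, not calibration values from the patent:

```python
def pixel_to_world_x(u, dz, u0=320, fx=580.0):
    """Formula (1): dx = (u - u0) * dz / fx, with dz = depth(u, v)."""
    return (u - u0) * dz / fx

def world_to_grid(dx, dz, grid_length=120):
    # Index of the grid cell (square cells of side grid_length, here
    # in mm) that the world point (dx, dz) falls into.
    return (int(dx // grid_length), int(dz // grid_length))

dx = pixel_to_world_x(u=436, dz=2900)   # a point right of image centre
cell = world_to_grid(dx, 2900)          # the grid cell it occupies
```

The same two functions serve both detection algorithms: obstacle points and ground points are projected into world coordinates and binned into cells identically.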
Step 5: perform a confidence analysis on the obstacles to determine each obstacle's confidence.
The concrete implementation steps of the obstacle-confidence algorithm are as follows:
Because the Kinect's distance measurements contain error, a confidence analysis of the grid is required. The error of the depth data detected by the Kinect grows as the distance increases, and the two are related by a fixed proportion, formula (2):
σz = (m / (f · b)) · σd · z²    (2)
In formula (2), σz is the error at distance z, f is the focal length of the depth camera, b is the baseline length (the distance between the infrared emitter and receiver), m is a normalization parameter, z is the actual depth distance, and σd is one half of a pixel's distance.
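The error model of formula (2) might be coded as follows; the parameter values are illustrative placeholders, not calibrated constants:

```python
def depth_error(z, f=580.0, b=75.0, m=0.125, sigma_d=0.5):
    """Formula (2): sigma_z = (m / (f * b)) * sigma_d * z**2.
    The depth error grows with the square of the distance z."""
    return (m / (f * b)) * sigma_d * z ** 2

# Doubling the distance roughly quadruples the error:
e2, e4 = depth_error(2000.0), depth_error(4000.0)
```

This quadratic growth is exactly why distant cells must receive lower confidence in the next step.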
The obstacle-confidence model is shown in Figure 8:
This yields the obstacle-confidence model: if an obstacle is detected in a grid cell, the probability of it falling in that cell is (grid_length − 2σz)² / grid_length², which gives formula (3) for computing the grid confidence.
In formula (3), p is the confidence of the grid cell, f1 and f2 are the weights of the two factors that influence the confidence, E is the influence of the measurement error on the confidence, R is the influence of the grid length on the confidence, σMax is the maximum error at the farthest distance, z_max is the farthest detectable distance, and grid_length is the actual length of a grid cell.
Each pixel represents an actual distance of 4 cm, each grid cell represents a 12 cm × 12 cm square, and the whole local map covers a 10 m × 10 m environment. The result is shown in Figure 7.
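Since the text names the symbols of formula (3) but not its exact form, the following is only a hedged sketch: R follows the probability given above, while the form of E and the weighting by f1 and f2 as a convex combination are assumptions:

```python
def grid_confidence(z, sigma_z, f1=0.5, f2=0.5,
                    grid_length=120.0, z_max=10_000.0):
    """Hedged sketch of formula (3): weight an error term E and a grid
    term R by f1 and f2. R is (grid_length - 2*sigma_z)**2 /
    grid_length**2 as stated in the text; E and the combination are
    assumptions."""
    # Worst-case error at z_max, extrapolated from the z^2 growth of
    # formula (2).
    sigma_max = (sigma_z / z ** 2) * z_max ** 2
    E = 1.0 - sigma_z / sigma_max              # less error, more trust
    R = max(0.0, grid_length - 2.0 * sigma_z) ** 2 / grid_length ** 2
    return f1 * E + f2 * R

p = grid_confidence(z=2000.0, sigma_z=5.7)     # confidence of one cell
```

With this form, larger measurement errors shrink R and hence the confidence, matching the qualitative behaviour the patent describes.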
Claims (7)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510891318.8A CN105700525B (en) | 2015-12-07 | 2015-12-07 | Method is built based on Kinect sensor depth map robot working environment uncertainty map |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105700525A true CN105700525A (en) | 2016-06-22 |
CN105700525B CN105700525B (en) | 2018-09-07 |
Family
ID=56228182
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510891318.8A Expired - Fee Related CN105700525B (en) | 2015-12-07 | 2015-12-07 | Method is built based on Kinect sensor depth map robot working environment uncertainty map |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105700525B (en) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108227712A (en) * | 2017-12-29 | 2018-06-29 | 北京臻迪科技股份有限公司 | The avoidance running method and device of a kind of unmanned boat |
CN108673510A (en) * | 2018-06-20 | 2018-10-19 | 北京云迹科技有限公司 | Robot security's advance system and method |
CN109426760A (en) * | 2017-08-22 | 2019-03-05 | 聚晶半导体股份有限公司 | Road image processing method and road image processing device |
CN109645892A (en) * | 2018-12-12 | 2019-04-19 | 深圳乐动机器人有限公司 | A kind of recognition methods of barrier and clean robot |
CN110202577A (en) * | 2019-06-15 | 2019-09-06 | 青岛中科智保科技有限公司 | A kind of autonomous mobile robot that realizing detection of obstacles and its method |
CN110275540A (en) * | 2019-07-01 | 2019-09-24 | 湖南海森格诺信息技术有限公司 | Semantic navigation method and its system for sweeping robot |
WO2020015501A1 (en) * | 2018-07-17 | 2020-01-23 | 北京三快在线科技有限公司 | Map construction method, apparatus, storage medium and electronic device |
WO2021022615A1 (en) * | 2019-08-02 | 2021-02-11 | 深圳大学 | Method for generating robot exploration path, and computer device and storage medium |
WO2021120999A1 (en) * | 2019-12-20 | 2021-06-24 | 深圳市杉川机器人有限公司 | Autonomous robot |
CN113063352A (en) * | 2021-03-31 | 2021-07-02 | 深圳中科飞测科技股份有限公司 | Detection method and device, detection equipment and storage medium |
CN114004874A (en) * | 2021-12-30 | 2022-02-01 | 贝壳技术有限公司 | Acquisition method and device of occupied grid map |
CN115022808A (en) * | 2022-06-21 | 2022-09-06 | 北京天坦智能科技有限责任公司 | Instant positioning and radio map construction method for communication robot |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102938142A (en) * | 2012-09-20 | 2013-02-20 | 武汉大学 | Method for filling indoor light detection and ranging (LiDAR) missing data based on Kinect |
CN104677347A (en) * | 2013-11-27 | 2015-06-03 | 哈尔滨恒誉名翔科技有限公司 | Indoor mobile robot capable of producing 3D navigation map based on Kinect |
CN104794748A (en) * | 2015-03-17 | 2015-07-22 | 上海海洋大学 | Three-dimensional space map construction method based on Kinect vision technology |
CN105045263A (en) * | 2015-07-06 | 2015-11-11 | 杭州南江机器人股份有限公司 | Kinect-based robot self-positioning method |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102938142A (en) * | 2012-09-20 | 2013-02-20 | 武汉大学 | Method for filling indoor light detection and ranging (LiDAR) missing data based on Kinect |
CN104677347A (en) * | 2013-11-27 | 2015-06-03 | 哈尔滨恒誉名翔科技有限公司 | Indoor mobile robot capable of producing 3D navigation map based on Kinect |
CN104794748A (en) * | 2015-03-17 | 2015-07-22 | 上海海洋大学 | Three-dimensional space map construction method based on Kinect vision technology |
CN105045263A (en) * | 2015-07-06 | 2015-11-11 | 杭州南江机器人股份有限公司 | Kinect-based robot self-positioning method |
Non-Patent Citations (1)
Title |
---|
Shen Liman, "Research on multi-robot cooperative map building methods in indoor environments", China Master's Theses Full-text Database |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109426760A (en) * | 2017-08-22 | 2019-03-05 | Altek Semiconductor Corp. | Road image processing method and road image processing device |
CN108227712A (en) * | 2017-12-29 | 2018-06-29 | Beijing Zhendi Technology Co., Ltd. | Obstacle-avoidance navigation method and device for an unmanned boat |
CN108673510A (en) * | 2018-06-20 | 2018-10-19 | Beijing Yunji Technology Co., Ltd. | Robot safe-advance system and method |
CN110728684A (en) * | 2018-07-17 | 2020-01-24 | Beijing Sankuai Online Technology Co., Ltd. | Map construction method and device, storage medium and electronic equipment |
CN110728684B (en) * | 2018-07-17 | 2021-02-02 | Beijing Sankuai Online Technology Co., Ltd. | Map construction method and device, storage medium and electronic equipment |
WO2020015501A1 (en) * | 2018-07-17 | 2020-01-23 | Beijing Sankuai Online Technology Co., Ltd. | Map construction method, apparatus, storage medium and electronic device |
CN109645892A (en) * | 2018-12-12 | 2019-04-19 | Shenzhen LD Robot Co., Ltd. | Obstacle recognition method and cleaning robot |
CN110202577A (en) * | 2019-06-15 | 2019-09-06 | Qingdao Zhongke Zhibao Technology Co., Ltd. | Autonomous mobile robot realizing obstacle detection, and method therefor |
CN110275540A (en) * | 2019-07-01 | 2019-09-24 | Hunan Haisen Genuo Information Technology Co., Ltd. | Semantic navigation method and system for a sweeping robot |
WO2021022615A1 (en) * | 2019-08-02 | 2021-02-11 | Shenzhen University | Method for generating robot exploration path, and computer device and storage medium |
US20230096982A1 * | 2019-08-02 | 2023-03-30 | Shenzhen University | Method for generating robot exploration path, computer device, and storage medium |
US12147235B2 * | 2019-08-02 | 2024-11-19 | Shenzhen University | Method for generating robot exploration path for a robot to move along, computer device, and storage medium |
WO2021120999A1 (en) * | 2019-12-20 | 2021-06-24 | Shenzhen Shanchuan Robot Co., Ltd. | Autonomous robot |
CN113063352A (en) * | 2021-03-31 | 2021-07-02 | Shenzhen Zhongke Feice Technology Co., Ltd. | Detection method and device, detection equipment and storage medium |
CN114004874A (en) * | 2021-12-30 | 2022-02-01 | Beike Technology Co., Ltd. | Method and device for acquiring an occupancy grid map |
CN115022808A (en) * | 2022-06-21 | 2022-09-06 | Beijing Tiantan Intelligent Technology Co., Ltd. | Simultaneous localization and radio map construction method for a communication robot |
CN115022808B (en) * | 2022-06-21 | 2022-11-08 | Beijing Tiantan Intelligent Technology Co., Ltd. | Simultaneous localization and radio map construction method for a communication robot |
Also Published As
Publication number | Publication date |
---|---|
CN105700525B (en) | 2018-09-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105700525A (en) | Robot working environment uncertainty map construction method based on Kinect sensor depth map | |
CN110084116B (en) | Road surface detection method, road surface detection device, computer equipment and storage medium | |
US10102428B2 (en) | Systems and methods for surface and subsurface damage assessments, patch scans, and visualization | |
Kumar et al. | Automated road markings extraction from mobile laser scanning data | |
CN105955258B (en) | Robot global grating map construction method based on the fusion of Kinect sensor information | |
EP2304688B1 (en) | Automated building outline detection | |
CN111340012B (en) | Geological disaster interpretation method and device and terminal equipment | |
CN111709988B (en) | Method and device for determining characteristic information of object, electronic equipment and storage medium | |
CN116051533A (en) | Construction quality detection method and device for electric power pole tower | |
CN111721279A (en) | Tail end path navigation method suitable for power transmission inspection work | |
CN113360587B (en) | Land surveying and mapping equipment and method based on GIS technology | |
CN114882316A (en) | Target detection model training method, target detection method and device | |
CN115588040A (en) | System and method for counting and positioning coordinates based on full-view imaging points | |
KR20220001274A (en) | 3D map change area update system and method | |
Chen et al. | Detection of damaged infrastructure on disaster sites using mobile robots | |
CN114004950B (en) | BIM and LiDAR technology-based intelligent pavement disease identification and management method | |
Karantanellis et al. | Evaluating the quality of photogrammetric point-clouds in challenging geo-environments–a case study in an Alpine Valley | |
CN113223155A (en) | Distance prediction method, device, equipment and medium | |
CN113433568B (en) | Laser radar observation simulation method and device | |
CN116892944B (en) | Agricultural machinery navigation line generation method and device, and navigation method and device | |
CN118566908A (en) | Monitoring data transmission method, electronic equipment and storage medium | |
CN117611661A (en) | Slope displacement monitoring method and system | |
Pagán et al. | 3D modelling of dune ecosystems using photogrammetry from remotely piloted air systems surveys | |
Middelburg | Application of Structure from Motion and a Hand-Held Lidar System to Monitor Erosion at Outcrops | |
CN118447155A (en) | Cable trench mapping system based on wireless inspection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20180907; Termination date: 20191207 |