WO2022111723A1 - Road edge detection method and robot - Google Patents

Road edge detection method and robot

Info

Publication number
WO2022111723A1
WO2022111723A1 · PCT/CN2021/134282
Authority
WO
WIPO (PCT)
Application number
PCT/CN2021/134282
Other languages
French (fr)
Chinese (zh)
Inventor
黄寅
张涛
吴翔
郭璁
Original Assignee
深圳市普渡科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市普渡科技有限公司
Publication of WO2022111723A1

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0257Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0268Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D1/0274Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device


Abstract

A road edge detection method and a robot. The road edge detection method comprises: acquiring depth data, a robot pose, and a topological map (101); establishing a static obstacle map according to the depth data and the robot pose (102); calculating a grayscale value of the static obstacle map (103); and calculating a road edge according to the robot pose, the topological map, and the grayscale value (104).

Description

Road Edge Detection Method and Robot
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to the Chinese patent application filed with the China Patent Office on November 30, 2020, with application No. 202011380356.4 and entitled "Road Edge Detection Method and Robot", the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present application relates to the field of Internet technologies, and in particular to a road edge detection method and a robot.
BACKGROUND
Mobile robots move through specific scenarios, such as restaurants, hotels, office buildings, and hospitals, to autonomously perform tasks such as delivery, guidance, inspection, and disinfection. In these scenarios the robot needs to build a map and perform path planning; many obstacles lie near the robot's path, and the robot must avoid them while moving. In the prior art, the robot detects the edge of the road it travels using lidar alone, but this approach has large errors, and the robot is prone to colliding with obstacles.
SUMMARY
According to various embodiments of the present application, a road edge detection method and a robot are provided. The road edge detection method comprises:
acquiring depth data, a robot pose, and a topological map;
establishing a static obstacle map according to the depth data and the robot pose;
calculating a grayscale value of the static obstacle map; and
calculating a road edge according to the robot pose, the topological map, and the grayscale value.
A robot comprises a memory and a processor, the memory storing computer-readable instructions executable on the processor; when the processor executes the computer-readable instructions, the steps of the above road edge detection method are implemented.
A computer storage medium stores computer-readable instructions which, when executed by a processor, implement the steps of the above road edge detection method.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below. Other features and advantages of the present application will become apparent from the description, the drawings, and the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Evidently, the drawings described below show only some embodiments of the present application; for those of ordinary skill in the art, drawings of other embodiments can be obtained from these drawings without creative effort.
FIG. 1 is a schematic flowchart of the road edge detection method of the present application;
FIG. 2 is a schematic flowchart of an embodiment of the road edge detection method of the present application;
FIG. 3 is a schematic flowchart of an embodiment of the road edge detection method of the present application.
DETAILED DESCRIPTION
To facilitate understanding of the present application, the application is described more fully below with reference to the related drawings, in which preferred embodiments are shown. The application may, however, be implemented in many different forms and is not limited to the embodiments described herein; rather, these embodiments are provided so that the disclosure of the application will be thorough and complete.
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of the invention. The terms used in the description are for the purpose of describing particular embodiments only and are not intended to limit the application. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Preferred embodiments of the present application are described in detail below with reference to the drawings. In the following description, the same reference numerals denote the same components, and repeated descriptions are omitted. The drawings are only schematic; the dimensional ratios and shapes of components may differ from the actual ones.
As shown in FIG. 1, an embodiment of the present application provides a road edge detection method, comprising:
101. acquiring depth data, a robot pose, and a topological map;
102. establishing a static obstacle map according to the depth data and the robot pose;
103. calculating a grayscale value of the static obstacle map;
104. calculating a road edge according to the robot pose, the topological map, and the grayscale value.
In this way, the depth data, the robot pose, the topological map, and the static obstacle map are fused, so that the road edge is calculated more accurately and the possibility of the robot colliding with obstacles is reduced.
In this embodiment, the depth data may be acquired by scanning the robot's surroundings with a lidar mounted on the robot. The robot pose includes the position information and orientation information of the robot and may be acquired via lidar, an IMU, an odometer, or the like. The topological map is a manually specified movement path of the robot.
As shown in FIG. 2, in this embodiment, step 102 is followed by:
1021. setting a plurality of measurement grids on the static obstacle map, the resolution of the measurement grids being the same as that of the static obstacle map, and aligning the plurality of measurement grids with the static obstacle map;
1022. transforming the depth point cloud of the depth data from the robot coordinate system to the world coordinate system according to the robot pose, and projecting it onto the ground;
1023. marking each measurement grid according to whether the depth point cloud is present in it;
1024. when a measurement grid contains the depth point cloud, increasing the grid value of the static obstacle map in the corresponding area by a first feature value; when a measurement grid does not contain the depth point cloud, decreasing the grid value of the corresponding area by a second feature value.
Fusing the depth point cloud with the measurement grids in this way makes the quantitative distinction between cells of the static obstacle map with and without depth points more pronounced.
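The per-frame update of steps 1021-1024 can be sketched as follows. This is a minimal illustration rather than the application's implementation: the grid resolution, map origin, the numeric first/second feature values (`inc`/`dec`), the clamping range, and the planar (x, y, yaw) pose used for the world-frame transform are all assumptions.

```python
import numpy as np

def update_static_obstacle_map(grid, points_robot, pose, resolution=0.05,
                               origin=(0.0, 0.0), inc=40, dec=10):
    """Sketch of steps 1021-1024: fuse one depth frame into the static
    obstacle map. `inc`/`dec` stand in for the first/second feature
    values, which the application does not specify numerically.

    grid         -- 2D int array of grid values (the static obstacle map)
    points_robot -- (N, 3) depth point cloud in the robot frame
    pose         -- (x, y, yaw) of the robot in the world frame
    """
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    # Step 1022: robot frame -> world frame, then project onto the
    # ground plane by dropping z.
    world_x = c * points_robot[:, 0] - s * points_robot[:, 1] + x
    world_y = s * points_robot[:, 0] + c * points_robot[:, 1] + y
    # World coordinates -> grid indices (measurement grid aligned with,
    # and at the same resolution as, the static obstacle map).
    ix = ((world_x - origin[0]) / resolution).astype(int)
    iy = ((world_y - origin[1]) / resolution).astype(int)
    inside = (ix >= 0) & (ix < grid.shape[1]) & (iy >= 0) & (iy < grid.shape[0])
    # Step 1023: mark (as 1) cells that received at least one point.
    hit = np.zeros_like(grid, dtype=bool)
    hit[iy[inside], ix[inside]] = True
    # Step 1024: raise hit cells by the first feature value, lower the
    # rest by the second; clamp so values stay in a byte range.
    grid = np.where(hit, grid + inc, grid - dec)
    return np.clip(grid, 0, 255)
```

With clamping, cells observed repeatedly stay near the top of the value range while transient readings decay, which is what lets static obstacles stand out in later steps.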
In this embodiment, step 103 specifically comprises:
fusing, over several consecutive frames, the increases by the first feature value and decreases by the second feature value applied to the grid values of the static obstacle map, and calculating the grayscale value from the result.
In this case, the grayscale value incorporates the changes of the grid values over consecutive frames, which makes the identification of obstacles more accurate.
In this embodiment, the grayscale value is the grayscale value of each pixel.
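The application does not spell out how the consecutive-frame grid values are fused into the grayscale value; one plausible sketch, used here purely as an assumption, is an exponential moving average over the per-frame grids, so pixels whose grid values stay high across frames come out bright:

```python
import numpy as np

def fuse_frames_to_gray(frames, alpha=0.5):
    """Sketch of step 103 under an assumed fusion rule (exponential
    moving average); the application only states that several
    consecutive frames of grid-value changes are fused.

    frames -- iterable of 2D arrays of grid values (one per frame)
    Returns a uint8 grayscale image, one value per pixel.
    """
    gray = None
    for frame in frames:
        f = frame.astype(float)
        gray = f if gray is None else alpha * f + (1.0 - alpha) * gray
    return np.clip(gray, 0, 255).astype(np.uint8)
```

A pixel that is bright in every frame keeps a high grayscale value, while a cell hit by noise in a single frame is averaged back down.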
In this embodiment, step 1023 specifically comprises:
marking a measurement grid that contains the depth point cloud as 1, and marking a measurement grid that does not contain the depth point cloud as 0.
As shown in FIG. 3, in this embodiment, step 104 specifically comprises:
1041. finding, in the topological map and according to the robot pose, the topological path of the road on which the robot is currently located;
1042. sampling along the topological path at specific spatial intervals;
1043. starting from each sampling position, querying the grayscale values along the normal direction of the topological path;
1044. when a grayscale value is greater than a threshold, recording the corresponding coordinate position;
1045. fitting several of the recorded coordinate positions into a straight line.
In this case, when a grayscale value is greater than the threshold, the corresponding pixel is a peak pixel and can be identified as the location of an obstacle. The road edge can then be fitted from the obstacle locations in combination with the topological path, which improves the accuracy of road edge detection.
In some examples, the spatial intervals may be equally spaced.
In this embodiment, sampling at specific spatial intervals reduces the amount of computation and improves computational efficiency.
In some examples, the grayscale values at the sampling points in one pass are [0, 0, 10, 15, 20, 200, 215, 170, 120, 180, 100, 50]. With a threshold of 190, the coordinate positions corresponding to the two grayscale values 200 and 215 are recorded.
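Steps 1043-1044 and the example above can be sketched as follows. The query spacing along the normal (`step`) and the way the grayscale profile is handed in are illustrative assumptions; in the method itself the values would be read out of the grayscale image along the normal ray.

```python
import numpy as np

def unit_normal(p0, p1):
    """Unit normal of a straight topological path segment p0 -> p1."""
    d = np.asarray(p1, float) - np.asarray(p0, float)
    return np.array([-d[1], d[0]]) / np.linalg.norm(d)

def record_peak_positions(sample_pt, normal, gray_profile, step=0.05,
                          threshold=190):
    """Steps 1043-1044: starting at a sampling position, walk along the
    path normal and record every coordinate whose grayscale value
    exceeds the threshold. `gray_profile` stands in for the values
    queried from the grayscale image at successive steps."""
    pts = []
    for i, g in enumerate(gray_profile):
        if g > threshold:
            pts.append(tuple(np.asarray(sample_pt, float) + i * step * normal))
    return pts
```

Running this on the example profile with threshold 190 records exactly the positions of the values 200 and 215, matching the text above.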
In this embodiment, step 1045 specifically comprises:
fitting the straight line using a random sample consensus (RANSAC) algorithm.
In this embodiment, fitting the straight line using the RANSAC algorithm specifically comprises:
calculating the confidence of each straight line; and
selecting, on each of the two sides of the topological map, the straight line with the highest score as the road edge.
In this embodiment, several straight lines are fitted on each side of the topological map; the confidence of each straight line is calculated, and the straight lines serving as the road edges are selected accordingly.
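A minimal RANSAC loop for step 1045 might look like the following; the iteration count and inlier tolerance are assumptions, and the per-side scoring and selection of step 104's final edges would be layered on top.

```python
import random
import math

def ransac_line(points, iters=200, tol=0.05, seed=0):
    """Sketch of step 1045: repeatedly pick two recorded coordinate
    positions, form the line through them, and keep the line with the
    most inliers within `tol`. Returns ((a, b, c), inliers) for the
    line a*x + b*y + c = 0 with a^2 + b^2 = 1."""
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(iters):
        (x0, y0), (x1, y1) = rng.sample(points, 2)
        a, b = y0 - y1, x1 - x0            # normal of the segment
        norm = math.hypot(a, b)
        if norm == 0.0:
            continue                       # degenerate pair, skip
        a, b = a / norm, b / norm
        c = -(a * x0 + b * y0)
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] + c) <= tol]
        if len(inliers) > len(best_inliers):
            best, best_inliers = (a, b, c), inliers
    return best, best_inliers
```

Because each hypothesis is built from only two points, isolated spurious peak pixels are voted out rather than dragging the fitted edge toward them.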
In this embodiment, calculating the confidence of the straight line specifically comprises computing the confidence as:
[formula reproduced in the original only as image PCTCN2021134282-appb-000001]
where n is the number of pixels of the static obstacle map covered by the straight line, and V is the pixel value of such a pixel.
This improves the robustness of the fitting calculation against interference and the accuracy of the calculation.
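The confidence formula itself appears in the publication only as an embedded image, so the exact expression is not recoverable here. Given the variables it names (n covered pixels with pixel values V), the mean covered pixel value is one plausible stand-in, used below purely as an assumption:

```python
def line_confidence(pixel_values):
    """Assumed stand-in for the application's confidence formula
    (published only as an image): the mean pixel value over the n
    pixels of the static obstacle map covered by the line,
        confidence = (1/n) * sum(V_i).
    A line lying on bright obstacle pixels then scores higher than
    one crossing free space."""
    n = len(pixel_values)
    return sum(pixel_values) / n if n else 0.0
```

Averaging over n keeps long and short candidate lines comparable, which is consistent with using the score to pick one edge per side of the path.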
In this embodiment, the depth data is acquired by a depth camera and includes a depth map.
In this embodiment, the topological map includes several topological paths.
In some examples, the topological map may be drawn manually; it includes information on the paths the robot can travel, and the topological paths may be straight lines.
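When the depth data is a depth map from a depth camera, it can be back-projected into the depth point cloud used in step 1022 with the standard pinhole model. The intrinsics (fx, fy, cx, cy) below are assumed camera parameters, not values from the application:

```python
import numpy as np

def depth_map_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map into an (N, 3) point cloud in the
    camera frame using the pinhole model; fx, fy, cx, cy are assumed
    intrinsics, depth is in metres with 0 meaning no return."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                     # drop pixels with no return
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=1)
```

The resulting camera-frame points would still be transformed into the robot frame (by the camera's mounting extrinsics) before the pose-based world-frame transform of step 1022.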
An embodiment of the present application further provides a robot comprising a memory and a processor, the memory storing computer-readable instructions executable on the processor; when the processor executes the computer-readable instructions, the steps of the above road edge detection method are implemented. The robot may include a depth camera, which acquires the depth data, and may further include at least one of a lidar, an IMU, and an odometer for acquiring the robot pose.
An embodiment of the present application further provides a computer storage medium, the computer-readable storage medium storing computer-readable instructions which, when executed by a processor, implement the steps of the above road edge detection method.
The above embodiments do not limit the scope of protection of the technical solution. Any modifications, equivalent replacements, and improvements made within the spirit and principles of the above embodiments shall fall within the scope of protection of this technical solution.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations are described; however, any combination of these technical features that involves no contradiction shall be considered within the scope of this specification.
The above embodiments merely express several implementations of the present application; their description is specific and detailed, but they shall not be construed as limiting the scope of the patent application. Those of ordinary skill in the art may make modifications and improvements without departing from the concept of the present application, and these fall within its scope of protection. The scope of protection of this patent application shall therefore be subject to the appended claims.

Claims (20)

  1. 一种道路边缘检测方法,所述方法包括:A road edge detection method, the method comprising:
    获取深度数据、机器人位姿、拓扑地图;Obtain depth data, robot pose, and topology map;
    根据所述深度数据和所述机器人位姿建立静态障碍物地图;establishing a static obstacle map according to the depth data and the robot pose;
    计算所述静态障碍物地图的灰度值;calculating the gray value of the static obstacle map;
    根据所述机器人位姿、所述拓扑地图和所述灰度值计算道路边缘。A road edge is calculated according to the robot pose, the topological map and the gray value.
  2. 如权利要求1所述的道路边缘检测方法,其特征在于,所述根据所述深度数据和所述机器人位姿建立静态障碍物地图的步骤之后,包括:The road edge detection method according to claim 1, wherein after the step of establishing a static obstacle map according to the depth data and the robot pose, the method comprises:
    对所述静态障碍物地图设置多个测量网格,所述测量网格的分辨率与所述静态障碍物地图的分辨率相同,将所述多个测量网格与所述静态障碍物地图进行位置对齐;A plurality of measurement grids are set on the static obstacle map, the resolution of the measurement grid is the same as that of the static obstacle map, and the plurality of measurement grids are compared with the static obstacle map. position alignment;
    由所述机器人位姿将所述深度数据的深度点云从机器人坐标系转换到世界坐标系下,并向地面投影;Transform the depth point cloud of the depth data from the robot coordinate system to the world coordinate system by the robot pose, and project it to the ground;
    根据所述测量网格中是否具有所述深度点云,对所述测量网格进行标记;marking the measurement grid according to whether the depth point cloud exists in the measurement grid;
    所述测量网格内具有所述深度点云时,所述测量网格对应区域的所述静态障碍物地图的栅格值增加第一特征值,所述测量网格内不具有所述深度点云时,所述测量网格对应区域的所述静态障碍物地图的栅格值减少第二特征值。When the measurement grid has the depth point cloud, the grid value of the static obstacle map in the corresponding area of the measurement grid increases the first feature value, and the measurement grid does not have the depth point. When it is cloudy, the grid value of the static obstacle map in the area corresponding to the measurement grid is reduced by a second feature value.
  3. 如权利要求2所述的道路边缘检测方法,其特征在于,所述计算所述静态障碍物地图的灰度值,具体包括:The road edge detection method according to claim 2, wherein the calculating the gray value of the static obstacle map specifically includes:
    融合连续若干帧所述静态障碍物地图的所述栅格值增加所述第一特征值或者减少所述第二特征值后,计算得出所述灰度值。After the grid values of the static obstacle map of several consecutive frames are fused to increase the first eigenvalue or decrease the second eigenvalue, the gray value is calculated.
  4. 如权利要求2所述的道路边缘检测方法,其特征在于,所述根据所述测量网格中是否具有所述深度点云,对所述测量网格进行标记,具体包括:The road edge detection method according to claim 2, wherein the marking the measurement grid according to whether there is the depth point cloud in the measurement grid specifically includes:
    将具有所述深度点云的所述测量网格标记为1,将不具有所述深度点云的所述测量网格标记为0。The measurement grid with the depth point cloud is marked as 1, and the measurement grid without the depth point cloud is marked as 0.
  5. 如权利要求1所述的道路边缘检测方法,其特征在于,所述根据所述机器人位姿、所述拓扑地图和所述灰度值计算道路边缘,具体包括:The road edge detection method according to claim 1, wherein calculating the road edge according to the robot pose, the topological map and the grayscale value specifically includes:
    根据所述机器人位姿在所述拓扑地图中找到所述机器人当前所在道路的拓扑路径;Find the topological path of the road where the robot is currently located in the topological map according to the pose of the robot;
    沿着所述拓扑路径以特定的空间间隔进行采样;sampling at specific spatial intervals along the topological path;
    以所述采样位置为起始点,沿着所述拓扑路径的法线方向查询所述灰度值;Taking the sampling position as a starting point, query the gray value along the normal direction of the topological path;
    当所述灰度值大于阈值时,记录对应的坐标位置;When the gray value is greater than the threshold, record the corresponding coordinate position;
    将若干所述坐标位置拟合成直线。Fit several of the coordinate positions to a straight line.
  6. 如权利要求5所述的道路边缘检测方法,其特征在于,所述将若干所述坐标位置拟合成直线,具体包括:The road edge detection method according to claim 5, wherein the fitting a plurality of the coordinate positions into a straight line specifically includes:
    采用随机抽样一致算法拟合所述直线。The line is fitted using a random sampling consensus algorithm.
  7. 如权利要求6所述的道路边缘检测方法,其特征在于,所述采用随机抽样一致算法拟合所述直线,具体包括:The road edge detection method according to claim 6, wherein the fitting of the straight line using a random sampling consensus algorithm specifically includes:
    计算所述直线的置信度;calculating the confidence of the straight line;
    挑选所述拓扑地图两侧评分最高的所述直线作为所述道路边缘。The straight line with the highest score on both sides of the topological map is selected as the road edge.
  8. 如权利要求7所述的道路边缘检测方法,其特征在于,所述计算所述直线的置信度,具体包括:The road edge detection method according to claim 7, wherein the calculating the confidence level of the straight line specifically includes:
    所述置信度的计算方法为:The calculation method of the confidence level is:
    Figure PCTCN2021134282-appb-100001
    Figure PCTCN2021134282-appb-100001
    其中,n为所述直线覆盖在所述静态障碍物地图中的像素数量,V为所述像素的像素值。Wherein, n is the number of pixels covered by the straight line in the static obstacle map, and V is the pixel value of the pixel.
  9. 如权利要求1所述的道路边缘检测方法,其特征在于,所述深度数 据通过深度相机获取。The road edge detection method according to claim 1, wherein the depth data is acquired by a depth camera.
  10. The road edge detection method according to claim 1, wherein the process of acquiring the depth data comprises:
    scanning the surrounding environment of the robot with a lidar mounted on the robot to obtain the depth data.
  11. The road edge detection method according to claim 1, wherein the robot pose comprises position information and orientation information of the robot.
  12. The road edge detection method according to claim 1, wherein the process of acquiring the robot pose comprises:
    obtaining the robot pose by using a lidar, an IMU, or an odometer.
  13. The road edge detection method according to claim 1, wherein the topological map is specifically a robot movement path set manually.
  14. The road edge detection method according to claim 1, wherein the topological map comprises information on paths that the robot can traverse.
  15. The road edge detection method according to claim 5, wherein the topological map comprises several of the topological paths.
  16. The road edge detection method according to claim 5, wherein the process of sampling at specific spatial intervals along the topological path comprises:
    sampling at equal intervals along the topological path.
  17. The road edge detection method according to claim 5, further comprising:
    when the gray value is greater than the threshold, determining the pixel corresponding to the gray value to be the location of an obstacle.
  18. The road edge detection method according to claim 17, further comprising:
    fitting the road edge according to the location of the obstacle in combination with the topological path.
  19. A robot, comprising a memory and a processor, the memory storing computer-readable instructions executable on the processor, wherein the processor, when executing the computer-readable instructions, implements the steps of the road edge detection method according to any one of claims 1-18.
  20. A computer-readable storage medium storing computer-readable instructions which, when executed by a processor, implement the steps of the road edge detection method according to any one of claims 1-18.
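Claims 16-18 outline a three-step pipeline: resample the topological path at equal arc-length spacing, flag grid-map pixels whose gray value exceeds a threshold as obstacles, and fit the road edge from those obstacle positions together with the path. A minimal Python sketch of those steps follows; the function names, the NumPy polyline representation, and the straight-line edge model are illustrative assumptions, not the patented implementation:

```python
import numpy as np

def sample_path(waypoints, spacing):
    """Resample a polyline topological path at equal arc-length intervals (cf. claim 16)."""
    pts = np.asarray(waypoints, dtype=float)
    seg = np.diff(pts, axis=0)                      # segment vectors
    seg_len = np.linalg.norm(seg, axis=1)           # segment lengths
    cum = np.concatenate([[0.0], np.cumsum(seg_len)])
    targets = np.arange(0.0, cum[-1], spacing)      # arc-length positions to sample
    samples = []
    for t in targets:
        i = min(np.searchsorted(cum, t, side="right") - 1, len(seg_len) - 1)
        frac = (t - cum[i]) / seg_len[i] if seg_len[i] > 0 else 0.0
        samples.append(pts[i] + frac * seg[i])      # linear interpolation on segment i
    return np.array(samples)

def find_obstacles(grid, threshold):
    """Return (row, col) indices of grid pixels whose gray value exceeds the threshold (cf. claim 17)."""
    rows, cols = np.nonzero(grid > threshold)
    return list(zip(rows.tolist(), cols.tolist()))

def fit_edge(obstacle_points):
    """Least-squares straight-line fit through obstacle positions as a crude road-edge model (cf. claim 18)."""
    pts = np.asarray(obstacle_points, dtype=float)
    # y = a*x + b; a real system would fit per topological-path segment
    a, b = np.polyfit(pts[:, 0], pts[:, 1], 1)
    return a, b
```

A straight line is the simplest edge model consistent with the claims; curved edges would call for a piecewise fit anchored to the sampled path points.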
PCT/CN2021/134282 2020-11-30 2021-11-30 Road edge detection method and robot WO2022111723A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011380356.4A CN112486172A (en) 2020-11-30 2020-11-30 Road edge detection method and robot
CN202011380356.4 2020-11-30

Publications (1)

Publication Number Publication Date
WO2022111723A1 true WO2022111723A1 (en) 2022-06-02

Family

ID=74938491

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/134282 WO2022111723A1 (en) 2020-11-30 2021-11-30 Road edge detection method and robot

Country Status (2)

Country Link
CN (1) CN112486172A (en)
WO (1) WO2022111723A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115330969A (en) * 2022-10-12 2022-11-11 之江实验室 Local static environment vectorization description method for ground unmanned vehicle

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN112486172A (en) * 2020-11-30 2021-03-12 深圳市普渡科技有限公司 Road edge detection method and robot

Citations (10)

Publication number Priority date Publication date Assignee Title
CN103247040A (en) * 2013-05-13 2013-08-14 北京工业大学 Layered topological structure based map splicing method for multi-robot system
CN103400392A (en) * 2013-08-19 2013-11-20 山东鲁能智能技术有限公司 Binocular vision navigation system and method based on inspection robot in transformer substation
CN103456182A (en) * 2013-09-06 2013-12-18 浙江大学 Road edge detection method and system based on distance measuring sensor
KR20170050166A (en) * 2015-10-29 2017-05-11 한국과학기술연구원 Robot control system and method for planning driving path of robot
CN107544501A (en) * 2017-09-22 2018-01-05 广东科学技术职业学院 A kind of intelligent robot wisdom traveling control system and its method
CN109765901A (en) * 2019-02-18 2019-05-17 华南理工大学 Dynamic cost digital map navigation method based on line laser and binocular vision
CN109895100A (en) * 2019-03-29 2019-06-18 深兰科技(上海)有限公司 A kind of generation method of navigation map, device and robot
CN109993780A (en) * 2019-03-07 2019-07-09 深兰科技(上海)有限公司 A kind of three-dimensional high-precision ground drawing generating method and device
CN111679664A (en) * 2019-02-25 2020-09-18 北京奇虎科技有限公司 Three-dimensional map construction method based on depth camera and sweeping robot
CN112486172A (en) * 2020-11-30 2021-03-12 深圳市普渡科技有限公司 Road edge detection method and robot

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
JP4282662B2 (en) * 2004-12-14 2009-06-24 本田技研工業株式会社 Moving path generation device for autonomous mobile robot
US9630319B2 (en) * 2015-03-18 2017-04-25 Irobot Corporation Localization and mapping using physical features
KR102466940B1 (en) * 2018-04-05 2022-11-14 한국전자통신연구원 Topological map generation apparatus for traveling robot and method thereof
CN109074668B (en) * 2018-08-02 2022-05-20 达闼机器人股份有限公司 Path navigation method, related device and computer readable storage medium
CN110147748B (en) * 2019-05-10 2022-09-30 安徽工程大学 Mobile robot obstacle identification method based on road edge detection
CN111161334B (en) * 2019-12-31 2023-06-02 南通大学 Semantic map construction method based on deep learning


Also Published As

Publication number Publication date
CN112486172A (en) 2021-03-12

Similar Documents

Publication Publication Date Title
US10165422B2 (en) Scalable indoor navigation and positioning systems and methods
CN111076733B (en) Robot indoor map building method and system based on vision and laser slam
WO2021022615A1 (en) Method for generating robot exploration path, and computer device and storage medium
Nieto et al. Recursive scan-matching SLAM
WO2022111723A1 (en) Road edge detection method and robot
US8340818B2 (en) Method of accurate mapping with mobile robots
CN110174894B (en) Robot and repositioning method thereof
CN113074727A (en) Indoor positioning navigation device and method based on Bluetooth and SLAM
WO2020224305A1 (en) Method and apparatus for device positioning, and device
US20170251338A1 (en) Systems and methods for determining indoor location and floor of a mobile device
CN110749901B (en) Autonomous mobile robot, map splicing method and device thereof, and readable storage medium
CN110986956A (en) Autonomous learning global positioning method based on improved Monte Carlo algorithm
WO2019136613A1 (en) Indoor locating method and device for robot
JP2014523572A (en) Generating map data
Chen et al. Drio: Robust radar-inertial odometry in dynamic environments
KR101054520B1 (en) How to recognize the location and direction of the indoor mobile robot
CN108253968B (en) Barrier winding method based on three-dimensional laser
CN111474560A (en) Obstacle positioning method, device and equipment
AU2021273605B2 (en) Multi-agent map generation
Ma et al. Semantic geometric fusion multi-object tracking and lidar odometry in dynamic environment
Hroob et al. Learned Long-Term Stability Scan Filtering for Robust Robot Localisation in Continuously Changing Environments
WO2022071315A1 (en) Autonomous moving body control device, autonomous moving body control method, and program
Lee et al. EKF localization with lateral distance information for mobile robots in urban environments
Pan et al. LiDAR-IMU Tightly-Coupled SLAM Method Based on IEKF and Loop Closure Detection
CN115200601A (en) Navigation method, navigation device, wheeled robot and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21897223

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21897223

Country of ref document: EP

Kind code of ref document: A1