CN113358118A - End-to-end autonomous navigation method for indoor mobile robot in unstructured environment - Google Patents


Info

Publication number
CN113358118A
Authority
CN
China
Prior art keywords: list, door, current, mobile robot, elements
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110488134.2A
Other languages
Chinese (zh)
Other versions
CN113358118B (en)
Inventor
黄冉
鲁张帆
刘可
曹政才
胡标
Current Assignee
Beijing University of Chemical Technology
Original Assignee
Beijing University of Chemical Technology
Priority date
Filing date
Publication date
Application filed by Beijing University of Chemical Technology
Priority to CN202110488134.2A
Publication of CN113358118A
Application granted
Publication of CN113358118B
Active legal status
Anticipated expiration

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/20: Instruments for performing navigational calculations
    • G01C 21/206: Instruments for performing navigational calculations specially adapted for indoor navigation
    • G01C 21/38: Electronic maps specially adapted for navigation; Updating thereof
    • G01C 21/3804: Creation or updating of map data
    • G01C 21/3807: Creation or updating of map data characterised by the type of data
    • G01C 21/383: Indoor data
    • G01C 21/3863: Structures of map data
    • G01C 21/3867: Geometry of map features, e.g. shape points, polygons or for simplified maps

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses an end-to-end autonomous navigation method for an indoor mobile robot in an unstructured environment, comprising the following steps: 1) obtain the floor plan of an unfamiliar building; 2) topologize the floor plan and create an undirected graph; 3) use the undirected graph, together with the target room, to generate the shortest node path; 4) feed the robot's first-person-view image into the collision-avoidance navigation module to generate collision-avoidance waypoints; 5) generate a collision-avoidance path with cubic splines and track it with model predictive control; 6) judge whether the target point has been reached: if so, end the navigation; otherwise, return to step 3. With this autonomous navigation method, a robot can navigate autonomously in a complex medical environment using only a single monocular RGB camera as visual input. The method is simple, flexible, economical and practical, with high accuracy and strong robustness.

Description

End-to-end Autonomous Navigation Method for Indoor Mobile Robots in Unstructured Environments

Technical Field

The present invention relates to an end-to-end autonomous navigation method for indoor robots in unstructured environments, and in particular to an end-to-end autonomous navigation method for medical service robots in unstructured environments.

Background Art

With continued social and economic development, people's expectations for their own health and for the quality of medical services rise year by year; heavy medical workloads and shortages of medical staff are pressing problems in the medical field. With the development of modern technology, various mobile robots have emerged that can replace or assist medical staff in tasks such as sample delivery and medical waste collection, reducing repetitive labor. This offers a new solution to the heavy workload and staff shortage in the medical field.

These robots operate in complex, changeable medical environments with heavy foot traffic. To reach target points safely, accurately and quickly in a space shared between humans and machines, reliable autonomous navigation technology is required. Classical autonomous navigation methods for mobile robots, such as the artificial potential field method and the rolling window method, perform well in static environments, but in dynamic environments they adapt poorly, are unreliable, and easily fall into locally optimal solutions.

For dynamic environments, end-to-end robot navigation methods have been proposed. Such a method uses a neural network to map the robot's current camera image or lidar readings to the robot's velocity and steering angle. This alleviates, to a certain extent, the poor reliability of robot navigation in dynamic environments, but in complex environments with heavy pedestrian flow its real-time collision avoidance remains unsatisfactory. For these reasons, medical service robots are still far from widespread adoption.

Summary of the Invention

The technical problem to be solved by the present invention is to overcome the defects of current technology and provide an end-to-end autonomous navigation method for indoor robots in unstructured environments, aimed mainly at indoor navigation in large buildings such as hospitals. The key points are: processing the floor plan of the building interior so that navigation does not depend on a prebuilt metric map, and using an end-to-end navigation module to solve collision avoidance in complex, changeable dynamic environments.

To achieve the above object, the present invention adopts the following technical solution:

1) Obtain the floor plan of the building and construct a semantic-topological map. Given a target room, search the undirected graph for the shortest node path and set all nodes on that path as sub-goals.
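Since the node graph is unweighted, the shortest node path can be found with breadth-first search. A minimal sketch; the node names and adjacency below are illustrative, not taken from the patent's floor plan:

```python
from collections import deque

def shortest_node_path(adj, start, goal):
    """Breadth-first search on an unweighted undirected graph; returns
    the shortest node path from start to goal, or None if unreachable."""
    parent = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in adj[node]:
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None

# Illustrative graph: rooms A8/A12, corridors C3/C2, hall H5.
adj = {
    "A8": ["C3"], "C3": ["A8", "H5"], "H5": ["C3", "C2"],
    "C2": ["H5", "A12"], "A12": ["C2"],
}
print(shortest_node_path(adj, "A8", "A12"))
```

Every node on the returned path then becomes a sub-goal for the robot.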

2) Obtain the scene image from the robot's first-person view.

3) Combining the first-person-view scene image obtained in step 2 with the current sub-goal, the navigation module outputs a collision-avoidance waypoint for the current scene.

After step 3, the method further includes:

4) Based on the collision-avoidance waypoint produced in step 3, the robot generates a collision-avoidance path with a cubic spline and tracks the path to the waypoint using model predictive control.

After step 4, the method further includes the following steps:

5) Judge whether the current sub-goal has been reached; if so, execute step 6; if not, return to step 2.

6) Judge whether the target point coordinates have been reached; if so, stop navigating; if not, obtain the current sub-goal as of the moment of step 6, and return to step 2.
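The control flow of steps 2-6 can be sketched as a loop. This is only a toy skeleton under stated assumptions: the robot, camera, learned waypoint predictor, and spline/MPC tracker are all hypothetical 1-D stand-ins, not the patent's implementation:

```python
class ToyRobot:
    """Hypothetical 1-D stand-in for the mobile robot: pose is a number,
    the 'camera image' is ignored, and spline + MPC tracking is a jump."""
    def __init__(self):
        self.pose = 0.0
    def capture_fpv_image(self):
        return None  # placeholder for the monocular RGB frame (step 2)
    def track_spline_to(self, waypoint):
        self.pose = waypoint  # stands in for spline generation + MPC (step 4)

def predict_waypoint(image, subgoal, pose):
    """Stand-in for the learned module of step 3: move at most 1.0 toward
    the sub-goal (the real module maps image + goal + velocity to a waypoint)."""
    step = max(-1.0, min(1.0, subgoal - pose))
    return pose + step

def navigate(robot, subgoals, tol=1e-6):
    """Loop of steps 2-6: for each sub-goal, repeatedly capture an image,
    predict a waypoint, and track it until the sub-goal is reached."""
    for subgoal in subgoals:
        while abs(robot.pose - subgoal) > tol:           # steps 5/6
            image = robot.capture_fpv_image()            # step 2
            wp = predict_waypoint(image, subgoal, robot.pose)  # step 3
            robot.track_spline_to(wp)                    # step 4
    return robot.pose

robot = ToyRobot()
print(navigate(robot, [1.5, 3.0, 2.0]))  # toy sub-goal sequence
```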

The creation of the topological map includes the following steps:

A) Obtain the floor plan of the building floor;

B) Use SegLink to detect text (room numbers, corridor numbers) on the floor plan, and use YOLO to detect and recognize symbols. The detection result is: the room numbers, corridor numbers and symbols (including stair, emergency-exit and restroom symbols) on the plan are marked with rectangular boxes, recording the coordinates of each box's upper-left corner and its width and height. Then use CRNN to recognize the detected text;

C) Use semantic segmentation to identify the door, wall, room and corridor elements of the floor plan. The result is: wall elements are marked black, door elements pink, corridor elements blue, and room elements are marked with various random colors (other than the three colors above). The implementation is as follows:

a) Preprocess the floor plan into a 512×512-pixel image;

b) Extract features from the preprocessed image with a VGG encoder;

c) Decode the features with separate VGG decoders, predicting task 1 (doors and walls) and task 2 (rooms and corridors) respectively. The loss function of the overall network (shown in Figure 1) is defined as follows:

L = -Σ_{i=1}^{C} w_i · y_i · log(p_i);

w_i = N - N_i;

L_Total = ω_1·L_1 + ω_2·L_2;

ω_i = N_Total_i / (N_Total_1 + N_Total_2);

where L denotes the cross-entropy loss of task 1 or task 2; C denotes the number of predicted classes; y_i denotes the true class of a pixel; p_i denotes the predicted class probability of the pixel; N denotes the total number of pixels of the predicted classes in the task; N_i denotes the total number of pixels of the i-th predicted class in the task; L_Total denotes the overall cross-entropy loss of tasks 1 and 2; and N_Total_i denotes the total number of pixels of task i's predicted classes in one predicted image.
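A NumPy sketch of the within-task weighted cross-entropy above. The 2×2 pixel map and two-class layout are invented for illustration; the only thing taken from the text is the weight rule w_i = N - N_i, which gives rarer classes larger weights:

```python
import numpy as np

def task_loss(true_classes, probs):
    """Weighted cross-entropy for one task: w_i = N - N_i, where N is
    the number of labeled pixels and N_i the pixel count of class i."""
    C = probs.shape[-1]
    N = true_classes.size
    counts = np.bincount(true_classes.ravel(), minlength=C)  # N_i
    w = N - counts                                           # w_i = N - N_i
    onehot = np.eye(C)[true_classes]                         # y_i
    # -sum_i w_i * y_i * log(p_i), averaged over pixels
    per_pixel = -(w * onehot * np.log(probs + 1e-12)).sum(axis=-1)
    return per_pixel.mean()

# Toy 2x2 "image", 2 classes (e.g. door / wall); probs sum to 1 per pixel.
y = np.array([[0, 0], [0, 1]])
p = np.array([[[0.9, 0.1], [0.8, 0.2]],
              [[0.7, 0.3], [0.4, 0.6]]])
print(float(task_loss(y, p)))
```

The total loss then combines the two tasks, L_Total = ω_1·L_1 + ω_2·L_2, with ω_i read here as each task's pixel-count fraction.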

D) The topological structure is constructed through the following steps:

a) In the semantically segmented image, find all pink pixels (door elements) and store them in a list door_list; then partition the pixels of this list into separate lists door by executing the following algorithm:

Step 1: Take the first pixel out of door_list, store it in a list door, and use it as the seed point;

Step 2: Judge whether door_list contains any 8-neighborhood neighbor of the current seed point, or whether door contains any point other than the seed point; if so, go to step 3; if not, go to step 4;

Step 3: Take the 8-neighborhood neighbors of the current seed point out of door_list, store them in the list door, make the point after the original seed point in door the current seed point, and return to step 2;

Step 4: Judge whether door_list is empty; if it is empty, end the loop; if not, create a new list door and return to step 1;

Step 5: Compute the mean of the coordinates of the elements in each list door, and use this mean as the door's coordinates;
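The grouping in steps 1-5 is, in effect, connected-component labeling of the pink pixels under 8-connectivity, followed by averaging each component. A minimal sketch; the pixel coordinates below are invented for illustration:

```python
from collections import deque

def group_doors(door_pixels):
    """Partition door pixels into 8-connected components and return the
    mean coordinate of each component as that door's position."""
    remaining = set(door_pixels)        # door_list
    doors = []
    while remaining:                    # step 4: loop until empty
        seed = next(iter(remaining))    # step 1: take a seed point
        remaining.discard(seed)
        component = [seed]              # one list door
        queue = deque([seed])
        while queue:                    # steps 2-3: grow via 8-neighbors
            x, y = queue.popleft()
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nb = (x + dx, y + dy)
                    if nb in remaining:
                        remaining.discard(nb)
                        component.append(nb)
                        queue.append(nb)
        # step 5: the mean coordinate represents the door
        n = len(component)
        doors.append((sum(q[0] for q in component) / n,
                      sum(q[1] for q in component) / n))
    return doors

# Two illustrative doors: a 3-pixel cluster and a 2-pixel cluster.
pixels = [(0, 0), (0, 1), (1, 1), (10, 10), (10, 11)]
print(sorted(group_doors(pixels)))
```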

b1) Associate the room numbers and symbols obtained in B with node colors, push every list door onto a stack as an independent element, and execute the following algorithm:

Step 1: If the stack is empty, the algorithm ends; otherwise, pop the top element and take its center coordinates as the seed point;

Step 2: Obtain the 4-neighborhood neighbors of the seed point; if any of them is neither pink nor black and its color is not yet in the adjacency list, store that coordinate in the adjacency list; otherwise take the current 4 points as new seed points;

Step 3: If the current element has been fully traversed or the adjacency list has length 2, create a new adjacency list and return to step 1; otherwise return to step 2;

c1) By looking up the elements contained in each adjacency list, determine which nodes a given door belongs to; two nodes that share the same door are judged to be connected regions, and topological graph 1 is constructed.

b2) Compute the mean μ0 of the element counts of the lists door obtained in a), delete every list whose element count is less than μ0/2, and compute the mean μ1 and variance σ1² of the element counts of the remaining lists door. For a given significance level α = 0.05, compute the confidence interval for the mean at the 0.95 confidence level:

[μ1 - z_{α/2}·σ1/√n, μ1 + z_{α/2}·σ1/√n]
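A sketch of the size-based filtering and the confidence interval above. The door sizes are invented, and a normal approximation with z_{0.025} ≈ 1.96 is assumed for the two-sided 0.95 level:

```python
import math

def door_size_interval(door_sizes, z=1.96):
    """Drop door components smaller than half the mean size, then return
    the 0.95 normal-approximation confidence interval for the mean size
    of the remaining components."""
    mu0 = sum(door_sizes) / len(door_sizes)
    kept = [s for s in door_sizes if s >= mu0 / 2]    # delete counts < mu0/2
    n = len(kept)
    mu1 = sum(kept) / n                               # mean of survivors
    var = sum((s - mu1) ** 2 for s in kept) / n       # sigma_1^2
    half = z * math.sqrt(var) / math.sqrt(n)
    return (mu1 - half, mu1 + half)

sizes = [9, 10, 11, 10, 2]   # the 2-pixel blob is segmentation noise
lo, hi = door_size_interval(sizes)
print(round(lo, 3), round(hi, 3))
```

Door components whose sizes fall outside such an interval can then be treated as spurious detections.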

Associate the room numbers and symbols obtained in B with room colors, push every list door onto a stack as an independent element, and execute the following algorithm:

Step 1: If the stack is empty, the algorithm ends; otherwise, pop the top element and take its center coordinates as the seed point;

Step 2: Obtain the 4-neighborhood neighbors of the seed point; if any of them is neither pink nor black and its color is not yet in the new list, store that coordinate in the new list; otherwise take the current 4 points as new seed points;

Step 3: If the current element has been fully traversed or the adjacency list has length 2, create a new adjacency list and return to step 1; otherwise return to step 2.

c2) By looking up the elements contained in each adjacency list, determine which nodes a given door belongs to; two nodes that share the same door are judged to be connected regions, and topological graph 2 is constructed;

The creation of the undirected graph includes the following steps:

A) Obtain the real positions of the rooms and doors;

B) Associate the position information obtained in A with topological graphs 1 and 2 to construct the undirected graphs;

C) Convert undirected graph 2 into an adjacency matrix A (rows and columns represent nodes; matrix elements 1 and 0 indicate whether two nodes are connected);

D) Construct the Laplacian matrix L from the adjacency matrix A:

L = D - A;

D = diag(d_1, ..., d_n), where d_i = Σ_j A_ij;

E) If the Laplacian matrix L has exactly one zero eigenvalue, use undirected graph 2; otherwise use undirected graph 1.
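The connectivity test in steps C-E can be sketched as follows: build L = D - A and count the (near-)zero eigenvalues, since the number of zero eigenvalues of a graph Laplacian equals the number of connected components. The 5-node adjacency matrix below is illustrative:

```python
import numpy as np

def is_connected(A):
    """A graph is connected iff its Laplacian L = D - A has exactly one
    (near-)zero eigenvalue."""
    D = np.diag(A.sum(axis=1))          # degree matrix
    L = D - A                           # Laplacian
    eig = np.linalg.eigvalsh(L)         # symmetric -> real eigenvalues
    zeros = int(np.sum(np.abs(eig) < 1e-9))
    return zeros == 1

# Connected path graph A8-C3-H5-C2-A12:
A1 = np.array([[0, 1, 0, 0, 0],
               [1, 0, 1, 0, 0],
               [0, 1, 0, 1, 0],
               [0, 0, 1, 0, 1],
               [0, 0, 0, 1, 0]])
# Same graph with one edge removed -> two components:
A2 = A1.copy()
A2[2, 3] = A2[3, 2] = 0
print(is_connected(A1), is_connected(A2))
```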

The navigation module includes the following aspects:

A) The main body of the navigation module consists of a ResNet-50 network and 3 fully connected layers (shown in Figure 2);

B) The robot's current first-person-view image is fed into the ResNet-50 network; its output features are fed into the fully connected layers together with the target point coordinates and the current linear and angular velocities, and the waypoint position is finally output;

C) The above network is trained in simulation, as follows:

a) In the virtual environment, human walking trajectories are simulated with cubic spline functions and tracked with model predictive control; the optimization minimizes the following objective:

min over u_0, ..., u_T:  Σ_{i=0}^{T} ( d_goal(x_i)² + λ·max(0, d_pen - d_obs(x_i))² + ||u_i||² );

subject to x_{i+1} = f(x_i, u_i);

where λ denotes the penalty weight; x_i denotes the person's state at time i, including position and heading; d_goal denotes the distance between the person and the person's goal point; d_pen denotes the penalty distance between the person and obstacles; d_obs denotes the signed distance between the person and obstacles; and u_i denotes the person's linear and angular velocity at time i;

b) In the virtual environment, the robot's motion trajectories are simulated with cubic spline functions and tracked with model predictive control; the optimization minimizes the following objective:

min over u_0, ..., u_T:  Σ_{i=0}^{T} ( d_goal(x_i)² + λ_1·max(0, d_pen - d_obs(x_i))² + λ_2·max(0, d_pen^h - d^h(x_i))² + ||u_i||² );

subject to x_{i+1} = f(x_i, u_i);

where λ_1, λ_2 denote the penalty weights; x_i denotes the mobile robot's state at time i, including position and heading; d_goal denotes the distance between the mobile robot and the goal point; d_pen denotes the penalty distance between the mobile robot and obstacles; d_obs denotes the signed distance between the mobile robot and obstacles; d_pen^h denotes the penalty distance between the mobile robot and people; and d^h denotes the signed distance between the mobile robot and people at time i;

c) Record the robot's current first-person-view image and current state information in the virtual environment, and capture the waypoints generated by model predictive control; with the first-person-view image and state information as input and the waypoint information as output, train the network by supervised learning;

The cubic spline function and model predictive control include the following steps:

A) According to the current velocity and heading, combined with the next waypoint, plan a collision-avoidance path using a cubic spline function;

B) Track the above path with model predictive control.
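The spline idea in step A can be sketched with a single cubic segment from the current pose to the waypoint whose endpoint positions and velocities are both prescribed; this is what keeps velocity continuous across waypoints. The sketch uses the Hermite form in one dimension with illustrative numbers; a real implementation would fit splines in x and y and hand the sampled path to the MPC tracker:

```python
def hermite_cubic(p0, v0, p1, v1):
    """Return s(t) on [0, 1] with s(0)=p0, s'(0)=v0, s(1)=p1, s'(1)=v1."""
    # Solve a + b*t + c*t^2 + d*t^3 for the four boundary conditions.
    a, b = p0, v0
    c = 3 * (p1 - p0) - 2 * v0 - v1
    d = 2 * (p0 - p1) + v0 + v1
    return lambda t: a + b * t + c * t * t + d * t ** 3

# From current position 0.0 moving at speed 1.0 toward waypoint 2.0,
# arriving with speed 0.5 (all values illustrative):
s = hermite_cubic(0.0, 1.0, 2.0, 0.5)
samples = [round(s(t / 4), 4) for t in range(5)]
print(samples)  # sampled path points handed to the MPC tracker
```

Because the arrival velocity of one segment can be used as the departure velocity of the next, the concatenated path stays continuous and smooth at each waypoint, as the embodiment requires.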

The building scenes include hospital buildings, shopping mall buildings and school buildings.

The deep-learning-based autonomous navigation method for medical service robots in dynamic environments of the present invention directly uses a monocular RGB camera as the visual input device, which is economical, reliable and stable. The shortest node path based on the graph structure largely solves the local-optimum problem; the learning-based navigation module can effectively and accurately anticipate the trajectories of moving obstacles, solving collision avoidance in complex and changeable environments. Model predictive control ensures smooth robot operation, and the cubic spline function guarantees continuous, smooth velocity and acceleration before and after each waypoint, yielding strong robustness.

Brief Description of the Drawings

Figure 1 shows the semantic segmentation network structure of the present invention;

Figure 2 shows the navigation module network structure of the present invention;

Figure 3 is the flow chart of semantic-topological map construction of the present invention;

Figure 4 shows the indoor undirected graph creation process of the present invention;

Figure 5 is a schematic floor plan in the present invention;

Figure 6 is the floor plan topology without added junction points in the present invention;

Figure 7 is the floor plan topology with added junction points in the present invention;

Figure 8 is a schematic diagram of shortest node path generation in the present invention;

Figure 9 is a schematic diagram of local path planning in the present invention;

Figure 10 is the overall flow chart of the implementation of the method of the present invention.

Detailed Description of the Embodiments

To help practitioners in the field understand the present invention more intuitively, the invention is further described below with reference to a specific embodiment; in practical applications, however, the invention is not limited to this specific case.

The deep-learning-based autonomous navigation method for medical service robots in dynamic environments of the present invention is intended for large indoor venues such as hospital buildings; its function is to navigate a mobile robot safely, accurately and quickly to a target point in such a dynamic environment.

A user of this navigation method must first complete the following indoor undirected graph creation steps (shown in Figure 3):

Obtain the floor plan of the building (e.g. a hospital) and build a preliminary indoor map (shown in Figure 4);

Obtain the position information (coordinates) of the rooms and the room-corridor junctions (doors);

Topologize the floor plan (steps shown in Figure 5) according to the following rules: structures such as rooms and halls count as 1 node; a connected corridor counts as 1 node; if corridor a is truncated by corridor b on one side (a T-junction), corridors a and b each count as 1 node; if corridors a and b truncate each other (a crossroads), corridors a and b each count as 2 nodes; a corridor truncated by a room or hall counts as 2 nodes; a room-corridor junction counts as 1 edge. The results are shown in Figures 6 and 7.

Associate each node of the topological graph with the obtained position information and represent it in the form of an undirected graph, establishing a complete indoor undirected map (undirected graph 1, undirected graph 2). Use the Laplacian matrix to judge the connectivity of the whole graph; if the undirected graph in which door errors have been removed by thresholding (undirected graph 1) is disconnected (its Laplacian matrix has two or more zero eigenvalues), the undirected graph without error removal (undirected graph 2) must be used instead.

After the indoor undirected graph is stored, the user can also add or delete data nodes as needed.

With the above work complete, a user who wants to use the navigation function inside the hospital needs to carry out the following steps:

1) Given a target room, search the undirected graph for the shortest node path and mark the nodes on this path as sub-goals;

In this step, the user gives the robot a target room (e.g. A12). Combined with the robot's current position (e.g. A8), the undirected graph described in D is searched for the shortest node path from A8 to A12, giving result (a): A8-C3-H5-C2-A12. Adding the edges gives result (b): A8-(A8-C3)-(C3-H5)-(H5-C2)-(C2-A12)-A12. (Note: the edges C3-A8 and A8-C3 in the figure are the same edge.) The nodes in (b) are set as sub-goals (shown in Figure 8); the cross symbols in the figure mark the sub-goal positions, which the robot uses as the basis for its subsequent navigation;

2) Obtain the robot's current first-person-view scene image;

The scene image is acquired by a monocular RGB camera mounted on the front of the robot, without any depth information or map information, and is used for path planning in step 3. (Note: the camera's mounting position must satisfy the following conditions: a. the captured scene image should cover the great majority of obstacles in front of the robot; b. the mounting position should be relatively high on the robot, e.g. at head height.)

3) Using the current first-person-view image, the robot outputs a collision-avoidance waypoint toward the current sub-goal;

In this step, using the scene image captured in step 2 and the given sub-goal, the navigation module outputs a collision-avoidance waypoint, i.e. the position the robot is about to reach in step 4;

Because the training dataset of the neural-network-based navigation module contains moving obstacles, the navigation process can smoothly avoid moving obstacles (shown in Figure 9);

Because the training dataset of the neural-network-based navigation module contains noise-corrupted samples, the navigation module can output waypoint information stably and effectively even when the scene image is blurred, for example by poor lighting;

4) According to the current velocity and steering angle, generate a collision-avoidance path from the current position to the waypoint with a cubic spline function, and track the path with model predictive control so that the robot reaches the waypoint;

The path generated by the cubic spline function guarantees that the path, velocity and acceleration are continuous and smooth before and after the waypoint (shown in Figure 6), ensuring stable robot operation;

5) Judge whether the current sub-goal has been reached; if so, execute the next step; if not, return to step 2;

Since the robot can usually not navigate directly to a sub-goal with a single waypoint output (shown in Figure 9), multiple waypoint outputs are needed until the sub-goal is reached, during which the sub-goal point in the navigation map remains unchanged. (Note: in Figure 9, ① marks the robot's original position, ② the historical output waypoints, ③ the robot's current position, ④ the collision-avoidance waypoint output from the current visual image, ⑤ the pedestrian position, ⑥ the robot's field of view, and ⑦ the A8-C3 sub-goal point);

6) Judge whether the target point coordinates have been reached; if so, stop navigating; if not, obtain the current sub-goal as of the current moment and return to step 2.

After navigating to a sub-goal point in step 4, whether to stop navigating is decided by judging whether that sub-goal point is the terminal point (the target room given in step 1); if the sub-goal is not the terminal point, the next sub-goal point must be obtained and navigation continues until the terminal point is reached. For example, while performing the navigation task from A8 to A12, the robot arrives at sub-goal point A8-C3; judging that this point is not the terminal point, it obtains the sub-goal point for the current moment (the point after A8-C3) and performs the sub-task from A8-C3 to C3-H5;

The above is only a basic embodiment of the present invention; improvements and developments made by practitioners in the technical field without departing from the principles of the invention shall also be regarded as falling within its scope of protection.

Claims (8)

1. An end-to-end autonomous navigation method for an indoor mobile robot in an unstructured environment, characterized by comprising the following steps: 1) obtaining a floor plan and constructing a semantic-topological map; given a target room, searching the undirected graph for the shortest path and taking the path nodes as sub-goals; 2) obtaining a first-person scene image from the robot; 3) combining the scene image of step 2 with the current sub-goal, and outputting a collision-avoidance waypoint from the navigation module. 2. The end-to-end autonomous navigation method for an indoor mobile robot in an unstructured environment of claim 1, characterized in that, after step 3, the method further comprises: 4) based on the collision-avoidance waypoint of step 3, generating a collision-avoidance path with a cubic spline and tracking the path to the waypoint with model predictive control. 3. The end-to-end autonomous navigation method for an indoor mobile robot in an unstructured environment of claim 2, characterized in that, after step 4, the method further comprises: 5) determining whether the current sub-goal has been reached; if so, executing step 6; if not, returning to step 2; 6) determining whether the goal-point coordinates have been reached; if so, stopping navigation; if not, obtaining the current sub-goal at the moment of step 6 and returning to step 2. 4. The end-to-end autonomous navigation method for an indoor mobile robot in an unstructured environment of claim 1, characterized in that the creation of the semantic-topological map comprises the following steps: A) obtaining the floor-plan layout of the building; B) performing text detection on the floor plan with SegLink and detecting and recognizing signs with YOLO, the detection results being: the room numbers, corridor numbers, and identifiers of the floor plan are marked with rectangular boxes, and the upper-left corner coordinates and the width and height of each box are recorded; the above text is then recognized with CRNN; C) identifying the door, wall, room, and corridor elements of the floor plan with semantic segmentation, the results being: wall elements are marked black, door elements pink, corridor elements blue, and room elements in several distinct random colors; the concrete implementation is as follows: a) the floor plan is preprocessed into a 512×512-pixel image; b) features are extracted from the preprocessed image with a VGG encoder; c) the features are decoded with separate VGG decoders to predict task 1 (doors and walls) and task 2 (rooms and corridors); the loss function of the overall network is defined as follows:
[equation image FDA0003051279220000011]
w_i = N - N_i;  L_Total = ω_1·L_1 + ω_2·L_2;
[equation image FDA0003051279220000012]
where L denotes the cross-entropy loss of task 1 or task 2; C the number of predicted classes; y_i the true class of a pixel; p_i the predicted class of a pixel; N the total number of pixels of the predicted classes in the task; N_i the number of pixels of the i-th predicted class in the task; L_Total the overall cross-entropy loss of tasks 1 and 2; and N_Total_i the total number of pixels of the task-1 and task-2 predicted classes in one predicted image. D) The topological structure is constructed through the following steps: a) in the semantically segmented image, find all pink pixels and store them in the list door_list; classify the pixels of this list into separate lists door by executing the following algorithm: Step 1: take the first pixel out of door_list, store it in a list door, and use it as the seed point; Step 2: determine whether door_list contains an 8-neighbourhood point of the current seed, or whether door contains points other than the seed; if so, go to Step 3; if not, go to Step 4; Step 3: take the 8-neighbourhood points of the current seed out of door_list, store them in door, take the point after the original seed in door as the new current seed, and return to Step 2; Step 4: determine whether door_list is empty; if so, end the loop; if not, create a new list door and return to Step 1; Step 5: compute the mean coordinate of the elements of each list door, and use that mean to represent the door's coordinates; b1) associate the room numbers and identifiers obtained in B with the room colors, push every list door onto a stack as an independent element, and execute the following algorithm: Step 1: if the stack is empty, the algorithm ends; otherwise take the centre coordinate of the top element of the stack as the seed point; Step 2: obtain the 4-neighbourhood points of the seed; if a point is neither pink nor black and its color is not yet in the adjacency list, store its coordinates in the adjacency list; otherwise take the current 4 points as new seed points; Step 3: if the current element has been fully traversed or the adjacency list has length 2, create a new adjacency list and return to Step 1; otherwise return to Step 2; c1) determine which nodes share a given door by looking up the elements of each adjacency list; two nodes sharing the same door are judged to be a connected region, and topological graph 1 is constructed; b2) compute the mean μ0 of the element counts of the lists door from a), delete every list with fewer than μ0/2 elements, and compute the mean μ1 and variance σ1² [equation image FDA0003051279220000021] of the element counts of the remaining lists door; for the given significance level α = 0.05, compute the confidence interval for the mean at confidence level 0.95:
[equation image FDA0003051279220000022]
then associate the room numbers and identifiers obtained in B with the room colors, push every remaining list door onto a stack as an independent element, and execute the following algorithm: Step 1: if the stack is empty, the algorithm ends; otherwise take the centre coordinate of the top element of the stack as the seed point; Step 2: obtain the 4-neighbourhood points of the seed; if a point is neither pink nor black and its color is not yet in the adjacency list, store its coordinates in the adjacency list; otherwise take the current 4 points as new seed points; Step 3: if the current element has been fully traversed or the adjacency list has length 2, create a new adjacency list and return to Step 1; otherwise return to Step 2; c2) determine which nodes share a given door by looking up the elements of each adjacency list; two nodes sharing the same door are judged to be a connected region, and topological graph 2 is constructed.
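The seeded region growing of claim 4, section D a), which groups pink door pixels into per-door lists by 8-neighbourhood adjacency and averages each list into a door coordinate (Steps 1 to 5), can be sketched as follows. This is a breadth-first variant assuming integer pixel coordinates; it illustrates the idea, not the patented implementation itself.

```python
from collections import deque

def cluster_door_pixels(door_pixels):
    """Group 8-connected door pixels into per-door clusters and return each
    door's centre as the mean coordinate of its cluster (Steps 1-5)."""
    remaining = set(door_pixels)
    centres = []
    while remaining:                       # Step 4: repeat until list empty
        seed = remaining.pop()             # Step 1: a pixel seeds a new door
        door, queue = [seed], deque([seed])
        while queue:                       # Steps 2-3: grow over 8-neighbours
            x, y = queue.popleft()
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    p = (x + dx, y + dy)
                    if p in remaining:
                        remaining.remove(p)
                        door.append(p)
                        queue.append(p)
        cx = sum(p[0] for p in door) / len(door)   # Step 5: mean coordinate
        cy = sum(p[1] for p in door) / len(door)
        centres.append((cx, cy))
    return centres
```

Two pixel groups that are not 8-adjacent end up in separate door lists, each reduced to a single centre coordinate.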
5. The end-to-end autonomous navigation method for an indoor mobile robot in an unstructured environment of claim 1, characterized in that the creation of the undirected graph comprises the following steps: A) obtaining the real positions of the rooms and doors; B) associating the position information obtained in A with topological graphs 1 and 2 to construct undirected graphs 1 and 2; C) converting undirected graph 2 into an adjacency matrix A (rows and columns represent nodes; matrix elements 1 and 0 indicate whether two nodes are connected); D) constructing the Laplacian matrix L from the adjacency matrix A: L = D - A;
where D is the diagonal degree matrix, D_ii = Σ_j A_ij [equation image FDA0003051279220000023];
E) if the Laplacian matrix L has exactly one zero eigenvalue, use undirected graph 2; otherwise use undirected graph 1.
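The connectivity test of claim 5, steps C to E, can be sketched as follows, assuming NumPy is available: an undirected graph is connected exactly when its Laplacian L = D - A has a single zero eigenvalue, which is what step E checks before choosing between the two graphs.

```python
import numpy as np

def is_connected(adj):
    """Build L = D - A and test for exactly one zero eigenvalue (step E)."""
    A = np.asarray(adj, dtype=float)
    D = np.diag(A.sum(axis=1))        # degree matrix: D_ii = sum_j A_ij
    L = D - A
    eig = np.linalg.eigvalsh(L)       # L is symmetric, so eigenvalues are real
    return int(np.sum(np.isclose(eig, 0.0))) == 1

# Path graph 1-2-3 (connected) vs. two isolated edges (disconnected):
path_graph = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
split_graph = [[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]
```

The multiplicity of the zero eigenvalue of L equals the number of connected components, so "exactly one zero" means every room node can reach every other.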
6. The end-to-end autonomous navigation method for an indoor mobile robot in an unstructured environment of claim 1, characterized in that the navigation module comprises the following aspects: A) the main body of the navigation module consists of a ResNet-50 network and 3 fully connected layers; B) the robot's current first-person image is fed into the ResNet-50 network; the output features, together with the goal-point coordinates and the current linear and angular velocity, are fed into the fully connected layers, which finally output the waypoint position; C) the above network is trained in a virtual environment as follows: a) in the virtual environment, human walking trajectories are simulated with cubic spline functions and tracked with model predictive control; the optimization minimizes the following equations:
[equation images FDA0003051279220000031 and FDA0003051279220000032: objective and constraints of the human-trajectory model predictive control]
where the image-rendered symbols denote, in order: the penalty weights; the state of the person at time i, including position and heading; the distance from the person to the person's goal point; the penalty distance between the person and obstacles; the signed distance between the person and obstacles; and the person's linear and angular velocity at time i;
b) in the virtual environment, the robot's motion trajectory is simulated with a cubic spline function and tracked with model predictive control; the optimization minimizes the following equations:
[equation images FDA0003051279220000039 and FDA00030512792200000310: objective and constraints of the robot-trajectory model predictive control]
where the image-rendered symbols denote, in order: the penalty weights; the state of the mobile robot at time i, including position and heading; the distance to the goal point; the penalty distance between the mobile robot and obstacles; the signed distance between the mobile robot and obstacles; the penalty distance between the mobile robot and the person; and the signed distance between the mobile robot and the person at time i;
c) the robot's current first-person image and current state information are recorded in the virtual environment, and the waypoints generated by model predictive control are captured; supervised learning is then performed with the first-person image and state information as input and the waypoint information as output.
7. The end-to-end autonomous navigation method for a mobile robot in an unstructured environment of claim 2, characterized in that the cubic spline function and model predictive control comprise the following steps: A) according to the current velocity and heading, combined with the next-waypoint information, planning a collision-avoidance path with a cubic spline function; B) tracking the above path with model predictive control. 8. The end-to-end autonomous navigation method for an indoor mobile robot in an unstructured environment of any one of claims 1 to 7, characterized in that the building scene includes hospital buildings, shopping-mall buildings, and school buildings.
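Claim 7, step A, can be illustrated with a single cubic polynomial segment that leaves the current pose along the current heading and ends at the next waypoint. This sketch uses a cubic Hermite segment as a stand-in for the patent's cubic spline function, omits the model-predictive-control tracking of step B, and the chord-length tangent scaling is an illustrative assumption.

```python
import math

def hermite_path(p0, heading, p1, n=20):
    """Sample a cubic segment from pose p0 (leaving along `heading`) to
    waypoint p1; stands in for the claim-7 cubic-spline planner."""
    d = math.dist(p0, p1)                     # scale tangents by chord length
    m0 = (d * math.cos(heading), d * math.sin(heading))
    m1 = (p1[0] - p0[0], p1[1] - p0[1])       # arrive along the chord direction
    pts = []
    for k in range(n + 1):
        t = k / n
        h00 = 2*t**3 - 3*t**2 + 1             # cubic Hermite basis functions
        h10 = t**3 - 2*t**2 + t
        h01 = -2*t**3 + 3*t**2
        h11 = t**3 - t**2
        pts.append((h00*p0[0] + h10*m0[0] + h01*p1[0] + h11*m1[0],
                    h00*p0[1] + h10*m0[1] + h01*p1[1] + h11*m1[1]))
    return pts

# Robot at the origin heading "up" (pi/2) must reach the waypoint (2, 0):
path = hermite_path((0.0, 0.0), math.pi / 2, (2.0, 0.0))
```

Because the initial tangent follows the robot's heading, the sampled path curves smoothly instead of turning in place, which is what makes it suitable for MPC tracking in step B.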
CN202110488134.2A 2021-05-06 2021-05-06 End-to-end autonomous navigation method for indoor mobile robot in unstructured environment Active CN113358118B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110488134.2A CN113358118B (en) 2021-05-06 2021-05-06 End-to-end autonomous navigation method for indoor mobile robot in unstructured environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110488134.2A CN113358118B (en) 2021-05-06 2021-05-06 End-to-end autonomous navigation method for indoor mobile robot in unstructured environment

Publications (2)

Publication Number Publication Date
CN113358118A true CN113358118A (en) 2021-09-07
CN113358118B CN113358118B (en) 2022-09-20

Family

ID=77526046

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110488134.2A Active CN113358118B (en) 2021-05-06 2021-05-06 End-to-end autonomous navigation method for indoor mobile robot in unstructured environment

Country Status (1)

Country Link
CN (1) CN113358118B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113740838A (en) * 2021-09-13 2021-12-03 四川启睿克科技有限公司 Whole-house personnel tracking method based on millimeter wave radar
CN114510031A (en) * 2021-12-31 2022-05-17 中原动力智能机器人有限公司 Robot vision navigation method, device, robot and storage medium
CN116400605A (en) * 2023-06-08 2023-07-07 成都航空职业技术学院 Robot automatic control method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170057087A1 (en) * 2014-05-02 2017-03-02 Hanwha Techwin Co., Ltd. Device for planning path of mobile robot and method for planning path of mobile robot
CN106931975A (en) * 2017-04-14 2017-07-07 北京航空航天大学 A kind of many strategy paths planning methods of mobile robot based on semantic map
CN107063258A (en) * 2017-03-07 2017-08-18 重庆邮电大学 A kind of mobile robot indoor navigation method based on semantic information
CN111367318A (en) * 2020-03-31 2020-07-03 华东理工大学 A dynamic obstacle environment navigation method and device based on visual semantic information

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113740838A (en) * 2021-09-13 2021-12-03 四川启睿克科技有限公司 Whole-house personnel tracking method based on millimeter wave radar
CN113740838B (en) * 2021-09-13 2024-02-06 四川启睿克科技有限公司 Whole house personnel tracking method based on millimeter wave radar
CN114510031A (en) * 2021-12-31 2022-05-17 中原动力智能机器人有限公司 Robot vision navigation method, device, robot and storage medium
CN116400605A (en) * 2023-06-08 2023-07-07 成都航空职业技术学院 Robot automatic control method and system
CN116400605B (en) * 2023-06-08 2023-08-11 成都航空职业技术学院 Robot automatic control method and system

Also Published As

Publication number Publication date
CN113358118B (en) 2022-09-20

Similar Documents

Publication Publication Date Title
CN113358118B (en) End-to-end autonomous navigation method for indoor mobile robot in unstructured environment
CN110285813B (en) Man-machine co-fusion navigation device and method for indoor mobile robot
Beeson et al. Towards autonomous topological place detection using the extended voronoi graph
WO2022041596A1 (en) Visual slam method applicable to indoor dynamic environment
CN110333714A (en) A method and device for path planning of an unmanned vehicle
CN107092264A (en) Towards the service robot autonomous navigation and automatic recharging method of bank's hall environment
WO2021143543A1 (en) Robot and method for controlling same
US20210060787A1 (en) Education assisting robot and control method thereof
CN107063258A (en) A kind of mobile robot indoor navigation method based on semantic information
Kucner et al. Probabilistic mapping of spatial motion patterns for mobile robots
CN118020038A (en) Two-wheeled self-balancing robot
CN115167433A (en) Method and system for robot to independently explore environmental information by fusing global vision
CN118258407B (en) Navigation method, system, terminal and storage medium based on hierarchical scene graph
Tsuru et al. Online object searching by a humanoid robot in an unknown environment
Flikweert et al. Automatic extraction of a navigation graph intended for IndoorGML from an indoor point cloud
CN119309580A (en) A visual language navigation method for exhibition hall robots based on large models
CN117570960A (en) Indoor positioning navigation system and method for blind guiding robot
Feng et al. osmag: Hierarchical semantic topometric area graph maps in the osm format for mobile robotics
CN115493596A (en) Semantic map construction and navigation method for mobile robot
Zhao et al. A multi-sensor fusion system for improving indoor mobility of the visually impaired
Luperto et al. Completing robot maps by predicting the layout of rooms behind closed doors
CN117804420A (en) Robot navigation method and system based on multilayer map
Hang et al. A multi-strategy path planner based on space accessibility
Kessler et al. I'll keep you in sight: Finding a good position to observe a person
Drouilly et al. Local map extrapolation in dynamic environments

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant