CN113358118A - End-to-end autonomous navigation method for indoor mobile robot in unstructured environment - Google Patents


Info

Publication number
CN113358118A
CN113358118A (application CN202110488134.2A)
Authority
CN
China
Prior art keywords
list
door
points
current
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110488134.2A
Other languages
Chinese (zh)
Other versions
CN113358118B (en
Inventor
黄冉
鲁张帆
刘可
曹政才
胡标
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Chemical Technology
Original Assignee
Beijing University of Chemical Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Chemical Technology filed Critical Beijing University of Chemical Technology
Priority to CN202110488134.2A priority Critical patent/CN113358118B/en
Publication of CN113358118A publication Critical patent/CN113358118A/en
Application granted granted Critical
Publication of CN113358118B publication Critical patent/CN113358118B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/206 Instruments for performing navigational calculations specially adapted for indoor navigation
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G01C21/3807 Creation or updating of map data characterised by the type of data
    • G01C21/383 Indoor data
    • G01C21/3863 Structures of map data
    • G01C21/3867 Geometry of map features, e.g. shape points, polygons or for simplified maps

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses an end-to-end autonomous navigation method for an indoor mobile robot in an unstructured environment, comprising the following steps: 1) acquire the floor plan of the unfamiliar building; 2) topologize the floor plan and create an undirected graph; 3) generate the shortest node path from the undirected graph and the target room; 4) feed the robot's first-person-view image into the collision avoidance navigation module to generate collision avoidance waypoints; 5) generate a collision avoidance path with a cubic spline and track it with model predictive control; 6) judge whether the target point has been reached; if so, end navigation, otherwise return to step 3. With this autonomous navigation method, the robot can navigate autonomously in a complex medical environment using only a single monocular RGB camera as visual input. The method is simple, flexible, economical, practical, accurate, and robust.

Description

End-to-end autonomous navigation method for indoor mobile robot in unstructured environment
Technical Field
The invention relates to an end-to-end autonomous navigation method for indoor robots in unstructured environments, and in particular to an end-to-end autonomous navigation method for medical service robots in unstructured environments.
Background
With continuing social and economic development, people's expectations for their own health and for the level of medical service rise year by year, while heavy medical workloads and a shortage of medical staff are urgent problems facing the medical field. With the development of modern science and technology, various mobile robots have emerged that can replace or assist medical staff in tasks such as sample transport and medical waste recovery, reducing staff's repetitive labor. This offers a new solution to the heavy workload and staff shortage in the medical field.
The robot operates in a complicated, changeable medical environment with many moving people, and reliable autonomous navigation technology is needed for it to reach a target point safely, accurately, and quickly in a space shared with humans. Classical mobile-robot navigation methods, such as the artificial potential field method and the rolling window method, perform well in fixed environments, but adapt poorly and are unreliable in dynamic environments, easily falling into local optima.
For dynamic environments, end-to-end robot navigation methods have been proposed. These methods map the robot's current visual image or lidar readings to the robot's speed and steering angle through a neural network, which alleviates, to some extent, the poor navigation reliability of robots in dynamic environments. However, in complex environments with heavy foot traffic, the real-time collision avoidance remains unsatisfactory. For these reasons, medical service robots have so far been difficult to popularize.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide an end-to-end autonomous navigation method for indoor robots in unstructured environments, aimed mainly at indoor navigation in large buildings such as hospitals. The key points are that processing the building's floor plan solves the problem of navigating with a sparse map, and an end-to-end navigation module solves the collision avoidance problem in complex, changeable dynamic environments.
In order to achieve the purpose, the invention adopts the following technical scheme:
1) Acquire the floor plan layout and construct a semantic-topological graph. Given a target room, search the shortest node path on the undirected graph and set every node on the path as a sub-target.
2) Acquire the robot's first-person-view scene image.
3) Combining the first-person-view scene image from step 2 with the current sub-target, the navigation module outputs collision avoidance waypoints for the current scene.
After step 3, the method further comprises:
4) Based on the collision avoidance waypoints generated in step 3, the robot generates a collision avoidance path through a cubic spline and tracks the path with model predictive control to reach the waypoints.
After step 4, the method further comprises:
5) Judge whether the current sub-target has been reached; if so, execute step 6; if not, return to step 2.
6) Judge whether the target point coordinates have been reached; if so, stop navigation; if not, acquire the current sub-target at this moment and return to step 2.
The topological graph is created by the following steps:
A) acquiring a floor plan layout of a building;
B) SegLink is used to detect text (room and corridor numbers) and YOLO is used to detect identification marks. The detection results are as follows: the room numbers, corridor numbers, and identifiers (including stair, emergency-exit, and toilet signs) on the plan are marked with rectangular boxes, and the top-left corner coordinates, length, and width of each box are recorded; the text is then recognized with a CRNN;
C) a semantic segmentation network identifies the door, wall, room, and corridor elements of the plan. The results are as follows: wall elements are marked in black, door elements in pink, corridor elements in blue, and room elements in multiple random colors (none of the three colors above). The implementation is:
a) preprocess the plan into a 512 x 512 pixel image;
b) extract features from the preprocessed image with a VGG encoder;
c) decode the features with separate VGG decoders, predicting task one (doors and walls) and task two (rooms and corridors); the loss functions of the overall network (shown in Fig. 1) are as follows:
$$L = -\sum_{i=1}^{C} w_i\, y_i \log(p_i)$$

$$w_i = N - N_i$$

$$L_{Total} = \omega_1 L_1 + \omega_2 L_2$$

$$\omega_i = \frac{N_{Total\_i}}{N_{Total\_1} + N_{Total\_2}}$$

wherein L represents the cross-entropy loss of task one or task two; C represents the number of prediction classes; y_i represents the true class of a pixel; p_i represents the predicted probability for that pixel; N represents the total number of pixels of the prediction classes in the task; N_i represents the number of pixels of the i-th prediction class in the task; L_Total represents the overall cross-entropy loss of tasks one and two; N_Total_i indicates the number of pixels of the task-i prediction classes in one predicted picture.
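Under the definitions above, the class-weighted cross-entropy and the per-task weighting can be sketched as follows (a minimal NumPy sketch; the exact normalization of the weights ω_i is inferred from the pixel-count definitions, and all function names are illustrative):

```python
import numpy as np

def weighted_ce(probs, labels, num_classes):
    """Class-weighted cross-entropy: w_i = N - N_i, where N is the total
    pixel count and N_i the pixel count of class i (rare classes such as
    doors therefore get larger weights)."""
    N = labels.size
    counts = np.bincount(labels.ravel(), minlength=num_classes)
    w = N - counts
    # probability the network assigned to each pixel's true class
    p_true = np.take_along_axis(probs, labels[..., None], axis=-1).squeeze(-1)
    return float(np.mean(w[labels] * -np.log(p_true + 1e-12)))

def total_loss(l1, l2, n_total_1, n_total_2):
    # omega_i taken proportional to the pixel count of task i's classes
    s = n_total_1 + n_total_2
    return (n_total_1 / s) * l1 + (n_total_2 / s) * l2
```

For instance, `total_loss(weighted_ce(p1, y1, 3), weighted_ce(p2, y2, 2), y1.size, y2.size)` would combine the door/wall and room/corridor tasks.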
D) Constructing a topological structure by the following steps:
a) in the semantically segmented image, find all pink pixels (door elements) and store them in door_list; then sort the pixels into separate list door containers, one per door, with the following algorithm:
Step 1: take the 1st pixel out of door_list, store it in a list door, and use it as the seed point;
Step 2: judge whether any of the 8-neighbourhood points of the current seed exist in door_list, or whether the list door holds points other than the seed; if so, execute Step 3; if not, execute Step 4;
Step 3: take the 8-neighbourhood points of the current seed out of door_list and store them in the list door; take the point after the original seed in the list door as the new current seed and return to Step 2;
Step 4: judge whether door_list is empty; if so, end the loop; if not, create a new list door and return to Step 1;
Step 5: compute the mean of the element coordinates in every list door and represent each door's coordinates by that mean;
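Steps 1 through 5 amount to 8-neighbourhood region growing over the pink pixels. A minimal sketch (a BFS queue stands in for the patent's explicit seed bookkeeping; names are illustrative):

```python
from collections import deque

def cluster_door_pixels(door_pixels):
    """Group door pixels into connected components via 8-neighbourhood
    region growing, then represent each door by its mean coordinate."""
    remaining = set(door_pixels)          # the door_list
    doors = []
    while remaining:
        seed = remaining.pop()            # Step 1: take a pixel as seed
        comp, queue = [seed], deque([seed])
        while queue:                      # Steps 2-4: grow over neighbours
            x, y = queue.popleft()
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    p = (x + dx, y + dy)
                    if p in remaining:
                        remaining.discard(p)
                        comp.append(p)
                        queue.append(p)
        # Step 5: the mean coordinate stands in for the door position
        cx = sum(p[0] for p in comp) / len(comp)
        cy = sum(p[1] for p in comp) / len(comp)
        doors.append(((cx, cy), comp))
    return doors
```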
b1) associate the house numbers and identifiers obtained in step B with the node colors, push all the list doors onto a stack as independent elements, and execute the following algorithm:
Step 1: if the stack is empty, the algorithm ends; otherwise, take the centre coordinate of the top element as the seed point;
Step 2: acquire the 4-neighbourhood points of the seed; if a non-pink, non-black point exists and its colour is not yet in the adjacency list, store its coordinate in the adjacency list; otherwise take the current 4 points as new seeds;
Step 3: if the current element has been traversed or the adjacency list length equals 2, create a new adjacency list and return to Step 1; otherwise return to Step 2;
c1) identify the nodes sharing each door by inspecting the elements of every adjacency list; two nodes sharing the same door are judged to be connected regions, and topological graph 1 is constructed.
b2) Compute the mean μ0 of the element counts of the list doors from a); delete the doors whose element count is less than μ0/2, then compute the mean μ1 and variance σ1² of the element counts of the remaining list doors.
With significance level α = 0.05, the confidence interval at confidence level 0.95 is:

$$\left(\mu_1 - z_{\alpha/2}\,\sigma_1,\; \mu_1 + z_{\alpha/2}\,\sigma_1\right)$$
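A sketch of this filtering and interval construction (assuming z_{α/2} ≈ 1.96 for α = 0.05 and the population standard deviation; names are illustrative):

```python
import statistics

Z_975 = 1.959964  # z-value for alpha = 0.05, two-sided

def door_size_interval(door_sizes):
    """Drop undersized door blobs (element count < mu0 / 2), then build
    a 95% interval around the mean size of the remaining blobs."""
    mu0 = statistics.mean(door_sizes)
    kept = [s for s in door_sizes if s >= mu0 / 2]
    mu1 = statistics.mean(kept)
    sigma1 = statistics.pstdev(kept)
    return kept, (mu1 - Z_975 * sigma1, mu1 + Z_975 * sigma1)
```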
and (3) associating the house number and the identifier obtained in the step (B) with the room color, stacking all the list doors as independent elements, and executing the following algorithm:
step 1: if the stack is empty, the algorithm is ended; otherwise, taking out the center coordinates of the stack top elements as seed points;
step 2: acquiring neighborhood adjacent points of the seed points 4, if non-pink or non-black points exist and the new list does not have the current color, storing the coordinates into the new list, and if not, taking the current 4 points as new seed points;
and step 3: and if the length of the current element or the adjacency list is equal to 2 after traversing, creating a new adjacency list and returning to the step 1, otherwise, returning to the step 2.
c2) Judging the nodes of the same door by searching the elements contained in each adjacent list; the two nodes share the same door, are judged to be connected areas, and a topological graph 2 is constructed;
the undirected graph is created by the following steps:
A) acquire the position information of the real rooms and doors;
B) associate the position information from step A with topological graphs 1 and 2 to construct the undirected graphs;
C) convert undirected graph 2 into an adjacency matrix A (rows and columns index nodes; matrix entries 1 and 0 indicate whether two nodes are connected);
D) construct the Laplacian matrix L from the adjacency matrix A:

$$L = D - A, \qquad D_{ii} = \sum_{j} A_{ij}$$

where D is the degree matrix;
E) if the Laplacian matrix L has exactly one zero eigenvalue, undirected graph 2 is used; otherwise undirected graph 1 is used.
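Steps C) through E) can be sketched as a Laplacian connectivity test; the multiplicity of the zero eigenvalue of L equals the number of connected components, so a connected graph has exactly one:

```python
import numpy as np

def is_connected(adj):
    """Undirected graph is connected iff its Laplacian L = D - A has
    exactly one (near-)zero eigenvalue."""
    A = np.asarray(adj, dtype=float)
    D = np.diag(A.sum(axis=1))        # degree matrix
    L = D - A
    eigvals = np.linalg.eigvalsh(L)   # L is symmetric
    return int(np.sum(np.abs(eigvals) < 1e-9)) == 1
```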
The navigation module comprises the following aspects:
A) the navigation module body consists of a ResNet-50 network and 3 fully connected layers (shown in Fig. 2);
B) the robot's current first-person-view image is fed into the ResNet-50 network; the ResNet-50 output features, the target-point coordinates, and the current linear and angular velocity are fed into the fully connected layers, whose final output is the waypoint position;
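A sketch of the fusion in B), with random weights standing in for the trained ResNet-50 backbone and fully connected head (the layer widths 256 and 64 are assumptions; the patent does not give them):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def waypoint_head(visual_feat, goal_xy, vel, ang_vel, weights):
    """Fuse the visual feature vector with the goal and motion state,
    then map to a 2-D waypoint through three fully connected layers."""
    x = np.concatenate([visual_feat, goal_xy, [vel, ang_vel]])
    for W, b in weights[:-1]:
        x = relu(W @ x + b)
    W, b = weights[-1]
    return W @ x + b                  # (x, y) of the next waypoint

# 2048-d ResNet-50 feature + 2-d goal + 2 scalars -> 256 -> 64 -> 2;
# random weights here only illustrate the data flow
dims = [2048 + 2 + 2, 256, 64, 2]
weights = [(rng.standard_normal((o, i)) * 0.01, np.zeros(o))
           for i, o in zip(dims[:-1], dims[1:])]
```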
C) the network is trained in simulation, as follows:
a) in the virtual environment, human walking trajectories are simulated with a cubic spline function and tracked with model predictive control; the optimization minimizes:

$$\min_{u^{h}_{1:T}} \sum_{i=1}^{T} \omega_1\, d_g\!\left(s^{h}_i\right) + \omega_2\, c_o\!\left(s^{h}_i\right) + \omega_3 \left\| u^{h}_i \right\|^2$$

$$c_o\!\left(s^{h}_i\right) = \max\!\left(0,\; d_{safe} - d_o\!\left(s^{h}_i\right)\right)$$

wherein ω_1, ω_2, ω_3 represent penalty weights; s^h_i represents the state of the person at time i, including position and heading; d_g(s^h_i) represents the distance from the person to the person's target point; c_o(s^h_i) represents the penalized distance between the person and obstacles; d_o(s^h_i) represents the signed distance between the person and obstacles; u^h_i represents the person's speed and angular velocity at time i;
b) in the virtual environment, the robot's trajectory is simulated with a cubic spline function and tracked with model predictive control; the optimization minimizes:

$$\min_{u^{r}_{1:T}} \sum_{i=1}^{T} \omega_1\, d_g\!\left(s^{r}_i\right) + \omega_2\, c_o\!\left(s^{r}_i\right) + \omega_3\, c_p\!\left(s^{r}_i\right) + \omega_4 \left\| u^{r}_i \right\|^2$$

$$c_p\!\left(s^{r}_i\right) = \max\!\left(0,\; d_{safe} - d_p\!\left(s^{r}_i\right)\right)$$

wherein ω_1, ..., ω_4 represent penalty weights; s^r_i represents the state of the mobile robot at time i, including position and heading; d_g(s^r_i) represents the distance from the robot to the robot's target point; c_o(s^r_i) represents the penalized distance between the mobile robot and obstacles; d_o(s^r_i) represents the signed distance between the mobile robot and obstacles; c_p(s^r_i) represents the penalized distance between the mobile robot and people; d_p(s^r_i) represents the signed distance between the mobile robot and people at time i;
c) record the robot's current first-person-view image and current state information in the virtual environment, intercept the waypoints generated by model predictive control, and run supervised training with the image and state as input and the waypoint information as output;
the cubic spline function and model predictive control comprises the following steps:
A) from the current speed and heading, combined with the next waypoint, plan a collision avoidance path with a cubic spline function;
B) track the path with model predictive control.
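A sketch of the spline step, using a single parametric cubic Hermite segment as a stand-in for the cubic spline (deriving the end tangents from heading and speed is an assumption; matching position and tangent at both ends is what keeps velocity continuous across consecutive waypoints):

```python
import math

def hermite_path(p0, heading0, v0, p1, heading1, v1, n=20):
    """Cubic Hermite segment from pose p0 to waypoint p1, sampled at
    n + 1 points; end tangents come from heading and speed."""
    m0 = (v0 * math.cos(heading0), v0 * math.sin(heading0))
    m1 = (v1 * math.cos(heading1), v1 * math.sin(heading1))
    pts = []
    for k in range(n + 1):
        t = k / n
        h00 = 2*t**3 - 3*t**2 + 1     # Hermite basis polynomials
        h10 = t**3 - 2*t**2 + t
        h01 = -2*t**3 + 3*t**2
        h11 = t**3 - t**2
        pts.append((h00*p0[0] + h10*m0[0] + h01*p1[0] + h11*m1[0],
                    h00*p0[1] + h10*m0[1] + h01*p1[1] + h11*m1[1]))
    return pts
```

The sampled points would then be handed to the model predictive controller as the reference to track.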
The building scenes include hospital buildings, mall buildings, and school buildings.
With this deep-learning-based autonomous navigation method for medical service robots in dynamic environments, a monocular RGB camera serves directly as the visual input device, which is inexpensive, reliable, and stable. The graph-based shortest node path largely avoids the local-optimum problem; the learning-based navigation module can accurately predict the trajectories of moving obstacles, solving collision avoidance in complex, changeable environments. Model predictive control keeps the robot running smoothly, and the cubic spline function keeps the velocity and acceleration continuous and smooth before and after each waypoint, giving the method strong robustness.
Drawings
FIG. 1 is a semantic segmentation network structure of the present invention;
FIG. 2 is a network architecture of a navigation module of the present invention;
FIG. 3 is a semantic-topological map construction flow diagram of the present invention;
FIG. 4 is a flow chart of the present invention for making an indoor undirected graph;
FIG. 5 is a pictorial illustration of a floor plan in accordance with the present invention;
FIG. 6 is a floor plan topology without the addition of cross-over points in accordance with the present invention;
FIG. 7 is a floor plan topology with added junctions in accordance with the present invention;
FIG. 8 is a schematic diagram illustrating the generation of the shortest node path according to the present invention;
fig. 9 is a schematic diagram of local path planning in the present invention.
FIG. 10 is an overall flow chart of the method implementation of the present invention.
Detailed Description
In order to make the present invention more intuitively understandable for practitioners in the art, the present invention is further described below with reference to specific examples, but in practical applications, the present invention is not limited to the specific examples.
The medical service robot autonomous navigation method based on deep learning in the dynamic environment needs to be used in large indoor places such as hospital buildings and the like, and the function of navigation in the dynamic environment is to enable the mobile robot to safely, accurately and quickly reach a target point.
The user using this navigation method needs to complete the following indoor undirected graph production steps (as shown in fig. 3) first:
acquiring a floor plan of a building (such as a hospital), and establishing a preliminary indoor map (shown in figure 4);
acquiring room and room-corridor junction (door) position information (coordinates);
The floor plan is topologized (step shown in fig. 5) according to the following criteria: structures such as rooms and halls are each treated as 1 node; a connected corridor is treated as 1 node; if corridor a is cut off on one side by corridor b (a T-junction), corridors a and b are each treated as 1 node; if corridors a and b cut each other off (a crossroads), each is treated as 2 nodes; a corridor cut off by a room or hall is treated as 2 nodes; each room-corridor junction is treated as 1 edge; the results are shown in the figures (see figs. 6 and 7).
Each node in the topological graph is then associated with the acquired position information and represented in undirected-graph form, yielding a complete indoor undirected map (undirected graph 1 and undirected graph 2). Connectivity of the whole graph is judged with the Laplacian matrix: if the undirected graph whose door errors were removed by thresholding (undirected graph 1) is not connected (the Laplacian matrix has two or more zero eigenvalues), the undirected graph without interference removal (undirected graph 2) must be used.
After the data nodes are stored in the indoor undirected graph, the user can also add or delete the data nodes according to the needs.
To complete the above work, the user needs to complete the following steps in order to use the navigation function in the hospital:
1) a target room is given, a shortest node path is searched by utilizing an undirected graph, and nodes on the path are marked as sub-targets;
In this step, the user gives the robot a target room (e.g. A12); combined with the robot's current position (e.g. A8), the shortest node path from A8 to A12 is searched on the undirected graph, yielding (a): A8-C3-H5-C2-A12, and equivalently (b): A8-(A8-C3)-(C3-H5)-(H5-C2)-(C2-A12)-A12. (Note: the edges C3-A8 and A8-C3 are the same edge in the figure.) The nodes in (b) are set as sub-targets (as shown in fig. 8; the cross symbols mark the sub-target positions), which serve as the basis for the robot's subsequent navigation;
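The shortest-node-path search above can be sketched with Dijkstra's algorithm on a toy undirected graph (the node names follow the example; the edge weights are hypothetical):

```python
import heapq

def shortest_node_path(edges, start, goal):
    """Dijkstra over the undirected room/corridor graph; every node on
    the returned path becomes a sub-target."""
    graph = {}
    for a, b, w in edges:
        graph.setdefault(a, []).append((b, w))
        graph.setdefault(b, []).append((a, w))
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]
```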
2) acquiring a current first visual angle scene image of the robot;
The scene image is captured by a monocular RGB camera mounted on the front of the robot; no depth information or map information is needed. The scene image is used for the path planning in step 3. (Note: the camera is fixed so that a. the captured scene image covers most of the obstacles in front of the robot; b. it is mounted at a relatively high position on the robot, e.g. the head.)
3) From the current first-person-view image, the robot outputs collision avoidance waypoints toward the current sub-target;
In this step, the navigation module takes the scene image captured in step 2 and the established sub-target and outputs the collision avoidance waypoint, i.e. the position the robot is to reach in step 4;
Because the neural-network navigation module's training data set contains moving obstacles, the robot can smoothly avoid moving obstacles during navigation (as shown in fig. 9);
Because the training data set also contains noise-corrupted samples, the navigation module can still output waypoint information stably and effectively even when poor lighting blurs the scene image;
4) Generate a collision avoidance path from the current position to the waypoint with a cubic spline function, using the current speed and steering angle, and track the path with model predictive control so that the robot reaches the waypoint;
The path generated by the cubic spline function keeps the path, velocity, and acceleration continuous and smooth before and after the waypoint (as shown in fig. 6), ensuring the robot's stable operation;
5) judging whether the current sub-target is reached, if so, executing the next step; if not, returning to the step 2;
Because the robot usually cannot navigate to a sub-target with a single waypoint output (as shown in fig. 9), it must output waypoints over multiple steps until it reaches the sub-target, during which the sub-target point in the navigation map does not change. (Note: fig. 9 marks the robot's original position, the historically output waypoints, the robot's current position, the collision avoidance waypoint output from the current visual image, the pedestrians' positions, the robot's field of view, and the sub-target point A8-C3.)
6) Judge whether the target point coordinates have been reached; if so, stop navigation; if not, acquire the current sub-target at this moment and return to step 2.
After navigating to a sub-target point, whether to stop navigating is decided by judging whether that sub-target is the end point (the target room given in step 1); if it is not, the next sub-target point is acquired and navigation continues until the end point is reached. For example, while executing the navigation task A8-A12, when the robot arrives at sub-target point A8-C3 and judges that this point is not the end point, it acquires the sub-target point for the current moment (the point after A8-C3) and executes the sub-task from A8-C3 to C3-H5;
the foregoing is only a basic embodiment of the invention, and modifications and developments made by those skilled in the art without departing from the principle of the invention should also be considered as the protection scope of the invention.

Claims (8)

1. An end-to-end autonomous navigation method for an indoor mobile robot in an unstructured environment, characterized by comprising the following steps:
1) acquiring a floor plan and constructing a semantic-topological graph; given a target room, searching the shortest path on the undirected graph and setting the path nodes as sub-targets;
2) acquiring the robot's first-person-view scene image;
3) combining the scene image from step 2 with the current sub-target, the navigation module outputting collision avoidance waypoints.
2. The method for autonomous navigation of an indoor mobile robot end-to-end in an unstructured environment as defined in claim 1, further comprising after step 3:
4) based on the collision avoidance waypoints from step 3, the robot generating a collision avoidance path through a cubic spline and tracking the path with model predictive control to reach the collision avoidance waypoints.
3. The method for autonomous navigation of an indoor mobile robot end-to-end in an unstructured environment as defined in claim 2, further comprising after step 4:
5) judging whether the current sub-target is reached, if so, executing step 6; if not, returning to the step 2;
6) judging whether the target point coordinates have been reached; if so, stopping navigation; if not, acquiring the current sub-target at this moment and returning to step 2.
4. The method for end-to-end autonomous navigation of an indoor mobile robot in an unstructured environment of claim 1, wherein the creation of the semantic-topological graph comprises the steps of:
A) acquiring a floor plan layout of a building;
B) the SegLink is used for carrying out text detection on the plane graph, the yolo is used for detecting and identifying the mark, and the specific detection result is as follows: marking the room number, the corridor number and the identifier of the plan by using a rectangular box, and recording the position coordinate of the upper left corner of the rectangular box and the length and the width of the rectangular box; identifying the text by using the CRNN;
C) using semantic segmentation technology to identify elements of doors, walls, rooms and corridors of the plane graph, wherein the specific identification result is as follows: the elements that are walls in the plan view are marked in black, the elements that are doors in the plan view are marked in pink, the elements that are corridors in the plan view are marked in blue, and the elements that are rooms in the plan view are marked in a plurality of random colors; the specific implementation method comprises the following steps:
a) the plan view is preprocessed into a 512 x 512 pixel image;
b) performing feature extraction on the preprocessed image by using a VGG-encoder;
c) decoding the features with separate VGG decoders, predicting task one (doors and walls) and task two (rooms and corridors); the loss functions of the overall network are defined as follows:
$$L = -\sum_{i=1}^{C} w_i\, y_i \log(p_i)$$

$$w_i = N - N_i$$

$$L_{Total} = \omega_1 L_1 + \omega_2 L_2$$

$$\omega_i = \frac{N_{Total\_i}}{N_{Total\_1} + N_{Total\_2}}$$

wherein L represents the cross-entropy loss of task one or task two; C represents the number of prediction classes; y_i represents the true class of a pixel; p_i represents the predicted probability for that pixel; N represents the total number of pixels of the prediction classes in the task; N_i represents the number of pixels of the i-th prediction class in the task; L_Total represents the overall cross-entropy loss of tasks one and two; N_Total_i indicates the number of pixels of the task-i prediction classes in one predicted picture.
D) Constructing a topological structure by the following steps:
a) in the picture processed by the semantic segmentation, all pixel points with pink colors are found and stored in a door _ list, the pixel points in the list are stored in different lists of doors in a classified manner, and the following algorithm is executed:
Step 1: take the 1st pixel out of door_list, store it in a list door, and use it as the seed point;
Step 2: judge whether any 8-neighborhood point of the current seed point exists in door_list, or whether door contains a point after the current seed point; if so, go to Step 3; if not, go to Step 4;
Step 3: move the 8-neighborhood points of the current seed point from door_list into door, take the point after the current seed point in door as the new seed point, and return to Step 2;
Step 4: judge whether door_list is empty; if empty, end the loop; if not, create a new list door and return to Step 1;
Step 5: compute the mean of the element coordinates in each list door, and represent that door's coordinates by the mean;
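Steps 1-5 amount to 8-neighborhood region growing followed by a centroid computation. A minimal sketch, with hypothetical pixel coordinates (the function name is illustrative, not from the patent):

```python
from collections import deque

def group_door_pixels(door_pixels):
    """Split the set of door-colored pixels into per-door groups by
    8-neighborhood region growing, then return each door's mean
    coordinate (steps 1-5 above)."""
    remaining = set(door_pixels)
    centers = []
    while remaining:                      # Step 4: loop until door_list empty
        seed = next(iter(remaining))      # Step 1: take a pixel as the seed
        remaining.discard(seed)
        door, queue = [seed], deque([seed])
        while queue:                      # Steps 2-3: grow over 8-neighbors
            x, y = queue.popleft()
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    p = (x + dx, y + dy)
                    if p in remaining:
                        remaining.discard(p)
                        door.append(p)
                        queue.append(p)
        # Step 5: represent the door by the mean of its pixel coordinates.
        cx = sum(p[0] for p in door) / len(door)
        cy = sum(p[1] for p in door) / len(door)
        centers.append((cx, cy))
    return centers

# Two doors: one spanning pixels (0,0)-(0,1), one isolated at (10,10).
centers = group_door_pixels([(0, 0), (0, 1), (10, 10)])
```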
b1) associate the room numbers and identifiers obtained in step B with the room colors, push all the list door groups onto a stack as independent elements, and execute the following algorithm:
Step 1: if the stack is empty, the algorithm ends; otherwise pop the top element and take its center coordinate as the seed point;
Step 2: obtain the 4-neighborhood points of the seed point; if a point that is neither pink nor black is found and its color is not yet in the adjacency list, store its coordinate in the adjacency list; otherwise take the current 4 points as new seed points;
Step 3: if the current element has been fully traversed or the adjacency list has length 2, create a new adjacency list and return to Step 1; otherwise return to Step 2;
c1) nodes belonging to the same door are identified by searching the elements contained in each adjacency list; two nodes sharing the same door are judged to be connected regions, and topological graph 1 is constructed;
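A sketch of how c1) turns the adjacency lists into graph edges, assuming each adjacency list ends up holding the two region colors found around one door (the color names and function name here are hypothetical):

```python
def build_topology(door_adjacency):
    """door_adjacency: for each door, the two region colors collected by
    the 4-neighborhood scan above.  Two regions sharing a door are judged
    connected; return the edge set of the topological graph."""
    edges = set()
    for regions in door_adjacency:
        if len(regions) == 2:          # a door always separates two regions
            a, b = sorted(regions)     # canonical order for an undirected edge
            edges.add((a, b))
    return edges

# Hypothetical room colors: door 1 joins 'red' and 'blue',
# door 2 joins 'blue' and 'green'.
edges = build_topology([["red", "blue"], ["blue", "green"]])
```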
b2) compute the mean mu_0 of the element counts of the list door groups from a), delete the groups whose element count is less than mu_0/2, and compute the mean mu_1 and variance of the element counts of the n remaining groups:
sigma_1^2 = (1/n) · Σ_{i=1}^{n} (x_i - mu_1)^2
By setting the significance level alpha to 0.05, a confidence interval for the mean with confidence level 0.95 is determined:
[mu_1 - z_{alpha/2}·sigma_1/sqrt(n), mu_1 + z_{alpha/2}·sigma_1/sqrt(n)]
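The interval in b2) can be sketched as follows. This is an illustration under stated assumptions: z_{alpha/2} is hardcoded to the standard value 1.96 for alpha = 0.05, the population-variance form is assumed, and the door-size counts are hypothetical.

```python
import math

def mean_confidence_interval(counts, z=1.96):
    """95% confidence interval for the mean door size after the
    filtering step in b2) (alpha = 0.05, z_{alpha/2} ~ 1.96)."""
    n = len(counts)
    mu = sum(counts) / n
    var = sum((x - mu) ** 2 for x in counts) / n   # population variance
    half = z * math.sqrt(var) / math.sqrt(n)
    return mu - half, mu + half

# Element counts of the remaining per-door pixel groups (hypothetical).
lo, hi = mean_confidence_interval([12, 14, 13, 15, 16])
```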
associate the room numbers and identifiers obtained in step B with the room colors, push the remaining list door groups onto a stack as independent elements, and execute the following algorithm:
Step 1: if the stack is empty, the algorithm ends; otherwise pop the top element and take its center coordinate as the seed point;
Step 2: obtain the 4-neighborhood points of the seed point; if a point that is neither pink nor black is found and its color is not yet in the adjacency list, store its coordinate in the adjacency list; otherwise take the current 4 points as new seed points;
Step 3: if the current element has been fully traversed or the adjacency list has length 2, create a new adjacency list and return to Step 1; otherwise return to Step 2;
c2) nodes belonging to the same door are identified by searching the elements contained in each adjacency list; two nodes sharing the same door are judged to be connected regions, and topological graph 2 is constructed.
5. The method for end-to-end autonomous navigation of an indoor mobile robot in an unstructured environment of claim 1, wherein the creation of the undirected graph comprises the steps of:
A) acquiring position information of a real room and a door;
B) associating the position information acquired in the step A with the topological graphs 1 and 2 to construct undirected graphs 1 and 2;
C) converting the undirected graph 2 into an adjacency matrix A (rows and columns index the nodes; matrix entries 1 and 0 indicate whether the corresponding nodes are connected);
D) constructing the Laplacian matrix L from the adjacency matrix A:
L = D - A
where D is the diagonal degree matrix, D_ii = Σ_j A_ij;
E) if the Laplacian matrix L has exactly one zero eigenvalue (i.e., undirected graph 2 is connected), undirected graph 2 is used; otherwise undirected graph 1 is used.
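Since the multiplicity of the zero eigenvalue of L = D - A equals the number of connected components of the graph, the check in step E) can be sketched without an eigensolver, by counting components directly on the adjacency matrix. A pure-Python illustration; the example matrices are hypothetical:

```python
def num_components(A):
    """Count connected components of the graph with adjacency matrix A.
    This equals the multiplicity of eigenvalue 0 of the Laplacian
    L = D - A, so 'exactly one zero eigenvalue' in step E) means the
    graph is connected."""
    n = len(A)
    seen, components = set(), 0
    for start in range(n):
        if start in seen:
            continue
        components += 1
        stack = [start]                    # depth-first search from `start`
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            stack.extend(v for v in range(n) if A[u][v] and v not in seen)
    return components

A_connected = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]   # path graph: connected
A_split     = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]   # isolated third node
use_graph2 = num_components(A_connected) == 1
```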
6. The method for end-to-end autonomous navigation of an indoor mobile robot in an unstructured environment of claim 1, wherein the navigation module comprises the following aspects:
A) the navigation module consists of a ResNet-50 network and 3 fully connected layers;
B) the robot's current first-person-view image is fed into the ResNet-50 network; the ResNet-50 output features, the target-point coordinates, and the current linear and angular velocity are fed into the fully connected layers, which finally output the position of a waypoint;
C) the network is trained in a virtual environment by the following steps:
a) in the virtual environment, human walking trajectories are simulated by a cubic spline function and tracked by model predictive control; the optimization minimizes
min_u Σ_{i=1}^{T} [ ω_g·d_g(s_i^h) + ω_o·c_o(s_i^h) + ω_u·||u_i^h||^2 ]
s.t. s_{i+1}^h = f(s_i^h, u_i^h)
wherein ω_g, ω_o, ω_u represent penalty weights; s_i^h represents the state of the person at time i, comprising position and heading angle; d_g(s_i^h) represents the distance from the person to the person's target point; c_o(s_i^h) represents the penalized distance between the person and obstacles, computed from the signed distance between the person and the obstacle; u_i^h represents the linear and angular velocity of the person at time i;
b) in the virtual environment, the motion trajectory of the robot is simulated by a cubic spline function and tracked by model predictive control; the optimization minimizes
min_u Σ_{i=1}^{T} [ ω_g·d_g(s_i^r) + ω_o·c_o(s_i^r) + ω_h·c_h(s_i^r) + ω_u·||u_i^r||^2 ]
s.t. s_{i+1}^r = f(s_i^r, u_i^r)
wherein ω_g, ω_o, ω_h, ω_u represent penalty weights; s_i^r represents the state of the mobile robot at time i, comprising position and rotation angle; d_g(s_i^r) represents the distance from the mobile robot to its target point; c_o(s_i^r) represents the penalized distance between the mobile robot and obstacles, computed from the signed distance between the mobile robot and the obstacle; c_h(s_i^r) represents the penalized distance between the mobile robot and the person, computed from the signed distance between the mobile robot and the person at time i; u_i^r represents the linear and angular velocity of the mobile robot at time i;
c) recording the robot's current first-person-view image and current state information in the virtual environment, extracting the waypoints generated by model predictive control, and performing supervised learning with the first-person-view image and current state information as input and the waypoint information as output.
7. The method for end-to-end autonomous navigation of a mobile robot in an unstructured environment of claim 2, wherein said cubic spline and model predictive control comprises the steps of:
A) according to the current velocity and heading, combined with the next waypoint information, planning a collision-free path using a cubic spline function;
B) tracking the path via model predictive control.
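A minimal sketch of step A), substituting a single cubic Hermite segment for the claim's cubic spline function: the start tangent encodes the robot's current heading, the end tangent points at the waypoint, and the tangent magnitudes are set to the straight-line distance. These tangent choices are modeling assumptions for illustration, not the patented method.

```python
import math

def hermite_path(p0, heading, p1, samples=20):
    """Plan a smooth path from position p0 (leaving along `heading`)
    to the next waypoint p1 with one cubic Hermite segment."""
    d = math.hypot(p1[0] - p0[0], p1[1] - p0[1])
    t0 = (d * math.cos(heading), d * math.sin(heading))  # start tangent
    t1 = (p1[0] - p0[0], p1[1] - p0[1])                  # end tangent
    path = []
    for k in range(samples + 1):
        t = k / samples
        # Cubic Hermite basis functions.
        h00 = 2*t**3 - 3*t**2 + 1
        h10 = t**3 - 2*t**2 + t
        h01 = -2*t**3 + 3*t**2
        h11 = t**3 - t**2
        x = h00*p0[0] + h10*t0[0] + h01*p1[0] + h11*t1[0]
        y = h00*p0[1] + h10*t0[1] + h01*p1[1] + h11*t1[1]
        path.append((x, y))
    return path

# Robot at the origin facing +y, next waypoint at (2, 2): the path bends
# from the initial heading toward the waypoint, and a model predictive
# controller (step B) would then track the sampled points.
path = hermite_path((0.0, 0.0), math.pi / 2, (2.0, 2.0))
```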
8. The method for end-to-end autonomous navigation of indoor mobile robots in unstructured environments of any of claims 1 to 7, characterized in that the building scenes comprise hospital buildings, mall buildings and school buildings.
CN202110488134.2A 2021-05-06 2021-05-06 End-to-end autonomous navigation method for indoor mobile robot in unstructured environment Active CN113358118B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110488134.2A CN113358118B (en) 2021-05-06 2021-05-06 End-to-end autonomous navigation method for indoor mobile robot in unstructured environment


Publications (2)

Publication Number Publication Date
CN113358118A true CN113358118A (en) 2021-09-07
CN113358118B CN113358118B (en) 2022-09-20

Family

ID=77526046

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110488134.2A Active CN113358118B (en) 2021-05-06 2021-05-06 End-to-end autonomous navigation method for indoor mobile robot in unstructured environment

Country Status (1)

Country Link
CN (1) CN113358118B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170057087A1 (en) * 2014-05-02 2017-03-02 Hanwha Techwin Co., Ltd. Device for planning path of mobile robot and method for planning path of mobile robot
CN106931975A (en) * 2017-04-14 2017-07-07 北京航空航天大学 A kind of many strategy paths planning methods of mobile robot based on semantic map
CN107063258A (en) * 2017-03-07 2017-08-18 重庆邮电大学 A kind of mobile robot indoor navigation method based on semantic information
CN111367318A (en) * 2020-03-31 2020-07-03 华东理工大学 Dynamic obstacle environment navigation method and device based on visual semantic information


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113740838A (en) * 2021-09-13 2021-12-03 四川启睿克科技有限公司 Whole-house personnel tracking method based on millimeter wave radar
CN113740838B (en) * 2021-09-13 2024-02-06 四川启睿克科技有限公司 Whole house personnel tracking method based on millimeter wave radar
CN116400605A (en) * 2023-06-08 2023-07-07 成都航空职业技术学院 Robot automatic control method and system
CN116400605B (en) * 2023-06-08 2023-08-11 成都航空职业技术学院 Robot automatic control method and system

Also Published As

Publication number Publication date
CN113358118B (en) 2022-09-20

Similar Documents

Publication Publication Date Title
Dai et al. Fast frontier-based information-driven autonomous exploration with an mav
Taylor et al. Vision-based motion planning and exploration algorithms for mobile robots
Henry et al. Learning to navigate through crowded environments
Lin et al. Intelligent generation of indoor topology (i-GIT) for human indoor pathfinding based on IFC models and 3D GIS technology
CN113358118B (en) End-to-end autonomous navigation method for indoor mobile robot in unstructured environment
Ntakolia et al. Autonomous path planning with obstacle avoidance for smart assistive systems
Kucner et al. Probabilistic mapping of spatial motion patterns for mobile robots
CN112015187A (en) Semantic map construction method and system for intelligent mobile robot
Zhu et al. Deep learning for embodied vision navigation: A survey
Flikweert et al. Automatic extraction of a navigation graph intended for IndoorGML from an indoor point cloud
Kondyli et al. Rotational locomotion in large-scale environments: A survey and implications for evidence-based design practice
Tsuru et al. Online object searching by a humanoid robot in an unknown environment
CN118258407B (en) Navigation method, system, terminal and storage medium based on hierarchical scene graph
Borkowski et al. Towards semantic navigation in mobile robotics
Niijima et al. Real-time autonomous navigation of an electric wheelchair in large-scale urban area with 3D map
Liu et al. A two-level path-finding strategy for indoor navigation
Boguslawski et al. 3D building interior modelling for navigation in emergency response applications
Wooden Graph-based path planning for mobile robots
Hajibabai et al. Agent-based simulation of spatial cognition and wayfinding in building fire emergency evacuation
Zheng et al. A hierarchical approach for mobile robot exploration in pedestrian crowd
Rasmussen et al. Robot navigation using image sequences
Xue et al. Navigation system with SLAM-based trajectory topological map and reinforcement learning-based local planner
CN112987720A (en) Multi-scale map construction method and construction device for mobile robot
Zhao et al. A multi-sensor fusion system for improving indoor mobility of the visually impaired
Gregorić et al. Autonomous hierarchy creation for computationally feasible near-optimal path planning in large environments

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant