CN113110482B - Indoor environment robot exploration method and system based on priori information heuristic method - Google Patents
Indoor environment robot exploration method and system based on priori information heuristic method
- Publication number
- CN113110482B CN113110482B CN202110475488.3A CN202110475488A CN113110482B CN 113110482 B CN113110482 B CN 113110482B CN 202110475488 A CN202110475488 A CN 202110475488A CN 113110482 B CN113110482 B CN 113110482B
- Authority
- CN
- China
- Prior art keywords
- point
- heuristic
- area
- boundary
- robot
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
- G05D1/0251—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0221—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
- G05D1/0223—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
- G05D1/0276—Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Aviation & Aerospace Engineering (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Electromagnetism (AREA)
- Manipulator (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
Abstract
The invention relates to a prior-information-based heuristic exploration method and system for an indoor environment robot, comprising the following steps: the robot collects data about its surroundings through an on-board sensor; a portion of the map is updated to a known area based on this data, yielding an updated map; boundary extraction is performed on the updated map using two rapidly-exploring random trees (RRTs) to obtain RRT boundary points; a heuristic object is identified and its position estimated, a prior area is constructed with the position of the heuristic object as reference, and boundary points are extracted within the prior area to obtain room boundary points; based on the RRT boundary points and the room boundary points, the robot explores the indoor environment. Because the robot preferentially explores the environment inside the prior area, it can finish exploring one room before turning to other areas, which effectively reduces backtracking during exploration and improves exploration efficiency.
Description
Technical Field
The invention relates to the technical field of robot exploration, in particular to a heuristic indoor environment robot exploration method and system based on prior information.
Background
Breakthroughs in artificial intelligence have brought great opportunities to research on mobile service robots; the application scenarios and service modes of intelligent public-service robots keep expanding, driving rapid growth of the service-robot market in China. Autonomous exploration means that a mobile robot, placed in a new environment without any prior knowledge, moves around to build a complete map of that environment. Current autonomous exploration methods fall roughly into two categories: those based on grid maps and those based on feature maps. Grid maps carry rich information at high resolution, which helps in building an information-gain model over boundary points, so grid-map-based methods are adopted here for research on autonomous robot exploration. The mainstream grid-map approach is boundary-point (frontier) based exploration, which divides the space into known and unknown regions and guides the robot to collect information and update the map. To explore more efficiently, such methods focus mainly on how boundaries are detected and selected, since the choice of boundary directly affects exploration efficiency.
Most existing exploration strategies concentrate on designing an information-gain model over boundary points so as to select the boundary point with the largest expected profit. However, such a model considers only the path cost at the current moment and the information gained by updating the small portion of the map around the boundary point, ignoring the geometric continuity of obstacles in the environment. For example, in an indoor environment, after a robot enters a room it generally has two options: first, finish exploring the room and then move on to the area outside; or second, leave the room before finishing it and come back later to explore it again. The second behaviour is clearly more time-consuming and less efficient, whereas the expected profit of continuing to explore under the first behaviour is larger. Because current exploration strategies evaluate candidate boundary points independently and ignore the potential gain implied by the environment's structure, backtracking behaviour like the second case occurs frequently.
Disclosure of Invention
Therefore, the technical problem to be solved by the invention is to overcome the defect in the prior art that the robot ignores the potential gain of the environment structure and therefore backtracks during exploration.
In order to solve this technical problem, the invention provides a prior-information-based heuristic exploration method for an indoor environment robot, comprising the following steps:
S1, the robot collects data about the surrounding environment through an on-board sensor;
S2, a part of the map is updated to a known area based on the surrounding-environment data, yielding an updated map;
S3, boundary extraction is performed on the updated map using two rapidly-exploring random trees (RRTs) to obtain RRT boundary points;
S4, a heuristic object is identified and its position estimated; a prior area is constructed with the position of the heuristic object as reference, and boundary points are extracted within the prior area to obtain room boundary points; the heuristic object is a door;
S5, based on the RRT boundary points and the room boundary points, the robot explores the indoor environment.
Preferably, S5 includes:
S51, when room boundary points exist, selecting the room boundary point with the largest profit value as the target point; when no room boundary point exists, selecting the RRT boundary point with the largest profit value as the target point;
S52, guiding the robot to navigate to the target point.
Preferably, the profit value of a boundary point is R_f = w1·I_f − w2·N_f,
where I_f is the information gain, i.e. the number of unknown grid cells within the information-gain radius r = 1 around the centroid point,
N_f is the path cost, i.e. the Euclidean distance between the robot's current position and the centroid point,
and w1 and w2 are user-defined constant weights.
Preferably, after S5 the method further includes:
S6, when no boundary point can be detected in the prior area, the area has been fully explored, and the model of the prior area is destroyed so that the next heuristic area can be formed;
S7, repeating S1-S6 until the robot has explored the entire environment, obtaining a grid map.
Preferably, S3 includes:
S31, in the initialization stage, adding a starting point to each tree structure as its root node, the starting points of the two trees being manually set in the free area of the map;
S32, randomly scattering points in the map area to serve as candidate points;
S33, if the candidate point is in the known area, traversing all existing nodes of the tree, selecting the node nearest to the candidate point as the nearest point and taking the line from the nearest point to the candidate point as the growth direction; if the distance between the nearest point and the candidate point exceeds a preset step length, the nearest point grows one step length along the growth direction and the point reached is taken as the growth point, otherwise the candidate point itself is taken as the growth point;
if the candidate point is in the unknown area, first finding the tree node nearest to the candidate point, taking the line from the nearest point to the candidate point as the growth direction, growing forward from the nearest point along that direction, and taking the place where the boundary is reached as a boundary point.
Preferably, after S33 the method further includes:
S34, performing collision detection on the line between the growth point and the candidate point on the map, specifically:
traversing all grid cells on the line between the growth point and the candidate point and checking their grid states;
if any grid cell is occupied, the collision detection fails and the method returns to S32 to sample a new point;
if the line between the growth point and the candidate point touches no obstacle, adding the candidate point, the growth point and the connecting line to the tree structure.
Preferably, in S4, identifying the heuristic object and estimating its position includes:
constructing a lightweight network, completing identification of the heuristic object with a deep-learning method, and obtaining the coordinate information of the heuristic object; the lightweight network comprises convolutional layers, inverted residual blocks, pooling layers and an SPP layer.
Preferably, in S4, constructing the prior area with the position of the heuristic object as reference includes:
when the robot recognizes the heuristic object, if the robot is below the heuristic object the estimated room area lies above the heuristic object's position, and if the robot is above the heuristic object the estimated room area lies below it;
the length of the estimated room area is obtained by extending a distance a to each side of the heuristic object's position, and its width is 2b, extending backwards from that position; the parameters a and b are set empirically.
Preferably, in S4, extracting boundary points within the prior area to obtain room boundary points includes:
binarizing the image of the prior area to obtain a binarized image in which obstacles are white and all other areas are black;
inverting the colors of the binarized image to obtain a color-inverted image in which obstacles are black and all other areas are white;
performing edge detection on the binarized image with the Canny operator, the edges being white and all other areas black in the detection result;
performing a bitwise AND between the binarized image and the color-inverted image to remove redundant white edges and obtain a final image, in which the boundary between the known area and the unknown area consists of straight line segments;
extracting the center of gravity of each straight line segment, which is a room boundary point.
The invention also discloses a prior-information-based heuristic exploration system for an indoor environment robot, comprising:
a data acquisition module, through which the robot collects data about the surrounding environment via its on-board sensor;
a positioning and mapping module, which updates a part of the map to a known area based on the surrounding-environment data to obtain an updated map;
an RRT boundary point extraction module, which performs boundary extraction on the updated map using two rapidly-exploring random trees to obtain RRT boundary points;
a room boundary point extraction module, which identifies a heuristic object, estimates its position, constructs a prior area with the heuristic object's position as reference, and extracts boundary points within the prior area to obtain room boundary points;
an environment exploration module, through which the robot explores the indoor environment based on the RRT boundary points and the room boundary points.
Compared with the prior art, the technical solution of the invention has the following advantages:
By introducing the heuristic prior-information exploration module, once the robot has recognized a heuristic object it preferentially explores the environment inside the prior area; it can therefore finish exploring one room before turning to other areas, which effectively reduces backtracking during exploration and improves exploration efficiency.
Drawings
FIG. 1 is a schematic diagram of the indoor environment robot exploration method of the present invention;
FIG. 2 is a flow chart of the robot exploration of the present invention;
FIG. 3 is a schematic diagram of the process of extracting boundary points with the rapidly-exploring random trees;
FIG. 4 is a structural diagram of the prior region;
FIG. 5 is a diagram of the simulation environment of scene one;
FIG. 6 is a diagram of the simulation environment of scene two;
FIG. 7 is a diagram of the simulation environment of scene three.
Detailed Description
The present invention is further described below with reference to the accompanying drawings and specific examples so that those skilled in the art can better understand and practice it; the examples are not intended to limit the invention.
Referring to fig. 3, the invention discloses a heuristic indoor environment robot exploration method based on prior information, which comprises the following steps:
Step one: the robot collects data about the surrounding environment through its on-board sensor. At the start, the whole grid map is unknown; the robot is located somewhere in the environment and gathers data about its surroundings through the on-board sensor.
Step two: a part of the map is updated to a known area based on the surrounding-environment data, and the updated map is obtained.
The simultaneous localization and mapping (SLAM) module receives the sensor data and updates a portion of the map to a known area, while the map built by SLAM can in turn correct the robot's pose.
Step three: boundary extraction is performed on the updated map using two rapidly-exploring random trees to obtain RRT boundary points, as follows:
S31, in the initialization stage, a starting point is added to each tree structure as its root node; the starting points of the two trees are manually set in the free area of the map;
S32, points are randomly scattered in the map area to serve as candidate points;
S33, as shown in fig. 3, the boundary point extraction process is illustrated with three sampled points 1, 2 and 3. If the candidate point is in the known region, all existing nodes of the tree are traversed, the node closest to the candidate point is selected as the nearest point, and the line from the nearest point to the candidate point is taken as the growth direction. Candidate point 2 in fig. 3 lies in a known region; the dashed arrow in the figure shows the growth direction. If the distance between the nearest point and the candidate point exceeds a preset step length, the nearest point grows one step length along the growth direction and the point reached is taken as the growth point; otherwise the candidate point itself is taken as the growth point.
If the candidate point is in the unknown area, the tree node nearest to the candidate point is found first, the line from the nearest point to the candidate point is taken as the growth direction, and the nearest point grows forward along this direction; the place where the boundary is reached is taken as a boundary point.
Concretely, the grid is traversed from the nearest point along the growth direction, and the first cell whose state is unknown is taken as the boundary point. As shown by candidate point 1 in fig. 3, candidate point 1 lies in an unknown region; the dashed arrow is the growth direction, and the place along it where the boundary is reached is taken as the boundary point.
S34, collision detection is performed on the map along the line between the growth point and the candidate point, specifically:
all grid cells on the line between the growth point and the candidate point are traversed and their grid states are checked;
if any grid cell is occupied (i.e. an obstacle), the collision detection fails and the method returns to S32 to sample again;
if the line between the growth point and the candidate point touches no obstacle, the candidate point, the growth point and the connecting line are added to the tree structure. As shown in fig. 3, when the line connecting a candidate point and its nearest point crosses an obstacle, the collision detection fails and the point must be sampled again.
The algorithm of the invention uses two rapidly-exploring random trees, a global tree and a local tree. The global tree extracts boundary points through the steps above; the local tree extracts boundary points by the same principle and growth scheme, the difference being that after the local tree detects a boundary point it is cleared and regrown from the robot's current position.
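For clarity, a minimal sketch of the growth step just described is given below in Python. It assumes an occupancy grid stored as a list of rows with the values -1 (unknown), 0 (free) and 100 (occupied); the names grow_once, nearest_node and segment_hits_obstacle are illustrative rather than taken from the patent, and the collision check is a coarse fixed-sample approximation instead of an exact grid traversal.

```python
import math
import random

UNKNOWN, FREE, OCCUPIED = -1, 0, 100   # ROS-style occupancy values (assumption)

def nearest_node(tree, p):
    """Return the tree node closest to point p."""
    return min(tree, key=lambda q: math.dist(q, p))

def segment_hits_obstacle(grid, a, b, samples=50):
    """Coarse collision check along the segment a-b."""
    for i in range(samples + 1):
        t = i / samples
        x, y = a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])
        if grid[int(y)][int(x)] == OCCUPIED:
            return True
    return False

def grow_once(tree, grid, step):
    """One growth iteration: sample a candidate point, grow toward it,
    and report a boundary (frontier) point if one is reached."""
    h, w = len(grid), len(grid[0])
    cand = (random.uniform(0, w - 1), random.uniform(0, h - 1))
    near = nearest_node(tree, cand)
    d = math.dist(near, cand)
    if d == 0:
        return ('rejected', None)
    ux, uy = (cand[0] - near[0]) / d, (cand[1] - near[1]) / d

    if grid[int(cand[1])][int(cand[0])] == UNKNOWN:
        # Candidate lies in unknown space: walk from the nearest node toward it
        # and report the first unknown cell crossed as a boundary point.
        x, y = near
        for _ in range(int(d) + 1):
            x, y = x + ux, y + uy
            if grid[int(y)][int(x)] == UNKNOWN:
                return ('frontier', (x, y))
        return ('frontier', cand)

    # Candidate lies in known space: grow at most one step length toward it.
    new = cand if d <= step else (near[0] + step * ux, near[1] + step * uy)
    if segment_hits_obstacle(grid, near, new):
        return ('rejected', None)          # collision: resample on the next call
    tree.append(new)                       # extend the tree with the growth point
    return ('grown', new)
```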
In step three, the RRT boundary points can additionally be filtered and pruned. Specifically, the detected boundary points are clustered with the mean-shift algorithm to obtain centroid points, which filters out a portion of the boundary points and reduces computation. At the same time, the grid state of each boundary point and its value in the costmap are checked at every moment (the costmap assigns each cell a value from 0 to 255: white, 255, represents free space; black, 0, represents an obstacle; values in between are grey and represent unknown). If the grid state is free (meaning the cell has already been observed) and the costmap value exceeds a certain threshold, the area around that cell has already been explored, and the point is rejected as invalid.
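This filtering step might be sketched as follows, assuming the clustered centroids are computed with scikit-learn's MeanShift and that the occupancy grid and costmap are NumPy arrays aligned cell-for-cell; the bandwidth and the cost threshold below are illustrative values, not taken from the patent.

```python
import numpy as np
from sklearn.cluster import MeanShift

def filter_boundary_points(points, grid, costmap, bandwidth=5.0, cost_thresh=80):
    """Cluster raw boundary points and drop those lying in explored space.

    points  : (N, 2) array of boundary points in grid (cell) coordinates
    grid    : occupancy grid as a NumPy array, -1 unknown / 0 free / 100 occupied
    costmap : array of 0-255 cost values aligned with the grid
    """
    centers = MeanShift(bandwidth=bandwidth).fit(np.asarray(points)).cluster_centers_
    kept = []
    for cx, cy in centers:
        ix, iy = int(round(cx)), int(round(cy))
        already_explored = grid[iy, ix] == 0 and costmap[iy, ix] > cost_thresh
        if not already_explored:
            kept.append((cx, cy))        # centroid survives as a candidate target
    return kept
```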
Step four: a heuristic object is identified and its position estimated; a prior area is constructed with the position of the heuristic object as reference, and boundary points are extracted within the prior area to obtain room boundary points. The heuristic object is a door.
In step four, identifying the heuristic object and estimating its position includes: constructing a lightweight network, completing identification of the heuristic object with a deep-learning method, and obtaining the coordinate information of the heuristic object. The lightweight network comprises convolutional layers, inverted residual blocks, pooling layers and an SPP layer. In the invention, the improved lightweight network can quickly complete object identification and publish the position coordinates of the door.
1. The invention uses a deep-learning-based target detection method to identify the heuristic object, namely a lightweight network improved on the basis of YOLOv4-tiny. The lightweight network mainly consists of convolutional layers, inverted residual blocks, pooling layers and an SPP (Spatial Pyramid Pooling) layer, 42 layers in total, with the output simplified to two layers. For an input size of 416 × 416 × 3 the corresponding output layers are 13 × 13 × 255 and 26 × 26 × 255, respectively. The inverted residual blocks in the backbone effectively increase the dimensionality of feature extraction, while the SPP layer located in the deep part of the network spatially fuses local and global features. Compared with YOLOv4-tiny, the detection accuracy of the lightweight network is improved by 19.2%, close to that of YOLOv4, while its speed is almost 4 times that of YOLOv4. Overall, this lightweight network is efficient in both speed and accuracy and is suitable for heuristic object recognition.
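The exact 42-layer topology is not given in this text, so the snippet below is only a structural sketch of the two building blocks named above, an inverted residual block and an SPP layer, written in PyTorch; the channel counts, expansion ratio and pooling kernel sizes are assumptions and not the patent's network.

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    """MobileNetV2-style inverted residual block: expand -> depthwise -> project."""
    def __init__(self, c_in, c_out, expand=4, stride=1):
        super().__init__()
        c_mid = c_in * expand
        self.use_skip = stride == 1 and c_in == c_out
        self.block = nn.Sequential(
            nn.Conv2d(c_in, c_mid, 1, bias=False),
            nn.BatchNorm2d(c_mid), nn.ReLU6(inplace=True),
            nn.Conv2d(c_mid, c_mid, 3, stride, 1, groups=c_mid, bias=False),
            nn.BatchNorm2d(c_mid), nn.ReLU6(inplace=True),
            nn.Conv2d(c_mid, c_out, 1, bias=False),
            nn.BatchNorm2d(c_out),
        )

    def forward(self, x):
        y = self.block(x)
        return x + y if self.use_skip else y

class SPP(nn.Module):
    """Spatial pyramid pooling: concatenate multi-scale max-pooled feature maps."""
    def __init__(self, kernels=(5, 9, 13)):
        super().__init__()
        self.pools = nn.ModuleList(nn.MaxPool2d(k, stride=1, padding=k // 2)
                                   for k in kernels)

    def forward(self, x):
        return torch.cat([x] + [p(x) for p in self.pools], dim=1)
```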
2. When the heuristic object is detected by the above network, its two-dimensional image coordinates (its position in the depth-camera image) are obtained. Next, a mapping from two-dimensional points to three-dimensional points is performed. Let the coordinates of the center point P of the detected heuristic object on the two-dimensional image plane be (x', y') and its coordinates in the three-dimensional coordinate system (i.e. its position in the world frame) be (x, y, z); a mapping relationship exists between these two sets of coordinates.
To facilitate the actual transformation calculation, homogeneous coordinates are used to represent the camera parameters, so the transformation from two-dimensional points to three-dimensional points can be expressed as follows.
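The equation referred to here appears as an image in the published document and is not reproduced in this text; under the standard pinhole-camera model implied by the surrounding description, the homogeneous relation between the image point (x', y') and the camera-frame point (x, y, z) would likely take the following form (a hedged reconstruction, not the patent's own figure):

```latex
% Standard pinhole projection with intrinsics (f_x, f_y, c_x, c_y); assumed, not quoted.
z \begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix}
  =
  \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}
  \begin{bmatrix} x \\ y \\ z \end{bmatrix},
\qquad
x = \frac{(x' - c_x)\,z}{f_x}, \quad
y = \frac{(y' - c_y)\,z}{f_y}.
```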
where the camera intrinsics (f_x, f_y, c_x, c_y) can be obtained by subscribing to the topic published by the depth camera under ROS; the z coordinate of the heuristic object has no influence on the construction of the prior area in the following steps, so it is not described further here.
In step four, a prior area is constructed with the position of the heuristic object as reference, as follows. As shown in fig. 4, when the robot recognizes the heuristic object, if the robot is below the heuristic object the estimated room area lies above the heuristic object's position, and if the robot is above the heuristic object the estimated room area lies below it. The length of the estimated room area is obtained by extending a distance a to each side of the heuristic object's position, and its width is 2b, extending backwards from that position; the parameters a and b are set empirically.
The invention constructs the prior area with the position of a heuristic object as reference. This area matches human perception habits: the area behind a door is a room. Its size is set from human experience and is chosen to be slightly larger than the actual room in most situations. A sketch of this construction is given below.
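The following is a minimal sketch of the construction, assuming the door and robot positions are given in map coordinates and that the door faces along the y axis (the only case the text describes); the function name prior_region and the return convention (two opposite corners) are illustrative.

```python
def prior_region(door_xy, robot_xy, a, b):
    """Axis-aligned prior (room) rectangle anchored at the door position.

    The room is assumed to lie on the opposite side of the door from the
    robot: it spans a distance a to each side of the door along x and a
    depth of 2b behind the door along y (parameters a, b set empirically).
    Returns the rectangle as (lower-left corner, upper-right corner).
    """
    dx, dy = door_xy
    behind = 1.0 if robot_xy[1] < dy else -1.0   # robot below the door -> room above it
    x_min, x_max = dx - a, dx + a
    y_far = dy + behind * 2.0 * b
    return (x_min, min(dy, y_far)), (x_max, max(dy, y_far))
```

For example, prior_region((5.0, 3.0), (5.2, 1.0), a=2.0, b=2.5) places the estimated room in the rectangle from (3.0, 3.0) to (7.0, 8.0), i.e. above the door, since the robot stands below it.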
In step four, boundary points are extracted from the prior area to obtain room boundary points, as follows (a code sketch follows the list):
1. Because the original image is a grey image (obstacles are black, the unknown area is grey, and the free area is white), binarization can be applied directly with an adaptive threshold; after processing, obstacles are white and all other areas are black;
2. the colors of the binarized image are inverted to obtain a color-inverted image, in which obstacles are black and all other areas are white;
3. edge detection is performed on the binarized image with the Canny operator; in the detection result the edges are white and all other areas are black;
4. a bitwise AND is performed between the binarized image and the color-inverted image to remove redundant white edges and obtain a final image, in which the boundary between the known area and the unknown area consists of straight line segments;
5. the center of gravity of each straight line segment is extracted; these are the room boundary points.
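One possible OpenCV implementation of steps 1-5 is sketched below. Read literally, step 4 would AND the binarized image with its own inversion, which yields an empty image; the sketch therefore assumes the intended operation is to AND the Canny edge map with the color-inverted image so that edges lying on obstacles are discarded. It also applies Canny to the original grey image so that free/unknown transitions are detected, whereas applying it to the binarized image, as the text literally states, would detect only obstacle contours. Threshold values and the contour-based centroid extraction are further assumptions.

```python
import cv2
import numpy as np

def room_boundary_points(prior_gray):
    """Extract room boundary points from the grey prior-area image.

    prior_gray: 8-bit single-channel image in which obstacles are dark,
    unknown space is grey and free space is bright (as described above).
    """
    # 1. adaptive binarisation: obstacles (dark pixels) -> white, rest -> black
    binary = cv2.adaptiveThreshold(prior_gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, 11, 2)
    # 2. colour inversion: obstacles -> black, rest -> white
    inverted = cv2.bitwise_not(binary)
    # 3. Canny edge detection (edges -> white)
    edges = cv2.Canny(prior_gray, 50, 150)
    # 4. keep only edge pixels that do not lie on obstacles
    final = cv2.bitwise_and(edges, inverted)
    # 5. centroid of each remaining segment is taken as a room boundary point
    contours, _ = cv2.findContours(final, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_NONE)   # OpenCV 4.x signature
    points = []
    for c in contours:
        cx, cy = c.reshape(-1, 2).mean(axis=0)
        points.append((float(cx), float(cy)))
    return points
```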
The room boundary points of step four are likewise filtered to remove invalid points. Specifically, the detected boundary points are clustered with the mean-shift algorithm to obtain centroid points, which filters out a portion of the boundary points and reduces computation. At the same time, the grid state of each boundary point and its value in the costmap are checked at every moment (the costmap assigns each cell a value from 0 to 255: white, 255, represents free space; black, 0, represents an obstacle; values in between are grey and represent unknown). If the grid state is free (meaning the cell has already been observed) and the costmap value exceeds a certain threshold, the area around that cell has already been explored, and the point is likewise rejected as invalid.
Step five: based on the RRT boundary points and the room boundary points, the robot explores the indoor environment, as follows:
S51, when room boundary points exist, the robot preferentially selects a room boundary point, so that after recognizing a heuristic object it preferentially enters and explores the prior area. As long as room boundary points exist, the prior area has not been fully explored; only after the room boundary points have been exhausted are RRT boundary points selected for exploration. The robot therefore, as intended, finishes exploring one area before switching to another.
When room boundary points exist, the room boundary point with the largest profit value is selected as the target point; when no room boundary point exists, the RRT boundary point with the largest profit value is selected as the target point.
The profit value of a boundary point is R_f = w1·I_f − w2·N_f,
where I_f is the information gain, i.e. the number of unknown grid cells within the information-gain radius r = 1 around the centroid point,
N_f is the path cost, i.e. the Euclidean distance between the robot's current position and the centroid point,
and w1 and w2 are user-defined constant weights. A sketch of this target-selection rule follows.
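A small sketch of the selection rule, where gain_fn is a caller-supplied function that counts the unknown grid cells within the information-gain radius around a point; the default weights w1 and w2 are illustrative constants, not values from the patent.

```python
import math

def profit(point, robot_xy, unknown_count, w1=1.0, w2=0.2):
    """R_f = w1 * I_f - w2 * N_f for one candidate (centroid) boundary point."""
    path_cost = math.dist(robot_xy, point)      # N_f: Euclidean distance to the point
    return w1 * unknown_count - w2 * path_cost  # I_f: unknown cells near the point

def select_target(room_points, rrt_points, robot_xy, gain_fn):
    """Prefer room boundary points; fall back to RRT boundary points."""
    candidates = room_points if room_points else rrt_points
    return max(candidates, key=lambda p: profit(p, robot_xy, gain_fn(p)))
```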
S52, the robot is guided to navigate to the target point. A global path planning algorithm quickly plans a path from the robot's current position to the target point in the known environment; combined with the DWA local path planner, the robot uses local environment information to avoid obstacles. Together they guide the robot to the target point while the map is updated.
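In a ROS setup the global planner and the DWA local planner are typically wrapped by the move_base node, so dispatching the selected target point as a navigation goal might look like the sketch below; this is an assumed integration, not a step prescribed by the patent, and it presumes move_base is already configured with a global planner and the DWA local planner.

```python
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def navigate_to(x, y):
    """Send the selected boundary point to move_base and wait for the result."""
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.w = 1.0   # heading is not critical for exploration

    client.send_goal(goal)
    client.wait_for_result()
    return client.get_state()

if __name__ == '__main__':
    rospy.init_node('exploration_navigator')
    navigate_to(2.5, 1.0)   # illustrative target point in the map frame
```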
Step six: when no boundary point can be detected inside the prior area, the prior area has been fully explored, and its model is destroyed so that the next heuristic area can be formed.
Step seven: steps one to six are repeated until the robot has explored the entire environment and a grid map is obtained.
The invention also discloses a prior-information-based heuristic exploration system for an indoor environment robot, comprising a data acquisition module, a positioning and mapping module, an RRT (rapidly-exploring random tree) boundary point extraction module, a room boundary point extraction module and an environment exploration module.
The data acquisition module is the module through which the robot collects data about the surrounding environment via its on-board sensor.
The positioning and mapping module updates a part of the map to a known area based on the surrounding-environment data to obtain an updated map.
The RRT boundary point extraction module performs boundary extraction on the updated map using two rapidly-exploring random trees to obtain RRT boundary points.
The room boundary point extraction module identifies a heuristic object, estimates its position, constructs a prior area with the heuristic object's position as reference, and extracts boundary points within the prior area to obtain room boundary points.
The environment exploration module performs indoor environment exploration based on the RRT boundary points and the room boundary points.
To fully demonstrate the effectiveness of the invention, comparison experiments were carried out between the invention and an autonomous exploration algorithm based on rapidly-exploring random trees (hereinafter the RRTs algorithm) in three simulation scenarios. Twenty runs were performed in each experimental environment: ten with the RRTs algorithm and ten with the improved algorithm. The comparison metrics are the time taken to explore the entire environment and the length of the path travelled.
Table 1 shows the experimental data of scene one, table 2 of scene two and table 3 of scene three. As shown in fig. 5, in scene one the method of the invention reduces the exploration time by 34.9% and the exploration path length by 24.5% compared with the RRTs algorithm. As shown in fig. 6, in scene two the method reduces the exploration time by 34.04% and the path length by 35.9%. As shown in fig. 7, in scene three the method reduces the exploration time by 12.8% and the path length by 16.9%.
TABLE 1
TABLE 2
TABLE 3
By introducing the heuristic prior-information exploration module, once the robot has recognized a heuristic object it preferentially explores the environment inside the prior area; it can therefore finish exploring one room before turning to other areas, which effectively reduces backtracking during exploration and improves exploration efficiency.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be understood that the above examples are given only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to enumerate all embodiments here. Obvious variations or modifications derived therefrom remain within the scope of the invention.
Claims (3)
1. A heuristic indoor environment robot exploration method based on prior information is characterized by comprising the following steps:
s1, acquiring data of surrounding environment information by the robot through a sensor carried by the robot;
s2, updating a part of map to be a known area based on the data of the surrounding environment information, and obtaining an updated map;
S3, performing boundary extraction on the updated map by using two rapidly-exploring random trees to obtain RRT boundary points, comprising the following steps:
S31, in the initialization stage, adding a starting point into the tree structure as a root node, wherein the starting points of the two trees are manually set in the free area of the map;
s32, randomly scattering points in the map area as candidate points;
s33, if the candidate point is in the known area, traversing all existing nodes on the tree structure, selecting the node closest to the candidate point as the nearest point, using the connecting line between the nearest point and the candidate node as the growth direction, if the distance between the nearest point and the candidate node exceeds the preset step length, growing a step length along the growth direction by the nearest point, using the reached point as the growth point, and if the distance does not exceed the step length, using the candidate point as the growth point;
if the candidate point is in the unknown area, firstly finding out the nearest tree node of the candidate point, using the connecting line from the nearest point to the candidate point as the growth direction, growing forwards from the nearest point along the growth direction, and using the place reaching the boundary as the boundary point;
s34, performing collision detection on the connecting line of the growing point and the candidate node on the map, and specifically comprising the following steps:
Traversing all grid points on the connecting line of the growing point and the candidate node, and judging the grid state of the grid points;
if the state of the grid point is occupied, the collision detection is not passed, and the step returns to S32 to perform point acquisition again;
if the connecting line of the growing point and the candidate node does not touch the obstacle, adding the connecting line of the candidate point, the growing point and the candidate node into the tree structure;
s4, identifying a heuristic object and carrying out position estimation on the heuristic object, constructing a prior area by taking the position of the heuristic object as a reference, and extracting boundary points in the prior area to obtain room boundary points; wherein the heuristic object is a door;
wherein identifying and estimating the position of heuristic objects comprises:
constructing a lightweight network, completing the identification of a heuristic object based on a deep learning method, and acquiring the coordinate information of the heuristic object; wherein the lightweight network comprises a convolutional layer, an inverted residual block, a pooling layer and an SPP layer;
the method for constructing the prior area by taking the position of the heuristic object as a reference comprises the following steps:
when the robot identifies the heuristic object, if the position of the robot is below the heuristic object, the estimated room area is above the position of the heuristic object, and if the position of the robot is above the heuristic object, the estimated room area is below the position of the heuristic object;
the length of the estimated room area is obtained by extending a distance a to each side of the heuristic object's position, and its width is 2b, extending backwards from that position; wherein the parameters a and b are set empirically;
the method for extracting the boundary points in the prior region to obtain the room boundary points comprises the following steps:
carrying out binarization processing on the image of the prior area to obtain a binarized image, wherein the barrier of the binarized image is white, and the rest areas are black;
turning the color of the binary image to obtain an image after the color is turned, wherein the barrier of the image after the color is turned is black, and the rest areas are white;
performing edge detection on the binary image by using a Canny operator, wherein the edge of the image is set to be white in a detection result, and the rest areas are black;
performing bitwise AND operation on the binarized image and the color-reversed image to remove redundant white edges to obtain a final image, wherein the boundary between a known region and an unknown region in the final image consists of a straight line;
extracting the gravity center of the straight line, namely the gravity center is a room boundary point;
s5, based on the RRT boundary point and the room boundary point, the robot carries out indoor environment exploration, which comprises the following steps:
S51, when a room boundary point exists, selecting the room boundary point with the largest profit value as the target point, and when no room boundary point exists, selecting the RRT boundary point with the largest profit value as the target point, wherein the profit value of a boundary point is R_f = w1·I_f − w2·N_f,
where I_f is the information gain, i.e. the number of unknown grid cells within the information-gain radius r = 1 around the centroid point,
N_f is the path cost, i.e. the Euclidean distance between the robot's current position and the centroid point,
and w1 and w2 are user-defined constant weights;
and S52, guiding the robot to navigate to the target point.
2. The indoor environment robot exploration method according to claim 1, wherein after S5 the method further comprises:
s6, when the boundary point can not be detected in the prior area, the area is explored completely, and then the model of the prior area is destroyed to form the next heuristic area;
and S7, circulating S1-S6 until the robot explores the whole environment to obtain a grid map.
3. An indoor environment robot exploration system based on a priori information heuristic method, comprising:
the data acquisition module is used for acquiring data of surrounding environment information by the robot through a sensor carried by the robot;
The positioning and mapping module updates a part of map into a known area based on data of surrounding environment information to obtain an updated map;
the RRT boundary point extraction module performs boundary extraction on the updated map by using two rapidly-exploring random trees to obtain RRT boundary points, and includes:
S31, in the initialization stage, adding a starting point into the tree structure as a root node, wherein the starting points of the two trees are manually set in the free area of the map;
s32, randomly scattering points in the map area as candidate points;
s33, if the candidate point is in the known area, traversing all existing nodes on the tree structure, selecting the node closest to the candidate point as the nearest point, using the connecting line between the nearest point and the candidate node as the growth direction, if the distance between the nearest point and the candidate node exceeds the preset step length, growing a step length along the growth direction by the nearest point, using the reached point as the growth point, and if the distance does not exceed the step length, using the candidate point as the growth point;
if the candidate point is in the unknown area, firstly finding out the nearest tree node of the candidate point, using the connecting line from the nearest point to the candidate point as the growth direction, growing forwards from the nearest point along the growth direction, and using the place reaching the boundary as the boundary point;
S34, performing collision detection on the connecting line of the growing point and the candidate node on the map, which specifically comprises the following steps:
traversing all grid points on the connecting line of the growing point and the candidate node, and judging the grid state of the grid points;
if the state of the grid point is occupied, the collision detection is not passed, and the step returns to S32 to perform point acquisition again;
if the connecting line of the growing point and the candidate node does not touch the obstacle, adding the connecting line of the candidate point, the growing point and the candidate node into the tree structure;
the room boundary point extraction module identifies a heuristic object and estimates its position, constructs a prior area with the position of the heuristic object as reference, and extracts boundary points within the prior area to obtain room boundary points; wherein identifying the heuristic object and estimating its position comprises: constructing a lightweight network, completing identification of the heuristic object based on a deep-learning method, and acquiring the coordinate information of the heuristic object, the lightweight network comprising a convolutional layer, an inverted residual block, a pooling layer and an SPP layer; constructing the prior area with the position of the heuristic object as reference comprises: when the robot recognizes the heuristic object, if the robot is below the heuristic object the estimated room area lies above the heuristic object's position, and if the robot is above the heuristic object the estimated room area lies below it; the length of the estimated room area is obtained by extending a distance a to each side of the heuristic object's position and its width is 2b, extending backwards from that position, the parameters a and b being set empirically; extracting boundary points within the prior area to obtain room boundary points comprises: binarizing the image of the prior area to obtain a binarized image in which obstacles are white and all other areas are black; inverting the colors of the binarized image to obtain a color-inverted image in which obstacles are black and all other areas are white; performing edge detection on the binarized image with the Canny operator, the edges being white and all other areas black in the detection result; performing a bitwise AND between the binarized image and the color-inverted image to remove redundant white edges and obtain a final image in which the boundary between the known area and the unknown area consists of straight line segments; and extracting the center of gravity of each straight line segment as a room boundary point;
the environment exploration module performs indoor environment exploration based on the RRT boundary points and the room boundary points: when room boundary points exist, the room boundary point with the largest profit value is selected as the target point, and when no room boundary point exists, the RRT boundary point with the largest profit value is selected as the target point; the robot is then guided to navigate to the target point, wherein the profit value of a boundary point is R_f = w1·I_f − w2·N_f, where I_f is the information gain, i.e. the number of unknown grid cells within the information-gain radius r = 1 around the centroid point, N_f is the path cost, i.e. the Euclidean distance between the robot's current position and the centroid point, and w1 and w2 are user-defined constant weights.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110475488.3A CN113110482B (en) | 2021-04-29 | 2021-04-29 | Indoor environment robot exploration method and system based on priori information heuristic method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110475488.3A CN113110482B (en) | 2021-04-29 | 2021-04-29 | Indoor environment robot exploration method and system based on priori information heuristic method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113110482A CN113110482A (en) | 2021-07-13 |
CN113110482B true CN113110482B (en) | 2022-07-19 |
Family
ID=76720444
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110475488.3A Active CN113110482B (en) | 2021-04-29 | 2021-04-29 | Indoor environment robot exploration method and system based on priori information heuristic method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113110482B (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113485373B (en) * | 2021-08-12 | 2022-12-06 | 苏州大学 | Robot real-time motion planning method based on Gaussian mixture model |
CN113485375B (en) * | 2021-08-13 | 2023-03-24 | 苏州大学 | Indoor environment robot exploration method based on heuristic bias sampling |
CN113837029A (en) * | 2021-09-06 | 2021-12-24 | 苏州大学 | Object identification method, system, terminal device and storage medium |
CN113805590A (en) * | 2021-09-23 | 2021-12-17 | 云南民族大学 | Indoor robot autonomous exploration method and system based on boundary driving |
CN114589708B (en) * | 2022-02-28 | 2023-11-07 | 华南师范大学 | Indoor autonomous exploration method and device based on environment information and robot |
CN115167433B (en) * | 2022-07-22 | 2024-03-19 | 华南理工大学 | Method and system for autonomously exploring environment information of robot fusing global vision |
CN115469662B (en) * | 2022-09-13 | 2023-07-07 | 苏州大学 | Environment exploration method, device and application |
CN116679712B (en) * | 2023-06-19 | 2024-07-12 | 苏州大学 | Efficient exploration decision-making method for indoor environment robot based on generalized voronoi diagram |
CN117387649B (en) * | 2023-10-26 | 2024-06-14 | 苏州大学 | Self-adaptive navigation method and system for uncertain environment robot with probability self-updating |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103905319A (en) * | 2014-03-24 | 2014-07-02 | 中国电子科技集团公司第三十研究所 | Multiple-constraint multicast routing algorithm based on iteration coding |
CN106406320A (en) * | 2016-11-29 | 2017-02-15 | 重庆重智机器人研究院有限公司 | Robot path planning method and robot planning route |
CN106774314A (en) * | 2016-12-11 | 2017-05-31 | 北京联合大学 | A kind of home-services robot paths planning method based on run trace |
CN110221614A (en) * | 2019-06-14 | 2019-09-10 | 福州大学 | A kind of multirobot map heuristic approach based on rapid discovery random tree |
CN110531760A (en) * | 2019-08-16 | 2019-12-03 | 广东工业大学 | It is explored based on the boundary that curve matching and target vertex neighborhood are planned and independently builds drawing method |
CN110908377A (en) * | 2019-11-26 | 2020-03-24 | 南京大学 | Robot navigation space reduction method |
CN112327852A (en) * | 2020-11-09 | 2021-02-05 | 苏州大学 | Mobile robot autonomous exploration method integrating path information richness |
-
2021
- 2021-04-29 CN CN202110475488.3A patent/CN113110482B/en active Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103905319A (en) * | 2014-03-24 | 2014-07-02 | 中国电子科技集团公司第三十研究所 | Multiple-constraint multicast routing algorithm based on iteration coding |
CN106406320A (en) * | 2016-11-29 | 2017-02-15 | 重庆重智机器人研究院有限公司 | Robot path planning method and robot planning route |
CN106774314A (en) * | 2016-12-11 | 2017-05-31 | 北京联合大学 | A kind of home-services robot paths planning method based on run trace |
CN110221614A (en) * | 2019-06-14 | 2019-09-10 | 福州大学 | A kind of multirobot map heuristic approach based on rapid discovery random tree |
CN110531760A (en) * | 2019-08-16 | 2019-12-03 | 广东工业大学 | It is explored based on the boundary that curve matching and target vertex neighborhood are planned and independently builds drawing method |
CN110908377A (en) * | 2019-11-26 | 2020-03-24 | 南京大学 | Robot navigation space reduction method |
CN112327852A (en) * | 2020-11-09 | 2021-02-05 | 苏州大学 | Mobile robot autonomous exploration method integrating path information richness |
Non-Patent Citations (4)
Title |
---|
A Heuristic Rapidly-Exploring Random Trees Method for Manipulator Motion Planning;CHENGREN YUAN 等;《IEEE Access》;20200103;第8卷;全文 * |
A Reusable Generalized Voronoi Diagram Based Feature Tree for Fast Robot Motion Planning in Trapped Environments;Wenzheng Chi 等;《IEEE Sensors Journal》;20210127;全文 * |
An Improved RRT Robot Autonomous Exploration and SLAM Construction Method;Zeyu Tian 等;《IEEE Xplore》;20201231;全文 * |
Risk-Informed-RRT*: A Sampling-based Human-friendly Motion Planning Algorithm for Mobile Service Robots in Indoor Environments;Wenzheng Chi 等;《Proceeding of the IEEE International Conference on Information and Automation》;20180831;全文 * |
Also Published As
Publication number | Publication date |
---|---|
CN113110482A (en) | 2021-07-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113110482B (en) | Indoor environment robot exploration method and system based on priori information heuristic method | |
CN113485375B (en) | Indoor environment robot exploration method based on heuristic bias sampling | |
CN111190981B (en) | Method and device for constructing three-dimensional semantic map, electronic equipment and storage medium | |
CN111666921B (en) | Vehicle control method, apparatus, computer device, and computer-readable storage medium | |
Krajník et al. | Image features for visual teach-and-repeat navigation in changing environments | |
CN104732587B (en) | A kind of indoor 3D semanteme map constructing method based on depth transducer | |
Steder et al. | Robust place recognition for 3D range data based on point features | |
Zivkovic et al. | Hierarchical map building using visual landmarks and geometric constraints | |
Jebari et al. | Multi-sensor semantic mapping and exploration of indoor environments | |
CN103712617B (en) | A kind of creation method of the multilamellar semanteme map of view-based access control model content | |
CN109163722B (en) | Humanoid robot path planning method and device | |
Tian et al. | ObjectFusion: An object detection and segmentation framework with RGB-D SLAM and convolutional neural networks | |
CN112802204B (en) | Target semantic navigation method and system for three-dimensional space scene prior in unknown environment | |
WO2017079918A1 (en) | Indoor scene scanning reconstruction method and apparatus | |
JP2020038660A (en) | Learning method and learning device for detecting lane by using cnn, and test method and test device using the same | |
CN113505646B (en) | Target searching method based on semantic map | |
CN114613013A (en) | End-to-end human behavior recognition method and model based on skeleton nodes | |
CN110146080B (en) | SLAM loop detection method and device based on mobile robot | |
Kim et al. | Urban scene understanding from aerial and ground LIDAR data | |
CN114782626A (en) | Transformer substation scene mapping and positioning optimization method based on laser and vision fusion | |
CN111679661A (en) | Semantic map construction method based on depth camera and sweeping robot | |
CN113936210A (en) | Anti-collision method for tower crane | |
Li et al. | Improving autonomous exploration using reduced approximated generalized voronoi graphs | |
CN115469662A (en) | Environment exploration method, device and application | |
CN113284228B (en) | Indoor scene room layout dividing method based on point cloud |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |