CN112327852B - Mobile robot autonomous exploration method integrating path information richness - Google Patents


Info

Publication number
CN112327852B
CN112327852B (application CN202011240856.8A)
Authority
CN
China
Prior art keywords: robot, information, boundary point, point, boundary
Prior art date
Legal status: Active
Application number: CN202011240856.8A
Other languages: Chinese (zh)
Other versions: CN112327852A (en)
Inventor
迟文政
刘杰
袁媛
丁智宇
陈国栋
孙立宁
Current Assignee: Suzhou University
Original Assignee
Suzhou University
Priority date
Filing date
Publication date
Application filed by Suzhou University
Priority to CN202011240856.8A
Publication of CN112327852A
Application granted
Publication of CN112327852B
Status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • G05D1/0257 Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Optics & Photonics (AREA)
  • Electromagnetism (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a mobile robot autonomous exploration method integrating path information richness, comprising the following steps: the robot acquires environmental information within its sensing range and changes that environment from an unknown state to a known state; a rapidly-exploring random tree is generated in the free area of the known region, its boundary points are clustered with a clustering algorithm to obtain centroid points, and at the same time the grid state of the boundary points is detected and invalid points are removed; a revenue function is constructed by combining information gain, path cost, path information richness and boundary point information richness, and its revenue value is calculated at each centroid point; the centroid point with the maximum revenue value is selected as the target point, and the robot is guided to move toward it; these steps are repeated until the complete environment has been explored and the grid map obtained. By adding the path information richness and the boundary point information richness to the revenue function, the invention enriches the factors considered when selecting boundary points, reduces the perception uncertainty of the robot, and improves exploration efficiency.

Description

Mobile robot autonomous exploration method integrating path information richness
Technical Field
The invention relates to the technical field of autonomous robot exploration, and in particular to a mobile robot autonomous exploration method integrating path information richness.
Background
Breakthroughs in artificial intelligence have created huge opportunities for research on mobile service robots; guide robots, floor-sweeping robots, shopping-guide robots, goods-handling robots and the like have been successfully deployed in airports, supermarkets, museums, homes and other environments. Autonomous exploration of a mobile robot refers to the process in which the robot, moving through a new environment without any prior knowledge, builds a complete map of that environment. Current autonomous exploration methods fall roughly into two types: exploration based on a grid map and exploration based on a feature map. The dominant grid-map approach is boundary-based exploration, in which boundaries divide the space into known and unknown areas; the robot is continuously guided toward the boundary and explores the unknown area to acquire new environmental information until the complete map has been explored. By constructing an information gain model for the boundary points, the revenue values of different boundary points can be distinguished, and selecting the boundary point with the maximum revenue value as the target improves the efficiency of map exploration. However, when the robot selects the boundary point with the largest revenue value and reaches that target point, the surrounding environment structure may exceed the sensing range of the robot's sensor, causing the robot to wander near the point, a phenomenon of perception uncertainty.
Existing boundary point information gain models consider only the boundary point information gain and the path cost, ignoring the environmental information gathered during exploration. As a result, the robot easily falls into a local maximum of the revenue function, its localization uncertainty increases, exploration efficiency is low, and the wandering phenomenon is severe.
Disclosure of Invention
The invention aims to provide a mobile robot autonomous exploration method that fuses path information richness: it incorporates the environmental information gathered during exploration into the boundary point information gain, changes the robot's exploration behaviour under perception uncertainty, and achieves higher exploration efficiency.
In order to solve the technical problem, the invention provides a mobile robot autonomous exploration method integrating the richness of path information, which comprises the following steps:
step 1: the robot acquires environmental information in a sensing range and changes the environment in the range from an unknown state to a known state;
step 2: generate a rapidly-exploring random tree in the free area of the known region, cluster its boundary points with a clustering algorithm to obtain centroid points, and detect the grid state of the boundary points and remove invalid points;
step 3: combine the information gain I_f, the path cost N_f, the path information richness Q_f and the boundary point information richness O_f to construct a revenue function R_f, and calculate the revenue value of R_f at each centroid point;
step 4: select the centroid point with the maximum revenue value as the target point, and guide the robot to move toward it;
step 5: repeat steps 1 to 4 until the robot has explored the whole environment and obtained the grid map.
Further, in step 1 the robot acquires the environmental information within its sensing range through its own sensor, and the sensor data are received by a SLAM module.
Further, the rapidly-exploring random tree in step 2 comprises a global tree and a local tree.
Further, the clustering algorithm adopted in the step 2 is a mean-shift algorithm.
Further, the information gain I_f in step 3 is the number of unknown grids within a circle of radius r_1 centred on the boundary point, where unknown grids that also fall within the circle of radius r_1 centred on the robot's current position are removed as repeats; the path cost N_f in step 3 is the Euclidean distance between the robot's current position and the boundary point position.
Further, the path information richness Q_f in step 3 is the amount of effective environmental information in the region S sensed while the robot moves to the boundary point, calculated as follows:
the robot position P_r(x_1, y_1), the boundary point position P_f(x_2, y_2) and the robot sensing range r_2 are converted to the grid coordinate system, giving the robot position P_r(i_1, j_1), the boundary point position P_f(i_2, j_2) and the sensing range d;
the path of the robot to the boundary point is the line segment P_rP_f; adding the sensing range d at the boundary point, i.e. extending P_rP_f forward by the distance d, gives the line segment P_rP_f';
translating P_rP_f' by d to each side yields a parallelogram region, which serves as the region S that can be sensed while the robot moves to the boundary point;
the amount of effective environmental information in the region S is counted by traversing the points of S along P_rP_f' in the horizontal-axis direction and in the vertical-axis direction respectively;
the amount of environmental information counted along the two directions is output as the return value, giving the path information richness Q_f of the robot reaching the boundary point.
Further, the boundary point information richness O_f in step 3 is the amount of environmental information within a circle of information gain radius r_3 centred on the boundary point, calculated as follows:
the boundary point position P_f(x, y) and the information gain radius r_3 are converted to the grid coordinate system, giving the boundary point position P_f(i, j) and the information gain radius R;
in the grid coordinate system, the amount of environmental information within the circle of radius R centred at P_f(i, j) is counted; the circle is approximated by a square centred on the boundary point, the square boundaries i_min, i_max, j_min, j_max are obtained, the whole square region is traversed, and the number of occupied grids counted is taken as the amount of environmental information;
the counted amount of environmental information is output as the return value, giving the boundary point information richness O_f of the boundary point.
Further, the revenue function R_f in step 3 is:
R_f = w_1*h_1*I_f - w_2*N_f + w_3*h_2*Q_f + w_4*O_f
where w_1, w_2, w_3, w_4 are weight parameters, and h_1, h_2 are distance constraint functions used to keep the robot exploring within a limited range and to reduce backtracking.
Further, the distance constraint functions h_1 and h_2 are piecewise functions of the distance d_1, with their closed forms given as equation images in the original publication, where δ and ζ are constant coefficients, c_1 and c_2 are empirical parameters, and d_1 is the Euclidean distance between the robot position and the boundary point position.
Further, the specific method of guiding the robot to move to the target point in step 4 is as follows:
an A* global path planning algorithm quickly plans a path from the robot's current position to the target point in the known environment, a DWA local path planning algorithm lets the robot use local environmental information to complete obstacle avoidance, and the two combined guide the robot's exploratory movement to the target point.
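Since the method pairs an A* global planner with DWA, a minimal grid A* sketch may help make the planning step concrete. The dict-based 4-connected grid, the function name and the cost conventions below are illustrative assumptions, not the patent's implementation:

```python
import heapq
import itertools

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (dict: cell -> 0 free / 1 occupied).
    Returns the path as a list of cells, or None if the goal is unreachable."""
    def h(c):  # admissible Manhattan heuristic for unit-cost 4-connectivity
        return abs(c[0] - goal[0]) + abs(c[1] - goal[1])

    tie = itertools.count()                      # tie-breaker so the heap never compares cells
    open_set = [(h(start), next(tie), 0, start, None)]
    came, g_best = {}, {start: 0}
    while open_set:
        _, _, g, cur, parent = heapq.heappop(open_set)
        if cur in came:
            continue                             # already expanded with a better cost
        came[cur] = parent
        if cur == goal:                          # reconstruct by walking parents back
            path = []
            while cur is not None:
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if grid.get(nxt, 1) == 1:            # occupied, or outside the map
                continue
            ng = g + 1
            if ng < g_best.get(nxt, float("inf")):
                g_best[nxt] = ng
                heapq.heappush(open_set, (ng + h(nxt), next(tie), ng, nxt, cur))
    return None
```

The returned cell path would then be tracked by the DWA local planner, which handles obstacle avoidance from local sensor data.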
The beneficial effects of the invention are as follows: the mobile robot autonomous exploration method integrating path information richness introduces the environmental information of the exploration process into boundary point selection; by adding the path information richness and the boundary point information richness to the revenue function, it enriches the factors considered when selecting boundary points, reduces the perception uncertainty of the robot, and improves exploration efficiency.
The foregoing description is only an overview of the technical solutions of the present invention, and in order to make the technical solutions of the present invention more clearly understood and to implement them in accordance with the contents of the description, the following detailed description is given with reference to the preferred embodiments of the present invention and the accompanying drawings.
Drawings
FIG. 1 is a flow chart of the present invention.
Fig. 2 is a schematic diagram of the area S sensed by the robot moving to the boundary point in the present invention.
Fig. 3 is a schematic diagram of the robot's perception after it detects an obstacle (the area behind the obstacle cannot be sensed).
Fig. 4 is a schematic diagram of the robot traversing the area S in the direction of the horizontal axis in the present invention.
Fig. 5 is a schematic diagram of the robot traversing the area S in the longitudinal direction in the present invention.
FIG. 6 is a diagram of a C-type simulation environment in an embodiment of the present invention.
FIG. 7 is a diagram of an L-type simulation environment according to an embodiment of the present invention.
Detailed Description
The present invention is further described below in conjunction with the following figures and specific examples so that those skilled in the art may better understand the present invention and practice it, but the examples are not intended to limit the present invention.
In the description of the invention, it should be understood that the term "comprises/comprising" is intended to cover a non-exclusive inclusion, such that a process, method, system, article or apparatus that comprises a list of steps or elements is not limited to those steps or elements, but may include other steps or elements not expressly listed or inherent to such a process, method, article or apparatus.
Referring to the flowchart of fig. 1, an embodiment of an autonomous exploration method for a mobile robot with fusion of richness of path information according to the present invention includes the following steps:
Step 1: the robot acquires environmental information within its sensing range and changes the environment within that range from an unknown state to a known state. Specifically, the environmental information within the sensing range is acquired through the robot's sensor, the sensor data are received by a SLAM module, and the grid map within the sensing range changes from an unknown state to a known state. The grid map divides grids into three states: unknown, free and occupied, where free and occupied are known states. The grid-map representation is information-rich and high-resolution, which facilitates the construction of the boundary point information gain model, so the invention develops its research on autonomous robot exploration on the basis of the grid map.
In this embodiment, a Turtlebot is placed in a simulation environment without any prior knowledge, and the initial map is a completely unknown grid map. Data about the surrounding environment are acquired through a mounted single-line laser radar, the map is built by the SLAM module of the Gmapping algorithm receiving the sensor data, and the map changes from an unknown state to a known state.
Step 2: generate a rapidly-exploring random tree (also called a boundary point detector) in the free area of the known region, cluster its boundary points with a clustering algorithm to obtain centroid points, and detect the grid state of the boundary points and remove invalid points.
Step 2-1: generate two rapidly-exploring random trees, a global tree and a local tree, in the free area of the known map. In this embodiment both trees start growing from a manually set starting point in the known region, which serves as the root node, with a randomly generated point giving the growth direction of the tree. If the random point lies in the known region, a collision-free branch grows from the nearest node of the tree; otherwise the tree does not grow. If the random point lies in an unknown area, a boundary point is generated on the border along the growth direction, and the global tree keeps extracting boundary points in this way. The boundary point extraction principle and the growing mode of the local tree are the same as for the global tree, except that after the local tree detects a boundary point, it is cleared and regrows from the robot's current position (see Steven M. LaValle, "Rapidly-exploring random trees: A new tool for path planning", 1998).
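A single growth-and-extraction iteration of such a boundary point detector can be sketched as follows; the grid callback, the cell-state constants, the sampling bounds and the step length are all assumptions for illustration, not the patent's code:

```python
import math
import random

FREE, UNKNOWN = 0, -1   # assumed occupancy-grid cell states

def grow_step(tree, grid, bounds=(0.0, 20.0), step=1.0):
    """One growth iteration of the boundary point detector.

    `tree` is a list of (x, y) nodes; `grid(x, y)` returns the cell state.
    Returns a detected boundary (frontier) point, or None."""
    x_rand = (random.uniform(*bounds), random.uniform(*bounds))
    x_near = min(tree, key=lambda n: math.dist(n, x_rand))          # nearest node
    d = math.dist(x_near, x_rand)
    if d == 0.0:
        return None
    t = min(1.0, step / d)                                          # clamp to step length
    x_new = (x_near[0] + t * (x_rand[0] - x_near[0]),
             x_near[1] + t * (x_rand[1] - x_near[1]))
    state = grid(*x_new)
    if state == UNKNOWN:
        return x_new            # growth direction leaves the known region: boundary point
    if state == FREE:
        tree.append(x_new)      # random point in the known region: grow a branch
    return None
```

Run in a loop this yields the global tree's stream of boundary points; the local tree would additionally reset `tree` to the robot's current pose after each detection.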
Step 2-2: cluster the boundary points of the rapidly-exploring random tree with the mean-shift clustering algorithm to obtain the centroid points. Meanwhile, the grid state of each boundary point is continuously detected during this process, and a boundary point whose grid value is below the preset threshold (60 in this embodiment) is removed as an invalid point.
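A compact flat-kernel mean-shift sketch, assuming 2-D boundary points and a hand-picked bandwidth (the embodiment's kernel and bandwidth are not specified):

```python
import numpy as np

def mean_shift(points, bandwidth=1.0, iters=50, tol=1e-4):
    """Flat-kernel mean-shift: shift every point toward the mean of its
    bandwidth neighbourhood until convergence, then merge nearby modes."""
    shifted = np.asarray(points, dtype=float).copy()
    for _ in range(iters):
        moved = 0.0
        for i in range(len(shifted)):
            dists = np.linalg.norm(shifted - shifted[i], axis=1)
            mean = shifted[dists < bandwidth].mean(axis=0)     # local mean
            moved = max(moved, np.linalg.norm(mean - shifted[i]))
            shifted[i] = mean
        if moved < tol:
            break
    centroids = []                                             # merge converged modes
    for p in shifted:
        if all(np.linalg.norm(p - c) > bandwidth / 2 for c in centroids):
            centroids.append(p)
    return centroids
```

Each returned centroid stands in for one cluster of boundary points and is what the revenue function is evaluated on.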
Step 3: combine the information gain I_f, the path cost N_f, the path information richness Q_f and the boundary point information richness O_f to construct the revenue function R_f, and calculate the revenue value of R_f at each centroid point. Considering the environmental information of the exploration process in boundary point selection, through the path information richness Q_f and the boundary point information richness O_f, reduces perception uncertainty, improves the robot's localization accuracy during exploration, and improves exploration efficiency.
Step 3-1: calculate the information gain I_f, i.e. the number of unknown grids within a circle of radius r_1 centred on the boundary point; the more unknown grids, the more information is harvested on reaching that point. Unknown grids within the circle of radius r_1 centred on the robot's current position that are repeated in the circle of radius r_1 centred on the boundary point are removed. As a result, the closer the robot is to a boundary point, the smaller that point's information gain I_f, which effectively prevents the robot from staying in place. In this embodiment, r_1 = 1.
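The I_f computation can be sketched as follows, assuming a grid callback, a map resolution, and ROS-style cell values; all names and parameters are illustrative:

```python
UNKNOWN = -1   # assumed occupancy-grid value for unexplored cells

def information_gain(grid, frontier, robot, r1=1.0, res=0.05):
    """I_f: number of unknown cells within radius r1 of the frontier point,
    excluding cells that also lie within r1 of the robot (counted as repeats)."""
    fx, fy = frontier
    rx, ry = robot
    n = int(r1 / res)
    ci, cj = int(round(fx / res)), int(round(fy / res))
    gain = 0
    for i in range(ci - n, ci + n + 1):
        for j in range(cj - n, cj + n + 1):
            x, y = i * res, j * res
            if (x - fx) ** 2 + (y - fy) ** 2 > r1 ** 2:
                continue                      # outside the frontier's r1 circle
            if (x - rx) ** 2 + (y - ry) ** 2 <= r1 ** 2:
                continue                      # repeated in the robot's r1 circle
            if grid(i, j) == UNKNOWN:
                gain += 1
    return gain
```

The overlap test is what makes I_f shrink as the robot approaches a frontier, discouraging it from staying in place.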
Step 3-2: calculate the path cost N_f, i.e. the Euclidean distance between the robot's current position and the boundary point; the larger this distance, the higher the cost the robot spends reaching the boundary point.
Step 3-3: calculate the path information richness Q_f, i.e. the amount of effective environmental information in the region S sensed while the robot moves to the boundary point, as follows:
step 3-3-1: the robot position P_r(x_1, y_1), the boundary point position P_f(x_2, y_2) and the robot sensing range r_2 are converted to the grid coordinate system, giving the robot position P_r(i_1, j_1), the boundary point position P_f(i_2, j_2) and the sensing range d;
step 3-3-2: the path of the robot to the boundary point is the line segment P_rP_f; adding the sensing range d at the boundary point, i.e. extending P_rP_f forward by the distance d, gives the line segment P_rP_f'; translating P_rP_f' by d to each side yields a parallelogram region, which serves as the region S that can be sensed while the robot moves to the boundary point;
step 3-3-3: Fig. 2 shows the region S sensed while the robot moves to the boundary point. Because the sensing area of the robot's sensor is a circular sector, the robot's movement to the boundary point can be approximated as the region swept by continuously translating this sector from the robot position to the boundary point position, and that region is further approximated as a parallelogram, so the region S is an estimate. S is computed as follows: first the path of the robot to the boundary point is simplified to the line segment P_rP_f; adding the sensing range d at the boundary point, i.e. extending P_rP_f forward by d, gives the segment P_rP_f'; taking a 180° sensor coverage as an example, translating P_rP_f' to each side by d yields the parallelogram region S sensed while the robot moves to the boundary point;
step 3-3-4: count the amount of effective environmental information in the region S, i.e. the number of occupied grids. Here "effective" means that when the sensor detects an obstacle in S it feeds back only the data of the obstacle in front: it measures the distance to the obstacle in the two-dimensional plane and cannot measure the obstacle's thickness (as shown in Fig. 3). While moving to the boundary point the robot therefore cannot sense the environmental information behind an obstacle, so only the unoccluded information in front of obstacles is counted. Environmental information and obstacles are both occupied grids in the grid map: the more occupied grids, the more obstacles and the richer the environmental information. Taking as an example a segment P_rP_f' that is inclined, i.e. parallel to neither the horizontal nor the vertical axis, the points of S are traversed along P_rP_f' in the horizontal-axis direction and in the vertical-axis direction respectively.
Traversal in the horizontal-axis direction (Fig. 4): the segment P_rP_f' can be written as y = kx + b, and after translating by d the two bounding lines have intercepts b_1 = b + d and b_2 = b - d; these two lines form the upper and lower sides of the parallelogram. The ordinates of points in the parallelogram run over (j_1 - d, ..., j_1, j_1 + 1, j_1 + 2, ..., j_2, ..., j_2 + d) (taking j_1 < j_2 as an example). For each ordinate the corresponding abscissa is obtained from the slope, and the abscissa is then shifted one unit to the right at a time; for each visited point (i_k, j_k) the intercept b_k of the line of slope k through that point is computed and the condition b_2 < b_k < b_1 is checked. If it does not hold, traversal continues to the right; this ensures that the visited points lie inside the parallelogram region. The grid state of each qualifying point is then checked, and if the point is occupied, traversal in that direction stops.
Traversal in the vertical-axis direction (Fig. 5): the abscissas of the points on P_rP_f' are (i_1, i_1 + 1, i_1 + 2, ..., i_2), and the corresponding ordinates (j_1, j_{i+1}, j_{i+2}, ..., j_2) are obtained from the slope. Each point (i_k, j_k) on the segment is translated upward and downward by the distance d, one unit at a time, checking the grid state at every step; if a point is occupied, traversal in that direction stops. When the upward and downward traversals of a point are finished, the next point is processed in the same way until all points on the segment have been traversed;
step 3-3-5: output the amount of environmental information counted along the horizontal-axis and vertical-axis directions as the return value, giving the path information richness Q_f of the robot reaching the boundary point. In this embodiment two cases are set, with sensing ranges of 5 m and 6 m respectively.
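A simplified sketch of steps 3-3-1 to 3-3-5 in grid coordinates: instead of the patent's axis-wise traversal it casts perpendicular rays from the extended path segment, which visits the same parallelogram and reproduces the same occlusion behaviour (stop at the first occupied cell on each side). The grid callback and the unit-step discretization are assumptions:

```python
import math

OCCUPIED = 100  # assumed occupancy-grid value for obstacle cells

def path_information_richness(grid, p_r, p_f, d):
    """Q_f (simplified): count occupied cells visible inside the parallelogram
    swept around segment p_r -> p_f, extended by d past p_f.  Each perpendicular
    ray stops at its first occupied cell, mimicking a 2-D sensor's occlusion."""
    (i1, j1), (i2, j2) = p_r, p_f
    length = math.hypot(i2 - i1, j2 - j1)
    ux, uy = (i2 - i1) / length, (j2 - j1) / length   # unit vector along the path
    nx, ny = -uy, ux                                  # unit normal to the path
    richness = 0
    steps = int(length + d) + 1
    for s in range(steps):                            # walk the extended segment
        cx, cy = i1 + ux * s, j1 + uy * s
        for side in (1, -1):                          # sweep both sides of the path
            for k in range(1, int(d) + 1):
                cell = (round(cx + side * nx * k), round(cy + side * ny * k))
                if grid(cell) == OCCUPIED:
                    richness += 1
                    break                             # occluded: stop this ray
    return richness
```

With an empty region the count is zero; a wall parallel to the path contributes one occupied cell per step along the segment, exactly the unoccluded front-facing cells the patent counts.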
Step 3-4: calculate the boundary point information richness O_f, i.e. the amount of environmental information within a circle of information gain radius r_3 centred on the boundary point, as follows:
step 3-4-1: the boundary point position P_f(x, y) and the information gain radius r_3 are converted to the grid coordinate system, giving the boundary point position P_f(i, j) and the information gain radius R;
step 3-4-2: in the grid coordinate system, the amount of environmental information within the circle of radius R centred at P_f(i, j) is counted; the circle is approximated by a square centred on the boundary point, the square boundaries i_min, i_max, j_min, j_max are obtained, the whole square region is traversed, and the number of occupied grids counted is taken as the amount of environmental information;
step 3-4-3: output the counted amount of environmental information as the return value, giving the boundary point information richness O_f of the boundary point. In this embodiment, r_3 = 1.
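Steps 3-4-1 to 3-4-3 reduce to counting occupied cells in a bounding square; a sketch with an assumed grid callback and cell value:

```python
OCCUPIED = 100  # assumed occupancy-grid value for obstacle cells

def boundary_information_richness(grid, p_f, R):
    """O_f: occupied cells in the square of half-width R around frontier p_f
    (the circle of radius R is approximated by its bounding square)."""
    i, j = p_f
    i_min, i_max, j_min, j_max = i - R, i + R, j - R, j + R
    return sum(1
               for u in range(i_min, i_max + 1)
               for v in range(j_min, j_max + 1)
               if grid(u, v) == OCCUPIED)
```

The square approximation over-counts corner cells slightly, which is the trade-off the patent accepts to avoid a per-cell circle test.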
Step 3-5: construct the revenue function R_f, specifically:
R_f = w_1*h_1*I_f - w_2*N_f + w_3*h_2*Q_f + w_4*O_f
where w_1, w_2, w_3, w_4 are weight parameters and h_1, h_2 are distance constraint functions, used to keep the robot exploring within a limited range and to reduce backtracking. The closed forms of h_1 and h_2 are given as equation images in the original publication; δ and ζ are very small coefficients with δ ≤ 0.1 and ζ ≤ 0.01, c_1 and c_2 are empirical parameters, and d_1 is the Euclidean distance between the robot position and the boundary point position. Clearly, when d_1 < c_1 or d_1 > c_2, h_1 and h_2 are very small and the revenue function R_f is small, which prevents the robot from staying in place or selecting a target point that is too far away; when c_1 < d_1 < c_2, h_1 and h_2 decrease as the distance between the robot position and the boundary point position increases, moving the robot to explore closer boundary points first. In this embodiment the parameter values are shown in Table 1.
Parameter  w_1  w_2  w_3  w_4  δ    ζ     c_1  c_2
Value      3    1    20   50   0.1  0.01  1    5
Table 1. Parameter values
This gives the revenue function R_f = 3*h_1*I_f - 1*N_f + 20*h_2*Q_f + 50*O_f (with the instantiated h_1 and h_2 given as an equation image in the original publication).
From this, the revenue value of R_f at each centroid point is calculated.
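Putting the pieces together, revenue evaluation and target selection (the input to step 4) can be sketched as follows. Because the closed forms of h_1 and h_2 are given only as images in the original, they are passed in as callables of d_1; the weights default to Table 1's values. All names are illustrative:

```python
def select_target(centroids, gains, costs, q_vals, o_vals,
                  h1=lambda d: 1.0, h2=lambda d: 1.0,
                  w=(3.0, 1.0, 20.0, 50.0)):
    """Evaluate R_f = w1*h1*I_f - w2*N_f + w3*h2*Q_f + w4*O_f at every
    centroid (weights from Table 1) and return (best centroid, best R_f).
    h1/h2 take d1, the robot-to-boundary Euclidean distance (here = N_f)."""
    w1, w2, w3, w4 = w
    best, best_r = None, float("-inf")
    for c, i_f, n_f, q_f, o_f in zip(centroids, gains, costs, q_vals, o_vals):
        r = w1 * h1(n_f) * i_f - w2 * n_f + w3 * h2(n_f) * q_f + w4 * o_f
        if r > best_r:
            best, best_r = c, r
    return best, best_r
```

Note how the large w_3 and w_4 weights let Q_f and O_f dominate over raw information gain, which is exactly how the method biases selection toward information-rich paths and boundaries.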
Step 4: select the centroid point with the maximum revenue value as the target point; an A* global path planning algorithm quickly plans a path from the robot's current position to the target point in the known environment, a DWA local path planning algorithm uses local environmental information to complete obstacle avoidance, and the two combined guide the robot's exploratory movement to the target point.
Step 5: repeat steps 1 to 4 until the robot has explored the whole environment and obtained the grid map.
The beneficial effects of the invention are as follows: the mobile robot autonomous exploration method integrating path information richness introduces the environmental information of the exploration process into boundary point selection; by adding the path information richness and the boundary point information richness to the revenue function, the factors considered in boundary point selection are enriched, the robot's perception uncertainty is reduced, and exploration efficiency is improved.
To demonstrate the effectiveness of the invention, an autonomous exploration method based on rapidly-exploring random trees (hereinafter "RRTs"; Umari H., Mukhopadhyay S., "Autonomous robotic exploration based on multiple rapidly-exploring random trees", IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, 2017: 1396-1402) was compared with the invention in two simulation scenarios with sensing ranges of 5 m and 6 m respectively. In both scenarios a relatively open area is set, and the sensor coverage is set to 180°. The C-type map is 11 m × 14 m, with an open region of 11 m × 9 m, as shown in Fig. 6. The L-type environment is 19 m × 14 m, with an open region of 14 m × 9 m, as shown in Fig. 7. The robot's performance was tested with unsensed ranges of 90° and 120° in the open area, with corresponding laser sensing ranges set to 6 m and 5 m respectively. A total of 40 experiments were run in each environment: 20 with a 5 m sensing range (10 with RRTs, 10 with the invention) and 20 with a 6 m sensing range (likewise 10 comparative runs with the invention). The comparison metrics are the time taken to explore the entire environment and the length of the path travelled.
As shown in table 2, in the C-type map with the sensing range set to 5 m, the exploration time of the invention is reduced by 13.8% and the exploration path length by 3.8% compared with RRT; as shown in table 3, with the sensing range set to 6 m, the exploration time is reduced by 9% and the path length by 2.5%. As shown in table 4, in the L-type map with the sensing range set to 5 m, the exploration time of the invention is reduced by 21.1% and the path length by 4.7% compared with RRT; as shown in table 5, with the sensing range set to 6 m, the exploration time is reduced by 14.1% and the path length by 2.6%. These simulation experiments further illustrate the beneficial effects of the invention.
TABLE 2 Experimental comparison data for the C-type environment at a 5 m sensing range (table body available only as an image in the source)
TABLE 3 Experimental comparison data for the C-type environment at a 6 m sensing range (table body available only as an image in the source)
TABLE 4 Experimental comparison data for the L-type environment at a 5 m sensing range (table body available only as an image in the source)
TABLE 5 Experimental comparison data for the L-type environment at a 6 m sensing range (table body available only as an image in the source)
The above-mentioned embodiments are merely preferred embodiments that fully illustrate the present invention; the scope of the present invention is not limited thereto. Equivalent substitutions or modifications made by those skilled in the art on the basis of the invention all fall within the protection scope of the invention, which is defined by the claims.

Claims (3)

1. A mobile robot autonomous exploration method fusing path information richness is characterized by comprising the following steps:
step 1: the robot acquires environmental information within its sensing range, changing the environment in that range from an unknown state to a known state; the environmental information is acquired by the robot's sensor, with an SLAM module receiving the sensor data;
step 2: generating a fast search random tree in an idle area in the known state, clustering the boundary points of the fast search random tree with a clustering algorithm to obtain centroid points, and detecting the grid state of the boundary points to eliminate invalid points;
and step 3: combined information gain I f Path cost N f And a path information richness Q f And the richness of information of boundary points O f Constructing a revenue function R f Calculating a revenue function R f The value of profit at each centroid point, the path information richness Q f For the number of the effective environmental information in the region S sensed in the process that the robot moves to the boundary point, the specific calculation method comprises the following steps:
the robot position P_r(x_1, y_1), the boundary point position P_f(x_2, y_2) and the robot sensing range r_2 are converted into the grid coordinate system as the robot position P_r(i_1, j_1), the boundary point position P_f(i_2, j_2) and the sensing range d;
the path of the robot moving to the boundary point is a line segment
Figure FDA0003730077800000011
Adding the perception range d and line segment when the robot reaches the boundary point
Figure FDA0003730077800000012
The forward extension distance d is obtained as a line segment
Figure FDA0003730077800000013
Segment of line
Figure FDA0003730077800000014
Respectively translating the parallelogram areas obtained by the d to the two sides to serve as areas S which can be sensed in the process that the robot moves to the boundary points;
the amount of valid environmental information in the region S is counted, specifically by traversing the points in the region S along the direction of the extended segment, in both the horizontal-axis and vertical-axis directions; the number of points in the region S traversed in this way is output as the return value, giving the path information richness Q_f of the robot reaching the boundary point;
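The region-S counting described above can be sketched as follows, assuming an occupancy-grid coding of -1 = unknown, 0 = free, 1 = occupied (the patent does not specify the coding). For simplicity this sketch approximates the parallelogram as the set of cells within perpendicular distance d of the extended segment, which covers the same swept band:

```python
import numpy as np

def path_information_richness(grid, p_r, p_f, d):
    """Approximate Q_f: count known cells in the band swept around the
    robot-to-boundary segment extended forward by d.
    grid: 2D int array, -1 = unknown, 0 = free, 1 = occupied (assumed coding).
    p_r, p_f: (i, j) robot and boundary-point positions in grid coordinates.
    d: sensing range in cells."""
    r = np.asarray(p_r, dtype=float)
    f = np.asarray(p_f, dtype=float)
    v = f - r
    L = np.linalg.norm(v)
    if L == 0:
        return 0
    u = v / L
    end = r + u * (L + d)            # segment extended forward by d
    seg = end - r
    seg_len = np.linalg.norm(seg)
    count = 0
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            if grid[i, j] == -1:     # only known cells carry information
                continue
            p = np.array([i, j], dtype=float)
            # project the cell onto the extended segment, clamped to its ends
            t = np.clip(np.dot(p - r, seg) / seg_len ** 2, 0.0, 1.0)
            if np.linalg.norm(p - (r + t * seg)) <= d:
                count += 1
    return count
```

A grid in which more cells along the candidate path are already known thus yields a larger Q_f, which is the quantity the revenue function rewards.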
the information gain I_f is the number of unknown grids within a circle of radius r_1 centred at the boundary point, where unknown grids that fall both within the circle of radius r_1 centred at the robot's current position and within the circle of radius r_1 centred at the boundary point are removed; the path cost N_f in step 3 is the Euclidean distance between the robot's current position and the boundary point position;
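A minimal sketch of I_f and N_f under the same assumed grid coding (-1 = unknown); the overlap with the disc of radius r_1 around the robot is removed, as described above:

```python
import math

def information_gain_and_cost(grid, robot, frontier, r1):
    """Sketch of I_f and N_f.  I_f counts unknown cells within radius r1
    of the boundary point, excluding cells that also lie within radius r1
    of the robot (the overlap removal described in the claim).  N_f is the
    Euclidean robot-to-boundary distance.  Grid coding is assumed."""
    gain = 0
    for i in range(len(grid)):
        for j in range(len(grid[0])):
            if grid[i][j] != -1:
                continue
            d_f = math.hypot(i - frontier[0], j - frontier[1])
            d_r = math.hypot(i - robot[0], j - robot[1])
            if d_f <= r1 and d_r > r1:   # inside frontier disc, outside robot disc
                gain += 1
    cost = math.hypot(robot[0] - frontier[0], robot[1] - frontier[1])
    return gain, cost
```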
the information richness O of the boundary point f To take the boundary point position as the center of a circle and the information gain radius r 3 The specific calculation method is as follows:
position P of boundary point f (x, y) and information gain radius r 3 Position P of boundary point converted to grid coordinate system f (i, j) and an information gain radius R;
in the grid coordinate system, counting the position P of the boundary point f (i, j) is a circle with the center and R is a radiusThe number of the internal environment information is that the circle is approximate to a square with the boundary point as the center, and then the boundary i of the square is obtained min 、i max 、j min 、j max Then traversing the whole square area, and taking the number of the grids occupied by statistics as the number of the environmental information;
outputting the counted number of the environment information as a return value as the richness O of the boundary point information f
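The square approximation of the information-gain circle can be sketched as follows (again assuming 1 = occupied; R given in cells):

```python
def boundary_information_richness(grid, p_f, R):
    """Sketch of O_f: approximate the circle (centre p_f, radius R) by its
    bounding square [i_min..i_max] x [j_min..j_max], clipped to the map,
    and count the occupied cells inside it (assumed coding: 1 = occupied)."""
    i, j = p_f
    i_min, i_max = max(0, i - R), min(len(grid) - 1, i + R)
    j_min, j_max = max(0, j - R), min(len(grid[0]) - 1, j + R)
    count = 0
    for a in range(i_min, i_max + 1):
        for b in range(j_min, j_max + 1):
            if grid[a][b] == 1:
                count += 1
    return count
```

Trading the exact circle test for a square traversal avoids a per-cell distance computation at the cost of slightly over-counting near the corners.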
the revenue function R_f is:
R_f = w_1 · h_1 · I_f − w_2 · N_f + w_3 · h_2 · Q_f + w_4 · O_f
wherein w_1, w_2, w_3, w_4 are weight parameters, and h_1, h_2 are distance constraint functions used to keep the robot exploring within a limited range and to reduce backtracking;
the distance constraint function is:
Figure FDA0003730077800000021
Figure FDA0003730077800000022
where δ and ζ are constant coefficients, c 1 And c 2 As an empirical parameter, d 1 The Euclidean distance between the position of the robot and the position of the boundary point;
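Since the explicit forms of h_1 and h_2 are not recoverable here, the sketch below assumes hypothetical sigmoid-style distance constraints in δ, ζ, c_1, c_2 that damp the gain terms for distant boundary points; only the combining formula R_f = w_1·h_1·I_f − w_2·N_f + w_3·h_2·Q_f + w_4·O_f itself comes from the claim:

```python
import math

def revenue(I_f, N_f, Q_f, O_f, d1, w=(1.0, 1.0, 1.0, 1.0),
            delta=1.0, zeta=1.0, c1=5.0, c2=5.0):
    """Sketch of R_f = w1*h1*I_f - w2*N_f + w3*h2*Q_f + w4*O_f.
    h1 and h2 below are ASSUMED sigmoid forms standing in for the
    patent's (image-only) distance constraint functions: they approach 1
    for nearby boundary points and 0 for distant ones, discouraging
    backtracking to far-away frontiers."""
    h1 = 1.0 / (1.0 + math.exp(delta * (d1 - c1)))   # assumed form
    h2 = 1.0 / (1.0 + math.exp(zeta * (d1 - c2)))    # assumed form
    w1, w2, w3, w4 = w
    return w1 * h1 * I_f - w2 * N_f + w3 * h2 * Q_f + w4 * O_f
```

Whatever the exact constraint forms, the design intent is the same: the gain terms I_f and Q_f are progressively discounted as d_1 grows, so nearby informative frontiers win.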
step 4: selecting the centroid point with the maximum revenue value as the target point and guiding the robot to move to it; specifically, an A* global path planning algorithm rapidly plans a path from the robot's current position to the target point in the known environment, a DWA local path planning algorithm lets the robot avoid obstacles using local environment information, and the two are combined to guide the robot's exploratory motion toward the target point;
step 5: repeating steps 1 to 4 until the robot has explored the whole environment, obtaining the grid map.
2. The mobile robot autonomous exploration method fusing path information richness according to claim 1, characterized in that: the fast search random tree in step 2 comprises a global tree and a local tree.
3. The mobile robot autonomous exploration method fusing path information richness according to claim 1, characterized in that: the clustering algorithm adopted in step 2 is a mean-shift algorithm.
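Claim 3 names mean-shift as the clustering algorithm. A minimal flat-kernel sketch of clustering frontier points into centroid candidates follows; the bandwidth and the mode-merging rule are illustrative choices, not taken from the patent:

```python
import numpy as np

def mean_shift(points, bandwidth, iters=50, tol=1e-4):
    """Flat-kernel mean-shift sketch: each point is repeatedly shifted to
    the mean of the original points within `bandwidth` of it; converged
    modes closer than `bandwidth` are merged into single centroids."""
    pts = np.asarray(points, dtype=float)
    modes = pts.copy()
    for _ in range(iters):
        moved = 0.0
        for k in range(len(modes)):
            dist = np.linalg.norm(pts - modes[k], axis=1)
            nbrs = pts[dist <= bandwidth]       # never empty: includes itself
            new = nbrs.mean(axis=0)
            moved = max(moved, np.linalg.norm(new - modes[k]))
            modes[k] = new
        if moved < tol:
            break
    # merge modes that converged to (nearly) the same location
    centroids = []
    for m in modes:
        if not any(np.linalg.norm(m - c) < bandwidth for c in centroids):
            centroids.append(m)
    return np.array(centroids)
```

Each resulting centroid then serves as a candidate boundary point at which the revenue function R_f is evaluated.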
CN202011240856.8A 2020-11-09 2020-11-09 Mobile robot autonomous exploration method integrating path information richness Active CN112327852B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011240856.8A CN112327852B (en) 2020-11-09 2020-11-09 Mobile robot autonomous exploration method integrating path information richness

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011240856.8A CN112327852B (en) 2020-11-09 2020-11-09 Mobile robot autonomous exploration method integrating path information richness

Publications (2)

Publication Number Publication Date
CN112327852A CN112327852A (en) 2021-02-05
CN112327852B true CN112327852B (en) 2022-12-27

Family

ID=74316706

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011240856.8A Active CN112327852B (en) 2020-11-09 2020-11-09 Mobile robot autonomous exploration method integrating path information richness

Country Status (1)

Country Link
CN (1) CN112327852B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113050632B (en) * 2021-03-11 2022-06-14 珠海一微半导体股份有限公司 Map exploration method and chip for robot to explore unknown area and robot
CN113110482B (en) * 2021-04-29 2022-07-19 苏州大学 Indoor environment robot exploration method and system based on priori information heuristic method
CN113324558A (en) * 2021-05-17 2021-08-31 亿嘉和科技股份有限公司 Grid map traversal algorithm based on RRT
CN113485375B (en) * 2021-08-13 2023-03-24 苏州大学 Indoor environment robot exploration method based on heuristic bias sampling
CN113805590A (en) * 2021-09-23 2021-12-17 云南民族大学 Indoor robot autonomous exploration method and system based on boundary driving
CN113848912A (en) * 2021-09-28 2021-12-28 北京理工大学重庆创新中心 Indoor map establishing method and device based on autonomous exploration

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109341707B (en) * 2018-12-03 2022-04-08 南开大学 Method for constructing three-dimensional map of mobile robot in unknown environment
CN110221614B (en) * 2019-06-14 2021-06-01 福州大学 Multi-robot map exploration method based on rapid exploration of random tree
CN111432015B (en) * 2020-03-31 2022-07-19 中国人民解放军国防科技大学 Dynamic noise environment-oriented full-coverage task allocation method
CN111638526B (en) * 2020-05-20 2022-08-26 电子科技大学 Method for robot to automatically build graph in strange environment

Also Published As

Publication number Publication date
CN112327852A (en) 2021-02-05

Similar Documents

Publication Publication Date Title
CN112327852B (en) Mobile robot autonomous exploration method integrating path information richness
CN111338359B (en) Mobile robot path planning method based on distance judgment and angle deflection
CN113485375B (en) Indoor environment robot exploration method based on heuristic bias sampling
Dornhege et al. A frontier-void-based approach for autonomous exploration in 3d
CN107860387B (en) Plant protection drone operation flight course planning method and plant protection drone
CN111024092B (en) Method for rapidly planning tracks of intelligent aircraft under multi-constraint conditions
CN103196430B (en) Based on the flight path of unmanned plane and the mapping navigation method and system of visual information
CN103926930A (en) Multi-robot cooperation map building method based on Hilbert curve detection
CN108801268A (en) Localization method, device and the robot of target object
CN113110522A (en) Robot autonomous exploration method based on composite boundary detection
Jebari et al. Multi-sensor semantic mapping and exploration of indoor environments
Drouilly et al. Semantic representation for navigation in large-scale environments
Ji et al. Mapless-planner: A robust and fast planning framework for aggressive autonomous flight without map fusion
CN109919955A (en) The tunnel axis of ground formula laser radar point cloud extracts and dividing method
CN110531782A (en) Unmanned aerial vehicle flight path paths planning method for community distribution
CN104898106B (en) Towards the ground point extracting method of complicated landform airborne laser radar data
CN116522548B (en) Multi-target association method for air-ground unmanned system based on triangular topological structure
CN117406771A (en) Efficient autonomous exploration method, system and equipment based on four-rotor unmanned aerial vehicle
Yue et al. Kinect based real time obstacle detection for legged robots in complex environments
CN108253968B (en) Barrier winding method based on three-dimensional laser
Lin et al. Faster navigation of semi-structured forest environments using multirotor UAVs
CN115469662A (en) Environment exploration method, device and application
He et al. Feature extraction from 2D laser range data for indoor navigation of aerial robot
CN107543541A (en) A kind of ground magnetic positioning method of suitable indoor free movement carrier
Mahmud et al. Crop identification and navigation design based on probabilistic roadmap for crop inspection robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant