CN114967694A - Mobile robot collaborative environment exploration method - Google Patents

Mobile robot collaborative environment exploration method

Info

Publication number
CN114967694A
CN114967694A (application CN202210597663.0A)
Authority
CN
China
Prior art keywords
robot
points
point
map
task
Prior art date
Legal status
Pending
Application number
CN202210597663.0A
Other languages
Chinese (zh)
Inventor
陈超
童鑫
徐诚
Current Assignee
Jiangsu University of Science and Technology
Original Assignee
Jiangsu University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Jiangsu University of Science and Technology
Priority to CN202210597663.0A
Publication of CN114967694A
Legal status: Pending

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G05D1/0223 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • G05D1/0257 Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • G05D1/0268 Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D1/0274 Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device


Abstract

The invention belongs to the technical field of robots and in particular relates to a mobile robot collaborative environment exploration method. Multiple mobile robots jointly explore and build an environment map, which improves working efficiency; a node-based distributed mode lets several lower-performance robots complete more complex tasks, saving task cost, and when one robot breaks down the remaining robots can continue the task. A boundary-cell search algorithm based on an improved RRT algorithm is proposed, which alleviates the slow tree growth caused by repeated random sampling and improves search efficiency. Dynamic obstacle avoidance is completed in combination with the DWA algorithm, increasing the robots' ability to cope with emergencies. A task allocation module is added, reducing the robots' repeated-search rate and increasing search efficiency. Based on the ROS system, its unique communication mechanism and practical tools, all subtasks can be completed with ease.

Description

Mobile robot collaborative environment exploration method
Technical Field
The invention belongs to the technical field of robots, and particularly relates to a mobile robot collaborative environment exploration method.
Background
Traditional mobile robots usually work independently. As the tasks they execute become more complex and diverse, a single robot working alone can hardly meet the requirements of practical applications such as robot soccer matches and cooperative combat, so the study of multi-robot systems has attracted the attention of scholars at home and abroad.
In the map-building process, fusing the multiple sensors of a single robot is not enough; sensor information from other robots is needed to compensate for the limits of a robot's own environment-perception capability, so that the perceived environment is closer to the real one. Multiple robots have further advantages over a single robot: for a task that can be decomposed, a multi-robot system can execute the different subtasks in parallel, and the system can be designed specifically to accomplish a given task. For a multi-robot system, however, self-localization and mapping must be performed while the robots' positions are unknown and the environment is unknown, and accurately identifying and understanding the working environment still faces many difficulties. In a complex dynamic environment, multiple robots that cooperate to share and fuse sensor information can effectively improve perception of the surroundings and thereby the localization and mapping accuracy of the robot group; cooperative localization and mapping of multiple robots has therefore become both a difficulty and a hot spot of multi-robot system research.
Disclosure of Invention
In order to solve the above technical problems, the invention discloses a mobile robot collaborative environment exploration method for exploring an unknown environment and quickly building a grid map. A multi-robot mode of collaborative exploration and information sharing improves working efficiency and reduces workload; when one robot breaks down, the other robots can still work normally and complete the task. Map matching and fusion are completed with an approach based on improved ORB feature extraction; a search algorithm based on an improved RRT algorithm quickly searches for unknown boundary cells, which are then clustered and updated. The improved algorithm compensates for the slow tree growth caused by repeated random sampling in each round of the original algorithm by fully reusing the idle state points acquired in that round. The central computer uses an algorithm based on the market auction method to quickly publish and distribute exploration tasks to each mobile robot.
The invention adopts the following specific technical scheme:
a mobile robot collaborative environment exploration method uses two-wheeled differential mobile robots, each mounting a 2D laser radar and carrying a PC that runs the ROS operating system; a vision sensor can additionally be fitted to observe the current environment. The method specifically comprises the following steps:
Step one: a mobile robot carrying a laser radar maps the unknown environment. The Gmapping function package in ROS, which integrates a Rao-Blackwellized particle filtering algorithm, subscribes to the robot's depth, IMU and odometry information; after the necessary parameters are configured, it creates and outputs a probability-based two-dimensional grid map, so each robot builds a local map of the current environment and synchronously shares the map information with the other independent mobile robots.
Step two: during robot motion, global localization of each robot and mutual localization between robots are achieved with the AMCL localization algorithm.

The initial relative poses of the multiple robots are loaded, their local map information is detected in real time, and the map data is received. The size and pose of the global map are determined from the parameters in the current system and the local maps built by the robots in real time. Feature points in the grayscale image are extracted for matching using ORB feature extraction; a moment method is introduced to determine the direction of each feature point: the centroid of the image patch within radius r of the feature point is computed from the patch moments, and the vector from the feature-point coordinates to this centroid is taken as the feature point's direction. The moments are computed as:

m_pq = Σ_{(x,y)∈B} x^p · y^q · I(x, y),  p, q ∈ {0, 1}

where I(x, y) is the image grayscale expression, (x, y) ranges over the points of the image patch B, p and q are the moment orders, and m_pq is the moment of the image patch. The centroid of the patch is:
C = ( m_10 / m_00 , m_01 / m_00 )

Taking the corner-point coordinates as the origin, the geometric centre O of the image patch is connected to the centroid C to obtain the direction vector OC, so the direction of the feature point is computed as:

θ = arctan2( m_01 , m_10 )

where θ is the azimuth angle, i.e. the direction of the feature point.
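As an illustration of the orientation computation above, the following Python sketch accumulates the patch moments m_10 and m_01 about the patch centre and returns θ = atan2(m_01, m_10); the function name and the list-of-lists grayscale patch representation are assumptions for the example, not part of the invention:

```python
import math

def patch_orientation(patch):
    # m_pq = sum over the patch of x^p * y^q * I(x, y), with (x, y)
    # measured from the patch centre (the feature point / corner)
    h, w = len(patch), len(patch[0])
    cy, cx = h // 2, w // 2
    m10 = m01 = 0.0
    for y, row in enumerate(patch):
        for x, v in enumerate(row):
            m10 += (x - cx) * v   # x-moment: centroid offset in x
            m01 += (y - cy) * v   # y-moment: centroid offset in y
    # angle of the vector from the centre to the intensity centroid
    return math.atan2(m01, m10)
```

A patch brighter toward its bottom rows, for example, yields θ ≈ π/2, because the intensity centroid lies below the centre.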
After the direction of the feature points is obtained by the above calculation, an improved (steered) BRIEF algorithm takes n point pairs generated around each feature point, rotates them by the feature-point direction, and then compares them. The n pairs are collected into the 2×n matrix:

S = ( x_1 x_2 … x_n ; y_1 y_2 … y_n )
Denote by R_θ the rotation matrix formed by the angle θ; the coordinates of the matched point pairs after rotation are then:

S_θ = R_θ S

where R_θ is the rotation matrix of angle θ, S_θ is the corresponding matrix after rotation, and S the matrix before rotation;
Then, for each of the n point pairs, the pixel intensities of the two points are compared and binarized with the formula:

τ(p; x, y) = 1 if p(x) < p(y), and 0 otherwise

where p(x) and p(y) are the pixel values at the random points x and y respectively, and τ is the binary value obtained by comparing the pixel points.
Repeating this assignment over the n pairs of random points finally forms a binary code, which is the descriptor of the feature point. After a three-step purification, matching is performed with the RANSAC algorithm, and the homography matrix T finally computed is used to complete the map fusion;
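The steered-BRIEF comparison above can be sketched as follows (an illustrative Python sketch; the function name, the list-of-lists patch representation, and the integer rounding of rotated coordinates are assumptions for the example, not the patent's implementation). Each test-point pair is rotated by θ before the intensity comparison τ:

```python
import math

def steered_brief(patch, theta, pairs):
    # rotate every test point by the feature-point direction theta
    # (S_theta = R_theta * S), then binarize: tau = 1 if p(x) < p(y) else 0
    c, s = math.cos(theta), math.sin(theta)
    h, w = len(patch), len(patch[0])
    cy, cx = h // 2, w // 2          # patch centre = feature point

    def val(pt):
        x, y = pt                    # offsets relative to the centre
        rx = int(round(c * x - s * y)) + cx
        ry = int(round(s * x + c * y)) + cy
        rx = min(max(rx, 0), w - 1)  # clamp to the patch bounds
        ry = min(max(ry, 0), h - 1)
        return patch[ry][rx]

    return [1 if val(a) < val(b) else 0 for a, b in pairs]
```

Concatenating the n resulting bits gives the binary descriptor of the feature point.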
step three: using the search algorithm for performing boundary cells on a map based on the RRT algorithm, a local detector firstly initializes an initial point, and then randomly selects a point around the initial point as a sampling point in each iteration, and the sampling point is marked as x rand Since the sampling point probability is not the node for tree growth, and is naturally not in the growing graph G, a Nearest to the current sampling point is found through a Nearest functionNode, denoted as x Nearest . Followed by Steer function with x Nearest The node is used as a center, a fixed step length grows around randomly, and the node generated at the other end of the step length is marked as x new X of this new It is the point we are looking at. Next, a gridcheck function is used to examine whether the node can be used to continuously detect the boundary point. This function classifies the detected points into two categories:
1. if the detected point and x Nearest The unknown area of the grid between the two points indicates that the robot should move in the direction to search the area, gridcheck is-1, and x is set new And sending the boundary point to a filtering module. And because the unknown path is found, the initial point of the update is unknown of the current robot, all the previous trees are deleted, and the new tree is grown again at the current position.
2. If the detected points and x Nearest The grids between are all known regions and have no obstacles, the gridcheck is 1, and if there are obstacles, the gridcheck is 0, both cases indicate that tree growth is needed to search for boundary points, so x is taken as new Added as a vertex into the tree, and the connecting lines between them are also expanded into the graph as branches. New boundary points are explored by continually iterating through the current tree that is reset. It is proposed here that when a point is detected x Nearest When the grids between the two are known areas and have no obstacles, random sampling is not needed again, and the random sampling point is used again to call Steer to generate a new state point x new And determining x new If not, the next round of random sampling is carried out again. If it is reasonable, judge the new x again Nearest To x new And if the grid state on the connecting line detects an unknown boundary unit, sending the unknown boundary unit to the unknown boundary unit set clustering updating module. And if the obstacle is detected, jumping out and carrying out the next round of random sampling again. If the new trellis state is left idle. The idle state point of this round is again utilized.
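A minimal sketch of the Steer and gridcheck primitives described above (illustrative Python under an assumed occupancy-grid encoding where -1 is unknown, 0 is free and 100 is occupied; the names, the encoding and the sampling resolution are assumptions, not the patent's code):

```python
import math

UNKNOWN, FREE, OBSTACLE = -1, 0, 100   # assumed occupancy-grid encoding

def steer(x_near, x_rand, step):
    # grow a fixed step from x_near toward the sampling point x_rand
    dx, dy = x_rand[0] - x_near[0], x_rand[1] - x_near[1]
    d = math.hypot(dx, dy) or 1.0
    return (x_near[0] + step * dx / d, x_near[1] + step * dy / d)

def grid_check(grid, a, b, samples=20):
    # classify the segment a -> b, in the spirit of gridcheck:
    # -1: an unknown cell was found (boundary/frontier direction)
    #  0: blocked by an obstacle
    #  1: entirely known free space (the idle case whose state point
    #     the improved algorithm reuses instead of resampling)
    for i in range(samples + 1):
        t = i / samples
        x = int(round(a[0] + t * (b[0] - a[0])))
        y = int(round(a[1] + t * (b[1] - a[1])))
        cell = grid[y][x]
        if cell == UNKNOWN:
            return -1
        if cell == OBSTACLE:
            return 0
    return 1
```

In the improved loop, a return value of 1 marks the idle case: the state point can serve as the next growth centre without drawing a fresh random sample.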
Step four: a clustering and update module is set up. Unknown boundary cells in different coordinate systems are uniformly converted into coordinates in the global coordinate system, and the received set of unknown boundary cells is clustered with the DBSCAN algorithm. Input: the sample set D = {x1, x2, ..., xm}, the neighborhood parameters (ε, MinPts), and a sample distance metric; output: the cluster division C. The specific process is as follows:

Flow 1: initialize the core object set Ω = ∅, the cluster number k = 0, the unvisited sample set Γ = D, and the cluster division C = ∅.

Flow 2: for j = 1, 2, ..., m, find all core objects by the following steps:
a. find the ε-neighborhood subsample set N_ε(xj) of sample xj using the distance metric;
b. if the subsample set satisfies |N_ε(xj)| ≥ MinPts, add sample xj to the core object set: Ω = Ω ∪ {xj}.

Flow 3: if the core object set Ω = ∅, end the algorithm; otherwise, go to Flow 4.

Flow 4: randomly select a core object o from the core object set Ω, initialize the current cluster core-object queue Ω_cur = {o}, increment the class sequence number k = k + 1, initialize the current cluster sample set C_k = {o}, and update the unvisited sample set Γ = Γ - {o}.

Flow 5: if the current cluster core-object queue Ω_cur = ∅, the current cluster C_k is complete; update the cluster division C = {C1, C2, ..., Ck}, update the core object set Ω = Ω - C_k, and go to Flow 3.

Flow 6: take a core object o' out of the current cluster core-object queue Ω_cur, find its ε-neighborhood subsample set N_ε(o') through the neighborhood distance threshold ε, let Δ = N_ε(o') ∩ Γ, update the current cluster sample set C_k = C_k ∪ Δ, update the unvisited sample set Γ = Γ - Δ, update Ω_cur = Ω_cur ∪ (Δ ∩ Ω) - {o'}, and go to Flow 5.

Flow 7: output the result: the cluster division C = {C1, C2, ..., Ck}.
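The DBSCAN flow above can be sketched in a few lines of Python (an illustrative sketch; the function name and the use of Euclidean distance as the sample distance metric are assumptions, and a library implementation such as scikit-learn's DBSCAN could equally be used):

```python
import math

def dbscan(points, eps, min_pts):
    # Flow 2: core objects = points with at least min_pts eps-neighbours
    def neigh(i):
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]

    core = {i for i in range(len(points)) if len(neigh(i)) >= min_pts}
    unvisited = set(range(len(points)))
    remaining_core = set(core)
    clusters = []
    # Flows 3-6: grow one cluster per seed core object
    while remaining_core:
        o = remaining_core.pop()
        if o not in unvisited:
            continue
        unvisited.discard(o)
        cluster, queue = {o}, [o]
        while queue:
            q = queue.pop()
            if q in core:                    # only core points expand
                for j in neigh(q):
                    if j in unvisited:
                        unvisited.discard(j)
                        cluster.add(j)
                        queue.append(j)
        clusters.append(sorted(cluster))
        remaining_core -= cluster            # Flow 5: Omega = Omega - C_k
    return clusters                          # Flow 7: cluster division
```

Applied to the boundary cells, each returned cluster corresponds to one candidate exploration region.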
Step five: the unknown boundary cell closest to the centre of each cluster is selected and issued as the next task; the unknown boundary cells are updated and outdated boundary cells are removed. After the unknown-boundary-cell information is received, a market-auction-based mode is adopted: in each auction round, each robot bids a suitable price for every unallocated task according to its current position and its existing tasks. Suppose the tasks already assigned to the robots are {S1, S2, ..., Sn}, and the set A is the set of currently unallocated tasks. Each robot bids on every task in A; the task with the minimum bid is found and assigned to the corresponding robot. Let a ∈ A be a currently unallocated task; each robot bids on task a according to its own current task situation.
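The market-auction allocation can be sketched as follows (an illustrative greedy Python sketch assuming the bid is the travel distance from the end of a robot's current task chain; the function name, data layout, and this particular pricing rule are assumptions, not the patent's exact method):

```python
import math

def auction_allocate(robot_positions, assigned, tasks):
    # one auction: each robot bids its travel cost from the end of its
    # current task chain; the minimum bid wins, task by task
    allocation = {r: list(assigned[r]) for r in robot_positions}
    unassigned = list(tasks)
    while unassigned:
        best = None                      # (bid, robot, task)
        for r, pos in robot_positions.items():
            start = allocation[r][-1] if allocation[r] else pos
            for t in unassigned:
                bid = math.dist(start, t)
                if best is None or bid < best[0]:
                    best = (bid, r, t)
        _, r, t = best
        allocation[r].append(t)          # assign the minimum-bid task
        unassigned.remove(t)
    return allocation
```

Because a robot's bid grows with its task chain, tasks naturally spread across the team, which is the intended effect of reducing repeated search.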
Step six: after each independent mobile robot obtains an exploration task, the A* algorithm is used to solve the robot's global shortest-path problem in the static environment, and the DWA algorithm completes local path planning to achieve dynamic obstacle avoidance.
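The global-planning half of this step can be sketched as a standard A* search on a 4-connected occupancy grid (an illustrative Python sketch under an assumed grid encoding of 0 = free and 1 = obstacle, with a Manhattan heuristic; the DWA local planner is not shown):

```python
import heapq
import itertools

def astar(grid, start, goal):
    # A* on a 4-connected grid; returns the cell path start..goal or None
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    tie = itertools.count()          # tiebreaker so the heap never compares cells
    open_set = [(h(start), next(tie), 0, start, None)]
    parent, g_best = {}, {start: 0}
    while open_set:
        _, _, g, cur, par = heapq.heappop(open_set)
        if cur in parent:
            continue                 # already expanded with a better cost
        parent[cur] = par
        if cur == goal:              # reconstruct the path backwards
            path = [cur]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx] == 0:
                ng = g + 1
                if ng < g_best.get(nxt, float('inf')):
                    g_best[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), next(tie), ng, nxt, cur))
    return None
```

With the admissible Manhattan heuristic and unit step costs, the returned path is a globally shortest one in the static grid.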
Step seven: back-end optimization is performed on each planned motion path by solving an optimal control problem, so that the path conforms to the robot's kinematic model.
In the invention, the chassis of the mobile robot carrying the laser radar is a two-wheeled differential drive chassis with support rollers at the front and rear; a notebook computer supporting the ROS system is also carried on the chassis. Based on the communication mode between ROS nodes, all devices in the system framework are connected through a wireless network.
The invention has the following beneficial effects: multiple mobile robots jointly explore and build the environment map, improving working efficiency; a node-based distributed mode lets several lower-performance robots complete more complex tasks, saving task cost, and when one robot fails the remaining robots can continue the task; a boundary-cell search algorithm based on an improved RRT algorithm is proposed, which alleviates the slow tree growth caused by repeated random sampling and improves search efficiency; dynamic obstacle avoidance is completed in combination with the DWA algorithm, improving the robots' ability to handle emergencies; a task allocation module is added, reducing the robots' repeated-search rate and increasing search efficiency; and based on the ROS system, its unique communication mechanism and practical tools, all subtasks can be completed with ease.
Drawings
FIG. 1 is a general flow diagram of the present invention.
Fig. 2 is a diagram of the main apparatus used in the present invention.
Fig. 3 is a block diagram of the entire system of the present invention.
Fig. 4 is a diagram of a map fusion method according to the present invention.
FIG. 5 is a flow chart of a task allocation algorithm based on market auctions in accordance with the present invention.
Detailed Description
For the purpose of enhancing the understanding of the present invention, the present invention will be described in further detail with reference to the accompanying drawings and examples, which are provided for the purpose of illustration only and are not intended to limit the scope of the present invention.
Embodiment: as shown in fig. 1, fig. 2 and fig. 3, a mobile robot collaborative environment exploration method uses at least two two-wheeled differential mobile robots, each carrying a 2D laser radar, together with a PC running the ROS operating system. It specifically includes the following steps:
Step one: operating a Gmapping node to control a single robot to construct a local map;
step two: the radar end runs an AMCL node, and in the moving process, the PC end subscribes/AMCL _ position topics and outputs the pose of the robot relative to a world coordinate system;
step three: the robot carries out synchronous positioning and map construction based on map optimization;
step four: the operation map fusion node extracts and matches ORB characteristic points of each local map obtained in the step 3, and then performs global map fusion according to the solved map relative transformation matrix;
step five: searching unknown boundary cells in the obtained global map and local map by using a search algorithm based on an improved RRT algorithm;
step six: clustering the searched unknown boundary units and updating data information in time;
step seven: a multi-robot dynamic task allocation strategy based on a market method is used for receiving, extracting and analyzing the tasks and generating local tasks;
step eight: planning an optimal path to a task point, and finishing global path planning and local path dynamic obstacle avoidance by matching the A-star algorithm and the DWA algorithm;
step nine: and monitoring all running state information of the multi-robot system in real time.
In this implementation, the initial relative poses of the multiple robots are loaded, their local map information is detected in real time, and the map data is received. The size and pose of the global map are determined from the parameters in the current system and the local maps built by the robots in real time. Feature points in the grayscale image are extracted for matching using ORB feature extraction; a moment method is introduced to determine the direction of each feature point: the centroid of the image patch within radius r of the feature point is computed from the patch moments, and the vector from the feature-point coordinates to this centroid is taken as the feature point's direction, with the calculation formula:

m_pq = Σ_{(x,y)∈B} x^p · y^q · I(x, y),  p, q ∈ {0, 1}

where I(x, y) is the image grayscale expression. The centroid of this moment is:

C = ( m_10 / m_00 , m_01 / m_00 )
Taking the corner-point coordinates as the origin, the angle of the vector from the geometric centre to the centroid gives the direction of the feature point:

θ = arctan2( m_01 , m_10 )
after the direction of the feature points is obtained through the calculation, n pairs of feature points generated by pairs are obtained at each feature point by using an improved BRIEF algorithm, the feature points are rotated and then distinguished, and the main process is as follows:
Figure BDA0003668753360000072
The rotation matrix formed by the angle θ is R_θ; the coordinates of the matched points after rotation are then:

S_θ = R_θ S

where R_θ is the rotation matrix of angle θ and S_θ is the corresponding matrix after rotation;
Then, for each of the n point pairs, the pixel intensities of the two points are compared and binarized with the formula:

τ(p; x, y) = 1 if p(x) < p(y), and 0 otherwise

where p(x) and p(y) are the pixel values at the random points x and y respectively.
repeating the above assignment on n pairs of random points, and finally forming a binary code, wherein the code is the description of the characteristic points, and is subjected to three-step purification, matching by using RANSAC algorithm, and finally calculating to obtain a homography matrix T to complete map fusion, as shown in FIG. 4.
In this embodiment, the specific process of the boundary-cell search is as follows. A search algorithm based on the RRT algorithm is run on the map. The local detector first initializes an initial point; in each iteration it randomly selects a point around the initial point as the sampling point, denoted x_rand. Since the sampling point is most likely not a node used for tree growth and is therefore not in the growing graph G, the node nearest to the current sampling point is found with the Nearest function and denoted x_nearest. The Steer function then takes the node x_nearest as the centre and grows a fixed step in a random direction; the node generated at the other end of the step is denoted x_new. The gridcheck function then examines whether this node can be used to continue detecting boundary points, classifying detected points into two categories. If the grid between the detected point and x_nearest is unknown, the robot should move in that direction to search the area; gridcheck returns -1, x_new is sent to the filtering module as a boundary point, and since an unknown path has been found, the initial point is updated to the current robot position, the previous tree is deleted entirely, and a new tree is grown from the current position. If the grids between the detected point and x_nearest are all known areas with no obstacle, gridcheck returns 1; if there is an obstacle, gridcheck returns 0. Both cases indicate that the tree must keep growing to search for boundary points, so x_new is added to the tree as a vertex, the connecting line between them is expanded into the graph as a branch, and through continuous iteration the repeatedly reset tree searches for new boundary points.
Step five: a clustering and update module is arranged. Unknown boundary cells in different coordinate systems are uniformly converted into coordinates in the global coordinate system, and the received set of unknown boundary cells is clustered with the DBSCAN algorithm. Input: the sample set D = {x1, x2, ..., xm}, the neighborhood parameters (ε, MinPts), and a sample distance metric; output: the cluster division C. The specific flow is as follows:

Flow 1: initialize the core object set Ω = ∅, the cluster number k = 0, the unvisited sample set Γ = D, and the cluster division C = ∅.

Flow 2: for j = 1, 2, ..., m, find all core objects by the following steps:
a. find the ε-neighborhood subsample set N_ε(xj) of sample xj using the distance metric;
b. if the subsample set satisfies |N_ε(xj)| ≥ MinPts, add sample xj to the core object set: Ω = Ω ∪ {xj}.

Flow 3: if the core object set Ω = ∅, end the algorithm; otherwise, go to Flow 4.

Flow 4: randomly select a core object o from the core object set Ω, initialize the current cluster core-object queue Ω_cur = {o}, increment the class sequence number k = k + 1, initialize the current cluster sample set C_k = {o}, and update the unvisited sample set Γ = Γ - {o}.

Flow 5: if the current cluster core-object queue Ω_cur = ∅, the current cluster C_k is complete; update the cluster division C = {C1, C2, ..., Ck}, update the core object set Ω = Ω - C_k, and go to Flow 3.

Flow 6: take a core object o' out of the current cluster core-object queue Ω_cur, find its ε-neighborhood subsample set N_ε(o') through the neighborhood distance threshold ε, let Δ = N_ε(o') ∩ Γ, update the current cluster sample set C_k = C_k ∪ Δ, update the unvisited sample set Γ = Γ - Δ, update Ω_cur = Ω_cur ∪ (Δ ∩ Ω) - {o'}, and go to Flow 5.

Flow 7: output the result: the cluster division C = {C1, C2, ..., Ck}.
As shown in fig. 5, the seventh step is as follows: the unknown boundary cell closest to the centre of each cluster is selected and issued as the next task, the unknown boundary cells are updated, and outdated boundary cells are removed. After the unknown-boundary-cell information is received, a market-auction-based mode is adopted: in each auction round, each robot bids a suitable price for every unallocated task according to its current position and its existing tasks. Suppose the tasks already assigned to the robots are {S1, S2, ..., Sn} and the set A is the set of currently unallocated tasks; each robot bids on every task in A, and the task with the minimum bid is found and assigned to the corresponding robot. Let a ∈ A be a currently unallocated task; each robot bids on task a according to its own current task situation.
In this implementation, no fewer than two two-wheeled differential mobile robots are used, each fitted with a vision sensor for observing the current environment. Based on the communication mode between ROS nodes, all devices in the system framework are connected through a wireless network, as shown in fig. 3.
The foregoing illustrates and describes the principles, general features, and advantages of the present invention. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above; the embodiments and the description only illustrate its principle, and various changes and modifications may be made without departing from its spirit and scope, all of which fall within the scope of the claimed invention. The scope of the invention is defined by the appended claims and their equivalents.

Claims (9)

1. A mobile robot collaborative environment exploration method, characterized in that no fewer than two two-wheeled differential mobile robots are used, each carrying a 2D laser radar and a PC supporting the ROS operating system, the method specifically comprising the following steps:
step one: operating a Gmapping node to control a single robot to construct a local map;
step two: the radar end runs an AMCL node, and in the moving process, the PC end subscribes/AMCL _ position topics and outputs the pose of the robot relative to a world coordinate system;
step three: the robot carries out synchronous positioning and map construction based on map optimization;
step four: the operation map fusion node extracts and matches ORB characteristic points of each local map obtained in the step 3, and then global map fusion is carried out according to the solved map relative transformation matrix;
step five: searching unknown boundary cells in the obtained global map and local map by using a search algorithm based on an improved RRT algorithm;
step six: clustering the searched unknown boundary units and updating data information in time;
step seven: a multi-robot dynamic task allocation strategy based on a market method is used for receiving, extracting and analyzing the tasks and generating local tasks;
step eight: planning an optimal path to the task point, combining the A* algorithm with the DWA algorithm to complete global path planning and local dynamic obstacle avoidance;
step nine: monitoring all running state information of the multi-robot system in real time.
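The global planning half of step eight can be illustrated with a minimal grid A* sketch; the grid encoding (0 = free, 1 = obstacle), the Manhattan heuristic, and the omission of the DWA local planner are assumptions of this illustration, not the patent's implementation:

```python
import heapq
from itertools import count

def astar(grid, start, goal):
    """Grid A*: returns a list of (row, col) cells from start to goal,
    or None if no path exists. 4-connected moves of unit cost."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    tie = count()                         # tiebreaker so heap never compares cells
    open_heap = [(h(start), next(tie), start)]
    came = {start: None}                  # parent pointers for path reconstruction
    g_best = {start: 0}
    while open_heap:
        _, _, cell = heapq.heappop(open_heap)
        if cell == goal:
            path = [cell]
            while came[path[-1]] is not None:
                path.append(came[path[-1]])
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = cell[0] + dr, cell[1] + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g_best[cell] + 1
                if ng < g_best.get((nr, nc), float("inf")):
                    g_best[(nr, nc)] = ng
                    came[(nr, nc)] = cell
                    heapq.heappush(open_heap, (ng + h((nr, nc)), next(tie), (nr, nc)))
    return None
```

In the full system the resulting cell path would be handed to the DWA local planner for velocity commands and dynamic obstacle avoidance.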
2. The mobile robot collaborative environment exploration method according to claim 1, wherein the specific process of step two is as follows:
The initial relative poses of the multiple robots are first loaded, and the local map information of the multiple robots is detected in real time. After the map data is received, the size and pose of the global map are determined according to the parameters in the current system and the local maps constructed by the robots in real time. ORB feature extraction is adopted to extract feature points from the grayscale map for matching, and the moment method is introduced to determine the direction of the feature points: the centroid of the feature point is computed using the moments, and the vector from the feature point's coordinates to the centroid is taken as the direction of the feature point. The calculation formula is:
$$m_{pq} = \sum_{x,y} x^{p} y^{q} I(x, y), \quad p, q \in \{0, 1\}$$
where I(x, y) is the image grayscale expression, x and y are the coordinates of points in the image block, p and q are the orders of the moment, and m_pq is the moment of the image block. The centroid of the image block is:
$$C = \left( \frac{m_{10}}{m_{00}}, \frac{m_{01}}{m_{00}} \right)$$
Taking the coordinates of the corner point as the origin, the geometric center O of the image block is connected to the centroid C to obtain the direction vector OC; the feature point direction is then calculated as:
$$\theta = \arctan\left( \frac{m_{01}}{m_{10}} \right)$$
where θ is the orientation angle, i.e., the direction of the feature point.
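The moment-based orientation above can be sketched as follows; the patch layout (a square grayscale array centered on the feature point) and the function name are assumptions of this illustration:

```python
import numpy as np

def patch_orientation(patch):
    """Orientation of a square image patch via the intensity centroid:
    m_pq = sum over x, y of x^p * y^q * I(x, y); theta = atan2(m01, m10)."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # place the origin at the patch's geometric center O
    xs -= (w - 1) / 2.0
    ys -= (h - 1) / 2.0
    m10 = (xs * patch).sum()   # first-order moment in x
    m01 = (ys * patch).sum()   # first-order moment in y
    # direction of the vector OC from the center to the centroid C
    return np.arctan2(m01, m10)
```

For instance, a patch whose intensity increases from left to right has its centroid to the right of the center, so the returned angle is 0.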
After the direction of the feature points is obtained through the above calculation, the improved BRIEF algorithm is used to generate n point pairs at each feature point; the point pairs are rotated and then compared. The main process is as follows:
$$S = \begin{pmatrix} x_1 & x_2 & \cdots & x_{2n} \\ y_1 & y_2 & \cdots & y_{2n} \end{pmatrix}$$
the rotation matrix formed by the angle θ is R θ, then the coordinates of the matched points after rotation are:
S θ =R θ S
in the formula R θ A rotation matrix, S, representing an angle theta θ The matrix is corresponding to the matrix after rotation, and S is corresponding to the matrix before rotation;
then, for each of the n point pairs, the pixel values of the two points are compared with each other and a binary value is assigned, according to the formula:
$$\tau(p; x, y) = \begin{cases} 1, & p(x) < p(y) \\ 0, & p(x) \ge p(y) \end{cases}$$
where p(x) and p(y) are the pixel values at the random points x and y, respectively, and τ is the binary value obtained by comparing them.
This assignment is repeated for the n pairs of random points to finally form a binary code, which is the descriptor of the feature point. After the above three steps of purification, matching is performed using the RANSAC algorithm, and finally the computed homography matrix T completes the map fusion.
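The steered-BRIEF comparison described above can be sketched as follows: the sampling point pairs are rotated by the feature orientation θ (S_θ = R_θ S), and each pair is binarized by the τ test. The `pairs` layout (offsets from the patch center) and the wrap-around indexing are simplifying assumptions of this sketch:

```python
import numpy as np

def rbrief_descriptor(patch, pairs, theta):
    """Rotated BRIEF bits for one patch.
    pairs: array of shape (n, 2, 2), each entry two (x, y) offsets
    from the patch center (an assumed layout for this sketch)."""
    h, w = patch.shape
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    # rotation matrix R_theta applied to the sample points: S_theta = R_theta S
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    bits = []
    for pt_a, pt_b in pairs:
        xa, ya = R @ pt_a
        xb, yb = R @ pt_b
        pa = patch[int(round(cy + ya)) % h, int(round(cx + xa)) % w]
        pb = patch[int(round(cy + yb)) % h, int(round(cx + xb)) % w]
        bits.append(1 if pa < pb else 0)   # tau = 1 iff p(x) < p(y)
    return np.array(bits, dtype=np.uint8)
```

With θ = 0 the test reduces to plain BRIEF; rotating by π swaps the two sample points and flips the bit.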
3. The mobile robot collaborative environment exploration method according to claim 2, wherein the specific process of step four is as follows: using the RRT-based search algorithm to search the map for boundary cells, the local detector first initializes an initial point; in each iteration a point around the initial point is then randomly selected as a sampling point, denoted x_rand. Since the sampling point is most likely not a node for tree growth, and naturally not in the growing graph G, the node closest to the current sampling point is found through the Nearest function and denoted x_nearest. Then, through the Steer function, with the x_nearest node as the center, the tree grows randomly outward by a fixed step length, and the node generated at the other end of the step is denoted x_new. The gridcheck function is then adopted to examine whether this node can be used to continue detecting boundary points.
4. The mobile robot collaborative environment exploration method according to claim 3, wherein in step four the gridcheck function is used to examine whether the node can be used to continue detecting boundary points, the function dividing the detected points into two categories: if the grid between the detected point and x_nearest contains an unknown area, the robot should move in that direction to search the area; gridcheck is -1, and x_new is sent as a boundary point to the filtering module. Because an unknown path has been found, the initial point is updated to the robot's current position, all previous trees are deleted, and a new tree is grown again at the current position. If the grids between the detected point and x_nearest are all known regions with no obstacles, gridcheck is 1; if there are obstacles, gridcheck is 0. Both of these cases indicate that tree growth is needed to continue searching for boundary points, so x_new is added to the tree as a vertex and the connecting line between it and the tree is expanded into the graph as a branch; through continuous iteration, the current tree is constantly reset to explore new boundary points.
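The gridcheck classification in this claim can be sketched as follows; the occupancy-grid encoding (ROS OccupancyGrid style: -1 unknown, 0 free, 100 obstacle) and the cell-stepping scheme are assumptions of this sketch:

```python
import numpy as np

# assumed occupancy-grid convention for this sketch
UNKNOWN, FREE, OBSTACLE = -1, 0, 100

def grid_check(grid, x_nearest, x_new):
    """Classify the segment from x_nearest to x_new:
    -1 if it crosses unknown cells (x_new is a frontier point),
     0 if it hits an obstacle,
     1 if every cell along it is known free space (keep growing the tree)."""
    (r0, c0), (r1, c1) = x_nearest, x_new
    n = max(abs(r1 - r0), abs(c1 - c0), 1)
    for t in np.linspace(0.0, 1.0, n + 1):   # walk the segment cell by cell
        r = int(round(r0 + t * (r1 - r0)))
        c = int(round(c0 + t * (c1 - c0)))
        if grid[r, c] == OBSTACLE:
            return 0
        if grid[r, c] == UNKNOWN:
            return -1
    return 1
```

A return of -1 would trigger the reset described above (send x_new to the filter, move the root, regrow the tree); 0 or 1 keeps the tree expanding.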
5. The mobile robot collaborative environment exploration method according to claim 4, wherein the fifth step includes providing a cluster update module, uniformly converting the unknown boundary units in different coordinate systems into coordinates in a global coordinate system, and clustering the received unknown boundary unit sets using the DBSCAN algorithm; the inputs are a sample set D = (x1, x2, ..., xm), the neighborhood parameters (ε, MinPts), and a sample distance measurement mode, and the output is the cluster division C.
6. The method for exploring a collaborative environment of a mobile robot according to claim 5, wherein the concrete flow of the fifth step is as follows:
process 1, initializing core object set
Figure FDA0003668753350000031
Initializing cluster number K equal to 0, initializing sample set T equal to D, and dividing cluster
Figure FDA0003668753350000032
Process 2: for j = 1, 2, ..., m, find all core objects according to the following steps:
a. find the ε-neighborhood sub-sample set Nε(xj) of the sample xj by the distance measurement mode;
b. if the number of samples in the sub-sample set satisfies |Nε(xj)| ≥ MinPts, add the sample xj to the core object sample set: Ω = Ω ∪ {xj};
Process 3: if the core object set Ω = ∅, end the algorithm; otherwise, go to Process 4;
Process 4: randomly select a core object o from the core object set Ω, initialize the current cluster core object queue Ωcur = {o}, initialize the class sequence number k = k + 1, initialize the current cluster sample set Ck = {o}, and update the unvisited sample set Γ = Γ - {o};
Process 5: if the current cluster core object queue Ωcur = ∅, the current cluster Ck has been generated; update the cluster division C = {C1, C2, ..., Ck}, update the core object set Ω = Ω - Ck, and go to Process 3;
Process 6: take out a core object o′ from the current cluster core object queue Ωcur, find its ε-neighborhood sub-sample set Nε(o′) through the neighborhood distance threshold ε, let Δ = Nε(o′) ∩ Γ, update the current cluster sample set Ck = Ck ∪ Δ, update the unvisited sample set Γ = Γ - Δ, update Ωcur = Ωcur ∪ (Δ ∩ Ω) - {o′}, and go to Process 5;
Process 7: the output result is the cluster division C = {C1, C2, ..., Ck}.
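The flow above is standard DBSCAN; a compact sketch under simplifying assumptions (brute-force distance matrix instead of an index, integer labels instead of explicit sets, -1 meaning unassigned/noise):

```python
import numpy as np

def dbscan(D, eps, min_pts):
    """Minimal DBSCAN over a point array D of shape (m, d).
    Returns one cluster label per sample; -1 marks noise."""
    m = len(D)
    # epsilon-neighborhoods (a point is its own neighbor)
    dist = np.linalg.norm(D[:, None, :] - D[None, :, :], axis=2)
    neighbors = [np.where(dist[j] <= eps)[0] for j in range(m)]
    core = {j for j in range(m) if len(neighbors[j]) >= min_pts}  # Process 2
    labels = np.full(m, -1)
    k = -1
    for o in range(m):                 # Process 4: next unassigned core object
        if o not in core or labels[o] != -1:
            continue
        k += 1
        labels[o] = k
        queue = [o]
        while queue:                   # Processes 5-6: grow cluster k
            p = queue.pop()
            if p in core:              # only core objects expand the cluster
                for q in neighbors[p]:
                    if labels[q] == -1:
                        labels[q] = k
                        queue.append(q)
    return labels
```

Applied to the frontier centroids, each resulting cluster yields one candidate exploration target for the allocation step that follows.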
7. The mobile robot collaborative environment exploration method according to claim 6, wherein the specific process of step seven is as follows: the unknown boundary unit closest to the central point of each cluster is selected and sent out as the next task, the unknown boundary units are updated, and outdated boundary units are removed. After the unknown boundary unit information is received, a market-auction-based mode is adopted: in each round of the auction, each robot bids an appropriate price for every unallocated task according to its current position and existing task load. Assuming the tasks already allocated to a robot are {S1, S2, ..., Sn} and set A is the set of currently unallocated tasks, each robot bids on every task in set A; the task with the minimum bid price is found and allocated to the corresponding robot. Let task a ∈ A be a currently unallocated task; each robot bids on it according to its current task situation.
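The greedy market-auction allocation in this claim can be sketched as follows; using Euclidean distance from the end of a robot's committed task chain as its bid is an assumption of this sketch:

```python
def auction_allocate(robot_positions, assigned, tasks):
    """One greedy auction: repeatedly give the unallocated task with the
    lowest bid to the bidding robot, until set A (tasks) is empty.
    robot_positions: {robot: (x, y)}; assigned: {robot: [task, ...]}
    already-committed tasks {S1, ..., Sn}; tasks: unallocated (x, y) targets."""
    tasks = list(tasks)
    allocation = {r: [] for r in robot_positions}
    while tasks:
        best = None  # (bid, robot, task)
        for r, pos in robot_positions.items():
            chain = assigned[r] + allocation[r]
            start = chain[-1] if chain else pos   # bid from end of task chain
            for t in tasks:
                bid = ((start[0] - t[0]) ** 2 + (start[1] - t[1]) ** 2) ** 0.5
                if best is None or bid < best[0]:
                    best = (bid, r, t)
        _, r, t = best           # minimum bid wins this round
        allocation[r].append(t)
        tasks.remove(t)
    return allocation
```

With two robots at opposite ends of a corridor and one frontier near each, this assigns each robot its nearby frontier rather than sending both to the same one.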
8. The mobile robot collaborative environment exploration method according to any one of claims 1 to 7, wherein each of the no fewer than two two-wheeled differential mobile robots is provided with a vision sensor for observing the current environment.
9. The mobile robot collaborative environment exploration method according to claim 8, wherein the devices in the system framework communicate between ROS nodes and are connected through a wireless network.
CN202210597663.0A 2022-05-30 2022-05-30 Mobile robot collaborative environment exploration method Pending CN114967694A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210597663.0A CN114967694A (en) 2022-05-30 2022-05-30 Mobile robot collaborative environment exploration method


Publications (1)

Publication Number Publication Date
CN114967694A true CN114967694A (en) 2022-08-30

Family

ID=82957012

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210597663.0A Pending CN114967694A (en) 2022-05-30 2022-05-30 Mobile robot collaborative environment exploration method

Country Status (1)

Country Link
CN (1) CN114967694A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115930933A (en) * 2022-09-20 2023-04-07 江苏海洋大学 Multi-agent collaborative mapping method based on cluster control

Similar Documents

Publication Publication Date Title
US20220028163A1 (en) Computer Vision Systems and Methods for Detecting and Modeling Features of Structures in Images
CN111240319B (en) Outdoor multi-robot cooperative operation system and method thereof
CN112859859B (en) Dynamic grid map updating method based on three-dimensional obstacle object pixel object mapping
CN109755995B (en) Robot automatic charging docking method based on ROS robot operating system
CN109725327B (en) Method and system for building map by multiple machines
CN110110763B (en) Grid map fusion method based on maximum public subgraph
CN106017472A (en) Global path planning method, global path planning system and unmanned aerial vehicle
CN110260856A (en) One kind building drawing method and device
CN112305559A (en) Power transmission line distance measuring method, device and system based on ground fixed-point laser radar scanning and electronic equipment
CN114387410B (en) Road data fusion map generation method and device and electronic equipment
CN114859375A (en) Autonomous exploration mapping system and method based on multi-robot cooperation
CN111932612B (en) Intelligent vehicle vision positioning method and device based on second-order hidden Markov model
CN114967694A (en) Mobile robot collaborative environment exploration method
CN114397894B (en) Mobile robot target searching method imitating human memory
CN115268461A (en) Mobile robot autonomous navigation method with fusion algorithm
CN114943766A (en) Relocation method, relocation device, electronic equipment and computer-readable storage medium
Wang et al. Dynamic object separation and removal in 3d point cloud map building
Baker et al. GPU assisted processing of point cloud data sets for ground segmentation in autonomous vehicles
Xie et al. Autonomous Multi-robot Navigation and Cooperative Mapping in Partially Unknown Environments
CN114137955B (en) Multi-robot rapid collaborative mapping method based on improved market method
CN117537803B (en) Robot inspection semantic-topological map construction method, system, equipment and medium
CN117952322B (en) Engineering management system based on BIM technology
CN117387616A (en) Multi-robot autonomous exploration method and system
CN116465387A (en) Grid map fusion method based on feature matching
Pires Orozco QoI-oriented incentive mechanisms for vehicle-based visual crowdsourcing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination