CN115586767A - Multi-robot path planning method and device - Google Patents


Info

Publication number
CN115586767A
CN115586767A (Application CN202210686058.0A)
Authority
CN
China
Prior art keywords
robot
robots
map
subgraph
Prior art date
Legal status
Pending
Application number
CN202210686058.0A
Other languages
Chinese (zh)
Inventor
李迅
张迎港
张彦铎
黄骥
刘富鑫
Current Assignee
Wuhan Institute of Technology
Original Assignee
Wuhan Institute of Technology
Priority date
Filing date
Publication date
Application filed by Wuhan Institute of Technology
Priority to CN202210686058.0A
Publication of CN115586767A
Legal status: Pending


Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control in two dimensions with means for defining a desired trajectory
    • G05D1/0214 Control in two dimensions with a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/0221 Control in two dimensions with a desired trajectory involving a learning process
    • G05D1/0223 Control in two dimensions with a desired trajectory involving speed control of the vehicle
    • G05D1/0231 Control in two dimensions using optical position detecting means
    • G05D1/0238 Control in two dimensions using obstacle or wall sensors
    • G05D1/024 Control in two dimensions using obstacle or wall sensors in combination with a laser
    • G05D1/0246 Control in two dimensions using a video camera in combination with image processing means
    • G05D1/0253 Control in two dimensions extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
    • G05D1/0257 Control in two dimensions using a radar
    • G05D1/0276 Control in two dimensions using signals provided by a source external to the vehicle
    • G05D1/028 Control in two dimensions using signals provided by an external source using a RF signal
    • G05D1/0287 Control in two dimensions involving a plurality of land vehicles, e.g. fleet or convoy travelling
    • G05D1/0289 Control in two dimensions involving a plurality of land vehicles with means for avoiding collisions between vehicles
    • G05D1/0291 Fleet control
    • G05D1/0295 Fleet control by at least one leading vehicle of the fleet


Abstract

The invention provides a multi-robot path planning method and device in the field of robot control. The method comprises the following steps: obtaining a plurality of laser data from laser radars respectively arranged on a plurality of robots, and obtaining a plurality of visual images from cameras respectively arranged on the robots; carrying out map construction on the laser data to obtain grid maps; analyzing the visual images to obtain pose information; constructing paths in the grid maps according to the pose information to obtain robot paths; and carrying out map correction analysis on each grid map to obtain corrected maps. The method speeds up subgraph matching, improves map fusion precision, raises exploration efficiency, keeps the exploration path as smooth as possible while avoiding obstacles, and is applicable to map fusion without an initial pose.

Description

Multi-robot path planning method and device
Technical Field
The invention mainly relates to the technical field of robot control, in particular to a multi-robot path planning method and device.
Background
Multi-robot space exploration in unknown environments has wide application scenarios, including transportation, medical treatment, industry, and rescue. For example, after an earthquake a robot can help rescuers assess the damage inside a building, and in such cases it is important that the assessment be made as quickly as possible. Compared with a single robot, cooperating robots spend less time exploring an unknown environment and constructing a global map, and are more efficient; multi-robot cooperation also lets the system operate with higher fault tolerance, reliability, and stability.
In a multi-robot exploration system, the map fusion stage is critical. Map fusion falls into two types: 1) map fusion with an initial pose; 2) map fusion without an initial pose. Fusion without an initial pose is the harder case: the relative pose is estimated mainly from transmitted data and the maps are then merged. The transmitted data comprise: 1) a location descriptor, a compact code that describes a location; location recognition (PR) extracts and matches descriptors to find the same location between robots; 2) sensor data, used to compute the relative pose between the robots based on the matched identical locations.
Common path planning methods include algorithms such as A*, RRT, and D*, but these methods are usually inefficient and may produce too many repeated routes in a complex environment, greatly reducing exploration efficiency.
Disclosure of Invention
The invention aims to address the above technical problems of the prior art by providing a multi-robot path planning method and device.
The technical scheme for solving the technical problems is as follows: a multi-robot path planning method comprises the following steps:
obtaining a plurality of laser data corresponding to each robot from laser radars arranged on the robots respectively, and obtaining a plurality of visual images corresponding to the robots respectively from cameras arranged on the robots respectively;
respectively carrying out map construction on a plurality of laser data corresponding to each robot to obtain a grid map corresponding to each robot;
analyzing the pose information of the plurality of visual images corresponding to the robots respectively to obtain the pose information corresponding to the robots;
respectively constructing paths in a grid map corresponding to each robot according to the pose information corresponding to each robot to obtain a robot path corresponding to each robot;
respectively carrying out map correction analysis on each grid map to obtain corrected maps corresponding to the robots;
and respectively carrying out path correction analysis on each robot path to obtain a corrected robot path corresponding to each robot, and controlling each robot to move in a corrected map corresponding to each robot along the corrected robot path corresponding to each robot.
Another technical solution of the present invention for solving the above technical problems is as follows: a multi-robot path planning apparatus comprising:
the data acquisition module is used for respectively acquiring a plurality of laser data corresponding to each robot from laser radars arranged on the robots and respectively acquiring a plurality of visual images corresponding to the robots from cameras arranged on the robots;
the map construction module is used for respectively carrying out map construction on a plurality of laser data corresponding to each robot to obtain a grid map corresponding to each robot;
the pose information analysis module is used for respectively analyzing the pose information of the plurality of visual images corresponding to the robots to obtain the pose information corresponding to the robots;
the path construction module is used for respectively constructing paths in the grid map corresponding to each robot according to the pose information corresponding to each robot to obtain a robot path corresponding to each robot;
the map correction analysis module is used for respectively performing map correction analysis on each grid map to obtain corrected maps corresponding to the robots;
and the planning result obtaining module is used for respectively carrying out path correction analysis on each robot path to obtain a corrected robot path corresponding to each robot, and controlling each robot to move in a corrected map corresponding to each robot along the corrected robot path corresponding to each robot.
Another technical solution of the present invention for solving the above technical problems is as follows: a multi-robot path planning apparatus comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the multi-robot path planning method described above.
Another technical solution of the present invention for solving the above technical problems is as follows: a computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the multi-robot path planning method as described above.
The beneficial effects of the invention are: a grid map is obtained by constructing a map from each robot's laser data; pose information is obtained by analyzing each robot's visual images; a robot path is obtained by constructing a path in the grid map according to the pose information; a corrected map is obtained by map correction analysis of the grid map; a corrected robot path is obtained by path correction analysis of the robot path; and the robot is controlled to move in the corrected map along the corrected robot path. This speeds up subgraph matching, improves map fusion precision, raises exploration efficiency, keeps the exploration path as smooth as possible while avoiding obstacles, and suits map fusion without an initial pose.
Drawings
Fig. 1 is a schematic flow chart of a multi-robot path planning method according to an embodiment of the present invention;
fig. 2 is a block diagram of a multi-robot path planning apparatus according to an embodiment of the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the accompanying drawings, which are set forth to illustrate but not to limit the scope of the invention.
Fig. 1 is a schematic flow chart of a multi-robot path planning method according to an embodiment of the present invention.
As shown in fig. 1, a multi-robot path planning method includes the following steps:
respectively obtaining a plurality of laser data corresponding to each robot from laser radars arranged on the robots, and respectively obtaining a plurality of visual images corresponding to the robots from cameras arranged on the robots;
respectively carrying out map construction on a plurality of laser data corresponding to each robot to obtain a grid map corresponding to each robot;
analyzing the pose information of the plurality of visual images corresponding to the robots respectively to obtain the pose information corresponding to the robots;
respectively constructing paths in a grid map corresponding to each robot according to the pose information corresponding to each robot to obtain a robot path corresponding to each robot;
respectively carrying out map correction analysis on each grid map to obtain corrected maps corresponding to the robots;
and respectively carrying out path correction analysis on each robot path to obtain a corrected robot path corresponding to each robot, and controlling each robot to move in a corrected map corresponding to each robot along the corrected robot path corresponding to each robot.
In this embodiment, a grid map is obtained by constructing a map from each robot's laser data; pose information is obtained by analyzing each robot's visual images; robot paths are obtained by constructing paths in the grid maps according to the pose information; corrected maps are obtained by map correction analysis of the grid maps; corrected robot paths are obtained by path correction analysis of the robot paths; and each robot is controlled to move in its corrected map along its corrected robot path. This speeds up subgraph matching, improves map fusion precision, raises exploration efficiency, keeps the exploration path as smooth as possible while avoiding obstacles, and suits map fusion without an initial pose.
Optionally, as an embodiment of the present invention, the process of respectively performing map construction on a plurality of laser data corresponding to each robot to obtain a grid map corresponding to each robot includes:
and respectively carrying out map construction on a plurality of laser data corresponding to each robot based on a cartography framework to obtain a grid map corresponding to each robot.
It should be understood that each robot performs SLAM of the unknown environment through cartography.
It should be understood that the cartography framework is a framework that google has come in 2016 and can access data from 2D lasers, 3D lasers, odometers, IMU sensors, and output either 2D maps or 3D maps.
Specifically, during mapping Cartographer accumulates laser data into scans (laser point clouds). A submap is created from several consecutive scans and stored as a probability grid with 5 cm × 5 cm cells, each cell holding a probability within [P_min, P_max]. When the submap is created, a grid probability below P_min indicates the cell is free of obstacles, a value between P_min and P_max means the cell is unknown, and a value above P_max indicates an obstacle at that point; the whole map (i.e., the grid map) is formed by stitching the submaps together.
Specifically, the grid map divides the environment into cells of a certain size, each represented by a square whose side length is the map resolution. The state of each cell is computed from the robot's sensor observations as a posterior probability, expressed as a number in [0, 1]: the closer to 0, the higher the probability that the cell is not occupied by an obstacle; the closer to 1, the higher the probability that it is occupied. The grid map is generally used as a prior map for path planning, navigation, obstacle avoidance, and similar tasks, and each cell is in one of three states: occupied (an obstacle exists at the corresponding real-world position), free (no obstacle exists there, i.e., an area where the robot can travel), and unknown (the cell has not been observed by the robot's sensors, so its occupancy is not known).
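To make the three-state convention concrete, here is a minimal Python sketch; the thresholds and the -1/0/1 encoding are illustrative (ROS-style) choices, not values fixed by the patent:
```python
import numpy as np

P_MIN, P_MAX = 0.2, 0.65   # illustrative bounds, not Cartographer's actual values

def classify_cells(prob_grid: np.ndarray) -> np.ndarray:
    """Map posterior occupancy probabilities in [0, 1] to the three grid
    states: -1 = unknown, 0 = free (robot may travel), 1 = occupied."""
    states = np.full(prob_grid.shape, -1, dtype=np.int8)   # unknown by default
    states[prob_grid < P_MIN] = 0                          # free
    states[prob_grid > P_MAX] = 1                          # occupied
    return states

print(classify_cells(np.array([[0.05, 0.5], [0.9, 0.1]])))  # [[0 -1] [1 0]]
```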
In this embodiment, constructing the grid maps from the laser data based on the Cartographer framework provides basic data for subsequent processing, speeds up subgraph matching, and improves map fusion precision.
Optionally, as an embodiment of the present invention, the analyzing the pose information of the plurality of visual images corresponding to each robot respectively to obtain the pose information corresponding to each robot includes:
respectively extracting the features of a plurality of visual images corresponding to the robots based on an ORB-SLAM3 framework to obtain a plurality of visual feature maps corresponding to the robots;
performing interframe matching on the plurality of visual feature maps corresponding to the robots respectively to obtain a plurality of matched feature maps corresponding to the robots;
respectively obtaining a visual odometer corresponding to each matched feature map through each matched feature map;
and respectively positioning a plurality of visual odometers corresponding to the robots to obtain pose information corresponding to the robots.
It should be appreciated that ORB-SLAM3 is used to compute a visual odometer for positioning.
It should be understood that the ORB-SLAM3 framework is the first system able to perform short-term, mid-term, and long-term data association for both visual and visual-inertial navigation. It can operate without drift on a previously known map thanks to its multi-map data association, which enables map matching and BA (bundle adjustment) optimization; this also achieves the goal of building a map first and then performing accurate localization within it.
The process of obtaining the visual odometer corresponding to each matched feature map can be understood as follows: feature points are extracted from the images, and the camera motion and scene structure between two frames are estimated by matching, realizing a frame-to-frame visual odometer. Specifically:
Representative points are selected from the image that remain unchanged after a small change in camera view angle, so the same points can be found in the images of the previous and next frames and the camera pose estimated from them. The feature points of an image can be simply understood as the points that stand out, such as contour points, bright points in darker areas, and dark points in lighter areas. A feature point consists of a key point and a descriptor: the key point gives the feature's position in the image, and the descriptor, usually a vector, describes the information around the key point according to some hand-designed rule, chosen so that features of similar appearance yield similar descriptors. Commonly used methods are SIFT, SURF, ORB, etc.
It should be understood that inter-frame matching, i.e., point cloud registration, can be understood as the process of computing a suitable coordinate transformation that integrates point cloud data from different viewpoints into a specified coordinate system through rigid transformations such as rotation and translation.
It should be understood that on a real robot we use the ORB-SLAM3 visual odometer to locate the robot and provide pose information to Cartographer for mapping.
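A minimal OpenCV sketch of the feature extraction and inter-frame matching just described; the function name and parameters are illustrative, and a complete visual odometer would go on to estimate pose from the matched points (e.g., via an essential matrix given camera intrinsics):
```python
import cv2
import numpy as np

def match_frames(img_prev, img_next, n_features=1000):
    """Extract ORB keypoints/descriptors in two consecutive frames and
    match them: the inter-frame matching step of a visual odometer."""
    orb = cv2.ORB_create(nfeatures=n_features)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_next, None)
    # Hamming distance suits ORB's binary descriptors; cross-check prunes outliers.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts_prev = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts_next = np.float32([kp2[m.trainIdx].pt for m in matches])
    return pts_prev, pts_next   # feed to pose estimation, e.g. cv2.findEssentialMat
```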
In this embodiment, analyzing the visual images to obtain pose information improves the accuracy of robot path planning, speeds up exploration, and suits map fusion without an initial pose.
Optionally, as an embodiment of the present invention, the grid map includes a plurality of points to be identified and a plurality of obstacle points, and the process of constructing paths in the grid map corresponding to each robot according to the pose information corresponding to each robot, to obtain a robot path corresponding to each robot includes:
verifying, one by one, whether exactly one of the neighboring points of each point to be recognized equals a preset boundary threshold, and if the verification succeeds, taking the point to be recognized as a boundary point, thereby obtaining a plurality of boundary points corresponding to each robot, wherein a neighboring point is a point to be recognized or an obstacle point adjacent to the point currently being verified;
constructing a plurality of boundaries corresponding to the robots by a plurality of boundary points corresponding to the robots, and taking the center points of the boundaries as target points to obtain a plurality of target points corresponding to the robots;
and respectively carrying out path construction using the plurality of target points corresponding to the robots, the plurality of obstacle points corresponding to the robots, and the pose information corresponding to the robots by means of an artificial potential field method, to obtain robot paths corresponding to the robots.
Preferably, the preset boundary threshold may be-1.
It should be understood that the center point of a boundary is the midpoint of the corresponding line segment; each boundary has exactly one center point.
It should be understood that if the verification is unsuccessful, no processing is required.
It will be appreciated that after finding an unknown boundary, the centroid of the boundary is calculated and set as the target point.
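A minimal sketch of this boundary-point test and target selection, assuming a 4-connected neighborhood and the -1/0/1 cell encoding used earlier (the patent fixes neither):
```python
import numpy as np

UNKNOWN = -1   # the preset boundary threshold

def find_boundary_points(grid: np.ndarray) -> list:
    """Free cells with exactly one UNKNOWN neighbour (4-connected)."""
    points = []
    h, w = grid.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if grid[y, x] != 0:        # only free cells can be boundary points
                continue
            neighbours = [grid[y - 1, x], grid[y + 1, x],
                          grid[y, x - 1], grid[y, x + 1]]
            if neighbours.count(UNKNOWN) == 1:
                points.append((x, y))
    return points

def boundary_target(boundary_points) -> tuple:
    """Centroid of one boundary, used as the robot's target point."""
    return tuple(np.mean(boundary_points, axis=0))
```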
It should be understood that an artificial potential field is added into the RRT (rapidly-exploring random tree) algorithm to generate a new path planning algorithm, PFRRT, which is used for planning the robot's path to a target point.
Specifically, in 1986 Khatib proposed the idea of using a potential field for obstacle avoidance and path planning [50], i.e., the artificial potential field (APF) method. It comes from the concept of fields in physics, which treats the motion of an object as the result of interacting forces, so in this method the robot is modeled as a particle influenced by attractive and repulsive forces. The robot first builds a potential field, denoted U, over the moving environment, and this field divides into two parts: 1. the attractive field generated by the target point, directed toward the target position; 2. the repulsive field generated by obstacles, directed away from the obstacles.
An obstacle is represented as a repulsive field U_rep and the target as an attractive field U_att. Under the combined action of attraction and repulsion, the robot moves along the direction of the resultant force and safely reaches the target area, i.e., without collision.
While traveling in the potential field, the robot is constantly subject to forces of different directions and magnitudes: the repulsion is larger the closer the robot is to an obstacle and smaller the farther away, while the attraction is smaller the closer the robot is to the target point and larger the farther away.
The force on the robot in the potential field is the combined action of the repulsive and attractive forces, and the robot moves along a reasonable direction. The total potential field function is:
U_t = U_att + U_rep    (4-8)
where U_att denotes the attractive field, U_rep the repulsive field, and U_t the total potential field of the environment in which the robot is located. The resultant force is expressed as:
F_t = F_att + F_rep    (4-9)
where F_att denotes the attractive force on the robot and F_rep the repulsive force. Suppose the pose of the robot in the map is x and x_g is the coordinate position of the target point; the attractive potential field function is:
U_att(x) = (1/2) K_a ||x - x_g||^2    (4-10)
where K_a, a constant, is the attractive-field coefficient. The negative gradient of the attraction function is then:
F_att(x) = -∇U_att(x) = -K_a (x - x_g)    (4-11)
As the above equation shows, the attraction becomes smaller as the robot approaches the target point and larger as it moves away, and becomes 0 when the robot reaches the target point. The repulsion function is defined as follows:
U_rep(x) = (1/2) K_r (1/ρ - 1/ρ_0)^2  if ρ ≤ ρ_0,  and 0 otherwise    (4-12)
where K_r, also a constant, is the repulsive-field coefficient, ρ is the distance between the robot and the obstacle, and ρ_0 is the minimum influence distance of the potential field: when the robot-to-obstacle distance ρ exceeds ρ_0, the repulsive force is 0 and the robot is influenced only by the attraction. The negative gradient of the repulsion function is:
F_rep(x) = -∇U_rep(x) = K_r (1/ρ - 1/ρ_0) (1/ρ^2) ∇ρ  if ρ ≤ ρ_0,  and 0 otherwise    (4-13)
When the robot is navigating it is generally subject to one attractive force and several repulsive forces, so the total potential field is:
U_t = U_att + ΣU_rep    (4-14)
and the total force on the robot is:
F_t = F_att + ΣF_rep    (4-15)
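A minimal Python sketch of equations (4-11), (4-13), and (4-15); the gain and distance values are illustrative, not taken from the patent:
```python
import numpy as np

def apf_force(x, x_goal, obstacles, K_a=1.0, K_r=100.0, rho_0=2.0):
    """Resultant APF force F_t = F_att + sum(F_rep), cf. equation (4-15).
    x, x_goal, and each obstacle are 2D positions as numpy arrays."""
    f_att = -K_a * (x - x_goal)                  # equation (4-11)
    f_rep = np.zeros_like(x, dtype=float)
    for obs in obstacles:
        diff = x - obs
        rho = np.linalg.norm(diff)
        if 0.0 < rho <= rho_0:                   # beyond rho_0 the obstacle has no effect
            # equation (4-13), with grad(rho) = diff / rho
            f_rep += K_r * (1.0 / rho - 1.0 / rho_0) * (diff / rho**3)
    return f_att + f_rep
```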
and PFRRT, on the basis of RRT algorithm, we add artificial potential field and use the artificial force field to guide the selection of child nodes. Before selecting random point, adding a potential field function APF (x) rand ) As shown in Algorithm 1, x rand For sampling point set, then calculating current point distanceMinimum distance d from obstacle min The effective distance d between the minimum distance and the repulsive force obs (when the distance exceeds d obs No repulsive force) are compared, if d is min ≤d obs Then a repulsive force is generated, the anti-regular repulsive force is 0, and then the function f of the attractive force is passed att () And repulsive force function f rep () Extracting the gravity vector of the current point and the target point, and after obtaining the gravity vector, adding a minimum increment lambda and pointing to the resultant force direction, wherein the formula is as follows:
Figure BDA0003697926400000101
finally returning the optimized sample set x rand By this function, a point in the sampling space is brought closer to the target point as allowed by the repulsive force, thereby generating a new sample x prand
Specifically, the grid map generates a plurality of boundaries located at the border between the explored and unexplored space; when the robot moves to a boundary it can explore the environment of the unknown area and update the map information, and the center of the boundary is set as the target point, serving as the robot's movement goal.
Specifically, on the basis of the RRT algorithm an artificial potential field is added and used to guide the selection of child nodes, as sketched below. Before a random point is selected, a potential field function is applied: the attraction function extracts the attraction vector between the current point and the target point, the repulsion vector of the obstacles acting on the point is computed, and the final resultant vector is applied to the random point, pulling it closer to the target direction and away from obstacles so that the resulting path bends less and is smoother.
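A sketch of the potential-guided sample adjustment x_prand = x_rand + λ F_t/||F_t||; the gains, step λ, and cutoff d_obs are illustrative:
```python
import numpy as np

def biased_sample(x_rand, x_goal, obstacles, lam=0.5, d_obs=1.5, K_a=1.0, K_r=10.0):
    """Shift a uniformly drawn sample x_rand by a minimum increment lam
    along the resultant force direction, yielding x_prand."""
    f = -K_a * (x_rand - x_goal)                  # attraction toward the target
    for obs in obstacles:
        diff = x_rand - obs
        d_min = np.linalg.norm(diff)
        if 0.0 < d_min <= d_obs:                  # repulsion acts only within d_obs
            f += K_r * (1.0 / d_min - 1.0 / d_obs) * (diff / d_min**3)
    norm = np.linalg.norm(f)
    return x_rand if norm == 0.0 else x_rand + lam * f / norm   # x_prand
```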
Specifically, the child node is computed by Algorithm 1, the PFRRT extension routine, whose steps are described below:
in the algorithm, V is a set of vertices, E is a set of edges, ε is a growth step, C is a state space, N is an iteration number, and G (V, E) is a graph formed by V and E. The random tree T contains only one node at initialization: the root node x _ init. Firstly, a Sample function randomly selects a sampling point x _ rand from a state space; then the Nreatt function selects a node x _ Nreast nearest to x _ rand from the random tree; finally, the NewNode function obtains a new node x _ new by extending a distance epsilon from x _ nreast to x _ rand. If x _ new collides with the obstacle, the growth is abandoned, otherwise q _ new is added to the V point set and edges (x _ new ) are added to the edge set E. When the method is used for path planning, the steps are repeated until the distance between x _ nreast and the target point x _ gal is smaller than a threshold value, which represents that the random tree reaches the target point, and the algorithm returns success.
In this embodiment, constructing the paths in the grid maps according to the pose information yields robot paths that bend less and are smoother, reduces the number of iterations, and accelerates map construction.
Optionally, as an embodiment of the present invention, the process of performing map correction analysis on each grid map respectively to obtain a corrected map corresponding to each robot includes:
dividing each grid map by using a spatial distribution sampling algorithm to obtain a plurality of NDT detection units corresponding to each robot;
constructing an initial training model, and respectively training and analyzing the initial training model according to a plurality of NDT detection units corresponding to the robots to obtain training models corresponding to the robots;
respectively positioning each NDT detection unit corresponding to each robot through each training model to obtain a plurality of sub-graph descriptors corresponding to each robot, and storing all the sub-graph descriptors into a pre-established sub-graph descriptor database;
respectively screening and analyzing a plurality of sub-graph descriptors corresponding to the robots according to the sub-graph descriptor database to obtain sub-graph candidate queues corresponding to the robots;
respectively carrying out target image transformation matrix analysis on the sub-image candidate queues corresponding to the robots to obtain target image transformation matrixes corresponding to the robots;
and obtaining a corrected map corresponding to each robot through the target image transformation matrix corresponding to each robot.
It should be understood that after sampling by the spatial sampler (i.e., the spatial distribution sampling algorithm), the 2D-NDT-Transformer network is used to generate the map descriptor (i.e., the subgraph descriptor), which is stored in the common subgraph repository (i.e., the subgraph descriptor database).
It should be understood that the training model positions each of the NDT detection units, respectively, to obtain a sub-graph descriptor corresponding to each of the NDT detection units.
Specifically, a spatial-distribution sampling method G is used to generate uniform NDT detection units, with the following formula:
m_k = G(m_s, k)
wherein m_s denotes a map (i.e., the grid map), s denotes its size, and G samples the map into NDT cells with a fixed point number k.
It should be understood that spatial-distribution sampling reduces the input map of size s to a fixed number of points k while retaining the original map information; the subgraph is thereby transformed into the sampled subgraph m_k, which is then fed into the 2D-NDT-Transformer network to obtain the subgraph descriptor.
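The internals of the sampler G are not detailed here; farthest-point sampling is one plausible sketch of reducing a map to exactly k evenly spread points while preserving its spatial distribution:
```python
import numpy as np

def spatial_sample(points: np.ndarray, k: int) -> np.ndarray:
    """Farthest-point sampling: pick k points that cover the map evenly."""
    k = min(k, len(points))
    chosen = [0]                               # start from an arbitrary point
    dist = np.linalg.norm(points - points[0], axis=1)
    for _ in range(1, k):
        nxt = int(np.argmax(dist))             # farthest from the chosen set
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return points[chosen]

# e.g. occupied cells of a grid map `grid` (1 = occupied):
# sub = spatial_sample(np.argwhere(grid == 1).astype(float), k=256)
```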
In the embodiment, the corrected map is obtained by respectively correcting and analyzing the maps of the grid maps, so that the original map information can be kept, the exploration efficiency is accelerated, and the map fusion without the initial pose is realized.
Optionally, as an embodiment of the present invention, the constructing an initial training model, and performing training analysis on the initial training model according to a plurality of NDT detection units corresponding to the robots, to obtain the training model corresponding to each of the robots includes:
constructing an NDT-Transformer point cloud network, and performing dimensionality reduction on the NDT-Transformer point cloud network to obtain a dimensionality-reduced point cloud network;
respectively judging whether the radius of each NDT detection unit is equal to a preset judgment threshold value, if so, taking the NDT detection unit as a similar sub-graph; if not, taking the NDT detection unit as a dissimilar subgraph until all the NDT detection units are judged to be finished, thereby obtaining a plurality of similar subgraphs corresponding to the robots and a plurality of dissimilar subgraphs corresponding to the robots;
and training the dimensionality reduced point cloud network according to the plurality of NDT detection units corresponding to the robots, the plurality of similar subgraphs corresponding to the robots and the plurality of dissimilar subgraphs corresponding to the robots respectively to obtain training models corresponding to the robots.
It should be appreciated that the NDT-Transformer point cloud network performs real-time, large-scale place recognition using 3D point clouds. The original dense 3D point cloud is compressed into a set of probability distributions (NDT cells) using the 3D normal distributions transform (NDT) to provide a geometric description; an NDT-Transformer network then learns a global descriptor from the set of 3D NDT cell representations. Benefiting from both the NDT representation and the transformer network, the learned global descriptors are rich in geometric and contextual information. Finally, descriptor retrieval against a query database accomplishes place recognition.
It should be understood that the NDT-Transformer (i.e., the NDT-Transformer point cloud network) is a point cloud network for 3D scene localization. Here its input layer is changed to 2D: 2D indoor data is collected and divided into subgraphs (i.e., the NDT detection units), triplets of similar and dissimilar subgraphs are used for training, and a 2D-NDT-Transformer model (i.e., the dimensionality-reduced point cloud network) suited to 2D grid maps is produced and used for multi-robot map matching.
In this embodiment, training the initial model with the NDT detection units yields the training models, retains the original map information, speeds up exploration, and realizes map fusion without an initial pose.
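A sketch of the triplet training step in PyTorch; model stands in for the 2D-NDT-Transformer, and the optimizer and margin are illustrative hyperparameters not given in the patent:
```python
import torch.nn as nn

criterion = nn.TripletMarginLoss(margin=0.3)   # margin value is illustrative

def train_step(model, optimizer, anchor, similar, dissimilar):
    """One update: pull descriptors of similar sub-maps together in
    Euclidean space and push dissimilar ones apart."""
    optimizer.zero_grad()
    loss = criterion(model(anchor), model(similar), model(dissimilar))
    loss.backward()
    optimizer.step()
    return loss.item()

# e.g. optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```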
Optionally, as an embodiment of the present invention, the step of respectively performing screening analysis on a plurality of sub-graph descriptors corresponding to the robots according to the sub-graph descriptor database to obtain sub-graph candidate queues corresponding to the robots includes:
respectively computing the Euclidean distances between each sub-graph descriptor and the sub-graph descriptors in the sub-graph descriptor database to obtain the descriptor distances corresponding to each sub-graph descriptor;
and if the descriptor distance is smaller than a preset distance threshold, taking the corresponding sub-graph descriptor in the sub-graph descriptor database as a sub-graph candidate descriptor, and storing the sub-graph candidate descriptor in a pre-established sub-graph candidate queue, so as to obtain the sub-graph candidate queue corresponding to each robot.
Preferably, the preset distance threshold may be 0.275.
It should be understood that after the map descriptor (i.e., the subgraph descriptor) is generated, the new subgraph is compared one by one with the other robots' subgraphs in the subgraph warehouse (i.e., the subgraph descriptors in the subgraph descriptor database); if the distance is smaller than the set threshold (i.e., the preset distance threshold), the subgraph is considered nearby (i.e., a subgraph candidate descriptor) and is added to the subgraph candidate queue.
Specifically, the newly generated subgraph is compared descriptor-by-descriptor with the stored subgraphs: d(α_i, β_j) denotes the Euclidean distance between the descriptor of subgraph i of robot α, written α_i, and that of subgraph j of robot β, written β_j. A threshold d_th is set (d_th = 0.275); subgraphs whose distance falls below the threshold are added to the candidate queue, and those above it are filtered out.
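A minimal sketch of this screening; the database layout (descriptors keyed by robot and subgraph IDs) is an assumption:
```python
import numpy as np

D_TH = 0.275   # the preset distance threshold d_th

def candidate_queue(new_desc: np.ndarray, database: dict) -> list:
    """Keep only stored sub-map descriptors within Euclidean distance D_TH."""
    queue = []
    for (robot_id, subgraph_id), desc in database.items():
        if np.linalg.norm(new_desc - desc) < D_TH:
            queue.append((robot_id, subgraph_id))   # nearby subgraph: keep it
    return queue                                    # the rest are filtered out
```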
In the above embodiment, screening the subgraph descriptors against the subgraph descriptor database yields the subgraph candidate queues; nearby subgraphs are selected, and useless subgraphs are filtered out.
Optionally, as an embodiment of the present invention, the step of performing target image transformation matrix analysis on the sub-image candidate queues corresponding to the robots respectively to obtain target image transformation matrices corresponding to the robots includes:
respectively extracting features of the subgraph candidate descriptors in each subgraph candidate queue by using an AKAZE algorithm to obtain feature points corresponding to each subgraph candidate descriptor;
respectively carrying out image matching on a plurality of feature points corresponding to the robots by utilizing an OpenCV tool to obtain original image transformation matrixes corresponding to the robots, scale factors corresponding to the robots and initial confidence degrees corresponding to the robots;
respectively carrying out normalization processing on each scale factor and the corresponding initial confidence through a first formula to obtain a target confidence corresponding to each robot, the first formula normalizing the initial confidence c_αβ by the scale factor s_αβ to yield the target confidence c'_αβ;
and screening the original image transformation matrix corresponding to each robot according to each target confidence coefficient to obtain a target image transformation matrix corresponding to each robot.
It should be appreciated that the AKAZE algorithm, i.e., the AKAZE feature algorithm, is an improved version of the SIFT feature algorithm: it does not use Gaussian blur to build the scale space, since Gaussian blur loses edge information, but instead uses nonlinear diffusion filtering, thereby preserving more of the image's edge features.
It should be appreciated that OpenCV is a cross-platform computer vision and machine learning software library released under the Apache 2.0 license (open source) that runs on Linux, Windows, Android, and Mac OS. It is lightweight and efficient, consists of a series of C functions and a small number of C++ classes, provides interfaces for languages such as Python, Ruby, and MATLAB, and implements many general-purpose algorithms in image processing and computer vision.
It should be understood that AKAZE feature extraction is performed on the new subgraph and on all subgraphs in the subgraph candidate queue (i.e., the subgraph candidate descriptors); the matching confidence is computed, the most similar subgraph is selected as the same subgraph, and pose transformation and map fusion are performed accordingly.
It should be understood that the AKAZE algorithm in OpenCV performs feature-point extraction on the subgraphs, and AffineBestOf2NearestMatcher() then yields the image transformation T_αβ (i.e., the original image transformation matrix), the scale factor s_αβ, and the image-alignment confidence c_αβ (i.e., the initial confidence). Since the subgraphs should have the same size, normalization with the scale factor gives the final image transformation T'_αβ (i.e., the target image transformation matrix), and the final confidence becomes c'_αβ (i.e., the target confidence). A confidence threshold c_th is set in the experiments (c_th = 0.5); if the final confidence is greater than the threshold, the two subgraphs are considered to be at the same location, and map fusion is then carried out using the relative poses of the robots.
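A hedged sketch of this stage: in place of the stitching-module matcher named above it uses cv2.estimateAffinePartial2D with RANSAC, reading the scale factor off the estimated similarity transform and taking the inlier ratio as a stand-in confidence:
```python
import cv2
import numpy as np

C_TH = 0.5   # confidence threshold c_th

def match_submaps(img_a, img_b):
    """AKAZE matching between two sub-map images; returns the transform,
    its scale factor, and a crude confidence, or None on failure."""
    akaze = cv2.AKAZE_create()
    kp_a, des_a = akaze.detectAndCompute(img_a, None)
    kp_b, des_b = akaze.detectAndCompute(img_b, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_a, des_b)
    if len(matches) < 4:
        return None
    src = np.float32([kp_a[m.queryIdx].pt for m in matches])
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches])
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    if M is None:
        return None
    scale = np.sqrt(abs(M[0, 0] * M[1, 1] - M[0, 1] * M[1, 0]))  # similarity scale
    confidence = float(inliers.sum()) / len(matches)
    return M, scale, confidence   # treat as the same place if confidence > C_TH
```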
In the embodiment, the target image transformation matrixes of the sub-image candidate queue are analyzed respectively to obtain the target image transformation matrixes, so that the matching speed of the sub-images is increased, the map fusion precision is improved, and the map exploration efficiency is improved.
Optionally, as an embodiment of the present invention, the robot path includes distances from a plurality of robots to a point, and the process of respectively performing path correction analysis on each robot path to obtain a corrected robot path corresponding to each robot includes:
respectively calculating, through a second formula, repulsive potential field values for the distances from the robots to a point, obtaining a plurality of repulsive potential field values corresponding to each robot, the second formula being:
P_r(α, i) = k_r (d_s - r_d(α, i))  if r_d(α, i) ≤ d_s,  and 0 otherwise
wherein P_r(α, i) is the repulsive potential field value, k_r is the repulsive force coefficient, d_s is the laser radar range value, r_d(α, i) is the distance from the robot to the point, α is the robot number, and i is a point in the grid map;
and respectively carrying out path correction on each robot path according to a plurality of repulsive potential field values corresponding to each robot to obtain a corrected robot path corresponding to each robot.
It is understood that, via the artificial potential field method, the robots directly exert repulsion on one another so that they do not explore the same area.
Specifically, multiple robots should explore different directions, so a repulsive force is introduced between the robots: robot α adds the repulsive potential field at point i given by the second formula above, where d_s denotes the laser sensor range, r_d(α, i) the robot-to-point distance, and k_r the repulsive force coefficient. Through this formula, robots that encounter each other feel mutual repulsion and therefore move on to explore other points.
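A sketch of this inter-robot repulsion; the linear falloff k_r (d_s - r_d) follows the reconstructed second formula above:
```python
import numpy as np

def exploration_penalty(point, robot_positions, d_s=5.0, k_r=1.0):
    """Total repulsive potential sum(P_r(alpha, i)) added at candidate
    point i by all robots within laser range d_s."""
    total = 0.0
    for p in robot_positions:
        r_d = np.linalg.norm(np.asarray(point) - np.asarray(p))
        if r_d <= d_s:                      # outside the laser range: no effect
            total += k_r * (d_s - r_d)
    return total
```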
In this embodiment, the repulsive potential field values of the robot-to-point distances are calculated through the second formula, and each robot path is corrected according to these values to obtain the corrected robot paths; this avoids the excessive repeated routes that arise in complex environments and lower search efficiency, improving map exploration efficiency.
Optionally, as another embodiment of the present invention, when the number of the boundary points is 0, the robot completes the exploration.
Optionally, as another embodiment of the invention, the invention uses the NDT-Transformer to construct a map fusion framework, enabling pose estimation among multiple robots for map fusion through subgraphs shared among the robots; and by using the PFRRT path planning algorithm based on the artificial potential field and RRT, the number of iterations is reduced and map building is accelerated.
Optionally, as another embodiment of the present invention, the technical problems to be solved by the invention are: 1. for map fusion without an initial pose, a subgraph-based map fusion method is provided that speeds up subgraph matching and improves map fusion precision; 2. a method for coordinating multiple robots is provided so that they search different areas, increasing search efficiency; 3. a path planning method is provided that keeps the exploration path as smooth as possible and avoids touching obstacles.
Optionally, as another embodiment of the present invention, the beneficial effects of the invention are: a map fusion framework is constructed through the 2D-NDT-Transformer, and pose estimation and map fusion among multiple robots are enabled through the subgraphs shared among the robots. Adding an artificial potential field to the RRT path planning algorithm makes the random points more ordered, so the tree has fewer branches, the number of iterations falls, the path curve is smoother, and map building is accelerated.
Optionally, as another embodiment of the invention, the 2D-NDT-Transformer network is combined with Cartographer subgraphs to construct the map fusion framework, while an artificial potential field is added on the basis of RRT to form the new PFRRT path planning algorithm; combining the two yields a multi-robot exploration system for unknown environments. The common RRT path planning algorithm has many branches and long paths; with the artificial potential field added, the PFRRT algorithm has fewer branches, smoother paths, and faster exploration. The system can therefore explore an unknown environment while fusing the maps of multiple robots well, improving exploration efficiency.
Fig. 2 is a block diagram of a multi-robot path planning apparatus according to an embodiment of the present invention.
Optionally, as another embodiment of the present invention, as shown in fig. 2, a multi-robot path planning apparatus includes:
the data acquisition module is used for respectively acquiring a plurality of laser data corresponding to each robot from laser radars arranged on the robots and respectively acquiring a plurality of visual images corresponding to the robots from cameras arranged on the robots;
the map construction module is used for respectively carrying out map construction on a plurality of laser data corresponding to each robot to obtain a grid map corresponding to each robot;
the pose information analysis module is used for respectively analyzing the pose information of the plurality of visual images corresponding to the robots to obtain the pose information corresponding to the robots;
the path construction module is used for respectively constructing paths in the grid map corresponding to each robot according to the pose information corresponding to each robot to obtain a robot path corresponding to each robot;
the map correction analysis module is used for respectively performing map correction analysis on each grid map to obtain corrected maps corresponding to the robots;
and the planning result obtaining module is used for respectively carrying out path correction analysis on each robot path to obtain a corrected robot path corresponding to each robot, and controlling each robot to move in a corrected map corresponding to each robot along the corrected robot path corresponding to each robot.
Optionally, another embodiment of the present invention provides a multi-robot path planning apparatus, including a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the multi-robot path planning method as described above is implemented. The device may be a computer or the like.
Optionally, another embodiment of the invention provides a computer-readable storage medium, which stores a computer program that, when executed by a processor, implements the multi-robot path planning method as described above.
It should be noted that, in this document, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes of the above-described apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and should not be taken as limiting the scope of the present invention, which is intended to cover any modifications, equivalents, improvements, etc. within the spirit and scope of the present invention.

Claims (10)

1. A multi-robot path planning method is characterized by comprising the following steps:
obtaining a plurality of laser data corresponding to each robot from laser radars arranged on the robots respectively, and obtaining a plurality of visual images corresponding to the robots respectively from cameras arranged on the robots respectively;
respectively carrying out map construction on a plurality of laser data corresponding to each robot to obtain a grid map corresponding to each robot;
analyzing the pose information of a plurality of visual images corresponding to each robot respectively to obtain the pose information corresponding to each robot;
respectively constructing paths in a grid map corresponding to each robot according to the pose information corresponding to each robot to obtain a robot path corresponding to each robot;
respectively carrying out map correction analysis on each grid map to obtain corrected maps corresponding to the robots;
and respectively carrying out path correction analysis on each robot path to obtain a corrected robot path corresponding to each robot, and controlling each robot to move in a corrected map corresponding to each robot along the corrected robot path corresponding to each robot.
2. The multi-robot path planning method according to claim 1, wherein the process of respectively mapping the plurality of laser data corresponding to each of the robots to obtain the grid map corresponding to each of the robots includes:
and respectively carrying out map construction on a plurality of laser data corresponding to each robot based on a cartography framework to obtain a grid map corresponding to each robot.
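Claim 2 delegates map construction to the Cartographer framework, which builds the grid map from laser scans internally. Purely as an orientation for what such a grid-map update involves, the following sketch folds one 2-D scan into a log-odds occupancy grid; the cell values, gains, and coarse ray marching are this sketch's assumptions, not Cartographer's implementation.

```python
import numpy as np

def update_grid(log_odds, pose, ranges, angles,
                resolution=0.05, l_occ=0.85, l_free=-0.4):
    """Fold one 2-D laser scan into a log-odds occupancy grid.

    pose: (x, y, theta) of the robot in the map frame (metres, radians).
    ranges/angles: the scan's range readings and beam angles.
    This is a plain log-odds update, not the Cartographer pipeline
    the claim names; cell indices are assumed to stay in-bounds.
    """
    x, y, theta = pose
    for r, a in zip(ranges, angles):
        # End point of the beam in map coordinates.
        ex = x + r * np.cos(theta + a)
        ey = y + r * np.sin(theta + a)
        # Mark cells along the beam as free (coarse ray marching).
        steps = int(r / resolution)
        for s in range(steps):
            cx = int((x + (ex - x) * s / max(steps, 1)) / resolution)
            cy = int((y + (ey - y) * s / max(steps, 1)) / resolution)
            log_odds[cy, cx] += l_free
        # Mark the hit cell as occupied.
        log_odds[int(ey / resolution), int(ex / resolution)] += l_occ
    return log_odds
```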
3. The multi-robot path planning method according to claim 1, wherein the process of analyzing the pose information of each of the plurality of visual images corresponding to each of the robots to obtain the pose information corresponding to each of the robots includes:
respectively performing feature extraction on a plurality of visual images corresponding to each robot based on an ORBSLAM3 framework to obtain a plurality of visual feature maps corresponding to each robot;
performing interframe matching on the plurality of visual feature maps corresponding to the robots respectively to obtain a plurality of matched feature maps corresponding to the robots;
respectively obtaining a visual odometer corresponding to each matched feature map through each matched feature map;
and respectively positioning a plurality of visual odometers corresponding to the robots to obtain pose information corresponding to the robots.
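ORBSLAM3 wraps feature extraction, inter-frame matching, and visual-odometry pose estimation into one system. A stripped-down sketch of those three steps with plain OpenCV calls (ORB features, brute-force matching, essential-matrix pose recovery) is given below; it is a stand-in for the claimed ORBSLAM3 pipeline, and the intrinsic matrix K and the feature budget are assumptions.

```python
import cv2
import numpy as np

def relative_pose(img_prev, img_curr, K):
    """Estimate the relative camera pose between two frames.

    Plain OpenCV ORB features rather than the full ORBSLAM3
    pipeline named in the claim; K is the 3x3 camera intrinsic
    matrix. Returns (R, t) with t known only up to scale.
    """
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)
    # Brute-force Hamming matching suits binary ORB descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # Essential matrix with RANSAC, then cheirality check.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```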
4. The multi-robot path planning method according to claim 1, wherein the grid map includes a plurality of points to be identified and a plurality of obstacle points, and the process of obtaining the robot path corresponding to each robot by constructing the path in the grid map corresponding to each robot according to the pose information corresponding to each robot includes:
verifying, one by one, whether exactly one preset boundary threshold value exists among the plurality of adjacent points of each point to be identified; if the verification succeeds, taking the point to be identified as a boundary point, thereby obtaining a plurality of boundary points corresponding to each robot, wherein the adjacent points are the points to be identified or obstacle points adjacent to the point to be identified currently under verification;
constructing a plurality of boundaries corresponding to the robots through a plurality of boundary points corresponding to the robots, and taking the center points of the boundaries as target points to obtain a plurality of target points corresponding to the robots;
and respectively constructing paths of a plurality of target points corresponding to the robots, a plurality of barrier points corresponding to the robots and pose information corresponding to the robots by using an artificial potential field method to obtain robot paths corresponding to the robots.
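One reading of the boundary test in claim 4 is the classic frontier criterion: a free cell qualifies when exactly one of its neighbours is unknown. The sketch below implements that reading over 4-neighbourhoods; the cell encoding and the neighbourhood choice are assumptions, since the patent fixes neither.

```python
import numpy as np

FREE, UNKNOWN, OCCUPIED = 0, -1, 100  # assumed cell encoding

def frontier_points(grid):
    """Collect free cells with exactly one unknown 4-neighbour.

    Mirrors the claim's test that 'exactly one' adjacent point
    meets the boundary threshold; interior cells only, so the
    border of the grid is skipped.
    """
    points = []
    rows, cols = grid.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if grid[r, c] != FREE:
                continue
            neighbours = [grid[r-1, c], grid[r+1, c],
                          grid[r, c-1], grid[r, c+1]]
            if sum(1 for n in neighbours if n == UNKNOWN) == 1:
                points.append((r, c))
    return points
```

The centre point of each connected run of such frontier cells would then serve as a target point for the artificial potential field planner, per the claim.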
5. The multi-robot path planning method according to claim 1, wherein the process of performing map correction analysis on each grid map respectively to obtain a corrected map corresponding to each robot comprises:
dividing each grid map by using a spatial distribution sampling algorithm to obtain a plurality of NDT detection units corresponding to each robot;
constructing an initial training model, and respectively training and analyzing the initial training model according to a plurality of NDT detection units corresponding to the robots to obtain training models corresponding to the robots;
respectively positioning each NDT detection unit corresponding to each robot through each training model to obtain a plurality of sub-graph descriptors corresponding to each robot, and storing all the sub-graph descriptors into a pre-established sub-graph descriptor database;
respectively screening and analyzing a plurality of sub-graph descriptors corresponding to the robots according to the sub-graph descriptor database to obtain sub-graph candidate queues corresponding to the robots;
respectively carrying out target image transformation matrix analysis on the sub-image candidate queues corresponding to the robots to obtain target image transformation matrixes corresponding to the robots;
and obtaining a corrected map corresponding to each robot through the target image transformation matrix corresponding to each robot.
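The last step of claim 5, applying the target image transformation matrix to obtain the corrected map, reduces to an image warp once the grid map is rendered as an image. A minimal sketch follows, assuming a 2x3 affine matrix and an 8-bit map image; a 3x3 homography would use cv2.warpPerspective instead.

```python
import cv2
import numpy as np

def correct_map(grid_img, target_matrix):
    """Apply a target image transformation matrix to a grid map.

    grid_img: the grid map rendered as an 8-bit image (assumed).
    target_matrix: 2x3 affine transform (assumed parameterisation;
    the patent does not fix it).
    """
    h, w = grid_img.shape[:2]
    # Nearest-neighbour keeps occupancy labels from being blended.
    return cv2.warpAffine(grid_img, target_matrix, (w, h),
                          flags=cv2.INTER_NEAREST,
                          borderValue=205)  # 205 = unknown, a common map convention
```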
6. The multi-robot path planning method according to claim 5, wherein the process of constructing an initial training model, and performing training analysis on the initial training model according to the plurality of NDT detection units corresponding to the respective robots to obtain the training model corresponding to the respective robots includes:
constructing an NDT-Transformer point cloud network, and performing dimensionality reduction on the NDT-Transformer point cloud network to obtain a dimensionality-reduced point cloud network;
respectively judging whether the radius of each NDT detection unit is equal to a preset judgment threshold value; if so, taking the NDT detection unit as a similar sub-graph; if not, taking the NDT detection unit as a dissimilar sub-graph, until all the NDT detection units have been judged, thereby obtaining a plurality of similar sub-graphs corresponding to each robot and a plurality of dissimilar sub-graphs corresponding to each robot;
and training the dimensionality reduced point cloud network according to the plurality of NDT detection units corresponding to the robots, the plurality of similar subgraphs corresponding to the robots and the plurality of dissimilar subgraphs corresponding to the robots respectively to obtain training models corresponding to the robots.
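Claim 6 trains the dimension-reduced network against similar and dissimilar sub-graphs, which is the supervision pattern of metric learning. As an assumed stand-in for the unspecified loss, the sketch below runs one triplet-margin training step in PyTorch; `model`, the batching, and the margin are all this sketch's assumptions.

```python
import torch
import torch.nn.functional as F

def triplet_step(model, anchor, similar, dissimilar,
                 optimizer, margin=0.5):
    """One training step on (anchor, similar, dissimilar) sub-graphs.

    The patent only says the network is trained with similar and
    dissimilar sub-graphs; the triplet margin loss below is an
    assumed stand-in for whatever loss is actually used. Each
    input is a batch of NDT-cell tensors accepted by `model`.
    """
    d_a = model(anchor)       # descriptor of the query sub-graph
    d_p = model(similar)      # descriptor of a similar sub-graph
    d_n = model(dissimilar)   # descriptor of a dissimilar sub-graph
    loss = F.triplet_margin_loss(d_a, d_p, d_n, margin=margin)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```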
7. The multi-robot path planning method according to claim 5, wherein the step of respectively performing screening analysis on a plurality of sub-graph descriptors corresponding to each of the robots according to the sub-graph descriptor database to obtain sub-graph candidate queues corresponding to each of the robots includes:
respectively computing Euclidean distances between each sub-graph descriptor and the sub-graph descriptors in the sub-graph descriptor database to obtain descriptor distances corresponding to each sub-graph descriptor;
and if the descriptor distance is smaller than a preset distance threshold, taking the corresponding sub-graph descriptor in the sub-graph descriptor database as a sub-graph candidate descriptor, and storing the sub-graph candidate descriptor in a pre-established sub-graph candidate queue, so as to obtain the sub-graph candidate queue corresponding to each robot.
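The screening in claim 7 is a nearest-neighbour search under a distance threshold. A minimal NumPy sketch, assuming the database is held as a dense (n, d) array, is shown below.

```python
import numpy as np

def candidate_queue(query_desc, database, threshold):
    """Screen a descriptor database by Euclidean distance.

    query_desc: (d,) descriptor of the current sub-graph.
    database: (n, d) array of stored sub-graph descriptors.
    Returns indices whose distance falls below the preset
    threshold, ordered nearest-first, as the candidate queue.
    """
    dists = np.linalg.norm(database - query_desc, axis=1)
    keep = np.flatnonzero(dists < threshold)
    return keep[np.argsort(dists[keep])]
```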
8. The multi-robot path planning method according to claim 7, wherein the process of performing target image transformation matrix analysis on the sub-image candidate queues corresponding to the respective robots to obtain target image transformation matrices corresponding to the respective robots comprises:
respectively extracting features of the subgraph candidate descriptors in each subgraph candidate queue by using an AKAZE algorithm to obtain feature points corresponding to each subgraph candidate descriptor;
respectively carrying out image matching on a plurality of feature points corresponding to the robots by utilizing an OpenCV tool to obtain original image transformation matrixes corresponding to the robots, scale factors corresponding to the robots and initial confidence degrees corresponding to the robots;
respectively carrying out normalization processing on each scale factor and the initial confidence degree corresponding to each robot through a first formula to obtain a target confidence degree corresponding to each robot, wherein the first formula is reproduced in the original only as an image (Figure FDA0003697926390000041), whose symbols (Figures FDA0003697926390000042 through FDA0003697926390000044) denote, in order, the target confidence, the scale factor, and the initial confidence;
and screening the original image transformation matrix corresponding to each robot according to each target confidence coefficient to obtain a target image transformation matrix corresponding to each robot.
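Claim 8 names AKAZE and OpenCV but leaves the matching step open. The sketch below is one plausible instantiation: AKAZE features, Hamming brute-force matching, and a RANSAC similarity transform from which a scale factor and an inlier-ratio confidence are read. The estimator and the confidence definition are assumptions, and the subsequent first-formula normalization is omitted because its exact form survives only as an image.

```python
import cv2
import numpy as np

def match_subgraphs(img_a, img_b):
    """Estimate a transform between two sub-graph images.

    AKAZE features as in the claim; the affine estimator, the
    inlier-ratio 'initial confidence', and the scale read off the
    matrix are this sketch's assumptions.
    """
    akaze = cv2.AKAZE_create()
    kp1, des1 = akaze.detectAndCompute(img_a, None)
    kp2, des2 = akaze.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # Similarity transform: rotation, translation and one scale.
    M, inliers = cv2.estimateAffinePartial2D(pts1, pts2,
                                             method=cv2.RANSAC)
    scale = np.hypot(M[0, 0], M[1, 0])         # scale factor
    confidence = inliers.sum() / len(matches)  # initial confidence
    return M, scale, confidence
```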
9. The multi-robot path planning method according to claim 1, wherein the robot path includes distances from a plurality of robots to a point, and the process of respectively performing path correction analysis on each robot path to obtain a corrected robot path corresponding to each robot includes:
respectively calculating repulsive potential field values of the distances from the robot to the point corresponding to each robot through a second formula to obtain a plurality of repulsive potential field values corresponding to each robot, wherein the second formula is reproduced in the original only as an image (Figure FDA0003697926390000051), in which P_r(α, i) is the repulsive potential field value, d_s is the laser radar range value, r_d(α, i) is the distance from the robot to the point, α is the serial number of the robot, and i is a point in the grid map;
and respectively carrying out path correction on each robot path according to a plurality of repulsive potential field values corresponding to each robot to obtain a corrected robot path corresponding to each robot.
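The patent's second formula survives only as an image, so its exact form is unknown. For orientation, the sketch below uses the classic Khatib-style repulsive potential, which matches the named quantities: it vanishes beyond the lidar range d_s and grows as the robot-to-point distance r_d shrinks. The functional form and the gain eta are assumptions, not the patent's formula.

```python
def repulsive_field(r_d, d_s, eta=1.0):
    """Repulsive potential for one robot-to-point distance.

    Khatib-style stand-in for the patent's (image-only) second
    formula: zero outside the lidar range d_s, growing without
    bound as the distance r_d approaches zero; eta is an assumed
    gain.
    """
    if r_d >= d_s or r_d <= 0.0:
        return 0.0  # outside the influence range contributes nothing
    return 0.5 * eta * (1.0 / r_d - 1.0 / d_s) ** 2
```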
10. A multi-robot path planning apparatus, comprising:
the data acquisition module is used for respectively acquiring a plurality of laser data corresponding to each robot from laser radars arranged on the robots and respectively acquiring a plurality of visual images corresponding to the robots from cameras arranged on the robots;
the map construction module is used for respectively carrying out map construction on a plurality of laser data corresponding to each robot to obtain a grid map corresponding to each robot;
the pose information analysis module is used for respectively analyzing the pose information of the plurality of visual images corresponding to the robots to obtain the pose information corresponding to the robots;
the path construction module is used for respectively constructing paths in the grid map corresponding to each robot according to the pose information corresponding to each robot to obtain a robot path corresponding to each robot;
the map correction analysis module is used for respectively carrying out map correction analysis on each grid map to obtain corrected maps corresponding to each robot;
and the planning result obtaining module is used for respectively carrying out path correction analysis on each robot path to obtain a corrected robot path corresponding to each robot, and controlling each robot to move in a corrected map corresponding to each robot along the corrected robot path corresponding to each robot.
CN202210686058.0A 2022-06-16 2022-06-16 Multi-robot path planning method and device Pending CN115586767A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210686058.0A CN115586767A (en) 2022-06-16 2022-06-16 Multi-robot path planning method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210686058.0A CN115586767A (en) 2022-06-16 2022-06-16 Multi-robot path planning method and device

Publications (1)

Publication Number Publication Date
CN115586767A true CN115586767A (en) 2023-01-10

Family

ID=84771048

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210686058.0A Pending CN115586767A (en) 2022-06-16 2022-06-16 Multi-robot path planning method and device

Country Status (1)

Country Link
CN (1) CN115586767A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116358531A (en) * 2023-06-01 2023-06-30 佛山云曼健康科技有限公司 Map construction method, device, robot and storage medium
CN116358531B (en) * 2023-06-01 2023-09-01 佛山市星曼信息科技有限公司 Map construction method, device, robot and storage medium

Similar Documents

Publication Publication Date Title
CN112179330B (en) Pose determination method and device of mobile equipment
US20200300637A1 (en) Collaborative navigation and mapping
EP2202672B1 (en) Information processing apparatus, information processing method, and computer program
CN108229416B (en) Robot SLAM method based on semantic segmentation technology
Zhang et al. Hierarchical topic model based object association for semantic SLAM
CN111536964A (en) Robot positioning method and device, and storage medium
CN110675307A (en) Implementation method of 3D sparse point cloud to 2D grid map based on VSLAM
CN110376605B (en) Map construction method, navigation method and device
Senanayake et al. Spatio-temporal hilbert maps for continuous occupancy representation in dynamic environments
CN116255992A (en) Method and device for simultaneously positioning and mapping
CN114549738A (en) Unmanned vehicle indoor real-time dense point cloud reconstruction method, system, equipment and medium
CN111753696A (en) Method for sensing scene information, simulation device and robot
Yin et al. Stabilize an unsupervised feature learning for LiDAR-based place recognition
CN110751722B (en) Method and device for simultaneously positioning and establishing image
CN110348359B (en) Hand gesture tracking method, device and system
CN115586767A (en) Multi-robot path planning method and device
CN114577196B (en) Lidar positioning using optical flow
CN113761647B (en) Simulation method and system of unmanned cluster system
CN112733971B (en) Pose determination method, device and equipment of scanning equipment and storage medium
Pedrosa et al. A non-linear least squares approach to SLAM using a dynamic likelihood field
CN116698043A (en) Visual navigation method for indoor mobile robot
CN116520302A (en) Positioning method applied to automatic driving system and method for constructing three-dimensional map
CN111724438B (en) Data processing method and device
Pai et al. Optimization and Path Planning of Simultaneous Localization and Mapping Construction Based on Binocular Stereo Vision.
Chen et al. Towards bio-inspired place recognition over multiple spatial scales

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination