CN117420821A - Intelligent ant colony multi-agent path planning method based on learning

Intelligent ant colony multi-agent path planning method based on learning

Info

Publication number
CN117420821A
Authority
CN
China
Prior art keywords
agent
intelligent
ant colony
neural network
path
Prior art date
Legal status
Pending
Application number
CN202310030779.0A
Other languages
Chinese (zh)
Inventor
李伟
邱江
刘翼
Current Assignee
Fudan University
Original Assignee
Fudan University
Priority date
Filing date
Publication date
Application filed by Fudan University
Priority to CN202310030779.0A
Publication of CN117420821A
Legal status: Pending


Abstract

The invention provides a learning-based intelligent ant colony multi-agent path planning method in which each ant becomes an intelligent individual by learning from historical experience. This greatly improves the planning efficiency of the ant colony algorithm and removes the limitation that the traditional ant colony algorithm must re-plan from scratch for every new task. First, the traditional ant colony algorithm is used to solve the multi-agent path planning problem: a path is planned for each agent and the resulting conflicts are then resolved, a process whose planning time grows exponentially with the number of agents. A convolutional neural network then learns from the planned conflict-free paths and predicts whether a conflict is likely to occur, so that the number of conflicts is reduced or conflicts are avoided altogether, greatly shortening the planning time. In addition, since planning is performed within a local observation range, only local environment information is required rather than global information; the method is therefore simple, convenient, and efficient, and scales well to larger maps.

Description

Intelligent ant colony multi-agent path planning method based on learning
Technical Field
The invention belongs to the technical field of path planning, and particularly relates to an intelligent ant colony multi-agent path planning method based on learning.
Background
The task of multi-agent path planning is to plan a collision-free path from a starting position to a target position for each of a plurality of agents while satisfying certain constraints, such as minimizing the time for all agents to reach their target points or minimizing the total action cost of all agents. Research on this problem has wide application scenarios, such as intelligent logistics warehouse systems, intelligent military security, and intelligent port scheduling.
Currently, path planning for a single agent has been widely studied. Common methods include search-based methods, such as Dijkstra, A*, and their improved variants; sampling-based methods, such as the Probabilistic Roadmap Method (PRM) and the Rapidly-exploring Random Tree (RRT); and intelligent bionic algorithms, such as Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), and the Genetic Algorithm (GA). However, as the degree of industrial intelligence rises, a single agent can no longer meet actual production requirements, so multi-agent path planning methods capable of group collaboration have been developed; this technology has gradually become a new research hotspot in the robotics field and has attracted wide attention from practitioners. The increase in the number of obstacles and mobile robots greatly raises the difficulty of robot path planning, making it a more realistic problem and a direction in which mobile robot technology needs to expand.
Multi-Agent Path Finding (MAPF) methods can be divided, according to how planning is organized, into centralized and distributed planning algorithms. Centralized planning algorithms are the most classical and most commonly used MAPF algorithms, and mainly include A*-based search, conflict-based search, increasing cost tree search, and reduction-based methods. Distributed planning algorithms are MAPF algorithms based on reinforcement learning and deep reinforcement learning that have emerged from the field of artificial intelligence, and mainly include expert-demonstration algorithms, improved-communication algorithms, and task-decomposition algorithms. A centralized planning algorithm uses a central planner to plan for all agents; for static and small-scale environments it plans quickly and produces high-quality solutions, but when the number of agents grows or the environment becomes more dynamic and complex, re-planning is time-consuming and scalability is poor. In a distributed planning algorithm each agent independently chooses actions from its current observation, so it extends well to large-scale environments, but its planning speed and solution quality in small-scale environments are worse than those of centralized algorithms, and suitable rewards are difficult to design, leading to low learning efficiency and slow convergence.
The ant colony algorithm is a bionic swarm intelligence algorithm inspired by how ant colonies in nature find the shortest path between the nest and a food source. It features positive feedback, distributed computation, indirect communication, and strong robustness, and has been widely applied to single-agent path planning. However, the traditional ant colony algorithm suffers from slow convergence and a tendency to fall into local optima and deadlock, so in multi-agent path planning it plans slowly and conflicts arise easily, which is why it has seen little application in this field.
Disclosure of Invention
The invention aims to solve the problems of the traditional ant colony algorithm in multi-agent path planning and provides a learning-based intelligent ant colony multi-agent path planning method. The method combines ideas from deep learning with the ant colony algorithm, aiming to solve the multi-agent path planning problem efficiently with an intelligent ant colony algorithm. The specific technical scheme is as follows:
The invention provides a learning-based intelligent ant colony multi-agent path planning method, characterized by comprising the following steps: step S1, setting the agents and their initial and target positions based on a map, randomly generating obstacles, and planning paths for the multiple agents using the traditional ant colony algorithm and conflict-based search; step S2, collecting a training data set from the result of step S1, the training data containing the local environment state information of each agent and the corresponding feasible-space pheromone distribution, which is the probability distribution of the agent's next action in the current state; step S3, training a convolutional neural network (CNN) on the training data set with a loss function, obtaining a trained neural network model for multi-agent path planning; and step S4, planning conflict-free optimal paths for the multi-agent cluster under test based on the trained neural network model.
The learning-based intelligent ant colony multi-agent path planning method provided by the invention may further have the technical feature that step S1 comprises the following substeps: step S1-1, in an n × n map, randomly generating a certain proportion of obstacles and the initial positions S = {s_1, s_2, …, s_k} and target positions G = {g_1, g_2, …, g_k} of k agents; step S1-2, using the traditional ant colony algorithm to plan a path π_i = (a_0, a_1, …, a_n) for each agent, where i ∈ {1, 2, …, k} and a_t is the action taken by the agent at time step t, which can be a move to one of the four adjacent nodes (up, down, left, right) or a wait in place; step S1-3, using conflict-based search to detect conflicts among these paths, selecting the minimum-cost branch and re-searching until no path conflicts remain, obtaining a set of k conflict-free paths Π = {π_1, π_2, …, π_k}.
The learning-based intelligent ant colony multi-agent path planning method provided by the invention may further have the technical feature that the training data is obtained by collecting one sample at each time step t of each conflict-free path obtained in step S1-3. The local environment state information is the state within an m × m observation range centered on the current agent, and comprises: the position of the current agent, the position of the current agent's target point, the positions of other agents within the observation range, the positions of those agents' target points, and the positions of obstacles. The feasible-space pheromone distribution is expressed as (state, p), where state is the local environment state information and p is a 5 × 1 vector serving as the label corresponding to the environment information in the current state: the first 4 dimensions are the normalized pheromone values of the adjacent nodes in the up, down, left, and right directions, and the 5th dimension indicates whether the next step is a wait in place. When the next action is a move, the wait value is 0; when it is a wait, the pheromone values in all four directions are 0 and the wait value is 1.
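As a small illustration of this label encoding, the following sketch builds p from raw directional pheromone values. It is a hedged example: the concrete values and the sum-to-one normalization are assumptions for illustration, since the patent states only that the four directional values are normalized.

```python
import numpy as np

def make_label(pheromone_udlr, wait):
    """Build the 5x1 label p: four normalized directional pheromone values plus a wait flag."""
    p = np.zeros(5, dtype=np.float32)
    if wait:
        p[4] = 1.0                      # waiting: all four directional values stay 0
    else:
        tau = np.asarray(pheromone_udlr, dtype=np.float32)
        p[:4] = tau / tau.sum()         # assumed normalization over up/down/left/right
    return p

# e.g. a move with raw pheromones (up, down, left, right) = (2, 1, 4, 1):
print(make_label([2, 1, 4, 1], wait=False))  # [0.25  0.125 0.5   0.125 0.   ]
```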
The learning-based intelligent ant colony multi-agent path planning method provided by the invention may further have the technical feature that the training is as follows:
The training data, an m × m × 5 tensor, is taken as input to the neural network; the neural network model consists of 3 convolution layers, 1 max pooling layer, 3 convolution layers, 1 max pooling layer, and 3 fully connected layers, and finally produces a 5 × 1 distribution vector q through a softmax activation function; the loss function is:
loss = -pᵀ log q + c‖θ‖²
where p is the collected true label, q is the probability predicted by the neural network, θ denotes the network parameters, and c is the regularization coefficient controlling the contribution of the regularization term to the loss; the network parameters are updated by gradient descent and backpropagation during training so as to drive the loss value down, i.e., to reduce the gap between the network prediction q and the true label p, making the prediction more accurate.
The learning-based intelligent ant colony multi-agent path planning method provided by the invention may further have the technical feature that step S4 comprises the following substeps: step S4-1, initializing a map, including the positions and number of obstacles, the number of agents under test, and their corresponding initial and target positions; step S4-2, for each agent under test, acquiring the local environment state information within the m × m observation range centered on that agent and feeding it into the neural network model to obtain the network output, i.e., the pheromone distribution over the surrounding feasible nodes; step S4-3, the agent under test selecting its next action based on the pheromone distribution over the surrounding feasible nodes, with selection probability:
in the method, in the process of the invention,representing agent k Selecting a probability of node j at node i; τ net The pheromone distribution predicted for the neural network; />Representing heuristic information, i.e., the inverse of the distance of node j from target location e; alpha and beta are super parameters, and respectively represent the relative importance degrees of the pheromone and the heuristic factor; allowed k Representing agent k Feasible nodes in the current state; the intelligent agent to be tested selects the next action a according to the action selection probability formula j Until the target position is reached.
The actions and effects of the invention
With the learning-based intelligent ant colony multi-agent path planning method of the invention, each ant becomes an intelligent individual by learning from historical experience, which greatly improves the planning efficiency of the ant colony algorithm and removes the limitation that the traditional ant colony algorithm must re-plan from scratch for every new task. First, the traditional ant colony algorithm is used to solve the multi-agent path planning problem: a path is planned for each agent and the resulting conflicts are then resolved, a process whose planning time grows exponentially with the number of agents. A convolutional neural network then learns from the planned conflict-free paths and predicts whether a conflict is likely to occur, so that the number of conflicts is reduced or conflicts are avoided altogether, greatly shortening the planning time. In addition, the path planning method of the invention plans within a local observation range and therefore needs only local environment information rather than global information; it is simple, convenient, and efficient, and scales well to larger maps.
Drawings
Fig. 1 is a flowchart of a learning-based intelligent ant colony multi-agent path planning method in an embodiment of the present invention;
fig. 2 is a schematic diagram of multi-agent path planning based on a conventional ant colony algorithm and CBS in an embodiment of the present invention;
FIG. 3 is a schematic view of the local observation range of an agent in an embodiment of the present invention;
FIG. 4 is a schematic illustration of characteristics of neural network model inputs in an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a neural network model in an embodiment of the present invention.
Detailed Description
To make the technical means, creative features, objectives, and effects of the invention easy to understand, the learning-based intelligent ant colony multi-agent path planning method of the invention is described in detail below with reference to the embodiment and the accompanying drawings.
< example >
Fig. 1 is a flowchart of a learning-based intelligent ant colony multi-agent path planning method in an embodiment of the present invention.
As shown in fig. 1, the learning-based intelligent ant colony multi-agent path planning method of the embodiment specifically includes the following steps:
step 1, generating data: based on the map, setting up the agent and its initial and target positions, randomly generating obstacles, and carrying out multi-agent path planning by using the traditional ant colony algorithm and searching based on conflict.
Fig. 2 is a schematic diagram of multi-agent path planning based on a conventional ant colony algorithm and CBS in an embodiment of the present invention.
As shown in fig. 2, in this embodiment, taking a 9×9 map as an example, the specific procedure of step 1 is as follows:
step 1-1, initializing a map, randomly generating a certain proportion of obstacles (black boxes in fig. 2), a certain number of agents (here, 3 agents are taken as an example), starting positions of each agent (shown by gray circles in the figure, wherein numbers represent agent numbers), and target positions of each agent (shown by gray five-pointed stars in the figure, wherein numbers represent agent numbers corresponding to the target positions).
Step 1-2, use the traditional ant colony algorithm to plan paths for the multiple agents in the map; as shown in fig. 2 (a), the shortest path is planned for each agent independently.
Step 1-3, perform conflict detection on these paths using conflict-based search (CBS): if 2 or more agents arrive at the same node in the same time step, this is regarded as a conflict; add a constraint and re-plan with the traditional ant colony algorithm, so that a conflict-free path satisfying the constraints is planned for each agent, as shown in fig. 2 (b).
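For intuition, the following Python sketch mirrors this data-generation loop. It is a simplified illustration under stated assumptions: `aco_plan` is a hypothetical planner that avoids a set of forbidden (node, time) pairs, only vertex conflicts are checked, and instead of full CBS branching (which would explore both children and keep the cheaper one) a single conflicting agent is constrained and re-planned.

```python
import random

def find_first_conflict(paths):
    """Return (time, node, (agent_a, agent_b)) for the first vertex conflict, else None.
    A vertex conflict means two agents occupy the same node at the same time step."""
    horizon = max(len(p) for p in paths)
    for t in range(horizon):
        seen = {}
        for k, path in enumerate(paths):
            node = path[min(t, len(path) - 1)]  # agents wait at their goal
            if node in seen:
                return t, node, (seen[node], k)
            seen[node] = k
    return None

def generate_conflict_free_paths(grid, starts, goals, aco_plan, max_rounds=100):
    """Plan per agent with ACO, then resolve vertex conflicts by re-planning
    a conflicting agent under an added (node, time) constraint."""
    constraints = {k: set() for k in range(len(starts))}
    paths = [aco_plan(grid, s, g, constraints[k])
             for k, (s, g) in enumerate(zip(starts, goals))]
    for _ in range(max_rounds):
        conflict = find_first_conflict(paths)
        if conflict is None:
            return paths                    # all paths are conflict-free
        t, node, pair = conflict
        k = random.choice(pair)             # simplification of CBS branching
        constraints[k].add((node, t))
        paths[k] = aco_plan(grid, starts[k], goals[k], constraints[k])
    raise RuntimeError("no conflict-free solution found within max_rounds")
```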
Step 2, data collection: collect a training data set from the result of step 1; each training sample contains the local environment state information of an agent and the corresponding feasible-space pheromone distribution, which is the probability distribution of the agent's next action in the current state.
FIG. 3 is a schematic view of the local observation range of an agent in an embodiment of the present invention.
In this embodiment, for each conflict-free path obtained in step 1-3, one sample is collected at each time step t: the local environment state information and the feasible-space pheromone distribution (state, p), as shown in fig. 3.
FIG. 4 is a schematic illustration of the characteristics of neural network model inputs in an embodiment of the present invention.
The local environment state information in this step is the map information within a square of side length 9 centered on the current agent, as shown in fig. 4. It comprises (a) the position of the current agent; (b) the target position of the current agent, where, if the target point lies outside the local observation range, the projection of the line connecting the target point and the current agent onto the boundary of the observation range is used as a sub-target point; (c) the positions of other agents within the observation range; (d) the target points of other agents within the observation range; and (e) the positions of obstacles, where any part of the observation range beyond the map boundary is treated as an obstacle. The feasible-space pheromone distribution p consists of the pheromone values p_i ∈ (0, 1), i = 0, 1, 2, 3, of the agent's adjacent nodes in the current state, together with p_4 ∈ {0, 1}, which indicates whether the next action is a wait.
Step 3, training phase: train the convolutional neural network (CNN) on the training data set with the loss function, obtaining a trained neural network model for multi-agent path planning.
Fig. 5 is a schematic structural diagram of a neural network model in an embodiment of the present invention.
As shown in fig. 5, the neural network model in this embodiment consists of 3 convolution layers, 1 max pooling layer, 3 convolution layers, 1 max pooling layer, and 3 fully connected layers, and finally produces a 5 × 1 distribution vector q through a softmax activation function.
In training, the input to the neural network model is a 9 × 9 × 5 tensor, i.e., the 5 feature maps obtained in step 2 above (fig. 4). Each feature map is a binary matrix in which a 1 indicates that the coordinate point is an agent position, a target position, or an obstacle position (depending on the channel), and all other entries are 0.
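A minimal NumPy sketch of assembling this 9 × 9 × 5 observation tensor follows. Coordinate conventions and helper names are our assumptions, and the clip-to-boundary step is only a crude stand-in for the patent's projection of the agent-to-goal line onto the view boundary.

```python
import numpy as np

def local_observation(agent, positions, goals, obstacles, map_size, m=9):
    """Build the m x m x 5 binary tensor centered on agent `agent`.
    Channels: 0 own position, 1 own (sub-)goal, 2 other agents,
    3 other agents' goals, 4 obstacles (off-map cells count as obstacles)."""
    obs = np.zeros((m, m, 5), dtype=np.float32)
    r = m // 2
    cx, cy = positions[agent]

    def to_local(x, y):
        lx, ly = x - cx + r, y - cy + r
        return (lx, ly) if 0 <= lx < m and 0 <= ly < m else None

    obs[r, r, 0] = 1.0                      # the current agent sits at the center
    g = to_local(*goals[agent])
    if g is None:                           # goal outside view: clip to boundary
        gx, gy = goals[agent]               # (crude stand-in for the line projection)
        g = (int(np.clip(gx - cx + r, 0, m - 1)),
             int(np.clip(gy - cy + r, 0, m - 1)))
    obs[g[0], g[1], 1] = 1.0
    for k, (x, y) in enumerate(positions):
        if k == agent:
            continue
        p = to_local(x, y)
        if p:
            obs[p[0], p[1], 2] = 1.0
            q = to_local(*goals[k])
            if q:
                obs[q[0], q[1], 3] = 1.0
    for x in range(cx - r, cx + r + 1):     # obstacle channel, incl. off-map cells
        for y in range(cy - r, cy + r + 1):
            off_map = not (0 <= x < map_size and 0 <= y < map_size)
            if off_map or (x, y) in obstacles:
                obs[x - cx + r, y - cy + r, 4] = 1.0
    return obs
```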
In this embodiment, the loss function of the neural network model is:
loss = -pᵀ log q + c‖θ‖²
wherein p is the collected real label value, q is the probability of neural network prediction, θ is the network parameter, and c is the regularization coefficient for controlling the contribution of the regularization term to the loss function. The regularization term is introduced to constrain the model parameters to reduce the overfitting of the model on the training dataset.
During training, the neural network is optimized by gradient descent so as to drive the loss value down, i.e., to reduce the gap between the network prediction q and the true label p, making the prediction more accurate.
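A minimal PyTorch sketch consistent with this description is given below. Channel widths, 3 × 3 kernels, padding, and hidden sizes are assumptions; the patent fixes only the layer counts, the 9 × 9 × 5 input, and the 5-way softmax output.

```python
import torch
import torch.nn as nn

class PheromoneNet(nn.Module):
    """3 conv -> maxpool -> 3 conv -> maxpool -> 3 FC -> softmax over 5 actions."""
    def __init__(self, m=9):
        super().__init__()
        def conv(cin, cout):
            return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU())
        self.features = nn.Sequential(
            conv(5, 32), conv(32, 32), conv(32, 32), nn.MaxPool2d(2),
            conv(32, 64), conv(64, 64), conv(64, 64), nn.MaxPool2d(2))
        n = m // 4                              # 9 -> 4 -> 2 after the two poolings
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * n * n, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 5))

    def forward(self, x):                       # x: (batch, 5, m, m)
        return torch.softmax(self.head(self.features(x)), dim=1)

def loss_fn(q, p, params, c=1e-4):
    """loss = -p^T log q + c * ||theta||^2: cross-entropy against the soft
    pheromone label plus explicit L2 regularization."""
    ce = -(p * torch.log(q + 1e-12)).sum(dim=1).mean()
    l2 = sum((w ** 2).sum() for w in params)
    return ce + c * l2
```

In practice the c‖θ‖² term is often supplied through the optimizer's weight_decay rather than an explicit parameter sum; the explicit form above simply mirrors the loss as written.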
Step 4, execution: plan conflict-free optimal paths for the multi-agent cluster under test based on the trained neural network model. The specific process is as follows:
and 4-1, initializing a map for the multi-agent cluster to be tested, wherein the map comprises the positions and the number of the obstacles, the number of the agents to be tested, the corresponding initial positions and target positions.
Step 4-2, at each time step t, for each agent under test, acquire the local environment state information within the m × m observation range centered on that agent and feed it into the neural network model to obtain the network output, i.e., the pheromone distribution over the surrounding feasible nodes.
Step 4-3, the agent under test selects its next action based on the pheromone distribution over the surrounding feasible nodes, with selection probability:
p_ij^k = [τ_net(j)]^α · [η_je]^β / Σ_{l ∈ allowed_k} [τ_net(l)]^α · [η_le]^β, for j ∈ allowed_k
where p_ij^k denotes the probability that agent k at node i selects node j; τ_net is the pheromone distribution predicted by the neural network; η_je denotes the heuristic information, i.e., the inverse of the distance from node j to the target position e; α and β are hyperparameters representing the relative importance of the pheromone and the heuristic factor, respectively; and allowed_k denotes the feasible nodes of agent k in the current state.
The agent under test selects its next action a_j according to this probability formula until the target position is reached.
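The selection rule itself can be sketched as follows. This is an illustrative reading under assumptions: `tau_net` is taken to be the network's 5-way output indexed as up/down/left/right/wait, and the heuristic is the inverse Manhattan distance with +1 to avoid division by zero at the goal.

```python
import numpy as np

# action index -> grid offset: up, down, left, right, wait (assumed ordering)
MOVES = {0: (0, 1), 1: (0, -1), 2: (-1, 0), 3: (1, 0), 4: (0, 0)}

def select_action(pos, goal, tau_net, allowed, alpha=1.0, beta=2.0):
    """Sample the next action with probability proportional to
    tau_net(j)^alpha * eta_je^beta over the feasible actions `allowed`."""
    weights = []
    for a in allowed:
        nx, ny = pos[0] + MOVES[a][0], pos[1] + MOVES[a][1]
        dist = abs(nx - goal[0]) + abs(ny - goal[1])
        eta = 1.0 / (dist + 1.0)            # assumed inverse-distance heuristic
        weights.append((tau_net[a] ** alpha) * (eta ** beta))
    weights = np.asarray(weights)
    probs = weights / weights.sum()
    return int(np.random.choice(allowed, p=probs))
```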
Example operation and Effect
With the learning-based intelligent ant colony multi-agent path planning method of this embodiment, each ant becomes an intelligent individual by learning from historical experience, which greatly improves the planning efficiency of the ant colony algorithm and removes the limitation that the traditional ant colony algorithm must re-plan from scratch for every new task. First, the traditional ant colony algorithm is used to solve the multi-agent path planning problem: a path is planned for each agent and the resulting conflicts are then resolved, a process whose planning time grows exponentially with the number of agents. A convolutional neural network then learns from the planned conflict-free paths and predicts whether a conflict is likely to occur, so that the number of conflicts is reduced or conflicts are avoided altogether, greatly shortening the planning time.
In this embodiment, since planning is performed within the local observation range, only local environment information is required rather than global information; the method is therefore simple, convenient, and efficient, and scales well to larger maps.
The above examples are only for illustrating the specific embodiments of the present invention, and the present invention is not limited to the description scope of the above examples.

Claims (5)

1. A learning-based intelligent ant colony multi-agent path planning method, characterized by comprising the following steps:
step S1, setting the agents and their initial and target positions based on a map, randomly generating obstacles, and planning paths for the multiple agents using the traditional ant colony algorithm and conflict-based search;
step S2, collecting a training data set from the result of step S1, the training data containing the local environment state information of each agent and the corresponding feasible-space pheromone distribution, which is the probability distribution of the agent's next action in the current state;
step S3, training a convolutional neural network (CNN) on the training data set with a loss function, obtaining a trained neural network model for multi-agent path planning;
and step S4, planning conflict-free optimal paths for the multi-agent cluster under test based on the trained neural network model.
2. The learning-based intelligent ant colony multi-agent path planning method according to claim 1, characterized in that:
wherein step S1 comprises the following substeps:
step S1-1, in an n × n map, randomly generating a certain proportion of obstacles and the initial positions S = {s_1, s_2, …, s_k} and target positions G = {g_1, g_2, …, g_k} of k agents;
step S1-2, using the traditional ant colony algorithm to plan a path π_i = (a_0, a_1, …, a_n) for each agent, where i ∈ {1, 2, …, k} and a_t is the action taken by the agent at time step t, which can be a move to one of the four adjacent nodes (up, down, left, right) or a wait in place;
step S1-3, using conflict-based search to detect conflicts among these paths, selecting the minimum-cost branch and re-searching until no path conflicts remain, obtaining a set of k conflict-free paths Π = {π_1, π_2, …, π_k}.
3. The learning-based intelligent ant colony multi-agent path planning method according to claim 2, characterized in that:
wherein said training data is obtained by collecting one sample at each time step t of each conflict-free path obtained in said step S1-3,
the local environment state information is the state within an m × m observation range centered on the current agent, and comprises: the position of the current agent, the position of the current agent's target point, the positions of other agents within the observation range, the positions of those agents' target points, and the positions of obstacles;
the feasible-space pheromone distribution is expressed as (state, p), where state is the local environment state information and p is a 5 × 1 vector serving as the label corresponding to the environment information in the current state: the first 4 dimensions are the normalized pheromone values of the adjacent nodes in the up, down, left, and right directions, and the 5th dimension indicates whether the next step is a wait in place; when the next action is a move, the wait value is 0, and when it is a wait, the pheromone values in all four directions are 0 and the wait value is 1.
4. The intelligent ant colony multi-agent path planning method based on learning of claim 3, wherein:
wherein the training is as follows:
taking the training data as input to the neural network, the training data being an m × m × 5 tensor;
the neural network model has the structure of 3 convolution layers, 1 maximum pooling layer, 3 convolution layers, 1 maximum pooling layer and 3 full connection layers, and finally obtains a 5 multiplied by 1 distribution vector q through a softmax activation function;
the loss function is:
loss = -pᵀ log q + c‖θ‖²
where p is the collected true label, q is the probability predicted by the neural network, θ denotes the network parameters, and c is the regularization coefficient controlling the contribution of the regularization term to the loss;
the network parameters are updated by gradient descent and backpropagation during training so as to drive the loss value down, i.e., to reduce the gap between the network prediction q and the true label p, making the prediction more accurate.
5. The learning-based intelligent ant colony multi-agent path planning method according to claim 1, characterized in that:
wherein step S4 comprises the following substeps:
step S4-1, initializing a map, including the positions and number of obstacles, the number of agents under test, and their corresponding initial and target positions;
step S4-2, for each agent under test, acquiring the local environment state information within the m × m observation range centered on that agent and feeding it into the neural network model to obtain the network output, i.e., the pheromone distribution over the surrounding feasible nodes;
step S4-3, the agent under test selecting its next action based on the pheromone distribution over the surrounding feasible nodes, with selection probability:
p_ij^k = [τ_net(j)]^α · [η_je]^β / Σ_{l ∈ allowed_k} [τ_net(l)]^α · [η_le]^β, for j ∈ allowed_k
where p_ij^k denotes the probability that agent k at node i selects node j; τ_net is the pheromone distribution predicted by the neural network; η_je denotes the heuristic information, i.e., the inverse of the distance from node j to the target position e; α and β are hyperparameters representing the relative importance of the pheromone and the heuristic factor, respectively; and allowed_k denotes the feasible nodes of agent k in the current state;
the agent under test selects its next action a_j according to this probability formula until the target position is reached.
CN202310030779.0A 2023-01-10 2023-01-10 Intelligent ant colony multi-agent path planning method based on learning Pending CN117420821A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310030779.0A CN117420821A (en) 2023-01-10 2023-01-10 Intelligent ant colony multi-agent path planning method based on learning


Publications (1)

Publication Number Publication Date
CN117420821A 2024-01-19

Family

ID=89523471


Country Status (1)

Country Link
CN (1) CN117420821A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination