CN111489049B - Multi-agent distributed task allocation method - Google Patents
Multi-agent distributed task allocation method
- Publication number
- CN111489049B (application CN202010140739.8A)
- Authority
- CN
- China
- Prior art keywords
- task
- agent
- value
- type
- intelligent
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0631—Resource planning, allocation, distributing or scheduling for enterprises or organisations
Landscapes
- Business, Economics & Management (AREA)
- Human Resources & Organizations (AREA)
- Engineering & Computer Science (AREA)
- Strategic Management (AREA)
- Entrepreneurship & Innovation (AREA)
- Economics (AREA)
- Operations Research (AREA)
- Game Theory and Decision Science (AREA)
- Development Economics (AREA)
- Marketing (AREA)
- Educational Administration (AREA)
- Quality & Reliability (AREA)
- Tourism & Hospitality (AREA)
- Physics & Mathematics (AREA)
- General Business, Economics & Management (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer And Data Communications (AREA)
Abstract
The invention discloses a multi-agent distributed task allocation method, in particular a task allocation method based on a distributed negotiation mechanism, for realizing multi-agent task allocation under limited-communication conditions. The invention adopts the idea of an auction algorithm: agents auction, bid on, and negotiate over different tasks, and a suitable agent is finally selected to execute each task even when the communication topology is only locally connected. The auction process accounts for factors such as agent capability, task difficulty, and agent position. An information-updating mechanism among the agents ensures that task allocation is conflict-free, and the updating scheme reduces the computational load on each agent. The invention can effectively solve the multi-agent distributed task allocation problem under local-communication conditions.
Description
Technical Field
The invention relates to the technical field of intelligent agents, and in particular to a multi-agent distributed task allocation method.
Background
In complex, dynamic real-world environments, factors such as environmental change, time constraints, and uneven resource distribution require an intelligent system to solve coordination and cooperation problems, such as resource allocation, task scheduling, behavior coordination, and conflict resolution, under limited time and limited resources. Research on multi-agent systems mainly aims to give agents with independent functions a human-like cooperative awareness, so that they complete complex tasks through negotiation, cooperation, and coordination and solve problems that a single agent cannot. The multi-agent task allocation problem arises in many fields, including production scheduling, industrial manufacturing, and military strike planning.
For multi-agent cooperative task allocation, common distributed methods include market-mechanism-based methods, idle-chain-based methods, and threshold-response methods. Distributed task allocation has no central node or central controller: each agent makes decisions independently, and the task planning scheme emerges from communication, cooperation, and negotiation among the agents. Distributed decision-making computes in parallel, scales well, is robust, and therefore suits large-scale systems.
However, existing distributed task allocation schemes do not consider situations where communication between agents is blocked or where agents join and leave at any time. In both situations, a distributed task allocation scheme should reduce its dependence on communication bandwidth and improve allocation speed in order to prevent allocation conflicts.
Disclosure of Invention
In view of this, the invention provides a multi-agent distributed task allocation method that can allocate multi-target tasks effectively and quickly, reduce the required communication bandwidth, and improve allocation efficiency.
In order to achieve this purpose, the technical scheme of the invention comprises the following steps:

Step 1: each agent initializes agent information and task information. The agent information includes: agent number, agent position coordinates, agent speed, agent communication range, agent ability value, and agent state value.

The agent ability value is a quantized measure of the agent's physical capability; the agent state value is a state indicator of the agent. The initialized agent state value is 0.

Each piece of task information is initialized as well. The task information includes: task number, task position, and task difficulty value; the task difficulty value is the agent ability value required to complete the task.

Step 2: each agent calculates its bid value for every uncompleted task, selects the task for which its bid is highest, and acts as that task's manager; that is, each task manager corresponds to one task. The bid value measures the degree of match between the agent and the task.

Step 3: each task manager makes the following judgment: the task manager judges, according to its own ability value, whether it can complete the task. If it can, the task manager completes the corresponding task alone, modifies its agent state value to 2, and broadcasts a type-7 message to all agents within its communication range. The type-7 message indicates that the current task manager will complete the corresponding task alone.

An agent receiving a type-7 message updates the task state in its stored task information to record that the task manager completes the corresponding task alone.

If the task manager cannot complete the corresponding task alone, the task becomes a task to be organized; the task manager modifies its agent state value to 1 and broadcasts a type-1 message to all agents within its communication range. The type-1 message contains the task information of the task to be organized and the current task manager's bid value for it.

Step 4: an agent receiving a type-1 message checks its own state value.

If its state value is 0, it calculates its bid value for the task to be organized and sends that bid to the task manager of the task as a point-to-point type-2 message.

If its state value is 1, it judges whether the task to be organized is the same task for which it currently acts as manager. If not, it ignores the received type-1 message. If so, it judges whether its own bid value for the task is larger than that of the sender of the type-1 message; if larger, it ignores the message; otherwise it gives up its role as manager of the task, modifies its state value to 0, and sends its own bid value for the task to the sender as a point-to-point type-2 message.

If its state value is 2, it ignores the type-1 message directly.

Step 5: the agent acting as task manager receives type-2 messages within a set first time limit. When the limit is reached, it sorts the received type-2 messages by bid value from high to low, selects the type-2 sender with the highest bid, and sends it a point-to-point type-3 message. The type-3 message notifies the highest bidder to participate in executing the task to be organized.

Step 6: the agent receiving the type-3 message replies with a type-4 message to the sender; the type-4 message confirms participation in executing the task to be organized. The agent modifies its state value to 2 and starts executing the task.

Step 7: the agent acting as task manager receives type-4 messages within a set second time limit. When the limit is reached, it evaluates whether all agents that confirmed participation can together complete the task to be organized. If they can, the task manager broadcasts a type-5 message, modifies its state value to 2, and starts executing the task; the type-5 message indicates that the execution scheme of the task to be organized is settled. If they cannot, the task to be organized remains uncompleted, and the method returns to step 2.

Step 8: after an agent completes its task, it modifies its state value to 0 and broadcasts a type-6 message, notifying the agents within its communication range that the task is completed.

Step 9: return to step 2 until all tasks have been executed and completed.
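The state values and message types used throughout the steps above can be collected in one place. The following Python sketch is illustrative only: the numeric values are taken from the text, while all identifier names (AgentState, MessageType, and the member names) are hypothetical labels introduced here for readability.

```python
from enum import IntEnum

class AgentState(IntEnum):
    IDLE = 0        # free to bid on tasks
    ORGANIZING = 1  # acting as a task manager, recruiting helpers
    EXECUTING = 2   # busy executing a task

class MessageType(IntEnum):
    CALL_FOR_BIDS = 1  # manager broadcasts a task it cannot finish alone
    BID = 2            # agent sends its bid to the manager point-to-point
    AWARD = 3          # manager invites the highest bidder
    ACCEPT = 4         # bidder confirms participation
    PLAN_FIXED = 5     # manager announces the execution scheme is settled
    TASK_DONE = 6      # agent announces its task is finished
    SOLO_CLAIM = 7     # manager announces it completes the task alone
```

The numeric values matter because agents compare raw state values (0, 1, 2) when deciding how to react to an incoming message.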
Further, the agent may be an unmanned aerial vehicle, a robot, a reconnaissance aircraft, or an intelligent strike weapon.

Further, the agent information comprises the agent number i, agent position coordinates (x_i, y_i), agent speed v_i, communication range r_i, agent ability value a_i, and agent state value s_i, whose initial value is 0.

The task information comprises the task number j, task position coordinates (x_j^T, y_j^T), and task difficulty value d_j.

The agent calculates its bid value for each uncompleted task as follows: the bid value of agent i for task j is computed from a_i, d_j, and t_ij, where t_ij is the time agent i predicts it needs to reach task j.
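Since the exact bid formula appears as an image in the original filing, only its inputs are known from the text: the ability value a_i, the task difficulty value d_j, and the predicted arrival time t_ij. The sketch below assumes one plausible form, a bid that grows with ability and shrinks with difficulty and travel time; the function names and the formula itself are assumptions, not the patented expression.

```python
import math

def predicted_arrival_time(agent_pos, agent_speed, task_pos):
    """t_ij: time for agent i to reach task j, assuming straight-line travel."""
    dx = task_pos[0] - agent_pos[0]
    dy = task_pos[1] - agent_pos[1]
    return math.hypot(dx, dy) / agent_speed

def bid_value(ability, difficulty, t_ij):
    """Assumed bid form: higher ability raises the bid, while higher task
    difficulty and longer predicted travel time lower it."""
    return ability / (difficulty * t_ij)
```

Any monotone function with the same qualitative behavior would serve the negotiation protocol equally well, since only the ordering of bids matters in step 5.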
Further, in step 3 the task manager judges, according to its own ability value, whether it can complete the task; specifically: if the task manager's ability value is greater than or equal to the task difficulty value of the task, the task manager can complete the corresponding task alone.
Advantageous effects:

The invention provides a distributed task allocation method that uses an improved auction algorithm to perform fast, distributed, low-communication task allocation over target tasks. The method is highly practical: it guarantees a conflict-free allocation scheme even when the communication topology is not fully connected, and the task allocation rate reaches 100%. Agent behavior and communication rest on a unified framework, under which agents can be added or removed; this improves the robustness of the whole system and suits scenarios in which agents may be destroyed or may join at any time. Although the information topology of the whole system is not fully connected, the agent information-updating mechanism resolves conflicts in the allocation process: connected agents update each other's information periodically, which effectively guarantees a reasonable allocation scheme. The method therefore handles blocked inter-agent communication and agents joining or leaving at any time, places low demands on communication bandwidth, allocates tasks quickly, and effectively prevents allocation conflicts.
Drawings
FIG. 1 is a flow chart of a distributed task allocation method according to the present invention;
FIG. 2 is a schematic diagram illustrating state classification of agents;
FIG. 3 is a schematic diagram of the information held by an agent;
FIG. 4 is a basic structure diagram of a message.
Detailed Description
The invention is described in detail below by way of example with reference to the accompanying drawings.
The invention provides a multi-agent distributed task allocation method suitable for multi-agent distributed task allocation. An agent is an abstraction: it can sense dynamic conditions and information in its environment, perform actions that affect environmental conditions, and reason to solve problems. The entities it maps to may be: fire-extinguishing unmanned aerial vehicles in a forest fire; rescue robots in a natural-disaster rescue task; or reconnaissance aircraft and intelligent strike weapons in a military strike mission.
All of the above scenarios share several features:

The ability of a single agent to accomplish a task is limited and varies from agent to agent. Many situations require multiple agents to cooperate at a target task point. In disaster relief, for example, a robot's carrying capacity, search capacity, and so on are limited; these values are assigned in advance according to expert experience and are abstracted below as the "agent ability value".

The degree of difficulty varies from task to task. In a forest fire-fighting task, for example, a small ignition point is an easy task that one fire-extinguishing unmanned vehicle can complete alone, while a task with high fire intensity is difficult and requires several unmanned aerial vehicles to complete the extinguishing. Below, the difficulty of a task is represented by the "task difficulty value".

Because several agents must jointly execute and complete a task, the allocation scheme requires one agent to negotiate and coordinate the allocation. Below, the agent responsible for coordinating the allocation scheme is called the "manager".
The multi-agent multi-task distributed allocation method provided by the invention assumes no global communication and only a local communication topology; the agents negotiate in a distributed manner to generate a fast and reasonable allocation scheme. The invention takes an improved auction algorithm as the solving algorithm for distributed task allocation. The algorithm places low demands on communication bandwidth, allocates tasks quickly, and effectively prevents allocation conflicts.
As shown in FIG. 1, the present invention provides a multi-agent distributed task allocation method, which comprises the following steps:

Step 1: each agent initializes its agent information, which includes: agent number, agent position coordinates, agent speed, agent communication range, agent ability value, and agent state value.

The agent ability value is a quantized measure of the agent's physical capability; the agent state value is a state indicator of the agent. The initialized agent state value is 0, indicating that the agent is idle.

In the embodiment of the invention, the agent information comprises the agent number i, agent position coordinates (x_i, y_i), agent speed v_i, communication range r_i, agent ability value a_i, and agent state value s_i with initial value 0, as shown in FIG. 2.

Each piece of task information is initialized as well; it includes the task number, task position, and task difficulty value. A task state may also be added in order to record the execution status of each task. The task difficulty value is the agent ability required to complete the task, and it is quantized in the same way as the agent ability value. For example, for a climbing robot the ability value may quantize its climbing capability as the maximum slope within its performance range, and for the corresponding climbing task the difficulty value is likewise a slope.

In the embodiment of the invention, the task information comprises the task number j, task position coordinates (x_j^T, y_j^T), and task difficulty value d_j, as shown in FIG. 3.

Step 2: each agent calculates its bid value for every uncompleted task, selects the task for which its bid is highest, and acts as that task's manager. Each task manager corresponds to one task, but the same task may temporarily have several managers. The bid value measures the degree of match between the agent and the task; a larger bid value indicates that the agent is better suited to complete the task.

Step 3: each task manager makes the following judgment: the task manager judges, according to its own ability value, whether it can complete the corresponding task (i.e. whether its ability value is greater than or equal to the task difficulty value). If it can, the task manager completes the task alone, modifies its agent state value to 2 (indicating that the agent is busy executing a task), and broadcasts a type-7 message to all agents within its communication range. The type-7 message indicates that the current task manager will complete the corresponding task alone.

Agent i is able to communicate with agent k when the distance between them lies within the communication range.
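A minimal sketch of this communication predicate, assuming planar coordinates and that "within the communication range" means the Euclidean distance does not exceed agent i's range r_i (the function name is ours):

```python
import math

def can_communicate(pos_i, pos_k, comm_range_i):
    """Agent i can reach agent k when k lies inside i's communication range."""
    return math.dist(pos_i, pos_k) <= comm_range_i
```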
An agent receiving a type-7 message updates the task state in its stored task information to record that the task manager completes the corresponding task alone.

If the task manager cannot complete the corresponding task alone, the task becomes a task to be organized; the task manager modifies its agent state value to 1 (indicating that the agent is busy organizing a task) and broadcasts a type-1 message to all agents within its communication range. The type-1 message contains the task information of the task to be organized and the current task manager's bid value for it.

FIG. 4 shows the basic structure of a message; all message transmission between agents follows this structure. The update-information part in FIG. 4 is the agent's summary of the execution status of all tasks. Whenever a message of any type is transmitted, the update information is transmitted and updated along with it: the update information is mapped by a hash function to a fixed-length string, and this hash string, together with the task-information source data, is sent to other agents through the broadcast protocol. On receiving an update message, the receiver compares the hash string of its own update information with the received one. If they are identical, the update information of sender and receiver is consistent and the individual entries need not be checked one by one, which reduces the computational load on the agent.
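The digest comparison described here can be sketched as follows; SHA-256 and the JSON canonicalization are assumptions standing in for whatever hash function and encoding the filing uses, and both function names are ours.

```python
import hashlib
import json

def digest(update_info):
    """Fixed-length fingerprint of the task-status summary carried in every message."""
    canonical = json.dumps(update_info, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def needs_full_sync(local_info, received_digest):
    """Compare digests first; only on a mismatch must entries be checked one by one."""
    return digest(local_info) != received_digest
```

Because equal summaries hash to equal strings, the common case (both agents already consistent) costs one hash comparison instead of an entry-by-entry scan.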
Step 4: an agent receiving a type-1 message checks its own state value.

If its state value is 0, it calculates its bid value for the task to be organized and sends that bid to the task manager of the task as a point-to-point type-2 message.

If its state value is 1, it judges whether the task to be organized is the same task for which it currently acts as manager. If not, it ignores the received type-1 message. If so, it judges whether its own bid value for the task is larger than that of the sender of the type-1 message; if larger, it ignores the message; otherwise it gives up its role as manager of the task, modifies its state value to 0, and sends its own bid value for the task to the sender as a point-to-point type-2 message.

If its state value is 2, the agent is busy executing a task and therefore ignores the type-1 message.

Step 5: the agent acting as task manager receives type-2 messages within a set first time limit (in the embodiment of the invention, the first time limit is set empirically). When the limit is reached, it sorts the received type-2 messages by bid value from high to low, selects the type-2 sender with the highest bid, and sends it a point-to-point type-3 message. The type-3 message notifies the highest bidder to participate in executing the task to be organized.
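The selection rule of step 5 reduces to sorting the collected bids and taking the top sender. A sketch, with an assumed (sender_id, bid_value) tuple standing in for a received type-2 message:

```python
def select_winner(bids):
    """bids: list of (sender_id, bid_value) received within the first time limit.
    Sort from high to low and return the highest bidder's id, as in step 5."""
    if not bids:
        return None  # nobody answered the call for bids
    ranked = sorted(bids, key=lambda b: b[1], reverse=True)
    return ranked[0][0]
```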
Step 6: the highest-bidding agent, on receiving the type-3 message, replies with a type-4 message to the sender; the type-4 message confirms participation in executing the task to be organized. The agent modifies its state value to 2, indicating that it is busy executing a task, and starts executing.

Step 7: the agent acting as task manager receives type-4 messages within a set second time limit (in the embodiment of the invention, the second time limit is set empirically). When the limit is reached, it evaluates whether all agents that confirmed participation can together complete the task to be organized, that is, whether the ability value of manager i plus the sum of the ability values of the agents in Omega_i is greater than or equal to the difficulty value of task j, where Omega_i denotes the set of agents successfully summoned by agent i. If they can, the task manager broadcasts a type-5 message, modifies its state value to 2, and starts executing the task; the type-5 message indicates that the execution scheme of the task to be organized is settled. If they cannot, the task to be organized remains uncompleted, and the method returns to step 2.
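The feasibility test of step 7, with Omega_i represented as a plain list of helper ability values (the function name is ours), can be sketched as:

```python
def team_can_complete(manager_ability, helper_abilities, task_difficulty):
    """Step 7 condition: the manager's ability value plus the ability values of
    all confirmed helpers must cover the task difficulty value."""
    return manager_ability + sum(helper_abilities) >= task_difficulty
```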
An agent receiving a type-5 message updates the task state in its stored task information to record that the execution scheme has been determined, which avoids subsequent conflicts in task allocation.

Step 8: after an agent completes its task, it modifies its state value from 2 (busy executing) back to 0 (idle) and broadcasts a type-6 message, notifying the agents within its communication range that the task is completed.

When an agent receives a type-6 message, it updates the task state in its task information to completed.

Step 9: return to step 2 until all tasks have been executed and completed.
In summary, the above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (4)
1. A multi-agent distributed task allocation method, comprising the following steps:

step 1: each agent initializes agent information and task information;

the agent information includes: agent number, agent position coordinates, agent speed, agent communication range, agent ability value, and agent state value;

wherein the agent ability value is a quantized measure of the agent's physical capability; the agent state value is a state indicator of the agent; the initialized agent state value is 0;

each piece of task information is initialized, the task information including: task number, task position, and task difficulty value; the task difficulty value is the agent ability value required to complete the task;
step 2: each agent calculates its bid value for every uncompleted task, selects the task for which its bid is highest, and acts as that task's manager; that is, each task manager corresponds to one task; the bid value measures the degree of match between the agent and the task;
step 3: each task manager makes the following judgment: the task manager judges, according to its own ability value, whether it can complete the task; if it can, the task manager completes the corresponding task alone, modifies its agent state value to 2, broadcasts a type-7 message to all agents within its communication range, and proceeds to step 8; the type-7 message indicates that the current task manager will complete the corresponding task alone;

if the task manager cannot complete the corresponding task alone, the task becomes a task to be organized; the task manager modifies its agent state value to 1 and broadcasts a type-1 message to all agents within its communication range; the type-1 message contains the task information of the task to be organized and the current task manager's bid value for it;
step 4: an agent receiving a type-1 message checks its own state value;

if its state value is 0, it calculates its bid value for the task to be organized and sends that bid to the task manager of the task as a point-to-point type-2 message;

if its state value is 1, it judges whether the task to be organized is the same task for which it currently acts as manager; if not, it ignores the received type-1 message; if so, it judges whether its own bid value for the task is larger than that of the sender of the type-1 message; if larger, it ignores the message; otherwise it gives up its role as manager of the task, modifies its state value to 0, and sends its own bid value for the task to the sender as a point-to-point type-2 message;

if its state value is 2, it ignores the type-1 message directly after receiving it;
step 5: the agent acting as task manager receives type-2 messages within a set first time limit; when the limit is reached, it sorts the received type-2 messages by bid value from high to low, selects the type-2 sender with the highest bid, and sends it a point-to-point type-3 message; the type-3 message notifies the highest bidder to participate in executing the task to be organized;

step 6: the agent receiving the type-3 message replies with a type-4 message to the sender and modifies its state value to 2 to start executing the task; the type-4 message confirms participation in executing the task to be organized;

step 7: the agent acting as task manager receives type-4 messages within a set second time limit; when the limit is reached, it evaluates whether all agents that confirmed participation can complete the task to be organized; if they can, the task manager broadcasts a type-5 message, modifies its state value to 2, and starts executing the task; the type-5 message indicates that the execution scheme of the task to be organized is settled; if they cannot, the task to be organized remains uncompleted, and the method returns to step 2;

step 8: after an agent completes its task, it modifies its state value to 0 and broadcasts a type-6 message, notifying the agents within its communication range that the task is completed;

step 9: return to step 2 until all tasks have been executed and completed.
2. The method of claim 1, wherein the agent is an unmanned aerial vehicle, a robot, a reconnaissance aircraft, or an intelligent strike weapon.
3. The method of claim 2, wherein the agent information comprises the agent number i, agent position coordinates (x_i, y_i), agent speed v_i, communication range r_i, agent ability value a_i, and agent state value s_i, whose initial value is 0;

the task information comprises the task number j, task position coordinates (x_j^T, y_j^T), and task difficulty value d_j;

the agent calculates its bid value for each uncompleted task as follows: the bid value of agent i for task j is computed from a_i, d_j, and t_ij, where t_ij is the time agent i predicts it needs to reach task j.
4. The method according to claim 3, wherein in step 3 the task manager determines, according to its own capability value, whether it can complete the corresponding task, specifically:
if the capability value of the task manager is greater than or equal to the difficulty value of the task, the task manager can complete the corresponding task.
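The capability test of claim 4, together with an arrival-time-based bid in the spirit of claim 3, can be sketched as below. The straight-line travel-time estimate and the capability-over-time bid formula are illustrative assumptions: the claims state only that the bid depends on the estimated arrival time t_ij, and the exact formula is not reproduced in this text.

```python
import math

def arrival_time(agent_pos, agent_speed, task_pos):
    # t_ij: straight-line distance divided by agent speed
    # (an illustrative estimate; the claims do not fix the travel model)
    dx = task_pos[0] - agent_pos[0]
    dy = task_pos[1] - agent_pos[1]
    return math.hypot(dx, dy) / agent_speed

def can_complete(capability, difficulty):
    # claim 4: the task manager can complete the task iff its
    # capability value is at least the task difficulty value
    return capability >= difficulty

def bid_value(capability, t_ij):
    # Hypothetical bid: a shorter estimated arrival time yields a
    # higher bid; the actual formula in the patent is not shown here.
    return capability / (1.0 + t_ij)
```

Under this sketch, ties in capability are broken by proximity: an agent that expects to arrive sooner submits the larger bid.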
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010140739.8A CN111489049B (en) | 2020-03-03 | 2020-03-03 | Multi-agent distributed task allocation method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111489049A CN111489049A (en) | 2020-08-04 |
CN111489049B true CN111489049B (en) | 2022-07-05 |
Family
ID=71794317
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010140739.8A Active CN111489049B (en) | 2020-03-03 | 2020-03-03 | Multi-agent distributed task allocation method |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112070383B (en) * | 2020-08-31 | 2022-04-12 | 北京理工大学 | Dynamic task-oriented multi-agent distributed task allocation method |
CN112529313B (en) * | 2020-12-17 | 2022-12-09 | 中国航空综合技术研究所 | Intelligent human-machine engineering design optimization method based on negotiation strategy |
CN116260882B (en) * | 2023-05-15 | 2023-07-28 | 中国人民解放军国防科技大学 | Multi-agent scheduling asynchronous consistency method and device with low communication flow |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006031570A (en) * | 2004-07-21 | 2006-02-02 | Yaskawa Electric Corp | Work assignment method for distributed cooperation system |
CN101364110A (en) * | 2008-09-28 | 2009-02-11 | 重庆邮电大学 | Cooperating work control method and system for robot of multiple degree of freedom |
CN105975332A (en) * | 2016-05-03 | 2016-09-28 | 北京理工大学 | Method for forming multi-agent distributed union |
CN106875090A (en) * | 2017-01-09 | 2017-06-20 | 中南大学 | A kind of multirobot distributed task scheduling towards dynamic task distributes forming method |
CN110852486A (en) * | 2019-10-16 | 2020-02-28 | 中国人民解放军国防科技大学 | Task planning method for autonomous cooperation of unmanned aerial vehicle cluster |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111489049B (en) | Multi-agent distributed task allocation method | |
CN112070383B (en) | Dynamic task-oriented multi-agent distributed task allocation method | |
CN111461488B (en) | Multi-robot distributed cooperative task allocation method facing workshop carrying problem | |
CN109922137B (en) | Unmanned aerial vehicle assisted calculation migration method | |
CN112698634B (en) | Event trigger-based traffic intelligent system fixed time dichotomy consistency method | |
CN109951568B (en) | Aviation cluster mixed multi-layer alliance building method for improving contract network | |
CN113312172B (en) | Multi-unmanned aerial vehicle cluster dynamic task scheduling model based on adaptive network | |
CN113448703B (en) | Unmanned plane bee colony dynamic reconnaissance task scheduling system and method based on perception array | |
Masadeh et al. | Reinforcement learning-based security/safety UAV system for intrusion detection under dynamic and uncertain target movement | |
CN114326827B (en) | Unmanned aerial vehicle cluster multitasking dynamic allocation method and system | |
CN116610144A (en) | Unmanned plane collaborative dynamic task allocation method based on expansion consistency packet algorithm | |
CN115016537A (en) | Heterogeneous unmanned aerial vehicle configuration and mission planning joint optimization method under SEDA scene | |
CN114115329A (en) | Relay cooperative unmanned aerial vehicle task planning method and device | |
CN112818207A (en) | Network structure search method, device, equipment, storage medium and program product | |
CN109617968B (en) | Communication means between Multi-Agent Cooperation system and its intelligent body, intelligent body | |
He et al. | An operation planning generation and optimization method for the new intelligent combat SoS | |
Khan et al. | An efficient optimization technique for node clustering in VANETs using gray wolf optimization | |
CN110673651A (en) | Robust formation method for unmanned aerial vehicle cluster under limited communication condition | |
CN114326824B (en) | Heterogeneous high-density hybrid unmanned aerial vehicle cluster topology control method based on bionic algorithm | |
Han et al. | Cooperative Multi-task Assignment of Unmanned Autonomous Helicopters Based on Hybrid Enhanced Learning ABC Algorithm | |
CN113495574B (en) | Unmanned aerial vehicle group flight control method and device | |
CN114792072A (en) | Function-based equipment decision behavior simulation modeling method and system | |
CN116009569A (en) | Heterogeneous multi-unmanned aerial vehicle task planning method based on multi-type gene chromosome genetic algorithm in SEAD task scene | |
CN110618689B (en) | Multi-UUV system negotiation cooperation modeling method based on contract net under constraint condition | |
CN113050678A (en) | Autonomous cooperative control method and system based on artificial intelligence |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||