CN112114594A - Multi-unmanned aerial vehicle cooperative control method and system based on vision and performance constraints - Google Patents

Multi-unmanned aerial vehicle cooperative control method and system based on vision and performance constraints

Info

Publication number
CN112114594A
Authority
CN
China
Prior art keywords
unmanned aerial
follower
error
pilot
unmanned
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011088200.9A
Other languages
Chinese (zh)
Other versions
CN112114594B (en)
Inventor
王耀南
林杰
缪志强
毛建旭
张辉
朱青
钟杭
唐永鹏
聂静谋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan University
Original Assignee
Hunan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan University filed Critical Hunan University
Priority to CN202011088200.9A priority Critical patent/CN112114594B/en
Publication of CN112114594A publication Critical patent/CN112114594A/en
Priority to PCT/CN2021/075626 priority patent/WO2022077817A1/en
Application granted granted Critical
Publication of CN112114594B publication Critical patent/CN112114594B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10: Simultaneous control of position or course in three dimensions
    • G05D1/101: Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G05D1/104: Simultaneous control of position or course in three dimensions specially adapted for aircraft involving a plurality of aircrafts, e.g. formation flying

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention provides a multi-unmanned aerial vehicle cooperative control method and system based on vision and performance constraints. The method comprises the following steps: step S1: decomposing the total target task to obtain independent subtasks and establishing an unmanned system for each subtask; step S2: selecting an optimal pilot within the unmanned system of the current subtask, the follower detecting an ArUco fiducial marker carried by the pilot to acquire the relative pose between the pilot and the follower; step S3: establishing an unmanned system model of the subtask based on the pilot-follower framework; step S4: designing an error transformation method based on a predetermined task performance specification; step S5: designing a PID control law for the follower according to the transformed error, ensuring that the follower tracks the pilot with the predetermined task performance, and finally achieving autonomous cooperative control of the multiple unmanned aerial vehicles. The invention achieves effective multi-dimensional cooperation in time, space, and tasks in GPS-denied environments, and meets the requirements of miniaturization, intelligence, and autonomy.

Description

Multi-unmanned aerial vehicle cooperative control method and system based on vision and performance constraints
Technical Field
The invention relates to the technical field of unmanned aerial vehicles, in particular to a multi-unmanned aerial vehicle cooperative control method and system based on vision and performance constraints.
Background
With the rapid development of new-generation information technologies represented by artificial intelligence, military operations and civilian industry are shifting from manned tasks to autonomous unmanned aerial vehicle tasks. In particular, to meet the operational and working requirements of high dynamics, high danger, and multitasking, autonomous unmanned systems have become an important support for military and industrial intelligence. Autonomous unmanned systems have great application demand in military and civil fields, including military reconnaissance, strikes, information relay, topographic mapping, and intelligent warehousing, and, owing to their low cost, wide applicability, and good effectiveness, they are a new technical high ground in military operations and industrial development. Unmanned systems have long been a research hotspot at home and abroad, but most research still focuses on multi-unmanned-aerial-vehicle task allocation methods; autonomous cooperative control schemes for multiple unmanned aerial vehicles under task performance constraints are few, especially in GPS-denied environments.
Disclosure of Invention
The invention provides a multi-unmanned aerial vehicle cooperative control method and system based on vision and performance constraints, aiming to solve the technical problem, identified in the background above, of autonomous cooperative control of multiple unmanned aerial vehicles in a GPS-denied environment.
In order to achieve the above object, an embodiment of the present invention provides a multi-drone cooperative control method based on vision and performance constraints, including the following steps:
step S1: decomposing the total target task to obtain independent subtasks, determining the type and number of unmanned aerial vehicles according to the subtasks, and establishing the unmanned system of each subtask;
step S2: selecting an optimal pilot within the unmanned system of the current subtask, the follower detecting the ArUco fiducial marker carried by the pilot, thereby acquiring the relative pose between the pilot and the follower;
step S3: establishing an unmanned system model of the subtask based on the pilot-follower framework;
step S4: designing an error transformation method based on a predetermined task performance specification;
step S5: designing a PID control law for the follower according to the transformed error, ensuring that the follower tracks the pilot with the predetermined task performance, and finally achieving autonomous cooperative control of the multiple unmanned aerial vehicles.
Preferably, step S1 specifically includes the following steps:
step S101: decomposing the total target task to obtain independent subtasks;
step S102: selecting aerial unmanned aerial vehicles, ground unmanned vehicles, or a combination of the two according to the requirements of the subtask, determining the number of vehicles, and establishing the unmanned system of the subtask.
Preferably, step S2 specifically includes the following steps:
step S201: in the unmanned system of the current subtask, the control station selects an optimal unmanned aerial vehicle as the pilot, which receives the tasks issued by the control station;
step S202: each unmanned aerial vehicle carries an ArUco square fiducial marker of known size, and the follower detects the marker using on-board vision; the ArUco marker consists of a black border and an internal binary matrix that determines its identifier, and a single marker provides enough point correspondences (its four corners) to obtain the pose of the camera relative to the marker; the relative pose ζ_lf between the pilot and the follower is then obtained through the fixed transformations between the camera coordinate system and the follower coordinate system and between the ArUco marker coordinate system and the pilot coordinate system; since each unmanned aerial vehicle's ArUco marker provides a distinct ID, reliable and effective following is guaranteed.
Preferably, step S3 includes establishing the pilot-follower model according to the pilot-follower framework:
ζ_lf = ζ_l − ζ_f
where ζ_lf is the pose of the pilot relative to the follower, ζ_l is the pose of the pilot in the world coordinate system, and ζ_f is the pose of the follower in the world coordinate system.
Preferably, the unmanned system model is composed of n-1 pilot-follower models mentioned above, where n is the number of unmanned aerial vehicles in the unmanned system.
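By way of editorial illustration only (not part of the patent; the vector pose representation and the numbers are assumptions), the n−1 pilot-follower pairs and their relative-pose errors can be sketched as:

```python
import numpy as np

class PilotFollowerPair:
    """One pilot-follower pair: a follower tracking a desired pose
    relative to its pilot. World-frame poses appear here only for
    simulation; on hardware zeta_lf would come from on-board vision."""

    def __init__(self, desired_relative_pose):
        self.zeta_lf_d = np.asarray(desired_relative_pose, dtype=float)

    def error(self, zeta_l, zeta_f):
        zeta_lf = np.asarray(zeta_l) - np.asarray(zeta_f)  # zeta_lf = zeta_l - zeta_f
        return zeta_lf - self.zeta_lf_d                    # e = zeta_lf - zeta_lf_d

# An unmanned system of n drones is modeled as n-1 such pairs.
n = 3
pairs = [PilotFollowerPair([-1.0, 0.5 * i, 0.0]) for i in range(1, n)]
print(pairs[0].error(zeta_l=[2.0, 1.0, 0.0], zeta_f=[3.0, 0.4, 0.0]))
```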
Preferably, step S4 specifically includes the following steps:
step S401: defining the error, specifically:
an error transformation method with a predetermined performance index is designed according to the task; with the relative pose between the pilot and the follower estimated from the ArUco marker, the error is defined as
e = ζ_lf − ζ_lf^d
where ζ_lf^d represents the desired pose of the pilot relative to the follower; if, according to visual kinematics, the relative pose ζ_lf between the pilot and the follower is represented indirectly by image information, then e represents the error between the currently acquired image features and the desired image features;
step S402: defining the error performance, specifically:
an error performance function is defined so that the output error e_k converges, along an absolutely decaying function of time ρ_k(t), to a predefined residual set:
−γ_k ρ_k(t) < e_k(t) < γ̄_k ρ_k(t)
where e_k is the kth output error component of the error vector e, the set parameters γ_k and γ̄_k are positive, and ρ_k(0) represents the initial maximum allowable error, chosen so that the initial error satisfies 0 < ||e_k(0)|| < ρ_k(0); the absolutely decaying time function ρ_k(t) is designed as
ρ_k(t) = (ρ_k(0) − ρ_k∞)e^(−lt) + ρ_k∞
where the parameter l > 0 controls the speed of exponential convergence and ρ_k∞ represents the steady-state level of the predetermined task performance specification, which can be designed small enough to guarantee the task performance specification;
step S403: setting an output error function, specifically:
to achieve control that meets the task performance specification, the output error is expressed as:
e_k = S(ε_k)ρ_k(t)
where S(ε_k) is a continuous, smooth, monotonically increasing function satisfying:
−γ_k < S(ε_k) < γ̄_k, with lim S(ε_k) = −γ_k as ε_k → −∞ and lim S(ε_k) = γ̄_k as ε_k → +∞
according to these requirements, the transformation function is designed as:
S(ε_k) = (γ̄_k e^(ε_k) − γ_k e^(−ε_k)) / (e^(ε_k) + e^(−ε_k))
step S404: obtaining the error transformation function with the predetermined performance specification: define x_k = e_k/ρ_k; since S(ε_k) is strictly increasing, its inverse function always exists, so the transformed error ε_k with the predetermined performance specification is described as:
ε_k = S^(−1)(x_k) = (1/2)ln((x_k + γ_k)/(γ̄_k − x_k))
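The transformation in steps S402-S404 can be prototyped directly; the following sketch is an editorial illustration (the parameter values and function names are assumptions, not taken from the patent), implementing the funnel ρ_k(t), the map S, and its inverse:

```python
import numpy as np

def rho(t, rho0=1.0, rho_inf=0.05, l=1.0):
    """Absolutely decaying performance funnel rho_k(t)."""
    return (rho0 - rho_inf) * np.exp(-l * t) + rho_inf

def S(eps, g_lo=1.0, g_hi=1.0):
    """Smooth, strictly increasing map with range (-g_lo, g_hi)."""
    return (g_hi * np.exp(eps) - g_lo * np.exp(-eps)) / (np.exp(eps) + np.exp(-eps))

def S_inv(x, g_lo=1.0, g_hi=1.0):
    """Transformed error eps_k = S^{-1}(e_k / rho_k(t))."""
    return 0.5 * np.log((x + g_lo) / (g_hi - x))

# The raw error stays inside the funnel as long as eps_k stays bounded:
t, e_k = 0.5, 0.3
eps_k = S_inv(e_k / rho(t))
assert abs(S(eps_k) * rho(t) - e_k) < 1e-9  # e_k = S(eps_k) * rho_k(t)
```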
preferably, step S5 specifically includes the following steps:
step S501: designing the PID control law of the follower according to the converted error to ensure the converted errorkConvergence, the discrete form of the control law is as follows:
Figure RE-GDA0002784860950000042
wherein u iskDenotes the kth control quantity, kpIs the proportionality coefficient, kiIs the integral coefficient, kdIs a differential coefficient;
step S502: by the properties of the error transformation function, once the transformed error ε_k converges, the true state error e_k is recovered through the error transformation with the predetermined performance specification obtained in step S404 and converges with the preset performance; the multiple unmanned aerial vehicles thus complete the expected subtasks, completing each subtask synchronously or sequentially according to the total task target, and finally achieving effective multi-dimensional cooperation in time, space, and task.
The embodiment of the invention further provides a vision-based multi-unmanned aerial vehicle autonomous cooperative control system comprising a control station and a plurality of unmanned aerial vehicles controlled by it. The unmanned aerial vehicles include a plurality of aerial unmanned aerial vehicles and a plurality of ground unmanned vehicles, each group comprising a pilot and followers. The control station controls the pilot within each subtask and assigns it an offline automatic or real-time manual task; the followers maintain the desired relative pose with respect to the pilot and move along with it, realizing autonomous cooperative control of the multiple unmanned aerial vehicles.
Preferably, each unmanned aerial vehicle includes a control unit, a perception unit, a communication unit, and a power supply unit. The communication unit receives tasks issued by the control station; the perception unit is an on-board camera with which the follower detects the ArUco marker carried by the pilot; the control unit is an on-board CPU that computes and outputs the control law of the unmanned aerial vehicle; and the power supply unit supplies electric energy to the unmanned aerial vehicle.
Preferably, the aerial unmanned aerial vehicle includes an on-board camera that can be rotated as appropriate for different tasks.
The technical effects that can be achieved by the invention are as follows: effective multi-dimensional cooperation in time, space, and tasks can be realized in GPS-denied environments, meeting the requirements of unmanned system miniaturization, intelligence, and autonomy. Cooperative control of the unmanned system mainly depends on perception and control technology. The visual perception method relies on the ArUco marker detected by an on-board camera to obtain the pose of a target relative to a local coordinate system; it does not depend on GPS, can be deployed in indoor or outdoor scenes, and offers small size, low cost, and rich target information, which helps a large number of low-cost, miniaturized unmanned aerial vehicles form a large-scale autonomous cooperative unmanned system. The design of an unmanned aerial vehicle controller usually needs to consider output-constrained control. Common methods include model predictive control (MPC) and control based on barrier functions; however, these controllers can only guarantee spatial constraints. Considering the more general case of simultaneous constraints on time and space, the control method based on the predetermined performance specification effectively handles this double constraint, improves the intelligence level of the unmanned aerial vehicle, and enables it to complete target tasks more accurately. The unmanned system based on the pilot-follower framework is simple to implement and scalable in application, which facilitates distributed cooperative control of the unmanned system and further improves its autonomy.
Compared with the prior art, the invention has the advantages that:
(1) high degree of miniaturization
The control law based on the performance function has strong robustness, low computational complexity, and independence from model information, and it reduces the computational load of the unmanned aerial vehicle, so the perception and computing hardware can be further reduced in size; such low-cost, small unmanned aerial vehicles facilitate the establishment of a large-scale autonomous cooperative unmanned system.
(2) Considering time constraints
The control law provided by the invention ensures that the output error satisfies time and space constraints simultaneously: the output error converges to a predefined residual set along an absolutely decaying function of time, so the multi-unmanned-aerial-vehicle autonomous cooperative control system can complete the target task more accurately.
(3) High degree of autonomy
The unmanned system provided by the invention is based on the pilot-follower framework, is simple to implement and scalable in application, and can be used to establish unmanned systems of different scales for different tasks. Distributed cooperative control is realized based on on-board vision: no communication is needed between unmanned aerial vehicles, cooperative control is achieved by means of rich visual information, and the autonomy of the unmanned system is further enhanced.
Drawings
Fig. 1 is a general flow chart of a vision-based vision and performance constraint-based multi-drone cooperative control method according to the present invention;
fig. 2 is a schematic diagram of an embodiment of a vision-based vision and performance constraint-based multi-drone cooperative control method according to the present invention;
fig. 3 is a detailed flowchart of a navigator-follower control method of the vision-based vision and performance constraint-based multi-drone cooperative control method of the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantages of the present invention more apparent, the following detailed description is given with reference to the accompanying drawings and specific embodiments.
To address the existing problems, the invention provides a multi-unmanned aerial vehicle cooperative control method based on vision and performance constraints, as shown in fig. 1 and 3, comprising the following steps:
step S1: decomposing the total target task to obtain independent subtasks, determining the type and number of unmanned aerial vehicles according to the subtasks, and establishing the unmanned system of each subtask;
step S2: selecting an optimal pilot within the unmanned system of the current subtask, the follower detecting the ArUco fiducial marker carried by the pilot, thereby acquiring the relative pose between the pilot and the follower;
step S3: establishing an unmanned system model of the subtask based on the pilot-follower framework;
step S4: designing an error transformation method based on a predetermined task performance specification;
step S5: designing a PID control law for the follower according to the transformed error, ensuring that the follower tracks the pilot with the predetermined task performance, and finally achieving autonomous cooperative control of the multiple unmanned aerial vehicles.
Specifically, step S1 includes the following steps:
step S101: decomposing the total target task to obtain independent subtasks;
step S102: selecting aerial unmanned aerial vehicles, ground unmanned vehicles, or a combination of the two according to the requirements of the subtask, determining the number of vehicles, and establishing the unmanned system of the subtask.
Step S2 specifically includes the following steps:
step S201: in the unmanned system of the current subtask, the control station selects an optimal unmanned aerial vehicle as the pilot, which receives the tasks issued by the control station;
step S202: each unmanned aerial vehicle carries an ArUco square fiducial marker of known size, and the follower detects the marker using on-board vision; the ArUco marker consists of a black border and an internal binary matrix that determines its identifier, and a single marker provides enough point correspondences (its four corners) to obtain the pose of the camera relative to the marker; the relative pose ζ_lf between the pilot and the follower is then obtained through the fixed transformations between the camera coordinate system and the follower coordinate system and between the ArUco marker coordinate system and the pilot coordinate system; since each unmanned aerial vehicle's ArUco marker provides a distinct ID, reliable and effective following is guaranteed.
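For illustration only, this perception step could be sketched with OpenCV's ArUco module as below; this is an editorial sketch assuming the classic cv2.aruco API (opencv-contrib-python, OpenCV 4.6 or earlier), a calibrated camera (K, dist), and known fixed extrinsics T_follower_cam and T_marker_pilot, none of which are specified by the patent:

```python
import cv2
import numpy as np

def pose_matrix(rvec, tvec):
    """4x4 homogeneous transform from an OpenCV rotation/translation pair."""
    T = np.eye(4)
    T[:3, :3], _ = cv2.Rodrigues(rvec)
    T[:3, 3] = np.ravel(tvec)
    return T

def relative_pose(gray, K, dist, marker_len,
                  T_follower_cam, T_marker_pilot, pilot_id):
    """Detect the pilot's ArUco marker; return T_follower_pilot or None."""
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    if ids is None:
        return None
    for corner, marker_id in zip(corners, ids.ravel()):
        if marker_id != pilot_id:   # each drone carries a distinct marker ID
            continue
        rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
            [corner], marker_len, K, dist)
        T_cam_marker = pose_matrix(rvecs[0], tvecs[0])
        # Chain the fixed extrinsics to express the pilot in the follower frame.
        return T_follower_cam @ T_cam_marker @ T_marker_pilot
    return None
```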
Step S3 includes establishing the pilot-follower model according to the pilot-follower framework:
ζ_lf = ζ_l − ζ_f
where ζ_lf is the pose of the pilot relative to the follower, ζ_l is the pose of the pilot in the world coordinate system, and ζ_f is the pose of the follower in the world coordinate system.
The unmanned system model is composed of n-1 pilot-follower models mentioned above, wherein n is the number of unmanned aerial vehicles in the unmanned system.
Step S4 specifically includes the following steps:
step S401: defining the error, specifically:
an error transformation method with a predetermined performance index is designed according to the task; with the relative pose between the pilot and the follower estimated from the ArUco marker, the error is defined as
e = ζ_lf − ζ_lf^d
where ζ_lf^d represents the desired pose of the pilot relative to the follower; if, according to visual kinematics, the relative pose ζ_lf between the pilot and the follower is represented indirectly by image information, then e represents the error between the currently acquired image features and the desired image features;
step S402: defining the error performance, specifically:
an error performance function is defined so that the output error e_k converges, along an absolutely decaying function of time ρ_k(t), to a predefined residual set:
−γ_k ρ_k(t) < e_k(t) < γ̄_k ρ_k(t)
where e_k is the kth output error component of the error vector e, the set parameters γ_k and γ̄_k are positive, and ρ_k(0) represents the initial maximum allowable error, chosen so that the initial error satisfies 0 < ||e_k(0)|| < ρ_k(0); the absolutely decaying time function ρ_k(t) is designed as
ρ_k(t) = (ρ_k(0) − ρ_k∞)e^(−lt) + ρ_k∞
where the parameter l > 0 controls the speed of exponential convergence and ρ_k∞ represents the steady-state level of the predetermined task performance specification, which can be designed small enough to guarantee the task performance specification;
step S403: setting the output error function, specifically:
to achieve control that meets the task performance specification, the output error is expressed as:
e_k = S(ε_k)ρ_k(t)
where S(ε_k) is a continuous, smooth, monotonically increasing function satisfying:
−γ_k < S(ε_k) < γ̄_k, with lim S(ε_k) = −γ_k as ε_k → −∞ and lim S(ε_k) = γ̄_k as ε_k → +∞
according to these requirements, the transformation function is designed as:
S(ε_k) = (γ̄_k e^(ε_k) − γ_k e^(−ε_k)) / (e^(ε_k) + e^(−ε_k))
step S404: obtaining the error transformation function with the predetermined performance specification: define x_k = e_k/ρ_k; since S(ε_k) is strictly increasing, its inverse function always exists, so the transformed error ε_k with the predetermined performance specification is described as:
ε_k = S^(−1)(x_k) = (1/2)ln((x_k + γ_k)/(γ̄_k − x_k))
Step S5 specifically includes the following steps:
step S501: designing the PID control law of the follower according to the transformed error so as to guarantee the convergence of the transformed error ε_k; the discrete form of the control law is:
u_k(t) = k_p ε_k(t) + k_i Σ_{j=0..t} ε_k(j) + k_d [ε_k(t) − ε_k(t−1)]
where u_k denotes the kth control quantity, k_p is the proportional coefficient, k_i is the integral coefficient, and k_d is the differential coefficient;
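A minimal discrete PID acting on the transformed error, matching the form above (an editorial sketch; the gain values are placeholders, not taken from the patent):

```python
class TransformedErrorPID:
    """Discrete PID on the transformed error eps_k; keeping eps_k bounded
    keeps the raw error e_k = S(eps_k) * rho_k(t) inside the funnel."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.accum = 0.0   # running sum of eps_k (the integral term)
        self.prev = 0.0    # eps_k at the previous step

    def step(self, eps):
        self.accum += eps
        u = self.kp * eps + self.ki * self.accum + self.kd * (eps - self.prev)
        self.prev = eps
        return u           # u_k, the kth control quantity

pid = TransformedErrorPID(kp=1.2, ki=0.1, kd=0.05)
u = pid.step(0.4)
```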
step S502: by the properties of the error transformation function, once the transformed error ε_k converges, the true state error e_k is recovered through the error transformation with the predetermined performance specification obtained in step S404 and converges with the preset performance; the multiple unmanned aerial vehicles thus complete the expected subtasks, completing each subtask synchronously or sequentially according to the total task target, and finally achieving effective multi-dimensional cooperation in time, space, and task.
As shown in fig. 2, the system applying the multi-unmanned aerial vehicle cooperative control method based on vision and performance constraints provided by the invention comprises a control station and a plurality of unmanned aerial vehicles controlled by it. The unmanned aerial vehicles include a plurality of aerial unmanned aerial vehicles and a plurality of ground unmanned vehicles, each group comprising a pilot and followers. The control station controls the pilot within each subtask and assigns it an offline automatic or real-time manual task; the followers maintain the desired relative pose with respect to the pilot and move along with it, realizing autonomous cooperative control of the multiple unmanned aerial vehicles.
The multi-unmanned aerial vehicle cooperative control system can realize three kinds of subtasks: air-to-air, air-to-ground, and ground-to-ground. In the air-to-air subtask, the control station selects the optimal unmanned aerial vehicle as the pilot under its control; a model is established according to the pilot-follower framework; each follower detects the ArUco fiducial marker carried by the pilot to obtain the relative pose between itself and the pilot; error transformation is carried out with the predetermined performance function; and the follower's controller is designed so that follower 1-1, follower 1-2, and pilot 1 keep the desired relative poses and move along with the designated pilot 1. Similarly, in the air-to-ground and ground-to-ground subtasks, the control station selects the optimal pilot 2, a model is established according to the pilot-follower framework, and the perception and control method based on on-board vision is used to complete cooperative tasks such as landing an aerial unmanned aerial vehicle on a ground unmanned mobile platform and formation of ground unmanned mobile platforms. It should be noted that in each air-to-air, air-to-ground, or ground-to-ground subtask only one pilot is selected, but there may be multiple followers.
Each unmanned aerial vehicle comprises a control unit, a perception unit, a communication unit, and a power supply unit. The communication unit receives the tasks issued by the control station; the perception unit is an on-board camera with which the follower detects the ArUco marker carried by the pilot; the control unit is an on-board CPU that computes and outputs the control law of the unmanned aerial vehicle; and the power supply unit supplies electric energy to the unmanned aerial vehicle.
In operation, the perception unit provides the follower with on-board visual detections of the ArUco marker on the pilot's body, and the control unit performs state estimation and control computation for the unmanned aerial vehicle.
The aerial unmanned aerial vehicle comprises an on-board camera that can be rotated as appropriate for different tasks. When performing an air-to-air task, the optical axis of the on-board camera points forward so that it can better capture the visual features of the aerial pilot ahead; when performing an air-to-ground task, the optical axis points downward so that it can better capture the visual features of the pilot on the ground.
The technical advantages of adopting the multi-unmanned aerial vehicle cooperative control method based on vision and performance constraints are as follows:
Effective multi-dimensional cooperation in time, space, and tasks can be realized in GPS-denied environments, meeting the requirements of unmanned system miniaturization, intelligence, and autonomy. Cooperative control of the unmanned system mainly depends on perception and control technology. The visual perception method relies on the ArUco marker detected by an on-board camera to obtain the pose of a target relative to a local coordinate system; it does not depend on GPS, can be deployed in indoor or outdoor scenes, and offers small size, low cost, and rich target information, which helps a large number of low-cost, miniaturized unmanned aerial vehicles form a large-scale autonomous cooperative unmanned system. The design of an unmanned aerial vehicle controller usually needs to consider output-constrained control. Common methods include model predictive control (MPC) and control based on barrier functions; however, these controllers can only guarantee spatial constraints. Considering the more general case of simultaneous constraints on time and space, the control method based on the predetermined performance specification effectively handles this double constraint, improves the intelligence level of the unmanned aerial vehicle, and enables it to complete target tasks more accurately. The unmanned system based on the pilot-follower framework is simple to implement and scalable in application, which facilitates distributed cooperative control of the unmanned system and further improves its autonomy.
Compared with the prior art, the invention has the advantages that:
(1) high degree of miniaturization
The control law based on the performance function has strong robustness, low computational complexity, and independence from model information, and it reduces the computational load of the unmanned aerial vehicle, so the perception and computing hardware can be further reduced in size; such low-cost, small unmanned aerial vehicles facilitate the establishment of a large-scale autonomous cooperative unmanned system.
(2) Considering time constraints
The control law provided by the invention ensures that the output error satisfies time and space constraints simultaneously: the output error converges to a predefined residual set along an absolutely decaying function of time, so the multi-unmanned-aerial-vehicle autonomous cooperative control system can complete the target task more accurately.
(3) High degree of autonomy
The unmanned system provided by the invention is based on the pilot-follower framework, is simple to implement and scalable in application, and can be used to establish unmanned systems of different scales for different tasks. Distributed cooperative control is realized based on on-board vision: no communication is needed between unmanned aerial vehicles, cooperative control is achieved by means of rich visual information, and the autonomy of the unmanned system is further enhanced.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A multi-unmanned aerial vehicle cooperative control method based on vision and performance constraints is characterized by comprising the following steps:
step S1: decomposing the total target task to obtain independent subtasks, determining the type and number of unmanned aerial vehicles according to the subtasks, and establishing the unmanned system of each subtask;
step S2: selecting an optimal pilot within the unmanned system of the current subtask, the follower detecting the ArUco fiducial marker carried by the pilot, thereby acquiring the relative pose between the pilot and the follower;
step S3: establishing an unmanned system model of the subtask based on the pilot-follower framework;
step S4: designing an error transformation method based on a predetermined task performance specification;
step S5: designing a PID control law for the follower according to the transformed error, ensuring that the follower tracks the pilot with the predetermined task performance, and finally achieving autonomous cooperative control of the multiple unmanned aerial vehicles.
2. The cooperative control method for multiple unmanned aerial vehicles based on vision and performance constraints as claimed in claim 1, wherein step S1 specifically includes the following steps:
step S101: decomposing the total target task to obtain independent subtasks;
step S102: selecting aerial unmanned aerial vehicles, ground unmanned vehicles, or a combination of the two according to the requirements of the subtask, determining the number of vehicles, and establishing the unmanned system of the subtask.
3. The multi-unmanned aerial vehicle cooperative control method based on vision and performance constraints as claimed in claim 1, wherein step S2 specifically includes the following steps:
step S201: in the unmanned system of the current subtask, the control station selects an optimal unmanned aerial vehicle as the pilot, which receives the tasks issued by the control station;
step S202: each unmanned aerial vehicle is equipped with an ArUco square fiducial marker of known size, and the follower detects the marker using on-board vision; the ArUco marker consists of a black border and an internal binary matrix that determines its identifier, and a single marker provides enough point correspondences (its four corners) to obtain the pose of the camera relative to the marker; the relative pose ζ_lf between the pilot and the follower is obtained through the fixed transformations between the camera coordinate system and the follower coordinate system and between the ArUco marker coordinate system and the pilot coordinate system; and each unmanned aerial vehicle's ArUco marker provides a distinct ID, guaranteeing reliable and effective following.
4. The multi-unmanned aerial vehicle cooperative control method based on vision and performance constraints as claimed in claim 1, wherein step S3 includes establishing the pilot-follower model according to the pilot-follower framework:
ζ_lf = ζ_l − ζ_f
where ζ_lf is the pose of the pilot relative to the follower, ζ_l is the pose of the pilot in the world coordinate system, and ζ_f is the pose of the follower in the world coordinate system.
5. The cooperative control method for multiple unmanned aerial vehicles based on vision and performance constraints as claimed in claim 4, wherein the unmanned system model is composed of n-1 pilot-follower models mentioned above, where n is the number of unmanned aerial vehicles in the unmanned system.
6. The multi-unmanned aerial vehicle cooperative control method based on vision and performance constraints according to claim 1, wherein step S4 specifically includes the following steps:
step S401: defining the error, specifically:
an error transformation method with a predetermined performance index is designed according to the task; with the relative pose between the pilot and the follower estimated from the ArUco marker, the error is defined as
e = ζ_lf − ζ_lf^d
where ζ_lf^d represents the desired pose of the pilot relative to the follower; if, according to visual kinematics, the relative pose ζ_lf between the pilot and the follower is represented indirectly by image information, then e represents the error between the currently acquired image features and the desired image features;
step S402: defining the error performance, specifically:
an error performance function is defined so that the output error e_k converges, along an absolutely decaying function of time ρ_k(t), to a predefined residual set:
−γ_k ρ_k(t) < e_k(t) < γ̄_k ρ_k(t)
where e_k is the kth output error component of the error vector e, the set parameters γ_k and γ̄_k are positive, and ρ_k(0) represents the initial maximum allowable error, chosen so that the initial error satisfies 0 < ||e_k(0)|| < ρ_k(0); the absolutely decaying time function ρ_k(t) is designed as
ρ_k(t) = (ρ_k(0) − ρ_k∞)e^(−lt) + ρ_k∞
where the parameter l > 0 controls the speed of exponential convergence and ρ_k∞ represents the steady-state level of the predetermined task performance specification, which can be designed small enough to guarantee the task performance specification;
step S403: setting the output error function, specifically:
to achieve control that meets the task performance specification, the output error is expressed as:
e_k = S(ε_k)ρ_k(t)
where S(ε_k) is a continuous, smooth, monotonically increasing function satisfying:
−γ_k < S(ε_k) < γ̄_k, with lim S(ε_k) = −γ_k as ε_k → −∞ and lim S(ε_k) = γ̄_k as ε_k → +∞
according to these requirements, the transformation function is designed as:
S(ε_k) = (γ̄_k e^(ε_k) − γ_k e^(−ε_k)) / (e^(ε_k) + e^(−ε_k))
step S404: obtaining the error transformation function with the predetermined performance specification: define x_k = e_k/ρ_k; since S(ε_k) is strictly increasing, its inverse function always exists, so the transformed error ε_k with the predetermined performance specification is described as:
ε_k = S^(−1)(x_k) = (1/2)ln((x_k + γ_k)/(γ̄_k − x_k))
7. the cooperative control method for multiple unmanned aerial vehicles based on vision and performance constraints as claimed in claim 6, wherein step S5 specifically includes the following steps:
step S501: designing the PID control law of the follower according to the converted error to ensure the converted errorkConvergence, the discrete form of the control law is as follows:
Figure RE-FDA0002784860940000034
wherein u iskDenotes the kth control quantity, kpIs the proportionality coefficient, kiIs the integral coefficient, kdIs a differential coefficient;
step S502: by the properties of the error transformation function, once the transformed error ε_k converges, the true state error e_k is recovered through the error transformation with the predetermined performance specification obtained in step S404 and converges with the preset performance; the multiple unmanned aerial vehicles thus complete the expected subtasks, completing each subtask synchronously or sequentially according to the total task target, and finally achieving effective multi-dimensional cooperation in time, space, and task.
8. A multi-unmanned aerial vehicle autonomous cooperative control system applying the multi-unmanned aerial vehicle cooperative control method based on vision and performance constraints of any one of claims 1 to 7, comprising a control station and a plurality of unmanned aerial vehicles controlled by it, wherein the plurality of unmanned aerial vehicles comprises a plurality of aerial unmanned aerial vehicles and a plurality of ground unmanned vehicles, each group comprising a pilot and followers; the control station is used for controlling the pilot in each subtask, an offline automatic or real-time manual task is set for the pilot, and the followers maintain a desired relative pose with the pilot and move along with the designated pilot, realizing autonomous cooperative control of the multiple unmanned aerial vehicles.
9. The multi-unmanned aerial vehicle autonomous cooperative control system of claim 8, wherein each unmanned aerial vehicle comprises a control unit, a perception unit, a communication unit, and a power supply unit; the communication unit is used for the unmanned aerial vehicle to receive tasks issued by the control station, the perception unit is an on-board camera used by the follower to detect the ArUco marker carried by the pilot, the control unit is an on-board CPU used to compute and output the control law of the unmanned aerial vehicle, and the power supply unit supplies electric energy to the unmanned aerial vehicle.
10. The multi-unmanned aerial vehicle autonomous cooperative control system according to claim 8, wherein the aerial unmanned aerial vehicles comprise on-board cameras that can be rotated as appropriate for different tasks.
CN202011088200.9A 2020-10-13 2020-10-13 Multi-unmanned aerial vehicle cooperative control method and system based on vision and performance constraints Active CN112114594B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011088200.9A CN112114594B (en) 2020-10-13 2020-10-13 Multi-unmanned aerial vehicle cooperative control method and system based on vision and performance constraints
PCT/CN2021/075626 WO2022077817A1 (en) 2020-10-13 2021-02-05 Multiple unmanned aerial vehicle cooperative control method and system based on vision and performance constraints

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011088200.9A CN112114594B (en) 2020-10-13 2020-10-13 Multi-unmanned aerial vehicle cooperative control method and system based on vision and performance constraints

Publications (2)

Publication Number Publication Date
CN112114594A (en) 2020-12-22
CN112114594B CN112114594B (en) 2021-07-16

Family

ID=73798720

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011088200.9A Active CN112114594B (en) 2020-10-13 2020-10-13 Multi-unmanned aerial vehicle cooperative control method and system based on vision and performance constraints

Country Status (2)

Country Link
CN (1) CN112114594B (en)
WO (1) WO2022077817A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022077817A1 (en) * 2020-10-13 2022-04-21 湖南大学 Multiple unmanned aerial vehicle cooperative control method and system based on vision and performance constraints
CN117270485A (en) * 2023-11-23 2023-12-22 中国科学院数学与系统科学研究院 Distributed multi-machine action cooperative control method oriented to industrial Internet scene

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114721275B (en) * 2022-05-13 2022-09-09 北京航空航天大学 Visual servo robot self-adaptive tracking control method based on preset performance
CN115297000B (en) * 2022-06-21 2023-09-26 北京理工大学 Distributed self-adaptive event-triggered multi-autonomous inclusion control method under directed topology
CN115582838B * 2022-11-09 2023-06-13 Guangdong Ocean University Multi-mechanical arm predefined-time H∞ consistency control method based on preset performance
CN116627179B (en) * 2023-07-19 2023-10-31 陕西德鑫智能科技有限公司 Unmanned aerial vehicle formation control method and device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170269612A1 (en) * 2016-03-18 2017-09-21 Sunlight Photonics Inc. Flight control methods for operating close formation flight
CN107844127A (en) * 2017-09-20 2018-03-27 北京飞小鹰科技有限责任公司 Towards the formation flight device cooperative control method and control system of finite time
CN108052110A (en) * 2017-09-25 2018-05-18 南京航空航天大学 UAV Formation Flight method and system based on binocular vision
CN109992000A (en) * 2019-04-04 2019-07-09 北京航空航天大学 A kind of multiple no-manned plane path collaborative planning method and device based on Hierarchical reinforcement learning
CN110286694A (en) * 2019-08-05 2019-09-27 重庆邮电大学 A kind of unmanned plane formation cooperative control method of more leaders
CN110703798A (en) * 2019-10-23 2020-01-17 中国人民解放军军事科学院国防科技创新研究院 Unmanned aerial vehicle formation flight control method based on vision
CN111552314A (en) * 2020-05-09 2020-08-18 北京航空航天大学 Self-adaptive formation tracking control method for multiple unmanned aerial vehicles

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8639396B1 (en) * 2008-10-08 2014-01-28 Raytheon Company Cooperative control of unmanned aerial vehicles for tracking targets
CN102707693B (en) * 2012-06-05 2015-03-04 清华大学 Method for building spatio-temporal cooperative control system of multiple unmanned aerial vehicles
CN102768518B (en) * 2012-07-11 2014-05-21 清华大学 Multiple-unmanned plane platform cooperative control system
CN108983823B (en) * 2018-08-27 2020-07-17 安徽农业大学 Plant protection unmanned aerial vehicle cluster cooperative control method
CN109213198A (en) * 2018-09-11 2019-01-15 中国科学院长春光学精密机械与物理研究所 Multiple no-manned plane cooperative control system
CN110703795B (en) * 2019-09-27 2020-09-15 南京航空航天大学 Unmanned aerial vehicle group cooperative security control method based on switching topology
CN111338374B (en) * 2019-12-06 2023-11-17 中国电子科技集团公司电子科学研究院 Unmanned aerial vehicle cluster formation control method
CN112114594B (en) * 2020-10-13 2021-07-16 湖南大学 Multi-unmanned aerial vehicle cooperative control method and system based on vision and performance constraints

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170269612A1 (en) * 2016-03-18 2017-09-21 Sunlight Photonics Inc. Flight control methods for operating close formation flight
CN107844127A (en) * 2017-09-20 2018-03-27 北京飞小鹰科技有限责任公司 Towards the formation flight device cooperative control method and control system of finite time
CN108052110A (en) * 2017-09-25 2018-05-18 南京航空航天大学 UAV Formation Flight method and system based on binocular vision
CN109992000A (en) * 2019-04-04 2019-07-09 北京航空航天大学 A kind of multiple no-manned plane path collaborative planning method and device based on Hierarchical reinforcement learning
CN110286694A (en) * 2019-08-05 2019-09-27 重庆邮电大学 A kind of unmanned plane formation cooperative control method of more leaders
CN110703798A (en) * 2019-10-23 2020-01-17 中国人民解放军军事科学院国防科技创新研究院 Unmanned aerial vehicle formation flight control method based on vision
CN111552314A (en) * 2020-05-09 2020-08-18 北京航空航天大学 Self-adaptive formation tracking control method for multiple unmanned aerial vehicles

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ATHANASIOS K. GKESOULIS et al.: "Distributed UAV Formation Control with Prescribed Performance", 2020 International Conference on Unmanned Aircraft Systems (ICUAS) *
ZIQUAN YU et al.: "Decentralized finite-time adaptive fault-tolerant synchronization tracking control for multiple UAVs with prescribed performance", Journal of the Franklin Institute *
YI Guo et al.: "Distributed control of leader-follower formation for nonholonomic mobile robots", Chinese Journal of Scientific Instrument *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022077817A1 (en) * 2020-10-13 2022-04-21 湖南大学 Multiple unmanned aerial vehicle cooperative control method and system based on vision and performance constraints
CN117270485A (en) * 2023-11-23 2023-12-22 中国科学院数学与系统科学研究院 Distributed multi-machine action cooperative control method oriented to industrial Internet scene
CN117270485B (en) * 2023-11-23 2024-02-06 中国科学院数学与系统科学研究院 Distributed multi-machine action cooperative control method oriented to industrial Internet scene

Also Published As

Publication number Publication date
WO2022077817A1 (en) 2022-04-21
CN112114594B (en) 2021-07-16

Similar Documents

Publication Publication Date Title
CN112114594B (en) Multi-unmanned aerial vehicle cooperative control method and system based on vision and performance constraints
CN109579843B (en) Multi-robot cooperative positioning and fusion image building method under air-ground multi-view angles
CN105974932B (en) Unmanned aerial vehicle (UAV) control method
CN105857582A (en) Method and device for adjusting shooting angle, and unmanned air vehicle
US9811083B2 (en) System and method of controlling autonomous vehicles
CN110989639B (en) Underwater vehicle formation control method based on stress matrix
CN105303899A (en) Child-mother type robot cooperation system of combination of unmanned surface vessel and unmanned aerial vehicle
CN105182992A (en) Unmanned aerial vehicle control method and device
CN107065929A (en) A kind of unmanned plane is around flying method and system
CN110609556A (en) Multi-unmanned-boat cooperative control method based on LOS navigation method
CN112789672A (en) Control and navigation system, attitude optimization, mapping and positioning technology
WO2021003657A1 (en) Control method for collaborative operation by unmanned aerial vehicles, electronic device, and system
CN110243381B (en) Cooperative sensing monitoring method for air-ground robot
CN109405830B (en) Unmanned aerial vehicle automatic inspection method based on line coordinate sequence
WO2022095060A1 (en) Path planning method, path planning apparatus, path planning system, and medium
EP3077880B1 (en) Imaging method and apparatus
CN112068539A (en) Unmanned aerial vehicle automatic driving inspection method for blades of wind turbine generator
CN113406968B (en) Unmanned aerial vehicle autonomous take-off and landing cruising method based on digital twin
CN107144281A (en) Unmanned plane indoor locating system and localization method based on cooperative target and monocular vision
WO2022095067A1 (en) Path planning method, path planning device, path planning system, and medium thereof
CN102190081A (en) Vision-based fixed point robust control method for airship
CN112414401B (en) Unmanned aerial vehicle cooperative positioning system and method based on graph neural network
CN109900272B (en) Visual positioning and mapping method and device and electronic equipment
CN111665870A (en) Trajectory tracking method and unmanned aerial vehicle
Smaoui et al. Automated scanning of concrete structures for crack detection and assessment using a drone

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant