WO2022077817A1 - Multiple unmanned aerial vehicle cooperative control method and system based on vision and performance constraints - Google Patents

Info

Publication number: WO2022077817A1
Application number: PCT/CN2021/075626 (CN2021075626W)
Authority: WO, WIPO (PCT)
Prior art keywords: leader, follower, error, task, performance
Other languages: French (fr), Chinese (zh)
Inventor
王耀南
林杰
缪志强
毛建旭
张辉
朱青
钟杭
唐永鹏
聂静谋
Original Assignee
湖南大学
Application filed by 湖南大学
Publication of WO2022077817A1 publication Critical patent/WO2022077817A1/en

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10: Simultaneous control of position or course in three dimensions
    • G05D1/101: Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G05D1/104: Simultaneous control of position or course in three dimensions specially adapted for aircraft involving a plurality of aircraft, e.g. formation flying

Definitions

  • the invention relates to the technical field of unmanned aerial vehicles, in particular to a method and system for collaborative control of multiple unmanned aerial vehicles based on vision and performance constraints.
  • the present invention provides a multi-UAV cooperative control method and system based on vision and performance constraints, the purpose of which is to solve the technical problem, identified in the background, of autonomous cooperative control of multiple UAVs in GPS-denied environments.
  • Step S1 Decompose the overall target task to obtain mutually independent sub-tasks, confirm the type and quantity of UAVs according to the sub-tasks, and establish an unmanned system for the sub-tasks;
  • Step S2 Select an optimal leader in the unmanned system of the current sub-task; the follower detects the ArUco fiducial marker carried by the leader, thereby obtaining its pose relative to the leader;
  • Step S3 establishing an unmanned system model of the sub-task based on the leader-follower framework
  • Step S4 designing an error transformation method based on the predetermined task performance specification
  • Step S5 Design the PID control law of the follower according to the transformed error to ensure that the follower follows the leader according to the predetermined task performance, and finally achieves the goal of autonomous cooperative control of multiple UAVs.
  • step S1 specifically includes the following steps:
  • Step S101 decompose the overall target task to obtain each subtask that is independent of each other;
  • Step S102 Select the form of aerial drone, ground drone, or combination of aerial drone and ground drone according to the requirements of the subtask, and determine the number of drones to establish an unmanned system for the subtask.
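A minimal sketch of the task decomposition in steps S101 and S102. The schema, sub-task names, UAV types, and counts below are illustrative assumptions, not values from the patent:

```python
# Hypothetical overall task decomposed into mutually independent sub-tasks
# (names, types, and counts are illustrative, not from the patent).
subtasks = [
    {"name": "aerial_formation",  "uav_types": ["aerial"],           "count": 3},
    {"name": "air_ground_follow", "uav_types": ["aerial", "ground"], "count": 2},
    {"name": "ground_formation",  "uav_types": ["ground"],           "count": 4},
]

def build_unmanned_system(subtask):
    """Assign one leader and (count - 1) followers per sub-task (Step S102)."""
    roles = ["leader"] + ["follower"] * (subtask["count"] - 1)
    return {"task": subtask["name"], "types": subtask["uav_types"], "roles": roles}

systems = [build_unmanned_system(s) for s in subtasks]
print(systems[0]["roles"])  # → ['leader', 'follower', 'follower']
```

Each sub-task gets exactly one leader, which matches the later observation that any sub-task selects only one leader while the number of followers can be multiple.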
  • step S2 specifically includes the following steps:
  • Step S201 In the unmanned system of the current sub-task, the control station selects an optimal UAV as the leader that receives the tasks issued by the control station;
  • Step S202 Each UAV is equipped with an ArUco square fiducial marker of known size, and the follower detects the ArUco marker using airborne vision.
  • the ArUco marker consists of a black border and an internal binary matrix that determines its identifier.
  • a single marker provides enough correspondences (its four corners) to obtain the pose of the camera relative to the ArUco marker.
  • from the fixed transformations between the camera and follower coordinate systems and between the ArUco marker and leader coordinate systems, the relative pose ζ_lf between the leader and the follower is obtained; the ArUco marker also provides a corresponding ID for each drone, ensuring reliable and effective following.
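The frame chain in step S202 can be sketched with 4×4 homogeneous transforms. All numeric values below are hypothetical calibration and measurement values; in a real system the camera-to-marker transform would come from an ArUco detector (e.g. OpenCV's `cv2.aruco` module) and the two fixed transforms from offline calibration:

```python
import math

def mat_mul(a, b):
    """Multiply two 4x4 homogeneous transforms stored as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def rot_z(theta):
    """Homogeneous transform: rotation by theta (radians) about the z axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def translation(x, y, z):
    """Homogeneous transform: pure translation."""
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

# Fixed transforms, assumed known from offline calibration (hypothetical values):
T_follower_camera = translation(0.1, 0.0, 0.0)   # follower body -> onboard camera
T_marker_leader = translation(0.0, 0.0, -0.05)   # ArUco marker -> leader body

# Measured online by the ArUco detector (hypothetical measurement):
T_camera_marker = mat_mul(translation(0.0, 0.0, 1.5), rot_z(0.2))

# Relative pose of the leader in the follower frame: chain the three transforms.
T_follower_leader = mat_mul(mat_mul(T_follower_camera, T_camera_marker),
                            T_marker_leader)
xyz = [round(T_follower_leader[i][3], 3) for i in range(3)]
print(xyz)  # → [0.1, 0.0, 1.45]
```

The printed translation is the position of the leader expressed in the follower's body frame, which is exactly the quantity ζ_lf that the controller consumes.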
  • step S3 includes establishing a leader-follower model according to the leader-follower framework: ζ_lf = ζ_l − ζ_f,
  • where ζ_lf is the pose of the leader relative to the follower, ζ_l is the pose of the leader in the world coordinate system, and ζ_f is the pose of the follower in the world coordinate system.
  • the unmanned system model is composed of n-1 above-mentioned leader-follower models, where n is the number of unmanned aerial vehicles in the unmanned system.
  • step S4 specifically includes the following steps:
  • Step S401 define the error, specifically:
  • Step S402 Define error performance, specifically:
  • the error performance function is defined such that the output error e_k converges to a predefined residual set along the absolutely decaying time function ρ_k, that is, |e_k(t)| < ρ_k(t) for all t ≥ 0,
  • where e_k denotes the kth component of the error vector e,
  • the parameters Υ_k and ρ_k(0) represent the initial maximum allowable error, ensuring that the absolute value of the initial error satisfies 0 < |e_k(0)| < ρ_k(0),
  • and the absolutely decaying time function is ρ_k(t) = (ρ_0 − ρ_∞)e^(−lt) + ρ_∞, where l > 0 controls the speed of exponential convergence and ρ_∞ is the steady-state level of the predetermined task performance specification.
  • Step S403 Set the output error function, specifically:
  • to achieve control that meets the task performance specification, the output error is set as e_k = S(ε_k)ρ_k(t), where S(ε_k) is a monotonically increasing continuous smooth function;
  • Step S404 Obtain the error transformation function with the predetermined performance specification: defining χ_k = e_k/ρ_k, and since S(ε_k) is strictly increasing so that its inverse always exists, the error transformation is described as ε_k = S^(−1)(χ_k).
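The performance bound and error transformation of steps S402 through S404 can be sketched as follows. The numeric values of ρ_0, ρ_∞, and l, and the hyperbolic-tangent choice of S, are illustrative assumptions rather than the patent's prescriptions:

```python
import math

def rho(t, rho0=1.0, rho_inf=0.05, l=1.2):
    """Absolutely decaying bound rho_k(t) = (rho0 - rho_inf)e^(-l*t) + rho_inf."""
    return (rho0 - rho_inf) * math.exp(-l * t) + rho_inf

def S_inv(chi):
    """Inverse of S(eps) = (e^eps - e^-eps)/(e^eps + e^-eps), i.e. artanh;
    defined only while the error stays inside the bound, |chi| < 1."""
    return 0.5 * math.log((1 + chi) / (1 - chi))

def transformed_error(e_k, t):
    """Map the constrained output error e_k to the unconstrained variable eps_k."""
    chi = e_k / rho(t)          # normalized error chi_k = e_k / rho_k(t)
    return S_inv(chi)

# The bound decays from rho0 toward the steady-state level rho_inf:
print(round(rho(0.0), 3), round(rho(5.0), 3))  # → 1.0 0.052
```

Keeping the transformed error ε_k bounded is equivalent to keeping e_k strictly inside the shrinking envelope ±ρ_k(t), which is how the time-and-space constraint is enforced.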
  • step S5 specifically includes the following steps:
  • Step S501 Design the PID control law of the follower according to the transformed error to ensure the convergence of the transformed error ε_k.
  • the discrete form of the control law is as follows:
  • u_k represents the kth control variable
  • k_p is the proportional coefficient
  • k_i is the integral coefficient
  • k_d is the derivative coefficient
  • Step S502 According to the properties of the error transformation function, when the transformed error ε_k converges, the error transformation function ε_k with the predetermined performance specification obtained in step S404 guarantees that the true state error e_k converges according to the predetermined performance. The UAVs thus complete the desired sub-tasks, synchronously or sequentially according to the overall task goal, finally achieving effective multi-dimensional coordination in time, space, and tasks.
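Step S501's discrete PID law acting on the transformed error can be sketched as below; the gains and sample time are illustrative assumptions, not values given in the patent:

```python
class TransformedErrorPID:
    """Discrete PID control law computed from the transformed error eps_k
    (illustrative gains; the patent does not specify numeric values)."""

    def __init__(self, kp=1.0, ki=0.1, kd=0.05, dt=0.02):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0     # running sum approximating the integral term
        self.prev = None        # previous error for the difference quotient

    def update(self, eps):
        """Return the control input u_k for the current transformed error."""
        self.integral += eps * self.dt
        deriv = 0.0 if self.prev is None else (eps - self.prev) / self.dt
        self.prev = eps
        return self.kp * eps + self.ki * self.integral + self.kd * deriv

pid = TransformedErrorPID()
u0 = pid.update(0.5)            # first sample: derivative term is zero
print(round(u0, 4))  # → 0.501
```

Because the controller acts on ε_k rather than on e_k directly, driving ε_k toward zero automatically keeps the raw error inside the prescribed performance envelope; no model information enters the control law.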
  • An embodiment of the present invention provides a vision-based autonomous cooperative control system for multiple unmanned aerial vehicles, including a control station and multiple UAVs controlled by it.
  • the UAVs include a plurality of aerial drones and a plurality of ground drones, among which are a leader and followers; the control station is used to control the leader in the sub-tasks.
  • the control station sets offline automatic or real-time manual tasks for the leader; each follower maintains the desired relative pose with the leader and follows the designated leader as it moves, achieving autonomous coordinated control of multiple drones.
  • the UAV includes a control unit, a sensing unit, a communication unit and a power supply unit
  • the communication unit is used for the UAV to receive tasks issued by the control station
  • the sensing unit is an onboard camera used by the follower to detect the ArUco marker carried by the leader
  • the control unit is an on-board CPU, used to calculate and give the control law of the drone
  • the power supply unit provides electrical energy for the drone.
  • the aerial drone includes an on-board camera, and the on-board camera can be rotated appropriately according to different tasks.
  • the technical effect achieved by the invention is that effective multi-dimensional coordination in time, space, and tasks can be realized in GPS-denied environments, meeting the unmanned system's requirements for miniaturization, intelligence, and autonomy.
  • the cooperative control of the unmanned system mainly relies on perception and control technology.
  • the visual perception method relies on ArUco markers detected by the onboard camera to obtain the pose of the target relative to the local coordinate system. It does not rely on GPS, can be deployed in indoor or outdoor scenes, and has the advantages of small size, low cost, and rich target information, which favors using a large number of low-cost, miniaturized UAVs to build a large-scale autonomous collaborative unmanned system.
  • the controller design of a UAV often needs to consider output-constrained control; common methods include model predictive control (MPC) and barrier-function-based control.
  • these controllers, however, can only guarantee spatial constraints. For more general constraints, namely simultaneous constraints on time and space, the control method based on a predetermined performance specification effectively handles both, improves the UAV's level of intelligence, and allows the target task to be completed more accurately.
  • the unmanned system based on the leader-follower framework is simple to implement and scalable in application, which facilitates distributed collaborative control of unmanned systems and further improves their autonomy.
  • the invention adopts a low-cost perception tool, namely airborne vision, which has the characteristics of small size and light weight.
  • the control law based on the performance function has strong robustness, low computational complexity, and does not depend on model information. It reduces the computing load of the UAV, allowing further miniaturization of sensing and computing hardware, which favors building a large-scale autonomous collaborative unmanned system from many low-cost, miniaturized UAVs.
  • the control law proposed by the invention ensures that the output error satisfies time and space constraints simultaneously, converging to a predefined residual set along an absolutely decaying time function, so that the multi-UAV autonomous cooperative control system completes the target task more accurately.
  • the unmanned system proposed by the present invention is based on the leader-follower framework, which is simple to implement and scalable in application, and can establish unmanned systems of different sizes for different tasks.
  • the distributed cooperative control of unmanned systems is realized based on onboard vision: no inter-UAV communication is needed, and cooperation between UAVs is achieved through rich visual information, further enhancing the autonomy of unmanned systems.
  • FIG. 1 is a general flowchart of the multi-UAV cooperative control method based on vision and performance constraints of the present invention;
  • FIG. 2 is a schematic diagram of an embodiment of the multi-UAV cooperative control method based on vision and performance constraints of the present invention;
  • FIG. 3 is a detailed flowchart of the leader-follower control method in the multi-UAV cooperative control method based on vision and performance constraints of the present invention.
  • the present invention provides a multi-UAV cooperative control method based on vision and performance constraints, as shown in Figure 1 and Figure 3, including the following steps:
  • a system applying the multi-UAV cooperative control method based on vision and performance constraints includes a control station and a plurality of UAVs controlled by the control station.
  • the UAVs include a plurality of aerial drones and a plurality of ground drones, among which are a leader and followers.
  • the control station is used to control the leader in the sub-tasks and to set offline automatic or real-time manual tasks for the leader; each follower maintains the desired relative pose with the leader and follows the designated leader, realizing autonomous cooperative control of multiple UAVs.
  • the multi-UAV cooperative control system can realize three sub-tasks of air-to-air, air-to-ground and ground-to-ground.
  • a model is built on the leader-follower framework: the follower detects the ArUco fiducial marker carried by the leader to obtain its pose relative to the leader, and a predetermined performance function is used for error transformation in designing the follower's controller, so that follower 1-1 and follower 1-2 maintain the desired relative pose with leader 1 and follow leader 1 as it moves.
  • the control station selects the optimal leader 2, and a model is then built according to the leader-follower framework, using only onboard-vision perception and control, finally completing coordinated tasks such as aerial drones landing on the ground unmanned mobile platform and ground unmanned mobile platform formation. It should be noted that whether the sub-task is air-to-air, air-to-ground, or ground-to-ground, only one leader is selected, while the number of followers can be multiple.
  • the UAV includes a control unit, a sensing unit, a communication unit and a power supply unit.
  • the communication unit is used for the drone to receive the tasks issued by the control station
  • the sensing unit is an onboard camera, and is used for the follower to detect the ArUco mark carried by the leader
  • the control unit is the onboard CPU, used to compute and output the control law of the drone, and the power supply unit provides power for the drone.
  • the sensing unit uses airborne vision to enable the follower to detect the ArUco mark on the leader, the control unit estimates and controls the state of the UAV, and the power supply unit provides electrical energy for the UAV.
  • the aerial drone includes an onboard camera, which can be rotated appropriately according to different tasks.
  • the optical axis of the aerial drone's onboard camera faces forward, so that it can better capture the visual features of an aerial leader ahead;
  • when following a ground leader, the camera's optical axis points downward, so that it can better capture the visual features of the leader on the ground.
  • the multi-UAV cooperative control method and system based on vision and performance constraints provided by the present invention are described above in detail.
  • the principles and implementations of the present invention are described herein using specific examples, and the descriptions of the above embodiments are only intended to help understand the core idea of the present invention. It should be pointed out that those skilled in the art can make several improvements and modifications to the present invention without departing from its principle, and these improvements and modifications also fall within the protection scope of the claims of the present invention.


Abstract

A multiple unmanned aerial vehicle cooperative control method and system based on vision and performance constraints. The method comprises: selecting an optimal leader from the unmanned system of a sub-task, a follower visually detecting a marker carried by the leader so as to acquire its pose relative to the leader (S2); designing an error transformation method based on a predetermined task performance specification (S4); and designing a PID control law for the follower according to the transformed error, so as to ensure that the follower follows the leader according to the predetermined task performance, finally achieving autonomous cooperative control of multiple unmanned aerial vehicles (S5). By means of the method and system, effective multi-dimensional cooperation in time, space, and tasks can be realized in GPS-denied environments, meeting requirements of miniaturization, intelligence, and autonomy.

Description

A Multi-UAV Cooperative Control Method and System Based on Vision and Performance Constraints

This application claims the priority of the Chinese patent application filed with the Chinese Patent Office on October 13, 2020, with application number 202011088200.9 and invention title "A Multi-UAV Cooperative Control Method and System Based on Vision and Performance Constraints", the entire contents of which are incorporated herein by reference.
Technical Field

The invention relates to the technical field of unmanned aerial vehicles, and in particular to a multi-UAV cooperative control method and system based on vision and performance constraints.
Background

With the rapid development of a new generation of information technology represented by artificial intelligence, military operations and industry are shifting from humans using machines to perform tasks toward UAVs performing tasks autonomously, especially to meet high-dynamic, high-risk, multi-task combat and operational requirements; autonomous unmanned systems have become an important support for military and industrial intelligence. Autonomous unmanned systems have significant application demands in both military and civilian fields, including military reconnaissance and strike, information relay, terrain mapping, and intelligent warehousing. Because they offer low cost, wide applicability, and good effectiveness, they represent a new technological high ground in military operations and industrial development. Unmanned systems have long been a research hotspot at home and abroad, but most research focuses on multi-UAV task allocation; there are few autonomous cooperative control schemes for multiple UAVs under task performance constraints, especially in GPS-denied environments.
Summary of the Invention

The present invention provides a multi-UAV cooperative control method and system based on vision and performance constraints, aiming to solve the technical problem, identified in the background, of autonomous cooperative control of multiple UAVs in GPS-denied environments.

To achieve the above purpose, an embodiment of the present invention provides a multi-UAV cooperative control method based on vision and performance constraints, comprising the following steps:

Step S1: Decompose the overall target task into mutually independent sub-tasks, determine the type and number of UAVs according to each sub-task, and establish an unmanned system for the sub-task;

Step S2: Select an optimal leader in the unmanned system of the current sub-task; the follower detects the ArUco fiducial marker carried by the leader, thereby obtaining its pose relative to the leader;

Step S3: Establish an unmanned system model of the sub-task based on the leader-follower framework;

Step S4: Design an error transformation method based on the predetermined task performance specification;

Step S5: Design the PID control law of the follower according to the transformed error, ensuring that the follower follows the leader according to the predetermined task performance and finally achieving autonomous cooperative control of multiple UAVs.
Preferably, step S1 specifically includes the following steps:

Step S101: Decompose the overall target task into mutually independent sub-tasks;

Step S102: According to the requirements of each sub-task, select aerial drones, ground drones, or a combination of both, determine the number of drones, and establish the unmanned system for the sub-task.

Preferably, step S2 specifically includes the following steps:

Step S201: In the unmanned system of the current sub-task, the control station selects an optimal UAV as the leader that receives the tasks issued by the control station;

Step S202: Each UAV carries an ArUco square fiducial marker of known size, and the follower detects the marker using onboard vision. The ArUco marker consists of a black border and an internal binary matrix that determines its identifier; a single marker provides enough correspondences (its four corners) to obtain the pose of the camera relative to the marker. From the fixed transformations between the camera and follower coordinate systems and between the ArUco marker and leader coordinate systems, the relative pose ζ_lf between the leader and the follower is obtained; the ArUco marker also provides a corresponding ID for each drone, ensuring reliable and effective following.
Preferably, step S3 includes establishing a leader-follower model according to the leader-follower framework:

ζ_lf = ζ_l − ζ_f

where ζ_lf is the pose of the leader relative to the follower, ζ_l is the pose of the leader in the world coordinate system, and ζ_f is the pose of the follower in the world coordinate system.

Preferably, the unmanned system model is composed of n−1 of the above leader-follower models, where n is the number of UAVs in the unmanned system.
Preferably, step S4 specifically includes the following steps:

Step S401: Define the error. An error transformation method with a predetermined performance specification is designed according to the task, and the relative pose between the leader and the follower is estimated through the ArUco marker; the error is then defined as

e = ζ_lf − ζ_lf^d

where ζ_lf^d denotes the desired pose of the leader relative to the follower. If, according to visual kinematics, image information is used to represent the relative pose r_lf between the leader and the follower indirectly, then e denotes the error between the currently acquired image features and the desired image features.
Step S402: define the error performance, specifically:
An error performance function is defined such that each output error e_k converges to a predefined residual set along an absolutely decaying time function ρ_k:

−Υ_k ρ_k(t) < e_k(t) < Ῡ_k ρ_k(t), for all t ≥ 0

where e_k denotes the k-th component of the error vector e, the positive parameters Υ_k and Ῡ_k shape the lower and upper performance bounds, and ρ_k(0) denotes the initial maximum allowable error, so that the absolute value of the initial error satisfies 0 < ||e_k(0)|| < ρ_k(0). The absolutely decaying time function ρ_k(t) is designed as

ρ_k(t) = (ρ_k(0) − ρ_k∞) e^(−lt) + ρ_k∞

where the parameter l > 0 controls the speed of exponential convergence, and ρ_k∞ denotes the steady-state level of the predetermined task performance specification, which can be designed small enough to guarantee the task performance specification;
Step S403: set the output error function, specifically:
To achieve control that satisfies the task performance specification, the output error is set as:

e_k = S(ε_k) ρ_k(t)

where S(ε_k) is a monotonically increasing, continuous and smooth function that satisfies the following requirements:

−Υ_k < S(ε_k) < Ῡ_k, with S(ε_k) → −Υ_k as ε_k → −∞ and S(ε_k) → Ῡ_k as ε_k → +∞

According to the above requirements, the transformation function is designed as:

S(ε_k) = (Ῡ_k e^(ε_k) − Υ_k e^(−ε_k)) / (e^(ε_k) + e^(−ε_k))
Step S404: obtain the error transformation function with the predetermined performance specification: define χ_k = e_k/ρ_k. Since S(ε_k) is strictly increasing, its inverse function always exists, so the error transformation ε_k with the predetermined performance specification is described as:

ε_k = S^(−1)(χ_k) = (1/2) ln((χ_k + Υ_k) / (Ῡ_k − χ_k))
Preferably, step S5 specifically includes the following steps:
Step S501: design the follower's PID control law from the transformed error, guaranteeing that the transformed error ε_k converges. The discrete form of the control law is:

u_k(t) = k_p ε_k(t) + k_i Σ_{j=0}^{t} ε_k(j) + k_d [ε_k(t) − ε_k(t−1)]

where u_k denotes the k-th control input, k_p is the proportional gain, k_i is the integral gain, and k_d is the derivative gain;
Step S502: by the properties of the error transformation function, once the transformed error ε_k converges, the error transformation function with the predetermined performance specification obtained in step S404 implies that the true state error e_k converges with the predetermined performance. The UAVs thus complete the desired subtasks, completing each subtask synchronously or sequentially according to the overall task objective, and finally achieve effective multi-dimensional coordination in time, space, and task.
An embodiment of the present invention provides a vision-based autonomous cooperative control system for multiple UAVs, comprising a control station and a plurality of UAVs controlled by the control station. The plurality of UAVs includes a plurality of aerial UAVs and a plurality of ground UAVs, each group comprising a leader and followers. The control station is used to control the leader of a subtask, assigning it offline automatic or real-time manual tasks; each follower maintains the desired relative pose with respect to the leader and follows the designated leader's motion, thereby realizing autonomous cooperative control of the multiple UAVs.
Preferably, the UAV includes a control unit, a sensing unit, a communication unit, and a power supply unit. The communication unit is used by the UAV to receive tasks issued by the control station; the sensing unit is an onboard camera used by a follower to detect the ArUco marker carried by the leader; the control unit is an onboard CPU that computes and outputs the UAV's control law; and the power supply unit provides electrical power to the UAV.
Preferably, the aerial UAV includes an onboard camera that can be rotated appropriately according to different tasks.
The technical effects achieved by the present invention are as follows. Effective multi-dimensional coordination in time, space, and task can be realized in GPS-denied environments, meeting the demands for miniaturized, intelligent, and autonomous unmanned systems. Cooperative control of unmanned systems mainly relies on perception and control technology. The visual perception method relies on ArUco markers detected by an onboard camera to obtain the pose of a target relative to the local frame; it does not depend on GPS, can be deployed in any indoor or outdoor scene, and offers small size, low cost, and rich target information, which helps large numbers of low-cost, miniaturized UAVs form a large-scale, autonomously cooperating unmanned system. UAV controller design often has to consider output-constrained control. Common methods include model predictive control (MPC) and barrier-function-based control; however, these controllers only guarantee spatial constraints. Considering the more general case of simultaneous constraints in time and space, the control method based on a predetermined performance specification effectively resolves this dual constraint, raises the intelligence level of the UAVs, and allows them to complete the target task more precisely. An unmanned system based on the leader-follower framework offers implementation simplicity and application scalability, which facilitates distributed cooperative control of the unmanned system and further improves its autonomy.
Compared with the prior art, the advantages of the present invention are:
(1) High degree of miniaturization
The present invention adopts a low-cost sensing tool, namely onboard vision, which is small and lightweight. In addition, the control law based on the performance function is robust, has low computational complexity, and does not depend on model information, which reduces the UAV's computational load. The UAV's size can therefore be further reduced in sensing and computing hardware, which helps large numbers of low-cost, miniaturized UAVs form a large-scale, autonomously cooperating unmanned system.
(2) Consideration of time constraints
The control law proposed by the present invention guarantees that the output error simultaneously satisfies the constraints of time and space, and that the output error converges within a predefined residual along an absolutely decaying time function, so that the multi-UAV autonomous cooperative control system completes the target task more precisely.
(3) High degree of autonomy
The unmanned system proposed by the present invention is based on the leader-follower framework, which offers implementation simplicity and application scalability and can establish unmanned systems of different scales for different tasks. Distributed cooperative control of the unmanned system is realized through onboard vision: no communication is needed between the UAVs, and cooperative control between them relies on rich visual information, further enhancing the autonomy of the unmanned system.
Description of Drawings
Fig. 1 is an overall flow block diagram of the multi-UAV cooperative control method based on vision and performance constraints according to the present invention;
Fig. 2 is a schematic diagram of an embodiment of the multi-UAV cooperative control method based on vision and performance constraints according to the present invention;
Fig. 3 is a detailed flowchart of the leader-follower control method of the multi-UAV cooperative control method based on vision and performance constraints according to the present invention.
Detailed Description
To enable those skilled in the art to better understand the technical solutions of the present invention, the present invention is further described in detail below with reference to the accompanying drawings.
In view of the existing problems, the present invention provides a multi-UAV cooperative control method based on vision and performance constraints, as shown in Fig. 1 and Fig. 3, comprising the following steps:
Step S1: decompose the overall target task into mutually independent subtasks, determine the type and number of UAVs according to the subtasks, and establish an unmanned system for each subtask;
Step S2: select an optimal leader in the unmanned system of the current subtask; each follower detects the ArUco fiducial marker carried by the leader and thereby obtains its pose relative to the leader;
Step S3: establish the unmanned system model of the subtask based on the leader-follower framework;
Step S4: design an error transformation method based on the predetermined task performance specification;
Step S5: design the follower's PID control law from the transformed error, guaranteeing that the follower follows the leader with the predetermined task performance, finally achieving the goal of autonomous cooperative control of multiple UAVs.
Specifically, step S1 includes the following steps:
Step S101: decompose the overall target task into mutually independent subtasks;
Step S102: according to the requirements of each subtask, select aerial UAVs, ground UAVs, or a combination of both, determine the number of UAVs, and establish the unmanned system of the subtask.
Step S2 specifically includes the following steps:
Step S201: in the unmanned system of the current subtask, the control station selects an optimal UAV as the leader that receives the tasks issued by the control station;
Step S202: Each UAV carries an ArUco square fiducial marker of known size, and the follower detects the ArUco marker with its onboard vision. An ArUco marker consists of a black border and an inner binary matrix that determines its identifier; a single marker provides enough correspondences (its four corners) to recover the pose of the camera relative to the marker. From the fixed transformations between the camera frame and the follower frame, and between the ArUco marker frame and the leader frame, the relative pose ζ_lf between the leader and the follower is obtained. Moreover, the ArUco marker provides a distinct ID for each UAV, which guarantees reliable and effective following.
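The relative-pose computation of step S202 amounts to chaining homogeneous transforms: the detector supplies the camera-to-marker pose, and the fixed mounting transforms convert it into the leader's pose in the follower frame. The sketch below is illustrative only — the mounting offsets T_f_c (camera in the follower body frame) and T_m_l (leader body in the marker frame) are assumed values, not taken from the embodiment, and the detected pose T_c_m is hard-coded in place of an actual ArUco detection.

```python
import numpy as np

def make_T(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(theta):
    """Rotation about the z-axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Fixed mounting transforms, assumed calibrated offline (illustrative values):
T_f_c = make_T(np.eye(3), [0.10, 0.0, 0.0])   # camera frame in follower body frame
T_m_l = make_T(np.eye(3), [0.0, 0.0, -0.05])  # leader body frame in marker frame

def leader_pose_in_follower(T_c_m):
    """Chain the fixed transforms with the detected camera-to-marker pose."""
    return T_f_c @ T_c_m @ T_m_l

# Example: marker detected 1.5 m ahead of the camera, yawed 30 degrees.
T_c_m = make_T(rot_z(np.pi / 6), [0.0, 0.0, 1.5])
T_f_l = leader_pose_in_follower(T_c_m)
```

The same composition applies regardless of which detector produces T_c_m; only the two mounting transforms change between platforms.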
Step S3 includes establishing the leader-follower model according to the leader-follower framework:

ζ_lf = ζ_l − ζ_f

where ζ_lf is the pose of the leader relative to the follower, ζ_l is the pose of the leader in the world frame, and ζ_f is the pose of the follower in the world frame.
The unmanned system model consists of n−1 of the leader-follower models described above, where n is the number of UAVs in the unmanned system.
Step S4 specifically includes the following steps:
Step S401: define the error, specifically:
An error transformation method with a predetermined performance specification is designed according to the task. With the relative pose between the leader and the follower estimated from the ArUco marker, the error is defined as

e = ζ_lf − ζ_lf^d

where ζ_lf^d denotes the desired pose of the leader relative to the follower. If, according to visual kinematics, the relative pose r_lf between the leader and the follower is represented indirectly by image information, then e denotes the error between the currently acquired image features and the desired image features;
Step S402: define the error performance, specifically:
An error performance function is defined such that each output error e_k converges to a predefined residual set along an absolutely decaying time function ρ_k:

−Υ_k ρ_k(t) < e_k(t) < Ῡ_k ρ_k(t), for all t ≥ 0

where e_k denotes the k-th component of the error vector e, the positive parameters Υ_k and Ῡ_k shape the lower and upper performance bounds, and ρ_k(0) denotes the initial maximum allowable error, so that the absolute value of the initial error satisfies 0 < ||e_k(0)|| < ρ_k(0). The absolutely decaying time function ρ_k(t) is designed as

ρ_k(t) = (ρ_k(0) − ρ_k∞) e^(−lt) + ρ_k∞

where the parameter l > 0 controls the speed of exponential convergence, and ρ_k∞ denotes the steady-state level of the predetermined task performance specification, which can be designed small enough to guarantee the task performance specification;
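The decaying funnel ρ_k(t) of step S402 can be evaluated directly. In the sketch below the numbers (ρ_k(0) = 1, ρ_k∞ = 0.05, l = 2) are illustrative choices, and the sample error trajectory is hypothetical; the point is only that any error decaying at least as fast as the funnel remains inside the ±ρ_k(t) envelope.

```python
import math

def rho(t, rho0=1.0, rho_inf=0.05, l=2.0):
    """Absolutely decaying performance funnel: starts at rho0, settles at rho_inf."""
    return (rho0 - rho_inf) * math.exp(-l * t) + rho_inf

def inside_funnel(e_of_t, times):
    """Check that an error trajectory stays inside the +/- rho(t) envelope."""
    return all(abs(e_of_t(t)) < rho(t) for t in times)

# A hypothetical error trajectory that decays faster than the funnel.
sample_error = lambda t: 0.8 * math.exp(-3.0 * t)
```

Tightening ρ_k∞ tightens the guaranteed steady-state accuracy, while l trades off how quickly the envelope (and hence the error) must contract.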
Step S403: set the output error function, specifically:
To achieve control that satisfies the task performance specification, the output error is set as:

e_k = S(ε_k) ρ_k(t)

where S(ε_k) is a monotonically increasing, continuous and smooth function that satisfies the following requirements:

−Υ_k < S(ε_k) < Ῡ_k, with S(ε_k) → −Υ_k as ε_k → −∞ and S(ε_k) → Ῡ_k as ε_k → +∞

According to the above requirements, the transformation function is designed as:

S(ε_k) = (Ῡ_k e^(ε_k) − Υ_k e^(−ε_k)) / (e^(ε_k) + e^(−ε_k))
Step S404: obtain the error transformation function with the predetermined performance specification: define χ_k = e_k/ρ_k. Since S(ε_k) is strictly increasing, its inverse function always exists, so the error transformation ε_k with the predetermined performance specification is described as:

ε_k = S^(−1)(χ_k) = (1/2) ln((χ_k + Υ_k) / (Ῡ_k − χ_k))
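The transformation of steps S403–S404 and its inverse can be checked numerically. The sketch below uses illustrative bound parameters Υ_k = 0.8 and Ῡ_k = 1.0 (assumed values, not from the embodiment): S maps the whole real line onto the open interval (−Υ_k, Ῡ_k), and the logarithmic inverse undoes it exactly.

```python
import math

U_LOW, U_UP = 0.8, 1.0   # illustrative lower/upper bound parameters

def S(eps):
    """Monotone smooth squashing map from the real line onto (-U_LOW, U_UP)."""
    return (U_UP * math.exp(eps) - U_LOW * math.exp(-eps)) / (math.exp(eps) + math.exp(-eps))

def S_inv(chi):
    """Inverse transformation; chi = e_k / rho_k must lie inside (-U_LOW, U_UP)."""
    return 0.5 * math.log((chi + U_LOW) / (U_UP - chi))
```

With symmetric bounds (U_LOW = U_UP = 1) the pair reduces to tanh and atanh; the asymmetric form lets overshoot be penalized differently on each side of the error.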
Step S5 specifically includes the following steps:
Step S501: design the follower's PID control law from the transformed error, guaranteeing that the transformed error ε_k converges. The discrete form of the control law is:

u_k(t) = k_p ε_k(t) + k_i Σ_{j=0}^{t} ε_k(j) + k_d [ε_k(t) − ε_k(t−1)]

where u_k denotes the k-th control input, k_p is the proportional gain, k_i is the integral gain, and k_d is the derivative gain;
Step S502: by the properties of the error transformation function, once the transformed error ε_k converges, the error transformation function with the predetermined performance specification obtained in step S404 implies that the true state error e_k converges with the predetermined performance. The UAVs thus complete the desired subtasks, completing each subtask synchronously or sequentially according to the overall task objective, and finally achieve effective multi-dimensional coordination in time, space, and task.
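Steps S501–S502 can be exercised on a toy example. The sketch below is a numerical sanity check rather than the claimed controller: it assumes a scalar single-integrator error model, symmetric bounds (Υ_k = Ῡ_k = 1, so the inverse transformation reduces to the inverse hyperbolic tangent), and illustrative gains and funnel parameters. The PID law acts on the transformed error, and the raw error stays inside the decaying envelope throughout.

```python
import math

def rho(t, rho0=1.0, rho_inf=0.1, l=1.0):
    """Decaying performance envelope for the raw error."""
    return (rho0 - rho_inf) * math.exp(-l * t) + rho_inf

def s_inv(chi):
    """Inverse of the symmetric squashing function (equals atanh for unit bounds)."""
    return 0.5 * math.log((1.0 + chi) / (1.0 - chi))

def simulate(e0=0.5, dt=0.01, steps=1500, kp=2.0, ki=0.1, kd=0.01):
    """Discrete PID on the transformed error, driving a single-integrator error."""
    e, integral = e0, 0.0
    prev_eps = s_inv(e0 / rho(0.0))
    history = []
    for n in range(steps):
        t = n * dt
        eps = s_inv(e / rho(t))               # error transformation (step S404)
        integral += eps * dt
        u = kp * eps + ki * integral + kd * (eps - prev_eps) / dt
        prev_eps = eps
        e = e - dt * u                         # single-integrator error dynamics
        history.append((t + dt, e))
    return history

hist = simulate()
```

As the error approaches the envelope, the transformed error grows without bound, so even a fixed-gain PID pushes the error back — this is the mechanism by which convergence of ε_k enforces the predetermined performance on e_k.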
As shown in Fig. 2, the present invention provides a system applying the above multi-UAV cooperative control method based on vision and performance constraints, comprising a control station and a plurality of UAVs controlled by the control station. The plurality of UAVs includes a plurality of aerial UAVs and a plurality of ground UAVs, each group comprising a leader and followers. The control station is used to control the leader of a subtask, assigning it offline automatic or real-time manual tasks; each follower maintains the desired relative pose with respect to the leader and follows the designated leader's motion, realizing autonomous cooperative control of the multiple UAVs.
The multi-UAV cooperative control system can carry out three kinds of subtasks: air-to-air, air-to-ground, and ground-to-ground. In an air-to-air subtask, the control station selects the optimal UAV as the leader under its control, the model is established according to the leader-follower framework, and each follower detects the ArUco fiducial marker carried by the leader to obtain its pose relative to the leader. The predetermined performance function is used for the error transformation and the follower controllers are designed, so that follower 1-1 and follower 1-2 maintain the desired relative poses with respect to leader 1 and follow leader 1's motion. Likewise, in the air-to-ground and ground-to-ground subtasks, the control station selects the optimal leader 2 and the model is established according to the leader-follower framework, using only onboard-vision perception and control, finally completing cooperative tasks such as an aerial UAV landing on an unmanned ground mobile platform and formation of unmanned ground mobile platforms. It should be noted that in the air-to-air, air-to-ground, and ground-to-ground subtasks alike, only one leader is selected, while there may be multiple followers.
The UAV includes a control unit, a sensing unit, a communication unit, and a power supply unit. The communication unit is used by the UAV to receive tasks issued by the control station; the sensing unit is an onboard camera used by a follower to detect the ArUco marker carried by the leader; the control unit is an onboard CPU that computes and outputs the UAV's control law; and the power supply unit provides electrical power to the UAV.
The sensing unit uses onboard vision so that a follower can detect the ArUco marker on the leader; the control unit performs state estimation and control computation for the UAV; and the power supply unit provides electrical power to the UAV.
The aerial UAV includes an onboard camera that can be rotated appropriately according to different tasks. When performing an air-to-air task, the optical axis of the aerial UAV's onboard camera points forward, so that it better captures the visual features of the aerial leader ahead; when performing an air-to-ground task, the optical axis points downward, so that it better captures the visual features of the leader on the ground.
The technical advantages of the multi-UAV cooperative control method based on vision and performance constraints provided by the present invention are as follows:
Effective multi-dimensional coordination in time, space, and task can be realized in GPS-denied environments, meeting the demands for miniaturized, intelligent, and autonomous unmanned systems. Cooperative control of unmanned systems mainly relies on perception and control technology. The visual perception method relies on ArUco markers detected by an onboard camera to obtain the pose of a target relative to the local frame; it does not depend on GPS, can be deployed in indoor or outdoor scenes, and offers small size, low cost, and rich target information, which helps large numbers of low-cost, miniaturized UAVs form a large-scale, autonomously cooperating unmanned system. UAV controller design often has to consider output-constrained control. Common methods include model predictive control (MPC) and barrier-function-based control; however, these controllers only guarantee spatial constraints. Considering the more general case of simultaneous constraints in time and space, the control method based on a predetermined performance specification effectively resolves this dual constraint, raises the intelligence level of the UAVs, and allows them to complete the target task more precisely. An unmanned system based on the leader-follower framework offers implementation simplicity and application scalability, which facilitates distributed cooperative control of the unmanned system and further improves its autonomy.
Compared with the prior art, the advantages of the present invention are:
(1) High degree of miniaturization
The present invention adopts a low-cost sensing tool, namely onboard vision, which is small and lightweight. In addition, the control law based on the performance function is robust, has low computational complexity, and does not depend on model information, which reduces the UAV's computational load. The UAV's size can therefore be further reduced in sensing and computing hardware, which helps large numbers of low-cost, miniaturized UAVs form a large-scale, autonomously cooperating unmanned system.
(2) Consideration of time constraints
The control law proposed by the present invention guarantees that the output error simultaneously satisfies the constraints of time and space, and that the output error converges within a predefined residual along an absolutely decaying time function, so that the multi-UAV autonomous cooperative control system completes the target task more precisely.
(3) High degree of autonomy
The unmanned system proposed by the present invention is based on the leader-follower framework, which offers implementation simplicity and application scalability and can establish unmanned systems of different scales for different tasks. Distributed cooperative control of the unmanned system is realized through onboard vision: no communication is needed between the UAVs, and cooperative control between them relies on rich visual information, further enhancing the autonomy of the unmanned system.
The multi-UAV cooperative control method and system based on vision and performance constraints provided by the present invention have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the core idea of the present invention. It should be pointed out that those of ordinary skill in the art can make several improvements and modifications to the present invention without departing from its principles, and these improvements and modifications also fall within the protection scope of the claims of the present invention.

Claims (10)

  1. A multi-UAV cooperative control method based on vision and performance constraints, characterized in that it comprises the following steps:
    Step S1: decompose the overall target task into mutually independent subtasks, determine the type and number of UAVs according to the subtasks, and establish an unmanned system for each subtask;
    Step S2: select an optimal leader in the unmanned system of the current subtask; each follower detects the marker carried by the leader and thereby obtains its pose relative to the leader;
    Step S3: establish the unmanned system model of the subtask based on the leader-follower framework;
    Step S4: design an error transformation method based on the predetermined task performance specification;
    Step S5: design the follower's PID control law from the transformed error, guaranteeing that the follower follows the leader with the predetermined task performance, finally achieving the goal of autonomous cooperative control of multiple UAVs.
  2. The multi-UAV cooperative control method based on vision and performance constraints according to claim 1, characterized in that step S1 specifically comprises the following steps:
    Step S101: decompose the overall target task into mutually independent subtasks;
    Step S102: according to the requirements of each subtask, select aerial UAVs, ground UAVs, or a combination of both, determine the number of UAVs, and establish the unmanned system of the subtask.
  3. The multi-UAV cooperative control method based on vision and performance constraints according to claim 1, characterized in that step S2 specifically comprises the following steps:
    Step S201: in the unmanned system of the current subtask, the control station selects an optimal UAV as the leader that receives the tasks issued by the control station;
    Step S202: each UAV carries an ArUco square fiducial marker of known size, and the follower detects the ArUco marker with its onboard vision; an ArUco marker consists of a black border and an inner binary matrix that determines its identifier, and a single marker provides enough correspondences (its four corners) to recover the pose of the camera relative to the marker; from the fixed transformations between the camera frame and the follower frame, and between the ArUco marker frame and the leader frame, the relative pose ζ_lf between the leader and the follower is obtained; moreover, the ArUco marker provides a distinct ID for each UAV, which guarantees reliable and effective following.
  4. The multi-UAV cooperative control method based on vision and performance constraints according to claim 1, wherein step S3 comprises establishing a leader-follower model according to the leader-follower framework:
    ζ_lf = ζ_l − ζ_f
    where ζ_lf is the pose of the leader relative to the follower, ζ_l is the pose of the leader in the world coordinate system, and ζ_f is the pose of the follower in the world coordinate system.
  5. The multi-UAV cooperative control method based on vision and performance constraints according to claim 4, wherein the unmanned-system model consists of n−1 of the above leader-follower models, where n is the number of UAVs in the unmanned system.
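Claim 5's composition of n−1 pairwise models can be sketched directly, assuming the chain topology in which UAV i follows UAV i−1 and the component-wise pose difference of claim 4 (the function name and the list-of-lists pose representation are illustrative):

```python
def leader_follower_errors(poses):
    # poses[0] is the chain leader; each subsequent UAV follows its predecessor.
    # Returns the n-1 relative poses zeta_lf = zeta_l - zeta_f, component-wise.
    return [[zl - zf for zl, zf in zip(poses[i - 1], poses[i])]
            for i in range(1, len(poses))]
```

Three UAVs spaced 1 m apart along the y-axis yield two identical relative poses, so a single desired ζ_lf describes the whole line formation.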
  6. The multi-UAV cooperative control method based on vision and performance constraints according to claim 1, wherein step S4 specifically comprises the following steps:
    Step S401: Define the error, specifically:
    An error transformation method with a predetermined performance specification is designed according to the task. With the relative pose between the leader and the follower estimated from the ArUco marker, the error is defined as
    e = ζ_lf − ζ_lf^d
    where ζ_lf^d represents the desired pose of the leader relative to the follower. If, according to visual kinematics, the relative pose r_lf between the leader and the follower is represented indirectly by image information, then e represents the error between the currently acquired image features and the desired image features;
    Step S402: Define the error performance, specifically:
    An error performance function is defined so that each output error e_k converges to a predefined residual set along an absolutely decaying time function ρ_k:
    −Υ_k ρ_k(t) < e_k(t) < ρ_k(t)
    where e_k denotes the k-th output error of the error vector e and the parameter Υ_k ∈ [0, 1] sets the asymmetry of the bound; ρ_k(0) represents the initial maximum allowable error, ensuring that the absolute value of the initial error satisfies 0 < |e_k(0)| < ρ_k(0). The absolutely decaying time function ρ_k(t) is designed as
    ρ_k(t) = (ρ_0 − ρ_∞) e^(−lt) + ρ_∞
    where the parameter l > 0 controls the speed of exponential convergence, and ρ_∞ represents the steady-state level of the predetermined task performance specification, which can be designed small enough to guarantee the task performance specification;
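The decaying bound of step S402 can be checked numerically. The concrete form ρ_k(t) = (ρ_0 − ρ_∞)e^(−lt) + ρ_∞ and the asymmetric funnel −Υ_k ρ_k(t) < e_k(t) < ρ_k(t) follow the standard prescribed-performance design; where the published text is garbled they are assumptions, as are the parameter values below:

```python
import math

def rho(t, rho_0, rho_inf, l):
    # Absolutely decaying performance bound: starts at rho_0, settles at rho_inf.
    return (rho_0 - rho_inf) * math.exp(-l * t) + rho_inf

def within_funnel(e_k, t, rho_0, rho_inf, l, upsilon=1.0):
    # Prescribed-performance condition: -upsilon*rho(t) < e_k(t) < rho(t).
    bound = rho(t, rho_0, rho_inf, l)
    return -upsilon * bound < e_k < bound
```

With these example parameters, an error of 0.5 is admissible at t = 0 but violates the funnel at t = 3 s, so the controller must have driven the error below the shrinking bound by then.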
    Step S403: Set the output error function, specifically:
    To achieve control that satisfies the task performance specification, the output error is expressed as
    e_k = S(ε_k) ρ_k(t)
    where S(ε_k) is a monotonically increasing, continuous and smooth function satisfying
    −Υ_k < S(ε_k) < 1, with S(ε_k) → −Υ_k as ε_k → −∞ and S(ε_k) → 1 as ε_k → +∞
    According to the above requirements, the transformation function is designed as
    S(ε_k) = (e^(ε_k) − Υ_k e^(−ε_k)) / (e^(ε_k) + e^(−ε_k))
    Step S404: Obtain the error transformation function with the predetermined performance specification: define χ_k = e_k / ρ_k; since S(ε_k) is strictly increasing, its inverse function always exists, so the error transformation ε_k with the predetermined performance specification is described as
    ε_k = S⁻¹(χ_k) = (1/2) ln((χ_k + Υ_k) / (1 − χ_k))
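Steps S403–S404 can be verified numerically. Since the published formulas are image placeholders, the tanh-like choice of S below and its logarithmic inverse are assumptions consistent with the stated properties (strictly increasing, smooth, mapping the real line onto (−Υ_k, 1)):

```python
import math

def S(eps, upsilon):
    # Strictly increasing smooth map from R onto (-upsilon, 1).
    return ((math.exp(eps) - upsilon * math.exp(-eps)) /
            (math.exp(eps) + math.exp(-eps)))

def S_inv(chi, upsilon):
    # Inverse transform: eps = 0.5 * ln((chi + upsilon) / (1 - chi)),
    # defined for chi = e_k / rho_k inside (-upsilon, 1).
    return 0.5 * math.log((chi + upsilon) / (1 - chi))
```

As long as the controller keeps ε_k = S⁻¹(e_k/ρ_k) bounded, the ratio e_k/ρ_k stays strictly inside (−Υ_k, 1), which is exactly the performance guarantee invoked later in step S502.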
  7. The multi-UAV cooperative control method based on vision and performance constraints according to claim 6, wherein step S5 specifically comprises the following steps:
    Step S501: Design the follower's PID control law based on the transformed error so as to guarantee that the transformed error ε_k converges; the discrete form of the control law is
    u_k(t) = k_p ε_k(t) + k_i Σ_{j=0}^{t} ε_k(j) + k_d (ε_k(t) − ε_k(t−1))
    where u_k denotes the k-th control input, k_p is the proportional coefficient, k_i is the integral coefficient, and k_d is the differential coefficient;
    Step S502: By the properties of the error transformation function, once the transformed error ε_k converges, the error transformation function with the predetermined performance specification obtained in step S404 guarantees that the true state error e_k converges with the predetermined performance. The multiple UAVs thereby complete the desired sub-tasks, and complete each sub-task synchronously or sequentially according to the overall task objective, finally achieving effective multi-dimensional coordination in time, space, and task.
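A minimal discrete PID acting on the transformed error ε_k, as in step S501. The class name, the state handling, and the gain values in the test are illustrative assumptions; the patent fixes none of them:

```python
class TransformedErrorPID:
    # Discrete PID on the transformed error; one instance per error channel k.
    def __init__(self, k_p, k_i, k_d):
        self.k_p, self.k_i, self.k_d = k_p, k_i, k_d
        self.integral = 0.0
        self.prev_eps = None

    def update(self, eps):
        # eps is the transformed error eps_k at the current sample.
        self.integral += eps
        diff = 0.0 if self.prev_eps is None else eps - self.prev_eps
        self.prev_eps = eps
        return self.k_p * eps + self.k_i * self.integral + self.k_d * diff
```

Because the controller regulates ε_k rather than e_k, driving ε_k toward zero keeps χ_k = e_k/ρ_k inside the funnel, which is what lets the raw error inherit the predetermined transient and steady-state performance.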
  8. A multi-UAV autonomous cooperative control system applying the multi-UAV cooperative control method based on vision and performance constraints according to any one of claims 1 to 7, comprising a control station and a plurality of UAVs, the control station controlling the plurality of UAVs, the plurality of UAVs comprising a plurality of aerial UAVs and a plurality of ground UAVs, the aerial UAVs and ground UAVs each comprising leaders and followers; the control station is used to control the leader in a sub-task and to set offline automatic or real-time manual tasks for the leader, and each follower maintains a desired relative pose with respect to the leader and follows the motion of its designated leader, realizing autonomous cooperative control of the multiple UAVs.
  9. The multi-UAV autonomous cooperative control system according to claim 8, wherein each UAV comprises a control unit, a perception unit, a communication unit, and a power supply unit; the communication unit is used by the UAV to receive tasks issued by the control station; the perception unit is an onboard camera used by a follower to detect the ArUco marker carried by the leader; the control unit is an onboard CPU used to compute and output the UAV's control law; and the power supply unit provides electrical power for the UAV.
  10. The multi-UAV autonomous cooperative control system according to claim 8, wherein each aerial UAV comprises an onboard camera that can be rotated appropriately according to different tasks.
PCT/CN2021/075626 2020-10-13 2021-02-05 Multiple unmanned aerial vehicle cooperative control method and system based on vision and performance constraints WO2022077817A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011088200.9A CN112114594B (en) 2020-10-13 2020-10-13 Multi-unmanned aerial vehicle cooperative control method and system based on vision and performance constraints
CN202011088200.9 2020-10-13

Publications (1)

Publication Number Publication Date
WO2022077817A1

Family

ID=73798720

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/075626 WO2022077817A1 (en) 2020-10-13 2021-02-05 Multiple unmanned aerial vehicle cooperative control method and system based on vision and performance constraints

Country Status (2)

Country Link
CN (1) CN112114594B (en)
WO (1) WO2022077817A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114721275A (en) * 2022-05-13 2022-07-08 北京航空航天大学 Visual servo robot self-adaptive tracking control method based on preset performance
CN115297000A (en) * 2022-06-21 2022-11-04 北京理工大学 Distributed self-adaptive event-triggered multi-autonomous-body inclusion control method under directed topology
CN115454125A (en) * 2022-08-31 2022-12-09 北京瀚景锦河科技有限公司 Unmanned aerial vehicle strike alliance building method based on iterative updating algorithm
CN115582838A * 2022-11-09 2023-01-10 广东海洋大学 Preset-performance-based predefined-time H∞ consistency control method for multiple manipulators
CN116627179A (en) * 2023-07-19 2023-08-22 陕西德鑫智能科技有限公司 Unmanned aerial vehicle formation control method and device

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
CN112114594B (en) * 2020-10-13 2021-07-16 湖南大学 Multi-unmanned aerial vehicle cooperative control method and system based on vision and performance constraints
CN117270485B (en) * 2023-11-23 2024-02-06 中国科学院数学与系统科学研究院 Distributed multi-machine action cooperative control method oriented to industrial Internet scene

Citations (8)

Publication number Priority date Publication date Assignee Title
CN102707693A (en) * 2012-06-05 2012-10-03 清华大学 Method for building spatio-tempora cooperative control system of multiple unmanned aerial vehicles
CN102768518A (en) * 2012-07-11 2012-11-07 清华大学 Multiple-unmanned plane platform cooperative control system
US8639396B1 (en) * 2008-10-08 2014-01-28 Raytheon Company Cooperative control of unmanned aerial vehicles for tracking targets
CN108983823A (en) * 2018-08-27 2018-12-11 安徽农业大学 A kind of plant protection drone cluster cooperative control method
CN109213198A (en) * 2018-09-11 2019-01-15 中国科学院长春光学精密机械与物理研究所 Multiple no-manned plane cooperative control system
CN110703795A (en) * 2019-09-27 2020-01-17 南京航空航天大学 Unmanned aerial vehicle group cooperative security control method based on switching topology
CN111338374A (en) * 2019-12-06 2020-06-26 中国电子科技集团公司电子科学研究院 Unmanned aerial vehicle cluster formation control method
CN112114594A (en) * 2020-10-13 2020-12-22 湖南大学 Multi-unmanned aerial vehicle cooperative control method and system based on vision and performance constraints

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
US20170269612A1 (en) * 2016-03-18 2017-09-21 Sunlight Photonics Inc. Flight control methods for operating close formation flight
CN107844127A (en) * 2017-09-20 2018-03-27 北京飞小鹰科技有限责任公司 Towards the formation flight device cooperative control method and control system of finite time
CN108052110A (en) * 2017-09-25 2018-05-18 南京航空航天大学 UAV Formation Flight method and system based on binocular vision
CN109992000B (en) * 2019-04-04 2020-07-03 北京航空航天大学 Multi-unmanned aerial vehicle path collaborative planning method and device based on hierarchical reinforcement learning
CN110286694B (en) * 2019-08-05 2022-08-02 重庆邮电大学 Multi-leader unmanned aerial vehicle formation cooperative control method
CN110703798B (en) * 2019-10-23 2022-10-28 中国人民解放军军事科学院国防科技创新研究院 Unmanned aerial vehicle formation flight control method based on vision
CN111552314B (en) * 2020-05-09 2021-05-18 北京航空航天大学 Self-adaptive formation tracking control method for multiple unmanned aerial vehicles



Also Published As

Publication number Publication date
CN112114594B (en) 2021-07-16
CN112114594A (en) 2020-12-22

Similar Documents

Publication Publication Date Title
WO2022077817A1 (en) Multiple unmanned aerial vehicle cooperative control method and system based on vision and performance constraints
CN109613931B (en) Heterogeneous unmanned aerial vehicle cluster target tracking system and method based on biological social force
CN109579843B (en) Multi-robot cooperative positioning and fusion image building method under air-ground multi-view angles
CN111522258B (en) Multi-unmanned aerial vehicle cooperative control simulation system and construction method and simulation method thereof
CN109917767A (en) A kind of distribution unmanned plane cluster autonomous management system and control method
Mademlis et al. Autonomous unmanned aerial vehicles filming in dynamic unstructured outdoor environments [applications corner]
CN108229587B (en) Autonomous transmission tower scanning method based on hovering state of aircraft
CN110243381B (en) Cooperative sensing monitoring method for air-ground robot
CN105974932B (en) Unmanned aerial vehicle (UAV) control method
CN110609556A (en) Multi-unmanned-boat cooperative control method based on LOS navigation method
CN110618691B (en) Machine vision-based method for accurately landing concentric circle targets of unmanned aerial vehicle
WO2022095060A1 (en) Path planning method, path planning apparatus, path planning system, and medium
CN112580537B (en) Deep reinforcement learning method for multi-unmanned aerial vehicle system to continuously cover specific area
CN112068539A (en) Unmanned aerial vehicle automatic driving inspection method for blades of wind turbine generator
CN112789672A (en) Control and navigation system, attitude optimization, mapping and positioning technology
CN113050677A (en) Control method, system and storage medium for maintaining and changing formation of multiple unmanned aerial vehicles
CN111474953A (en) Multi-dynamic-view-angle-coordinated aerial target identification method and system
CN113406968A (en) Unmanned aerial vehicle autonomous take-off, landing and cruising method based on digital twinning
Xu et al. LUNA: Lightweight UAV navigation based on airborne vision for disaster management
CN110850889B (en) Unmanned aerial vehicle autonomous inspection system based on RTK navigation
CN114186859B (en) Multi-machine cooperative multi-target task allocation method in complex unknown environment
CN115657724A (en) Manned and unmanned aircraft cooperative formation form transformation system and method
Rojas-Perez et al. Real-time landing zone detection for UAVs using single aerial images
CN109900272B (en) Visual positioning and mapping method and device and electronic equipment
WO2022226720A1 (en) Path planning method, path planning device, and medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21878874

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21878874

Country of ref document: EP

Kind code of ref document: A1