CN113342032A - Unmanned aerial vehicle cluster cooperative tracking method based on multi-region division - Google Patents

Unmanned aerial vehicle cluster cooperative tracking method based on multi-region division

Info

Publication number
CN113342032A
Authority
CN
China
Prior art keywords
unmanned aerial
aerial vehicle
target
tracking
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110568412.5A
Other languages
Chinese (zh)
Other versions
CN113342032B (en)
Inventor
左源
朱效洲
刘圣洋
姚雯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National Defense Technology Innovation Institute PLA Academy of Military Science
Original Assignee
National Defense Technology Innovation Institute PLA Academy of Military Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National Defense Technology Innovation Institute PLA Academy of Military Science filed Critical National Defense Technology Innovation Institute PLA Academy of Military Science
Priority to CN202110568412.5A priority Critical patent/CN113342032B/en
Publication of CN113342032A publication Critical patent/CN113342032A/en
Application granted granted Critical
Publication of CN113342032B publication Critical patent/CN113342032B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10 Simultaneous control of position or course in three dimensions
    • G05D1/101 Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G05D1/104 Simultaneous control of position or course in three dimensions specially adapted for aircraft involving a plurality of aircrafts, e.g. formation flying

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses an unmanned aerial vehicle cluster cooperative tracking method based on multi-region division, which comprises the following steps: dividing the tracking area into sub-areas according to the airframe capability of the unmanned aerial vehicles; controlling each unmanned aerial vehicle to patrol within the sub-area for which it is responsible, and, when any unmanned aerial vehicle in the cluster finds a target, tracking the target with that unmanned aerial vehicle; while the unmanned aerial vehicle is tracking the target, if the target leaves the sub-area for which the unmanned aerial vehicle is responsible, broadcasting a cooperative tracking request comprising the target motion information to the other unmanned aerial vehicles in the cluster; and determining, according to the target motion information and the motion states of the other unmanned aerial vehicles, the handover unmanned aerial vehicle that will subsequently track the target, and performing the tracking handover between the handover unmanned aerial vehicle and the unmanned aerial vehicle currently tracking the target. The invention can fully take into account the differences in individual performance within the unmanned aerial vehicle cluster, make full use of the large-range coverage capability of the cluster, continuously maintain the tracking state of the target, and avoid losing the target.

Description

Unmanned aerial vehicle cluster cooperative tracking method based on multi-region division
Technical Field
The invention relates to the technical field of unmanned aerial vehicles, in particular to an unmanned aerial vehicle cluster cooperative tracking method based on multi-region division.
Background
Unmanned aerial vehicles have the characteristics of low cost, high maneuverability and flexible deployment, and are widely applied in military, civil and other fields. With the continuous development of unmanned aerial vehicle technology, and given the poor performance of a single unmanned aerial vehicle in tasks such as military surveillance and reconnaissance or search and rescue at large-scale disaster scenes, cooperative unmanned aerial vehicle clusters composed of multiple unmanned aerial vehicles show outstanding task performance. Compared with a single unmanned aerial vehicle, an unmanned aerial vehicle cluster has the advantages of multi-machine cooperation, redundant multi-machine resources and large-range action capability; the cluster as a whole can be taken as the acting viewpoint, and it has sufficient potential to carry out tasks smoothly.
At present, research on unmanned aerial vehicle cluster cooperation mainly covers cooperative reconnaissance, cooperative search, cooperative target tracking, cooperative positioning and the like. For cooperative target tracking, the prior art transmits target information by communication broadcasting: hardware such as an airborne detection sensor module, a computation and tracking module and a communication module is used to calculate the relative distance and angle between an idle unmanned aerial vehicle in the cluster and the target, a spatial potential field is then constructed from the relative distance and angle, and the resultant potential-field force is calculated to decide the tracking behavior, thereby realizing target tracking. Although this method is simple to implement, because no explicit connection and handover decision is made and the behavior is decided only by the resultant force of the spatial potential field, real-time information of the target is easily lost at particular moments, a vacuum state of the unmanned aerial vehicle cluster easily arises during cooperative handover, and under boundary conditions several unmanned aerial vehicles may decide to give up the task at the same time, so that tracking of the target is lost.
Disclosure of Invention
In order to solve the technical problems in the prior art, the invention provides an unmanned aerial vehicle cluster cooperative tracking method based on multi-region division.
The technical scheme of the invention is as follows:
the utility model provides an unmanned aerial vehicle cluster cooperative tracking method based on multi-zone division, which comprises the following steps:
performing sub-area division on the tracking area according to the airframe capability of the unmanned aerial vehicles, so as to determine the sub-area for which each unmanned aerial vehicle in the unmanned aerial vehicle cluster is responsible;
controlling each unmanned aerial vehicle to patrol within the sub-area for which it is responsible according to the divided sub-areas, and, when any unmanned aerial vehicle in the unmanned aerial vehicle cluster finds a target, tracking the target with that unmanned aerial vehicle;
when tracking the target with the unmanned aerial vehicle, if the target leaves the sub-area for which the unmanned aerial vehicle is responsible, broadcasting a cooperative tracking request comprising the target motion information to the other unmanned aerial vehicles in the unmanned aerial vehicle cluster;
and determining, according to the target motion information and the motion states of the other unmanned aerial vehicles, the handover unmanned aerial vehicle that needs to track the target subsequently, and performing the tracking handover between the handover unmanned aerial vehicle and the unmanned aerial vehicle currently performing target tracking.
In some possible implementations, a partition area calculation function based on the Boltzmann distribution is used to divide the tracking area into sub-areas, so as to determine the sub-area for which each drone in the drone cluster is responsible.
In some possible implementation manners, the tracking area is divided into sub-areas by using a partition area calculation function based on the Boltzmann distribution as shown in formula one;
[formula one: image in the original; Boltzmann-distribution-based partition area calculation function]
wherein A_i represents the sub-area for which the ith unmanned aerial vehicle in the unmanned aerial vehicle cluster is responsible, A_T denotes the total area of the tracking area, B_i and Dt_i respectively represent the total energy and the detection range of the ith unmanned aerial vehicle, α and β respectively represent dimensionless coefficients, and e represents the natural constant.
In some possible implementation manners, when tracking a target by using an unmanned aerial vehicle, if the target leaves a sub-area for which the unmanned aerial vehicle is responsible, broadcasting a cooperative tracking request including target motion information to other unmanned aerial vehicles in the unmanned aerial vehicle cluster, including:
when tracking the target with the unmanned aerial vehicle, the unmanned aerial vehicle is used to record the movement trajectory of the target in real time, estimate the motion mode and motion speed of the target, and predict the position of the target after a preset time step, and, when the predicted position leaves the sub-area for which the unmanned aerial vehicle is responsible, a cooperative tracking request comprising the target motion information is broadcast to the other unmanned aerial vehicles in the unmanned aerial vehicle cluster, wherein the target motion information comprises the movement trajectory, the motion mode and the motion speed of the target.
In some possible implementations, the target moving track is recorded in real time at preset time intervals, wherein the recorded target moving track sequence data is represented as:
[formula: image in the original, giving the target movement trajectory sequence data WP_series as the sequence of recorded path points wp_1, …, wp_M]
wherein WP_series represents the target movement trajectory sequence data, wp_m represents the mth path point data in the target movement trajectory, i.e. the position of the target at time t_m, and the interval between time t_m and time t_{m+1} is the preset time interval t_seg.
In some possible implementation manners, the long and short term memory network after pre-training is used for processing the target moving track sequence data, and the position of the target after a preset time step is predicted.
In some possible implementation manners, determining a handover unmanned aerial vehicle that needs to track a target subsequently according to the target motion information and the motion states of the other unmanned aerial vehicles, and performing tracking handover by using the handover unmanned aerial vehicle and a current unmanned aerial vehicle that performs target tracking, including:
predicting target positions of a plurality of preset future time points according to target motion information, and determining an unmanned aerial vehicle to be determined according to a sub-region where the target positions of the plurality of preset future time points are located, wherein the sub-region where the unmanned aerial vehicle to be determined is responsible comprises at least one target position of the preset future time point;
according to the target position of the preset future time point contained in the sub-region for which the unmanned aerial vehicle to be determined is responsible, sequentially judging whether the unmanned aerial vehicle to be determined can track and detect the corresponding target position at or before the preset future time point according to the time sequence, wherein if the unmanned aerial vehicle to be determined can track and detect the corresponding target position at or before the preset future time point, subsequent judgment is not carried out;
if the pending unmanned aerial vehicle can track and detect the corresponding target position at or before the preset future time point, determining the currently pending unmanned aerial vehicle as the handover unmanned aerial vehicle, and making the handover unmanned aerial vehicle reach the preset area where the target position of the corresponding preset future time point is located, wherein the preset area is determined according to the detection range of the unmanned aerial vehicle, so that when the unmanned aerial vehicle is located in the preset area it can track and detect the target at the target position of the corresponding preset future time point;
when the handover unmanned aerial vehicle can track and detect the target, the handover unmanned aerial vehicle broadcasts handover confirmation information to the other unmanned aerial vehicles in the unmanned aerial vehicle cluster, so that the unmanned aerial vehicle currently performing target tracking returns to patrol in the sub-area for which it is responsible.
In some possible implementations, predicting target locations for a plurality of preset future time points according to the target motion information includes:
processing the target motion information by using the long and short term memory network after pre-training, and predicting first target positions of a plurality of preset future time points;
setting the motion mode of the target as linear motion, and predicting second target positions of a plurality of preset future time points by adopting a linear function;
and calculating the mean value of the first target position and the second target position, and taking the obtained mean value as the prediction result of the target positions of a plurality of preset future time points.
In some possible implementations, the first target positions of the plurality of preset future time points are predicted by using formula five;
[formula five: image in the original; prediction of the first target positions by the long and short term memory network]
and the second target positions of the plurality of preset future time points are predicted by using formula six;
[formula six: image in the original; prediction of the second target positions under the linear-motion assumption]
wherein WP_LSTM represents the first target position data of the plurality of preset future time points, LSTM() represents the long and short term memory network model mapping function, T_j represents the jth preset future time point, HistoryTrace represents the historical data of the target movement trajectory, WP_lin represents the second target position data of the plurality of preset future time points, and v represents the movement speed of the target at the moment the unmanned aerial vehicle receives the target motion information.
In some possible implementations, whether the unmanned aerial vehicle can track and detect the corresponding target position at or before the preset future time point is determined by the following method:
calculating the estimated distance range between the unmanned aerial vehicle and the target position of the preset future time point by using the following formula eight;
[formula eight: image in the original; estimated distance range between the unmanned aerial vehicle and the target position of the preset future time point]
judging whether the following formula nine is true, if true, indicating that the unmanned aerial vehicle can track and detect the corresponding target position at or before the preset future time point, and if false, indicating that the unmanned aerial vehicle cannot track and detect the corresponding target position at or before the preset future time point;
[formula nine: image in the original; condition for judging whether the unmanned aerial vehicle can track and detect the target position at or before the preset future time point]
wherein the quantity given by formula eight is the estimated distance range, T_C represents the corresponding preset future time point, WP(T_C) represents the target position at the preset future time point T_C, T_r represents the moment at which the unmanned aerial vehicle receives the target motion information, v_UAV represents the traveling speed of the unmanned aerial vehicle itself, and Dt represents the detection range of the unmanned aerial vehicle.
The technical scheme of the invention has the following main advantages:
the unmanned aerial vehicle cluster cooperative tracking method based on multi-region division carries out planning division on tracking regions according to the organism capability of each unmanned aerial vehicle, can fully consider the difference of unmanned aerial vehicle clusters on individual performance, simultaneously enables each unmanned aerial vehicle to execute a target tracking task in the sub-region in charge of the unmanned aerial vehicle based on the divided sub-regions, can fully utilize the large-range covering capability of the clusters, continuously keeps the tracking state of the target, and avoids losing the tracking target.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flowchart of an unmanned aerial vehicle cluster cooperative tracking method based on multi-region division according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the specific embodiments of the present invention and the accompanying drawings. It is to be understood that the described embodiments are merely a few embodiments of the invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
The technical scheme provided by the embodiment of the invention is described in detail below with reference to the accompanying drawings.
As shown in fig. 1, an embodiment of the present invention provides an unmanned aerial vehicle cluster cooperative tracking method based on multi-zone division, where the method includes:
performing sub-area division on the tracking area according to the airframe capability of the unmanned aerial vehicles, so as to determine the sub-area for which each unmanned aerial vehicle in the unmanned aerial vehicle cluster is responsible;
controlling each unmanned aerial vehicle to patrol within the sub-area for which it is responsible according to the divided sub-areas, and, when any unmanned aerial vehicle in the unmanned aerial vehicle cluster finds a target, tracking the target with that unmanned aerial vehicle;
when tracking the target with the unmanned aerial vehicle, if the target leaves the sub-area for which the unmanned aerial vehicle is responsible, broadcasting a cooperative tracking request comprising the target motion information to the other unmanned aerial vehicles in the unmanned aerial vehicle cluster;
and determining, according to the target motion information and the motion states of the other unmanned aerial vehicles, the handover unmanned aerial vehicle that needs to track the target subsequently, and performing the tracking handover between the handover unmanned aerial vehicle and the unmanned aerial vehicle currently performing target tracking.
The unmanned aerial vehicle cluster cooperative tracking method based on multi-region division provided by the embodiment of the invention plans and divides the tracking area according to the airframe capability of each unmanned aerial vehicle, fully taking into account the differences in individual performance within the unmanned aerial vehicle cluster; meanwhile, based on the divided sub-areas, each unmanned aerial vehicle executes the target tracking task within the sub-area for which it is responsible, thereby making full use of the large-range coverage capability of the cluster, continuously maintaining the tracking state of the target, and avoiding losing the tracked target.
Each step of the unmanned aerial vehicle cluster cooperative tracking method based on multi-region division provided by an embodiment of the present invention is specifically described below.
In an embodiment of the present invention, a partition area calculation function based on the Boltzmann distribution may be adopted to divide the tracking area into sub-areas, so as to determine the sub-area for which each unmanned aerial vehicle in the unmanned aerial vehicle cluster is responsible.
Specifically, the tracking area is divided into sub-areas by using the partition area calculation function based on the Boltzmann distribution shown in formula one;
[formula one: image in the original; Boltzmann-distribution-based partition area calculation function]
wherein A_i represents the sub-area for which the ith unmanned aerial vehicle in the unmanned aerial vehicle cluster is responsible, A_T denotes the total area of the tracking area, B_i and Dt_i respectively represent the total energy and the detection range of the ith unmanned aerial vehicle, α and β respectively represent dimensionless coefficients, and e represents the natural constant.
When the cluster of drones is a homogeneous cluster, the partition area calculation function based on the Boltzmann distribution can be expressed as:
A_i = A_T / N
wherein N represents the total number of drones in the cluster of drones.
When the unmanned aerial vehicle cluster is a homogeneous unmanned aerial vehicle cluster, the size of the area for which each unmanned aerial vehicle is responsible is the same.
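The following Python sketch illustrates how such a Boltzmann-distribution-based allocation could be implemented. Since formula one appears in the original only as an image, the normalized-exponential (softmax-like) weighting of the total energy B_i and the detection range Dt_i by the dimensionless coefficients α and β used here is an assumption for illustration; it is chosen so that a homogeneous cluster receives equal shares A_T / N, consistent with the homogeneous case above.

```python
import math

def allocate_subareas(total_area, energies, detect_ranges, alpha=1.0, beta=1.0):
    """Split the total tracking area A_T among the drones of the cluster.

    Assumed Boltzmann-style weighting: drone i receives a share of A_T
    proportional to exp(alpha * B_i + beta * Dt_i), so drones with more
    energy and a larger detection range are assigned larger sub-areas.
    With identical B_i and Dt_i (a homogeneous cluster) every drone
    receives total_area / N.
    """
    weights = [math.exp(alpha * b + beta * dt)
               for b, dt in zip(energies, detect_ranges)]
    total_weight = sum(weights)
    return [total_area * w / total_weight for w in weights]

# Example: a 9 km^2 tracking area split among three heterogeneous drones.
print(allocate_subareas(9.0, energies=[1.0, 0.8, 1.2], detect_ranges=[0.3, 0.5, 0.4]))
```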
Further, after the cluster task is deployed, each unmanned aerial vehicle is controlled, according to the divided sub-areas, to patrol cyclically within the sub-area for which it is responsible. Optionally, the unmanned aerial vehicle may patrol in a random flight manner and execute the corresponding actions of the configured subsequent task, until it finds a target or receives a cooperative tracking request broadcast by another unmanned aerial vehicle.
In an embodiment of the present invention, when an unmanned aerial vehicle finds a target, that unmanned aerial vehicle is used to track the target; while tracking the target, if the target leaves the sub-area for which the unmanned aerial vehicle is responsible, a cooperative tracking request including the target motion information is broadcast to the other unmanned aerial vehicles in the unmanned aerial vehicle cluster.
Specifically, when tracking the target by using the unmanned aerial vehicle, if the target leaves a sub-area for which the unmanned aerial vehicle is responsible, broadcasting a cooperative tracking request including target motion information to other unmanned aerial vehicles in the unmanned aerial vehicle cluster, including:
when tracking the target with the unmanned aerial vehicle, the unmanned aerial vehicle is used to record the movement trajectory of the target in real time, estimate the motion mode and motion speed of the target, and predict the position of the target after a preset time step, and, when the predicted position leaves the sub-area for which the unmanned aerial vehicle is responsible, a cooperative tracking request comprising the target motion information is broadcast to the other unmanned aerial vehicles in the unmanned aerial vehicle cluster, wherein the target motion information comprises the movement trajectory, the motion mode and the motion speed of the target.
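A minimal sketch of this broadcast trigger is given below; the helper names (in_subarea, predict_after_step, broadcast) and the request payload fields are illustrative assumptions rather than details taken from the patent.

```python
def maybe_broadcast_request(waypoints, t_sep, own_subarea, in_subarea,
                            predict_after_step, broadcast, motion_mode, speed):
    """Broadcast a cooperative tracking request if the target is predicted
    to leave this drone's own sub-area within the preset time step t_sep.

    waypoints: the movement trajectory recorded so far.
    predict_after_step / in_subarea / broadcast: assumed helper callables.
    """
    predicted = predict_after_step(waypoints, t_sep)
    if in_subarea(predicted, own_subarea):
        return False                        # target stays inside; keep tracking alone
    broadcast({"trajectory": waypoints,     # target motion information, as described:
               "motion_mode": motion_mode,  # movement trajectory, motion mode and speed
               "speed": speed,
               "predicted_position": predicted})
    return True
```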
For the recording of the target movement trajectory, the target movement trajectory can be recorded in real time at a preset time interval. For example, the recorded target movement trajectory sequence data is expressed as:
[formula: image in the original, giving the target movement trajectory sequence data WP_series as the sequence of recorded path points wp_1, …, wp_M]
wherein WP_series represents the target movement trajectory sequence data, wp_m represents the mth path point data in the target movement trajectory, i.e. the position of the target at time t_m, and the interval between time t_m and time t_{m+1} is the preset time interval t_seg.
For the prediction of the target position, the long and short term memory network after pre-training can be utilized to process the target moving track sequence data so as to predict the position of the target after the preset time step.
Specifically, the position of the target after the preset time step can be predicted by using the following formula four;
WP_sep = LSTM(t_sep | WP_series)   (formula four)
wherein t_sep represents the preset time step, WP_sep represents the predicted position of the target after the preset time step t_sep, LSTM() represents the long and short term memory network model mapping function, and WP_series represents the target movement trajectory sequence data.
The long and short term memory network can be pre-trained in advance using the historical movement trajectory data or simulation data of other targets, and these other targets can be selected according to the characteristics of the current target, so as to ensure the prediction accuracy of the long and short term memory network.
In the process of predicting the target position, if the unmanned aerial vehicle cluster finds the target for the first time, the initial moving track of the target can be assumed to be linear motion within a set time because the data volume of the moving track of the target recorded by the unmanned aerial vehicle cluster is small, and the target position can be predicted based on the assumed linear motion.
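The following sketch illustrates the single-step prediction described above. The waypoint format (x, y, t), the threshold for falling back to the linear-motion assumption, and the lstm_model callable standing in for the pre-trained long and short term memory network of formula four are all assumed details for illustration.

```python
def predict_after_step(waypoints, t_sep, lstm_model=None, min_points=5):
    """Predict the target position t_sep seconds after the last waypoint.

    waypoints: list of (x, y, t) tuples recorded every t_seg seconds.
    lstm_model: callable mapping (waypoints, t_sep) to a predicted (x, y);
        stands in for the pre-trained long and short term memory network
        of formula four, WP_sep = LSTM(t_sep | WP_series).
    """
    if lstm_model is not None and len(waypoints) >= min_points:
        return lstm_model(waypoints, t_sep)
    # Little history yet (e.g. the target was only just found): fall back
    # to the linear-motion assumption mentioned in the text.
    (x0, y0, t0), (x1, y1, t1) = waypoints[-2], waypoints[-1]
    vx, vy = (x1 - x0) / (t1 - t0), (y1 - y0) / (t1 - t0)
    return (x1 + vx * t_sep, y1 + vy * t_sep)
```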
Furthermore, in the process of executing the cooperative tracking task, the unmanned aerial vehicles other than the one currently performing target tracking patrol their sub-areas in a random flight manner and stand ready to receive cooperative tracking requests broadcast by that unmanned aerial vehicle. When the other unmanned aerial vehicles in the unmanned aerial vehicle cluster receive a cooperative tracking request, the handover unmanned aerial vehicle that needs to track the target subsequently is determined according to the target motion information and the motion states of the other unmanned aerial vehicles, and the tracking handover is performed between the handover unmanned aerial vehicle and the unmanned aerial vehicle currently performing target tracking.
Specifically, determining the handover unmanned aerial vehicle that needs to track the target subsequently according to the target motion information and the motion states of the other unmanned aerial vehicles, and performing the tracking handover between the handover unmanned aerial vehicle and the unmanned aerial vehicle currently performing target tracking, includes:
predicting target positions of a plurality of preset future time points according to the target motion information, and determining the unmanned aerial vehicle to be determined according to the sub-region where the target positions of the plurality of preset future time points are located, wherein the sub-region where the unmanned aerial vehicle to be determined is responsible comprises at least one target position of the preset future time point;
sequentially judging whether the to-be-determined unmanned aerial vehicle can track and detect the corresponding target position at or before the preset future time point according to the target position of the preset future time point contained in the sub-region for which the to-be-determined unmanned aerial vehicle is responsible, wherein if the to-be-determined unmanned aerial vehicle can track and detect the corresponding target position at or before the preset future time point, subsequent judgment is not carried out;
if the pending unmanned aerial vehicle can track and detect the corresponding target position at or before the preset future time point, determining the currently pending unmanned aerial vehicle as the handover unmanned aerial vehicle, and making the handover unmanned aerial vehicle reach the preset area where the target position of the corresponding preset future time point is located, wherein the preset area is determined according to the detection range of the unmanned aerial vehicle, so that when the unmanned aerial vehicle is located in the preset area it can track and detect the target at the target position of the corresponding preset future time point;
when the handover unmanned aerial vehicle can track and detect the target, the handover unmanned aerial vehicle broadcasts handover confirmation information to the other unmanned aerial vehicles in the unmanned aerial vehicle cluster, so that the unmanned aerial vehicle currently performing target tracking returns to patrol in the sub-area for which it is responsible.
In an embodiment of the invention, the tracking handover method can realize large-range coverage by utilizing the detection capability and detection range of different unmanned aerial vehicles in the unmanned aerial vehicle cluster, continuously keep a tracking state on a target, and avoid losing the tracking target; when the target position is predicted, the target positions of a plurality of preset future time points are predicted, namely, multi-step prediction is carried out, subsequent handover judgment can be facilitated, and the selection accuracy of the handover unmanned aerial vehicle is improved.
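A sketch of this handover selection procedure is given below. The data structures (a chronologically ordered list of predicted waypoints, a subarea_of lookup, and a can_reach test standing in for the judgment of formulas eight and nine) are illustrative assumptions.

```python
def select_handover_drone(predicted_waypoints, idle_drones, subarea_of, can_reach):
    """Pick the drone that takes over tracking.

    predicted_waypoints: (T_C, position) pairs for the preset future time
        points, in chronological order (earliest first).
    idle_drones: drones currently patrolling, i.e. everyone except the tracker.
    subarea_of: maps a position to the drone responsible for that sub-area.
    can_reach: callable implementing the judgment of formulas eight and nine.
    Returns (drone, T_C, position) for the first feasible handover, else None.
    """
    for t_c, pos in predicted_waypoints:
        candidate = subarea_of(pos)                      # pending drone for this waypoint
        if candidate in idle_drones and can_reach(candidate, pos, t_c):
            return candidate, t_c, pos                   # stop at the first feasible point
    return None
```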
Further, predicting target positions of a plurality of preset future time points according to the target motion information may include:
processing the target motion information by using the long and short term memory network after pre-training, and predicting first target positions of a plurality of preset future time points;
setting the motion mode of the target as linear motion, and predicting second target positions of a plurality of preset future time points by adopting a linear function;
and calculating the mean value of the first target position and the second target position, and taking the obtained mean value as the prediction result of the target positions of a plurality of preset future time points.
The prediction accuracy of the target position can be improved by adopting two prediction modes of a long-term and short-term memory network and a linear function to carry out multi-step prediction.
Specifically, the following formula five can be used to predict the first target positions of a plurality of preset future time points;
[formula five: image in the original; prediction of the first target positions by the long and short term memory network]
the second target positions of a plurality of preset future time points may be predicted using the following formula six;
[formula six: image in the original; prediction of the second target positions under the linear-motion assumption]
the predicted results of the target positions at a plurality of preset future time points can be calculated by using the following formula seven;
WP = (WP_LSTM + WP_lin) / 2   (formula seven)
wherein WP_LSTM represents the first target position data of the plurality of preset future time points, LSTM() represents the long and short term memory network model mapping function, T_j represents the jth preset future time point, HistoryTrace represents the historical data of the target movement trajectory, WP_lin represents the second target position data of the plurality of preset future time points, v represents the movement speed of the target at the moment the unmanned aerial vehicle receives the target motion information, and WP represents the target positions of the plurality of preset future time points.
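The sketch below combines the two predictions as described. Because formulas five and six appear in the original only as images, the exact LSTM call signature and the linear extrapolation from the last observed position are assumptions; the averaging step follows formula seven.

```python
def predict_future_positions(future_times, history, last_pos, last_time, v, lstm_model):
    """Multi-step prediction combining an LSTM and a linear model.

    future_times: the preset future time points T_j.
    history: recorded trajectory data passed to the pre-trained LSTM.
    last_pos, last_time: last observed target position and its timestamp.
    v: estimated target velocity (vx, vy) when the request was received.
    Returns the per-time-point mean of the two predictions (formula seven).
    """
    predictions = []
    for t_j in future_times:
        x_lstm, y_lstm = lstm_model(history, t_j)             # first prediction (formula five)
        x_lin = last_pos[0] + v[0] * (t_j - last_time)        # second prediction: assumed
        y_lin = last_pos[1] + v[1] * (t_j - last_time)        # linear extrapolation (formula six)
        predictions.append(((x_lstm + x_lin) / 2.0, (y_lstm + y_lin) / 2.0))
    return predictions
```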
Further, it can be determined whether the unmanned aerial vehicle can perform tracking detection on the corresponding target position at or before the preset future time point by using the following method:
calculating the estimated distance range between the unmanned aerial vehicle and the target position of the preset future time point by using the following formula eight;
[formula eight: image in the original; estimated distance range between the unmanned aerial vehicle and the target position of the preset future time point]
judging whether the following formula nine is true, if true, indicating that the unmanned aerial vehicle can track and detect the corresponding target position at or before the preset future time point, and if false, indicating that the unmanned aerial vehicle cannot track and detect the corresponding target position at or before the preset future time point;
[formula nine: image in the original; condition for judging whether the unmanned aerial vehicle can track and detect the target position at or before the preset future time point]
wherein the quantity given by formula eight is the estimated distance range, T_C represents the corresponding preset future time point, WP(T_C) represents the target position at the preset future time point T_C, T_r represents the moment at which the unmanned aerial vehicle receives the target motion information, v_UAV represents the traveling speed of the unmanned aerial vehicle itself, and Dt represents the detection range of the unmanned aerial vehicle.
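A standalone sketch of this reachability judgment follows. Since formulas eight and nine appear in the original only as images, the specific test used here (the unmanned aerial vehicle, flying at v_UAV from the moment T_r it received the request, must be able to close to within its detection range Dt of WP(T_C) no later than T_C) is an assumption consistent with the variable definitions above.

```python
import math

def can_reach(drone_pos, v_uav, detect_range, target_pos, t_c, t_r):
    """Assumed form of the judgment of formulas eight and nine.

    Estimated distance to cover: straight-line distance from the drone's
    position at T_r to the predicted target position WP(T_C), reduced by
    the detection range Dt, since the drone only needs to come within
    sensing reach.  The handover is feasible if that distance can be
    covered at speed v_UAV within the available time T_C - T_r.
    """
    dist = math.hypot(target_pos[0] - drone_pos[0], target_pos[1] - drone_pos[1])
    required = max(0.0, dist - detect_range)
    return required <= v_uav * (t_c - t_r)
```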
In the unmanned aerial vehicle cluster cooperative tracking method based on multi-region division provided by the embodiment of the invention, the tracking area is divided and allocated based on the Boltzmann distribution, so that each unmanned aerial vehicle patrols and searches within its own sub-area or waits to receive a cooperative tracking request; when a target is found, the unmanned aerial vehicle enters the tracking task cycle, records the target movement trajectory in real time, and predicts the future movement trajectory of the target; when the prediction result confirms that multi-machine cooperation is needed, a cooperative tracking request is broadcast, the other patrolling unmanned aerial vehicles receive the information and perform the corresponding processing and prediction, the cooperative action is executed, and the target tracking handover is completed. The method fully exploits the differences of the unmanned aerial vehicles in individual performance, makes full use of the large-range coverage capability of the cluster, continuously maintains the tracking state of the target, and avoids losing the tracked target.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. In addition, "front", "rear", "left", "right", "upper" and "lower" in this document are referred to the placement states shown in the drawings.
Finally, it should be noted that: the above examples are only for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. An unmanned aerial vehicle cluster cooperative tracking method based on multi-region division is characterized by comprising the following steps:
performing sub-area division on the tracking area according to the airframe capability of the unmanned aerial vehicles, so as to determine the sub-area for which each unmanned aerial vehicle in the unmanned aerial vehicle cluster is responsible;
controlling each unmanned aerial vehicle to patrol within the sub-area for which it is responsible according to the divided sub-areas, and, when any unmanned aerial vehicle in the unmanned aerial vehicle cluster finds a target, tracking the target with that unmanned aerial vehicle;
when tracking the target with the unmanned aerial vehicle, if the target leaves the sub-area for which the unmanned aerial vehicle is responsible, broadcasting a cooperative tracking request comprising the target motion information to the other unmanned aerial vehicles in the unmanned aerial vehicle cluster;
and determining, according to the target motion information and the motion states of the other unmanned aerial vehicles, the handover unmanned aerial vehicle that needs to track the target subsequently, and performing the tracking handover between the handover unmanned aerial vehicle and the unmanned aerial vehicle currently performing target tracking.
2. The unmanned aerial vehicle cluster cooperative tracking method based on multi-region division according to claim 1, wherein a partition area calculation function based on the Boltzmann distribution is adopted to divide the tracking area into sub-areas, so as to determine the sub-area for which each unmanned aerial vehicle in the unmanned aerial vehicle cluster is responsible.
3. The unmanned aerial vehicle cluster cooperative tracking method based on multi-region division according to claim 2, wherein the tracking area is divided into sub-areas by using the partition area calculation function based on the Boltzmann distribution shown in formula one;
[formula one: image in the original; Boltzmann-distribution-based partition area calculation function]
wherein A_i represents the sub-area for which the ith unmanned aerial vehicle in the unmanned aerial vehicle cluster is responsible, A_T denotes the total area of the tracking area, B_i and Dt_i respectively represent the total energy and the detection range of the ith unmanned aerial vehicle, α and β respectively represent dimensionless coefficients, and e represents the natural constant.
4. The unmanned aerial vehicle cluster cooperative tracking method based on multi-region division according to claim 1, wherein when tracking a target by using an unmanned aerial vehicle, if the target leaves a sub-region for which the unmanned aerial vehicle is responsible, broadcasting a cooperative tracking request including target motion information to other unmanned aerial vehicles in the unmanned aerial vehicle cluster, comprises:
when tracking the target with the unmanned aerial vehicle, the unmanned aerial vehicle is used to record the movement trajectory of the target in real time, estimate the motion mode and motion speed of the target, and predict the position of the target after a preset time step, and, when the predicted position leaves the sub-area for which the unmanned aerial vehicle is responsible, a cooperative tracking request comprising the target motion information is broadcast to the other unmanned aerial vehicles in the unmanned aerial vehicle cluster, wherein the target motion information comprises the movement trajectory, the motion mode and the motion speed of the target.
5. The unmanned aerial vehicle cluster cooperative tracking method based on multi-zone division according to claim 4, wherein target movement trajectories are recorded in real time at preset time intervals, wherein the recorded target movement trajectory sequence data are expressed as:
[formula: image in the original, giving the target movement trajectory sequence data WP_series as the sequence of recorded path points wp_1, …, wp_M]
wherein WP_series represents the target movement trajectory sequence data, wp_m represents the mth path point data in the target movement trajectory, i.e. the position of the target at time t_m, and the interval between time t_m and time t_{m+1} is the preset time interval t_seg.
6. The unmanned aerial vehicle cluster cooperative tracking method based on multi-region division according to claim 5, wherein the long and short term memory network after pre-training is used for processing the target moving trajectory sequence data to predict the position of the target after a preset time step.
7. The unmanned aerial vehicle cluster cooperative tracking method based on multi-region division according to claim 4 or 6, wherein a handover unmanned aerial vehicle which needs to track a target subsequently is determined according to the target motion information and the motion states of the other unmanned aerial vehicles, and the handover unmanned aerial vehicle is utilized to perform tracking handover with the unmanned aerial vehicle which currently performs target tracking, and the method comprises the following steps:
predicting target positions of a plurality of preset future time points according to target motion information, and determining an unmanned aerial vehicle to be determined according to a sub-region where the target positions of the plurality of preset future time points are located, wherein the sub-region where the unmanned aerial vehicle to be determined is responsible comprises at least one target position of the preset future time point;
according to the target position of the preset future time point contained in the sub-region for which the unmanned aerial vehicle to be determined is responsible, sequentially judging whether the unmanned aerial vehicle to be determined can track and detect the corresponding target position at or before the preset future time point according to the time sequence, wherein if the unmanned aerial vehicle to be determined can track and detect the corresponding target position at or before the preset future time point, subsequent judgment is not carried out;
if the pending unmanned aerial vehicle can track and detect the corresponding target position at or before the preset future time point, determining the currently pending unmanned aerial vehicle as the handover unmanned aerial vehicle, and making the handover unmanned aerial vehicle reach the preset area where the target position of the corresponding preset future time point is located, wherein the preset area is determined according to the detection range of the unmanned aerial vehicle, so that when the unmanned aerial vehicle is located in the preset area it can track and detect the target at the target position of the corresponding preset future time point;
when the handover unmanned aerial vehicle can track and detect the target, the handover unmanned aerial vehicle broadcasts handover confirmation information to the other unmanned aerial vehicles in the unmanned aerial vehicle cluster, so that the unmanned aerial vehicle currently performing target tracking returns to patrol in the sub-area for which it is responsible.
8. The unmanned aerial vehicle cluster cooperative tracking method based on multi-region division according to claim 7, wherein predicting target positions of a plurality of preset future time points according to target motion information comprises:
processing the target motion information by using the long and short term memory network after pre-training, and predicting first target positions of a plurality of preset future time points;
setting the motion mode of the target as linear motion, and predicting second target positions of a plurality of preset future time points by adopting a linear function;
and calculating the mean value of the first target position and the second target position, and taking the obtained mean value as the prediction result of the target positions of a plurality of preset future time points.
9. The unmanned aerial vehicle cluster cooperative tracking method based on multi-zone division according to claim 8, wherein the first target positions of a plurality of preset future time points are predicted by using the following formula five;
[formula five: image in the original; prediction of the first target positions by the long and short term memory network]
predicting second target positions of a plurality of preset future time points by using the following formula six;
[formula six: image in the original; prediction of the second target positions under the linear-motion assumption]
wherein WP_LSTM represents the first target position data of the plurality of preset future time points, LSTM() represents the long and short term memory network model mapping function, T_j represents the jth preset future time point, HistoryTrace represents the historical data of the target movement trajectory, WP_lin represents the second target position data of the plurality of preset future time points, and v represents the movement speed of the target at the moment the unmanned aerial vehicle receives the target motion information.
10. The unmanned aerial vehicle cluster cooperative tracking method based on multi-zone division according to any one of claims 7 to 9, wherein the following method is used to determine whether the unmanned aerial vehicle can track and detect the corresponding target position at or before a preset future time point:
calculating the estimated distance range between the unmanned aerial vehicle and the target position of the preset future time point by using the following formula eight;
[formula eight: image in the original; estimated distance range between the unmanned aerial vehicle and the target position of the preset future time point]
judging whether the following formula nine is true, if true, indicating that the unmanned aerial vehicle can track and detect the corresponding target position at or before the preset future time point, and if false, indicating that the unmanned aerial vehicle cannot track and detect the corresponding target position at or before the preset future time point;
[formula nine: image in the original; condition for judging whether the unmanned aerial vehicle can track and detect the target position at or before the preset future time point]
wherein the quantity given by formula eight is the estimated distance range, T_C represents the corresponding preset future time point, WP(T_C) represents the target position at the preset future time point T_C, T_r represents the moment at which the unmanned aerial vehicle receives the target motion information, v_UAV represents the traveling speed of the unmanned aerial vehicle itself, and Dt represents the detection range of the unmanned aerial vehicle.
CN202110568412.5A 2021-05-25 2021-05-25 Unmanned aerial vehicle cluster cooperative tracking method based on multi-region division Active CN113342032B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110568412.5A CN113342032B (en) 2021-05-25 2021-05-25 Unmanned aerial vehicle cluster cooperative tracking method based on multi-region division

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110568412.5A CN113342032B (en) 2021-05-25 2021-05-25 Unmanned aerial vehicle cluster cooperative tracking method based on multi-region division

Publications (2)

Publication Number Publication Date
CN113342032A true CN113342032A (en) 2021-09-03
CN113342032B CN113342032B (en) 2022-09-20

Family

ID=77471200

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110568412.5A Active CN113342032B (en) 2021-05-25 2021-05-25 Unmanned aerial vehicle cluster cooperative tracking method based on multi-region division

Country Status (1)

Country Link
CN (1) CN113342032B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102419598A (en) * 2011-12-08 2012-04-18 南京航空航天大学 Method for cooperatively detecting moving target by using multiple unmanned aerial vehicles
US10593109B1 (en) * 2017-06-27 2020-03-17 State Farm Mutual Automobile Insurance Company Systems and methods for controlling a fleet of drones for data collection
JP2020061082A (en) * 2018-10-12 2020-04-16 パナソニックi−PROセンシングソリューションズ株式会社 Tracking system, patrol system, and unmanned air vehicle
CN110288165A (en) * 2019-07-02 2019-09-27 南京信息工程大学 The unmanned monitoring and tracking method of multiple target based on quick clustering and optimization distribution
CN110602633A (en) * 2019-08-02 2019-12-20 广东工业大学 Explosive flow-oriented mobile edge computing unmanned aerial vehicle cluster auxiliary communication method
CN110825112A (en) * 2019-11-22 2020-02-21 渤海大学 Oil field dynamic invasion target tracking system and method based on multiple unmanned aerial vehicles

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DAI Dingchuan et al.: "无人机航迹规划分段需求分析" [Requirements analysis of UAV route planning segmentation], 《战术导弹技术》 [Tactical Missile Technology] *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114863688A (en) * 2022-07-06 2022-08-05 深圳联和智慧科技有限公司 Intelligent positioning method and system for muck vehicle based on unmanned aerial vehicle

Also Published As

Publication number Publication date
CN113342032B (en) 2022-09-20

Similar Documents

Publication Publication Date Title
CN108594834B (en) Multi-AUV self-adaptive target searching and obstacle avoiding method oriented to unknown environment
CN108897312B (en) Method for planning continuous monitoring path of multiple unmanned aerial vehicles to large-scale environment
CN106842184B (en) Multi-target detection and tracking method based on beam scheduling
US7765062B2 (en) Method and system for autonomous tracking of a mobile target by an unmanned aerial vehicle
CN111695776A (en) Unmanned aerial vehicle cluster distributed online cooperative area reconnaissance method and device
WO2019152149A1 (en) Actively complementing exposure settings for autonomous navigation
US20120093361A1 (en) Tracking system and method for regions of interest and computer program product thereof
WO2017074966A1 (en) Joint processing for embedded data inference
CN112051862A (en) Multi-machine heterogeneous sensor cooperative multi-target tracking oriented to observation optimization
CN109976386A (en) A kind of method and system of multiple no-manned plane collaboration tracking target
CN111006669B (en) Unmanned aerial vehicle system task cooperation and path planning method
CN103310190A (en) Facial image sample acquiring and optimizing method based on heterogeneous active vision network
CN111474953A (en) Multi-dynamic-view-angle-coordinated aerial target identification method and system
Natalizio et al. Two families of algorithms to film sport events with flying robots
CN109657928B (en) Cooperative scheduling method of closed-loop cooperative scheduling framework of vehicle-mounted sensor system
CN113342032B (en) Unmanned aerial vehicle cluster cooperative tracking method based on multi-region division
CN110823223A (en) Path planning method and device for unmanned aerial vehicle cluster
CN112363527B (en) Multi-aircraft cooperative trapping method and system based on optimal control theory
Bousias et al. Collaborative visual area coverage using aerial agents equipped with PTZ-cameras under localization uncertainty
Loscrí et al. Performance evaluation of novel distributed coverage techniques for swarms of flying robots
CN103677734A (en) Multi-target data association algorithm based on feature matching matrix
CN114047785A (en) Method and system for cooperatively searching multiple moving targets by unmanned aerial vehicle cluster
Kim et al. Airborne multisensor management for multitarget tracking
CN116954261A (en) Multi-platform cooperative target track tracking method and device
CN115840463A (en) Data processing method and device for unmanned aerial vehicle cluster cooperative reconnaissance

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant