CN113534834A - Multi-unmanned aerial vehicle cooperative tracking and positioning method - Google Patents
Multi-unmanned aerial vehicle cooperative tracking and positioning method
- Publication number
- CN113534834A CN113534834A CN202010283032.2A CN202010283032A CN113534834A CN 113534834 A CN113534834 A CN 113534834A CN 202010283032 A CN202010283032 A CN 202010283032A CN 113534834 A CN113534834 A CN 113534834A
- Authority
- CN
- China
- Prior art keywords
- fusion
- node
- local
- unmanned aerial
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/10—Simultaneous control of position or course in three dimensions
- G05D1/101—Simultaneous control of position or course in three dimensions specially adapted for aircraft
- G05D1/104—Simultaneous control of position or course in three dimensions specially adapted for aircraft involving a plurality of aircrafts, e.g. formation flying
Abstract
The invention discloses a multi-unmanned aerial vehicle cooperative tracking and positioning method. The method comprises the following steps: establishing a directed connectivity graph based on a graph theory method according to the topology of the unmanned aerial vehicle communication network, obtaining the connectivity information of the unmanned aerial vehicle sensor nodes and their adjacent nodes; establishing a linear continuous-time system model; estimating the unknown target information with the IKCF filtering algorithm; and performing data fusion on the estimation result of each node with a sequential fast covariance intersection fusion algorithm to obtain the determined target position. The method makes full use of the estimation information of adjacent unmanned aerial vehicle nodes, improves the real-time performance of the system, and estimates the target position more accurately.
Description
Technical Field
The invention relates to the technical field of unmanned aerial vehicle control and navigation, in particular to a multi-unmanned aerial vehicle cooperative tracking and positioning method.
Background
With the development of science and technology, unmanned aerial vehicles have found wide application in both military and civilian fields. A single unmanned aerial vehicle, however, is limited by its sensor viewing angle, cannot observe a target from multiple directions, and has weak endurance; multiple unmanned aerial vehicles working cooperatively therefore widen the observation range and achieve safe, reliable operation.
When the drones work together, the network topology changes frequently; such a time-varying topology is generally called a switching topology. Estimation methods designed for switching topologies can accommodate these topology changes and ultimately yield more accurate position estimates. In such practical situations, a discrete-time system often cannot fully reflect the switching of the topological structure, so research on filtering methods for continuous-time systems is significant. In recent years, distributed consensus Kalman filtering algorithms with fixed topologies have attracted a great deal of attention. In these filtering algorithms, a sensor node receives measurement information from its neighborhood and performs state estimation using the inverse of the covariance matrix. In a sensor network, however, the target state may not be fully observable to some nodes, so those nodes' target state estimates are not accurate enough.
Disclosure of Invention
The invention aims to provide a multi-unmanned aerial vehicle cooperative tracking and positioning method which is good in real-time performance and high in accuracy.
The technical solution for realizing the purpose of the invention is as follows. A multi-unmanned aerial vehicle cooperative tracking and positioning method comprises the following steps:
step 1, establishing a directed connectivity graph based on a graph theory method according to the topology of the unmanned aerial vehicle communication network, obtaining the connectivity information of each unmanned aerial vehicle sensor node and its adjacent nodes;
step 2, establishing a linear continuous-time system model;
step 3, estimating the unknown target information by using the IKCF filtering algorithm;
and step 4, performing data fusion on the estimation result of each node by adopting the sequential fast covariance intersection fusion algorithm to obtain the determined target position.
Further, the step 2 of establishing the linear continuous time system model specifically includes:

ẋ = Ax + Bu + Fw (1)

where x ∈ R^n is the target state, the process noise w ~ N(0, Q) is Gaussian white noise with variance Q, ẋ is the derivative of the target state, u is the input, A is the system matrix, B is the input matrix, and F is the noise matrix.

Suppose the state quantity in formula (1) is observed by a connected graph G of N sensor nodes; the observation model of node i in the graph is given by formula (2), where ω_ij ~ N(0, n_c) is the inter-node communication noise with variance n_c, and Z_i is the direct observation of the target by sensor i:

Z_i = H_i x + v_i (3)

H_i is the measurement matrix, and the measurement noise v_i ~ N(0, R_i) is Gaussian white noise with variance R_i. z_i is the observation formed from the sensor's own measurement, the neighbor node states, and the weighting coefficients; a_ij indicates whether sensor nodes i and j communicate: a_ij = 1 if they do, otherwise a_ij = 0; x̂_j and P_j are the state estimate of node j and its variance matrix; and g_i0 = 1 if sensor i can directly obtain measurements of the target, otherwise g_i0 = 0.
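As an illustration of the models above, the following minimal Python sketch simulates a scalar instance of the continuous-time dynamics (1), integrated with a small Euler step, together with the direct observation (3). All numeric values (A, B, F, u, Q, R, the step dt) are invented for illustration and are not taken from the patent.

```python
import random

def simulate_state(x0, A, B, F, u, q_var, dt, steps, rng):
    """Euler-integrate the scalar dynamics x_dot = A*x + B*u + F*w,
    where w ~ N(0, q_var) is the process noise of Eq. (1)."""
    x = x0
    trajectory = [x]
    for _ in range(steps):
        w = rng.gauss(0.0, q_var ** 0.5)        # process noise sample
        x = x + (A * x + B * u + F * w) * dt    # Euler step of Eq. (1)
        trajectory.append(x)
    return trajectory

def observe(x, H, r_var, rng):
    """Direct observation Z_i = H_i*x + v_i with v_i ~ N(0, R_i), Eq. (3)."""
    return H * x + rng.gauss(0.0, r_var ** 0.5)

rng = random.Random(0)
traj = simulate_state(x0=1.0, A=-0.5, B=1.0, F=1.0, u=0.2,
                      q_var=0.01, dt=0.01, steps=500, rng=rng)
z = observe(traj[-1], H=1.0, r_var=0.04, rng=rng)
```

With A < 0 the deterministic part of the state decays toward the equilibrium -B·u/A = 0.4, so the trajectory settles near that value plus small process-noise fluctuations.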
Further, in step 3, the unknown target information is estimated by using the IKCF filtering algorithm, specifically as follows:
IKCF filtering is performed according to formulas (3) to (6), where subscripts i and j denote the i-th and j-th sensor nodes; the gains appearing in the formulas are the direct and indirect Kalman gains, respectively; and x̂_j and P_j are the state estimate of node j and its variance matrix.
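The exact forms of formulas (4) to (6) are not reproduced in this text. The scalar sketch below therefore only illustrates the general information-weighting idea described here: a node blends its own measurement (weighted by the inverse measurement variance 1/R_i) with the estimates of its communicating neighbors (weighted by their inverse variances 1/P_j), so that a node with g_i0 = 0 can still update its state through its neighbors. The function name and structure are assumptions, not the patent's exact filter.

```python
def ikcf_style_update(x_hat, P, z, R, neighbors):
    """One information-weighted update for node i (scalar sketch).

    x_hat, P  : node i's prior estimate and its variance
    z, R      : node i's direct measurement and its variance;
                pass z=None when the node cannot observe the target (g_i0 = 0)
    neighbors : list of (x_hat_j, P_j) from communicating nodes (a_ij = 1)
    """
    info = 1.0 / P                  # prior information (inverse variance)
    weighted = x_hat / P
    if z is not None:               # direct-measurement (Kalman) term
        info += 1.0 / R
        weighted += z / R
    for xj, Pj in neighbors:        # consensus terms from neighbor estimates
        info += 1.0 / Pj
        weighted += xj / Pj
    P_new = 1.0 / info              # posterior variance
    return weighted * P_new, P_new  # posterior estimate and variance
```

Calling this with z=None shows the key property claimed for the IKCF: a node that cannot observe the target still shrinks its variance using only neighbor information.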
Further, in step 4, the sequential fast covariance intersection fusion algorithm is adopted to perform data fusion on the estimation result of each node to obtain the determined target position, specifically:
sequential fast covariance intersection fusion is performed according to formulas (7) to (10); the fusion of the target's motion information is completed in multiple steps, one fusion being performed each time a node receives a frame of data, where:
- k is the number of local estimates already fused by the current fusion node at the current time;
- x̂_f,k is the fused value after the current node has fused k local estimates, and P_f,k is the variance matrix of that fused value;
- x̂_new is the local estimate to be fused that is newly received after the current node has fused k local estimates, and P_new is its variance matrix;
- ε_f and ω_f are the intermediate and normalized fusion coefficients of the current fused value;
- ε_new and ω_new are the intermediate and normalized fusion coefficients of the newly received local estimate;
- x̂_f,k+1 is the fused value of the k+1 local target state estimates, and P_f,k+1 is its fusion variance matrix.
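The exact forms of formulas (7) to (10) are likewise not reproduced here. The scalar sketch below shows one plausible form of sequential fast covariance intersection consistent with the variable descriptions above: each newly received local estimate is fused into the running result one frame at a time, with intermediate coefficients (ε_f, ε_new) normalized into weights (ω_f, ω_new). Using inverse variances as the intermediate coefficients is an assumption standing in for the patent's fast-CI coefficient formulas.

```python
def fast_ci_pair(x_f, P_f, x_new, P_new):
    """Fuse the running fused estimate with one newly received local
    estimate via covariance-intersection weighting (scalar sketch)."""
    eps_f, eps_new = 1.0 / P_f, 1.0 / P_new      # intermediate coefficients
    omega_f = eps_f / (eps_f + eps_new)          # normalized coefficients
    omega_new = eps_new / (eps_f + eps_new)
    # CI fusion: P_next^{-1} = omega_f * P_f^{-1} + omega_new * P_new^{-1}
    P_next = 1.0 / (omega_f / P_f + omega_new / P_new)
    x_next = P_next * (omega_f * x_f / P_f + omega_new * x_new / P_new)
    return x_next, P_next

def sfci(estimates):
    """Sequentially fuse a list of (x, P) local estimates, one at a time,
    as each frame of data arrives."""
    x_f, P_f = estimates[0]
    for x_new, P_new in estimates[1:]:
        x_f, P_f = fast_ci_pair(x_f, P_f, x_new, P_new)
    return x_f, P_f
```

Note that, unlike a naive product of independent Gaussians, covariance intersection never claims the sources are independent, so fusing two equally uncertain estimates averages them without shrinking the variance; a much more certain source dominates the fused value.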
Compared with the prior art, the invention has the following remarkable advantages:
(1) the IKCF algorithm yields a smaller mean square error in the tracking result and a more accurate estimate; it solves the state estimation problem of a sensor network with a known topological communication structure and provides technical support for the target estimation problem of a subsequent switching system;
(2) in the information-weighted Kalman consensus filter (IKCF), the measurement model of sensor node i is constructed from the node's local measurement information and the neighbor nodes' estimates of the target motion state, so that a node which cannot observe the target can still update its state through the observations of its neighbor nodes;
(3) the sequential fast covariance intersection (SFCI) algorithm reduces the computational load of each fusion node, guarantees that the fusion results of all nodes are consistent, and provides unified data input for the subsequent functions of the system.
Drawings
Fig. 1 is a schematic diagram of cooperative work of unmanned aerial vehicles.
Fig. 2 is a network topology directed connectivity graph.
FIG. 3 is a tracking state error diagram of each node, in which (a) to (d) are x1 to x4, respectively.
FIG. 4 is a data fusion result chart in which (a) to (d) are data fusion results of x1 to x4, respectively.
Detailed Description
The invention discloses a multi-unmanned aerial vehicle cooperative tracking and positioning method. A target is tracked during the cooperative flight of the unmanned aerial vehicles: the unknown target information is estimated using an Information-weighted Kalman Consensus Filter (IKCF), and the estimation results of the nodes are fused using a Sequential Fast Covariance Intersection (SFCI) algorithm to obtain the determined target position. The method makes full use of the estimation information of adjacent unmanned aerial vehicle nodes, further improves the real-time performance of the system, and estimates the target position more accurately.
With reference to fig. 1, the cooperative tracking and positioning method for multiple unmanned aerial vehicles of the present invention includes the following steps:
step 1, establishing a directed connectivity graph based on a graph theory method according to the topology of the unmanned aerial vehicle communication network, obtaining the connectivity information of each unmanned aerial vehicle sensor node and its adjacent nodes;
step 2, establishing a linear continuous-time system model;
step 3, estimating the unknown target information by using the IKCF filtering algorithm;
and step 4, performing data fusion on the estimation result of each node by adopting the sequential fast covariance intersection fusion algorithm to obtain the determined target position.
Further, the step 2 of establishing the linear continuous time system model specifically includes:

ẋ = Ax + Bu + Fw (1)

where x ∈ R^n is the target state, the process noise w ~ N(0, Q) is Gaussian white noise with variance Q, ẋ is the derivative of the target state, u is the input, A is the system matrix, B is the input matrix, and F is the noise matrix.

Suppose the state quantity in formula (1) is observed by a connected graph G of N sensor nodes; the observation model of node i in the graph is given by formula (2), where ω_ij ~ N(0, n_c) is the inter-node communication noise with variance n_c, and Z_i is the direct observation of the target by sensor i:

Z_i = H_i x + v_i (3)

H_i is the measurement matrix, and the measurement noise v_i ~ N(0, R_i) is Gaussian white noise with variance R_i. z_i is the observation formed from the sensor's own measurement, the neighbor node states, and the weighting coefficients; a_ij indicates whether sensor nodes i and j communicate: a_ij = 1 if they do, otherwise a_ij = 0; x̂_j and P_j are the state estimate of node j and its variance matrix; and g_i0 = 1 if sensor i can directly obtain measurements of the target, otherwise g_i0 = 0.
Further, in step 3, the unknown target information is estimated by using the IKCF filtering algorithm, specifically as follows:
IKCF filtering is performed according to formulas (3) to (6), where subscripts i and j denote the i-th and j-th sensor nodes; the gains appearing in the formulas are the direct and indirect Kalman gains, respectively; and x̂_j and P_j are the state estimate of node j and its variance matrix.
Further, in step 4, the sequential fast covariance intersection fusion algorithm is adopted to perform data fusion on the estimation result of each node to obtain the determined target position, specifically:
sequential fast covariance intersection fusion is performed according to formulas (7) to (10); the fusion of the target's motion information is completed in multiple steps, one fusion being performed each time a node receives a frame of data, where:
- k is the number of local estimates already fused by the current fusion node at the current time;
- x̂_f,k is the fused value after the current node has fused k local estimates, and P_f,k is the variance matrix of that fused value;
- x̂_new is the local estimate to be fused that is newly received after the current node has fused k local estimates, and P_new is its variance matrix;
- ε_f and ω_f are the intermediate and normalized fusion coefficients of the current fused value;
- ε_new and ω_new are the intermediate and normalized fusion coefficients of the newly received local estimate;
- x̂_f,k+1 is the fused value of the k+1 local target state estimates, and P_f,k+1 is its fusion variance matrix.
The technical solution of the present invention is described in detail with reference to the following examples, but the scope of the present invention is not limited to the examples.
Examples
In this embodiment, a target is searched for using the unmanned aerial vehicle directed connectivity graph shown in fig. 2. Node 1 can directly observe the target, and new measurement values propagate state estimation information through the cyclic network for node state updating.
First, the connected network G = {v, ε, a} is established, filtering estimation is performed with the IKCF algorithm, and finally data fusion is performed with the SFCI algorithm to obtain uniquely determined target state information. The specific steps are as follows:
The state x = [x1 x2 x3 x4]^T contains two position components and two velocity components; w is Gaussian white noise with variance Q = diag(2, 2, 1, 1).
The observation model of each node is established with observation matrix H_i = I_4; the communication noise ω_ij is Gaussian white noise with variance n_c = 0.5·[1 1 1 1]^T.
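The numeric setup of the embodiment can be sketched as follows. The diag and eye helpers are trivial stand-ins for matrix-library calls, and the reading of Q and n_c as per-state noise variances (rather than, say, a full covariance matrix) is an assumption from the bracketed vector notation above.

```python
def diag(values):
    """Build a diagonal matrix (list of lists) from a list of values."""
    n = len(values)
    return [[values[i] if i == j else 0.0 for j in range(n)]
            for i in range(n)]

def eye(n):
    """Identity matrix of size n."""
    return diag([1.0] * n)

Q = diag([2.0, 2.0, 1.0, 1.0])                  # process-noise variances per state
H_i = eye(4)                                     # every node observes all four states
n_c = [0.5 * v for v in [1.0, 1.0, 1.0, 1.0]]   # per-channel communication-noise variance
```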
And 3, carrying out IKCF filtering according to the formulas (3) to (6).
And 4, performing sequential fast covariance cross fusion according to the formulas (7) to (10).
The embodiment is based on a Matlab simulation platform. As can be seen from fig. 3(a) to (d), the tracking error of each unmanned aerial vehicle sensor node gradually approaches 0, and the mean square error of each node's estimate of the target is shown in the following table:
Source (IKCF) | MSE
---|---
Node 1 | 0.002535
Node 2 | 0.002611
Node 3 | 0.002591
Node 4 | 0.002469
Node 5 | 0.002693
Data Fusion | 0.002425
As shown in FIGS. 4(a) to (d), the SFCI data fusion error is almost zero. From the above results, the target tracking and positioning method based on IKCF filtering achieves consistent estimation of the observed target's state by multiple unmanned aerial vehicles under a fixed topology: the whole unmanned aerial vehicle network gradually reaches consensus tracking of the target, the fusion results of all nodes are guaranteed to be consistent, and unified data input is provided for the system's subsequent tracking and reconnaissance functions.
Claims (4)
1. A multi-unmanned aerial vehicle cooperative tracking and positioning method is characterized by comprising the following steps:
step 1, establishing a directed connectivity graph based on a graph theory method according to a topological structure diagram of an unmanned aerial vehicle communication network, and obtaining connectivity information of sensor nodes and adjacent nodes of the unmanned aerial vehicle;
step 2, establishing a linear continuous time system model;
step 3, estimating target unknown information by using an IKCF filtering algorithm;
and 4, performing data fusion on the estimation result of each node by adopting a sequential rapid covariance cross fusion algorithm to obtain the determined target position.
2. The method for cooperative tracking and positioning of multiple unmanned aerial vehicles according to claim 1, wherein the step 2 of establishing the linear continuous time system model specifically comprises:

ẋ = Ax + Bu + Fw (1)

where x ∈ R^n is the target state, the process noise w ~ N(0, Q) is Gaussian white noise with variance Q, ẋ is the derivative of the target state, u is the input, A is the system matrix, B is the input matrix, and F is the noise matrix.

Suppose the state quantity in formula (1) is observed by a connected graph G of N sensor nodes; the observation model of node i in the graph is given by formula (2), where ω_ij ~ N(0, n_c) is the inter-node communication noise with variance n_c, and Z_i is the direct observation of the target by sensor i:

Z_i = H_i x + v_i (3)

H_i is the measurement matrix, and the measurement noise v_i ~ N(0, R_i) is Gaussian white noise with variance R_i. z_i is the observation formed from the sensor's own measurement, the neighbor node states, and the weighting coefficients; a_ij indicates whether sensor nodes i and j communicate: a_ij = 1 if they do, otherwise a_ij = 0; x̂_j and P_j are the state estimate of node j and its variance matrix; and g_i0 = 1 if sensor i can directly obtain measurements of the target, otherwise g_i0 = 0.
3. The cooperative tracking and positioning method for multiple unmanned aerial vehicles according to claim 2, wherein the estimation of the target unknown information by using the IKCF filtering algorithm in step 3 is as follows:
IKCF filtering is performed according to formulas (3) to (6)
4. The cooperative tracking and positioning method for multiple unmanned aerial vehicles according to claim 3, wherein in step 4, a sequential fast covariance cross fusion algorithm is adopted to perform data fusion on the estimation results of each node to obtain a determined target position, and the specific steps are as follows:
sequential fast covariance intersection fusion is performed according to formulas (7) to (10); the fusion of the target's motion information is completed in multiple steps, one fusion being performed each time the node receives a frame of data, where:
- k is the number of local estimates already fused by the current fusion node at the current time;
- x̂_f,k is the fused value after the current node has fused k local estimates, and P_f,k is the variance matrix of that fused value;
- x̂_new is the local estimate to be fused that is newly received after the current node has fused k local estimates, and P_new is its variance matrix;
- ε_f and ω_f are the intermediate and normalized fusion coefficients of the current fused value;
- ε_new and ω_new are the intermediate and normalized fusion coefficients of the newly received local estimate;
- x̂_f,k+1 is the fused value of the k+1 local target state estimates, and P_f,k+1 is its fusion variance matrix.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010283032.2A CN113534834A (en) | 2020-04-13 | 2020-04-13 | Multi-unmanned aerial vehicle cooperative tracking and positioning method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113534834A true CN113534834A (en) | 2021-10-22 |
Family
ID=78087783
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010283032.2A Pending CN113534834A (en) | 2020-04-13 | 2020-04-13 | Multi-unmanned aerial vehicle cooperative tracking and positioning method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113534834A (en) |
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110044356A (en) * | 2019-04-22 | 2019-07-23 | 北京壹氢科技有限公司 | A kind of lower distributed collaboration method for tracking target of communication topology switching |
Non-Patent Citations (2)
Title |
---|
从金亮: "Fast covariance intersection fusion algorithm and its application" (快速协方差交叉融合算法及应用), 《自动化学报》 (Acta Automatica Sinica) |
吉鸿海: "Adaptive iterative learning control and Kalman consensus filtering with application to high-speed train operation control" (自适应迭代学习控制和卡尔曼一致性滤波及在高速列车运行控制中的应用), 《工程科技Ⅱ辑信息科技》 (Engineering Science & Technology II, Information Science & Technology) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116295359A (en) * | 2023-05-23 | 2023-06-23 | 中国科学院数学与系统科学研究院 | Distributed self-adaptive collaborative tracking positioning method |
CN116295359B (en) * | 2023-05-23 | 2023-08-15 | 中国科学院数学与系统科学研究院 | Distributed self-adaptive collaborative tracking positioning method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108896047B (en) | Distributed sensor network collaborative fusion and sensor position correction method | |
CN107255795B (en) | Indoor mobile robot positioning method and device based on EKF/EFIR hybrid filtering | |
CN108364014A (en) | A kind of multi-sources Information Fusion Method based on factor graph | |
Huang et al. | An observability-constrained sliding window filter for SLAM | |
Fang et al. | Graph optimization approach to range-based localization | |
Zhou et al. | Reinforcement learning based data fusion method for multi-sensors | |
CN109151759B (en) | Sensor network distributed information weighted consistency state filtering method | |
CN104777469B (en) | A kind of radar node selecting method based on error in measurement covariance matrix norm | |
Liu et al. | Measurement dissemination-based distributed bayesian filter using the latest-in-and-full-out exchange protocol for networked unmanned vehicles | |
CN113534834A (en) | Multi-unmanned aerial vehicle cooperative tracking and positioning method | |
Zhao et al. | L1-norm constraint kernel adaptive filtering framework for precise and robust indoor localization under the internet of things | |
CN109341690B (en) | Robust and efficient combined navigation self-adaptive data fusion method | |
Zamani et al. | Minimum-energy distributed filtering | |
CN111883265A (en) | Target state estimation method applied to fire control system | |
Di Rocco et al. | Sensor network localisation using distributed extended kalman filter | |
CN103313384A (en) | Wireless sensor network target tracking method based on informational consistency | |
CN109474892B (en) | Strong robust sensor network target tracking method based on information form | |
CN111216146B (en) | Two-part consistency quantitative control method suitable for networked robot system | |
CN110807478B (en) | Cooperative target tracking method under condition of observing intermittent loss | |
Kong et al. | Hybrid indoor positioning method of BLE and monocular VINS based smartphone | |
CN109282820B (en) | Indoor positioning method based on distributed hybrid filtering | |
CN114705223A (en) | Inertial navigation error compensation method and system for multiple mobile intelligent bodies in target tracking | |
Jung et al. | Scalable and Modular Ultra-Wideband Aided Inertial Navigation | |
CN111695617A (en) | Distributed fire control fusion method based on improved covariance cross algorithm | |
CN112285697A (en) | Multi-sensor multi-target space-time deviation calibration and fusion method |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20211022 |