CN113534834A - Multi-unmanned aerial vehicle cooperative tracking and positioning method - Google Patents

Multi-unmanned aerial vehicle cooperative tracking and positioning method

Info

Publication number
CN113534834A
CN113534834A (application CN202010283032.2A)
Authority
CN
China
Prior art keywords
fusion
node
local
unmanned aerial
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010283032.2A
Other languages
Chinese (zh)
Inventor
贾越 (Jia Yue)
戚国庆 (Qi Guoqing)
李银伢 (Li Yinya)
盛安冬 (Sheng Andong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN202010283032.2A priority Critical patent/CN113534834A/en
Publication of CN113534834A publication Critical patent/CN113534834A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10 - Simultaneous control of position or course in three dimensions
    • G05D1/101 - Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G05D1/104 - Simultaneous control of position or course in three dimensions specially adapted for aircraft involving a plurality of aircrafts, e.g. formation flying

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Navigation (AREA)
  • Feedback Control In General (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a multi-unmanned aerial vehicle cooperative tracking and positioning method. The method comprises the following steps: establishing a directed connectivity graph based on a graph theory method according to an unmanned aerial vehicle communication network topology structure graph to obtain connectivity information of unmanned aerial vehicle sensor nodes and adjacent nodes; establishing a linear continuous time system model; estimating target unknown information by using an IKCF filtering algorithm; and performing data fusion on the estimation result of each node by adopting a sequential fast covariance cross fusion algorithm to obtain the determined target position. The method makes full use of the estimation information of the adjacent unmanned aerial vehicle nodes, improves the real-time performance of the system, and enables the target position to be estimated more accurately.

Description

Multi-unmanned aerial vehicle cooperative tracking and positioning method
Technical Field
The invention relates to the technical field of unmanned aerial vehicle control and navigation, in particular to a multi-unmanned aerial vehicle cooperative tracking and positioning method.
Background
With the development of science and technology, unmanned aerial vehicles (UAVs) have been widely applied in both military and civilian fields. Because a single UAV is limited by its sensor angle, cannot observe a target from multiple directions, and has weak endurance, multiple UAVs work cooperatively to widen the observation range and achieve safe and reliable operation.
When multiple UAVs work together, the network topology changes frequently; such a time-varying topology is generally called a switching topology. Estimation based on the switching topology can accommodate topology changes well, making the final position estimate more accurate. In such practical situations a discrete-time system often cannot fully reflect the switching of the topological structure, so research on filtering methods for continuous-time systems is significant. In recent years, distributed consensus Kalman filtering algorithms under fixed topologies have attracted a great deal of attention. In these filtering algorithms, a sensor node receives measurement information from its neighborhood and performs state estimation using the inverse of the covariance matrix. In a sensor network, however, the target state may not be fully observable from some nodes, so those nodes' target state estimates are not accurate enough.
Disclosure of Invention
The invention aims to provide a multi-unmanned aerial vehicle cooperative tracking and positioning method which is good in real-time performance and high in accuracy.
The technical solution for realizing the purpose of the invention is as follows: a multi-unmanned aerial vehicle cooperative tracking and positioning method comprises the following steps:
step 1, establishing a directed connectivity graph based on a graph theory method according to a topological structure diagram of an unmanned aerial vehicle communication network, and obtaining connectivity information of sensor nodes and adjacent nodes of the unmanned aerial vehicle;
step 2, establishing a linear continuous time system model;
step 3, estimating target unknown information by using an IKCF filtering algorithm;
and 4, performing data fusion on the estimation result of each node by adopting a sequential rapid covariance cross fusion algorithm to obtain the determined target position.
Further, the establishing of the linear continuous time system model in step 2 is specifically:

ẋ = Ax + Bu + Fw   (1)

where x ∈ R^n is the target state; the process noise w ~ N(0, Q) is Gaussian white noise with variance Q; ẋ is the derivative of the target state, u is the input state, A is the system matrix, B is the input matrix, and F is the noise matrix.

Suppose the state quantity in formula (1) is observed through a connected graph G of N nodes; the observation model of node i is expressed as:

[Formula (2), an equation image in the original, is not reproduced: the weighted observation z_i of node i, built from its direct measurement and the neighbor-node state estimates.]

where ω_ij ~ N(0, n_c) is the inter-node communication noise with variance n_c; Z_i is the direct observation of the target by sensor i:

Z_i = H_i x + v_i   (3)

H_i is the measurement matrix, and the measurement noise v_i ~ N(0, R_i) is Gaussian white noise with variance R_i; z_i is the observation result of the sensor weighted with the neighbor-node states; a_ij indicates whether sensor node i and node j communicate: if so, a_ij = 1, otherwise a_ij = 0; x̂_j and P_j are the state estimate of node j and its variance matrix; if sensor i can directly obtain measurement information of the target, g_i0 = 1, otherwise g_i0 = 0.
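As an illustration of the continuous-time model above, the sketch below simulates ẋ = Ax + Bu + Fw for a hypothetical 2-D constant-velocity target with Euler integration; the matrices A, F, Q, the time step, and the initial state are illustrative assumptions, not the patent's values.

```python
import numpy as np

# Hypothetical 2-D constant-velocity target: state x = [px, py, vx, vy]^T.
# Continuous-time model x_dot = A x + B u + F w, integrated with Euler steps.
A = np.array([[0., 0., 1., 0.],
              [0., 0., 0., 1.],
              [0., 0., 0., 0.],
              [0., 0., 0., 0.]])  # positions driven by velocities
B = np.zeros((4, 1))              # input matrix (input u assumed zero here)
F = np.eye(4)                     # noise matrix
Q = np.diag([2., 2., 1., 1.])     # process-noise variance (illustrative)

def simulate(x0, dt=0.01, steps=1000, rng=None):
    """Euler-integrate x_dot = A x + F w with w ~ N(0, Q)."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(steps):
        w = rng.multivariate_normal(np.zeros(4), Q)
        x = x + dt * (A @ x + F @ w)
        traj.append(x.copy())
    return np.array(traj)

traj = simulate([0., 0., 1., 0.5])  # 10 s of motion with initial velocity (1, 0.5)
```

Each UAV node would run its filter against measurements of a trajectory of this kind.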
Further, in step 3, the target unknown information is estimated by using the IKCF filtering algorithm, specifically as follows:

IKCF filtering is performed according to formulas (3) to (6).

[The IKCF filtering equations, shown as images in the original, are not reproduced: the estimate-update equation and the update laws of the direct and indirect Kalman gains.]

where subscripts i and j denote the ith and jth sensor nodes; the two gain matrices are the direct and indirect Kalman gains, respectively; x̂_j and P_j are the state estimate of node j and its variance matrix.
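The patent's IKCF equations are images in the original, so the sketch below shows only the general shape of a consensus-style Kalman update under simplifying assumptions (discrete time, one shared covariance approximation, a hypothetical consensus gain `gamma`): each node combines a direct-measurement innovation, applied only where g_i = 1, with a consensus term pulling it toward its neighbors' estimates.

```python
import numpy as np

def consensus_kf_step(x_hats, P, z, H, R, A_adj, g, dt=0.01, gamma=0.5):
    """One simplified consensus-Kalman step for all N nodes at once.

    x_hats : (N, n) current state estimates of the nodes
    P      : (n, n) shared error-covariance approximation
    z      : (N, m) direct measurements (meaningful only where g[i] == 1)
    A_adj  : (N, N) 0/1 adjacency matrix; g : (N,) direct-observation flags
    """
    N, n = x_hats.shape
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # direct Kalman gain
    new = np.empty_like(x_hats)
    for i in range(N):
        # measurement innovation only for nodes that see the target directly
        innov = K @ (z[i] - H @ x_hats[i]) if g[i] else np.zeros(n)
        # consensus term pulls node i toward its neighbours' estimates
        cons = sum(A_adj[i, j] * (x_hats[j] - x_hats[i]) for j in range(N))
        new[i] = x_hats[i] + innov + dt * gamma * cons
    return new
```

With only one node measuring directly, repeated application of this step drives every node's estimate toward the target state, which is the behavior the IKCF formalizes for the continuous-time case.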
Further, in step 4, the sequential fast covariance cross fusion algorithm is adopted to fuse the estimation results of the nodes and obtain the determined target position, specifically:

Sequential fast covariance cross fusion is performed according to formulas (7) to (10); the fusion of the target motion information is completed in multiple steps, one fusion being performed each time a node receives a frame of data.

[Formulas (7) to (10), equation images in the original, are not reproduced: the fusion coefficients and the pairwise fusion of the current fused estimate with each newly received local estimate.]

where k is the number of local estimates already fused at the current fusion node; x̂_f,k is the fused value after the current node has fused k local estimates, and P_f,k is its variance matrix; x̂_new is the newly received local estimate to be fused, and P_new is its variance matrix; ε_f and ω_f are the intermediate and normalized fusion coefficients of the current fused value, and ε_new and ω_new are those of the newly received local estimate; x̂_f,k+1 is the fused value of the k+1 local target state estimates, and P_f,k+1 is its fused variance matrix.
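The exact formulas (7)-(10), including the intermediate coefficients ε_f and ε_new, are images in the original; the sketch below instead uses the widely known trace-based fast covariance intersection rule (covariance intersection is the standard English name for covariance cross fusion), which the sequential procedure then applies pairwise, one newly received local estimate at a time.

```python
import numpy as np

def fast_ci(x1, P1, x2, P2):
    """Fuse two estimates with trace-based fast covariance intersection.

    The weight omega = tr(P2) / (tr(P1) + tr(P2)) replaces the usual
    determinant minimization, so no 1-D optimization is needed.
    """
    w1 = np.trace(P2) / (np.trace(P1) + np.trace(P2))
    w2 = 1.0 - w1
    P1inv, P2inv = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(w1 * P1inv + w2 * P2inv)
    x = P @ (w1 * P1inv @ x1 + w2 * P2inv @ x2)
    return x, P

def sequential_fast_ci(estimates):
    """Fold local (x, P) estimates into one fused result, one frame at a time."""
    x_f, P_f = estimates[0]
    for x_new, P_new in estimates[1:]:
        x_f, P_f = fast_ci(x_f, P_f, x_new, P_new)
    return x_f, P_f
```

Folding the estimates in sequentially is what keeps the per-node computation small: each arriving frame triggers exactly one pairwise fusion against the running result.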
Compared with the prior art, the invention has the following remarkable advantages:
(1) the IKCF algorithm yields a smaller mean square error in the tracking result and a more accurate estimate; it solves the state estimation problem of a sensor network with a known topological communication structure and provides technical support for the subsequent target estimation problem of a switching system;
(2) in the information-weighted Kalman consensus filter (IKCF) algorithm, the measurement model of sensor node i is constructed from the node's local measurement information and the neighbor nodes' estimates of the target motion state, so that a node which cannot observe the target can still update its state through the observations of its neighbor nodes;
(3) the sequential fast covariance intersection (SFCI) algorithm reduces the computation load of each fusion node, guarantees that the fusion results of all nodes are consistent, and provides unified data input for the subsequent functions of the system.
Drawings
Fig. 1 is a schematic diagram of cooperative work of unmanned aerial vehicles.
Fig. 2 is a network topology directed connectivity graph.
FIG. 3 is a tracking state error diagram of each node, in which (a) to (d) are x1 to x4, respectively.
FIG. 4 is a data fusion result chart in which (a) to (d) are data fusion results of x1 to x4, respectively.
Detailed Description
The invention discloses a multi-unmanned aerial vehicle cooperative tracking and positioning method. A target is tracked during the cooperative flight of the unmanned aerial vehicles: unknown target information is estimated with an information-weighted Kalman consensus filter (IKCF), and the estimation results of the nodes are fused with a sequential fast covariance intersection (SFCI) algorithm to obtain the determined target position. The method makes full use of the estimation information of adjacent unmanned aerial vehicle nodes, improves the real-time performance of the system, and makes the target position estimate more accurate.
With reference to fig. 1, the cooperative tracking and positioning method for multiple unmanned aerial vehicles of the present invention includes the following steps:
step 1, establishing a directed connectivity graph based on a graph theory method according to a topological structure diagram of an unmanned aerial vehicle communication network, and obtaining connectivity information of sensor nodes and adjacent nodes of the unmanned aerial vehicle;
step 2, establishing a linear continuous time system model;
step 3, estimating target unknown information by using an IKCF filtering algorithm;
and 4, performing data fusion on the estimation result of each node by adopting a sequential rapid covariance cross fusion algorithm to obtain the determined target position.
Further, the establishing of the linear continuous time system model in step 2 is specifically:

ẋ = Ax + Bu + Fw   (1)

where x ∈ R^n is the target state; the process noise w ~ N(0, Q) is Gaussian white noise with variance Q; ẋ is the derivative of the target state, u is the input state, A is the system matrix, B is the input matrix, and F is the noise matrix.

Suppose the state quantity in formula (1) is observed through a connected graph G of N nodes; the observation model of node i is expressed as:

[Formula (2), an equation image in the original, is not reproduced: the weighted observation z_i of node i, built from its direct measurement and the neighbor-node state estimates.]

where ω_ij ~ N(0, n_c) is the inter-node communication noise with variance n_c; Z_i is the direct observation of the target by sensor i:

Z_i = H_i x + v_i   (3)

H_i is the measurement matrix, and the measurement noise v_i ~ N(0, R_i) is Gaussian white noise with variance R_i; z_i is the observation result of the sensor weighted with the neighbor-node states; a_ij indicates whether sensor node i and node j communicate: if so, a_ij = 1, otherwise a_ij = 0; x̂_j and P_j are the state estimate of node j and its variance matrix; if sensor i can directly obtain measurement information of the target, g_i0 = 1, otherwise g_i0 = 0.
Further, in step 3, the target unknown information is estimated by using the IKCF filtering algorithm, specifically as follows:

IKCF filtering is performed according to formulas (3) to (6).

[The IKCF filtering equations, shown as images in the original, are not reproduced: the estimate-update equation and the update laws of the direct and indirect Kalman gains.]

where subscripts i and j denote the ith and jth sensor nodes; the two gain matrices are the direct and indirect Kalman gains, respectively; x̂_j and P_j are the state estimate of node j and its variance matrix.
Further, in step 4, the sequential fast covariance cross fusion algorithm is adopted to fuse the estimation results of the nodes and obtain the determined target position, specifically:

Sequential fast covariance cross fusion is performed according to formulas (7) to (10); the fusion of the target motion information is completed in multiple steps, one fusion being performed each time a node receives a frame of data.

[Formulas (7) to (10), equation images in the original, are not reproduced: the fusion coefficients and the pairwise fusion of the current fused estimate with each newly received local estimate.]

where k is the number of local estimates already fused at the current fusion node; x̂_f,k is the fused value after the current node has fused k local estimates, and P_f,k is its variance matrix; x̂_new is the newly received local estimate to be fused, and P_new is its variance matrix; ε_f and ω_f are the intermediate and normalized fusion coefficients of the current fused value, and ε_new and ω_new are those of the newly received local estimate; x̂_f,k+1 is the fused value of the k+1 local target state estimates, and P_f,k+1 is its fused variance matrix.
The technical solution of the present invention is described in detail with reference to the following examples, but the scope of the present invention is not limited to the examples.
Examples
In this embodiment, a target is tracked using the directed connected graph of the UAV topology shown in Fig. 2, in which node 1 can directly observe the target; new measurements propagate state estimation information through the cyclic network for node state updating.
First, a connected network G = {V, ε, A} is established; filtering estimation is then performed with the IKCF algorithm, and finally data fusion is performed with the SFCI algorithm to obtain uniquely determined target state information. The specific steps are as follows:
step 1, establishing a directed connectivity graph according to a graph theory method based on an obtained network topology structure graph, and obtaining connectivity information of unmanned aerial vehicle sensor nodes and adjacent nodes as follows:
[The adjacency/connectivity matrix, an image in the original, is not reproduced.]
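The connectivity information itself is an image in the original, so the sketch below builds a hypothetical adjacency matrix for a 5-node directed ring in which only node 1 observes the target directly; this matches the description of Fig. 2 but not necessarily its exact edges.

```python
import numpy as np

# Hypothetical 5-node directed ring 1 -> 2 -> 3 -> 4 -> 5 -> 1; the exact
# edges of Fig. 2 are not reproduced here.  a[i, j] = 1 means node i receives
# information from node j; g0[i] = 1 means node i observes the target directly.
N = 5
a = np.zeros((N, N), dtype=int)
for i in range(N):
    a[i, (i - 1) % N] = 1        # each node listens to its predecessor
g0 = np.array([1, 0, 0, 0, 0])   # only node 1 measures the target directly
```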
step 2, establishing the linear continuous time system model, where the system input is assumed to be 0; the target motion state equation is:

[The target motion state equation, an image in the original, is not reproduced.]

The state x = [x1 x2 x3 x4]^T contains two position components and two velocity components; w is Gaussian white noise with variance Q = [2 2 1 1]^T.
The observation model of each node is established with observation matrix H_i = I_4; the noise ω_ij is Gaussian white noise with variance n_c = 0.5*[1 1 1 1]^T.
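The target motion state equation above is an image in the original; assuming the usual constant-velocity coupling (positions x1, x2 driven by velocities x3, x4), the system matrix A is nilpotent and the exact discrete-time transition matrix follows in closed form:

```python
import numpy as np

# Assumed constant-velocity model (the original equation is an image):
# positions x1, x2 are driven by velocities x3, x4.
A = np.array([[0., 0., 1., 0.],
              [0., 0., 0., 1.],
              [0., 0., 0., 0.],
              [0., 0., 0., 0.]])
dt = 0.01
# A is nilpotent (A @ A == 0), so the matrix exponential truncates exactly:
Phi = np.eye(4) + A * dt         # e^(A*dt) = I + A*dt, higher powers vanish
x = np.array([0., 0., 1., 0.5])  # origin, velocity (1, 0.5)
x_next = Phi @ x                 # one exact noise-free discrete step
```

This closed-form discretization is what a simulation of the continuous-time model would use between topology-switching instants.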
Step 3: IKCF filtering is performed according to formulas (3) to (6).
Step 4: sequential fast covariance cross fusion is performed according to formulas (7) to (10).
The embodiment is based on a Matlab simulation platform. As can be seen from Figs. 3(a) to (d), the tracking error of each UAV sensor node gradually approaches 0; the mean square error of each node's estimate of the target is shown in the following table:

Source (IKCF)    MSE
Node 1           0.002535
Node 2           0.002611
Node 3           0.002591
Node 4           0.002469
Node 5           0.002693
Data fusion      0.002425
as shown in FIGS. 4(a) to (d), the SFCI data fusion results show that the error is almost 0 or so. From the above results, the target tracking and positioning method based on the IKCF filtering is adopted to realize the state consistency estimation of the multiple unmanned aerial vehicles on the observed target under the fixed topological structure, namely the whole unmanned aerial vehicle network gradually approaches to achieve the consistency tracking on the target, the fusion results of all nodes can be ensured to be consistent, and the unified data input is provided for the subsequent tracking and reconnaissance realization of the system.

Claims (4)

1. A multi-unmanned aerial vehicle cooperative tracking and positioning method is characterized by comprising the following steps:
step 1, establishing a directed connectivity graph based on a graph theory method according to a topological structure diagram of an unmanned aerial vehicle communication network, and obtaining connectivity information of sensor nodes and adjacent nodes of the unmanned aerial vehicle;
step 2, establishing a linear continuous time system model;
step 3, estimating target unknown information by using an IKCF filtering algorithm;
and 4, performing data fusion on the estimation result of each node by adopting a sequential rapid covariance cross fusion algorithm to obtain the determined target position.
2. The multi-unmanned aerial vehicle cooperative tracking and positioning method according to claim 1, wherein the establishing of the linear continuous time system model in step 2 is specifically:

ẋ = Ax + Bu + Fw   (1)

where x ∈ R^n is the target state; the process noise w ~ N(0, Q) is Gaussian white noise with variance Q; ẋ is the derivative of the target state, u is the input state, A is the system matrix, B is the input matrix, and F is the noise matrix;

suppose the state quantity in formula (1) is observed through a connected graph G of N nodes; the observation model of node i is expressed as:

[Formula (2), an equation image in the original, is not reproduced: the weighted observation z_i of node i, built from its direct measurement and the neighbor-node state estimates.]

where ω_ij ~ N(0, n_c) is the inter-node communication noise with variance n_c; Z_i is the direct observation of the target by sensor i:

Z_i = H_i x + v_i   (3)

H_i is the measurement matrix, and the measurement noise v_i ~ N(0, R_i) is Gaussian white noise with variance R_i; z_i is the observation result of the sensor weighted with the neighbor-node states; a_ij indicates whether sensor node i and node j communicate: if so, a_ij = 1, otherwise a_ij = 0; x̂_j and P_j are the state estimate of node j and its variance matrix; if sensor i can directly obtain measurement information of the target, g_i0 = 1, otherwise g_i0 = 0.
3. The multi-unmanned aerial vehicle cooperative tracking and positioning method according to claim 2, wherein in step 3 the target unknown information is estimated by using the IKCF filtering algorithm, specifically as follows:

IKCF filtering is performed according to formulas (3) to (6).

[The IKCF filtering equations, shown as images in the original, are not reproduced: the estimate-update equation and the update laws of the direct and indirect Kalman gains.]

where subscripts i and j denote the ith and jth sensor nodes; the two gain matrices are the direct and indirect Kalman gains, respectively; x̂_j and P_j are the state estimate of node j and its variance matrix.
4. The multi-unmanned aerial vehicle cooperative tracking and positioning method according to claim 3, wherein in step 4 the sequential fast covariance cross fusion algorithm is adopted to fuse the estimation results of the nodes and obtain the determined target position, specifically:

Sequential fast covariance cross fusion is performed according to formulas (7) to (10); the fusion of the target motion information is completed in multiple steps, one fusion being performed each time a node receives a frame of data.

[Formulas (7) to (10), equation images in the original, are not reproduced: the fusion coefficients and the pairwise fusion of the current fused estimate with each newly received local estimate.]

where k is the number of local estimates already fused at the current fusion node; x̂_f,k is the fused value after the current node has fused k local estimates, and P_f,k is its variance matrix; x̂_new is the newly received local estimate to be fused, and P_new is its variance matrix; ε_f and ω_f are the intermediate and normalized fusion coefficients of the current fused value, and ε_new and ω_new are those of the newly received local estimate; x̂_f,k+1 is the fused value of the k+1 local target state estimates, and P_f,k+1 is its fused variance matrix.
CN202010283032.2A 2020-04-13 2020-04-13 Multi-unmanned aerial vehicle cooperative tracking and positioning method Pending CN113534834A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010283032.2A CN113534834A (en) 2020-04-13 2020-04-13 Multi-unmanned aerial vehicle cooperative tracking and positioning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010283032.2A CN113534834A (en) 2020-04-13 2020-04-13 Multi-unmanned aerial vehicle cooperative tracking and positioning method

Publications (1)

Publication Number Publication Date
CN113534834A (en) 2021-10-22

Family

ID=78087783

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010283032.2A Pending CN113534834A (en) 2020-04-13 2020-04-13 Multi-unmanned aerial vehicle cooperative tracking and positioning method

Country Status (1)

Country Link
CN (1) CN113534834A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116295359A (en) * 2023-05-23 2023-06-23 中国科学院数学与系统科学研究院 Distributed self-adaptive collaborative tracking positioning method

Citations (1)

Publication number Priority date Publication date Assignee Title
CN110044356A (en) * 2019-04-22 2019-07-23 北京壹氢科技有限公司 A kind of lower distributed collaboration method for tracking target of communication topology switching

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
CN110044356A (en) * 2019-04-22 2019-07-23 北京壹氢科技有限公司 A kind of lower distributed collaboration method for tracking target of communication topology switching

Non-Patent Citations (2)

Title
从金亮 (Cong Jinliang): "Fast covariance intersection fusion algorithm and its application" (快速协方差交叉融合算法及应用), Acta Automatica Sinica (《自动化学报》) *
吉鸿海 (Ji Honghai): "Adaptive iterative learning control and Kalman consensus filtering with application to high-speed train operation control" (自适应迭代学习控制和卡尔曼一致性滤波及在高速列车运行控制中的应用), 《工程科技Ⅱ辑信息科技》 *

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN116295359A (en) * 2023-05-23 2023-06-23 中国科学院数学与系统科学研究院 Distributed self-adaptive collaborative tracking positioning method
CN116295359B (en) * 2023-05-23 2023-08-15 中国科学院数学与系统科学研究院 Distributed self-adaptive collaborative tracking positioning method

Similar Documents

Publication Publication Date Title
CN108896047B (en) Distributed sensor network collaborative fusion and sensor position correction method
CN107255795B (en) Indoor mobile robot positioning method and device based on EKF/EFIR hybrid filtering
CN108364014A (en) A kind of multi-sources Information Fusion Method based on factor graph
Huang et al. An observability-constrained sliding window filter for SLAM
Fang et al. Graph optimization approach to range-based localization
Zhou et al. Reinforcement learning based data fusion method for multi-sensors
CN109151759B (en) Sensor network distributed information weighted consistency state filtering method
CN104777469B (en) A kind of radar node selecting method based on error in measurement covariance matrix norm
Liu et al. Measurement dissemination-based distributed bayesian filter using the latest-in-and-full-out exchange protocol for networked unmanned vehicles
CN113534834A (en) Multi-unmanned aerial vehicle cooperative tracking and positioning method
Zhao et al. L1-norm constraint kernel adaptive filtering framework for precise and robust indoor localization under the internet of things
CN109341690B (en) Robust and efficient combined navigation self-adaptive data fusion method
Zamani et al. Minimum-energy distributed filtering
CN111883265A (en) Target state estimation method applied to fire control system
Di Rocco et al. Sensor network localisation using distributed extended kalman filter
CN103313384A (en) Wireless sensor network target tracking method based on informational consistency
CN109474892B (en) Strong robust sensor network target tracking method based on information form
CN111216146B (en) Two-part consistency quantitative control method suitable for networked robot system
CN110807478B (en) Cooperative target tracking method under condition of observing intermittent loss
Kong et al. Hybrid indoor positioning method of BLE and monocular VINS based smartphone
CN109282820B (en) Indoor positioning method based on distributed hybrid filtering
CN114705223A (en) Inertial navigation error compensation method and system for multiple mobile intelligent bodies in target tracking
Jung et al. Scalable and Modular Ultra-Wideband Aided Inertial Navigation
CN111695617A (en) Distributed fire control fusion method based on improved covariance cross algorithm
CN112285697A (en) Multi-sensor multi-target space-time deviation calibration and fusion method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20211022