CN116383966A - Multi-unmanned system distributed cooperative positioning method based on interaction multi-model - Google Patents

Multi-unmanned system distributed cooperative positioning method based on interaction multi-model

Info

Publication number
CN116383966A
CN116383966A
Authority
CN
China
Prior art keywords
model
unmanned system
unmanned
moment
under
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310331808.7A
Other languages
Chinese (zh)
Other versions
CN116383966B (en)
Inventor
王国庆
范潇潇
赵嘉祥
张子昊
赵鑫
林常见
马磊
杨春雨
代伟
王帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Mining and Technology CUMT
Original Assignee
China University of Mining and Technology CUMT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Mining and Technology CUMT filed Critical China University of Mining and Technology CUMT
Priority to CN202310331808.7A priority Critical patent/CN116383966B/en
Publication of CN116383966A publication Critical patent/CN116383966A/en
Application granted granted Critical
Publication of CN116383966B publication Critical patent/CN116383966B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00: Computer-aided design [CAD]
    • G06F30/10: Geometric CAD
    • G06F30/15: Vehicle, aircraft or watercraft design
    • G06F30/20: Design optimisation, verification or simulation
    • G06F17/00: Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10: Complex mathematical operations
    • G06F17/11: Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
    • G06F17/16: Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G06F17/17: Function evaluation by approximation methods, e.g. inter- or extrapolation, smoothing, least mean square method
    • G06F17/18: Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis

Abstract

The invention discloses a multi-unmanned system distributed cooperative positioning method based on an interactive multi-model, which comprises the following steps: the motion state of each unmanned system is modeled with a multi-model strategy; measurement information relative to landmarks and to other unmanned systems is incorporated through a distributed filtering update based on first-order Taylor expansion; and the state estimation results under the individual models are fused through the interactive multi-model framework. The invention solves the problem that inaccurate modeling in conventional methods degrades, or even diverges, the positioning accuracy under complex maneuvering of the unmanned system; it enables an unmanned system equipped with high-precision positioning devices to assist in positioning other unmanned systems equipped with low-precision positioning devices, and provides a low-cost, easily extensible, high-precision positioning method for unmanned system cluster operations.

Description

Multi-unmanned system distributed cooperative positioning method based on interaction multi-model
Technical Field
The invention relates to the field of unmanned system distributed co-location, in particular to a multi-unmanned system distributed co-location method based on an interactive multi-model.
Background
In practical applications, reliable and accurate positioning of an unmanned system is the primary prerequisite for completing various operations. Cooperative positioning of multi-unmanned systems has long been a research focus, so improving its accuracy is of great theoretical and practical significance.
In closed or obstructed environments, for example when unmanned underwater vehicles cooperate to complete mine-clearing, tracking, or reconnaissance tasks, each vehicle must determine its own position and related information for subsequent planning and control. Since GPS signals decay rapidly in water, and equipping every unmanned submarine with an expensive high-precision navigation system is costly, having submarines equipped with high-precision navigation equipment correct, via relative observations, the positioning of those with low-precision equipment is an economical and reliable scheme. How to fuse the relative measurements between unmanned platforms, such as unmanned submarines, that are all maneuvering is a major challenge in achieving high-precision cooperative positioning for unmanned system cluster operations; the difficulties lie in modeling the complex motion and in using the relative measurements to track the other unmanned platforms so as to finally achieve cooperative positioning.
Chinese patent publication No. CN104252178B discloses a target tracking method for strongly maneuvering targets that recalculates the model weights on the basis of the IMM algorithm. Because the method exploits not only the model probabilities but also the filtering covariance matrices, its tracking accuracy is higher. Chinese patent publication No. CN102568004A discloses a highly maneuvering target tracking algorithm that tracks maneuvering targets with an IMM-based Kalman filter: a current statistical model with adaptive acceleration adjustment is combined with the CV and CA models of the IMM algorithm to improve the performance of the whole IMM algorithm, and the Markov transition probabilities are computed online in real time from the system mode information hidden in the current measurements, yielding a more accurate posterior estimate and improving the model fusion accuracy. These two patents offer different solutions to the low positioning accuracy caused by strong maneuverability under single-model modeling, but both algorithms apply only to a single unmanned system, not to multiple unmanned systems that must cooperate with each other to complete an operation.
The Chinese patent with publication number CN11595348B discloses a master-slave cooperative positioning method for an autonomous underwater vehicle integrated navigation system: the master AUV transmits its own position information, the slave AUV obtains its relative distance to the master AUV from the sound speed and time delay, and the master AUV cooperatively positions any slave AUV using velocity and range measurements, thereby correcting the distance between the master and slave AUVs and improving the accuracy of master-slave cooperative positioning. However, the state equation in this method is modeled with only a single traditional model, whereas an autonomous underwater vehicle is highly maneuverable in actual missions, and the master-slave scheme is not suitable when the number of underwater vehicles is large.
None of the above cooperative positioning methods simultaneously accounts for the complexity and maneuverability of unmanned system cluster motion while adopting a distributed positioning strategy that is easy to extend and maintain.
Disclosure of Invention
The invention aims to: provide a multi-unmanned system distributed cooperative positioning method based on an interactive multi-model, which enables an unmanned system equipped with high-precision positioning devices to assist other unmanned systems equipped with low-precision positioning devices in positioning.
The technical scheme is as follows: the invention relates to a multi-unmanned system distributed cooperative positioning method, which comprises the following steps:
s1, analyzing a motion form mathematical model of an unmanned system by utilizing a multi-model strategy, constructing a model set, and simultaneously establishing a nonlinear measurement equation of the multi-unmanned system;
s2, obtaining interactive input by using state estimation values of each unmanned system at the last moment under different motion models;
s3, after time updating is carried out on each unmanned system, filtering updating is realized by adopting a distributed structure by utilizing relative measurement of each unmanned system and measurement information of relative landmarks;
s4, updating the probability of each unmanned system corresponding to different motion models through likelihood functions;
and S5, carrying out weighted fusion on the estimation results of the unmanned systems corresponding to different models to obtain a fusion estimation result of the positioning of each unmanned system.
Further, in step S1, according to the complexity and maneuverability of the unmanned system's motion, the mathematical models of its possible motion forms are analyzed using a multi-model strategy and a model set is constructed, and the nonlinear measurement equation of the multi-unmanned system is established; the specific implementation steps are as follows:
step 11, constructing a model set of a motion equation of the multi-unmanned system:
$$X_i^{m}(k)=F_m(k)\,X_i^{m}(k-1)+G_m(k)\,w_m(k)$$

where $X_i^{m}(k)$ is the system state variable of the ith unmanned system under the mth model at time k, $F_m(k)$ is the state transition matrix under the mth model at time k, $G_m(k)$ is the system noise matrix of model m, and $w_m(k)$ is process noise with zero mean and covariance matrix $Q_m$;
step 12, modeling a nonlinear measurement equation:
$$Z_i^{m}(k)=h_i^{m}\big(X_i^{m}(k)\big)+v_m(k)$$

where $Z_i^{m}(k)$ is the measurement variable of the ith unmanned system under the mth model at time k, $h_i^{m}(\cdot)$ is the observation function of the ith unmanned system under the mth model at time k, and $v_m(k)$ is measurement noise with zero mean and covariance matrix $R_m$.
Further, in step S2, the state estimate $\hat{X}_i^{n}(k-1)$ and covariance matrix $P_i^{n}(k-1)$ of the ith unmanned system under the nth model at time k-1 are combined with the model probability value $\mu_n(k-1)$ of each filter and the Markov probability transition matrix $p_{nm}$ to calculate the mixed state estimate $\hat{X}_i^{0m}(k-1)$ and the mixed covariance $P_i^{0m}(k-1)$ of the ith unmanned system under the mth model at time k-1; the cyclic calculation then takes the mixed state estimate and the mixed covariance as initial states. The specific implementation steps are as follows:
s21, calculating the mixing probability from the model n to the model m as follows:
$$\mu_{n|m}(k-1)=\frac{p_{nm}\,\mu_n(k-1)}{\bar{c}_m}$$

where $\bar{c}_m$ is the predicted probability of model m, calculated as:

$$\bar{c}_m=\sum_{n=1}^{r}p_{nm}\,\mu_n(k-1)$$

where $p_{nm}$ is the transition probability from model n to model m, and $\mu_n(k)$ is the probability of model n at time k;
s22, calculating a model m mixed state estimated value as follows:
$$\hat{X}_i^{0m}(k-1)=\sum_{n=1}^{r}\mu_{n|m}(k-1)\,\hat{X}_i^{n}(k-1)$$
s23, calculating a model m mixed covariance value:
$$P_i^{0m}(k-1)=\sum_{n=1}^{r}\mu_{n|m}(k-1)\Big[P_i^{n}(k-1)+\big(\hat{X}_i^{n}(k-1)-\hat{X}_i^{0m}(k-1)\big)\big(\hat{X}_i^{n}(k-1)-\hat{X}_i^{0m}(k-1)\big)^{T}\Big]$$

where r is the total number of models and T denotes the matrix transpose.
Further, in step S3, after the time update, each unmanned system performs, by means of first-order Taylor expansion, a filtering update on the relative measurements between systems and the measurements of relative landmarks, and calculates its state estimate, error covariance matrix, innovation and innovation covariance matrix, and the semi-cross-correlation covariance matrices between unmanned systems. The specific implementation steps are as follows:
s31, using the hybrid state estimation value of the unmanned system i
$\hat{X}_i^{0m}(k-1)$ and the hybrid covariance matrix $P_i^{0m}(k-1)$, perform the time update:

$$\hat{X}_i^{m}(k|k-1)=F_m(k)\,\hat{X}_i^{0m}(k-1)$$
$$P_i^{m}(k|k-1)=F_m(k)\,P_i^{0m}(k-1)\,F_m(k)^{T}+G_m(k)\,Q_m\,G_m(k)^{T}$$
$$\sigma_{ij}^{m}(k|k-1)=F_m(k)\,\sigma_{ij}^{m}(k-1)$$

where $\hat{X}_i^{m}(k|k-1)$ and $P_i^{m}(k|k-1)$ are the state prediction value and the state prediction error covariance matrix of the ith unmanned system under the mth model, respectively, and $\sigma_{ij}^{m}(k)$ is the semi-cross-correlation covariance matrix of unmanned system i and unmanned system j under the mth model at time k, which satisfies:

$$P_{ij}^{m}(k)=\sigma_{ij}^{m}(k)\,\sigma_{ji}^{m}(k)^{T}$$

where $P_{ij}^{m}(k)$ is the cross-correlation covariance matrix of unmanned system i and unmanned system j under the mth model at time k;
s32, detecting the position of the landmark L, and calculating an innovation covariance matrix under the m-th model at the k moment through an actual measurement value and a predicted measurement value of the unmanned system i relative to the landmark L, wherein the calculation formula is as follows:
$$\tilde{z}_{iL}^{m}(k)=z_{iL}(k)-h_{iL}^{m}\big(\hat{X}_i^{m}(k|k-1)\big)$$
$$S_{iL}^{m}(k)=H_{iL}^{m}(k)\,P_i^{m}(k|k-1)\,H_{iL}^{m}(k)^{T}+R_m$$

where $H_{iL}^{m}(k)$ is the measurement Jacobian matrix of $h_{iL}^{m}$ evaluated at $\hat{X}_i^{m}(k|k-1)$;
s33, calculating a Kalman filtering gain of the unmanned system i relative to the landmark L under the mth model at the k moment:
$$K_{iL}^{m}(k)=P_i^{m}(k|k-1)\,H_{iL}^{m}(k)^{T}\,S_{iL}^{m}(k)^{-1}$$
s34, calculating a filter estimated value and a filter covariance matrix of the unmanned system i relative to the landmark L under the mth model at the k moment:
$$\hat{X}_i^{m}(k)=\hat{X}_i^{m}(k|k-1)+K_{iL}^{m}(k)\,\tilde{z}_{iL}^{m}(k)$$
$$P_i^{m}(k)=\big(I-K_{iL}^{m}(k)\,H_{iL}^{m}(k)\big)\,P_i^{m}(k|k-1)$$
$$\sigma_{ij}^{m}(k)=\big(I-K_{iL}^{m}(k)\,H_{iL}^{m}(k)\big)\,\sigma_{ij}^{m}(k|k-1)$$

where I denotes the identity matrix;
s35, detecting the position of the unmanned system j, and calculating an innovation covariance matrix under the m model at the k moment through an actual measurement value and a predicted measurement value of the unmanned system i relative to the unmanned system j, wherein the calculation formula is as follows:
$$\tilde{z}_{ij}^{m}(k)=z_{ij}(k)-h_m\big(\hat{X}_i^{m}(k|k-1),\,\hat{X}_j^{m}(k|k-1)\big)$$
$$S_{ij}^{m}(k)=\tilde{H}_{ij}^{m}(k)\,P_a^{m}(k|k-1)\,\tilde{H}_{ij}^{m}(k)^{T}+R_m$$

where $\tilde{H}_{ij}^{m}(k)=\big[H_i^{m}(k)\;\;H_j^{m}(k)\big]$ is the augmented Jacobian matrix, and $H_i^{m}(k)$ and $H_j^{m}(k)$ are the measurement Jacobian matrices of the measurement function $h_m(X_m)$ evaluated at $\hat{X}_i^{m}(k|k-1)$ and $\hat{X}_j^{m}(k|k-1)$, respectively;
s36, calculating the augmentation state estimation of the unmanned system i and the unmanned system j under the mth model at the k moment:
$$\hat{X}_a^{m}(k|k-1)=\begin{bmatrix}\hat{X}_i^{m}(k|k-1)\\ \hat{X}_j^{m}(k|k-1)\end{bmatrix}$$
s37, calculating an augmented estimation error covariance matrix of the unmanned system i and the unmanned system j under the mth model at the k moment:
$$P_a^{m}(k|k-1)=\begin{bmatrix}P_i^{m}(k|k-1)&P_{ij}^{m}(k|k-1)\\ P_{ij}^{m}(k|k-1)^{T}&P_j^{m}(k|k-1)\end{bmatrix}$$

where $P_{ij}^{m}(k|k-1)=\sigma_{ij}^{m}(k|k-1)\,\sigma_{ji}^{m}(k|k-1)^{T}$;
S38, calculating a filter estimated value and a filter covariance matrix of the unmanned system i relative to the unmanned system j under the mth model at the k moment:
$$K_a^{m}(k)=P_a^{m}(k|k-1)\,\tilde{H}_{ij}^{m}(k)^{T}\,S_{ij}^{m}(k)^{-1}$$
$$\hat{X}_a^{m}(k)=\hat{X}_a^{m}(k|k-1)+K_a^{m}(k)\,\tilde{z}_{ij}^{m}(k)$$
$$P_a^{m}(k)=\big(I-K_a^{m}(k)\,\tilde{H}_{ij}^{m}(k)\big)\,P_a^{m}(k|k-1)$$
$$\begin{bmatrix}\sigma_{it}^{m}(k)\\ \sigma_{jt}^{m}(k)\end{bmatrix}=\big(I-K_a^{m}(k)\,\tilde{H}_{ij}^{m}(k)\big)\begin{bmatrix}\sigma_{it}^{m}(k|k-1)\\ \sigma_{jt}^{m}(k|k-1)\end{bmatrix}$$

where t denotes each of the remaining unmanned systems other than unmanned system i and unmanned system j;
further, in step S4, the probability that each unmanned system corresponds to a different motion model is updated, and the specific implementation steps are as follows:
s41, updating the model probability at the moment k through a likelihood function, wherein the likelihood function of the model m is as follows:
$$\Lambda_i^{m}(k)=\frac{1}{\sqrt{(2\pi)^{x}\big|S_i^{m}(k)\big|}}\exp\!\Big(-\tfrac{1}{2}\,\tilde{z}_i^{m}(k)^{T}\,S_i^{m}(k)^{-1}\,\tilde{z}_i^{m}(k)\Big)$$

where $\tilde{z}_i^{m}(k)$ and $S_i^{m}(k)$ are the innovation and the innovation covariance matrix of unmanned system i under the mth model at time k, respectively, and x is the dimension of the state variable;
s42, the probability of the model m is:
$$\mu_m(k)=\frac{1}{c}\,\Lambda_i^{m}(k)\,\bar{c}_m$$

where c is the normalization constant, $c=\sum_{m=1}^{r}\Lambda_i^{m}(k)\,\bar{c}_m$.
further, in step S5, each unmanned system performs weighted fusion on the estimation results corresponding to different models, so as to obtain a fusion estimation result of positioning of each unmanned system, and the specific implementation steps are as follows:
s51, based on the model probability value corresponding to each filter, obtaining a total state estimated value by weighting calculation on the estimated value of each filter of the unmanned system i at the moment:
$$\hat{X}_i(k)=\sum_{m=1}^{r}\mu_m(k)\,\hat{X}_i^{m}(k)$$
s52, calculating an overall covariance estimation value of the unmanned system i:
$$P_i(k)=\sum_{m=1}^{r}\mu_m(k)\Big[P_i^{m}(k)+\big(\hat{X}_i^{m}(k)-\hat{X}_i(k)\big)\big(\hat{X}_i^{m}(k)-\hat{X}_i(k)\big)^{T}\Big]$$
compared with the prior art, the invention has the following remarkable effects:
1. The invention provides a multi-unmanned system distributed cooperative positioning method that accounts for switching among different motion models: the motion state of each unmanned system is modeled with a multi-model method, measurement information relative to landmarks and to other unmanned systems undergoes a distributed filtering update by means of first-order Taylor expansion, and the state results under the individual models are fused with the interactive multi-model method, yielding the optimal state estimate of each unmanned system;
2. The invention adopts a distributed scheme to provide reliable positioning information for multi-unmanned system cluster operations. It enables unmanned systems equipped with high-precision positioning devices to assist in positioning those equipped with low-precision devices, features low cost, strong extensibility, and high estimation accuracy, can improve the positioning accuracy of multi-unmanned systems in environments where radio signals are disturbed (underground, indoor, underwater), and safeguards the cooperative operation tasks of the multi-unmanned system.
Drawings
FIG. 1 is a schematic flow chart of an algorithm of the present invention;
FIG. 2 is a diagram of simulation results of the present invention;
fig. 3 is a schematic diagram of a practical application scenario of the collaborative operation in the underwater multi-unmanned system.
Detailed Description
The invention is described in further detail below with reference to the drawings and the detailed description.
As shown in fig. 1, the distributed co-location method of the multi-unmanned system based on the interactive multi-model specifically includes the following steps:
step 1, unmanned system modeling
According to the motion complexity and mobility of the unmanned system, a mathematical model of a possible motion form of the unmanned system is analyzed by utilizing a multi-model strategy, a model set is constructed, and a nonlinear measurement equation of the multi-unmanned system is established; the specific method comprises the following steps:
step 11, constructing a model set of a motion equation of the multi-unmanned system:
$$X_i^{m}(k)=F_m(k)\,X_i^{m}(k-1)+G_m(k)\,w_m(k)$$

where $X_i^{m}(k)$ is the system state variable of the ith unmanned system under the mth model at time k, $F_m(k)$ is the state transition matrix under the mth model at time k, $G_m(k)$ is the system noise matrix of model m, and $w_m(k)$ is process noise with zero mean and covariance matrix $Q_m$.
Step 12, modeling a nonlinear measurement equation:
$$Z_i^{m}(k)=h_i^{m}\big(X_i^{m}(k)\big)+v_m(k)$$

where $Z_i^{m}(k)$ is the measurement variable of the ith unmanned system under the mth model at time k, $h_i^{m}(\cdot)$ is the observation function of the ith unmanned system under the mth model at time k, and $v_m(k)$ is measurement noise with zero mean and covariance matrix $R_m$.
Step 2, model fusion input
The interactive input is obtained from the state estimates of each unmanned system at the previous time under the different motion models. The state estimate $\hat{X}_i^{n}(k-1)$ and covariance matrix $P_i^{n}(k-1)$ of the ith unmanned system under model n at time k-1 are combined with the model probability value $\mu_n(k-1)$ of each filter and the Markov probability transition matrix $p_{nm}$ to calculate the mixed state estimate $\hat{X}_i^{0m}(k-1)$ and the mixed covariance $P_i^{0m}(k-1)$ of the ith unmanned system under the mth model at time k-1, and the cyclic calculation takes the mixed state estimate and the mixed covariance as initial states. The specific calculation is as follows:
step 21, calculating the mixing probability of the model n to the model m as follows:
$$\mu_{n|m}(k-1)=\frac{p_{nm}\,\mu_n(k-1)}{\bar{c}_m}$$

where $\bar{c}_m$ is the predicted probability (normalization constant) of model m, calculated as:

$$\bar{c}_m=\sum_{n=1}^{r}p_{nm}\,\mu_n(k-1)$$

where r is the total number of models, $p_{nm}$ is the transition probability from model n to model m, and $\mu_n(k)$ is the probability of model n at time k;
step 22, calculating the model m mixed state estimation value as follows:
$$\hat{X}_i^{0m}(k-1)=\sum_{n=1}^{r}\mu_{n|m}(k-1)\,\hat{X}_i^{n}(k-1)$$
step 23, calculating a model m hybrid covariance value as follows:
$$P_i^{0m}(k-1)=\sum_{n=1}^{r}\mu_{n|m}(k-1)\Big[P_i^{n}(k-1)+\big(\hat{X}_i^{n}(k-1)-\hat{X}_i^{0m}(k-1)\big)\big(\hat{X}_i^{n}(k-1)-\hat{X}_i^{0m}(k-1)\big)^{T}\Big]$$

where T denotes the matrix transpose.
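The mixing calculation of steps 21 through 23 can be sketched in code. This is an illustrative sketch only, not part of the patent; the function name, the two-model test setup, and the NumPy implementation are assumptions.

```python
import numpy as np

def imm_mix(x_hats, P_hats, mu, Pi):
    """IMM mixing: combine per-model estimates into mixed initial conditions.

    x_hats: list of r state estimates, each shape (n,)
    P_hats: list of r covariance matrices, each shape (n, n)
    mu:     model probabilities at time k-1, shape (r,)
    Pi:     Markov transition matrix, Pi[n, m] = p(model m | model n)
    Returns the mixed states, mixed covariances, and predicted model
    probabilities c_bar[m] = sum_n Pi[n, m] * mu[n].
    """
    r = len(x_hats)
    c_bar = Pi.T @ mu                       # predicted model probabilities
    mu_mix = Pi * mu[:, None] / c_bar       # mu_mix[n, m] = p(n | m)
    x0, P0 = [], []
    for m in range(r):
        xm = sum(mu_mix[n, m] * x_hats[n] for n in range(r))
        # covariance of the mixture: within-model plus spread-of-means term
        Pm = sum(mu_mix[n, m] * (P_hats[n]
                 + np.outer(x_hats[n] - xm, x_hats[n] - xm))
                 for n in range(r))
        x0.append(xm)
        P0.append(Pm)
    return x0, P0, c_bar
```

When all per-model estimates coincide, the mixed estimates equal them and the spread term vanishes, which is a quick sanity check on the implementation.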
Step 3, model-conditioned filtering
After the time update, each unmanned system adopts a distributed structure and, by means of first-order Taylor expansion, performs a filtering update on the relative measurements between systems and the measurements of relative landmarks, then calculates its state estimate, error covariance matrix, innovation and innovation covariance matrix, and the semi-cross-correlation covariance matrices between the unmanned systems. The specific method is as follows:
step 31, using the hybrid state estimation value of the unmanned system i
$\hat{X}_i^{0m}(k-1)$ and the hybrid covariance matrix $P_i^{0m}(k-1)$, perform the time update:

$$\hat{X}_i^{m}(k|k-1)=F_m(k)\,\hat{X}_i^{0m}(k-1)$$
$$P_i^{m}(k|k-1)=F_m(k)\,P_i^{0m}(k-1)\,F_m(k)^{T}+G_m(k)\,Q_m\,G_m(k)^{T}$$
$$\sigma_{ij}^{m}(k|k-1)=F_m(k)\,\sigma_{ij}^{m}(k-1)$$

where $\sigma_{ij}^{m}(k)$ is the semi-cross-correlation covariance matrix of unmanned system i and unmanned system j under the mth model at time k, which satisfies:

$$P_{ij}^{m}(k)=\sigma_{ij}^{m}(k)\,\sigma_{ji}^{m}(k)^{T}$$

where $P_{ij}^{m}(k)$ is the cross-correlation covariance matrix of unmanned system i and unmanned system j under the mth model at time k.
Step 32, detecting the position of the landmark L, and calculating an innovation and innovation covariance matrix under the k moment m model by the unmanned system i relative to the actual measurement value and the predicted measurement value of the landmark L, wherein the calculation formula is as follows:
$$\tilde{z}_{iL}^{m}(k)=z_{iL}(k)-h_{iL}^{m}\big(\hat{X}_i^{m}(k|k-1)\big)$$
$$S_{iL}^{m}(k)=H_{iL}^{m}(k)\,P_i^{m}(k|k-1)\,H_{iL}^{m}(k)^{T}+R_m$$

where $H_{iL}^{m}(k)$ is the measurement Jacobian matrix of $h_{iL}^{m}$ evaluated at $\hat{X}_i^{m}(k|k-1)$.
Step 33, calculating a Kalman filtering gain of the unmanned system i relative to the landmark L under the mth model at the k moment:
$$K_{iL}^{m}(k)=P_i^{m}(k|k-1)\,H_{iL}^{m}(k)^{T}\,S_{iL}^{m}(k)^{-1}$$
step 34, calculating a filter estimated value and a filter covariance matrix of the unmanned system i relative to the landmark L under the mth model at the k moment:
$$\hat{X}_i^{m}(k)=\hat{X}_i^{m}(k|k-1)+K_{iL}^{m}(k)\,\tilde{z}_{iL}^{m}(k)$$
$$P_i^{m}(k)=\big(I-K_{iL}^{m}(k)\,H_{iL}^{m}(k)\big)\,P_i^{m}(k|k-1)$$
$$\sigma_{ij}^{m}(k)=\big(I-K_{iL}^{m}(k)\,H_{iL}^{m}(k)\big)\,\sigma_{ij}^{m}(k|k-1)$$

where I denotes the identity matrix.
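Steps 32 through 34 form an EKF-style update against a landmark. The sketch below assumes a generic measurement function h with precomputed Jacobian H; names and the linear test measurement are illustrative, not from the patent.

```python
import numpy as np

def landmark_update(x_pred, P_pred, sigma_ij, z, h, H, R):
    """EKF measurement update against a landmark (first-order Taylor).

    h: measurement function; H: its Jacobian evaluated at x_pred.
    The semi cross-correlation sigma_ij is deflated by the same (I - K H)
    factor that deflates the local covariance.
    """
    innov = z - h(x_pred)                     # innovation
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_upd = x_pred + K @ innov
    IKH = np.eye(len(x_pred)) - K @ H
    return x_upd, IKH @ P_pred, IKH @ sigma_ij
```

A useful check is that the updated covariance shrinks along the measured direction, as the test below asserts.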
Step 35, detecting the position of the unmanned system j, and calculating an innovation covariance matrix under the m-th model at the k moment by using the actual measurement value and the predicted measurement value of the unmanned system i relative to the unmanned system j, wherein the calculation formula is as follows:
$$\tilde{z}_{ij}^{m}(k)=z_{ij}(k)-h_m\big(\hat{X}_i^{m}(k|k-1),\,\hat{X}_j^{m}(k|k-1)\big)$$
$$S_{ij}^{m}(k)=\tilde{H}_{ij}^{m}(k)\,P_a^{m}(k|k-1)\,\tilde{H}_{ij}^{m}(k)^{T}+R_m$$

where $\tilde{H}_{ij}^{m}(k)=\big[H_i^{m}(k)\;\;H_j^{m}(k)\big]$ is the augmented Jacobian matrix, and $H_i^{m}(k)$ and $H_j^{m}(k)$ are the measurement Jacobian matrices of the measurement function $h_m(X_m)$ evaluated at $\hat{X}_i^{m}(k|k-1)$ and $\hat{X}_j^{m}(k|k-1)$, respectively;
step 36, calculating the augmented state estimation of the unmanned systems i and j under the mth model at the k moment:
$$\hat{X}_a^{m}(k|k-1)=\begin{bmatrix}\hat{X}_i^{m}(k|k-1)\\ \hat{X}_j^{m}(k|k-1)\end{bmatrix}$$
step 37, calculating an augmented estimation error covariance matrix of the unmanned systems i and j under the mth model at the k moment:
$$P_a^{m}(k|k-1)=\begin{bmatrix}P_i^{m}(k|k-1)&P_{ij}^{m}(k|k-1)\\ P_{ij}^{m}(k|k-1)^{T}&P_j^{m}(k|k-1)\end{bmatrix}$$

where $P_{ij}^{m}(k|k-1)=\sigma_{ij}^{m}(k|k-1)\,\sigma_{ji}^{m}(k|k-1)^{T}$.
Step 38, calculating a filter estimated value and a filter covariance matrix of the unmanned system i relative to the unmanned system j under the mth model at the k moment:
$$K_a^{m}(k)=P_a^{m}(k|k-1)\,\tilde{H}_{ij}^{m}(k)^{T}\,S_{ij}^{m}(k)^{-1}$$
$$\hat{X}_a^{m}(k)=\hat{X}_a^{m}(k|k-1)+K_a^{m}(k)\,\tilde{z}_{ij}^{m}(k)$$
$$P_a^{m}(k)=\big(I-K_a^{m}(k)\,\tilde{H}_{ij}^{m}(k)\big)\,P_a^{m}(k|k-1)$$
$$\begin{bmatrix}\sigma_{it}^{m}(k)\\ \sigma_{jt}^{m}(k)\end{bmatrix}=\big(I-K_a^{m}(k)\,\tilde{H}_{ij}^{m}(k)\big)\begin{bmatrix}\sigma_{it}^{m}(k|k-1)\\ \sigma_{jt}^{m}(k|k-1)\end{bmatrix}$$

where t denotes each of the remaining unmanned systems other than unmanned system i and unmanned system j.
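Steps 35 through 38 can be read as an update of the augmented pair (i, j), where the joint covariance is assembled from the two local covariances and the semi-cross-correlation factors. The sketch below is one illustrative reading of that distributed update, not the patent's exact implementation; all names are assumptions.

```python
import numpy as np

def relative_update(x_i, P_i, x_j, P_j, sig_ij, sig_ji, z, h, H_i, H_j, R):
    """Augmented EKF update for a relative measurement between systems i and j.

    The joint covariance uses the factorization P_ij = sig_ij @ sig_ji.T for
    the off-diagonal block; H_i, H_j are the Jacobians of h w.r.t. each state.
    """
    n = len(x_i)
    x_a = np.concatenate([x_i, x_j])                  # augmented state
    P_ij = sig_ij @ sig_ji.T
    P_a = np.block([[P_i, P_ij], [P_ij.T, P_j]])      # augmented covariance
    H = np.hstack([H_i, H_j])                         # augmented Jacobian
    S = H @ P_a @ H.T + R
    K = P_a @ H.T @ np.linalg.inv(S)
    x_a = x_a + K @ (z - h(x_i, x_j))
    P_a = (np.eye(2 * n) - K @ H) @ P_a
    return x_a[:n], P_a[:n, :n], x_a[n:], P_a[n:, n:]
```

In a 1-D example with a relative-position measurement, both vehicles' covariances shrink and their estimates move toward consistency with the measured separation.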
step 4, updating the model probability
The probability of each unmanned system corresponding to different motion models is updated, and the specific method comprises the following steps:
step 41, updating the model probability at the time k through a likelihood function, wherein the likelihood function of the model m is:
$$\Lambda_i^{m}(k)=\frac{1}{\sqrt{(2\pi)^{x}\big|S_i^{m}(k)\big|}}\exp\!\Big(-\tfrac{1}{2}\,\tilde{z}_i^{m}(k)^{T}\,S_i^{m}(k)^{-1}\,\tilde{z}_i^{m}(k)\Big)$$

where $\tilde{z}_i^{m}(k)$ and $S_i^{m}(k)$ are the innovation and the innovation covariance matrix of unmanned system i under the mth model at time k, respectively, and x is the dimension of the state variable;
in step 42, the probability of model m is:
$$\mu_m(k)=\frac{1}{c}\,\Lambda_i^{m}(k)\,\bar{c}_m$$

where c is the normalization constant, $c=\sum_{m=1}^{r}\Lambda_i^{m}(k)\,\bar{c}_m$.
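Steps 41 and 42 reduce to weighting Gaussian innovation likelihoods by the predicted model probabilities and renormalizing. A sketch (function and variable names assumed, not from the patent; the likelihood dimension here is taken from the innovation vector):

```python
import numpy as np

def update_model_probs(innovs, Ss, c_bar):
    """Update model probabilities from Gaussian likelihoods of the innovations.

    innovs: list of r innovation vectors
    Ss:     list of r innovation covariance matrices
    c_bar:  predicted model probabilities from the mixing step
    """
    r = len(innovs)
    lam = np.empty(r)
    for m in range(r):
        d = innovs[m].shape[0]
        # Gaussian likelihood N(innov; 0, S)
        lam[m] = np.exp(-0.5 * innovs[m] @ np.linalg.solve(Ss[m], innovs[m])) \
                 / np.sqrt((2 * np.pi) ** d * np.linalg.det(Ss[m]))
    mu = lam * c_bar
    return mu / mu.sum()      # normalize so the probabilities sum to 1
```

A model whose filter produces a small innovation relative to its innovation covariance receives a larger posterior probability, as the test asserts.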
step 5, model fusion output
Each unmanned system carries out weighted fusion on the estimation results corresponding to different models to obtain a fusion estimation result of each unmanned system positioning, and the specific method comprises the following steps:
step 51, based on the model probability value corresponding to each filter, obtaining a total state estimated value for the estimated value of each filter of the unmanned system i at the moment through weighted calculation:
$$\hat{X}_i(k)=\sum_{m=1}^{r}\mu_m(k)\,\hat{X}_i^{m}(k)$$
step 52, calculating the total covariance estimate of the unmanned system i:
$$P_i(k)=\sum_{m=1}^{r}\mu_m(k)\Big[P_i^{m}(k)+\big(\hat{X}_i^{m}(k)-\hat{X}_i(k)\big)\big(\hat{X}_i^{m}(k)-\hat{X}_i(k)\big)^{T}\Big]$$
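Steps 51 and 52 are the standard IMM moment-matching output, which can be sketched as follows (names are illustrative assumptions):

```python
import numpy as np

def imm_fuse(x_hats, P_hats, mu):
    """Weighted fusion of per-model estimates into the final IMM output."""
    r = len(mu)
    x = sum(mu[m] * x_hats[m] for m in range(r))
    # total covariance: within-model covariance plus spread-of-means term
    P = sum(mu[m] * (P_hats[m] + np.outer(x_hats[m] - x, x_hats[m] - x))
            for m in range(r))
    return x, P
```

Note that the fused covariance exceeds the individual covariances whenever the per-model estimates disagree, reflecting the extra model uncertainty.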
the effectiveness of the present invention is further verified by specific examples as follows:
in order to verify the effect of the method provided by the invention, four unmanned systems and three motion models are selected to construct a multi-unmanned system co-location system, namely r=3.
And selecting a second-order constant speed model and a second-order coordinated turning model to design a process equation model set:
With the state vector $X=\begin{bmatrix}x&\dot{x}&y&\dot{y}\end{bmatrix}^{T}$, the transition matrices take the standard forms:

$$F_1=\begin{bmatrix}1&\Delta t&0&0\\0&1&0&0\\0&0&1&\Delta t\\0&0&0&1\end{bmatrix}$$

where $\Delta t$ is the sampling time interval and $F_1$ is the second-order constant velocity model;

$$F_2=\begin{bmatrix}1&\frac{\sin\omega\Delta t}{\omega}&0&-\frac{1-\cos\omega\Delta t}{\omega}\\0&\cos\omega\Delta t&0&-\sin\omega\Delta t\\0&\frac{1-\cos\omega\Delta t}{\omega}&1&\frac{\sin\omega\Delta t}{\omega}\\0&\sin\omega\Delta t&0&\cos\omega\Delta t\end{bmatrix}$$

where $\omega$ is the turn rate and $F_2$ is the second-order coordinated turning model;
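For reference, the standard second-order constant-velocity and coordinated-turn transition matrices can be built as below; the state ordering [x, vx, y, vy] and the function names are assumptions for illustration, not taken from the patent.

```python
import numpy as np

def f_cv(dt):
    """Second-order constant-velocity transition matrix, state [x, vx, y, vy]."""
    return np.array([[1.0, dt, 0.0, 0.0],
                     [0.0, 1.0, 0.0, 0.0],
                     [0.0, 0.0, 1.0, dt],
                     [0.0, 0.0, 0.0, 1.0]])

def f_ct(dt, w):
    """Second-order coordinated-turn transition matrix with turn rate w (rad/s)."""
    s, c = np.sin(w * dt), np.cos(w * dt)
    return np.array([[1.0, s / w, 0.0, -(1.0 - c) / w],
                     [0.0, c, 0.0, -s],
                     [0.0, (1.0 - c) / w, 1.0, s / w],
                     [0.0, s, 0.0, c]])
```

As the turn rate tends to zero, the coordinated-turn matrix converges to the constant-velocity matrix, which makes a convenient consistency check.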
modeling a nonlinear metrology equation using distance and orientation:
$$Z_i(k)=\begin{bmatrix}\sqrt{\big(x_i(k)-x_s\big)^{2}+\big(y_i(k)-y_s\big)^{2}}\\ \arctan\dfrac{y_s-y_i(k)}{x_s-x_i(k)}\end{bmatrix}+v(k)$$

where $\big(x_i(k),\,y_i(k)\big)$ is the position of unmanned system i at time k and $\big(x_s,\,y_s\big)$ is the position of a landmark, radar, or other unmanned system;
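The distance-and-bearing measurement and its Jacobian with respect to the position of system i can be sketched as follows; the function names and the 2-D position-only Jacobian are illustrative assumptions, not from the patent.

```python
import numpy as np

def range_bearing(p_i, p_ref):
    """Range and bearing from unmanned system i to a landmark or other system."""
    dx, dy = p_ref[0] - p_i[0], p_ref[1] - p_i[1]
    return np.array([np.hypot(dx, dy), np.arctan2(dy, dx)])

def range_bearing_jacobian(p_i, p_ref):
    """Jacobian of the measurement w.r.t. the position [x_i, y_i] of system i."""
    dx, dy = p_ref[0] - p_i[0], p_ref[1] - p_i[1]
    r2 = dx * dx + dy * dy
    r = np.sqrt(r2)
    # row 1: range derivatives; row 2: bearing derivatives
    return np.array([[-dx / r, -dy / r],
                     [dy / r2, -dx / r2]])
```

This is the linearization used by a first-order Taylor (EKF) update of such a measurement.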
in the simulation, the embodiment sets the model initialization probability of each unmanned system to μ 0 =[0.3,0.3,0.4] T The initial state transition matrix is
Figure BDA0004155192020000121
The course of motion of each unmanned system is shown in table 1:
table 1 four unmanned system maneuver state tables
Figure BDA0004155192020000122
Note that: CV is uniform linear motion; CT_ (+2) is a left-turn motion with a turning rate of 2 DEG/s, similarly CT_ (-2) is a right-turn motion with a turning rate of 2 DEG/s, and CT_ (+3), CT_ (+4), CT_ (+5) and CT_ (-3), CT_ (-4), CT_ (-3.5) are analogically.
Fig. 2 shows the simulation results of the proposed method, where U1t denotes the real track of unmanned system 1 and U1e its filtered track, and similarly for U2t, U3t, U4t and U2e, U3e, U4e. The results show that the invention achieves good positioning accuracy for a multi-unmanned cooperative positioning system with multiple motion modes.
As shown in fig. 3, in the practical application scenario of this embodiment, radio cannot be used underwater because electromagnetic signals attenuate rapidly in water; instead, one or more unmanned submarines carry high-precision positioning devices or surface at regular intervals for state correction, thereby improving the positioning accuracy of the cooperative system.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (6)

1. A multi-unmanned system distributed cooperative positioning method based on an interactive multi-model is characterized by comprising the following steps:
s1, analyzing a motion form mathematical model of an unmanned system by utilizing a multi-model strategy, constructing a model set, and simultaneously establishing a nonlinear measurement equation of the multi-unmanned system;
s2, obtaining interactive input by using state estimation values of each unmanned system at the last moment under different motion models;
S3, after each unmanned system performs its time update, the filtering update is realized in a distributed structure by using the relative measurements between unmanned systems and the measurement information of the relative landmarks;
s4, updating the probability of each unmanned system corresponding to different motion models through likelihood functions;
and S5, carrying out weighted fusion on the estimation results of the unmanned systems corresponding to different models to obtain a fusion estimation result of the positioning of each unmanned system.
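A minimal, self-contained sketch of the five-step cycle S1–S5, using a scalar toy system with two linear models and a linear measurement in place of the patent's nonlinear, distributed multi-vehicle setting (all numerical values here are illustrative assumptions):

```python
import numpy as np

# S1: model set (scalar dynamics for brevity) and a linear measurement z = x + v
F = [1.0, 0.9]                   # per-model state transition "matrices"
Q, R = 0.01, 0.04                # process / measurement noise variances
P_tr = np.array([[0.95, 0.05],   # Markov model transition matrix p_nm
                 [0.05, 0.95]])

x = np.array([0.0, 0.0])         # per-model state estimates
P = np.array([1.0, 1.0])         # per-model estimation variances
mu = np.array([0.5, 0.5])        # model probabilities

def imm_step(z, x, P, mu):
    # S2: interaction -- mixing probabilities and mixed initial conditions
    c_bar = P_tr.T @ mu                        # predicted model probabilities
    w = P_tr * mu[:, None] / c_bar[None, :]    # mixing probabilities mu_{n|m}
    x0 = w.T @ x
    P0 = np.array([np.sum(w[:, m] * (P + (x - x0[m]) ** 2)) for m in range(2)])
    # S3: per-model time + measurement update (scalar Kalman filter)
    lik = np.empty(2)
    x, P = x.copy(), P.copy()
    for m in range(2):
        xp = F[m] * x0[m]
        Pp = F[m] ** 2 * P0[m] + Q
        S = Pp + R                              # innovation variance
        K = Pp / S                              # Kalman gain
        x[m] = xp + K * (z - xp)
        P[m] = (1 - K) * Pp
        lik[m] = np.exp(-0.5 * (z - xp) ** 2 / S) / np.sqrt(2 * np.pi * S)
    # S4: model probability update via the likelihoods
    mu = lik * c_bar
    mu = mu / mu.sum()
    # S5: weighted fusion of the per-model estimates
    x_fused = mu @ x
    return x, P, mu, x_fused

for z in [0.1, 0.2, 0.15, 0.3]:
    x, P, mu, x_fused = imm_step(z, x, P, mu)
```

The same cycle structure carries over when the scalars become vectors and matrices and the measurement update is replaced by the distributed, linearized update of the claims below.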
2. The interactive multi-model-based multi-unmanned system distributed cooperative positioning method according to claim 1, wherein in step S1, according to the complexity and mobility of the unmanned system motion, a multi-model strategy is utilized to analyze the mathematical model of the unmanned system motion form and construct a model set, and simultaneously a nonlinear measurement equation of the multi-unmanned system is established; the specific implementation steps are as follows:
step 11, constructing a model set of a motion equation of the multi-unmanned system:
X_m^i(k+1) = F_m(k) X_m^i(k) + G_m(k) w_m(k)

wherein X_m^i(k) is the system state variable of the ith unmanned system under the mth model at time k; F_m(k) is the state transition matrix under the mth model at time k; G_m(k) is the system noise matrix of model m; and w_m(k) is process noise with zero mean and covariance matrix Q_m;
step 12, modeling a nonlinear measurement equation:
Z_m^i(k) = h_m^i( X_m^i(k) ) + v_m(k)

wherein Z_m^i(k) is the measurement variable of the ith unmanned system under the mth model at time k; h_m^i(·) is the observation function of the ith unmanned system under the mth model at time k; and v_m(k) is measurement noise with zero mean and covariance matrix R_m.
3. The interactive multi-model based multi-unmanned system distributed co-location method according to claim 2, wherein in step S2, the state estimate x̂_n^i(k-1|k-1) and the covariance matrix P_n^i(k-1|k-1) of the ith unmanned system under the nth model at time k-1 are combined with the model probability value μ_n(k-1) of each filter and the Markov probability transition matrix p_nm to calculate the mixed state estimate x̂_0m^i(k-1|k-1) and the mixed covariance P_0m^i(k-1|k-1) of the ith unmanned system under the mth model at time k-1; the cyclic calculation is then performed with the mixed state estimate and the mixed covariance as initial states; the specific implementation steps are as follows:
S21, the mixing probability from model n to model m is calculated as:

μ_{n|m}(k-1|k-1) = p_nm μ_n(k-1) / c̄_m

wherein c̄_m is the predicted probability of model m, with the calculation formula:

c̄_m = Σ_{n=1}^{r} p_nm μ_n(k-1)

wherein p_nm is the transition probability from model n to model m, and μ_n(k) is the probability of model n at time k;
S22, the mixed state estimate of model m is calculated as:

x̂_0m^i(k-1|k-1) = Σ_{n=1}^{r} μ_{n|m}(k-1|k-1) x̂_n^i(k-1|k-1)
S23, the mixed covariance of model m is calculated as:

P_0m^i(k-1|k-1) = Σ_{n=1}^{r} μ_{n|m}(k-1|k-1) { P_n^i(k-1|k-1) + [x̂_n^i(k-1|k-1) − x̂_0m^i(k-1|k-1)][x̂_n^i(k-1|k-1) − x̂_0m^i(k-1|k-1)]^T }

where r is the total number of models and T denotes the matrix transpose.
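Steps S21–S23 can be sketched as follows; the routine below implements the standard IMM mixing stage for r models with vector states, under the same notation roles (transition matrix p_nm, model probabilities μ_n) described above:

```python
import numpy as np

def imm_mix(x_hats, P_hats, mu, P_tr):
    """S21-S23: mixing probabilities, mixed state estimates, mixed covariances.
    x_hats: (r, n) per-model estimates; P_hats: (r, n, n) covariances;
    mu: (r,) model probabilities; P_tr[n, m]: transition probability p_nm."""
    r = len(mu)
    c_bar = P_tr.T @ mu                        # predicted model probabilities
    w = P_tr * mu[:, None] / c_bar[None, :]    # mu_{n|m}; each column sums to 1
    x0 = w.T @ x_hats                          # mixed state estimates, one per model
    P0 = np.zeros_like(P_hats)
    for m in range(r):
        for n in range(r):
            d = (x_hats[n] - x0[m])[:, None]
            P0[m] += w[n, m] * (P_hats[n] + d @ d.T)
    return c_bar, x0, P0

# illustrative two-model example
c_bar, x0, P0 = imm_mix(np.array([[0.0, 0.0], [1.0, 1.0]]),
                        np.stack([np.eye(2), np.eye(2)]),
                        np.array([0.6, 0.4]),
                        np.array([[0.9, 0.1], [0.2, 0.8]]))
```

The spread-of-means term d @ d.T is what makes the mixed covariance larger than a plain average when the per-model estimates disagree.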
4. The method for distributed co-location of multiple unmanned systems based on interactive multiple models according to claim 2, wherein in step S3, after each unmanned system performs its time update, the relative measurements between unmanned systems and the measurements of the relative landmarks are used, in a distributed structure and by means of a first-order Taylor expansion, to realize the filtering update, and the respective state estimates, error covariance matrices, innovations and innovation covariance matrices, and the semi-cross-correlation covariance matrices between unmanned systems are calculated; the specific implementation steps are as follows:
S31, the time update is performed using the mixed state estimate
Figure QLYQS_15
and the mixed covariance matrix
Figure QLYQS_16
of the unmanned system i:
Figure QLYQS_17
Figure QLYQS_18
Figure QLYQS_19
wherein
Figure QLYQS_20
and
Figure QLYQS_21
are, respectively, the state prediction value and the state prediction error covariance matrix of the ith unmanned system under the mth model;
Figure QLYQS_22
is the semi-cross-correlation covariance matrix of the unmanned system i and the unmanned system j under the mth model at time k, which satisfies:
Figure QLYQS_23
where
Figure QLYQS_24
is the cross-correlation covariance matrix of the unmanned system i and the unmanned system j under the mth model at time k;
s32, detecting the position of the landmark L, and calculating an innovation covariance matrix under the m-th model at the k moment through an actual measurement value and a predicted measurement value of the unmanned system i relative to the landmark L, wherein the calculation formula is as follows:
Figure QLYQS_25
Figure QLYQS_26
wherein
Figure QLYQS_27
is the measurement Jacobian matrix of
Figure QLYQS_28
evaluated at
Figure QLYQS_29
;
s33, calculating a Kalman filtering gain of the unmanned system i relative to the landmark L under the mth model at the k moment:
Figure QLYQS_30
s34, calculating a filter estimated value and a filter covariance matrix of the unmanned system i relative to the landmark L under the mth model at the k moment:
Figure QLYQS_31
Figure QLYQS_32
Figure QLYQS_33
wherein I represents an identity matrix;
s35, detecting the position of the unmanned system j, and calculating an innovation covariance matrix under the m model at the k moment through an actual measurement value and a predicted measurement value of the unmanned system i relative to the unmanned system j, wherein the calculation formula is as follows:
Figure QLYQS_34
Figure QLYQS_35
wherein
Figure QLYQS_36
is the augmented Jacobian matrix, and
Figure QLYQS_37
and
Figure QLYQS_38
are the Jacobian matrices of the measurement function h_m(X_m) evaluated at
Figure QLYQS_39
and
Figure QLYQS_40
, respectively;
s36, calculating the augmentation state estimation of the unmanned system i and the unmanned system j under the mth model at the k moment:
Figure QLYQS_41
s37, calculating an augmented estimation error covariance matrix of the unmanned system i and the unmanned system j under the mth model at the k moment:
Figure QLYQS_42
wherein
Figure QLYQS_43
S38, calculating a filter estimated value and a filter covariance matrix of the unmanned system i relative to the unmanned system j under the mth model at the k moment:
Figure QLYQS_44
Figure QLYQS_45
Figure QLYQS_46
Figure QLYQS_47
where t denotes the remaining unmanned systems other than the unmanned system i and the unmanned system j.
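A simplified sketch of the landmark part of this update (S32–S34): an EKF update of a single vehicle against a known landmark, using a first-order Taylor (Jacobian) linearization. The range-measurement form is an assumption, and the augmented inter-vehicle states and semi-cross-correlation bookkeeping of S35–S38 are deliberately omitted.

```python
import numpy as np

def ekf_landmark_update(x_pred, P_pred, z, landmark, R):
    """Sketch of S32-S34, landmark part only: EKF update of one vehicle against
    a known landmark with an assumed range measurement."""
    diff = x_pred[[0, 2]] - landmark          # position components of [x, vx, y, vy]
    r_pred = np.linalg.norm(diff)             # predicted measurement h(x_pred)
    H = np.zeros((1, 4))                      # measurement Jacobian (first-order Taylor)
    H[0, 0], H[0, 2] = diff[0] / r_pred, diff[1] / r_pred
    S = H @ P_pred @ H.T + R                  # innovation covariance matrix
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman filtering gain
    x_upd = x_pred + K @ np.atleast_1d(z - r_pred)
    P_upd = (np.eye(4) - K @ H) @ P_pred      # filter covariance matrix
    return x_upd, P_upd, S

x_pred = np.array([1.0, 1.0, 0.5, 0.5])
P_pred = np.eye(4)
landmark = np.array([4.0, 4.5])
z = np.linalg.norm(x_pred[[0, 2]] - landmark)     # noise-free measurement
x_upd, P_upd, S = ekf_landmark_update(x_pred, P_pred, z, landmark, np.array([[0.04]]))
```

With a zero innovation the state is unchanged while the covariance still shrinks, which is the expected EKF behavior.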
5. The distributed co-location method of multiple unmanned systems based on interactive multiple models according to claim 4, wherein in step S4, updating the probability of each unmanned system corresponding to a different motion model is performed by:
S41, the model probabilities at time k are updated through likelihood functions, and the likelihood function of model m is:

Λ_m(k) = (2π)^(−x/2) |S_m^i(k)|^(−1/2) exp( −(1/2) [ν_m^i(k)]^T [S_m^i(k)]^(−1) ν_m^i(k) )

wherein ν_m^i(k) and S_m^i(k) are, respectively, the innovation and the innovation covariance matrix of the unmanned system i under the mth model at time k, and x is the dimension of the state variable;
S42, the probability of the model m is:

μ_m(k) = Λ_m(k) c̄_m / c

wherein c is a normalization constant:

c = Σ_{m=1}^{r} Λ_m(k) c̄_m
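Steps S41–S42 amount to evaluating a Gaussian likelihood of each model's innovation and renormalizing; a minimal sketch (here the dimension in the likelihood is taken from the innovation vector):

```python
import numpy as np

def model_probabilities(innovs, Ss, c_bar):
    """S41-S42: Gaussian likelihoods of the per-model innovations, then the
    normalized model probabilities mu_m(k)."""
    lik = np.empty(len(innovs))
    for m, (nu, S) in enumerate(zip(innovs, Ss)):
        nu, S = np.atleast_1d(nu), np.atleast_2d(S)
        d = nu.shape[0]                        # innovation dimension
        expo = -0.5 * nu @ np.linalg.solve(S, nu)
        lik[m] = np.exp(expo) / np.sqrt((2 * np.pi) ** d * np.linalg.det(S))
    mu = lik * c_bar                           # weight by predicted probabilities
    return mu / mu.sum()                       # normalize (constant c)

# two models with unit innovation covariance: the model with the smaller
# innovation should receive the higher probability
mu = model_probabilities([np.array([0.0]), np.array([1.0])],
                         [np.eye(1), np.eye(1)],
                         np.array([0.5, 0.5]))
```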
6. the multi-unmanned system distributed cooperative positioning method based on the interactive multi-model according to claim 5, wherein in step S5, each unmanned system performs weighted fusion on the estimation results of the corresponding different models to obtain a fusion estimation result of each unmanned system positioning, and the specific implementation steps are as follows:
S51, based on the model probability value corresponding to each filter, the total state estimate is obtained by weighting the estimates of the filters of the unmanned system i at the current moment:

x̂^i(k|k) = Σ_{m=1}^{r} μ_m(k) x̂_m^i(k|k)
S52, the overall covariance estimate of the unmanned system i is calculated as:

P^i(k|k) = Σ_{m=1}^{r} μ_m(k) { P_m^i(k|k) + [x̂_m^i(k|k) − x̂^i(k|k)][x̂_m^i(k|k) − x̂^i(k|k)]^T }
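Steps S51–S52 can be sketched as a probability-weighted fusion that also accounts for the spread of the per-model means:

```python
import numpy as np

def imm_fuse(x_hats, P_hats, mu):
    """S51-S52: probability-weighted fusion of the per-model estimates; the
    fused covariance includes the spread-of-means term."""
    x = mu @ x_hats                            # total state estimate
    P = np.zeros_like(P_hats[0])
    for m in range(len(mu)):
        d = (x_hats[m] - x)[:, None]
        P += mu[m] * (P_hats[m] + d @ d.T)
    return x, P

# two equally probable models whose estimates disagree
x, P = imm_fuse(np.array([[0.0, 0.0], [2.0, 2.0]]),
                np.stack([np.eye(2), np.eye(2)]),
                np.array([0.5, 0.5]))
```

When the per-model estimates disagree, the fused covariance is inflated beyond the average of the per-model covariances, reflecting the extra model uncertainty.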
CN202310331808.7A 2023-03-30 2023-03-30 Multi-unmanned system distributed cooperative positioning method based on interaction multi-model Active CN116383966B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310331808.7A CN116383966B (en) 2023-03-30 2023-03-30 Multi-unmanned system distributed cooperative positioning method based on interaction multi-model


Publications (2)

Publication Number Publication Date
CN116383966A true CN116383966A (en) 2023-07-04
CN116383966B CN116383966B (en) 2023-11-21

Family

ID=86974405


Country Status (1)

Country Link
CN (1) CN116383966B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103697889A (en) * 2013-12-29 2014-04-02 北京航空航天大学 Unmanned aerial vehicle self-navigation and positioning method based on multi-model distributed filtration
CN107193009A (en) * 2017-05-23 2017-09-22 西北工业大学 A kind of many UUV cooperative systems underwater target tracking algorithms of many interaction models of fuzzy self-adaption
CN109655826A (en) * 2018-12-16 2019-04-19 成都汇蓉国科微系统技术有限公司 The low slow Small object track filtering method of one kind and device
WO2021082571A1 (en) * 2019-10-29 2021-05-06 苏宁云计算有限公司 Robot tracking method, device and equipment and computer readable storage medium
CN114035154A (en) * 2021-11-10 2022-02-11 中国人民解放军空军工程大学 Motion parameter assisted single-station radio frequency signal positioning method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG Lin; WANG Nan; ZHU Huayong; SHEN Lincheng: "A Distributed Fusion Estimation Method for Multi-UAV Cooperative Perception", Control and Decision, no. 06 *


Similar Documents

Publication Publication Date Title
CN109459040B (en) Multi-AUV (autonomous Underwater vehicle) cooperative positioning method based on RBF (radial basis function) neural network assisted volume Kalman filtering
CN109521454A (en) A kind of GPS/INS Combinated navigation method based on self study volume Kalman filtering
CN104075715B (en) A kind of underwater navigation localization method of Combining with terrain and environmental characteristic
CN106772524B (en) A kind of agricultural robot integrated navigation information fusion method based on order filtering
CN110779518B (en) Underwater vehicle single beacon positioning method with global convergence
CN109631913A (en) X-ray pulsar navigation localization method and system based on nonlinear prediction strong tracking Unscented kalman filtering
CN104165642B (en) Method for directly correcting and compensating course angle of navigation system
CN109974706A (en) A kind of more AUV collaborative navigation methods of master-slave mode based on double motion models
CN105116431A (en) Inertial navigation platform and Beidou satellite-based high-precision and ultra-tightly coupled navigation method
Petrich et al. On-board wind speed estimation for uavs
CN110779519B (en) Underwater vehicle single beacon positioning method with global convergence
CN105701352A (en) Space motion object locus prediction method
CN107607977A (en) A kind of adaptive UKF Combinated navigation methods based on the sampling of minimum degree of bias simple form
CN113325452A (en) Method for tracking maneuvering target by using three-star passive fusion positioning system
CN111025229B (en) Underwater robot pure orientation target estimation method
Jingsen et al. Integrating extreme learning machine with Kalman filter to bridge GPS outages
CN115933641A (en) AGV path planning method based on model prediction control guidance deep reinforcement learning
Xu et al. Accurate two-step filtering for AUV navigation in large deep-sea environment
CN106863297B (en) A kind of accurate approach method of space rope system robot vision
CN107576932A (en) Cooperative target replaces Kalman&#39;s spatial registration method with what noncooperative target coexisted
Girija et al. Tracking filter and multi-sensor data fusion
Wang et al. Robust filter method for SINS/DVL/USBL tight integrated navigation system
Saadeddin et al. Optimization of intelligent-based approach for low-cost INS/GPS navigation system
CN116383966B (en) Multi-unmanned system distributed cooperative positioning method based on interaction multi-model
CN111854741A (en) GNSS/INS tight combination filter and navigation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant