CN111156987B - Inertial/astronomical integrated navigation method based on residual-compensation multi-rate CKF - Google Patents

Inertial/astronomical integrated navigation method based on residual-compensation multi-rate CKF

Info

Publication number: CN111156987B
Application number: CN201911306973.7A
Other languages: Chinese (zh)
Other versions: CN111156987A
Inventors: 陈熙源, 张雨, 柳笛, 方琳
Assignee (current and original): Southeast University
Legal status: Active
Application filed by Southeast University
Priority to CN201911306973.7A
Publication of CN111156987A
Application granted
Publication of CN111156987B

Classifications

    • G01C 21/02 — Navigation; navigational instruments: by astronomical means
    • G01C 21/165 — Dead reckoning by integrating acceleration or speed (inertial navigation), combined with non-inertial navigation instruments
    • G06N 3/044 — Neural networks: recurrent networks, e.g. Hopfield networks
    • G06N 3/045 — Neural networks: combinations of networks
    • G06N 3/08 — Neural networks: learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Astronomy & Astrophysics (AREA)
  • Navigation (AREA)

Abstract

The invention provides an inertial/astronomical integrated navigation method based on residual-compensation multi-rate CKF. The method first establishes the state and measurement equations of the integrated navigation system and records the system state quantities; the "trailing" (smeared) star map is then deblurred with an SRN-based image-deblurring method; the deblurred attitude information is used as part of the observation of a residual-compensated cubature Kalman filter and is input into the filter together with the recorded system state quantities for filtering estimation; the output delay of the star sensor is corrected through residual compensation, achieving data synchronization; a long short-term memory (LSTM) neural-network estimator is introduced into the residual-compensated multi-rate cubature Kalman filter as a filter sub-module so that the inertial sensor and the star sensor output at the same frequency; finally, based on the compensated data and model, the optimal estimate of the inertial/astronomical integrated navigation attitude is obtained. The invention can effectively improve the autonomy of navigation and positioning, thereby improving navigation accuracy.

Description

Inertial/astronomical integrated navigation method based on residual-compensation multi-rate CKF
Technical Field
The invention relates to the fields of inertial navigation technology and deep learning, and in particular to an inertial/astronomical integrated navigation method based on residual-compensation multi-rate CKF.
Background
The star sensor, the main sensor of a Celestial Navigation System (CNS), has the advantages of small size, high measurement accuracy and measurement errors that do not accumulate over time, but its output frequency is low and its output carries a certain delay. An Inertial Navigation System (INS) works autonomously around the clock and offers high short-term accuracy; over long endurance, however, its measurement errors diverge with time, navigation accuracy degrades significantly, and the user's positioning-accuracy requirements can no longer be met. An INS/CNS integrated navigation system can overcome the shortcomings of either system working alone, improve the accuracy and reliability of the navigation system, and provide real-time attitude, velocity and position information for the carrier.
However, during high-speed missile-borne flight, combining the star sensor with inertial navigation still faces the following problems: 1) because of the swinging of the aircraft and the relative motion of the star sensor, the captured star map exhibits a trailing (smearing) phenomenon, which makes centroid extraction difficult and degrades the attitude-solution accuracy; 2) because the star sensor produces its attitude through optical imaging, centroid extraction, coordinate conversion and other operations, its output is delayed, so the output data of the celestial and inertial navigation subsystems are asynchronous; 3) most INS/CNS integrated navigation systems use Kalman filtering with the star sensor's output period as the filter update period, yet inside the integrated system the output frequency of the inertial navigation is much higher than that of the star sensor, which leads to a long filter update period and reduces the final attitude-estimation accuracy. Improving the performance of the INS/CNS integrated navigation system in a missile-borne flight environment is therefore of great significance.
Disclosure of Invention
To address the output-signal delay and low output frequency of the star sensor and to improve attitude-measurement accuracy in INS/CNS integrated navigation, the invention provides an inertial/astronomical integrated navigation method based on residual-compensated multi-rate cubature Kalman filtering (CKF). Oriented to the navigation and positioning requirements of high-speed missile-borne aircraft, it can be used in applications such as integrated navigation of high-speed missile-borne aircraft in highly dynamic environments, effectively improves the autonomy of navigation and positioning, and thereby improves navigation accuracy. The method combines a traditional navigation method with a deep-learning mechanism and introduces two types of deep-learning networks, used respectively for star-map deblurring before attitude solution and for raising the output frequency of the star sensor. The scheme designed by the invention is suitable for long-endurance, high-accuracy navigation and positioning in complex environments such as high-speed missile-borne aircraft or ballistic launch.
In order to achieve the purpose, the invention provides the following technical scheme:
the multi-rate CKF filtering inertia/astronomical combined navigation method based on residual compensation comprises the following steps:
(1) initializing an INS/CNS integrated navigation system, establishing a state and a measurement equation of the integrated navigation system, and recording a star map and attitude information of a star sensor and the resolving output of an inertial sensor in real time;
(2) Carrying out deblurring processing on the 'trailing' star map by using a method for deblurring an image based on a Scale-recursive Network (SRN); in the SRN, acquiring a series of different-scale prior characteristics from a sequence-based blurred image as the input of a network, and taking a corresponding reference image block as a training label; decomposing the fuzzy image scale by using the corresponding reference image block as a training label, and constructing a coding and decoding network under different scales to realize the deblurring operation of the fuzzy star map;
(3) taking the attitude information after deblurring in the step (2) as a part of observed quantity of volume Kalman filtering based on residual error compensation, taking INS position, speed and attitude in the step (1) and constant drift of an accelerometer and a gyroscope as system state quantity, and jointly inputting the system state quantity into a filter for filtering estimation; output delay of the star sensor is corrected through residual compensation, and data synchronization is achieved;
(4) while the step (3) is carried out, the filter still takes the output period of the star sensor as an updating period to continuously carry out down-sampling work; at the moment, a Long-Short-Term Memory Neural Network estimator (LSTM) is introduced into a multi-rate volume Kalman filter based on residual error compensation as a sub-module of the filter, output data of a star sensor is expanded, the output frequency of an astronomical Navigation System (CNS) is improved, and the same-frequency output of the CNS and an Inertial Navigation System (INS) is realized;
(5) And (5) establishing a system state and a measurement model based on the data of the compensation completed in the step (4) to realize the optimal estimation of the INS/CNS integrated navigation attitude.
Further, the step (1) specifically includes the following steps:
defining the state-space equation

$$\dot{X}(t) = f\big(X(t),\,t\big) + W(t) \tag{1}$$

and the measurement model

$$Z(t) = H(t)\,X(t) + V(t) \tag{2}$$

with the 15-dimensional state vector

$$X(t) = \big[\psi,\ \theta,\ \gamma,\ v_E,\ v_N,\ v_U,\ L,\ \lambda,\ h,\ \varepsilon_x,\ \varepsilon_y,\ \varepsilon_z,\ \nabla_x,\ \nabla_y,\ \nabla_z\big]^{T}$$

where X(t) denotes the state vector of the state space; f(·) denotes the nonlinear state function describing the system, i.e. the nonlinear transition function of the error model in the nonlinear filtering system, with f(X(t), t) = F_n·X(t), where F_n denotes the n-dimensional state-transition matrix; V(t) denotes the measurement noise corresponding to the star sensor's measurement error, modeled as zero-mean Gaussian white noise with covariance $E\big[V(t)V^{T}(\tau)\big] = R\,\delta(t-\tau)$; W(t) denotes the state (process) noise, modeled as zero-mean Gaussian white noise with covariance $E\big[W(t)W^{T}(\tau)\big] = Q\,\delta(t-\tau)$; $(\psi,\theta,\gamma)^{T}$ denote the heading, pitch and roll angles; $(v_E,v_N,v_U)^{T}$ and $(L,\lambda,h)^{T}$ respectively denote the velocity and position information in the East-North-Up navigation frame (g frame); $(\varepsilon_x,\varepsilon_y,\varepsilon_z)$ and $(\nabla_x,\nabla_y,\nabla_z)$ respectively denote the constant drifts of the gyroscopes and accelerometers along the x, y, z axes of the carrier frame; H(t) denotes the measurement matrix of the fused observation, H(t) = [I_{3×3}, 0_{3×12}]; ψ_C, θ_C, γ_C respectively denote the heading, pitch and roll angles output by the star sensor in the star-sensor frame (c frame); in the INS/CNS combined system, X(t) represents the position, velocity and attitude states of the system, i.e. X(t) = [P_INS(t), V_INS(t), A_INS(t)], and Z(t) is the difference between the INS and CNS output attitudes, i.e. Z(t) = [A_INS(t) − A_CNS(t)];
based on the state equation and the measurement equation, the attitude and position equations of the INS/CNS integrated navigation are expressed as:

$$
\begin{cases}
\dot{A}_{INS} = \Omega_{gb}\big(\omega_{gb}^{g} + w_g\big)\\[4pt]
\dot{L} = \dfrac{v_N}{R_M + h}\\[4pt]
\dot{\lambda} = \dfrac{v_E}{(R_N + h)\cos L}\\[4pt]
\dot{h} = v_U
\end{cases} \tag{3}
$$

where $w_g$ denotes the white noise of the gyroscopes; $\Omega_{gb}$ denotes the transfer matrix of the angular rate from the carrier coordinate frame (b frame) to the navigation coordinate frame (n frame); $\omega_{gb}^{g}$ is the projection of $\omega_{gb}$ in the geographic frame; L, λ and h respectively denote the longitude, latitude and height; $R_M$ is the radius of curvature of the meridian; $R_N$ is the principal radius of curvature of the prime vertical.
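For concreteness, a small NumPy sketch of how the 15-dimensional state, the measurement matrix H(t) and the attitude observation Z(t) defined above can be assembled is given below; the state ordering and the helper names are illustrative assumptions, not part of the patent.

```python
# Minimal sketch of the state vector, measurement matrix and observation of Eqs. (1)-(2).
import numpy as np

# assumed ordering: [psi, theta, gamma, vE, vN, vU, L, lam, h, eps_xyz (gyro), nabla_xyz (accel)]
STATE_DIM = 15

def measurement_matrix():
    """H(t) = [I_3x3, 0_3x12]; with the attitude-first ordering it selects the attitude states."""
    return np.hstack([np.eye(3), np.zeros((3, 12))])

def attitude_observation(a_ins, a_cns):
    """Z(t) = A_INS(t) - A_CNS(t): INS-minus-CNS heading, pitch and roll difference."""
    return np.asarray(a_ins, dtype=float) - np.asarray(a_cns, dtype=float)
```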
Further, the step (2) specifically includes the following steps:
setting the SRN to 3 scales, with the size of the (i+1)-th scale half that of the i-th scale; the latent image at each scale is selected as the primary deblurring target, and the image at each scale can be represented as:

$$I^{i},\ h^{i} = \mathrm{Net}_{SR}\big(B^{i},\ I^{i+1\uparrow},\ h^{i+1\uparrow};\ \theta_{SR}\big) \tag{4}$$

where i denotes the current scale index; $B^{i}$ and $I^{i}$ denote the blurred image and the estimated latent image at the i-th scale; $\mathrm{Net}_{SR}(\cdot)$ denotes the SRN network abstraction model, with $\theta_{SR}$ its network parameters; $h^{i}$ is the hidden state, which carries information such as the image structure and the blur kernel from the previous, coarser scale; $(\cdot)^{\uparrow}$ is the operator that resizes the image between adjacent scales;
a convolutional layer is inserted into the hidden layer at the smallest scale to realize the recurrence across consecutive scales; with the convolution kernel size set to 5 × 5, the SRN can be described as:

$$f^{i} = \mathrm{Net}_{E}\big(B^{i},\ I^{i+1\uparrow};\ \theta_{E}\big) \tag{5}$$

$$h^{i},\ g^{i} = \mathrm{ConvLSTM}\big(h^{i+1\uparrow},\ f^{i};\ \theta_{LSTM}\big) \tag{6}$$

$$I^{i} = \mathrm{Net}_{D}\big(g^{i};\ \theta_{D}\big) \tag{7}$$

where $\mathrm{Net}_{E}$ and $\mathrm{Net}_{D}$ denote the encoder and decoder with parameters $\theta_{E}$ and $\theta_{D}$; $\theta_{LSTM}$ denotes the parameters of the inserted convolutional layer; $f^{i}$ and $g^{i}$ denote the features passed from the encoder to the convolutional layer and from the convolutional layer to the decoder, respectively;
each convolutional layer added to the hidden layer has the same structure, the activation function is the ReLU, and all convolution kernel sizes are set to 5; the loss function at each scale can be described as:

$$\mathcal{L} = \sum_{i=1}^{n} \frac{\kappa_i}{N_i}\,\big\|I^{i} - I_{*}^{i}\big\|_{2}^{2} \tag{8}$$

where $I^{i}$ and $I_{*}^{i}$ respectively denote the output of the SRN and the reference output at the i-th scale; $\{\kappa_i\}$ denotes the weight of each scale, set empirically to $\kappa_i = 1$; $N_i$ is the number of elements in the normalized output $I^{i}$;
from the deblurred star map $I^{i}$, the attitude information $A_{CNS}(t)$ can be obtained through centroid extraction and star identification.
Further, the step (3) specifically includes the following steps:
in the multi-rate cubature Kalman filter (MRCKF), the initial state and covariance of the filter are set as $\hat{x}_{0} = E(x_0)$ and $P_{0} = \operatorname{cov}(x_0)$, and the filtering is divided into a time update and a measurement update:
(1) time update:

$$S_{k-1|k-1} = \operatorname{chol}\big(P_{k-1|k-1}\big), \qquad x_{k-1|k-1,i} = S_{k-1|k-1}\,\xi_i + \hat{x}_{k-1|k-1} \tag{9}$$

where chol(·) denotes the Cholesky decomposition; $P_{k-1|k-1}$ denotes the predicted mean-square error at time k−1; $S_{k-1|k-1}$ denotes the square-root factor of the optimal estimate covariance $P_{k-1|k-1}$; $\hat{x}_{k-1|k-1}$ is the optimal state estimate output by the filter at time k−1; $x_{k-1|k-1,i}$ denote the computed cubature points; $\{\xi_i\}$ denote the normalized cubature points, which can be set as $\xi_i = \sqrt{n}\,[1]_i$, $i = 1,\dots,2n$;
(2) computing the estimate of the state $\hat{x}_{k|k-1}$ and the covariance $P_{k|k-1}$:

$$\hat{x}_{k|k-1} = \frac{1}{2n}\sum_{i=1}^{2n} f\big(x_{k-1|k-1,i}\big) \tag{10}$$

$$P_{k|k-1} = \frac{1}{2n}\sum_{i=1}^{2n} f\big(x_{k-1|k-1,i}\big)\,f\big(x_{k-1|k-1,i}\big)^{T} - \hat{x}_{k|k-1}\,\hat{x}_{k|k-1}^{T} + Q_{k-1} \tag{11}$$

where $\hat{x}_{k|k-1}$ is the estimate of the system state propagated from time k−1; $P_{k|k-1}$ is the one-step predicted mean-square error from time k−1 to the next time k; $e_{k|k-1}$ denotes the estimation error between the state quantities; $\omega_i$ denote the weights, usually taken as $\omega_i = \tfrac{1}{2n}$;
(3) measurement update
solving the prediction covariance matrix and computing the cubature points $x_{k|k-1,i}$:

$$P_{k|k-1} = S_{k|k-1}\,S_{k|k-1}^{T}, \qquad x_{k|k-1,i} = S_{k|k-1}\,\xi_i + \hat{x}_{k|k-1} \tag{12}$$

computing the propagated cubature points $z_{k|k-1,i}$ and obtaining the measurement estimate $\hat{z}_{k|k-1}$:

$$z_{k|k-1,i} = h\big(x_{k|k-1,i}\big) \tag{13}$$

$$\hat{z}_{k|k-1} = \frac{1}{2n}\sum_{i=1}^{2n} z_{k|k-1,i} \tag{14}$$

(4) after the measurement estimate is obtained, the associated covariances, the gain and the update can be computed:

$$P_{zz,k|k-1} = \frac{1}{2n}\sum_{i=1}^{2n} z_{k|k-1,i}\,z_{k|k-1,i}^{T} - \hat{z}_{k|k-1}\,\hat{z}_{k|k-1}^{T} + R_k \tag{15}$$

$$P_{xz,k|k-1} = \frac{1}{2n}\sum_{i=1}^{2n} x_{k|k-1,i}\,z_{k|k-1,i}^{T} - \hat{x}_{k|k-1}\,\hat{z}_{k|k-1}^{T} \tag{16}$$

$$K_k = P_{xz,k|k-1}\,P_{zz,k|k-1}^{-1}, \qquad \hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k\big(z_k - \hat{z}_{k|k-1}\big) \tag{17}$$

$$P_{k|k} = P_{k|k-1} - K_k\,P_{zz,k|k-1}\,K_k^{T} \tag{18}$$

where $\hat{z}_{k|k-1}$ is the measurement at time k predicted at time k−1; $P_{zz,k|k-1}$ and $P_{xz,k|k-1}$ denote the autocorrelation and cross-correlation covariance matrices;
to realize the residual compensation of the multi-rate Kalman filter, the time update and the measurement update of the Kalman filter have to be handled separately; during the sampling interval of the CNS data a time update is performed while an estimate of the residual is added; the initial residual $\varepsilon_k$ is first determined from the filtering estimate at the previous instant, and the relation between the residual at the next instant and the system error is then established.
Following the self-propagation of the system error, once the initial residual is determined the residual propagates over time during each sampling interval of the CNS data, and after the residual has propagated over one sampling period it is updated and corrected by the CNS data at the next instant; in the residual estimation, assuming that the process noise and the measurement noise are negligible, the residual can be expressed as:

$$\varepsilon_k = z_k - H_k\,\hat{x}_{k} = H_k\,e_k \tag{19}$$

where $H_k$ denotes the observation matrix of the system at time k;
during the intervals in which no CNS data are sampled, the state error is carried forward by the self-update process:

$$\hat{x}_{k+j} = F_{k+j|k+j-1}\,\hat{x}_{k+j-1}$$

$$e_{k+j} = F_{k+j|k+j-1}\,e_{k+j-1} \tag{20}$$

combining Eqs. (19) and (20), the residual is expressed through the self-updated state error:

$$\varepsilon_{k+j} = H_{k+j}\,F_{k+j|k+j-1}\,\big(I - K_{k+j-1}H_{k+j-1}\big)\,\big(x_{k+j-1} - \hat{x}_{k+j-1}\big) \tag{21}$$

Eq. (21) establishes the conversion between the error and the residual, and Eq. (20) is the self-propagation model of the system error; $\varepsilon_{k+j}$ denotes the residual of the discrete system at time k+j; $H_{k+j}$ and $H_{k+j-1}$ are the observation matrices of the system at times k+j and k+j−1; $F_{k+j-1}$ and $F_{k+j}$ denote the state-transition matrices of the system at times k+j−1 and k+j; $e_{k+j-1}$ and $e_{k+j}$ denote the errors of the discrete system at times k+j−1 and k+j; $x_{k+j-1}$ and $\hat{x}_{k+j-1}$ denote the state of the system at time k+j−1 and its estimate; $K_{k+j-1}$ denotes the gain at time k+j−1; I denotes the identity matrix.
The residual can thus ultimately be predicted from the estimated error.
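As an illustration of Eqs. (9)-(21), the following is a minimal NumPy sketch of one cubature-Kalman-filter step together with the residual self-propagation used between CNS samples. The state function f, the measurement function h, the noise covariances Q and R, and the discrete F/H matrices over the sampling gap are placeholders for the INS/CNS model defined above; this is a sketch of the standard CKF recursion plus Eqs. (19)-(21), not the patented multi-rate filter itself.

```python
import numpy as np

def ckf_step(x_est, P, z, f, h, Q, R):
    """One cubature Kalman filter cycle, Eqs. (9)-(18)."""
    n = x_est.size
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])     # normalized cubature points xi_i
    w = 1.0 / (2 * n)                                         # equal weights omega_i
    # time update, Eqs. (9)-(11)
    S = np.linalg.cholesky(P)
    X = S @ xi + x_est[:, None]                               # cubature points x_{k-1|k-1,i}
    Xp = np.column_stack([f(X[:, i]) for i in range(2 * n)])  # propagated points
    x_pred = w * Xp.sum(axis=1)
    P_pred = w * Xp @ Xp.T - np.outer(x_pred, x_pred) + Q
    # measurement update, Eqs. (12)-(18)
    S = np.linalg.cholesky(P_pred)
    X = S @ xi + x_pred[:, None]                              # re-drawn points x_{k|k-1,i}
    Zp = np.column_stack([h(X[:, i]) for i in range(2 * n)])  # propagated measurements
    z_pred = w * Zp.sum(axis=1)
    Pzz = w * Zp @ Zp.T - np.outer(z_pred, z_pred) + R
    Pxz = w * X @ Zp.T - np.outer(x_pred, z_pred)
    K = Pxz @ np.linalg.inv(Pzz)                              # Kalman gain, Eq. (17)
    return x_pred + K @ (z - z_pred), P_pred - K @ Pzz @ K.T, K

def predict_residuals(e_prev, K_prev, H_prev, F_seq, H_seq):
    """Residual self-propagation across the CNS sampling gap, Eqs. (19)-(21)."""
    e = (np.eye(e_prev.size) - K_prev @ H_prev) @ e_prev      # error left after the last update
    residuals = []
    for F_j, H_j in zip(F_seq, H_seq):
        e = F_j @ e                                           # Eq. (20): error self-propagation
        residuals.append(H_j @ e)                             # Eq. (19)/(21): predicted residual
    return residuals
```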
Further, the step (4) comprises the following processes:
firstly, the filtering process is divided into n nodes according to the sampling interval between CNS fixes, so the LSTM-MRCKF consists of n LSTM estimators and n MRCKF filters with identical structures and parameters, where n depends on the number of sampling steps between CNS fixes; one LSTM together with one MRCKF is recorded as a filtering node, and the signal-transfer procedure is the same for every node; for one node, after the system observation is obtained from step (1), the weight vector $W_f$ and bias vector $b_f$ of the LSTM-1 network are initialized, and the LSTM-1 estimator works in an offline training stage and an online prediction stage; in the offline training stage, with the CNS frequency as the reference, the CNS-resolved attitudes $A_{CNS}$ over the l instants preceding the current time t are taken as the training input:

$$x_t = \big[A_{CNS}(t-l),\ \dots,\ A_{CNS}(t-2),\ A_{CNS}(t-1)\big]$$

and the training output $o_t$ is the difference between the INS and CNS attitudes at the corresponding instants:

$$o_t = \big[z(t-l),\ \dots,\ z(t-2),\ z(t-1)\big]$$

where $z(t) = A_{INS}(t) - A_{CNS}(t)$; $A_{INS}(t)$ and $A_{CNS}(t)$ respectively denote the attitude information of the INS and the CNS at time t;
the prepared inputs are fed into the input gate $i_t$ of the memory cell of the LSTM-1 neural network for training; the signals passing through the input gate are given by Eqs. (22)-(24), where $x_t$ denotes the input signal and $S_t$ the cell state:

$$i_t = \operatorname{sigmoid}\big(W_{i,x}x_t + W_{i,h}h_{t-1} + b_i\big) \tag{22}$$

$$\tilde{S}_t = \tanh\big(W_{\tilde{s},x}x_t + W_{\tilde{s},h}h_{t-1} + b_{\tilde{s}}\big) \tag{23}$$

$$S_t = f_t \odot S_{t-1} + i_t \odot \tilde{S}_t \tag{24}$$

after passing through the input gate, the signal passes through the forget gate $f_t = \operatorname{sigmoid}\big(W_{f,x}x_t + W_{f,h}h_{t-1} + b_f\big)$ and the output gate $o_t$, after which the optimal training result is obtained:

$$o_t = \operatorname{sigmoid}\big(W_{o,x}x_t + W_{o,h}h_{t-1} + b_o\big) \tag{25}$$

$$h_t = o_t \odot \tanh\big(S_t\big) \tag{26}$$

where $S_t$ denotes the cell state and $h_t$ the output state of the output gate, i.e. the output of the LSTM memory cell; $W_{f,x}$, $W_{f,h}$, $W_{\tilde{s},x}$, $W_{\tilde{s},h}$, $W_{i,x}$, $W_{i,h}$, $W_{o,x}$, $W_{o,h}$ are weight matrices; $b_f$, $b_{\tilde{s}}$, $b_i$, $b_o$ are bias vectors;
in the online prediction stage, after the INS and CNS output data have been synchronized, the trained LSTM-1 model outputs m predicted INS-CNS attitude differences corresponding to time t+1, denoted $z'(t+1), z'(t+2), \dots, z'(t+m)$.
Further, in the step (4), the data at time t in the INS corresponds to t + m data in the CNS prediction.
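For illustration, a minimal PyTorch sketch of such an LSTM observation estimator follows: it maps a window of CNS attitudes to the corresponding INS-CNS attitude differences, with the gate equations (22)-(26) realized inside nn.LSTM. The hidden size, the window handling and the training loop are illustrative assumptions, not the patented configuration.

```python
# Sketch of the LSTM-1 estimator used as a filter sub-module (offline training stage).
import torch
import torch.nn as nn

class AttitudeDifferenceEstimator(nn.Module):
    """x_t = [A_CNS(t-l) .. A_CNS(t-1)]  ->  o_t = [z(t-l) .. z(t-1)],  z = A_INS - A_CNS."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=3, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)

    def forward(self, cns_attitudes):          # (batch, l, 3): heading, pitch, roll
        out, _ = self.lstm(cns_attitudes)      # Eqs. (22)-(26) inside nn.LSTM
        return self.head(out)                  # one attitude difference per time step

def train_step(model, optimizer, x_t, o_t):
    """One offline-training step with a mean-squared-error loss (assumed loss choice)."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x_t), o_t)
    loss.backward()
    optimizer.step()
    return loss.item()

# In the online prediction stage the trained model is run on the most recent CNS window and
# its outputs serve as the predicted observations z'(t+1) .. z'(t+m) during the CNS gap.
```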
Further, the step (5) specifically includes the following steps:
the INS-CNS attitude differences predicted by LSTM-1 are input into MRCKF-1 as the filter observation; in MRCKF-1 the state is the aforementioned 15-dimensional vector, and the process noise and the measurement noise are the zero-mean Gaussian white noises of step (1), with covariances

$$E\big[W(t)W^{T}(\tau)\big] = Q\,\delta(t-\tau) \tag{27}$$

$$E\big[V(t)V^{T}(\tau)\big] = R\,\delta(t-\tau) \tag{28}$$

the filter outputs the optimal estimate $[\delta P,\ \delta V,\ \delta A,\ \dots]$, and the final optimal attitude estimate is

$$A_{INS\_update} = \delta A + A_{INS}$$

where δP, δV and δA respectively denote the filter's optimal estimates of the position, velocity and attitude errors, and $A_{INS\_update}$ denotes the optimal attitude estimate.
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. The method combines a deep-learning model with residual-compensated cubature Kalman filtering to optimally estimate the attitude in INS/CNS integrated navigation, which helps improve the stability and accuracy of continuous navigation in highly dynamic environments.
2. The invention optimizes the output data structure of the star sensor and solves the problems of signal delay and low output frequency: the star sensor's output frequency is raised to supply the observations required by the filter, the loss of attitude-estimation accuracy caused by a long filter update period is resolved, and the loss of attitude-measurement accuracy caused by the star sensor's trailing phenomenon is overcome. MRCKF and LSTM are combined: the LSTM estimator learns the relation between the star sensor's output data and the system observation and predicts observation data that follow the star sensor's data pattern, so the Kalman filter keeps estimating and updating during the intervals between CNS samples, and long-duration, high-accuracy navigation can still be guaranteed in the complex flight environment of a high-speed missile-borne aircraft.
3. Compared with traditional integrated navigation methods, the residual-compensated multi-rate cubature Kalman filtering method combines a deep-learning model and eliminates attitude-estimation errors from three aspects. It raises the frequency of the joint INS/CNS filtering and provides enduring, high-accuracy navigation in long-duration flight environments.
Drawings
FIG. 1 is a schematic diagram of the operation of the present invention when the estimator is in the training phase.
Fig. 2 is a schematic diagram of the operation of the filter in the present invention when the estimator is in the prediction stage.
Fig. 3(a), 3(b) and 3(c) are schematic diagrams of attitude errors corresponding to different error compensation models.
Detailed Description
The technical solutions provided by the present invention will be described in detail below with reference to specific examples. It should be understood that the following specific embodiments are only illustrative and do not limit the scope of the present invention.
The invention introduces two types of deep-learning mechanisms into the INS/CNS combined system: a Scale-Recurrent Network (SRN) and a Long Short-Term Memory (LSTM) neural network, used respectively to optimize the star-sensor image and the star-sensor output frequency. The SRN deblurs the trailing star map and improves the accuracy of star-map centroid extraction. The LSTM network serves as a sub-module of the filter to extend the system observations and raise the output frequency of the CNS, so that the CNS and the INS output at the same frequency. Based on residual-compensated multi-rate cubature Kalman filtering, the INS and CNS outputs of different frequencies are synchronized and the lagging signal is compensated. Meanwhile, the LSTM estimator works during the sampling period between CNS fixes, which effectively shortens the filtering period and achieves continuous, seamless navigation. The LSTM estimator works in two stages, offline training and online prediction. In the offline training stage, with the CNS frequency as the reference, the CNS-resolved attitudes of the preceding period are used as the input, the corresponding INS-CNS attitude differences are used as the training output, and the system observation is learned offline; the online prediction stage covers the interval between CNS samples, during which the LSTM outputs predicted system observations to compensate the low-frequency data.
Specifically, the INS/CNS integrated navigation method based on residual compensation multi-rate cubature Kalman filtering provided by the invention comprises the following steps:
(1) initializing an INS/CNS integrated navigation system, establishing a state and a measurement equation of the integrated navigation system, and recording a star map and attitude information of a star sensor and the calculation output of an inertial sensor in real time.
The INS/CNS integrated navigation system works in a loosely coupled mode combining attitude and velocity errors. In the INS, the inertial navigation system reads the carrier data collected by the navigation module, including angular rate and acceleration, as the state quantities of the integrated navigation system. In the CNS, the star sensor provides accurate attitude information as the measurement of the integrated navigation system, denoted $\psi_C, \theta_C, \gamma_C$ for heading, pitch and roll respectively. In this step the state model and the measurement model are established and the process noise and measurement noise are determined, i.e. the state-space equation and the measurement model are defined as:

$$\dot{X}(t) = f\big(X(t),\,t\big) + W(t) \tag{1}$$

$$Z(t) = H(t)\,X(t) + V(t) \tag{2}$$

where X(t) denotes the state vector of the state space; f(·) denotes the nonlinear state function describing the system, i.e. the nonlinear transition function of the error model in the nonlinear filtering system, with f(X(t), t) = F_n·X(t), where F_n denotes the n-dimensional state-transition matrix; V(t) denotes the measurement noise corresponding to the star sensor's measurement error, modeled as zero-mean Gaussian white noise with covariance $E\big[V(t)V^{T}(\tau)\big] = R\,\delta(t-\tau)$; W(t) denotes the state (process) noise, modeled as zero-mean Gaussian white noise with covariance $E\big[W(t)W^{T}(\tau)\big] = Q\,\delta(t-\tau)$; $(\psi,\theta,\gamma)^{T}$ denote the heading, pitch and roll angles; $(v_E,v_N,v_U)^{T}$ and $(L,\lambda,h)^{T}$ respectively denote the velocity and position information in the East-North-Up navigation frame (g frame); $(\varepsilon_x,\varepsilon_y,\varepsilon_z)$ and $(\nabla_x,\nabla_y,\nabla_z)$ respectively denote the constant drifts of the gyroscopes and accelerometers; H(t) denotes the measurement matrix of the fused observation, H(t) = [I_{3×3}, 0_{3×12}].
The specific implementation steps are shown in FIG. 1. As can be seen from FIG. 1, X(t) in the INS/CNS combined system represents the position, velocity and attitude states of the system, i.e. X(t) = [P_INS(t), V_INS(t), A_INS(t)], and Z(t) is the difference between the INS and CNS output attitudes, i.e. Z(t) = [A_INS(t) − A_CNS(t)]. Based on the state equation and the measurement equation, the attitude and position equations of the INS/CNS integrated navigation can be expressed as:

$$
\begin{cases}
\dot{A}_{INS} = \Omega_{gb}\big(\omega_{gb}^{g} + w_g\big)\\[4pt]
\dot{L} = \dfrac{v_N}{R_M + h}\\[4pt]
\dot{\lambda} = \dfrac{v_E}{(R_N + h)\cos L}\\[4pt]
\dot{h} = v_U
\end{cases} \tag{3}
$$

where $w_g$ denotes the white noise of the gyroscopes; $\Omega_{gb}$ denotes the transfer matrix of the angular rate from the b frame to the n frame; $\omega_{gb}^{g}$ is the projection of $\omega_{gb}$ in the geographic frame; L, λ and h respectively denote the longitude, latitude and height; $R_M$ is the radius of curvature of the meridian; $R_N$ is the principal radius of curvature of the prime vertical.
(2) Deblurring the "trailing" star map with the SRN-based image-deblurring method.
In the CNS, the SRN is used to deblur the blurred star maps recorded by the star sensor, improving the accuracy of star-map centroid extraction and reducing the error of the attitude solution. Under the high-speed flight conditions of a missile-borne aircraft, the characteristics of the blurred star map differ across scales, so in the SRN a series of multi-scale prior features obtained from the blurred image sequence are used as the network input, and the corresponding sharp reference image blocks are used as training labels. To reduce the complexity of feature fusion across scales, the SRN is set to 3 scales, with the size of the (i+1)-th scale half that of the i-th scale. The latent image at each scale is chosen as the primary deblurring target, and the image at each scale can be represented as:

$$I^{i},\ h^{i} = \mathrm{Net}_{SR}\big(B^{i},\ I^{i+1\uparrow},\ h^{i+1\uparrow};\ \theta_{SR}\big) \tag{4}$$

where i denotes the current scale index; $B^{i}$ and $I^{i}$ denote the blurred image and the estimated latent image at the i-th scale; $\mathrm{Net}_{SR}(\cdot)$ denotes the SRN network abstraction model, with $\theta_{SR}$ its network parameters; $h^{i}$ is the hidden state, which carries information such as the image structure and the blur kernel from the previous, coarser scale; $(\cdot)^{\uparrow}$ is the operator that resizes the image between adjacent scales.
Since the SRN is recurrent across scales, a convolutional layer is inserted into the hidden layer at the smallest scale to realize the recurrence. With the convolution kernel size set to 5 × 5, the SRN can be described as:

$$f^{i} = \mathrm{Net}_{E}\big(B^{i},\ I^{i+1\uparrow};\ \theta_{E}\big) \tag{5}$$

$$h^{i},\ g^{i} = \mathrm{ConvLSTM}\big(h^{i+1\uparrow},\ f^{i};\ \theta_{LSTM}\big) \tag{6}$$

$$I^{i} = \mathrm{Net}_{D}\big(g^{i};\ \theta_{D}\big) \tag{7}$$

where $\mathrm{Net}_{E}$ and $\mathrm{Net}_{D}$ denote the encoder and decoder with parameters $\theta_{E}$ and $\theta_{D}$; $\theta_{LSTM}$ denotes the parameters of the inserted convolutional layer; $f^{i}$ and $g^{i}$ denote the features passed from the encoder to the convolutional layer and from the convolutional layer to the decoder, respectively.
In the invention the SRN has 3 feature scales, and the size of the (i+1)-th scale is half of the i-th. Each convolutional layer added to the hidden layer has the same structure, the activation function is the ReLU, and all convolution kernel sizes are set to 5. The loss function at each scale can be described as:

$$\mathcal{L} = \sum_{i=1}^{n} \frac{\kappa_i}{N_i}\,\big\|I^{i} - I_{*}^{i}\big\|_{2}^{2} \tag{8}$$

where $I^{i}$ and $I_{*}^{i}$ respectively denote the output of the SRN and the reference output at the i-th scale; $\{\kappa_i\}$ denotes the weight of each scale, set empirically to $\kappa_i = 1$; $N_i$ is the number of elements in the normalized output $I^{i}$.
From the deblurred star map $I^{i}$, the attitude information $A_{CNS}(t)$ can be obtained through centroid extraction and star identification; however, because the star map goes through a series of solving operations, the output signal lags and its output frequency is lower than that of the INS.
(3) To solve the problems of signal delay and multiple rates in fusing information from different sensors, MRCKF and LSTM are combined, which fully raises the output frequency of the CNS and guarantees the stability of the filtering result. In the MRCKF, the initial state and covariance of the filter are set as $\hat{x}_{0} = E(x_0)$ and $P_{0} = \operatorname{cov}(x_0)$, and the filtering is divided into a time-update part and a measurement-update part.
(1) time update:

$$S_{k-1|k-1} = \operatorname{chol}\big(P_{k-1|k-1}\big), \qquad x_{k-1|k-1,i} = S_{k-1|k-1}\,\xi_i + \hat{x}_{k-1|k-1} \tag{9}$$

where chol(·) denotes the Cholesky decomposition; $P_{k-1|k-1}$ denotes the predicted mean-square error at time k−1; $S_{k-1|k-1}$ denotes the square-root factor of the optimal estimate covariance $P_{k-1|k-1}$; $\hat{x}_{k-1|k-1}$ is the optimal state estimate output by the filter at time k−1; $x_{k-1|k-1,i}$ denote the computed cubature points; $\{\xi_i\}$ denote the normalized cubature points, which can be set as $\xi_i = \sqrt{n}\,[1]_i$, $i = 1,\dots,2n$;
(2) computing the estimate of the state $\hat{x}_{k|k-1}$ and the covariance $P_{k|k-1}$:

$$\hat{x}_{k|k-1} = \frac{1}{2n}\sum_{i=1}^{2n} f\big(x_{k-1|k-1,i}\big) \tag{10}$$

$$P_{k|k-1} = \frac{1}{2n}\sum_{i=1}^{2n} f\big(x_{k-1|k-1,i}\big)\,f\big(x_{k-1|k-1,i}\big)^{T} - \hat{x}_{k|k-1}\,\hat{x}_{k|k-1}^{T} + Q_{k-1} \tag{11}$$

where $\hat{x}_{k|k-1}$ is the estimate of the system state propagated from time k−1; $P_{k|k-1}$ is the one-step predicted mean-square error from time k−1 to the next time k; $e_{k|k-1}$ denotes the estimation error between the state quantities; $\omega_i$ denote the weights, usually taken as $\omega_i = \tfrac{1}{2n}$;
(3) measurement update
solving the prediction covariance matrix and computing the cubature points $x_{k|k-1,i}$:

$$P_{k|k-1} = S_{k|k-1}\,S_{k|k-1}^{T}, \qquad x_{k|k-1,i} = S_{k|k-1}\,\xi_i + \hat{x}_{k|k-1} \tag{12}$$

computing the propagated cubature points $z_{k|k-1,i}$ and obtaining the measurement estimate $\hat{z}_{k|k-1}$:

$$z_{k|k-1,i} = h\big(x_{k|k-1,i}\big) \tag{13}$$

$$\hat{z}_{k|k-1} = \frac{1}{2n}\sum_{i=1}^{2n} z_{k|k-1,i} \tag{14}$$

(4) after the measurement estimate is obtained, the associated covariances, the gain and the update can be computed:

$$P_{zz,k|k-1} = \frac{1}{2n}\sum_{i=1}^{2n} z_{k|k-1,i}\,z_{k|k-1,i}^{T} - \hat{z}_{k|k-1}\,\hat{z}_{k|k-1}^{T} + R_k \tag{15}$$

$$P_{xz,k|k-1} = \frac{1}{2n}\sum_{i=1}^{2n} x_{k|k-1,i}\,z_{k|k-1,i}^{T} - \hat{x}_{k|k-1}\,\hat{z}_{k|k-1}^{T} \tag{16}$$

$$K_k = P_{xz,k|k-1}\,P_{zz,k|k-1}^{-1}, \qquad \hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k\big(z_k - \hat{z}_{k|k-1}\big) \tag{17}$$

$$P_{k|k} = P_{k|k-1} - K_k\,P_{zz,k|k-1}\,K_k^{T} \tag{18}$$

where $\hat{z}_{k|k-1}$ is the measurement at time k predicted at time k−1; $P_{zz,k|k-1}$ and $P_{xz,k|k-1}$ denote the autocorrelation and cross-correlation covariance matrices.
To realize the residual compensation of the multi-rate Kalman filter, the time update and the measurement update of the Kalman filter have to be handled separately. During the sampling interval of the CNS data, a time update is performed while an estimate of the residual is added. During this interval only INS data, and no CNS data, are available, so the initial residual $\varepsilon_k$ is first determined from the filtering estimate at the previous instant, and the relation between the residual at the next instant and the system error is then established.
Following the self-propagation of the system error, once the initial residual is determined the residual propagates over time during each CNS data sampling interval, and after it has propagated over one sampling period it is updated and corrected by the CNS data at the next instant. In the residual estimation, assuming that the process noise and the measurement noise are negligible, the residual can be expressed as:

$$\varepsilon_k = z_k - H_k\,\hat{x}_{k} = H_k\,e_k \tag{19}$$

where $H_k$ denotes the observation matrix at time k.
During the intervals in which no CNS data are sampled, the state error is carried forward by the self-update process:

$$\hat{x}_{k+j} = F_{k+j|k+j-1}\,\hat{x}_{k+j-1}$$

$$e_{k+j} = F_{k+j|k+j-1}\,e_{k+j-1} \tag{20}$$

Combining Eqs. (19) and (20), the residual is expressed through the self-updated state error:

$$\varepsilon_{k+j} = H_{k+j}\,F_{k+j|k+j-1}\,\big(I - K_{k+j-1}H_{k+j-1}\big)\,\big(x_{k+j-1} - \hat{x}_{k+j-1}\big) \tag{21}$$

Eq. (21) establishes the conversion between the error and the residual, and Eq. (20) is the self-propagation model of the system error; $\varepsilon_{k+j}$ denotes the residual of the discrete system at time k+j; $H_{k+j}$ and $H_{k+j-1}$ are the observation matrices at times k+j and k+j−1; $F_{k+j-1}$ and $F_{k+j}$ denote the state-transition matrices at times k+j−1 and k+j; $e_{k+j-1}$ and $e_{k+j}$ denote the errors of the discrete system at times k+j−1 and k+j; $x_{k+j-1}$ and $\hat{x}_{k+j-1}$ denote the state of the system at time k+j−1 and its estimate; $K_{k+j-1}$ denotes the gain at time k+j−1; I denotes the identity matrix.
Finally, the residual can be predicted from the estimation error, so during the CNS data sampling interval the state quantities can be compensated by residual estimation, which avoids the divergence of the INS over time and improves the stability of the system.
(4) While step (3) is carried out, the filter continues the down-sampling operation with the star sensor's output period as the update period. A Long Short-Term Memory (LSTM) estimator is then introduced into the residual-compensated multi-rate cubature Kalman filter as a filter sub-module to extend the star sensor's output data, raise the output frequency of the CNS, and realize same-frequency output of the CNS and the INS.
In an INS/CNS integrated navigation system the INS output frequency is much higher than that of the CNS: inertial navigation generally outputs above 50 Hz, while the star sensor outputs only 1-10 Hz. It is therefore difficult to update the state accurately through residual compensation alone during the interval between CNS samples, and since the residual estimate carries errors, their accumulation over the interval easily causes the filter to diverge. Hence the LSTM estimator is introduced into the MRCKF (denoted LSTM-MRCKF): while the residual estimation is preserved, the star-sensor output during the CNS sampling interval is predicted, raising the star sensor's output frequency. Firstly, the filtering process is divided into n nodes according to the sampling interval between CNS fixes, so the LSTM-MRCKF consists of n LSTM estimators and n MRCKF filters with identical structures and parameters, where n depends on the number of sampling steps between CNS fixes. One LSTM together with one MRCKF is recorded as a filtering node, and the signal-transfer procedure is the same for every node. Taking node 1 as an example, after the system observation is obtained from step (1), the weight vector $W_f$ and bias vector $b_f$ of the LSTM-1 network are initialized, and the LSTM-1 estimator works in an offline training stage and an online prediction stage. In the offline training stage, with the CNS frequency as the reference, the CNS-resolved attitudes $A_{CNS}$ over the l instants preceding the current time t are taken as the training input:

$$x_t = \big[A_{CNS}(t-l),\ \dots,\ A_{CNS}(t-2),\ A_{CNS}(t-1)\big]$$

and the training output $o_t$ is the difference between the INS and CNS attitudes at the corresponding instants:

$$o_t = \big[z(t-l),\ \dots,\ z(t-2),\ z(t-1)\big]$$

where $z(t) = A_{INS}(t) - A_{CNS}(t)$; $A_{INS}(t)$ and $A_{CNS}(t)$ respectively denote the attitude information of the INS and the CNS at time t.
The prepared inputs are fed into the input gate $i_t$ of the memory cell of the LSTM-1 neural network for training; the signals passing through the input gate are given by Eqs. (22)-(24), where $x_t$ denotes the input signal and $S_t$ the cell state:

$$i_t = \operatorname{sigmoid}\big(W_{i,x}x_t + W_{i,h}h_{t-1} + b_i\big) \tag{22}$$

$$\tilde{S}_t = \tanh\big(W_{\tilde{s},x}x_t + W_{\tilde{s},h}h_{t-1} + b_{\tilde{s}}\big) \tag{23}$$

$$S_t = f_t \odot S_{t-1} + i_t \odot \tilde{S}_t \tag{24}$$

After passing through the input gate, the signal passes through the forget gate $f_t = \operatorname{sigmoid}\big(W_{f,x}x_t + W_{f,h}h_{t-1} + b_f\big)$ and the output gate $o_t$, after which the optimal training result is obtained:

$$o_t = \operatorname{sigmoid}\big(W_{o,x}x_t + W_{o,h}h_{t-1} + b_o\big) \tag{25}$$

$$h_t = o_t \odot \tanh\big(S_t\big) \tag{26}$$

where $S_t$ denotes the cell state and $h_t$ the output state of the output gate, i.e. the output of the LSTM memory cell; $W_{f,x}$, $W_{f,h}$, $W_{\tilde{s},x}$, $W_{\tilde{s},h}$, $W_{i,x}$, $W_{i,h}$, $W_{o,x}$, $W_{o,h}$ are weight matrices; $b_f$, $b_{\tilde{s}}$, $b_i$, $b_o$ are bias vectors.
The real-time procedure after the INS and CNS output data have been synchronized in the online prediction stage is shown in FIG. 2. The trained LSTM-1 model outputs m predicted INS-CNS attitude differences at time t+1, denoted $z'(t+1), z'(t+2), \dots, z'(t+m)$ (in the invention, the data at time t in the INS correspond to the t+m predicted CNS data).
(5) Based on the data compensated in step (4), the system state and measurement models are established to realize the optimal estimation of the INS/CNS integrated navigation attitude.
The INS-CNS attitude differences predicted by LSTM-1 are input into MRCKF-1 as the filter observation; in MRCKF-1 the state is the aforementioned 15-dimensional vector, and the process noise and the measurement noise are the zero-mean Gaussian white noises of step (1), with covariances

$$E\big[W(t)W^{T}(\tau)\big] = Q\,\delta(t-\tau) \tag{27}$$

$$E\big[V(t)V^{T}(\tau)\big] = R\,\delta(t-\tau) \tag{28}$$

The filter outputs the optimal estimate $[\delta P,\ \delta V,\ \delta A,\ \dots]$, and the final optimal attitude estimate is

$$A_{INS\_update} = \delta A + A_{INS}$$

where δP, δV and δA respectively denote the filter's optimal estimates of the position, velocity and attitude errors, and $A_{INS\_update}$ denotes the optimal attitude estimate.
The advantages of introducing the LSTM into the residual-compensated multi-rate cubature Kalman filtering are: (1) the residual-compensated multi-rate cubature Kalman filter can update itself through residual compensation during the interval between CNS samples, which both avoids filter divergence caused by an overly long sampling interval and synchronizes the INS and CNS data sampling; (2) the LSTM estimator can learn the relation between the star-sensor output data and the system observation and, despite the low output frequency, produce observation data at the same frequency as the INS, raising the output frequency of the CNS. Estimating the observation with the LSTM also reduces the accumulated error introduced by residual compensation. The star sensor's output frequency can thus be raised while the residual estimation is preserved, achieving optimal estimation.
According to a simulation embodiment of the invention, the integrated navigation system of the high-speed missile-borne aircraft works in the loosely coupled mode; the inertial system consists of gyroscopes and accelerometers, with a gyroscope bias of 1°/h and random noise of 0.5°/h, an accelerometer bias of 100 ug and random noise of 50 ug, and star-sensor random noise of 3′. The flight time of the boost phase is set to 400 s, the total flight time to 111000 s, and the initial launch position to (39.98° N, 116.34° E). The output frequency of the inertial navigation system is set to 100 Hz and that of the star sensor to 3 Hz. To analyze the effectiveness of the proposed method, comparison results of several different implementations are given. In the comparative simulation the attitude is estimated with different models, and the resulting pitch, roll and heading angle error estimates are shown in FIGS. 3(a), 3(b) and 3(c). The models used for comparison are the cubature Kalman filter (CKF), the combination of LSTM and CKF (LSTM-CKF), and the combination of LSTM and MRCKF (LSTM-MRCKF). Neither CKF nor LSTM-CKF uses residual compensation, and their operation depends mainly on the output frequency of the CNS data. As can be seen from FIG. 3, in the mode without residual compensation, the pitch-angle error estimated with CKF time updates alone is large; adding the LSTM estimation model to predict the observation data effectively reduces the error, but the prediction cannot change the matching order of the data, so the CNS and INS data remain asynchronous. This data lag affects the estimation result, whereas the LSTM-MRCKF proposed by the invention delivers a more accurate attitude estimate and meets the requirement of improved combination accuracy.
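As a rough illustration of how these rates interact, the following sketch wires the pieces together for the simulated 100 Hz INS / 3 Hz CNS configuration; the node count n, the function interfaces and the loop structure are illustrative assumptions rather than the patented implementation.

```python
# Illustrative scheduling of LSTM-MRCKF nodes between two CNS fixes (assumed interfaces).
INS_RATE_HZ = 100
CNS_RATE_HZ = 3
N_NODES = INS_RATE_HZ // CNS_RATE_HZ     # about 33 INS-only steps per CNS sampling interval

def run_cns_interval(mrckf_step, lstm_predict, ins_samples, cns_window, state, cov):
    """Between two star-sensor fixes, drive the filter with LSTM-predicted observations;
    when the next CNS attitude arrives, a regular measurement update follows."""
    predicted_z = lstm_predict(cns_window)           # z'(t+1) .. z'(t+N_NODES)
    for j, ins_sample in enumerate(ins_samples[:N_NODES]):
        state, cov = mrckf_step(state, cov, predicted_z[j], ins_sample)
    return state, cov
```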
The technical means disclosed in the scheme of the invention are not limited to the technical means disclosed in the above embodiments, but also include the technical means formed by any combination of the above technical features. It should be noted that modifications and adaptations can be made by those skilled in the art without departing from the principles of the present invention and are intended to be within the scope of the present invention.

Claims (6)

1. An inertial/astronomical integrated navigation method based on residual-compensation multi-rate CKF, characterized by comprising the following steps:
(1) initializing an inertia/astronomical integrated navigation system, establishing a state and a measurement equation of the integrated navigation system, and recording a star map and attitude information of a star sensor and the resolving output of an inertia sensor in real time;
(2) carrying out deblurring processing on the 'trailing' star map by using an image deblurring method based on the SRN; in the SRN, acquiring a series of different-scale prior characteristics from a sequence-based blurred image as the input of a network, and taking a corresponding reference image block as a training label; decomposing the fuzzy image scale by taking the corresponding reference image block as a training label, constructing a coding and decoding network under different scales, and finally fusing and outputting a reconstructed restored image;
(3) Taking the attitude information after deblurring in the step (2) as a part of observed quantity of multi-rate volume Kalman filtering based on residual error compensation, taking the INS position, speed and attitude in the step (1) and constant drift of an accelerometer and a gyroscope as system state quantity, and inputting the system state quantity into a filter together for filtering estimation; output delay of the star sensor is corrected through residual error compensation, and data synchronization is achieved;
(4) while step (3) is carried out, the filter continues down-sampling with the star sensor's output period as the update period; a long short-term memory neural-network estimator is introduced into the residual-compensated multi-rate cubature Kalman filter as a filter sub-module to extend the star sensor's output data, raise the output frequency of the celestial navigation system, and realize same-frequency output of the CNS and the inertial navigation system; specifically:
firstly, the filtering process is divided into n nodes according to the sampling interval between CNS fixes, so the LSTM-MRCKF consists of n LSTM estimators and n MRCKF filters with identical structures and parameters, where n depends on the number of sampling steps between CNS fixes; one LSTM together with one MRCKF is recorded as a filtering node, and the signal-transfer procedure is the same for every node; for one node, after the system observation is obtained from step (1), the weight vector $W_f$ and bias vector $b_f$ of the LSTM-1 network are initialized, and the LSTM-1 estimator works in an offline training stage and an online prediction stage; in the offline training stage, with the CNS frequency as the reference, the CNS-resolved attitudes $A_{CNS}$ over the l instants preceding the current time t are taken as the training input:

$$x_t = \big[A_{CNS}(t-l),\ \dots,\ A_{CNS}(t-2),\ A_{CNS}(t-1)\big]$$

and the training output $o_t$ is the difference between the INS and CNS attitudes at the corresponding instants:

$$o_t = \big[z(t-l),\ \dots,\ z(t-2),\ z(t-1)\big]$$

where $z(t) = A_{INS}(t) - A_{CNS}(t)$; $A_{INS}(t)$ and $A_{CNS}(t)$ respectively denote the attitude information of the INS and the CNS at time t;
the prepared inputs are fed into the input gate $i_t$ of the memory cell of the LSTM-1 neural network for training; the signals passing through the input gate are given by Eqs. (22)-(24), where $x_t$ denotes the input signal and $S_t$ the cell state:

$$i_t = \operatorname{sigmoid}\big(W_{i,x}x_t + W_{i,h}h_{t-1} + b_i\big) \tag{22}$$

$$\tilde{S}_t = \tanh\big(W_{\tilde{s},x}x_t + W_{\tilde{s},h}h_{t-1} + b_{\tilde{s}}\big) \tag{23}$$

$$S_t = f_t \odot S_{t-1} + i_t \odot \tilde{S}_t \tag{24}$$

after passing through the input gate, the signal passes through the forget gate $f_t = \operatorname{sigmoid}\big(W_{f,x}x_t + W_{f,h}h_{t-1} + b_f\big)$ and the output gate $o_t$, after which the optimal training result is obtained:

$$o_t = \operatorname{sigmoid}\big(W_{o,x}x_t + W_{o,h}h_{t-1} + b_o\big) \tag{25}$$

$$h_t = o_t \odot \tanh\big(S_t\big) \tag{26}$$

where $S_t$ denotes the cell state and $h_t$ the output state of the output gate, i.e. the output of the LSTM memory cell; $W_{f,x}$, $W_{f,h}$, $W_{\tilde{s},x}$, $W_{\tilde{s},h}$, $W_{i,x}$, $W_{i,h}$, $W_{o,x}$, $W_{o,h}$ are weight matrices; $b_f$, $b_{\tilde{s}}$, $b_i$, $b_o$ are bias vectors;
in the online prediction stage, after the INS and CNS output data have been synchronized, the trained LSTM-1 model outputs m predicted INS-CNS attitude differences corresponding to time t+1, denoted $z'(t+1), z'(t+2), \dots, z'(t+m)$;
(5) based on the data compensated in step (4), establishing the system state and measurement models to realize the optimal estimation of the INS/CNS integrated navigation attitude.
2. The method of claim 1, wherein the step (1) comprises the following steps:
defining the state-space equation

$$\dot{X}(t) = f\big(X(t),\,t\big) + W(t) \tag{1}$$

and the measurement model

$$Z(t) = H(t)\,X(t) + V(t) \tag{2}$$

where X(t) denotes the state vector of the state space; f(·) denotes the nonlinear state function describing the system, i.e. the nonlinear transition function of the error model in the nonlinear filtering system, with f(X(t), t) = F_n·X(t), where F_n denotes the n-dimensional state-transition matrix; V(t) denotes the measurement noise corresponding to the star sensor's measurement error, being zero-mean Gaussian white noise with covariance $E\big[V(t)V^{T}(\tau)\big] = R\,\delta(t-\tau)$; W(t) denotes the state noise, being zero-mean Gaussian white noise with covariance $E\big[W(t)W^{T}(\tau)\big] = Q\,\delta(t-\tau)$; $(\psi,\theta,\gamma)^{T}$ denote the heading, pitch and roll angles; $(v_E,v_N,v_U)^{T}$ and $(L,\lambda,h)^{T}$ respectively denote the velocity and position information in the East-North-Up navigation frame; $(\varepsilon_x,\varepsilon_y,\varepsilon_z)$ and $(\nabla_x,\nabla_y,\nabla_z)$ respectively denote the constant drifts of the gyroscopes and accelerometers along the x, y, z axes of the carrier frame; H(t) denotes the measurement matrix of the fused observation, H(t) = [I_{3×3}, 0_{3×12}]; ψ_C, θ_C, γ_C respectively denote the heading, pitch and roll angles output by the sensor in the star-sensor frame; in the INS/CNS combined system, X(t) represents the position, velocity and attitude states of the system, i.e. X(t) = [P_INS(t), V_INS(t), A_INS(t)], and Z(t) is the difference between the INS and CNS output attitudes, i.e. Z(t) = [A_INS(t) − A_CNS(t)];
based on the state equation and the measurement equation, the attitude and position equations of the INS/CNS integrated navigation are expressed as:

$$
\begin{cases}
\dot{A}_{INS} = \Omega_{gb}\big(\omega_{gb}^{g} + w_g\big)\\[4pt]
\dot{L} = \dfrac{v_N}{R_M + h}\\[4pt]
\dot{\lambda} = \dfrac{v_E}{(R_N + h)\cos L}\\[4pt]
\dot{h} = v_U
\end{cases} \tag{3}
$$

where $w_g$ denotes the white noise of the gyroscopes; $\Omega_{gb}$ denotes the transfer matrix of the angular rate from the carrier coordinate frame to the navigation coordinate frame; $\omega_{gb}^{g}$ is the projection of $\omega_{gb}$ in the geographic frame; L, λ and h respectively denote the longitude, latitude and height; $R_M$ is the radius of curvature of the meridian; $R_N$ is the principal radius of curvature of the prime vertical.
3. The method of claim 2, wherein the step (2) comprises the following steps:
setting the SRN network scale to be 3, wherein the size of the (i + 1) th scale is half of the ith scale; the latent image at each scale is selected as the primary deblurring object, and the image at each scale can be represented as:
Ii,hi=NetSR(Bi,Ii+1↑,hi+1↑;θSR) (4)
i represents a current scale index; b isiAnd IiRepresenting the blurred image and the estimated latent image at the ith scale; netSR(. to) represent the SRN network abstraction model, θSRAs a network model parameterCounting; h isiThe method comprises the following steps that (1) the hidden state is characterized in that the hidden state obtains an image structure and fuzzy kernel information from a previous coarse scale; (.) Operator symbols for image adjustment between adjacent scales;
inserting convolution layer in the minimum scale hidden layer to realize continuous scale operation; with the convolution kernel size set to 5 x 5, the SRN network can be described as:
fi=NetE(Bi,Ii+1↑;θE) (5)
hi,gi=ConvLSTM(hi+1↑,fi;θLSTM) (6)
Ii=NetD(gi;θD) (7)
wherein, NetE,NetDWith the parameter thetaE、θDThe decoder and encoder of (1); thetaLSTMParameters representing the added convolutional layer; f. ofi giRespectively representing the transmission parameters between the decoder and the convolutional layer and between the convolutional layer and the encoder;
the structure of each convolutional layer added in the hidden layer is the same, the excitation function is a ReLU function, all convolutional kernel sizes are set to 5, and the loss function at each scale can be described as:
Figure FDA0003633627070000041
in the formula,
Figure FDA0003633627070000042
respectively representing the output and the reference output of the SRN network at the ith scale; { kappa ]iDenotes the weight of each scale, and κ is set empiricallyi=1,NiIs output after normalization IiThe number of middle elements;
From the deblurred star image I_i, the attitude information A_CNS(t) is obtained by centroid extraction and star-map identification.
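For illustration only, a condensed PyTorch sketch of the scale-recurrent structure of equations (4)–(7) and the multi-scale loss (8); the channel widths, the two-layer encoder/decoder, the single ConvLSTM cell standing in for the hidden-state layer, and the bilinear (·)↑ operator are simplifying assumptions and not the network claimed above.

```python
# Condensed, hypothetical SRN-style deblurring sketch (PyTorch), illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell standing in for the hidden-state layer of Eq. (6)."""
    def __init__(self, ch):
        super().__init__()
        self.gates = nn.Conv2d(2 * ch, 4 * ch, kernel_size=5, padding=2)

    def forward(self, x, h, c):
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class TinySRN(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.ch = ch
        # Net_E: consumes the blurred image plus the upsampled coarser-scale latent image.
        self.enc = nn.Sequential(nn.Conv2d(2, ch, 5, padding=2), nn.ReLU(),
                                 nn.Conv2d(ch, ch, 5, padding=2), nn.ReLU())
        self.cell = ConvLSTMCell(ch)          # hidden-state layer, Eq. (6)
        # Net_D: produces the latent image I_i of the current scale.
        self.dec = nn.Sequential(nn.Conv2d(ch, ch, 5, padding=2), nn.ReLU(),
                                 nn.Conv2d(ch, 1, 5, padding=2))

    def forward(self, blurred):
        # 3-scale pyramid: scale i+1 is half the size of scale i.
        pyr = [blurred]
        for _ in range(2):
            pyr.append(F.interpolate(pyr[-1], scale_factor=0.5,
                                     mode='bilinear', align_corners=False))
        latent = pyr[-1]                      # coarsest-scale initial latent image
        n, _, hh, ww = pyr[-1].shape
        h = torch.zeros(n, self.ch, hh, ww)
        c = torch.zeros(n, self.ch, hh, ww)
        outputs = []
        for i in reversed(range(3)):          # coarse-to-fine recursion, Eq. (4)
            size = pyr[i].shape[2:]
            # (.)^: upsample the previous latent image and hidden state to this scale.
            latent = F.interpolate(latent, size=size, mode='bilinear', align_corners=False)
            h = F.interpolate(h, size=size, mode='bilinear', align_corners=False)
            c = F.interpolate(c, size=size, mode='bilinear', align_corners=False)
            f_i = self.enc(torch.cat([pyr[i], latent], dim=1))   # Eq. (5)
            h, c = self.cell(f_i, h, c)                          # Eq. (6)
            latent = self.dec(h)                                 # Eq. (7), g_i taken as h
            outputs.append(latent)
        return outputs                        # outputs[-1] is the full-resolution estimate

def srn_loss(outputs, sharp):
    """Multi-scale L2 loss of Eq. (8) with kappa_i = 1; mean() realises the 1/N_i factor."""
    loss = 0.0
    for I_i in outputs:
        ref = F.interpolate(sharp, size=I_i.shape[2:], mode='bilinear', align_corners=False)
        loss = loss + ((I_i - ref) ** 2).mean()
    return loss

# Toy usage on a 64x64 single-channel star image (random data, illustration only).
net = TinySRN()
deblurred = net(torch.rand(1, 1, 64, 64))[-1]
```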
4. The method of claim 3, wherein the step (3) comprises the following steps:
In the multi-rate cubature Kalman filter, the initial state and covariance of the filter are set as x̂_0 = E(x_0) and P_0 = cov(x_0); the filtering recursion is divided into a time-update part and a measurement-update part:
(1) the time updating part:
S_{k-1|k-1} = chol(P_{k-1|k-1}),    x_{k-1|k-1,i} = S_{k-1|k-1}·ξ_i + x̂_{k-1|k-1},    i = 1, …, 2n
wherein chol(·) denotes the Cholesky decomposition; P_{k-1|k-1} is the estimation mean-square error at time k−1; S_{k-1|k-1} is the square-root factor of the optimal estimation covariance P_{k-1|k-1}; x̂_{k-1|k-1} is the optimal state estimate output by the filter at time k−1; x_{k-1|k-1,i} denotes the i-th computed cubature point; the normalized cubature-point set {ξ_i} can be set to
ξ_i = √n·[1]_i,    i = 1, …, 2n,
where [1]_i denotes the i-th column of the generator set [I_n, −I_n];
(2) Calculate the state estimate x̂_{k|k-1} and covariance P_{k|k-1}:
x*_{k|k-1,i} = f(x_{k-1|k-1,i})
x̂_{k|k-1} = Σ_{i=1…2n} ω_i·x*_{k|k-1,i}
P_{k|k-1} = Σ_{i=1…2n} ω_i·x*_{k|k-1,i}·x*_{k|k-1,i}^T − x̂_{k|k-1}·x̂_{k|k-1}^T + Q_{k-1}
wherein x̂_{k|k-1} is the one-step prediction of the system state made at time k−1; P_{k|k-1} is the one-step prediction mean-square error from time k−1 to time k; e_{k|k-1} denotes the estimation error between the state quantities; ω_i denotes the cubature-point weights, taken as ω_i = 1/(2n); f(·) is the state-propagation function of the state equation and Q_{k-1} is the process-noise covariance.
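A compact NumPy sketch of this time update, using the standard cubature-point set and weights described above; the routine name ckf_time_update, the placeholder state-propagation function f_state and the toy two-state example are assumptions for illustration only.

```python
# Minimal cubature Kalman filter time update (steps (1)-(2)), illustration only.
import numpy as np

def ckf_time_update(x_est, P_est, f_state, Q):
    n = x_est.size
    # Normalized cubature-point set {xi_i}: columns of sqrt(n) * [I, -I].
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])        # n x 2n
    w = 1.0 / (2 * n)                                            # omega_i

    # (1) Cholesky factor of P_{k-1|k-1} and cubature points x_{k-1|k-1,i}.
    S = np.linalg.cholesky(P_est)
    X = S @ xi + x_est[:, None]

    # (2) Propagate the points and form x_hat_{k|k-1} and P_{k|k-1}.
    X_prop = np.column_stack([f_state(X[:, i]) for i in range(2 * n)])
    x_pred = w * X_prop.sum(axis=1)
    P_pred = w * X_prop @ X_prop.T - np.outer(x_pred, x_pred) + Q
    return x_pred, P_pred

# Toy example with a 2-state constant-velocity model (illustrative values only).
dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])
x_pred, P_pred = ckf_time_update(np.array([0.0, 1.0]), np.eye(2) * 0.1,
                                 lambda x: F @ x, np.eye(2) * 1e-4)
```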
(3) Measurement update
Factorize the prediction covariance matrix and compute the cubature points x_{k|k-1,i}:
S_{k|k-1} = chol(P_{k|k-1}),    x_{k|k-1,i} = S_{k|k-1}·ξ_i + x̂_{k|k-1}
Propagate the cubature points through the measurement function to obtain z_{k|k-1,i} and the measurement estimate ẑ_{k|k-1}:
z_{k|k-1,i} = h(x_{k|k-1,i})    (13)
ẑ_{k|k-1} = Σ_{i=1…2n} ω_i·z_{k|k-1,i}
(4) After the measurement estimate is obtained, the associated covariances, the filter gain and the updated state are calculated:
P_{zz,k|k-1} = Σ_{i=1…2n} ω_i·z_{k|k-1,i}·z_{k|k-1,i}^T − ẑ_{k|k-1}·ẑ_{k|k-1}^T + R_k
P_{xz,k|k-1} = Σ_{i=1…2n} ω_i·x_{k|k-1,i}·z_{k|k-1,i}^T − x̂_{k|k-1}·ẑ_{k|k-1}^T
K_k = P_{xz,k|k-1}·P_{zz,k|k-1}^(−1)
x̂_{k|k} = x̂_{k|k-1} + K_k·(z_k − ẑ_{k|k-1}),    P_{k|k} = P_{k|k-1} − K_k·P_{zz,k|k-1}·K_k^T
wherein ẑ_{k|k-1} is the measurement at time k predicted at time k−1; P_{zz,k|k-1} and P_{xz,k|k-1} denote the autocorrelation and cross-correlation covariance matrices; R_k is the measurement-noise covariance and z_k the actual measurement at time k;
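A matching NumPy sketch of the measurement update just described (steps (3) and (4)); h_meas, R and the routine name are illustrative placeholders, and the formulas follow the standard cubature Kalman filter update rather than a verbatim transcription of the image equations.

```python
# Minimal cubature Kalman filter measurement update (steps (3)-(4)), illustration only.
import numpy as np

def ckf_measurement_update(x_pred, P_pred, z, h_meas, R):
    n = x_pred.size
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])
    w = 1.0 / (2 * n)

    # (3) Re-draw cubature points from the predicted density and push them through h(.).
    S = np.linalg.cholesky(P_pred)
    X = S @ xi + x_pred[:, None]                                  # x_{k|k-1,i}
    Z = np.column_stack([h_meas(X[:, i]) for i in range(2 * n)])  # z_{k|k-1,i}
    z_pred = w * Z.sum(axis=1)                                    # z_hat_{k|k-1}

    # (4) Innovation and cross covariances, gain, and state/covariance update.
    Pzz = w * Z @ Z.T - np.outer(z_pred, z_pred) + R
    Pxz = w * X @ Z.T - np.outer(x_pred, z_pred)
    K = Pxz @ np.linalg.inv(Pzz)
    x_est = x_pred + K @ (z - z_pred)
    P_est = P_pred - K @ Pzz @ K.T
    return x_est, P_est
```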
To realize the residual compensation of the multi-rate cubature Kalman filter, the time update and the measurement update must be processed separately; during the sampling interval of the CNS data, the time update is carried out while an estimate of the residual is added; first, an initial residual ε_k is determined from the filter estimate at the previous instant, which establishes the relation between the residual at the next instant and the system error;
Following the self-propagation of the system error, once the initial residual is determined, the residual is propagated over time within each CNS sampling interval; after propagating over one sampling period, it is updated and corrected by the CNS data at the next instant; in the residual estimation, the process noise and the measurement noise are neglected, and the residual can be expressed as:
[equation image: the residual expressed through the observation matrix and the state error]
wherein H_k denotes the observation matrix of the system at time k;
During the intervals in which no star-sensor (CNS) measurement is available, the state error is conveyed through a self-updating process:
[equation image — equation (19)]
e_{k+j} = F_{k+j|k+j-1}·e_{k+j-1}    (20)
Combining equations (19) and (20), the residual is expressed in terms of the self-updated state error:
[equation image — equation (21)]
Equation (21) establishes the conversion relation between the error and the residual, and equation (20) is the self-propagation model of the system error; wherein ε_{k+j} denotes the residual of the discrete system at time k+j; H_{k+j} and H_{k+j-1} are the observation matrices of the system at times k+j and k+j−1; F_{k+j-1} and F_{k+j} denote the state transition matrices of the system at times k+j−1 and k+j; e_{k+j-1} and e_{k+j} denote the errors of the discrete system at times k+j−1 and k+j; x_{k+j-1} and x̂_{k+j-1} denote the state of the system at time k+j−1 and its estimated value; K_{k+j-1} denotes the gain at time k+j−1; I denotes the identity matrix;
The residual is finally predicted by estimating the error.
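A hedged NumPy sketch of the residual self-propagation between CNS samples, under the reading that, with the noise terms neglected, the residual is the observation matrix applied to the self-propagated state error (cf. equations (19)–(21)); the helper name, matrices and toy values are illustrative assumptions, not a transcription of the claimed equations.

```python
# Illustrative residual self-propagation over a CNS-free interval.
import numpy as np

def propagate_residual(e0, F_list, H_list):
    """e0: state error at the last CNS update (taken from the filter estimate);
    F_list / H_list: transition and observation matrices over the CNS-free interval."""
    e = e0
    residuals = []
    for F_j, H_j in zip(F_list, H_list):
        e = F_j @ e                 # Eq. (20): e_{k+j} = F_{k+j|k+j-1} e_{k+j-1}
        residuals.append(H_j @ e)   # residual read out through the observation matrix
    return residuals

# Toy example: constant F and H over m INS steps between two CNS fixes.
m, F, H = 5, np.eye(3) * 0.999, np.eye(3)
eps = propagate_residual(np.array([1e-3, -2e-3, 5e-4]), [F] * m, [H] * m)
```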
5. The integrated inertial/astronomical navigation method based on residual compensation multi-rate CKF according to claim 1, wherein in step (4) the INS data at time t correspond to the CNS prediction data at time t + m.
6. The combined inertial/astronomical navigation method based on residual error compensated multi-rate CKF according to claim 1 or 5, wherein said step (5) comprises the following steps:
The attitude differences between INS and CNS predicted by LSTM-1 are input to MRCKF-1 as the filter observation; in MRCKF-1 the state is the 15-dimensional state vector of formula (2), and the process noise and the measurement noise can be expressed as:
[equation image: process-noise vector of MRCKF-1]
[equation image: measurement-noise vector of MRCKF-1]
The optimal estimate output by the filter,
[equation image: optimal state estimate containing δP, δV and δA]
gives the final optimal attitude estimate A_INS_update = δA + A_INS,
wherein δP, δV and δA denote the filter's optimal estimates of the position, velocity and attitude errors, respectively; A_INS_update denotes the optimal estimate of the attitude.
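A minimal sketch of this final correction step, assuming (for illustration only) that the attitude error δA occupies the first three components of the filter's 15-dimensional optimal estimate:

```python
# Illustrative attitude correction: A_INS_update = dA + A_INS.
import numpy as np

def correct_attitude(x_opt, att_ins):
    """x_opt: assumed 15-dim optimal estimate with dA in the first three components."""
    dA = np.asarray(x_opt, dtype=float)[:3]
    return np.asarray(att_ins, dtype=float) + dA

att_updated = correct_attitude(np.zeros(15), [120.2, 1.5, -0.8])
```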
CN201911306973.7A 2019-12-18 2019-12-18 Inertia/astronomy combined navigation method based on residual compensation multi-rate CKF Active CN111156987B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911306973.7A CN111156987B (en) 2019-12-18 2019-12-18 Inertia/astronomy combined navigation method based on residual compensation multi-rate CKF

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911306973.7A CN111156987B (en) 2019-12-18 2019-12-18 Inertia/astronomy combined navigation method based on residual compensation multi-rate CKF

Publications (2)

Publication Number Publication Date
CN111156987A CN111156987A (en) 2020-05-15
CN111156987B true CN111156987B (en) 2022-06-28

Family

ID=70557614

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911306973.7A Active CN111156987B (en) 2019-12-18 2019-12-18 Inertia/astronomy combined navigation method based on residual compensation multi-rate CKF

Country Status (1)

Country Link
CN (1) CN111156987B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111709517B (en) * 2020-06-12 2022-07-29 武汉中海庭数据技术有限公司 Method and device for enhancing redundancy fusion positioning based on confidence prediction system
CN111652177A (en) * 2020-06-12 2020-09-11 中国计量大学 Signal feature extraction method based on deep learning
CN112284379B (en) * 2020-09-17 2023-09-22 江苏大学 Inertial pre-integration method of combined motion measurement system based on nonlinear integral compensation
CN112269201B (en) * 2020-10-23 2024-04-16 北京云恒科技研究院有限公司 GNSS/INS tight coupling time dispersion filtering method
CN112284366B (en) * 2020-10-26 2022-04-12 中北大学 Method for correcting course angle error of polarized light compass based on TG-LSTM neural network
CN112747742B (en) * 2020-12-22 2022-10-14 上海交通大学 Terminal position self-adaptive updating method based on Kalman filtering
CN113432608B (en) * 2021-02-03 2024-06-07 东南大学 Generalized high-order CKF method based on maximum correlation entropy for INS/CNS integrated navigation system
CN113158569B (en) * 2021-04-23 2022-11-18 东南大学 Tank car side-tipping state high-reliability estimation method based on long-short term memory network
CN113483755B (en) * 2021-07-09 2023-11-07 北京易航远智科技有限公司 Multi-sensor combination positioning method and system based on non-global consistent map
CN113447019B (en) * 2021-08-05 2023-01-13 齐鲁工业大学 INS/CNS integrated navigation method, system, storage medium and equipment
CN113804189B (en) * 2021-09-06 2024-05-14 国家海洋局北海预报中心((国家海洋局青岛海洋预报台)(国家海洋局青岛海洋环境监测中心站)) INS/CNS-based integrated navigation method
CN113899362B (en) * 2021-09-09 2023-09-22 武汉大学 Pedestrian dead reckoning method with uncertainty evaluation based on residual network
CN114136310B (en) * 2021-10-29 2023-10-13 北京自动化控制设备研究所 Autonomous error suppression system and method for inertial navigation system
CN115112152A (en) * 2022-03-25 2022-09-27 中国航空工业集团公司西安飞行自动控制研究所 Double-inertial navigation synchronization comparison method based on satellite navigation receiver second pulse
CN117192063B (en) * 2023-11-06 2024-03-15 山东大学 Water quality prediction method and system based on coupled Kalman filtering data assimilation

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101825467A (en) * 2010-04-20 2010-09-08 南京航空航天大学 Method for realizing integrated navigation through ship's inertial navigation system (SINS) and celestial navigation system (SNS)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9044596B2 (en) * 2011-05-24 2015-06-02 Vanderbilt University Method and apparatus of pulsed infrared light for central nervous system neurons

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101825467A (en) * 2010-04-20 2010-09-08 南京航空航天大学 Method for realizing integrated navigation through ship's inertial navigation system (SINS) and celestial navigation system (SNS)

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"Hybrid Multi-frequency Attitude Estimation Based on Vision/Inertial Integrated Measurement System";Zhang Yu;《2019 IEEE International Instrumentation and Measurement Technology Conference (I2MTC)》;20190909;正文第1-6页 *
"Multi-rate cubature Kalman filter based data fusion method with residual compensation to adapt to sampling rate discrepancy in attitude measurement system";Xiaoting Guo;《review of scientific instruments(2017)》;20170808;正文第085002-1-085002-11页 *
"Scale-recurrent Network for Deep Image Deblurring";Xin Tao;《2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition》;20181217;正文第8174-8181页 *

Also Published As

Publication number Publication date
CN111156987A (en) 2020-05-15

Similar Documents

Publication Publication Date Title
CN111156987B (en) Inertia/astronomy combined navigation method based on residual compensation multi-rate CKF
CN109211276B (en) SINS initial alignment method based on GPR and improved SRCKF
CN110030994B (en) Monocular-based robust visual inertia tight coupling positioning method
CN108827310B (en) Marine star sensor auxiliary gyroscope online calibration method
CN101949703B (en) Strapdown inertial/satellite combined navigation filtering method
CN108759838A (en) Mobile robot multiple sensor information amalgamation method based on order Kalman filter
CN106772524B (en) A kind of agricultural robot integrated navigation information fusion method based on order filtering
CN111024064A (en) SINS/DVL combined navigation method for improving Sage-Husa adaptive filtering
CN109724599A (en) A kind of Robust Kalman Filter SINS/DVL Combinated navigation method of anti-outlier
CN107110650A (en) The method of estimation of affined navigational state in terms of observability
CN108344409B (en) Method for improving satellite attitude determination precision
CN112798021B (en) Inertial navigation system inter-travelling initial alignment method based on laser Doppler velocimeter
CN113175926B (en) Self-adaptive horizontal attitude measurement method based on motion state monitoring
CN105300384A (en) Interactive filtering method for satellite attitude determination
CN109343550A (en) A kind of estimation method of the spacecraft angular speed based on moving horizon estimation
CN110567455B (en) Tightly-combined navigation method for quadrature updating volume Kalman filtering
CN103776449B (en) A kind of initial alignment on moving base method that improves robustness
CN110375731A (en) A kind of mixing interacting multiple model filters method
CN112857398A (en) Rapid initial alignment method and device for ships in mooring state
CN111649747A (en) IMU-based adaptive EKF attitude measurement improvement method
CN111854741A (en) GNSS/INS tight combination filter and navigation method
CN110388942B (en) Vehicle-mounted posture fine alignment system based on angle and speed increment
Cohen et al. A-KIT: Adaptive Kalman-informed transformer
CN110736459B (en) Angular deformation measurement error evaluation method for inertial quantity matching alignment
CN110375773B (en) Attitude initialization method for MEMS inertial navigation system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant