Disclosure of Invention
The invention overcomes the defects of the prior art. The technical problem to be solved is to provide a method for identifying navigation faults by GRNN-aided adaptive Kalman filtering, which improves the success rate of detecting, identifying and eliminating errors in complex urban areas and also improves the overall positioning precision during short-term GNSS signal interruptions.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
the method for identifying the navigation fault by GRNN auxiliary self-adaptive Kalman filtering comprises the following steps:
firstly, establishing an error detection state equation based on the Mahalanobis distance;
secondly, establishing a robust Kalman filtering equation and an adaptive Kalman filtering equation;
thirdly, the generalized regression neural network (GRNN) algorithm assists the robust adaptive Kalman filtering in identifying faults of the navigation system: first, abnormal GPS/BDS observations are eliminated; then, in the dynamic initialization stage of the integrated system, the optimal spread factor of the GRNN is adjusted online; finally, GRNN training is carried out with the remaining observations and the optimal spread factor using a sliding-window method, the source of system faults is identified by tracking a decision threshold, and robust or adaptive Kalman filtering is automatically selected for the integrated system.
Preferably, in the first step, the method for establishing the mahalanobis distance-based error detection state equation is as follows:
the basic equations of Kalman filtering are as follows:
the state prediction and state prediction covariance equations are:

$\hat{X}_{k,k-1} = \Phi_{k,k-1}\,\hat{X}_{k-1}$

$P_{k,k-1} = \Phi_{k,k-1}\,P_{k-1}\,\Phi_{k,k-1}^{T} + Q_{k-1}$

where $k$ is the epoch number, the superscript $\hat{}$ denotes the estimated value of a variable, $X_k$ is the state vector of the system and $\hat{X}_{k,k-1}$ is its predicted estimate, $\Phi_{k,k-1}$ is the state transition matrix, $w_{k-1}$ is the process noise vector, $P_{k,k-1}$ is the state prediction covariance at epoch $k$, $P_{k-1}$ is the state covariance at epoch $k-1$, $Q_{k-1}$ is the covariance of $w_{k-1}$, and $T$ denotes the matrix transpose;
the observation equation is $z_k = H_k X_k + v_k$, where $z_k$ is the observation vector, $H_k$ is the relationship (design) matrix, and $v_k$ is the measurement noise vector;
the measurement update is:

$K_k = P_{k,k-1} H_k^{T} \left( H_k P_{k,k-1} H_k^{T} + R_k \right)^{-1}$

$\hat{X}_k = \hat{X}_{k,k-1} + K_k \left( z_k - H_k \hat{X}_{k,k-1} \right)$

where $K_k$ is the Kalman gain, $H_k$ is the relationship matrix, and $R_k$ is the observation noise covariance. The squared Mahalanobis distance from the observation $z_k$ to its mean, the predicted observation $\hat{z}_{k,k-1} = H_k \hat{X}_{k,k-1}$, is taken as the test statistic $\gamma_k$:

$\gamma_k = M_k^{2} = \left( z_k - \hat{z}_{k,k-1} \right)^{T} P_{\bar{z}_k}^{-1} \left( z_k - \hat{z}_{k,k-1} \right)$

where $M_k$ is the Mahalanobis distance, $z_k$ is the observation, $\hat{z}_{k,k-1}$ is the predicted observation vector, and $P_{\bar{z}_k}$ is its covariance, expressed as:

$P_{\bar{z}_k} = H_k P_{k,k-1} H_k^{T} + R_k$
when the observation errors and the dynamic model noise $w_{k-1}$ are all Gaussian distributed, the test statistic $\gamma_k$ follows a chi-square distribution with parameter $m$:

$P\left\{ \gamma_k \le \chi^{2}_{\alpha,m} \right\} = 1 - \alpha$

where $m$, the degree of freedom, is the dimension of the observation vector, and $\chi^{2}_{\alpha,m}$ is the threshold at significance level $\alpha$. When the computed value of the test statistic $\gamma_k$ exceeds $\chi^{2}_{\alpha,m}$, an observation error is indicated.
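For illustration only, the following is a minimal numerical sketch of the Mahalanobis-distance test described above; the matrix dimensions, noise levels and injected error are illustrative assumptions, not values fixed by the invention.

```python
import numpy as np
from scipy.stats import chi2

def mahalanobis_test(z, z_pred, H, P_pred, R, alpha=0.01):
    """Chi-square test on the innovation: returns (gamma_k, threshold, fault_flag)."""
    innov = z - z_pred                        # innovation z_k - H_k @ X_hat_{k,k-1}
    P_zz = H @ P_pred @ H.T + R               # innovation covariance
    gamma = float(innov.T @ np.linalg.inv(P_zz) @ innov)   # squared Mahalanobis distance
    m = z.shape[0]                            # degrees of freedom = observation dimension
    threshold = chi2.ppf(1.0 - alpha, df=m)   # chi-square threshold at significance level alpha
    return gamma, threshold, gamma > threshold

# Toy usage with a 6-dimensional observation (illustrative numbers only)
rng = np.random.default_rng(0)
H = np.eye(6)
P_pred = 4.0 * np.eye(6)
R = np.eye(6)
z_pred = np.zeros((6, 1))
z = z_pred + rng.normal(0.0, 1.0, (6, 1))
z[2, 0] += 20.0                               # inject a gross observation error
gamma, thr, fault = mahalanobis_test(z, z_pred, H, P_pred, R)
print(f"gamma={gamma:.2f}, threshold={thr:.3f}, fault detected: {fault}")
```

With a 6-dimensional observation and a 1% significance level, chi2.ppf reproduces the 16.812 threshold quoted later in the text.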
Preferably, in the second step, the robust kalman filter equation establishing method includes:
the observation noise covariance is inflated as:

$\bar{R}_k = \lambda_k R_k$

where $\lambda_k$ is a scale factor and $\bar{R}_k$ is the updated observation noise covariance. A new observation prediction vector and its covariance are then obtained from the $n$ most recent samples ($n_k$ denoting the $k$-th sample), and the covariance of the observation prediction becomes:

$P_{\bar{z}_k}(\lambda_k) = H_k P_{k,k-1} H_k^{T} + \lambda_k R_k$

so that the following equation is satisfied:

$\gamma_k(\lambda_k) = \chi^{2}_{\alpha,m}$

The Newton iteration method is then applied to solve for $\lambda_k$:

$\lambda_k^{(i+1)} = \lambda_k^{(i)} - \frac{\gamma_k\left(\lambda_k^{(i)}\right) - \chi^{2}_{\alpha,m}}{\gamma_k'\left(\lambda_k^{(i)}\right)}, \quad i = 0, 1, 2, \ldots$

The degree of freedom is set in advance to 6 and the significance level to 1%, so $\chi^{2}_{\alpha,m} = 16.812$;
another scale factor $\kappa_k$ is introduced to adjust the covariance of the observation prediction, $\bar{P}_{\bar{z}_k} = \kappa_k P_{\bar{z}_k}$, and guarantee robustness; the robust Kalman gain $\bar{K}_k$ is then:

$\bar{K}_k = P_{k,k-1} H_k^{T} \bar{P}_{\bar{z}_k}^{-1}$

Through these steps the weight of the observations is reduced and the weight of the dynamic model information is increased, so the influence of actual observation errors is effectively suppressed.
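As a hedged illustration of the scale-factor idea, the sketch below inflates the observation noise covariance by λ until the test statistic returns to the chi-square threshold, solving for λ with a Newton iteration, and then forms a gain with the inflated covariance. The starting value λ = 1, the fixed iteration count and the clamping of λ to at least 1 are assumptions made for this sketch, not the patented formulas.

```python
import numpy as np
from scipy.stats import chi2

def robust_scale_factor(innov, HPHt, R, alpha=0.01, iters=10):
    """Solve gamma(lam) = chi-square threshold by Newton iteration, where
    gamma(lam) = innov^T (HPH^T + lam * R)^{-1} innov."""
    m = innov.shape[0]
    thr = chi2.ppf(1.0 - alpha, df=m)
    lam = 1.0
    S_inv = np.linalg.inv(HPHt + lam * R)
    if float(innov.T @ S_inv @ innov) <= thr:
        return lam                              # healthy observation: no inflation needed
    for _ in range(iters):
        S_inv = np.linalg.inv(HPHt + lam * R)
        gamma = float(innov.T @ S_inv @ innov)
        # d(gamma)/d(lam) = -innov^T S^{-1} R S^{-1} innov  (always negative)
        dgamma = -float(innov.T @ S_inv @ R @ S_inv @ innov)
        lam = max(lam - (gamma - thr) / dgamma, 1.0)   # never deflate the nominal covariance
    return lam

def robust_gain(P_pred, H, R, lam):
    """Robust Kalman gain computed with the inflated observation covariance."""
    S = H @ P_pred @ H.T + lam * R
    return P_pred @ H.T @ np.linalg.inv(S)
```

Clamping λ at 1 reflects the intent of the step above: the observation weight is only ever reduced, never increased.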
Preferably, in the second step, the adaptive kalman filter equation is established as follows:
the conventional adaptive Kalman filter gain is:

$K_k = \frac{1}{\alpha_k} P_{k,k-1} H_k^{T} \left( \frac{1}{\alpha_k} H_k P_{k,k-1} H_k^{T} + R_k \right)^{-1}$

where $\alpha_k \in (0, 1]$ is the adaptive factor, computed from the ratio of matrix traces (tr denotes the trace of a matrix):

$\alpha_k = \begin{cases} 1, & \operatorname{tr}\left( C_{0k} \right) \le \operatorname{tr}\left( H_k P_{k,k-1} H_k^{T} + R_k \right) \\ \dfrac{\operatorname{tr}\left( H_k P_{k,k-1} H_k^{T} + R_k \right)}{\operatorname{tr}\left( C_{0k} \right)}, & \text{otherwise} \end{cases}$

where $C_{0k}$ is the real (sample) covariance of the residual sequence and $\tilde{z}_k = z_k - H_k \hat{X}_{k,k-1}$ is the residual sequence;
by this method the weight of the historical state information is reduced, i.e. the weight of the current measurement information is increased, thereby effectively suppressing the influence of dynamic model errors.
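A minimal sketch of this adaptive-factor idea follows; the windowed sample estimate of the residual covariance and the clipping of the factor to (0, 1] are illustrative assumptions.

```python
import numpy as np

def adaptive_factor(innov_window, HPHt, R):
    """Trace-ratio adaptive factor alpha_k in (0, 1].
    innov_window: list of recent innovation (residual) column vectors, most recent last."""
    V = np.hstack(innov_window)              # m x N matrix of residuals
    C0k = (V @ V.T) / V.shape[1]             # empirical ("real") residual covariance
    P_zz = HPHt + R                          # theoretical innovation covariance
    return min(1.0, np.trace(P_zz) / np.trace(C0k))

def adaptive_gain(P_pred, H, R, alpha_k):
    """Adaptive Kalman gain: down-weights the historical (predicted) state
    information by inflating its covariance with 1/alpha_k."""
    P_infl = P_pred / alpha_k
    S = H @ P_infl @ H.T + R
    return P_infl @ H.T @ np.linalg.inv(S)
```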
Preferably, the generalized regression neural network algorithm in the third step structurally comprises four layers, namely an input layer, a mode layer, a summation layer and an output layer:
1) the number of neurons in the input layer is equal to the dimension t of an input vector in a learning sample, and an input variable is transmitted to the mode layer through the neurons in the input layer;
2) each neuron in the mode layer corresponds to a different learning sample; the number of neurons is equal to the number n of learning samples, and the neuron transfer function of the mode layer is:

$p_i = \exp\left[ -\frac{\left( X - X_i \right)^{T} \left( X - X_i \right)}{2\sigma^{2}} \right], \quad i = 1, 2, \ldots, n$

where $n$ is the number of samples, $\sigma$ is the smoothing factor, $X$ is the input state vector, and $X_i$ is the learning sample corresponding to the $i$-th neuron;
3) there are two types of summation neurons in the summation layer. The first type arithmetically sums the outputs of all mode-layer neurons, with connection weight 1 between the mode layer and each such neuron; its transfer function $S_D$ is:

$S_D = \sum_{i=1}^{n} p_i$
The second type performs a weighted summation over all mode-layer neurons, the connection weight between the $i$-th mode-layer neuron and the $j$-th summation neuron being $y_{ij}$, the $j$-th element of the $i$-th output sample; its transfer function $S_{zj}$ is:

$S_{zj} = \sum_{i=1}^{n} y_{ij}\, p_i$

where $j = 1, 2, \ldots$ indexes the dimensions of the output variable and $y_{ij}$ is the $j$-th element of the $i$-th output sample vector;
4) the output layer divides the outputs of the two types of summation neurons transmitted from the summation layer, and the result is taken as the predicted value:

$y_j = \frac{S_{zj}}{S_D}, \quad j = 1, 2, \ldots$
The generalized regression neural network is a radial basis function network established on the basis of mathematical statistics, and its theoretical foundation is nonlinear regression analysis. The GRNN has strong nonlinear mapping capability and a fast learning speed, offering advantages over the standard RBF network; it ultimately converges to the optimal regression surface as the sample size grows, gives good predictions even when the sample data are few, and can also handle unstable data. In general, a generalized regression neural network can be built from radial basis neurons and linear neurons.
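A minimal sketch of the four-layer GRNN forward pass described above (input, mode/pattern, summation and output layers) is given below; the Gaussian kernel and ratio output follow the standard GRNN formulation, while the toy training data are purely illustrative.

```python
import numpy as np

def grnn_predict(X_train, Y_train, x, sigma):
    """GRNN forward pass.
    X_train: (n, t) learning inputs, Y_train: (n, l) learning outputs,
    x: (t,) query input, sigma: smoothing (spread) factor."""
    # Mode (pattern) layer: one Gaussian neuron per learning sample
    d2 = np.sum((X_train - x) ** 2, axis=1)          # squared distances to each sample
    p = np.exp(-d2 / (2.0 * sigma ** 2))             # mode-layer outputs p_i
    # Summation layer: one arithmetic sum S_D and l weighted sums S_zj
    S_D = np.sum(p)
    S_z = Y_train.T @ p                              # S_zj = sum_i y_ij * p_i
    # Output layer: division of the two summation types
    return S_z / S_D

# Toy usage: learn y = sin(x) from a handful of samples (illustrative only)
X_train = np.linspace(0.0, np.pi, 20).reshape(-1, 1)
Y_train = np.sin(X_train)
y_hat = grnn_predict(X_train, Y_train, np.array([1.0]), sigma=0.3)
print(y_hat)   # close to sin(1.0)
```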
Preferably, the third step is specifically performed as follows:
The integrals of the raw accelerations and angular rates, i.e. the velocity and angle increments, are taken as the training input of the GRNN; the GPS/BDS position difference is taken as the training output, a three-dimensional vector whose components are the north, east and down positions. The expression is as follows:

$\Delta B = \frac{\Delta x_N}{R_M + H}, \qquad \Delta L = \frac{\Delta x_E}{\left( R_N + H \right)\cos B}, \qquad \Delta H = -\Delta x_D$

where $B$, $L$, $H$ are the geodetic coordinates, str is the GRNN network structure, $f$ is the raw acceleration of the accelerometer, $\omega$ is the angular rate of the gyroscope, $\left( \Delta x_N, \Delta x_E, \Delta x_D \right)$ is the position increment in the n (navigation) coordinate system, $\Delta B$, $\Delta L$, $\Delta H$ are the differences of the geodetic coordinates, $R_M$ is the meridian radius of curvature, and $R_N$ is the transverse (prime-vertical) radius of curvature;
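As a hedged illustration of this training-output construction, the sketch below converts a north-east-down position increment into geodetic coordinate differences using the meridian and prime-vertical radii of curvature; the WGS-84 constants and the sign convention for the down component are assumptions made for the example.

```python
import numpy as np

A_WGS84 = 6378137.0            # semi-major axis [m] (assumed WGS-84 ellipsoid)
E2_WGS84 = 6.69437999014e-3    # first eccentricity squared

def ned_increment_to_geodetic(dn, de, dd, lat_rad, h):
    """Convert an n-frame (north, east, down) position increment into
    geodetic differences (dB [rad], dL [rad], dH [m])."""
    s = np.sin(lat_rad)
    R_M = A_WGS84 * (1 - E2_WGS84) / (1 - E2_WGS84 * s ** 2) ** 1.5   # meridian radius
    R_N = A_WGS84 / np.sqrt(1 - E2_WGS84 * s ** 2)                    # prime-vertical radius
    dB = dn / (R_M + h)
    dL = de / ((R_N + h) * np.cos(lat_rad))
    dH = -dd                      # a down increment corresponds to a height decrease
    return dB, dL, dH

# Illustrative call near 37.75 degrees latitude
print(ned_increment_to_geodetic(10.0, 5.0, -1.0, np.deg2rad(37.75), 800.0))
```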
On the premise that the differential navigation solution is reliable and the network has learned reasonably before epoch k, the network output can predict the optimal estimate of the state parameters of the dynamic model; reliable observations are therefore the key prerequisite for obtaining the optimal estimate. To this end, we propose the following strategy to improve the quality of the observation information.
Firstly, abnormal GPS/BDS observations are eliminated. The standard deviation $\sigma$ of the residual sequence $\tilde{z}$ is:

$\sigma = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} \left( \tilde{z}_i - \bar{\tilde{z}} \right)^{2} }$

where $\tilde{z}$ is the residual sequence. If $\left| \tilde{z}_k \right| > c\sigma$, the GPS/BDS observation and the corresponding residual are removed, where $c$ is a constant;
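A minimal sketch of this screening step follows, assuming c = 3 and a simple sample standard deviation computed from a window of past residuals; both choices are illustrative, not values fixed by the method.

```python
import numpy as np

def is_outlier(current_residual, past_residuals, c=3.0):
    """Flag the current GPS/BDS residual if it exceeds c times the standard
    deviation of the recent residual sequence."""
    sigma = np.std(past_residuals, ddof=1)
    return abs(current_residual) > c * sigma

past = [0.2, -0.1, 0.3, 0.0, -0.2, 0.15, -0.25, 0.1]
print(is_outlier(5.8, past))    # True  -> remove this observation and its residual
print(is_outlier(0.12, past))   # False -> keep
```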
secondly, in the dynamic initialization stage of the integrated system, the optimal spread (expansion) factor of the GRNN is adjusted online. The root mean square RMS of the GRNN predictions is expressed as:

$\text{RMS}_{pre} = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} \left( Z_{GNSS,i} - Z_{pre,i} \right)^{2} }$

where $Z_{GNSS}$ are the GNSS observations during training, $Z_{pre}$ is the predicted value of the network and $N$ is the number of epochs. The network structure is then trained by the following function:

str = newgrnn(P, T, s)

where newgrnn denotes the GRNN creation function, P and T are the training input and output, and s is the spread (propagation) factor of the network. After the network structure is determined, the GRNN output $Z_{pre}$ is obtained from the trained structure str, and the optimal spread coefficient guarantees the optimal output. According to our experience, the initial value of the spread factor s is 0.5 and its maximum value is 10, with the selection criterion:

$\text{RMS}_{pre} = \min$
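A minimal sketch of this online spread-factor tuning is given below. The grnn_predict helper re-implemented here stands in for MATLAB's newgrnn, and the 0.5 grid step over the quoted range [0.5, 10] is an assumption; only the selection criterion (minimum RMS of the prediction error) comes from the text.

```python
import numpy as np

def grnn_predict(X_train, Y_train, X_query, sigma):
    """Standard GRNN forward pass (Gaussian mode layer, ratio output) for a batch of queries."""
    out = []
    for x in np.atleast_2d(X_query):
        p = np.exp(-np.sum((X_train - x) ** 2, axis=1) / (2.0 * sigma ** 2))
        out.append((Y_train.T @ p) / np.sum(p))
    return np.array(out)

def tune_spread(X_train, Y_train, X_val, Z_gnss, s_init=0.5, s_max=10.0, step=0.5):
    """Pick the spread factor s in [s_init, s_max] that minimizes the RMS of the
    GRNN prediction error against the GNSS observations (RMS_pre = min)."""
    best_s, best_rms = None, np.inf
    for s in np.arange(s_init, s_max + step, step):
        Z_pre = grnn_predict(X_train, Y_train, X_val, s)
        rms = np.sqrt(np.mean((Z_gnss - Z_pre) ** 2))
        if rms < best_rms:
            best_s, best_rms = s, rms
    return best_s, best_rms
```

A simple grid search is used here because the spread is a single bounded scalar; any one-dimensional minimizer would serve the same purpose.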
thirdly, GRNN network training is carried out with the remaining observations and the optimal spread factor using a sliding-window method.
The sliding window width is set to 50. By preprocessing the network input in this way, the GRNN becomes more efficient, reliable and stable. The position predicted with the optimal GRNN is unaffected by fault or abnormal information that may occur at epoch k and can therefore be used to detect and locate faults in the current observation information. That is, the source of a system fault can be identified by tracking the decision threshold, and robust or adaptive Kalman filtering is automatically selected for the integrated system:
$\left| Z_{pre} - Z_{GNSS} \right| \ge 3\,\text{RMS}_{pre} \rightarrow$ robust Kalman filtering

$\left| Z_{pre} - Z_{GNSS} \right| < 3\,\text{RMS}_{pre} \rightarrow$ adaptive Kalman filtering.
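The decision rule itself reduces to a comparison against three times the training RMS. The sketch below shows this selection step; the use of a vector norm for the prediction-observation difference and the illustrative numbers are assumptions, since the text does not specify whether the comparison is component-wise.

```python
import numpy as np

def select_filter(z_pre, z_gnss, rms_pre):
    """Choose between robust and adaptive Kalman filtering by tracking the
    decision threshold |Z_pre - Z_GNSS| >= 3 * RMS_pre."""
    if np.linalg.norm(np.asarray(z_pre) - np.asarray(z_gnss)) >= 3.0 * rms_pre:
        return "robust"     # suspected observation (GNSS) fault -> robust KF
    return "adaptive"       # dynamic-model error dominates -> adaptive KF

print(select_filter([1.0, 2.0, 3.0], [1.1, 2.0, 2.9], rms_pre=0.5))   # adaptive
print(select_filter([1.0, 2.0, 3.0], [4.5, 2.0, 2.9], rms_pre=0.5))   # robust
```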
compared with the prior art, the invention has the following beneficial effects:
by the method, the generalized regression neural network assists the adaptive robust Kalman filtering; the system can realize fault detection in GNSS/INS integrated navigation and automatically selects robust or adaptive Kalman filtering so as to adjust the influence of the observations and of the dynamic model on the navigation result.
Compared with a single system, the reliability of a GPS/BDS combined system is obviously improved, and the real-time positioning precision is also obviously improved.
(1) The generalized regression neural network GRNN is used in the method. The GRNN has strong nonlinear mapping capability and a fast learning speed, offering advantages over the RBF network; it ultimately converges to the optimal regression surface as the sample size grows, gives a good prediction effect even when the sample data are few, and can also handle unstable data.
(2) The method combines robust Kalman filtering and adaptive Kalman filtering.
(3) When the system fails, the output of the neural network is not directly used as the integration result; instead, the neural network is used to compute a tracking decision threshold to identify the source of the system failure, and robust or adaptive Kalman filtering is automatically selected for the integrated system.
Detailed Description
The invention will be further explained with reference to the drawings.
The method for identifying the navigation fault by GRNN auxiliary self-adaptive Kalman filtering comprises the following steps:
firstly, establishing an error detection state equation based on the Mahalanobis distance, wherein the method comprises the following steps:
the basic equations of Kalman filtering are as follows:
the state prediction and state prediction covariance equations are:

$\hat{X}_{k,k-1} = \Phi_{k,k-1}\,\hat{X}_{k-1}$

$P_{k,k-1} = \Phi_{k,k-1}\,P_{k-1}\,\Phi_{k,k-1}^{T} + Q_{k-1}$

where $k$ is the epoch number, the superscript $\hat{}$ denotes the estimated value of a variable, $X_k$ is the state vector of the system and $\hat{X}_{k,k-1}$ is its predicted estimate, $\Phi_{k,k-1}$ is the state transition matrix, $w_{k-1}$ is the process noise vector, $P_{k,k-1}$ is the state prediction covariance at epoch $k$, $P_{k-1}$ is the state covariance at epoch $k-1$, $Q_{k-1}$ is the covariance of $w_{k-1}$, and $T$ denotes the matrix transpose;
the observation equation is $z_k = H_k X_k + v_k$, where $z_k$ is the observation vector, $H_k$ is the relationship (design) matrix, and $v_k$ is the measurement noise vector;
the measurement update is:

$K_k = P_{k,k-1} H_k^{T} \left( H_k P_{k,k-1} H_k^{T} + R_k \right)^{-1}$

$\hat{X}_k = \hat{X}_{k,k-1} + K_k \left( z_k - H_k \hat{X}_{k,k-1} \right)$

where $K_k$ is the Kalman gain, $H_k$ is the relationship matrix, and $R_k$ is the observation noise covariance. The squared Mahalanobis distance from the observation $z_k$ to its mean, the predicted observation $\hat{z}_{k,k-1} = H_k \hat{X}_{k,k-1}$, is taken as the test statistic $\gamma_k$:

$\gamma_k = M_k^{2} = \left( z_k - \hat{z}_{k,k-1} \right)^{T} P_{\bar{z}_k}^{-1} \left( z_k - \hat{z}_{k,k-1} \right)$

where $M_k$ is the Mahalanobis distance, $z_k$ is the observation, $\hat{z}_{k,k-1}$ is the predicted observation vector, and $P_{\bar{z}_k}$ is its covariance, expressed as:

$P_{\bar{z}_k} = H_k P_{k,k-1} H_k^{T} + R_k$
when the observation errors and the dynamic model noise $w_{k-1}$ are all Gaussian distributed, the test statistic $\gamma_k$ follows a chi-square distribution with parameter $m$:

$P\left\{ \gamma_k \le \chi^{2}_{\alpha,m} \right\} = 1 - \alpha$

where $m$, the degree of freedom, is the dimension of the observation vector, and $\chi^{2}_{\alpha,m}$ is the threshold at significance level $\alpha$. When the computed value of the test statistic $\gamma_k$ exceeds $\chi^{2}_{\alpha,m}$, an observation error is indicated.
Secondly, establishing a robust Kalman filtering equation and an adaptive Kalman filtering equation;
the robust Kalman filtering equation establishment method comprises the following steps:
the observation noise covariance is inflated as:

$\bar{R}_k = \lambda_k R_k$

where $\lambda_k$ is a scale factor and $\bar{R}_k$ is the updated observation noise covariance. A new observation prediction vector and its covariance are then obtained from the $n$ most recent samples ($n_k$ denoting the $k$-th sample), and the covariance of the observation prediction becomes:

$P_{\bar{z}_k}(\lambda_k) = H_k P_{k,k-1} H_k^{T} + \lambda_k R_k$

so that the following equation is satisfied:

$\gamma_k(\lambda_k) = \chi^{2}_{\alpha,m}$

The Newton iteration method is then applied to solve for $\lambda_k$:

$\lambda_k^{(i+1)} = \lambda_k^{(i)} - \frac{\gamma_k\left(\lambda_k^{(i)}\right) - \chi^{2}_{\alpha,m}}{\gamma_k'\left(\lambda_k^{(i)}\right)}, \quad i = 0, 1, 2, \ldots$

The degree of freedom is set in advance to 6 and the significance level to 1%, so $\chi^{2}_{\alpha,m} = 16.812$;
another scale factor $\kappa_k$ is introduced to adjust the covariance of the observation prediction, $\bar{P}_{\bar{z}_k} = \kappa_k P_{\bar{z}_k}$, and guarantee robustness; the robust Kalman gain $\bar{K}_k$ is then:

$\bar{K}_k = P_{k,k-1} H_k^{T} \bar{P}_{\bar{z}_k}^{-1}$

Through these steps the weight of the observations is reduced and the weight of the dynamic model information is increased, so the influence of actual observation errors is effectively suppressed.
The method for establishing the adaptive Kalman filtering equation comprises the following steps:
the conventional adaptive Kalman filter gain is:

$K_k = \frac{1}{\alpha_k} P_{k,k-1} H_k^{T} \left( \frac{1}{\alpha_k} H_k P_{k,k-1} H_k^{T} + R_k \right)^{-1}$

where $\alpha_k \in (0, 1]$ is the adaptive factor, computed from the ratio of matrix traces (tr denotes the trace of a matrix):

$\alpha_k = \begin{cases} 1, & \operatorname{tr}\left( C_{0k} \right) \le \operatorname{tr}\left( H_k P_{k,k-1} H_k^{T} + R_k \right) \\ \dfrac{\operatorname{tr}\left( H_k P_{k,k-1} H_k^{T} + R_k \right)}{\operatorname{tr}\left( C_{0k} \right)}, & \text{otherwise} \end{cases}$

where $C_{0k}$ is the real (sample) covariance of the residual sequence and $\tilde{z}_k = z_k - H_k \hat{X}_{k,k-1}$ is the residual sequence;
by this method the weight of the historical state information is reduced, i.e. the weight of the current measurement information is increased, thereby effectively suppressing the influence of dynamic model errors.
Thirdly, the generalized regression neural network (GRNN) algorithm assists the robust adaptive Kalman filtering in identifying faults of the navigation system: first, abnormal GPS/BDS observations are eliminated; then, in the dynamic initialization stage of the integrated system, the optimal spread factor of the GRNN is adjusted online; finally, GRNN training is carried out with the remaining observations and the optimal spread factor using a sliding-window method, the source of system faults is identified by tracking a decision threshold, and robust or adaptive Kalman filtering is automatically selected for the integrated system.
The generalized regression neural network is a radial basis function network established on the basis of mathematical statistics, and its theoretical foundation is nonlinear regression analysis. The GRNN has strong nonlinear mapping capability and a fast learning speed, offering advantages over the standard RBF network; it ultimately converges to the optimal regression surface as the sample size grows, gives good predictions even when the sample data are few, and can also handle unstable data. In general, a generalized regression neural network can be built from radial basis neurons and linear neurons. The generalized regression neural network algorithm structurally comprises four layers, namely an input layer, a mode layer, a summation layer and an output layer:
1) the number of neurons in the input layer is equal to the dimension t of an input vector in a learning sample, and an input variable is transmitted to the mode layer through the neurons in the input layer;
2) each neuron in the mode layer corresponds to a different learning sample; the number of neurons is equal to the number n of learning samples, and the neuron transfer function of the mode layer is:

$p_i = \exp\left[ -\frac{\left( X - X_i \right)^{T} \left( X - X_i \right)}{2\sigma^{2}} \right], \quad i = 1, 2, \ldots, n$

where $n$ is the number of samples, $\sigma$ is the smoothing factor, $X$ is the input state vector, and $X_i$ is the learning sample corresponding to the $i$-th neuron;
3) there are two types of summation neurons in the summation layer. The first type arithmetically sums the outputs of all mode-layer neurons, with connection weight 1 between the mode layer and each such neuron; its transfer function $S_D$ is:

$S_D = \sum_{i=1}^{n} p_i$
The second type performs a weighted summation over all mode-layer neurons, the connection weight between the $i$-th mode-layer neuron and the $j$-th summation neuron being $y_{ij}$, the $j$-th element of the $i$-th output sample; its transfer function $S_{zj}$ is:

$S_{zj} = \sum_{i=1}^{n} y_{ij}\, p_i$

where $j = 1, 2, \ldots$ indexes the dimensions of the output variable and $y_{ij}$ is the $j$-th element of the $i$-th output sample vector;
4) the output layer divides the outputs of the two types of summation neurons transmitted from the summation layer, and the result is taken as the predicted value:

$y_j = \frac{S_{zj}}{S_D}, \quad j = 1, 2, \ldots$
The specific method for identifying the fault of the navigation system by using the generalized regression neural network algorithm to assist the robust adaptive Kalman filtering is as follows:
The integrals of the raw accelerations and angular rates, i.e. the velocity and angle increments, are taken as the training input of the GRNN; the GPS/BDS position difference is taken as the training output, a three-dimensional vector whose components are the north, east and down positions. The expression is as follows:

$\Delta B = \frac{\Delta x_N}{R_M + H}, \qquad \Delta L = \frac{\Delta x_E}{\left( R_N + H \right)\cos B}, \qquad \Delta H = -\Delta x_D$

where $B$, $L$, $H$ are the geodetic coordinates, str is the GRNN network structure, $f$ is the raw acceleration of the accelerometer, $\omega$ is the angular rate of the gyroscope, $\left( \Delta x_N, \Delta x_E, \Delta x_D \right)$ is the position increment in the n (navigation) coordinate system, $\Delta B$, $\Delta L$, $\Delta H$ are the differences of the geodetic coordinates, $R_M$ is the meridian radius of curvature, and $R_N$ is the transverse (prime-vertical) radius of curvature;
On the premise that the differential navigation solution is reliable and the network has learned reasonably before epoch k, the network output can predict the optimal estimate of the state parameters of the dynamic model; reliable observations are therefore the key prerequisite for obtaining the optimal estimate. To this end, we propose the following strategy to improve the quality of the observation information.
Firstly, abnormal GPS/BDS observations are eliminated. The standard deviation $\sigma$ of the residual sequence $\tilde{z}$ is:

$\sigma = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} \left( \tilde{z}_i - \bar{\tilde{z}} \right)^{2} }$

where $\tilde{z}$ is the residual sequence. If $\left| \tilde{z}_k \right| > c\sigma$, the GPS/BDS observation and the corresponding residual are removed, where $c$ is a constant;
secondly, in the dynamic initialization stage of the integrated system, the optimal spread (expansion) factor of the GRNN is adjusted online. The root mean square RMS of the GRNN predictions is expressed as:

$\text{RMS}_{pre} = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} \left( Z_{GNSS,i} - Z_{pre,i} \right)^{2} }$

where $Z_{GNSS}$ are the GNSS observations during training, $Z_{pre}$ is the predicted value of the network and $N$ is the number of epochs. The network structure is then trained by the following function:

str = newgrnn(P, T, s)

where newgrnn denotes the GRNN creation function, P and T are the training input and output, and s is the spread (propagation) factor of the network. After the network structure is determined, the GRNN output $Z_{pre}$ is obtained from the trained structure str, and the optimal spread coefficient guarantees the optimal output. According to our experience, the initial value of the spread factor s is 0.5 and its maximum value is 10, with the selection criterion:

$\text{RMS}_{pre} = \min$
thirdly, GRNN network training is carried out with the remaining observations and the optimal spread factor using a sliding-window method.
The sliding window width is set to 50. By preprocessing the network input in this way, the GRNN becomes more efficient, reliable and stable. The position predicted with the optimal GRNN is unaffected by fault or abnormal information that may occur at epoch k and can therefore be used to detect and locate faults in the current observation information. That is, the source of a system fault can be identified by tracking the decision threshold, and robust or adaptive Kalman filtering is automatically selected for the integrated system:
$\left| Z_{pre} - Z_{GNSS} \right| \ge 3\,\text{RMS}_{pre} \rightarrow$ robust Kalman filtering

$\left| Z_{pre} - Z_{GNSS} \right| < 3\,\text{RMS}_{pre} \rightarrow$ adaptive Kalman filtering.
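To tie the third step together, the following sketch maintains a 50-epoch sliding window, predicts the current GPS/BDS position difference with the GRNN, and routes the epoch to robust or adaptive filtering according to the 3·RMS_pre threshold. The class structure, the vector-norm comparison, and the choice to keep faulty epochs out of the training window are assumptions made for illustration, not part of the claim.

```python
from collections import deque
import numpy as np

WINDOW = 50          # sliding-window width used by the method

def grnn_predict(X_train, Y_train, x, sigma):
    """Standard GRNN forward pass (Gaussian mode layer, ratio output)."""
    X_train = np.asarray(X_train)
    Y_train = np.asarray(Y_train)
    p = np.exp(-np.sum((X_train - x) ** 2, axis=1) / (2.0 * sigma ** 2))
    return (Y_train.T @ p) / np.sum(p)

class FaultIdentifier:
    """Sliding-window GRNN predictor with the 3 * RMS_pre decision threshold."""
    def __init__(self, sigma, rms_pre):
        self.sigma = sigma                    # optimal spread factor from initialization
        self.rms_pre = rms_pre                # training RMS of the GRNN predictions
        self.inputs = deque(maxlen=WINDOW)    # velocity/angle increments per epoch
        self.outputs = deque(maxlen=WINDOW)   # GPS/BDS position differences per epoch

    def step(self, increment, z_gnss):
        """Return 'robust' or 'adaptive' for the current epoch and update the window."""
        increment = np.asarray(increment, dtype=float)
        z_gnss = np.asarray(z_gnss, dtype=float)
        if len(self.inputs) < 2:
            choice = "adaptive"               # not enough history yet to predict
        else:
            z_pre = grnn_predict(list(self.inputs), list(self.outputs),
                                 increment, self.sigma)
            fault = np.linalg.norm(z_pre - z_gnss) >= 3.0 * self.rms_pre
            choice = "robust" if fault else "adaptive"
        if choice == "adaptive":
            # only observations judged healthy enter the training window
            self.inputs.append(increment)
            self.outputs.append(z_gnss)
        return choice
```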
compared with a single system, the reliability of the GPS/BDS combined system is obviously improved, and the real-time positioning precision is also obviously improved.
Vehicle navigation test:
the vehicle adopts an INS/GPS integrated navigation positioning system, the navigation system consists of a star-watching NVIMU300 inertial measurement unit and a JAVAD LexonGGD112T GPS receiver, and 1Hz C/A GPS data is output. In addition, another JAVAD Lexon-GGD112T GPS receiver placed at a local reference station provides 1Hz Differential GPS (DGPS) data along with a vehicle-mounted receiver. The maximum distance between the vehicle and the local reference station is less than 60km, and the DGPS positioning precision is less than 0.1m through post differential processing.
The vehicle navigation test was conducted in Shanxi Province, with an initial position of 37°44′44.4″ N, 112°42′07.3″ E. The GPS receiver has a horizontal position error (RMS) of 5 m, a height error (RMS) of 8 m, and a velocity error (RMS) of 0.05 m/s. The initial east, north and up velocities are 10 m/s, 10 m/s and 0 m/s, respectively. The gyro constant drift is 0.1°/h with white noise of 0.05°/h. The accelerometer zero offset is 10⁻³ g with white noise of 10⁻⁴ g. The initial vehicle position error is (12 m, 12 m, 15 m), the initial velocity error is (0.3 m/s, 0.3 m/s, 0.3 m/s), and the initial attitude error is (1.5′, 1′, 1′). The test duration is 1000 s and the filter period is 1 s.
As shown in Fig. 3 and Fig. 4, which compare the positioning errors of the present method and the classical Kalman filter algorithm in the vehicle navigation test, the error of the present method is significantly smaller than that of the classical Kalman filter algorithm. The method effectively and dynamically selects between the ACKF and the RCKF, and at the same time improves the overall positioning precision during short-term GNSS signal interruptions. Therefore, the method has stronger adaptability and robustness and higher navigation precision.
The embodiments of the present invention have been described in detail with reference to the accompanying drawings, but the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of those skilled in the art without departing from the gist of the present invention.