CN109000640B - Vehicle GNSS/INS integrated navigation method based on discrete grey neural network model - Google Patents

Vehicle GNSS/INS integrated navigation method based on discrete grey neural network model

Info

Publication number
CN109000640B
CN109000640B (application CN201810513315.4A)
Authority
CN
China
Prior art keywords
time
error
equation
gnss
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810513315.4A
Other languages
Chinese (zh)
Other versions
CN109000640A (en)
Inventor
王立辉
张月新
乔楠
石佳晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN201810513315.4A priority Critical patent/CN109000640B/en
Publication of CN109000640A publication Critical patent/CN109000640A/en
Application granted granted Critical
Publication of CN109000640B publication Critical patent/CN109000640B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42 Determining position
    • G01S19/48 Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Automation & Control Theory (AREA)
  • Navigation (AREA)

Abstract

The invention discloses a vehicle GNSS/INS integrated navigation method based on a discrete grey neural network model, which comprises the following steps: S1: calculate the vehicle attitude, velocity and position with an inertial navigation numerical update algorithm, using the angular increments and specific force output by the micro inertial devices; S2: establish a discrete grey prediction model based on DGM(1,1); S3: improve the multilayer perceptron neural network (MLP); S4: design a DGM-MLP hybrid intelligent prediction algorithm based on the discrete grey neural network; S5: take the inertial navigation error equation as the state equation, take the difference between the INS-solved position and the GNSS position, or the difference between the INS-solved position and the pseudo-GNSS position, as the observation, and perform state estimation of the integrated navigation system with a Kalman filter (KF); S6: use the position, velocity and attitude errors estimated by the Kalman filter to apply output correction to the inertial navigation solution, and use the gyroscope and accelerometer errors to apply feedback correction to the inertial navigation. The method can effectively mitigate the loss of navigation accuracy when the GNSS signal is unavailable.

Description

Vehicle GNSS/INS integrated navigation method based on discrete grey neural network model
Technical Field
The invention relates to a vehicle integrated navigation technology, in particular to a vehicle GNSS/INS integrated navigation method based on a discrete grey neural network model.
Background
The MEMS-INS/GNSS integrated navigation system is increasingly widely used thanks to its low cost and small size, but GNSS signals are easily blocked by tall buildings, bridges or tunnels, causing outages. During a GNSS outage the integrated navigation system runs in pure inertial mode, and because of the low accuracy of the MEMS-INS the solution degrades rapidly and may even diverge. To improve navigation accuracy during GNSS outages and ensure system reliability, the main approaches are:
1. Adding extra sensors, such as machine vision or Doppler velocimetry. This effectively improves navigation accuracy, but raises system cost and complicates integration.
2. Adding constraints, for example assuming the vehicle's lateral and vertical velocities are zero, to limit the rapid divergence of the inertial navigation errors. This is simple to implement, but its accuracy is limited and it only suits particular driving conditions.
3. Improving the filtering algorithm. This works well for short GNSS outages, but its benefit is limited when the outage is long.
4. Predicting with artificial intelligence algorithms such as neural networks. These methods need many training samples, and a neural network model built on low-cost MEMS data has low accuracy.
Disclosure of Invention
Purpose of the invention: aiming at the problem that navigation accuracy degrades when the GNSS signal is unavailable, the invention provides a vehicle GNSS/INS integrated navigation method based on a discrete grey neural network model.
Technical solution: to achieve the above purpose, the invention adopts the following technical solution:
The invention discloses a vehicle GNSS/INS integrated navigation method based on a discrete grey neural network model, which comprises the following steps:
S1: calculate the vehicle attitude, velocity and position with an inertial navigation numerical update algorithm, using the angular increments and specific force output by the micro inertial devices;
S2: establish a discrete grey prediction model based on DGM(1,1);
S3: improve the multilayer perceptron neural network (MLP);
S4: design a DGM-MLP hybrid intelligent prediction algorithm based on the discrete grey neural network; when the GNSS signal is valid, train the DGM-MLP on the GNSS position; when the GNSS signal is unavailable, predict the GNSS position with the DGM-MLP to obtain pseudo-GNSS position information;
S5: take the inertial navigation error equation as the state equation, take the difference between the INS-solved position and the GNSS position, or the difference between the INS-solved position and the pseudo-GNSS position, as the observation, and perform state estimation of the integrated navigation system with a Kalman filter (KF);
S6: use the position, velocity and attitude errors estimated by the Kalman filter to apply output correction to the inertial navigation solution, and use the gyroscope and accelerometer errors to apply feedback correction to the inertial navigation.
Further, step S1 specifically includes the following process: the INS measures the specific force and angular increments of the carrier with a triaxial accelerometer and a triaxial gyroscope, and computes the carrier's navigation information at the current time from the initial conditions; the navigation coordinate system n is the East-North-Up geographic frame, and the carrier coordinate system b is the Right-Front-Up frame; the error differential equations of the INS comprise the attitude error differential equation, the velocity error differential equation and the position error differential equation, given by equations (1) to (3) respectively:
ψ̇^n = -(ω_ie^n + ω_en^n) × ψ^n + δω_ie^n + δω_en^n - ε^n   (1)
Equation (1) is the attitude error differential equation, where ψ^n = [ψ_E  ψ_N  ψ_U]^T is the attitude angle error in the navigation frame n, ψ_E is the pitch angle error, ψ_N is the roll angle error, ψ_U is the heading angle error; ω_ie^n is the angular velocity caused by the Earth's rotation and δω_ie^n is its error; ω_en^n is the transport angular velocity caused by the carrier's motion and δω_en^n is its error; ε^n is the projection of the gyro drift onto the navigation frame n;
δV̇^n = f^n × ψ^n - (2ω_ie^n + ω_en^n) × δV^n + V^n × (2δω_ie^n + δω_en^n) + ∇^n   (2)
Equation (2) is the velocity error differential equation, where V^n = [V_E  V_N  V_U]^T is the velocity vector, V_E is the east velocity, V_N is the north velocity, V_U is the up velocity, δV^n is the velocity error; f^n is the specific force vector; ∇^n is the projection of the accelerometer drift onto the navigation frame n;
δL̇ = δV_N/(R_N + h) - V_N δh/(R_N + h)^2,
δλ̇ = δV_E sec L/(R_E + h) + V_E sec L tan L δL/(R_E + h) - V_E sec L δh/(R_E + h)^2,
δḣ = δV_U   (3)
Equation (3) is the position error differential equation, where L is the latitude, λ is the longitude, h is the altitude, δL is the latitude error, δλ is the longitude error, δh is the altitude error, R_E is the principal radius of curvature of the prime vertical, R_N is the principal radius of curvature of the meridian, δV_N is the north velocity error, δV_E is the east velocity error, and δV_U is the up velocity error.
Further, step S2 specifically includes the following steps:
S2.1: Preprocess the raw data.
The raw data are shifted by a constant to form a non-negative sequence, giving equation (4):
X^(0) = {x^(0)(1), x^(0)(2), …, x^(0)(n)}   (4)
In equation (4), X^(0) is the non-negative sequence, x^(0)(k) is the k-th element of X^(0), k = 1, 2, …, n, and n is the length of the raw data.
S2.2: Accumulated generating operation.
The first-order accumulated generating sequence X^(1) is obtained as:
X^(1) = {x^(1)(1), x^(1)(2), …, x^(1)(n)}   (5)
In equation (5), x^(1)(k) = x^(0)(1) + x^(0)(2) + … + x^(0)(k), k = 1, 2, …, n.
S2.3: Build the discrete grey prediction model DGM(1,1).
The DGM(1,1) discrete grey prediction model is established as equation (6):
x̂^(1)(l+1) = U_1 x̂^(1)(l) + U_2   (6)
In equation (6), x̂^(1)(l+1) is the accumulated estimate of the (l+1)-th data point, x̂^(1)(l) is the accumulated estimate of the l-th data point, U_1 is the first-order coefficient, U_2 is the constant bias, l = 1, 2, …, n, …, n+m-1, m is the length of the prediction sequence, and m ≥ 1.
Let Û = [U_1, U_2]^T be the parameter vector. With the data matrix B and the observation vector Y built from the accumulated sequence as in equation (7), the least-squares estimate of the parameters of equation (6) satisfies equation (8):
B = [x^(1)(1) 1; x^(1)(2) 1; …; x^(1)(n-1) 1],   Y = [x^(1)(2), x^(1)(3), …, x^(1)(n)]^T   (7)
Û = (B^T B)^(-1) B^T Y   (8)
S2.4: Inverse accumulated generating operation.
Taking x̂^(1)(l+1), the restored value is:
x̂^(0)(l+1) = x̂^(1)(l+1) - x̂^(1)(l)   (9)
In equation (9), x̂^(0)(l+1) is the restored value of the (l+1)-th data point.
Subtracting the constant that was added to the raw data from the restored value gives the prediction of the discrete grey prediction model.
Further, step S3 specifically includes the following steps:
S3.1: Momentum terms are introduced, as shown in equations (10) and (11):
[Equation (10): momentum-based update of the hidden-layer weight ω_H]
In equation (10), ω_{H,t+1} is the hidden-layer weight at time t+1, ω_{H,t} is the hidden-layer weight at time t, E_t is the error at time t, ω_{H,t-1} is the hidden-layer weight at time t-1, η is the momentum factor, and 0 ≤ η ≤ 1;
[Equation (11): momentum-based update of the hidden-layer bias b_H]
In equation (11), b_{H,t+1} is the hidden-layer bias at time t+1, b_{H,t} is the hidden-layer bias at time t, and b_{H,t-1} is the hidden-layer bias at time t-1;
S3.2: The weights and biases are adjusted according to equations (12) and (13):
[Equation (12): adaptive adjustment Δp_t of the weight or bias, with parameters a > 0 and b > 0]
In equation (12), Δp_t is the adjustment of the weight or bias at time t, p_t is the weight or bias at time t, p_{t-1} is the weight or bias at time t-1, a > 0, and b > 0;
p_t' = p_t + Δp_t   (13)
In equation (13), p_t' is the updated value of the weight or bias at time t.
Further, step S4 specifically includes the following process:
When the GNSS signal is valid, the whole system is in training mode. Let the preprocessed GNSS position information sequence be {x^(0)(k)}, k = 1, 2, …, n, and let x̂^(0)(T) be the value predicted by the DGM(1,1) model. The residual at time T is then defined as e^(0)(T), that is
e^(0)(T) = x^(0)(T) - x̂^(0)(T).
An MLP network model is built on the residual sequence e^(0)(T): if S is the prediction order, the input sample for MLP network training is e^(0)(k-1), e^(0)(k-2), …, e^(0)(k-S), and e^(0)(k) is the corresponding output sample of the network.
When the GNSS signal is unavailable, the whole system is in prediction mode. Let ê^(0)(k) denote the residual predicted by the trained MLP network model. The DGM(1,1) prediction x̂^(0)(k) and the predicted residual ê^(0)(k) are summed to construct the combined prediction, and subtracting the constant that was added to the raw data from this sum gives the prediction of the discrete grey neural network model.
Further, step S5 specifically includes the following steps:
S5.1: The discretized state equation is given by equation (14) and the discretized measurement equation by equation (15):
X_t = F_{t,t-1} X_{t-1} + G_t W_t   (14)
In equation (14), X_t is the state vector at time t, X_{t-1} is the state vector at time t-1, F_{t,t-1} is the system state transition matrix from time t-1 to time t, G_t is the noise distribution matrix at time t, and W_t is the system noise at time t;
Z_t = H_t X_t + V_t   (15)
In equation (15), Z_t is the observation at time t, H_t is the observation matrix at time t, and V_t is the measurement noise at time t;
S5.2: The Kalman filter consists of a time update and a measurement update, as shown in equations (16) to (20):
X̂_{t,t-1} = F_{t,t-1} X̂_{t-1}   (16)
P_{t,t-1} = F_{t,t-1} P_{t-1} F_{t,t-1}^T + G_t Q_{t-1} G_t^T   (17)
K_t = P_{t,t-1} H_t^T (H_t P_{t,t-1} H_t^T + R_t)^(-1)   (18)
X̂_t = X̂_{t,t-1} + K_t (Z_t - H_t X̂_{t,t-1})   (19)
P_t = (I - K_t H_t) P_{t,t-1}   (20)
where X̂_{t,t-1} is the predicted state from time t-1 to time t, X̂_{t-1} is the state estimate at time t-1, X̂_t is the state estimate at time t, P_{t,t-1} is the predicted covariance matrix from time t-1 to time t, P_t is the covariance matrix at time t, P_{t-1} is the covariance matrix at time t-1, Q_{t-1} is the system noise matrix at time t-1, R_t is the measurement noise matrix at time t, and K_t is the gain matrix at time t.
Beneficial effects: compared with the prior art, the vehicle GNSS/INS integrated navigation method based on the discrete grey neural network model disclosed by the invention has the following advantages:
1) it effectively mitigates the loss of navigation accuracy caused by satellite signals being blocked by high-rise buildings and tunnels;
2) it requires no additional sensors, has a small computational load, needs few training samples, and is simple to implement.
Drawings
FIG. 1 is a block diagram of the training mode of the integrated navigation system when the GNSS signal is valid, according to an embodiment of the present invention;
FIG. 2 is a block diagram of the prediction mode of the integrated navigation system when the GNSS signal is unavailable, according to an embodiment of the present invention.
Detailed Description
The technical solution of the present invention will be further described with reference to the following detailed description and accompanying drawings.
This embodiment discloses a vehicle GNSS/INS integrated navigation method based on a discrete grey neural network model, which comprises the following steps:
S1: calculate the vehicle attitude, velocity and position with an inertial navigation numerical update algorithm, using the angular increments and specific force output by the micro inertial devices;
S2: establish a discrete grey prediction model based on DGM(1,1);
S3: improve the multilayer perceptron neural network (MLP);
S4: design a DGM-MLP hybrid intelligent prediction algorithm based on the discrete grey neural network; when the GNSS signal is valid, train the DGM-MLP on the GNSS position; when the GNSS signal is unavailable, predict the GNSS position with the DGM-MLP to obtain pseudo-GNSS position information;
S5: take the inertial navigation error equation as the state equation, take the difference between the INS-solved position and the GNSS position, or the difference between the INS-solved position and the pseudo-GNSS position, as the observation, and perform state estimation of the integrated navigation system with a Kalman filter (KF);
S6: use the position, velocity and attitude errors estimated by the Kalman filter to apply output correction to the inertial navigation solution, and use the gyroscope and accelerometer errors to apply feedback correction to the inertial navigation.
Step S1 specifically includes the following process: the INS measures the specific force and angular increments of the carrier with a triaxial accelerometer and a triaxial gyroscope, and computes the carrier's navigation information at the current time from the initial conditions; the navigation coordinate system n is the East-North-Up geographic frame, and the carrier coordinate system b is the Right-Front-Up frame; the error differential equations of the INS comprise the attitude error differential equation, the velocity error differential equation and the position error differential equation, as shown in equations (1) to (3) respectively:
ψ̇^n = -(ω_ie^n + ω_en^n) × ψ^n + δω_ie^n + δω_en^n - ε^n   (1)
Equation (1) is the attitude error differential equation, where ψ^n = [ψ_E  ψ_N  ψ_U]^T is the attitude angle error in the navigation frame n, ψ_E is the pitch angle error, ψ_N is the roll angle error, ψ_U is the heading angle error; ω_ie^n is the angular velocity caused by the Earth's rotation and δω_ie^n is its error; ω_en^n is the transport angular velocity caused by the carrier's motion and δω_en^n is its error; ε^n is the projection of the gyro drift onto the navigation frame n;
δV̇^n = f^n × ψ^n - (2ω_ie^n + ω_en^n) × δV^n + V^n × (2δω_ie^n + δω_en^n) + ∇^n   (2)
Equation (2) is the velocity error differential equation, where V^n = [V_E  V_N  V_U]^T is the velocity vector, V_E is the east velocity, V_N is the north velocity, V_U is the up velocity, δV^n is the velocity error; f^n is the specific force vector; ∇^n is the projection of the accelerometer drift onto the navigation frame n;
δL̇ = δV_N/(R_N + h) - V_N δh/(R_N + h)^2,
δλ̇ = δV_E sec L/(R_E + h) + V_E sec L tan L δL/(R_E + h) - V_E sec L δh/(R_E + h)^2,
δḣ = δV_U   (3)
Equation (3) is the position error differential equation, where L is the latitude, λ is the longitude, h is the altitude, δL is the latitude error, δλ is the longitude error, δh is the altitude error, R_E is the principal radius of curvature of the prime vertical, R_N is the principal radius of curvature of the meridian, δV_N is the north velocity error, δV_E is the east velocity error, and δV_U is the up velocity error.
Step S2 specifically includes the following steps:
S2.1: Preprocess the raw data.
The raw data are shifted by a constant to form a non-negative sequence, giving equation (4):
X^(0) = {x^(0)(1), x^(0)(2), …, x^(0)(n)}   (4)
In equation (4), X^(0) is the non-negative sequence, x^(0)(k) is the k-th element of X^(0), k = 1, 2, …, n, and n is the length of the raw data.
S2.2: Accumulated generating operation.
The first-order accumulated generating sequence X^(1) is obtained as:
X^(1) = {x^(1)(1), x^(1)(2), …, x^(1)(n)}   (5)
In equation (5), x^(1)(k) = x^(0)(1) + x^(0)(2) + … + x^(0)(k), k = 1, 2, …, n.
S2.3: Build the discrete grey prediction model DGM(1,1).
The DGM(1,1) discrete grey prediction model is established as equation (6):
x̂^(1)(l+1) = U_1 x̂^(1)(l) + U_2   (6)
In equation (6), x̂^(1)(l+1) is the accumulated estimate of the (l+1)-th data point, x̂^(1)(l) is the accumulated estimate of the l-th data point, U_1 is the first-order coefficient, U_2 is the constant bias, l = 1, 2, …, n, …, n+m-1, m is the length of the prediction sequence, and m ≥ 1.
Let Û = [U_1, U_2]^T be the parameter vector. With the data matrix B and the observation vector Y built from the accumulated sequence as in equation (7), the least-squares estimate of the parameters of equation (6) satisfies equation (8):
B = [x^(1)(1) 1; x^(1)(2) 1; …; x^(1)(n-1) 1],   Y = [x^(1)(2), x^(1)(3), …, x^(1)(n)]^T   (7)
Û = (B^T B)^(-1) B^T Y   (8)
S2.4: Inverse accumulated generating operation.
Taking x̂^(1)(l+1), the restored value is:
x̂^(0)(l+1) = x̂^(1)(l+1) - x̂^(1)(l)   (9)
In equation (9), x̂^(0)(l+1) is the restored value of the (l+1)-th data point.
Subtracting the constant that was added to the raw data from the restored value gives the prediction of the discrete grey prediction model (an illustrative sketch of steps S2.1 to S2.4 follows below).
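To make steps S2.1 to S2.4 concrete, the following Python/NumPy sketch fits a DGM(1,1) model to a shifted (non-negative) sequence and restores the predictions by inverse accumulation. The shift constant c, the function name dgm11_fit_predict and the least-squares formulation via numpy.linalg.lstsq are illustrative assumptions consistent with equations (4) to (9), not code from the patent.

import numpy as np

def dgm11_fit_predict(raw, m, c=0.0):
    """DGM(1,1) sketch: shift the raw data by c to make it non-negative (S2.1),
    accumulate once (S2.2), fit U1/U2 by least squares (S2.3), then restore
    n fitted values plus m predictions by inverse accumulation (S2.4)."""
    x0 = np.asarray(raw, dtype=float) + c            # non-negative sequence X(0)
    x1 = np.cumsum(x0)                               # accumulated sequence X(1)
    n = len(x0)

    # least-squares estimate of U1 (first-order coefficient) and U2 (constant bias)
    B = np.column_stack([x1[:-1], np.ones(n - 1)])
    Y = x1[1:]
    U1, U2 = np.linalg.lstsq(B, Y, rcond=None)[0]

    # recursion x1_hat(l+1) = U1 * x1_hat(l) + U2 up to index n + m - 1
    x1_hat = np.empty(n + m)
    x1_hat[0] = x1[0]
    for l in range(n + m - 1):
        x1_hat[l + 1] = U1 * x1_hat[l] + U2

    # inverse accumulation and removal of the shift constant
    x0_hat = np.diff(x1_hat, prepend=0.0) - c
    return x0_hat                                    # fitted values followed by m predictions

Calling dgm11_fit_predict with m = 0 simply returns the in-sample DGM(1,1) estimates, which is what the residual construction of step S4 uses.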
Step S3 specifically includes the following steps:
S3.1: Momentum terms are introduced, as shown in equations (10) and (11):
[Equation (10): momentum-based update of the hidden-layer weight ω_H]
In equation (10), ω_{H,t+1} is the hidden-layer weight at time t+1, ω_{H,t} is the hidden-layer weight at time t, E_t is the error at time t, ω_{H,t-1} is the hidden-layer weight at time t-1, η is the momentum factor, and 0 ≤ η ≤ 1;
[Equation (11): momentum-based update of the hidden-layer bias b_H]
In equation (11), b_{H,t+1} is the hidden-layer bias at time t+1, b_{H,t} is the hidden-layer bias at time t, and b_{H,t-1} is the hidden-layer bias at time t-1;
S3.2: The weights and biases are adjusted according to equations (12) and (13):
[Equation (12): adaptive adjustment Δp_t of the weight or bias, with parameters a > 0 and b > 0]
In equation (12), Δp_t is the adjustment of the weight or bias at time t, p_t is the weight or bias at time t, p_{t-1} is the weight or bias at time t-1, a > 0, and b > 0;
p_t' = p_t + Δp_t   (13)
In equation (13), p_t' is the updated value of the weight or bias at time t (a sketch of this update follows below).
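Equations (10) to (12) are reproduced as images in the original publication, so the sketch below shows only a common reading of the momentum-augmented update they describe: the weight or bias moves along the negative gradient of the error E_t and additionally by a fraction η of its previous change. The explicit learning-rate parameter lr is an assumption, and the adaptive step-size rule of equation (12) governed by the constants a and b is not reproduced here.

import numpy as np

def momentum_step(p_t, p_prev, grad_E_t, lr=0.01, eta=0.9):
    """One momentum-augmented update of a weight or bias array (0 <= eta <= 1).
    Interpretation of equations (10)-(11) under the stated assumptions."""
    delta_p = -lr * grad_E_t + eta * (p_t - p_prev)   # adjustment; cf. equation (13): p_t' = p_t + delta_p
    return p_t + delta_p

# Example: update a 2x3 hidden-layer weight matrix with a dummy gradient.
w_prev = np.zeros((2, 3))
w_t = np.full((2, 3), 0.1)
grad = np.ones((2, 3))
w_next = momentum_step(w_t, w_prev, grad)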
Step S4 specifically includes the following process:
When the GNSS signal is valid, the whole system is in training mode, as shown in FIG. 1. Let the preprocessed GNSS position information sequence be {x^(0)(k)}, k = 1, 2, …, n, and let x̂^(0)(T) be the value predicted by the DGM(1,1) model. The residual at time T is then defined as e^(0)(T), that is
e^(0)(T) = x^(0)(T) - x̂^(0)(T).
An MLP network model is built on the residual sequence e^(0)(T): if S is the prediction order, the input sample for MLP network training is e^(0)(k-1), e^(0)(k-2), …, e^(0)(k-S), and e^(0)(k) is the corresponding output sample of the network.
When the GNSS signal is unavailable, the whole system is in prediction mode, as shown in FIG. 2. Let ê^(0)(k) denote the residual predicted by the trained MLP network model. The DGM(1,1) prediction x̂^(0)(k) and the predicted residual ê^(0)(k) are summed to construct the combined prediction, and subtracting the constant that was added to the raw data from this sum gives the prediction of the discrete grey neural network model (a sketch of the training and prediction modes follows below).
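The training/prediction logic of step S4 can be sketched as follows, reusing the dgm11_fit_predict sketch from step S2 above. scikit-learn's MLPRegressor is used here only as a stand-in for the improved momentum-based MLP of step S3, and the prediction order S, the hidden-layer size and the shift constant c are illustrative assumptions rather than values specified by the patent.

import numpy as np
from sklearn.neural_network import MLPRegressor

def train_dgm_mlp(positions, S=4, c=0.0):
    """Training mode (GNSS valid): fit DGM(1,1) to the position sequence and
    train an MLP on the residuals e(0)(T) = x(0)(T) - x_hat(0)(T), using the
    previous S residuals as input and the current residual as output."""
    positions = np.asarray(positions, dtype=float)
    dgm_fit = dgm11_fit_predict(positions, m=0, c=c)      # in-sample DGM estimates
    resid = positions - dgm_fit                           # shift constant cancels in the residual
    X = np.array([resid[k - S:k] for k in range(S, len(resid))])
    y = resid[S:]
    mlp = MLPRegressor(hidden_layer_sizes=(8,), solver="sgd", momentum=0.9,
                       learning_rate_init=0.01, max_iter=5000).fit(X, y)
    return mlp, resid

def pseudo_gnss(mlp, resid, positions, m, S=4, c=0.0):
    """Prediction mode (GNSS outage): DGM(1,1) forecast plus the MLP-predicted
    residual gives the pseudo-GNSS position for the next m epochs."""
    forecast = dgm11_fit_predict(positions, m=m, c=c)[len(positions):]
    window = list(resid[-S:])
    out = []
    for i in range(m):
        e_hat = mlp.predict(np.array([window]))[0]        # predicted residual
        out.append(forecast[i] + e_hat)
        window = window[1:] + [e_hat]                     # slide the residual window forward
    return np.array(out)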
Step S5 specifically includes the following steps:
S5.1: The discretized state equation is given by equation (14) and the discretized measurement equation by equation (15):
X_t = F_{t,t-1} X_{t-1} + G_t W_t   (14)
In equation (14), X_t is the state vector at time t, X_{t-1} is the state vector at time t-1, F_{t,t-1} is the system state transition matrix from time t-1 to time t, G_t is the noise distribution matrix at time t, and W_t is the system noise at time t;
Z_t = H_t X_t + V_t   (15)
In equation (15), Z_t is the observation at time t, H_t is the observation matrix at time t, and V_t is the measurement noise at time t;
S5.2: The Kalman filter consists of a time update and a measurement update, as shown in equations (16) to (20):
X̂_{t,t-1} = F_{t,t-1} X̂_{t-1}   (16)
P_{t,t-1} = F_{t,t-1} P_{t-1} F_{t,t-1}^T + G_t Q_{t-1} G_t^T   (17)
K_t = P_{t,t-1} H_t^T (H_t P_{t,t-1} H_t^T + R_t)^(-1)   (18)
X̂_t = X̂_{t,t-1} + K_t (Z_t - H_t X̂_{t,t-1})   (19)
P_t = (I - K_t H_t) P_{t,t-1}   (20)
where X̂_{t,t-1} is the predicted state from time t-1 to time t, X̂_{t-1} is the state estimate at time t-1, X̂_t is the state estimate at time t, P_{t,t-1} is the predicted covariance matrix from time t-1 to time t, P_t is the covariance matrix at time t, P_{t-1} is the covariance matrix at time t-1, Q_{t-1} is the system noise matrix at time t-1, R_t is the measurement noise matrix at time t, and K_t is the gain matrix at time t (a sketch of one filter cycle follows below).
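The loosely coupled filter of step S5 follows the textbook discrete Kalman filter. Since equations (16) to (20) are reproduced as images in the original, the sketch below uses the standard time-update/measurement-update form consistent with the symbols defined above; the function name and argument layout are assumptions.

import numpy as np

def kf_step(x_prev, P_prev, F, G, Q, H, R, z):
    """One Kalman-filter cycle: time update with the discretized state model
    (equation (14)), then measurement update with the INS-minus-(pseudo-)GNSS
    position difference z as the observation (equation (15))."""
    # time update
    x_pred = F @ x_prev                              # predicted state
    P_pred = F @ P_prev @ F.T + G @ Q @ G.T          # predicted covariance
    # measurement update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)              # Kalman gain
    x_est = x_pred + K @ (z - H @ x_pred)            # state estimate
    P_est = (np.eye(P_pred.shape[0]) - K @ H) @ P_pred
    return x_est, P_est

The estimated position, velocity and attitude errors in x_est then drive the output correction and the gyro/accelerometer feedback correction of step S6.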

Claims (4)

1. A vehicle GNSS/INS integrated navigation method based on a discrete grey neural network model, characterized in that the method comprises the following steps:
S1: calculate the vehicle attitude, velocity and position with an inertial navigation numerical update algorithm, using the angular increments and specific force output by the micro inertial devices;
S2: establish a discrete grey prediction model based on DGM(1,1);
S3: improve the multilayer perceptron neural network (MLP);
S4: design a DGM-MLP hybrid intelligent prediction algorithm based on the discrete grey neural network; when the GNSS signal is valid, train the DGM-MLP on the GNSS position; when the GNSS signal is unavailable, predict the GNSS position with the DGM-MLP to obtain pseudo-GNSS position information;
S5: take the inertial navigation error equation as the state equation, take the difference between the INS-solved position and the GNSS position, or the difference between the INS-solved position and the pseudo-GNSS position, as the observation, and perform state estimation of the integrated navigation system with a Kalman filter (KF);
S6: use the position, velocity and attitude errors estimated by the Kalman filter to apply output correction to the inertial navigation solution, and use the gyroscope and accelerometer errors to apply feedback correction to the inertial navigation;
wherein step S3 specifically includes the following process:
S3.1: Momentum terms are introduced, as shown in equations (10) and (11):
[Equation (10): momentum-based update of the hidden-layer weight ω_H]
In equation (10), ω_{H,t+1} is the hidden-layer weight at time t+1, ω_{H,t} is the hidden-layer weight at time t, E_t is the error at time t, ω_{H,t-1} is the hidden-layer weight at time t-1, η is the momentum factor, and 0 ≤ η ≤ 1;
[Equation (11): momentum-based update of the hidden-layer bias b_H]
In equation (11), b_{H,t+1} is the hidden-layer bias at time t+1, b_{H,t} is the hidden-layer bias at time t, and b_{H,t-1} is the hidden-layer bias at time t-1;
S3.2: The weights and biases are adjusted according to equations (12) and (13):
[Equation (12): adaptive adjustment Δp_t of the weight or bias, with parameters a > 0 and b > 0]
In equation (12), Δp_t is the adjustment of the weight or bias at time t, p_t is the weight or bias at time t, p_{t-1} is the weight or bias at time t-1, a > 0, and b > 0;
p_t' = p_t + Δp_t   (13)
In equation (13), p_t' is the updated value of the weight or bias at time t;
wherein step S4 specifically includes the following process:
when the GNSS signal is valid, the whole system is in training mode; let the preprocessed GNSS position information sequence be {x^(0)(k)}, k = 1, 2, …, n, and let x̂^(0)(T) be the value predicted by the DGM(1,1) model; the residual at time T is then defined as e^(0)(T), that is
e^(0)(T) = x^(0)(T) - x̂^(0)(T);
an MLP network model is built on the residual sequence e^(0)(T): if S is the prediction order, the input sample for MLP network training is e^(0)(k-1), e^(0)(k-2), …, e^(0)(k-S), and e^(0)(k) is the corresponding output sample of the network;
when the GNSS signal is unavailable, the whole system is in prediction mode; let ê^(0)(k) denote the residual predicted by the trained MLP network model; the DGM(1,1) prediction x̂^(0)(k) and the predicted residual ê^(0)(k) are summed to construct the combined prediction, and subtracting the constant that was added to the raw data from this sum gives the prediction of the discrete grey neural network model.
2. The vehicle GNSS/INS integrated navigation method based on the discrete grey neural network model according to claim 1, characterized in that step S1 specifically includes the following process: the INS measures the specific force and angular increments of the carrier with a triaxial accelerometer and a triaxial gyroscope, and computes the carrier's navigation information at the current time from the initial conditions; the navigation coordinate system n is the East-North-Up geographic frame, and the carrier coordinate system b is the Right-Front-Up frame; the error differential equations of the INS comprise the attitude error differential equation, the velocity error differential equation and the position error differential equation, given by equations (1) to (3) respectively:
ψ̇^n = -(ω_ie^n + ω_en^n) × ψ^n + δω_ie^n + δω_en^n - ε^n   (1)
Equation (1) is the attitude error differential equation, where ψ^n = [ψ_E  ψ_N  ψ_U]^T is the attitude angle error in the navigation frame n, ψ_E is the pitch angle error, ψ_N is the roll angle error, ψ_U is the heading angle error; ω_ie^n is the angular velocity caused by the Earth's rotation and δω_ie^n is its error; ω_en^n is the transport angular velocity caused by the carrier's motion and δω_en^n is its error; ε^n is the projection of the gyro drift onto the navigation frame n;
δV̇^n = f^n × ψ^n - (2ω_ie^n + ω_en^n) × δV^n + V^n × (2δω_ie^n + δω_en^n) + ∇^n   (2)
Equation (2) is the velocity error differential equation, where V^n = [V_E  V_N  V_U]^T is the velocity vector, V_E is the east velocity, V_N is the north velocity, V_U is the up velocity, δV^n is the velocity error; f^n is the specific force vector; ∇^n is the projection of the accelerometer drift onto the navigation frame n;
δL̇ = δV_N/(R_N + h) - V_N δh/(R_N + h)^2,
δλ̇ = δV_E sec L/(R_E + h) + V_E sec L tan L δL/(R_E + h) - V_E sec L δh/(R_E + h)^2,
δḣ = δV_U   (3)
Equation (3) is the position error differential equation, where L is the latitude, λ is the longitude, h is the altitude, δL is the latitude error, δλ is the longitude error, δh is the altitude error, R_E is the principal radius of curvature of the prime vertical, R_N is the principal radius of curvature of the meridian, δV_N is the north velocity error, δV_E is the east velocity error, and δV_U is the up velocity error.
3. The vehicle GNSS/INS integrated navigation method based on the discrete grey neural network model according to claim 1, characterized in that step S2 specifically includes the following steps:
S2.1: Preprocess the raw data.
The raw data are shifted by a constant to form a non-negative sequence, giving equation (4):
X^(0) = {x^(0)(1), x^(0)(2), …, x^(0)(n)}   (4)
In equation (4), X^(0) is the non-negative sequence, x^(0)(k) is the k-th element of X^(0), k = 1, 2, …, n, and n is the length of the raw data;
S2.2: Accumulated generating operation.
The first-order accumulated generating sequence X^(1) is obtained as:
X^(1) = {x^(1)(1), x^(1)(2), …, x^(1)(n)}   (5)
In equation (5), x^(1)(k) = x^(0)(1) + x^(0)(2) + … + x^(0)(k), k = 1, 2, …, n;
S2.3: Build the discrete grey prediction model DGM(1,1).
The DGM(1,1) discrete grey prediction model is established as equation (6):
x̂^(1)(l+1) = U_1 x̂^(1)(l) + U_2   (6)
In equation (6), x̂^(1)(l+1) is the accumulated estimate of the (l+1)-th data point, x̂^(1)(l) is the accumulated estimate of the l-th data point, U_1 is the first-order coefficient, U_2 is the constant bias, l = 1, 2, …, n, …, n+m-1, m is the length of the prediction sequence, and m ≥ 1.
Let Û = [U_1, U_2]^T be the parameter vector. With the data matrix B and the observation vector Y built from the accumulated sequence as in equation (7), the least-squares estimate of the parameters of equation (6) satisfies equation (8):
B = [x^(1)(1) 1; x^(1)(2) 1; …; x^(1)(n-1) 1],   Y = [x^(1)(2), x^(1)(3), …, x^(1)(n)]^T   (7)
Û = (B^T B)^(-1) B^T Y   (8)
S2.4: Inverse accumulated generating operation.
Taking x̂^(1)(l+1), the restored value is:
x̂^(0)(l+1) = x̂^(1)(l+1) - x̂^(1)(l)   (9)
In equation (9), x̂^(0)(l+1) is the restored value of the (l+1)-th data point.
Subtracting the constant that was added to the raw data from the restored value gives the prediction of the discrete grey prediction model.
4. The vehicle GNSS/INS integrated navigation method based on the discrete grey neural network model according to claim 1, characterized in that step S5 specifically includes the following steps:
S5.1: The discretized state equation is given by equation (14) and the discretized measurement equation by equation (15):
X_t = F_{t,t-1} X_{t-1} + G_t W_t   (14)
In equation (14), X_t is the state vector at time t, X_{t-1} is the state vector at time t-1, F_{t,t-1} is the system state transition matrix from time t-1 to time t, G_t is the noise distribution matrix at time t, and W_t is the system noise at time t;
Z_t = H_t X_t + V_t   (15)
In equation (15), Z_t is the observation at time t, H_t is the observation matrix at time t, and V_t is the measurement noise at time t;
S5.2: The Kalman filter consists of a time update and a measurement update, as shown in equations (16) to (20):
X̂_{t,t-1} = F_{t,t-1} X̂_{t-1}   (16)
P_{t,t-1} = F_{t,t-1} P_{t-1} F_{t,t-1}^T + G_t Q_{t-1} G_t^T   (17)
K_t = P_{t,t-1} H_t^T (H_t P_{t,t-1} H_t^T + R_t)^(-1)   (18)
X̂_t = X̂_{t,t-1} + K_t (Z_t - H_t X̂_{t,t-1})   (19)
P_t = (I - K_t H_t) P_{t,t-1}   (20)
where X̂_{t,t-1} is the predicted state from time t-1 to time t, X̂_{t-1} is the state estimate at time t-1, X̂_t is the state estimate at time t, P_{t,t-1} is the predicted covariance matrix from time t-1 to time t, P_t is the covariance matrix at time t, P_{t-1} is the covariance matrix at time t-1, Q_{t-1} is the system noise matrix at time t-1, R_t is the measurement noise matrix at time t, and K_t is the gain matrix at time t.
CN201810513315.4A 2018-05-25 2018-05-25 Vehicle GNSS/INS integrated navigation method based on discrete grey neural network model Active CN109000640B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810513315.4A CN109000640B (en) 2018-05-25 2018-05-25 Vehicle GNSS/INS integrated navigation method based on discrete grey neural network model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810513315.4A CN109000640B (en) 2018-05-25 2018-05-25 Vehicle GNSS/INS integrated navigation method based on discrete grey neural network model

Publications (2)

Publication Number Publication Date
CN109000640A CN109000640A (en) 2018-12-14
CN109000640B 2021-09-28

Family

ID=64573813

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810513315.4A Active CN109000640B (en) 2018-05-25 2018-05-25 Vehicle GNSS/INS integrated navigation method based on discrete grey neural network model

Country Status (1)

Country Link
CN (1) CN109000640B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109444928B (en) * 2018-12-18 2021-08-06 重庆西部汽车试验场管理有限公司 Positioning method and system
CN109931926B (en) * 2019-04-04 2023-04-25 山东智翼航空科技有限公司 Unmanned aerial vehicle seamless autonomous navigation method based on station-core coordinate system
CN110632636B (en) * 2019-09-11 2021-10-22 桂林电子科技大学 Carrier attitude estimation method based on Elman neural network
CN111290007A (en) * 2020-02-27 2020-06-16 桂林电子科技大学 BDS/SINS combined navigation method and system based on neural network assistance
CN112505737B (en) * 2020-11-16 2024-03-01 东南大学 GNSS/INS integrated navigation method
CN112683261B (en) * 2020-11-19 2022-10-14 电子科技大学 Unmanned aerial vehicle robustness navigation method based on speed prediction
CN113009537B (en) * 2021-02-18 2023-10-31 中国人民解放军国防科技大学 Inertial navigation assisted defensive navigation relative positioning single epoch part ambiguity solving method
CN113029139B (en) * 2021-04-07 2023-07-28 中国电子科技集团公司第二十八研究所 Airport flight area vehicle differential Beidou/SINS combined navigation method based on motion detection
CN114690221A (en) * 2021-12-22 2022-07-01 北京航天时代激光导航技术有限责任公司 Elman neural network and Kalman fusion filtering algorithm based on wavelet threshold method
CN114184211B (en) * 2021-12-27 2023-07-14 北京计算机技术及应用研究所 Method for judging consistency of performance change mechanism in inertial navigation reliability test
CN114459472B (en) * 2022-02-15 2023-07-04 上海海事大学 Combined navigation method of volume Kalman filter and discrete gray model


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8756001B2 (en) * 2011-02-28 2014-06-17 Trusted Positioning Inc. Method and apparatus for improved navigation of a moving platform

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103900610A (en) * 2014-03-28 2014-07-02 哈尔滨工程大学 MEMS (Micro-electromechanical Systems) gyroscope random error predication method based on grey wavelet neural network
CN106240705A (en) * 2016-09-06 2016-12-21 上海应用技术大学 A kind of based on double-wheel self-balancing dolly before and after grey neural network prediction algorithm
CN107045785A (en) * 2017-02-08 2017-08-15 河南理工大学 A kind of method of the short-term traffic flow forecast based on grey ELM neutral nets
CN106781685A (en) * 2017-03-06 2017-05-31 苏州悦驻智能科技有限公司 A kind of city intelligent shutdown system that robot is managed based on parking garage
CN107144284A (en) * 2017-04-18 2017-09-08 东南大学 Inertial navigation combination navigation method is aided in based on the vehicle dynamic model that CKF is filtered
CN107390246A (en) * 2017-07-06 2017-11-24 电子科技大学 A kind of GPS/INS Combinated navigation methods based on genetic neural network

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
A hybrid fusion algorithm for GPS/INS integration during GPS outages;YiqingYao等;《Measurement》;20170207;第103卷;第42-51页 *
A low-cost GPS/INS integration methodology based on DGPM during GPS outages;Yuexin Zhang等;《2018 Integrated Communications, Navigation, Surveillance Conference (ICNS)》;20180412;第4E2-1-4E2-8页 *
SINS/GPS integrated navigation system based on an improved grey prediction model; Wang Lidong et al.; Journal of Chinese Inertial Technology; 2015-04-30; Vol. 23, No. 2; pp. 248-251, 257 *
Research on an improved BP algorithm based on adaptive learning rate; Yang Jiapei; China Master's Theses Full-text Database, Information Science and Technology; 2009-09-15; No. 9; p. I140-25 *

Also Published As

Publication number Publication date
CN109000640A (en) 2018-12-14

Similar Documents

Publication Publication Date Title
CN109000640B (en) Vehicle GNSS/INS integrated navigation method based on discrete grey neural network model
CN112629538B (en) Ship horizontal attitude measurement method based on fusion complementary filtering and Kalman filtering
US6859727B2 (en) Attitude change kalman filter measurement apparatus and method
CN111156994B (en) INS/DR & GNSS loose combination navigation method based on MEMS inertial component
Iqbal et al. An integrated reduced inertial sensor system—RISS/GPS for land vehicle
CN112505737B (en) GNSS/INS integrated navigation method
CN109917440B (en) Combined navigation method, system and vehicle
CN110285804B (en) Vehicle collaborative navigation method based on relative motion model constraint
RU2380656C1 (en) Integrated strapdown inertial and satellite navigation system on coarse sensors
CN113029139B (en) Airport flight area vehicle differential Beidou/SINS combined navigation method based on motion detection
Aftatah et al. Fusion of GPS/INS/Odometer measurements for land vehicle navigation with GPS outage
Atia et al. An enhanced 3D multi-sensor integrated navigation system for land-vehicles
CN101900573A (en) Method for realizing landtype inertial navigation system movement aiming
Liu et al. Interacting multiple model UAV navigation algorithm based on a robust cubature Kalman filter
Cho et al. A personal navigation system using low-cost MEMS/GPS/Fluxgate
JP2009250778A (en) Repeated calculation control method and device in kalman filter processing
Park et al. Implementation of vehicle navigation system using GNSS, INS, odometer and barometer
CN113566850A (en) Method and device for calibrating installation angle of inertial measurement unit and computer equipment
CN113074757A (en) Calibration method for vehicle-mounted inertial navigation installation error angle
CN112325878A (en) Ground carrier combined navigation method based on UKF and air unmanned aerial vehicle node assistance
CN110567456B (en) BDS/INS combined train positioning method based on robust Kalman filtering
Botha et al. Vehicle sideslip estimation using unscented Kalman filter, AHRS and GPS
CN113324539A (en) SINS/SRS/CNS multi-source fusion autonomous integrated navigation method
Shien et al. Integrated navigation accuracy improvement algorithm based on multi-sensor fusion
Wachsmuth et al. Development of an error-state Kalman Filter for Emergency Maneuvering of Trucks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant