CN114935345A - Vehicle-mounted inertial navigation mounting angle error compensation method based on pattern recognition - Google Patents

Info

Publication number: CN114935345A
Application number: CN202210677046.1A
Authority: CN (China)
Prior art keywords: error, vehicle, inertial navigation, IMU, GNSS
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 张文安, 娄昱乾, 杨旭升, 刘浩淼, 史秀纺, 付明磊, 季华
Current Assignee: Zhejiang University of Technology ZJUT (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Zhejiang University of Technology ZJUT
Application filed by Zhejiang University of Technology ZJUT
Priority to: CN202210677046.1A
Publication of: CN114935345A

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C25/00 — Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
    • G01C25/005 — Initial alignment, calibration or starting-up of inertial devices

Landscapes

  • Engineering & Computer Science (AREA)
  • Manufacturing & Machinery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Navigation (AREA)

Abstract

The invention provides a vehicle-mounted inertial navigation mounting angle error compensation method based on pattern recognition. First, the inertial navigation device is mounted on a vehicle, the six-axis data of the inertial measurement unit (IMU) are read, the gyroscope z-axis output is projected onto the vehicle body frame to obtain the vehicle-body yaw rate, the strapdown inertial navigation solution is updated with the IMU data, and the vehicle motion mode is identified. Observation fusion is then carried out with GNSS. Finally, the vehicle speed is fused through a Kalman filter, and the mounting angle error is estimated in real time. Compared with the prior art, the method lowers the barrier to GNSS/INS integrated navigation: with the inertial navigation device merely fixed to the vehicle, mounting angle compensation is completed automatically while driving. It also provides an accurate, real-time vehicle motion state identification method, so that different observation fusion can be performed for different vehicle motion states, ensuring that the inertial device mounting angle estimate is not corrupted by erroneous observations and improving both the robustness and the accuracy of the estimate.

Description

Vehicle-mounted inertial navigation mounting angle error compensation method based on pattern recognition
Technical Field
The invention relates to the field of positioning navigation, in particular to a vehicle-mounted inertial navigation mounting angle error compensation method based on pattern recognition.
Background
In the field of positioning and navigation, a Global Navigation Satellite System (GNSS) and an Inertial Navigation System (INS) are strongly complementary; combining them compensates for each other's weaknesses and yields more accurate positioning information.
When integrated navigation is applied in a vehicle-mounted environment, the inertial navigation device cannot be perfectly aligned with the vehicle body coordinate system at installation, which affects the final attitude, velocity and position solutions. The invention provides an inertial navigation mounting angle error compensation method based on vehicle mode identification. The method introduces the vehicle speed provided over the Controller Area Network (CAN) as an additional observation for the integrated navigation. Because the vehicle-speed observation is unreliable under working conditions such as turning and low speed, the invention estimates the mounting angle of the inertial device from the CAN vehicle speed and GNSS observations on top of vehicle motion state identification.
Disclosure of Invention
In order to compensate inertial navigation mounting angle errors and improve integrated navigation positioning accuracy, the invention provides a vehicle-mounted inertial navigation mounting angle error compensation method based on pattern recognition. First, the vehicle motion mode is identified; second, observation fusion is performed with GNSS; finally, in the high-speed straight-driving motion mode, the vehicle speed is fused through a Kalman filter and the mounting angle error is estimated in real time. Compared with existing methods, this method is more robust and converges faster.
The invention adopts the following technical scheme:
step S1 is to install the inertial navigation device in the test car and perform system initialization.
Step S2, reading six-axis data of an Inertial Measurement Unit (IMU) in real time, including x, y and z-axis data of a gyroscope and an accelerometer, and performing sliding window mean preprocessing on the original data.
Step S3: while the vehicle is stationary, compute the roll installation angle θ_r and pitch installation angle θ_p between the inertial navigation device and the vehicle body from the IMU accelerometer data; the heading installation angle θ_y is taken to be 0. Convert θ_r, θ_p and θ_y into the transformation matrix C_b^v between the inertial navigation device frame and the vehicle body frame via the Euler-angle-to-attitude-matrix formula.
Step S4: using C_b^v, project the gyroscope z-axis output wbz^b onto the vehicle body frame to obtain the vehicle-body yaw rate wbz^v:

wbz^v = [C_b^v ω^b]_3

where ω^b is the gyroscope output vector and [·]_3 denotes the third component.
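Steps S3–S4 above can be sketched in numpy. The axis conventions (ZYX Euler order, z-up accelerometer) and the sample values are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def static_align(acc):
    """Roll/pitch from one static accelerometer sample (gravity only), step S3."""
    ax, ay, az = acc
    theta_r = np.arctan2(ay, az)                 # roll installation angle
    theta_p = np.arctan2(-ax, np.hypot(ay, az))  # pitch installation angle
    return theta_r, theta_p, 0.0                 # heading installation angle taken as 0

def euler_to_dcm(r, p, y):
    """Euler angles -> attitude (rotation) matrix; ZYX convention is an assumption."""
    cr, sr = np.cos(r), np.sin(r)
    cp, sp = np.cos(p), np.sin(p)
    cy, sy = np.cos(y), np.sin(y)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

# step S4: project the gyro output into the vehicle frame, keep the z (yaw) component
acc = np.array([0.02, -0.04, 9.80])      # hypothetical static sample, m/s^2
C_bv = euler_to_dcm(*static_align(acc))  # device frame -> vehicle body frame
gyro = np.array([0.001, -0.002, 0.05])   # hypothetical gyro output, rad/s
wbz_v = (C_bv @ gyro)[2]                 # vehicle-body yaw rate
```

For small installation angles C_bv is close to identity, so wbz_v stays close to the raw z-axis reading.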
Step S5: update the strapdown inertial navigation solution with the IMU data to obtain the updated attitude C_IMU, position p_IMU and velocity v_IMU.
Step S6: when a GNSS signal is received, perform information fusion through a Kalman filter. The Kalman filtering process comprises filter initialization, one-step prediction and measurement update. Initialization covers the error state vector, the system covariance matrix and the posterior covariance matrix. The one-step prediction is obtained from the strapdown inertial navigation error propagation equation. In the measurement update, the differences between the GNSS-output three-dimensional velocity and the velocity updated in step S5, and between the GNSS-output position and the position updated in step S5, serve as the observations, yielding the corrected error state vector. Subtracting the Kalman-estimated error state from the attitude, velocity and position output by the strapdown update of step S5 gives more accurate attitude, velocity and position information.
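The fuse-and-correct cycle of step S6 follows the standard Kalman measurement update; the sketch below is generic, and the toy 18-dimensional state, H, z and R values are illustrative, not the patent's matrices:

```python
import numpy as np

def kf_update(x_prior, P_prior, z, H, R):
    """One Kalman measurement update: returns the posterior state and covariance."""
    S = H @ P_prior @ H.T + R                  # innovation covariance
    K = P_prior @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_post = x_prior + K @ (z - H @ x_prior)   # corrected error state
    P_post = (np.eye(len(x_prior)) - K @ H) @ P_prior
    return x_post, P_post

# toy 18-dim error state, 6-dim GNSS observation (velocity + position residuals)
n = 18
x = np.zeros(n)
P = np.eye(n)
H = np.zeros((6, n))
H[:3, 3:6] = np.eye(3)   # velocity-residual rows pick the velocity-error block
H[3:, 6:9] = np.eye(3)   # position-residual rows pick the position-error block
z = np.array([0.1, 0.0, -0.1, 1.0, 2.0, 0.5])  # v_gnss - v_IMU, p_gnss - p_IMU
x_post, P_post = kf_update(x, P, z, H, np.eye(6) * 0.01)
```

With identity prior covariance the update simply pulls the selected error-state components toward the residuals.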
The step S6 includes the steps of:
Step S6-1: initialize the Kalman filter, comprising the error state vector X, the system noise covariance Q and the posterior covariance P.
The error state vector X is taken as an 18-dimensional vector:

X = [φ δv^n δp^n b_ω b_a δk δθ_p δθ_y]^T

where φ is the estimation error of the attitude in the navigation coordinate frame, δv^n the estimation error of the velocity in the navigation frame, δp^n the estimation error of the position in the navigation frame, b_ω and b_a the dynamic zero biases of the gyroscope and the accelerometer, δk the scale factor error of the CAN vehicle speed, δθ_p the pitch error of the inertial navigation device mounting angle, and δθ_y the heading error of the inertial navigation device mounting angle.
The system covariance matrix Q is:

Q = diag([IMU_arw IMU_vrw IMU_gpsd IMU_apsd]^2)

where IMU_arw and IMU_vrw are the random walk coefficients of the gyroscope and of the accelerometer, and IMU_gpsd and IMU_apsd are the zero-bias instability coefficients of the gyroscope and the accelerometer, respectively.
The posterior covariance matrix P is:

P = diag([IMU_install GNSS_stdv GNSS_stdp IMU_gpsd IMU_apsd IMU_err]^2)

where IMU_install is the installation error angle, GNSS_stdv the GNSS velocity error variance, GNSS_stdp the GNSS position error variance, and IMU_err is composed of the mounting angle error and the vehicle speed scale factor.
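Under the 18-dimensional layout above, Q and P are plain squared diagonals. The sketch below assembles them; the index map is an editorial convenience, and the numeric values are the ones the embodiment uses in step S6-1:

```python
import numpy as np

# error-state layout: [phi(3), dv(3), dp(3), b_g(3), b_a(3), dk(1), dth_p(1), dth_y(1)]
IDX = {"phi": slice(0, 3), "dv": slice(3, 6), "dp": slice(6, 9),
       "bg": slice(9, 12), "ba": slice(12, 15), "dk": 15, "dth_p": 16, "dth_y": 17}

def diag_sq(entries):
    """diag([...]^2) as written for Q and P in the text."""
    v = np.asarray(entries, dtype=float)
    return np.diag(v ** 2)

# process noise covers the 12 IMU noise channels; values from the embodiment
Q = diag_sq([0.001] * 3 + [0.003] * 3 + [1e-4] * 3 + [0.003] * 3)
# posterior covariance covers all 18 error states; values from the embodiment
P = diag_sq([0.1] * 3 + [1.0] * 3 + [4e-7] * 3 + [0.0] * 6 + [0.1, 0.01, 0.01])
```

The last three P entries seed the CAN scale-factor and the pitch/heading mounting-angle errors.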
Step S6-2: perform the Kalman filtering one-step prediction using the strapdown inertial navigation error propagation equation, obtaining the prior state vector X_prior.
Step S6-3: perform the Kalman filtering measurement update. Construct the GNSS observation Z_gnss and the GNSS observation matrix H_gnss, and update the prior state vector to obtain the posterior state vector X_post.

GNSS observation Z_gnss:

Z_gnss = [v_gnss − v_IMU  p_gnss − p_IMU]^T

where v_gnss is the three-dimensional velocity output by the GNSS, v_IMU the three-dimensional velocity output by the strapdown update of step S5, p_gnss the position output by the GNSS (latitude, longitude and height), and p_IMU the position output by the strapdown update of step S5 (latitude, longitude and height).

GNSS observation matrix H_gnss:

H_gnss = [ O_3 I_3 O_3 O_3 O_3 O_{3×3} ; O_3 O_3 D O_3 O_3 O_{3×3} ]

where O_3 is the 3rd-order zero matrix, I_3 the 3rd-order identity matrix, and

D = diag(1/(R_m + h), 1/((R_n + h)cos L), −1)

with R_m the meridian radius of curvature, R_n the transverse radius of curvature, L the latitude and h the altitude.
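The curvature radii R_m and R_n follow the standard ellipsoid formulas. The sketch below assumes WGS-84 constants, which the patent does not specify:

```python
import math

# WGS-84 constants (an assumption; the patent does not name the ellipsoid)
A = 6378137.0            # semi-major axis, m
E2 = 6.69437999014e-3    # first eccentricity squared

def curvature_radii(lat_rad):
    """Meridian radius R_m and transverse (prime-vertical) radius R_n at latitude L."""
    s2 = math.sin(lat_rad) ** 2
    R_m = A * (1.0 - E2) / (1.0 - E2 * s2) ** 1.5
    R_n = A / math.sqrt(1.0 - E2 * s2)
    return R_m, R_n

R_m, R_n = curvature_radii(math.radians(30.0))
```

Both radii grow toward the poles; at mid-latitudes R_n exceeds R_m by roughly 30 km.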
After the measurement update, the posterior error state vector X_post is obtained:

X_post = [φ_post δv^n_post δp^n_post b_ω,post b_a,post δk_post δθ_p,post δθ_y,post]^T

where each variable carrying the subscript post is the posterior error of the corresponding quantity in the error state vector X.
Step S6-4: correct the attitude C_IMU, position p_IMU and velocity v_IMU obtained by the strapdown update of step S5 by subtracting the attitude error φ_post, position error δp^n_post and velocity error δv^n_post of the posterior state X_post:

C_ins = (I + (φ_post)×) C_IMU
p_ins = p_IMU − δp^n_post
v_ins = v_IMU − δv^n_post

where C_ins, p_ins and v_ins are the attitude, position and velocity estimates obtained by the integrated navigation, respectively.
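The correction of step S6-4 can be sketched as follows. The first-order attitude fix and its sign convention are assumptions, since the original equation survives only as an image:

```python
import numpy as np

def skew(v):
    """Cross-product (skew-symmetric) matrix of a 3-vector."""
    x, y, z = v
    return np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])

def apply_correction(C_imu, p_imu, v_imu, phi_post, dp_post, dv_post):
    """Subtract the posterior errors from the strapdown outputs (step S6-4 sketch).
    The first-order attitude correction and its sign are assumptions."""
    C_ins = (np.eye(3) + skew(phi_post)) @ C_imu
    return C_ins, p_imu - dp_post, v_imu - dv_post

# hypothetical strapdown outputs and posterior errors
C, p, v = apply_correction(np.eye(3),
                           np.array([1.0, 2.0, 3.0]),
                           np.array([10.0, 0.0, 0.0]),
                           np.array([0.0, 0.0, 1e-3]),   # phi_post
                           np.array([0.1, 0.1, 0.1]),    # dp_post
                           np.array([0.5, 0.0, 0.0]))    # dv_post
```

Position and velocity are corrected by plain subtraction; the attitude picks up a small rotation.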
Step S7: receive vehicle speed information over the CAN bus. When vehicle speed information arrives, identify the basic motion state of the vehicle from wbz^v and the vehicle speed; the motion state is divided into a high-speed straight state, a turning state, a parking state and a normal state.
The step S7 includes the steps of:
Step S7-1: apply sliding-window mean processing to wbz^v and the CAN vehicle speed, obtaining the mean-filtered vehicle-body yaw rate wbz^v_filtered and CAN vehicle speed v_filtered.
Step S7-2: when v_filtered is greater than the first vehicle-speed threshold and wbz^v_filtered is smaller than the first angular-velocity threshold and larger than the second angular-velocity threshold, the high-speed straight state is identified; when wbz^v_filtered is larger than the first angular-velocity threshold or smaller than the second angular-velocity threshold, the turning state is identified; when v_filtered is smaller than the second vehicle-speed threshold, the parking state is identified; otherwise, the normal motion state is identified.
The first and second vehicle-speed thresholds and the first and second angular-velocity thresholds are all determined from the vehicle speeds and angular velocities of each driving state commonly used in the field.
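The decision logic of step S7-2 can be sketched as a small classifier. The default thresholds below are the ones given in the embodiment; folding the symmetric positive/negative angular-velocity thresholds into absolute values is an assumption:

```python
def classify_motion(v_filt, wz_filt,
                    v_hi=5.0, v_lo=0.01,           # vehicle-speed thresholds, m/s
                    w_straight=0.05, w_turn=0.1):  # angular-rate thresholds, rad/s
    """Step S7-2 sketch: classify the vehicle motion state from the filtered
    CAN speed and vehicle-body yaw rate. Threshold defaults follow the
    embodiment; the abs() folding of the +/- thresholds is an assumption."""
    if v_filt > v_hi and abs(wz_filt) < w_straight:
        return "high-speed straight"
    if abs(wz_filt) > w_turn:
        return "turning"
    if v_filt < v_lo:
        return "parking"
    return "normal"
```

In the filter, only the "high-speed straight" branch enables mounting-angle estimation; the others fall back to the plain vehicle-speed observation model.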
Step S8: in the high-speed straight state, the CAN vehicle-speed observation model with the mounting angle error added is selected, and the inertial navigation mounting angle error and the CAN vehicle-speed scale error are estimated through the Kalman filter. In the other motion states, the ordinary vehicle-speed observation model is selected, and the other state quantities are corrected through the Kalman filter without affecting the mounting angle error and the CAN vehicle-speed scale factor error.
The step S8 includes the steps of:
In the high-speed straight state, the difference between the velocity estimate of the integrated navigation and the scale-factor-corrected CAN vehicle speed is selected as the observed quantity Z_CAN_k, and the observation matrix H_CAN_install with the mounting angle error added is selected for observation fusion.

Scale-factor-corrected CAN vehicle-speed observation Z_CAN_k:

Z_CAN_k = v_ins − C_ins C_v^b [k·v_D 0 0]^T

where v_ins is the integrated navigation velocity estimate from step S6, a three-dimensional vector with first, second and third components; C_ins is the integrated navigation attitude estimate; C_v^b is the transformation matrix from the vehicle body coordinate frame to the inertial navigation device frame; v_D is the CAN vehicle-speed value; and k is the vehicle-speed scale factor.
Since the Kalman filter result is an attitude error, the corrected attitude, expressed as Euler angles, corresponds to a rotation matrix that is essentially the same as the integrated navigation attitude estimate C_ins.
Observation matrix H_CAN_install with the mounting angle error added:

H_CAN_install = [M_1 M_2 O_3 O_3 O_3 M_3]^T

where M_1, M_2 and M_3 are block matrices given as equation images in the original, (·)× denotes the skew-symmetric (cross-product) matrix, and O_3 is the 3rd-order zero matrix.
In the other motion states, the velocity observation provided by the GNSS during turning or low-speed motion is inaccurate, so the system velocity estimate v_ins is inaccurate as well; the mounting angle of the inertial navigation device is therefore not estimated in these states. The difference between the velocity estimate of the integrated navigation and the CAN vehicle speed without the scale factor is selected as the observed quantity Z_CAN, and the observation matrix H_CAN without the mounting angle error is selected for observation fusion.

Observed quantity Z_CAN:

Z_CAN = v_ins − C_ins C_v^b [v_D 0 0]^T

Observation matrix H_CAN:

H_CAN = [M_1 M_2 O_3 O_3 O_3 O_3]^T
Step S9: correct the current CAN vehicle-speed scale factor and the current mounting angle transformation matrix using the CAN vehicle-speed scale error, the inertial navigation device pitch mounting angle error and the inertial navigation device heading mounting angle error estimated in step S8.
The step S9 includes the steps of:
Step S9-1: correct the CAN vehicle-speed scale factor of the previous step with the estimated scale error δk:

k = k^(−) + δk

Step S9-2: correct the installation-angle coordinate transformation matrix C_b^v of the previous step with the inertial navigation device pitch mounting angle error and heading mounting angle error:

C_b^v = (I − (δθ)×) C_b^v(−),  δθ = [0 δθ_p δθ_y]^T

where C_b^v is the attitude transformation matrix between the inertial navigation device frame and the vehicle body frame, i.e. the matrix representation of the inertial navigation device mounting angle. The C_b^v obtained in step S3 is a preliminary estimate of the transformation matrix; it is continually corrected in step S9 so that it comes closer and closer to the true mounting angle between the inertial navigation device and the vehicle.
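Step S9's two corrections can be sketched as below. The small-angle matrix update with δθ = [0, δθ_p, δθ_y] (roll is not estimated) and its sign are assumptions, since the patent's update equation survives only as an image:

```python
import numpy as np

def correct_installation(k_prev, C_bv_prev, dk, dth_p, dth_y):
    """Step S9 sketch: fold the estimated errors back into the scale factor and
    the installation-angle matrix. The first-order correction with
    delta = [0, dth_p, dth_y] and its sign are assumptions."""
    k_new = k_prev + dk                       # step S9-1: additive scale correction
    delta = np.array([0.0, dth_p, dth_y])
    D = np.array([[0.0, -delta[2], delta[1]],
                  [delta[2], 0.0, -delta[0]],
                  [-delta[1], delta[0], 0.0]])  # skew(delta)
    C_new = (np.eye(3) - D) @ C_bv_prev       # step S9-2: first-order matrix update
    return k_new, C_new

k_new, C_new = correct_installation(1.0, np.eye(3), 0.01, 1e-3, 2e-3)
```

For small angle errors the corrected matrix stays near-orthonormal, so repeated application converges without renormalization drift becoming significant over short horizons.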
Compared with the prior art, the invention has the following beneficial effects:
1. The barrier to GNSS/INS integrated navigation is lowered: the user need not care about the orientation of the inertial navigation device and only needs to fix it to the vehicle, after which mounting angle compensation is completed automatically while driving. The CAN vehicle-speed observation method of the invention, using the CAN vehicle speed that always points along the vehicle's forward direction, estimates and compensates the mounting angle of the inertial device in real time.
2. The robustness and accuracy of inertial device mounting angle calibration are improved: the user need not care about the driving route during calibration, and mounting angle compensation can be completed under any driving condition. The accurate, real-time vehicle motion state identification method performs different observation fusion for different vehicle motion states, ensuring that the inertial device mounting angle estimate is not affected by erroneous observations, thereby improving both the robustness and the accuracy of the estimate.
Drawings
FIG. 1 is a flow chart of vehicle-mounted inertial navigation mounting angle error compensation according to an embodiment of the invention.
FIG. 2 is a diagram of a course angle estimation error in an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the invention clear, the invention is further described below with reference to the drawings and a detailed embodiment. The exemplary embodiments and descriptions are provided to explain the invention and should not be interpreted as limiting it; other embodiments and modifications will be apparent to those skilled in the art.
In this embodiment, the feasibility and advantages of the present invention are verified through a real vehicle, which specifically includes the following steps:
Step S1: install the inertial navigation device on the test vehicle, with the x-axis of the device approximately aligned with the direction of the vehicle head.
Step S2: read the six-axis data of the inertial measurement unit (IMU), comprising the x, y and z-axis data of the gyroscope and accelerometer, at a frequency of 100 Hz. In this embodiment, sliding-window mean preprocessing is applied to the raw gyroscope data wb_xyz and accelerometer data ab_xyz:

w̄b_xyz[n] = (1/N) Σ_{i=n−N+1}^{n} wb_xyz[i]
āb_xyz[n] = (1/N) Σ_{i=n−N+1}^{n} ab_xyz[i]
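The sliding-window mean of step S2 can be sketched as follows; the window length N is a tuning choice the patent does not specify:

```python
from collections import deque

class SlidingMean:
    """Sliding-window mean for scalar IMU samples (step S2 preprocessing).
    The window length n is an assumed tuning parameter."""
    def __init__(self, n=10):
        self.buf = deque(maxlen=n)  # oldest sample drops out automatically

    def update(self, sample):
        self.buf.append(sample)
        return sum(self.buf) / len(self.buf)

f = SlidingMean(n=4)
out = [f.update(x) for x in [1.0, 2.0, 3.0, 4.0, 5.0]]
```

Before the window fills, the mean is taken over however many samples have arrived, so the filter has no startup gap.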
Step S3: while the vehicle is stationary, compute the roll installation angle θ_r and pitch installation angle θ_p between the inertial navigation device and the vehicle body from the IMU accelerometer data; the heading installation angle θ_y is taken to be 0. Convert θ_r, θ_p and θ_y into the attitude transformation matrix C_b^v between the inertial navigation device frame and the vehicle body frame via the Euler-angle-to-attitude-matrix formula.
In this embodiment, the mounting angle θ_install is:

θ_install = [θ_r θ_p θ_y]^T = [0.021 −0.039 0]^T
Step S4: using C_b^v, project the gyroscope z-axis output wbz^b onto the vehicle body frame to obtain wbz^v:

wbz^v = [C_b^v ω^b]_3
Step S5: update with the strapdown inertial navigation algorithm to obtain updated attitude, position and velocity information.
Step S6: when a GNSS signal is received, take the GNSS-output three-dimensional velocity and latitude/longitude information as observations and fuse them with the attitude, position and velocity of the previous step through a Kalman filter, obtaining a more credible integrated navigation result for attitude, velocity and position.
The step S6 includes the steps of:
Step S6-1: initialize the Kalman filter, comprising the error state vector X, the system noise covariance Q and the posterior covariance P.
The error state vector X is taken as an 18-dimensional vector:

X = [φ δv^n δp^n b_ω b_a δk δθ_p δθ_y]^T

where φ is the estimation error of the attitude in the navigation coordinate frame, δv^n the estimation error of the velocity in the navigation frame, δp^n the estimation error of the position in the navigation frame, b_ω and b_a the dynamic zero biases of the gyroscope and the accelerometer, δk the scale factor error of the CAN vehicle speed, δθ_p the pitch error of the inertial navigation device mounting angle, and δθ_y the heading error of the inertial navigation device mounting angle.
In this embodiment, the system covariance matrix Q is:

Q = diag([0.001·I_3 0.003·I_3 1×10⁻⁴·I_3 0.003·I_3]^2)

The posterior covariance matrix P is:

P = diag([0.1·I_3 I_3 4×10⁻⁷·I_3 O_3 O_3 0.1 0.01 0.01]^2)
and S6-2, performing Kalman filtering one-step prediction by using a strapdown inertial navigation error propagation equation.
Step S6-3, updating Kalman filtering measurement, and constructing GNSS observation Z gnss GNSS observation matrix H gnss
GNSS observation Z gnss
Z gnss =[v gnss -v IMU p gnss -p IMU ] T
GNSS observation matrix H gnss
Figure BDA0003695181340000081
Wherein, O 3 Is a zero matrix of order 3, I 3 Is a 3-order unit array which is composed of a plurality of unit arrays,
Figure BDA0003695181340000082
R m is the radius of meridian curvature, R n Is the transverse radius of curvature, L is the latitude, and h is the altitude.
Step S6-4: correct the system state vector with the error estimated by the Kalman filter and output the final positioning and attitude estimates.
Step S7: receive CAN vehicle speed information at a frequency of 10 Hz. When vehicle speed information arrives, identify the basic motion state of the vehicle from wbz^v and the vehicle speed; the motion state is divided into a high-speed straight state, a turning state, a parking state and a normal state.
The step S7 includes the steps of:
Step S7-1: apply sliding-window mean processing to wbz^v and the CAN vehicle speed.
Step S7-2: when the vehicle speed is greater than the first vehicle-speed threshold and the gyroscope z-axis angular velocity is smaller than the first angular-velocity threshold and larger than the second angular-velocity threshold, the high-speed straight state is identified; when the gyroscope z-axis angular velocity is larger than the third angular-velocity threshold or smaller than the fourth angular-velocity threshold, the turning state is identified; when the vehicle speed is smaller than the second vehicle-speed threshold, the parking state is identified; otherwise, the normal motion state is identified.
In this embodiment the following thresholds are adopted: first vehicle-speed threshold 5 m/s, second vehicle-speed threshold 0.01 m/s; first angular-velocity threshold 0.05 rad/s, second angular-velocity threshold −0.05 rad/s, third angular-velocity threshold 0.1 rad/s, fourth angular-velocity threshold −0.1 rad/s.
Step S8: in the high-speed straight state, the CAN vehicle-speed observation model with the mounting angle error added is selected, and the inertial navigation mounting angle error and the CAN vehicle-speed scale error are estimated through the Kalman filter. In the other motion states, the ordinary vehicle-speed observation model is selected, and the other state quantities are corrected through the Kalman filter without affecting the mounting angle error and the CAN vehicle-speed scale error.
The step S8 includes the steps of:
In the high-speed straight state, the difference between the velocity estimate of the integrated navigation and the CAN vehicle speed is selected as the observed quantity Z_CAN_k, and the observation matrix H_CAN_install with the mounting angle error added is selected for observation fusion.

Observed quantity Z_CAN_k:

Z_CAN_k = v_ins − C_ins C_v^b [k·v_D 0 0]^T

where v_ins is the integrated navigation velocity estimate from step S6, C_ins the integrated navigation attitude estimate, C_v^b the transformation matrix from the vehicle body frame to the inertial navigation device frame, v_D the CAN vehicle-speed value, and k the CAN vehicle-speed scale factor.
Observation matrix H_CAN_install:

H_CAN_install = [M_1 M_2 O_3 O_3 O_3 M_3]^T

where M_1, M_2 and M_3 are block matrices given as equation images in the original, (·)× denotes the skew-symmetric (cross-product) matrix, and O_3 is the 3rd-order zero matrix.
In the other motion states, the velocity observation provided by the GNSS during turning or low-speed motion is inaccurate, so the system velocity estimate v_ins is inaccurate as well; the mounting angle of the inertial navigation device is therefore not estimated in these states. The following observed quantity Z_CAN and observation matrix H_CAN are selected for observation fusion.

Observed quantity Z_CAN:

Z_CAN = v_ins − C_ins C_v^b [v_D 0 0]^T

Observation matrix H_CAN:

H_CAN = [M_1 M_2 O_3 O_3 O_3 O_3]^T
Step S9: correct the current CAN vehicle-speed scale factor and the current mounting angle transformation matrix using the CAN vehicle-speed scale error, the inertial navigation device pitch mounting angle error and the inertial navigation device heading mounting angle error estimated in step S8.
The step S9 includes the steps of:
Step S9-1: correct the CAN vehicle-speed scale factor of the previous step with the estimated scale error:

k = k^(−) + δk

Step S9-2: correct the installation-angle coordinate transformation matrix C_b^v of the previous step with the inertial navigation device pitch mounting angle error and heading mounting angle error:

C_b^v = (I − (δθ)×) C_b^v(−),  δθ = [0 δθ_p δθ_y]^T

where C_b^v is the attitude transformation matrix between the inertial navigation device frame and the vehicle body frame, i.e. the matrix representation of the inertial navigation device mounting angle. As time goes by, the estimate of C_b^v comes closer and closer to the true mounting angle between the inertial navigation device and the vehicle.
FIG. 2 shows the heading mounting angle estimation error of this embodiment. The initial heading mounting angle error is 30°; within 100 s of algorithm operation, the heading mounting angle converges to near the true value, with an error angle smaller than 3°. Compared with methods that compute the inertial device mounting angle from accelerometer and geomagnetic observations, the mounting angle compensation method of the invention offers stronger robustness and estimation accuracy.
The above embodiments are preferred embodiments of the invention, but the invention is not limited to them; any change, modification, substitution, combination or simplification that does not depart from the spirit and principle of the invention shall be regarded as an equivalent and is intended to fall within the scope of the invention.

Claims (6)

1. A vehicle-mounted inertial navigation mounting angle error compensation method based on pattern recognition, characterized by comprising the following steps:
step S1, installing the inertial navigation equipment on the vehicle, and carrying out system initialization;
step S2, reading six-axis data of an inertial measurement unit IMU in real time, including x, y and z-axis data of a gyroscope and an accelerometer, and performing sliding window mean value preprocessing on the original data;
step S3: while the vehicle is stationary, computing the roll installation angle θ_r and pitch installation angle θ_p between the inertial navigation device and the vehicle body from the IMU accelerometer data, the heading installation angle θ_y being taken to be 0, and converting θ_r, θ_p and θ_y into the transformation matrix between the inertial navigation device frame and the vehicle body coordinate frame via the Euler-angle-to-attitude-matrix formula;
step S4: projecting the gyroscope z-axis output onto the vehicle body frame using the transformation matrix between the inertial navigation device frame and the vehicle body coordinate frame, obtaining the vehicle-body yaw rate;
step S5: updating the strapdown inertial navigation solution with the IMU data to obtain the updated attitude C_IMU, position p_IMU and velocity v_IMU;
Step S6, when a GNSS signal is received, taking three-dimensional speed and latitude and longitude information output by the GNSS as observation, and carrying out information fusion with the updated attitude, position and speed obtained in the previous step through a Kalman filter to obtain an estimation result of the combined navigation attitude, position and speed;
step S7, receiving the vehicle speed information through the CAN bus, and performing basic motion state recognition of the vehicle with the vehicle body yaw rate and the CAN vehicle speed; the motion state is classified as a high-speed straight-going state, a turning state, a parking state or a normal motion state;
step S8, in the high-speed straight-going state, the CAN vehicle speed observation model selects the observation matrix with the mounting angle error added, and the mounting angle error of the inertial navigation device and the CAN vehicle speed scale factor error are estimated through the Kalman filter; in the other motion states, the CAN vehicle speed observation model selects the observation matrix without the mounting angle error, and the other state quantities are corrected through the Kalman filter without affecting the mounting angle error and the CAN vehicle speed scale factor error;
step S9, correcting the current CAN vehicle speed scale factor and the current mounting-angle transformation matrix with the CAN vehicle speed scale factor error, the pitch mounting angle error and the heading mounting angle error of the inertial navigation device obtained in step S8.
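Steps S1–S3 can be sketched as follows. The claim does not give the leveling formulas or the Euler convention, so the accelerometer-leveling expressions and the Z-Y-X rotation order used here are common-practice assumptions, and the function names are illustrative:

```python
import numpy as np

def static_mount_angles(acc):
    """Estimate roll/pitch mounting angles from one static accelerometer
    sample (m/s^2); heading mounting angle is taken as 0 as in step S3.
    Sign conventions are an assumption, not taken from the patent."""
    ax, ay, az = acc
    theta_r = np.arctan2(-ax, az)                # roll mounting angle
    theta_p = np.arctan2(ay, np.hypot(ax, az))   # pitch mounting angle
    return theta_r, theta_p, 0.0

def euler_to_dcm(r, p, y):
    """Euler angles -> direction cosine matrix, one common Z-Y-X
    convention (the 'Euler-angle-to-attitude-matrix formula')."""
    cr, sr = np.cos(r), np.sin(r)
    cp, sp = np.cos(p), np.sin(p)
    cy, sy = np.cos(y), np.sin(y)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx
```

With a level, stationary IMU reading roughly (0, 0, 9.81), both mounting angles come out near zero, and the resulting matrix is a proper rotation.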
2. The vehicle-mounted inertial navigation mounting angle error compensation method based on pattern recognition according to claim 1, wherein in step S4 the transformation matrix C_b^v between the inertial navigation device frame and the vehicle body frame is used to project the gyroscope output ω^b onto the vehicle body frame, the z component giving the vehicle body yaw rate ω_bz^v, specifically:

ω_bz^v = (C_b^v ω^b)_z
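The projection of claim 2 can be sketched as below, assuming C_b^v is a 3×3 rotation matrix and the full three-axis gyro vector is available (names illustrative):

```python
import numpy as np

def body_yaw_rate(C_bv, gyro_b):
    """Rotate the IMU-frame gyro vector into the vehicle body frame
    and return its z component, i.e. the vehicle body yaw rate."""
    return float((np.asarray(C_bv) @ np.asarray(gyro_b))[2])

# Example: with a 90-degree rotation about x between the frames,
# the IMU y-axis rate appears as the vehicle body yaw rate.
Rx90 = np.array([[1.0, 0.0,  0.0],
                 [0.0, 0.0, -1.0],
                 [0.0, 1.0,  0.0]])
```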
3. The vehicle-mounted inertial navigation mounting angle error compensation method based on pattern recognition according to claim 1, wherein the step S6 specifically comprises the following steps:
step S6-1, initializing a Kalman filter, including initializing an error state vector X, a system noise covariance Q and a posterior covariance P;
the error state vector X is taken as an 18-dimensional vector:

X = [φ^n δv^n δp^n b_ω b_a δk δθ_p δθ_y]^T

where φ^n is the estimation error of the attitude in the navigation coordinate system, δv^n is the estimation error of the velocity in the navigation coordinate system, δp^n is the estimation error of the position in the navigation coordinate system, b_ω and b_a are the dynamic zero biases of the gyroscope and the accelerometer respectively, δk is the CAN vehicle speed scale factor error, δθ_p is the pitch mounting angle error of the inertial navigation device, and δθ_y is the heading mounting angle error of the inertial navigation device;
the system noise covariance matrix Q is:

Q = diag([IMU_arw IMU_vrw IMU_gpsd IMU_apsd]^2)

where IMU_arw and IMU_vrw are the random walk coefficients of the gyroscope and the accelerometer, and IMU_gpsd and IMU_apsd are the bias instability coefficients of the gyroscope and the accelerometer respectively;
the posterior covariance matrix P is:

P = diag([IMU_install GNSS_stdv GNSS_stdp IMU_gpsd IMU_apsd IMU_err]^2)

where IMU_install is the mounting error angle, GNSS_stdv is the GNSS velocity error standard deviation, GNSS_stdp is the GNSS position error standard deviation, and IMU_err consists of the mounting angle error angle and the vehicle speed scale factor;
step S6-2, performing the Kalman one-step prediction with the strapdown inertial navigation error propagation equation to obtain the prior state X_prior;
step S6-3, Kalman filter measurement update: constructing the GNSS observation Z_gnss and the GNSS observation matrix H_gnss, and performing the measurement update on the prior state X_prior to obtain the posterior error state vector X_post;
the GNSS observation Z_gnss is:

Z_gnss = [v_gnss − v_IMU  p_gnss − p_IMU]^T

where v_gnss is the three-dimensional velocity output by the GNSS, v_IMU is the strapdown inertial navigation updated velocity of step S5, p_gnss is the position output by the GNSS, and p_IMU is the strapdown inertial navigation updated position of step S5;
the GNSS observation matrix H_gnss is:

H_gnss = [O_3 I_3 O_3 O_3 O_3 O_3; O_3 O_3 M_p O_3 O_3 O_3]

where O_3 is the 3rd-order zero matrix (the last block spanning the scalar states δk, δθ_p and δθ_y), I_3 is the 3rd-order identity matrix,

M_p = diag(1/(R_m + h), 1/((R_n + h)·cos L), 1)

R_m is the meridian radius of curvature, R_n is the transverse radius of curvature, L is the latitude, and h is the altitude;
after the measurement update, the posterior error state vector X_post is obtained:

X_post = [φ^n_post δv^n_post δp^n_post b_ω_post b_a_post δk_post δθ_p_post δθ_y_post]^T

where each variable with the subscript post is the posterior error of the corresponding variable in the error state vector X;
step S6-4, subtracting the attitude error φ^n_post, position error δp^n_post and velocity error δv^n_post of the posterior error state vector X_post from the attitude C_b^n, position p_IMU and velocity v_IMU obtained by the strapdown inertial navigation update of step S5:

C_b^n(ins) = (I_3 − (φ^n_post ×)) C_b^n
p_ins = p_IMU − δp^n_post
v_ins = v_IMU − δv^n_post

where C_b^n(ins), p_ins and v_ins are the integrated-navigation attitude, position and velocity estimates respectively.
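The measurement update of step S6-3 is the standard Kalman form; a generic sketch follows, where the dimensions and the GNSS measurement noise covariance R are illustrative (the 18-dimensional error state and the Z_gnss/H_gnss of claim 3 would be passed in):

```python
import numpy as np

def kf_update(x_prior, P_prior, z, H, R):
    """Standard Kalman measurement update: innovation covariance,
    gain, state correction, covariance correction."""
    S = H @ P_prior @ H.T + R               # innovation covariance
    K = P_prior @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_post = x_prior + K @ (z - H @ x_prior)
    P_post = (np.eye(len(x_prior)) - K @ H) @ P_prior
    return x_post, P_post
```

With a near-noiseless direct observation (H = I, R ≈ 0) the posterior state snaps to the measurement, which is a quick sanity check of the update.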
4. The vehicle-mounted inertial navigation mounting angle error compensation method based on pattern recognition as claimed in claim 1, characterized in that:
the step S7 includes the steps of:
step S7-1, applying sliding-window mean processing to ω_bz^v and the CAN vehicle speed to obtain the mean-filtered vehicle body yaw rate ω_bz^v_filtered and the mean-filtered CAN vehicle speed v_filtered;
step S7-2, when v_filtered is greater than the first vehicle speed threshold and ω_bz^v_filtered lies between the second and the first angular rate threshold, the high-speed straight-going state is determined; when ω_bz^v_filtered is greater than the first angular rate threshold or less than the second angular rate threshold, the turning state is determined; when v_filtered is less than the second vehicle speed threshold, the parking state is determined; otherwise the normal motion state is determined.
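The decision order of step S7-2 can be sketched as below. The threshold values are illustrative, not taken from the patent; the second angular rate threshold is taken as the negative bound so that a small |yaw rate| means "straight":

```python
def classify_motion(v_filt, w_filt,
                    v_thr1=10.0, v_thr2=0.1,     # speed thresholds, m/s (assumed)
                    w_thr1=0.05, w_thr2=-0.05):  # yaw-rate thresholds, rad/s (assumed)
    """Step S7-2 decision cascade: high-speed straight first, then
    turning, then parking, else normal motion."""
    if v_filt > v_thr1 and w_thr2 < w_filt < w_thr1:
        return "high_speed_straight"
    if w_filt > w_thr1 or w_filt < w_thr2:
        return "turning"
    if v_filt < v_thr2:
        return "parking"
    return "normal"
```

Checking the cascade order matters: a fast vehicle with a large yaw rate must fall through to "turning", not "high_speed_straight".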
5. The vehicle-mounted inertial navigation mounting angle error compensation method based on pattern recognition according to claim 1, wherein the step S8 is specifically as follows:
in the high-speed straight-going state, the difference between the integrated-navigation velocity estimate and the CAN vehicle speed with the scale factor applied is selected as the observation Z_CAN_k, and the observation matrix H_CAN_install with the mounting angle error added is selected for the observation fusion;
the observation Z_CAN_k is:

Z_CAN_k = v^v_ins − [k·v_D 0 0]^T,  with v^v_ins = C_b^v (C_b^n(ins))^T v_ins

where v_ins is the integrated-navigation velocity estimate of step S6, v^v_ins is the three-dimensional vector obtained by projecting v_ins into the vehicle body frame, with first, second and third components v^v_1, v^v_2 and v^v_3, C_b^n(ins) is the integrated-navigation attitude estimate, C_b^v is the transformation matrix between the inertial navigation device frame and the vehicle body frame, v_D is the CAN vehicle speed value, and k is the CAN vehicle speed scale factor;
the observation matrix H_CAN_install with the mounting angle error added is:

H_CAN_install = [M_1 M_2 O_3 O_3 O_3 M_3]

where M_1, M_2 and M_3 are the coefficient blocks acting on the attitude error, the velocity error and the (δk, δθ_p, δθ_y) states respectively, (·)× denotes the antisymmetric (skew-symmetric) matrix of a vector, and O_3 is the 3rd-order zero matrix;
in the other motion states, the difference between the integrated-navigation velocity estimate and the CAN vehicle speed without the scale factor applied is selected as the observation Z_CAN, and the observation matrix H_CAN without the mounting angle error is selected for the observation fusion:

the observation Z_CAN is:

Z_CAN = v^v_ins − [v_D 0 0]^T

where v^v_ins is the integrated-navigation velocity estimate projected into the vehicle body frame;

the observation matrix H_CAN is:

H_CAN = [M_1 M_2 O_3 O_3 O_3 O_3]
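The state-dependent switch of claim 5 amounts to zeroing the observation-matrix block that acts on (δk, δθ_p, δθ_y) outside the high-speed straight state, so those states are only updated when they are observable. A sketch, assuming the 18-dimensional error state of claim 3 and treating M_1, M_2, M_3 as given 3×3 blocks:

```python
import numpy as np

O3 = np.zeros((3, 3))

def can_obs_matrix(M1, M2, M3, high_speed_straight):
    """Assemble the 3x18 CAN-speed observation matrix: the last block
    (columns for δk, δθ_p, δθ_y) is M3 in the high-speed straight
    state and zero in all other motion states."""
    tail = M3 if high_speed_straight else O3
    return np.hstack([M1, M2, O3, O3, O3, tail])
```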
6. The vehicle-mounted inertial navigation mounting angle error compensation method based on pattern recognition as claimed in claim 1, wherein the step S9 specifically includes the following steps:
step S9-1, correcting the CAN vehicle speed scale factor k with the CAN vehicle speed scale factor error:

k = k^(−) + δk

where k^(−) is the CAN vehicle speed scale factor at the previous moment, k is the corrected CAN vehicle speed scale factor, and δk is the CAN vehicle speed scale factor error;
step S9-2, correcting the transformation matrix C_b^v between the inertial navigation device frame and the vehicle body frame with the pitch mounting angle error δθ_p and the heading mounting angle error δθ_y of the inertial navigation device:

C_b^v = (I_3 − ([0 δθ_p δθ_y]^T ×)) C_b^v(−)

where I_3 is the 3rd-order identity matrix and C_b^v(−) is the transformation matrix between the inertial navigation device frame and the vehicle body frame at the previous moment.
CN202210677046.1A 2022-06-15 2022-06-15 Vehicle-mounted inertial navigation mounting angle error compensation method based on pattern recognition Pending CN114935345A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210677046.1A CN114935345A (en) 2022-06-15 2022-06-15 Vehicle-mounted inertial navigation mounting angle error compensation method based on pattern recognition


Publications (1)

Publication Number Publication Date
CN114935345A true CN114935345A (en) 2022-08-23

Family

ID=82868916

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210677046.1A Pending CN114935345A (en) 2022-06-15 2022-06-15 Vehicle-mounted inertial navigation mounting angle error compensation method based on pattern recognition

Country Status (1)

Country Link
CN (1) CN114935345A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116718196A (en) * 2023-08-09 2023-09-08 腾讯科技(深圳)有限公司 Navigation method, device, equipment and computer readable storage medium
CN116718196B (en) * 2023-08-09 2023-10-20 腾讯科技(深圳)有限公司 Navigation method, device, equipment and computer readable storage medium

Similar Documents

Publication Publication Date Title
CN110501024B (en) Measurement error compensation method for vehicle-mounted INS/laser radar integrated navigation system
CN111024064B (en) SINS/DVL combined navigation method for improving Sage-Husa adaptive filtering
CN112505737B (en) GNSS/INS integrated navigation method
CN113063429B (en) Self-adaptive vehicle-mounted integrated navigation positioning method
CN109870173A (en) A kind of track correct method of the submarine pipeline inertial navigation system based on checkpoint
CN110954102B (en) Magnetometer-assisted inertial navigation system and method for robot positioning
CN108387236B (en) Polarized light SLAM method based on extended Kalman filtering
CN111536972B (en) Vehicle-mounted DR navigation method based on odometer scale factor correction
CN109470276B (en) Odometer calibration method and device based on zero-speed correction
CN112432642B (en) Gravity beacon and inertial navigation fusion positioning method and system
CN109612460B (en) Plumb line deviation measuring method based on static correction
US20230182790A1 (en) Method for calculating an instantaneous velocity vector of a rail vehicle and corresponding system
JP5164645B2 (en) Method and apparatus for repetitive calculation control in Kalman filter processing
CN115060257B (en) Vehicle lane change detection method based on civil-grade inertia measurement unit
CN114964222A (en) Vehicle-mounted IMU attitude initialization method, and mounting angle estimation method and device
CN112284415A (en) Odometer scale error calibration method, system and computer storage medium
CN115200578A (en) Polynomial optimization-based inertial-based navigation information fusion method and system
CN114136310B (en) Autonomous error suppression system and method for inertial navigation system
CN113008229B (en) Distributed autonomous integrated navigation method based on low-cost vehicle-mounted sensor
CN113932806B (en) High-speed aircraft inertia/geomagnetic matching search area self-adaptive combined navigation method
CN114935345A (en) Vehicle-mounted inertial navigation mounting angle error compensation method based on pattern recognition
CN111220151B (en) Inertia and milemeter combined navigation method considering temperature model under load system
CN113074757A (en) Calibration method for vehicle-mounted inertial navigation installation error angle
CN114111840B (en) DVL error parameter online calibration method based on integrated navigation
CN111256708A (en) Vehicle-mounted integrated navigation method based on radio frequency identification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination