CN113703025A - GNSS (global navigation satellite system) multiple failure states oriented vehicle positioning error intelligent prediction method - Google Patents


Info

Publication number: CN113703025A (application CN202110972521.3A); granted publication: CN113703025B
Authority: CN (China)
Prior art keywords: model, subtask, positioning error, GNSS, training
Legal status: Active; granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 徐启敏, 阮国星, 李旭
Current and original assignee: Southeast University (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Application filed by Southeast University; priority to CN202110972521.3A

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01S — RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 — Satellite radio beacon positioning systems; determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 — Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 — Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42 — Determining position
    • G01S19/45 — Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • G01S19/47 — Determining position by combining measurements with a supplementary measurement, the supplementary measurement being an inertial measurement, e.g. tightly coupled inertial
    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T — CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 — Road transport of goods or passengers
    • Y02T10/10 — Internal combustion engine [ICE] based vehicles
    • Y02T10/40 — Engine management systems


Abstract

The invention discloses an intelligent vehicle positioning error prediction method oriented to multiple GNSS failure states. The method first divides the failure states into four types, with 0, 1, 2 and 3 satellites, according to the number of visible satellites when the GNSS fails, and, exploiting the dependence of positioning errors on historical information, establishes an LSTM-based deep learning network for the positioning error prediction task of each failure state; then, drawing on the idea of multi-task learning, it fully exploits the similarity between the subtask models of the different failure states and trains them jointly with a soft parameter sharing mechanism to improve the generalization ability of each subtask model; finally, the trained intelligent vehicle positioning error prediction model selects the corresponding subtask model according to the number of inputs, which is determined by the actual number of visible satellites, and outputs a predicted value of the positioning error. The method fully utilizes the information of the partially visible satellites in GNSS failure states, generalizes well, and predicts positioning errors accurately.

Description

GNSS (global navigation satellite system) multiple failure states oriented vehicle positioning error intelligent prediction method
Technical Field
The invention relates to an intelligent prediction method for vehicle positioning errors, in particular to an intelligent prediction method for vehicle positioning errors under multiple GNSS failure states, and belongs to the field of vehicle navigation and positioning.
Background
In the field of vehicle positioning, the combined INS/GNSS positioning system is widely applied; the INS is an autonomous navigation system and is not disturbed by external signals, but it accumulates errors easily, so its positioning accuracy degrades continuously as positioning proceeds; the GNSS can provide information such as the three-dimensional absolute position and velocity of the vehicle in real time, around the clock; in general, the GNSS requires 4 satellites to position a vehicle in real time: three satellites determine the three-dimensional position, and a fourth is needed to resolve the clock bias between the synchronized satellite clocks and the receiver clock; when the vehicle is shadowed by surrounding high buildings or enters a tunnel, signals from 4 satellites cannot be received and the GNSS cannot position in real time;
a common solution to this is to introduce other auxiliary sensors; the method comprises the steps that a model is established through a multi-sensor fusion algorithm to assist INS in positioning, common auxiliary sensors comprise a vision sensor, a laser radar and the like, the vision sensor can calculate the distance and the angle between the vision sensor and a known object in an actual environment according to the size and the position of the pixel of the object on an image, and the vision sensor has the defects that the vision sensor is easily influenced by the surrounding environment and the imaging quality is unstable; the laser radar transmits laser pulses to surrounding objects and then receives the reflected laser pulses to acquire information such as distances and angles of the surrounding objects, and compared with a visual sensor, the laser radar is less affected by the external environment, but the laser radar is high in cost and difficult to widely apply; although the auxiliary sensor can correct the positioning error of the INS to a certain extent, due to lack of observation and update of absolute position information, an accumulated error still exists in a corrected positioning result;
with the rapid development of artificial intelligence, and because machine learning algorithms have strong self-learning and nonlinear mapping capabilities, a machine learning model of the accumulated error during GNSS failure can be established, introduced into the fusion positioning system, and used to predict and compensate the accumulated error of the positioning system during the outage, thereby reducing the positioning error when the GNSS fails; existing error prediction methods simply classify the GNSS state as valid or failed: when the number of visible satellites is greater than or equal to 4, the GNSS is valid and can provide three-dimensional position and velocity information to the vehicle; when the number of visible satellites is less than 4, the GNSS is in a failure state, and although the vehicle may still receive some satellite signals, the vehicle position cannot be resolved and positioning must rely on the INS alone; because of this limitation of the conventional methods, part of the received satellite information may not be utilized effectively.
The present method refines the GNSS failure states further, dividing them into 4 different failure states according to the number of satellites the GNSS can currently receive, namely the failure states in which the GNSS receives 0, 1, 2 and 3 visible satellites; next, a subtask model is established for each GNSS failure state with a deep learning network; considering that GNSS observations are sequence data, and that among deep learning architectures the RNN processes sequence data particularly well, the positioning error prediction model is built with an RNN; in practical RNN applications the LSTM variant is mostly adopted, which avoids the difficulty a plain RNN has in learning features separated by long intervals owing to vanishing and exploding gradients; compared with a plain RNN, which only propagates its state forward, the LSTM adds gating so that the network can accumulate information over a longer duration and reach a convergent state; furthermore, the subtask models of the 4 GNSS failure states are similar: each model outputs an error prediction, and the model inputs are alike (differing only in the data dimension of the GNSS data, which varies with the number of visible satellites), so the 4 subtasks can be trained simultaneously with multi-task learning; Soft parameter sharing is adopted in the multi-task learning, which effectively relieves the insufficient generalization that would result from training the 4 subtask models independently on small sample sets.
Disclosure of Invention
The invention aims to provide an intelligent prediction method for vehicle positioning errors under multiple GNSS failure states; the method classifies the GNSS failure states according to the number of received satellites, establishes a positioning error prediction model for each failure state with an LSTM network, and improves the generalization performance of the models with a multi-task learning mechanism; finally, the trained model can decide, from the actual number of visible satellites, which subtask model the current working condition belongs to, and obtain an accurate positioning error prediction value.
The technical scheme adopted by the invention is as follows: an intelligent prediction method for vehicle positioning errors under multiple GNSS failure states, in which the partially visible satellite information during GNSS failure is considered in depth, the different GNSS failure states are modeled, an LSTM deep learning network is designed for each, and parameter-shared training is carried out following the idea of multi-task learning, improving the generalization ability of each subtask model; the specific steps are as follows:
Step one: determining the inputs and outputs of the model
The failure states are divided into 4 types, with 0, 1, 2 and 3 satellites, according to the number of visible satellites when the GNSS fails, and the positioning error prediction model is accordingly divided into 4 subtask models corresponding to the positioning error prediction tasks for 0, 1, 2 and 3 satellites; according to the training task of each subtask model, the output quantity is the positioning error in the GNSS failure state; the input quantities are the main factors influencing the positioning error, chiefly the azimuth and elevation angles that characterize the satellite distribution; in addition, considering the time-dependent character of the positioning error, the failure duration T is also taken as an input; note that the number of inputs differs between subtasks: with 0 visible satellites there are no satellite azimuth and elevation angles among the inputs, while with several visible satellites the azimuth and elevation angles of every satellite are all taken as inputs;
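As a minimal sketch of this input/output definition, the per-state input vector can be assembled as below; the names (`build_input`, `failure_time`) and the flat-vector layout are illustrative assumptions, not the patent's actual implementation:

```python
# Hypothetical sketch: assemble the input vector for one failure-state
# subtask from the failure duration T and the per-satellite angles.
def build_input(visible_sats, failure_time):
    """visible_sats: list of (azimuth_deg, elevation_deg) tuples,
    one per visible satellite (0 to 3 entries).
    Returns the flat input vector for the matching subtask model."""
    if not 0 <= len(visible_sats) <= 3:
        raise ValueError("a failure state has at most 3 visible satellites")
    vec = [failure_time]          # failure duration T is always an input
    for az, el in visible_sats:   # each satellite adds (azimuth, elevation)
        vec.extend([az, el])
    return vec                    # length 1, 3, 5, or 7

def subtask_id(input_vec):
    # subtask index = number of visible satellites, recoverable from the
    # input length: 1 + 2 * n_satellites
    return (len(input_vec) - 1) // 2
```

The input length alone therefore identifies which of the 4 subtask models applies, which is how the trained model dispatches at prediction time.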
step two: making training samples
A large number of accurate samples is the basis of efficient model training; the inputs of the 4 subtasks vary with the number of visible satellites, but the outputs are all the positioning error to be predicted, and the ground truth of a training sample is the difference between the positioning result while the GNSS has not failed and the positioning result produced under the 4 different failure states; hence, from truth data obtained while the GNSS is working, the data samples required by the 4 subtask models can be obtained by simulating the different GNSS failure states; because the invention adopts a multi-task learning mechanism, each subtask model obtains additional information from the other subtask models during training, so a model based on multi-task learning learns a better hidden-layer representation than a single-task model and compensates to some extent for the lack of training data, saving time and economic cost;
meanwhile, in order to ensure the generalization performance of the trained model, the number of samples of 4 subtasks is approximately equal, and rather, great differences are not suitable to be formed;
the samples in the present invention were obtained by two methods: the first method is to collect sample data in a real shielding environment, solve the position of the vehicle by using a traditional tight coupling algorithm, acquire a positioning error by using a high-precision optical fiber integrated navigation system as a reference, and record related sensor data required by input quantity; the second method is to collect data of each sensor in an open area and adopt a method of simulating partial satellite failure afterwards to obtain training samples required by each subtask;
step three: model structural design and training
The subtask model structure and the multi-task coordination mechanism are each designed and optimized according to the characteristics of the positioning error prediction task;
(1) design of subtasks
In practical application, an LSTM network is usually adopted to overcome the difficulty a plain RNN has in learning features separated by long intervals owing to vanishing and exploding gradients; compared with a plain RNN, which only propagates its state forward, the LSTM adds gating so that the network can accumulate information over a longer duration and reach a convergent state; concatenating the current input of the LSTM with the state passed from the previous time step and multiplying by the weights yields the following formulas:

$$\tilde{s}_i^{(t)} = \sigma\Big(b_i + \sum_j U_{i,j}\, x_j^{(t)} + \sum_j W_{i,j}\, h_j^{(t-1)}\Big) \tag{1}$$

$$g_i^{(t)} = \sigma\Big(b_i^g + \sum_j U_{i,j}^g\, x_j^{(t)} + \sum_j W_{i,j}^g\, h_j^{(t-1)}\Big) \tag{2}$$

$$q_i^{(t)} = \sigma\Big(b_i^o + \sum_j U_{i,j}^o\, x_j^{(t)} + \sum_j W_{i,j}^o\, h_j^{(t-1)}\Big) \tag{3}$$

$$f_i^{(t)} = \sigma\Big(b_i^f + \sum_j U_{i,j}^f\, x_j^{(t)} + \sum_j W_{i,j}^f\, h_j^{(t-1)}\Big) \tag{4}$$

where $x^{(t)}$ is the input vector at time t, corresponding to the time-varying GNSS observations during training, and $h^{(t)}$ is the current hidden-layer vector, containing the outputs of all LSTM units; in formula (1), $\tilde{s}_i^{(t)}$ is the computed candidate input, i denotes the i-th LSTM unit, and b, U and W are the bias, input weight and recurrent weight of the LSTM unit; in formula (2), $g_i^{(t)}$ is the external input gate, with bias $b^g$, input weight $U^g$ and recurrent weight $W^g$; in formula (3), $q_i^{(t)}$ is the output gate, which controls the output of the LSTM cell, with bias $b^o$, input weight $U^o$ and recurrent weight $W^o$; in formula (4), $b^f$, $U^f$ and $W^f$ are the bias, input weight and recurrent weight of the forget gate, $\sigma$ is an activation function, and the forget gate $f_i^{(t)}$ controls the self-recurrent weight of unit i at time t, selecting which information is to be forgotten; the long-term memory state memorizes the influence of earlier error states on the current error during a long GNSS outage, and the short-term memory state does so during a short one; the LSTM achieves selective memory through

$$s_i^{(t)} = f_i^{(t)}\, s_i^{(t-1)} + g_i^{(t)}\, \tilde{s}_i^{(t)}$$

which selects and memorizes the input $x^{(t)}$, while

$$h_i^{(t)} = \tanh\big(s_i^{(t)}\big)\, q_i^{(t)}$$

is the information passed to the next state;
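One step of this gated update can be sketched in NumPy as below; the dictionary-of-weights layout and the use of tanh for the candidate activation are assumptions for illustration, not the patent's actual network code:

```python
import numpy as np

def lstm_step(x, h_prev, s_prev, p):
    """One LSTM time step. p holds biases b*, input weights U* and
    recurrent weights W* for the candidate input ("" suffix), external
    input gate ("g"), output gate ("o") and forget gate ("f")."""
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    a = np.tanh(p["b"] + p["U"] @ x + p["W"] @ h_prev)   # candidate input
    g = sig(p["bg"] + p["Ug"] @ x + p["Wg"] @ h_prev)    # external input gate
    q = sig(p["bo"] + p["Uo"] @ x + p["Wo"] @ h_prev)    # output gate
    f = sig(p["bf"] + p["Uf"] @ x + p["Wf"] @ h_prev)    # forget gate
    s = f * s_prev + g * a          # selective memory of the input x
    h = np.tanh(s) * q              # information passed to the next state
    return h, s
```

Iterating `lstm_step` over the sequence of GNSS observations during an outage accumulates the long- and short-term memory states the text describes.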
(2) multitask learning
Four LSTM networks with similar structures need to be trained, and their inputs change with the number of visible satellites: with 0 satellites there are no GNSS observations at all, with one satellite its elevation and azimuth angles serve as inputs, and with several satellites the elevation and azimuth angles of every satellite serve as inputs; the outputs are all positioning errors, which means the same loss function can be used to train the 4 LSTM networks, whose inputs differ but which are otherwise similar;
practice shows that in most cases a large amount of input data improves the generalization performance of a model; in this model training, however, considering the time and economic cost, it is difficult to obtain a large amount of training data, since observations under 4 different GNSS failure states must be collected; if each subtask were trained with its own LSTM network independently, the resulting generalization performance could hardly meet the requirements of practical application;
therefore, a training method based on multi-task learning is adopted in this patent; multi-task learning can be viewed as a form of inductive transfer, which helps improve the model by introducing an inductive bias; a deep learning network has two most common ways of performing multi-task learning, namely Soft parameter sharing and Hard parameter sharing; since Hard parameter sharing shares the same hidden layers among all subtasks, it does not suit the actual situation of this method; hence a Soft parameter sharing mechanism is adopted for the multi-task learning here, designed as follows:
1) soft parameter sharing mechanism
According to the characteristics of the 4 subtask models above, the multi-task learning in this method mainly adopts Soft parameter sharing: 4 LSTM deep learning networks with similar structures are designed, the inputs of each network correspond to a different subtask model, and each network has its own parameters; the parameter distance between the subtask models is regularized to keep the parameters of the subtasks similar, and the L2 distance is used; let $\theta_i^j$ be the $j$-th shared hidden layer of the $i$-th subtask model; the regularization term is

$$\Omega_j = \sum_{i=1}^{n}\sum_{k=i+1}^{n} \big\lVert \theta_i^{\,j} - \theta_k^{\,j} \big\rVert_2^2$$

where n is the number of subtasks in the multi-task-learning positioning error prediction model, 4 in total, and j indexes the hidden parameter layers that all the subtask models must share;
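The soft-sharing penalty can be sketched as below; the pairwise form over all subtask pairs is an assumption (the patent figures do not show the exact pairing), and `thetas[i][j]` stands for hidden layer j of subtask model i:

```python
import itertools
import numpy as np

def soft_sharing_penalty(thetas, shared_layers):
    """Sum of squared L2 distances between corresponding shared hidden
    layers of every pair of subtask models."""
    penalty = 0.0
    for j in shared_layers:                        # each shared hidden layer
        for a, b in itertools.combinations(range(len(thetas)), 2):
            diff = thetas[a][j] - thetas[b][j]     # parameter distance
            penalty += float(np.sum(diff * diff))  # squared L2 distance
    return penalty
```

Adding this penalty to the training loss pulls the subtask parameters toward one another without forcing them to be identical, which is the point of Soft (rather than Hard) sharing.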
2) loss function design
Note that the final outputs of the 4 LSTM networks are all of the same kind, namely error predictions, so the same loss function is set to train the networks, optimized by sharing gradients in the shared hidden layers; unlike traditional Soft parameter sharing in multi-task learning, where there is one main task and several auxiliary tasks, in the invention each subtask has the other 3 subtasks as auxiliary tasks during training and also serves as an auxiliary task for the other 3; moreover, the gradient magnitudes of the back-propagated losses of the 4 subtasks may differ, and when they reach the shared hidden-layer part, a subtask with a small gradient magnitude contributes little to the update of the overall positioning error prediction model parameters and is learned insufficiently, so weights are introduced to balance the gradients of the subtasks; the overall loss function is

$$L = \sum_{i=1}^{4} \alpha_i L_i$$

where $\alpha_i$ is the loss weight of the $i$-th subtask, a parameter to be learned, and $L_i$, the loss function of the $i$-th subtask, is

$$L_i = \frac{1}{M_i} \sum_{m=1}^{M_i} \big(\hat{y}_m - y_m\big)^2$$

where $M_i$ is the number of samples of the $i$-th subtask, $\hat{y}_m$ is the predicted positioning error and $y_m$ is the true error value; the overall loss function expands to

$$L = \sum_{i=1}^{4} \frac{\alpha_i}{M_i} \sum_{m=1}^{M_i} \big(\hat{y}_m^{(i)} - y_m^{(i)}\big)^2$$
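The expanded loss, a weighted sum of the per-subtask mean squared errors, can be sketched directly; function and variable names are illustrative assumptions:

```python
import numpy as np

def subtask_loss(y_pred, y_true):
    """L_i: mean squared error over the M_i samples of one subtask."""
    y_pred, y_true = np.asarray(y_pred), np.asarray(y_true)
    return float(np.mean((y_pred - y_true) ** 2))

def overall_loss(preds, trues, alphas):
    """L = sum_i alpha_i * L_i over the 4 subtasks."""
    return sum(a * subtask_loss(p, t) for a, p, t in zip(alphas, preds, trues))
```

In the actual training the `alphas` would be the learned loss weights described above, not fixed constants.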
3) optimization algorithm
The optimization is trained with the back-propagation algorithm; in a back-propagation network, adding tasks affects the updating of the deep learning network parameters, for example an additional task increases the effective learning rate of a hidden layer, depending on the error feedback weight of each task's output; back-propagation networks often add noise to prevent overfitting, for example through regularization terms, and adding a task can also be regarded as adding noise, improving the generalization performance of the network; the weight $\alpha_i$ of the $i$-th subtask is calculated as

$$\alpha_i(t) = \frac{K \exp\big(w_i(t-1)/T\big)}{\sum_{k} \exp\big(w_k(t-1)/T\big)}$$

where t is the current iteration number, T is a parameter controlling the softness of the subtask weights, and K is a hyperparameter with $K = \sum_i \alpha_i(t)$; $w_i(t-1)$ is calculated as

$$w_i(t-1) = \frac{\bar{L}_i(t-1)}{\bar{L}_i(t-2)}$$

where $\bar{L}_i(t)$ is the average loss value of the $i$-th subtask in the $t$-th training iteration;
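This weighting rule (it matches the published dynamic-weight-average scheme for multi-task learning) can be sketched as follows; the default values of K and T are illustrative assumptions:

```python
import math

def subtask_weights(prev_loss, prev_prev_loss, K=4.0, T=2.0):
    """alpha_i(t) from the average losses of the two previous iterations.
    A subtask whose loss shrank slowly (large w_i) gets a larger weight."""
    w = [lp / lpp for lp, lpp in zip(prev_loss, prev_prev_loss)]
    e = [math.exp(wi / T) for wi in w]
    z = sum(e)
    return [K * ei / z for ei in e]   # weights sum to K
```

With equal loss ratios all subtasks receive the weight K/4, so the scheme reduces to plain averaging when the subtasks learn at the same pace.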
(3) model validation
The positioning error prediction model is verified with cross-validation, adopting 10-fold cross-validation repeated 10 times to obtain the positioning error prediction model with the best generalization performance; the model verification steps are as follows:
1) partitioning a data set
Cross-validation is a model selection method in which the whole data set is divided into a training set, a validation set and a test set; the training set trains the model, the validation set evaluates and selects the model, and the test set evaluates the whole training method; this method adopts 10-fold cross-validation repeated 10 times: "10-fold" means that each partition randomly splits the data set required for model training into 10 subsets, of which 9 serve as the training set and 1 as the validation set in one model training; "repeated 10 times" means the whole data set is partitioned 10 times in different ways;
2) verification model
For each partition of the data set, 10 trainings of the same model are carried out, each using the 9 training subsets of that fold, finally yielding 10 positioning error prediction models and the average validation error of each; with 10 different partitions of the model data set, 100 trainings are needed in total;
3) model selection
For each partition and training of the data set, the selected positioning error prediction model is the one with the smallest average validation error among the 10 trainings; after the 10 partitions of the data set, the finally selected model is the one with the smallest validation error over the 10 rounds of model training; the loss value L of the finally selected model is

$$L = \min_{1 \le i \le 10} L^{(i)}$$

where $L^{(i)}$ is the loss value of the $i$-th partition of the data set;
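The repeated 10-fold selection can be sketched as below; `train_eval` is a hypothetical stand-in for one model training that returns a validation error, since the actual LSTM training is out of scope here:

```python
import random

def ten_by_ten_cv(data, train_eval, repeats=10, folds=10, seed=0):
    """Partition the data set `repeats` times into `folds` subsets; for each
    partition run `folds` trainings (9 subsets train, 1 validates) and keep
    the partition with the smallest average validation error L^(i)."""
    rng = random.Random(seed)
    best = (float("inf"), None)
    for i in range(repeats):                  # 10 different partitions
        idx = list(range(len(data)))
        rng.shuffle(idx)
        fold_errs = []
        for k in range(folds):                # 10 trainings per partition
            val = set(idx[k::folds])          # one subset validates
            train = [data[j] for j in idx if j not in val]
            fold_errs.append(train_eval(train, [data[j] for j in val]))
        avg = sum(fold_errs) / folds          # L^(i) for this partition
        if avg < best[0]:
            best = (avg, i)
    return best                               # (min L^(i), chosen partition)
```

The real procedure would additionally retain the trained model of the winning fold rather than only the error value.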
step four: model implementation
After the trained model passes the test, the positioning error prediction model is saved in the SavedModel format; in actual use, the saved models are deployed, the corresponding subtask model is selected according to the number of inputs, which is determined by the actual number of visible satellites, and a forward computation of the deployed subtask model finally yields the predicted value of the positioning error.
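The deployment-time dispatch can be sketched as follows; the `models` mapping and its callable interface are illustrative assumptions standing in for the loaded SavedModel instances:

```python
def predict_error(models, input_vec):
    """Select the subtask model from the input length, which is fixed by
    the actual number of visible satellites (1 + 2 per satellite), and run
    its forward computation."""
    n_sats = (len(input_vec) - 1) // 2     # failure time T + (az, el) pairs
    if n_sats not in models:
        raise KeyError("no subtask model for %d visible satellites" % n_sats)
    return models[n_sats](input_vec)       # forward pass of the chosen model
```

Because each failure state has a distinct input length, no separate state flag needs to be carried alongside the observations.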
The invention has the advantages that:
(1) The model flexibility is high. Compared with the traditional GNSS/INS combined positioning method, which considers only a failed and a valid state, the positioning error prediction model established by the invention considers 4 different GNSS failure states and can fully utilize the information of the partially visible satellites.
(2) The model generalization performance is good. Compared with models trained independently on single tasks, a model adopting the multi-task learning mechanism learns a better internal representation by sharing hidden-layer parameters, thereby improving the generalization performance of the model and obtaining a more accurate positioning error prediction value.
Drawings
FIG. 1 is a flow chart of a positioning error prediction model training process.
Fig. 2 is a general block diagram of a positioning error prediction model.
Detailed Description
An INS/GNSS combined positioning system is widely applied in the field of vehicle positioning; however, owing to the characteristics of the GNSS, it fails under some road conditions, so the positioning system cannot position normally; existing error prediction methods consider only two states, failed and valid, and do not make full use of the partial satellite information received while the GNSS is failed; the present method first establishes a deep learning network for each of the different GNSS failure states; then, considering the characteristics of the GNSS observations, an RNN is used to model each subtask, adopting the LSTM (long short-term memory) variant of the RNN, which avoids the vanishing and exploding gradient problems of a plain RNN; next, the 4 subtask models are trained with the multi-task learning mechanism, sharing their parameters by Soft parameter sharing; FIG. 1 is the flow chart of the positioning error prediction model training, with the following specific steps:
the method comprises the following steps: determining inputs and outputs of a model
The failure states are divided into 4 types, with 0, 1, 2 and 3 satellites, according to the number of visible satellites when the GNSS fails, and the positioning error prediction model is accordingly divided into 4 subtask models corresponding to the positioning error prediction tasks for 0, 1, 2 and 3 satellites; according to the training task of each subtask model, the output quantity is the positioning error in the GNSS failure state; the input quantities are the main factors influencing the positioning error, chiefly the azimuth and elevation angles that characterize the satellite distribution; in addition, considering the time-dependent character of the positioning error, the failure duration T is also taken as an input; note that the number of inputs differs between subtasks: with 0 visible satellites there are no satellite azimuth and elevation angles among the inputs, while with several visible satellites the azimuth and elevation angles of every satellite are all taken as inputs;
step two: making training samples
A large number of accurate samples is the basis of efficient model training; the inputs of the 4 subtask models vary with the number of visible satellites, but the outputs are all the positioning error to be predicted, and the ground truth of a training sample is the difference between the positioning result while the GNSS has not failed and the positioning result produced under the 4 different failure states; hence, from truth data obtained while the GNSS is working, the data samples required by the 4 subtasks can be obtained by simulating the different GNSS failure states; because the invention adopts a multi-task learning mechanism, each subtask model obtains additional information from the other subtask models during training, so a model based on multi-task learning learns a better hidden-layer representation than a single-task model and compensates to some extent for the lack of training data, saving time and economic cost;
meanwhile, in order to ensure the generalization performance of the trained model, the number of samples of the 4 subtask models is approximately equal, and rather, the samples are not suitable to have great differences;
the samples in the present invention were obtained by two methods: the first method is to collect sample data in a real shielding environment, solve the position of the vehicle by using a traditional tight coupling algorithm, acquire a positioning error by using a high-precision optical fiber integrated navigation system as a reference, and record related sensor data required by input quantity; the second method is to collect data of each sensor in an open area and adopt a method of simulating partial satellite failure afterwards to obtain training samples required by each subtask;
step three: model structural design and training
The subtask model structure and the multi-task coordination mechanism are each designed and optimized according to the characteristics of the positioning error prediction task; FIG. 2 is the general block diagram of the positioning error prediction model;
(1) design of subtasks
In practical application, in order to solve the problem that the characteristics with longer intervals are difficult to learn due to gradient extinction and gradient explosion of the common RNN, an LSTM network is usually adopted; compared with the state that the common RNN only transmits forwards, the LSTM can increase gating so that the network can accumulate information in a longer duration and reach a convergence state; using the current input of LSTM and the state passed at the last time for stitching and multiplying by the weight yields the following formula:
$$\tilde{s}_i^{(t)} = \tanh\Big(b_i + \sum_j U_{i,j}\,x_j^{(t)} + \sum_j W_{i,j}\,h_j^{(t-1)}\Big) \quad (1)$$

$$g_i^{(t)} = \sigma\Big(b_i^{g} + \sum_j U_{i,j}^{g}\,x_j^{(t)} + \sum_j W_{i,j}^{g}\,h_j^{(t-1)}\Big) \quad (2)$$

$$o_i^{(t)} = \sigma\Big(b_i^{o} + \sum_j U_{i,j}^{o}\,x_j^{(t)} + \sum_j W_{i,j}^{o}\,h_j^{(t-1)}\Big) \quad (3)$$

$$f_i^{(t)} = \sigma\Big(b_i^{f} + \sum_j U_{i,j}^{f}\,x_j^{(t)} + \sum_j W_{i,j}^{f}\,h_j^{(t-1)}\Big) \quad (4)$$
where $x^{(t)}$ is the input vector at time $t$, corresponding to the time-varying GNSS observations during training, and $h^{(t)}$ is the current hidden-layer vector, containing the outputs of all LSTM units; in formula (1), $\tilde{s}_i^{(t)}$ is the computed input candidate, $i$ denotes the $i$-th LSTM unit, and $b$, $U$, $W$ are the bias, input weight, and recurrent weight of the LSTM unit; in formula (2), $g_i^{(t)}$ is the external input gate, and $b^{g}$, $U^{g}$, $W^{g}$ are its bias, input weight, and recurrent weight; in formula (3), $o_i^{(t)}$ is the output gate, which controls the output of the LSTM cell, and $b^{o}$, $U^{o}$, $W^{o}$ are its bias, input weight, and recurrent weight; in formula (4), $b^{f}$, $U^{f}$, $W^{f}$ are the bias, input weight, and recurrent weight of the forget gate, $\sigma$ is the activation function, and the forget gate $f_i^{(t)}$ is the self-loop weight of unit $i$ at time $t$, controlling which information is forgotten; the long-term memory state memorizes the influence of earlier error states on the current error during a long GNSS outage, and the short-term memory state does so during a short outage; the LSTM achieves selective memory through the following equation:

$$s_i^{(t)} = f_i^{(t)}\, s_i^{(t-1)} + g_i^{(t)}\, \tilde{s}_i^{(t)}$$

where the term $g_i^{(t)}\tilde{s}_i^{(t)}$ selects and memorizes the input $x^{(t)}$, and $s_i^{(t)}$ is the information passed on to the next state;
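Formulas (1) to (4) together with the selective-memory update can be sketched in NumPy as a single LSTM time step. The weight shapes, parameter naming, and the final hidden-output expression are assumptions for illustration, not the patent's implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, s_prev, p):
    """One LSTM time step following formulas (1)-(4) and the memory update.

    x: input vector (GNSS observations at time t)
    h_prev, s_prev: hidden state and cell state from time t-1
    p: parameter dict {b,U,W, bg,Ug,Wg, bo,Uo,Wo, bf,Uf,Wf}
    """
    s_tilde = np.tanh(p['b'] + p['U'] @ x + p['W'] @ h_prev)     # (1) input candidate
    g = sigmoid(p['bg'] + p['Ug'] @ x + p['Wg'] @ h_prev)        # (2) external input gate
    o = sigmoid(p['bo'] + p['Uo'] @ x + p['Wo'] @ h_prev)        # (3) output gate
    f = sigmoid(p['bf'] + p['Uf'] @ x + p['Wf'] @ h_prev)        # (4) forget gate
    s = f * s_prev + g * s_tilde                                  # selective memory
    h = o * np.tanh(s)                                            # gated hidden output
    return h, s
```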
(2) multitask learning
Since 4 LSTM networks with similar structures must be trained, their input quantities change with the number of visible satellites: with 0 satellites there are no GNSS observations at all, with 1 satellite the elevation and azimuth angles of that satellite are the inputs, and with several satellites the elevation and azimuth of each satellite are inputs; the outputs are all positioning errors, which means the same loss function can be used to train the 4 LSTM networks, whose input quantities differ but which are otherwise similar;
practice shows that in most cases a large amount of training data improves the generalization performance of the model; however, considering time and economic cost, it is difficult to obtain a large amount of training data, because observations under 4 different GNSS failure states must be collected; if each subtask were trained with its own LSTM independently, the resulting generalization performance would hardly meet the requirements of practical application;
therefore, this patent adopts a training method based on multi-task learning; multi-task learning can be viewed as a form of inductive transfer, which helps improve the model by introducing an inductive bias; a deep learning network has two common ways to perform multi-task learning, Soft parameter sharing and Hard parameter sharing; since Hard parameter sharing shares the hidden layers among all subtasks, it does not match the situation of this method, so a Soft parameter sharing mechanism is adopted; the multi-task learning is designed as follows:
1) soft parameter sharing mechanism
According to the characteristics of the 4 subtask models described above, the multi-task learning mainly adopts Soft parameter sharing: 4 LSTM deep learning networks with similar structures are designed, the input quantities of each network correspond to a different subtask model, and each network has its own parameters; the distance between the parameters of the subtask models is regularized to keep the subtask parameters similar, and this method adopts L2 distance regularization; let $w_j^{(i)}$ denote the parameters of the $j$-th shared hidden layer of the $i$-th subtask model; then the regularization term is:

$$\Omega = \sum_{j}\sum_{i=1}^{n-1}\big\|w_j^{(i)} - w_j^{(i+1)}\big\|_2^2$$

where $n$ is the number of subtasks in the multi-task positioning error prediction model, here 4, and $j$ indexes the hidden parameter layers that the subtask models need to share;
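The L2 distance regularization between subtask parameters can be sketched as follows. Pairing corresponding layers of consecutive subtask models is one plausible reading; the patent does not spell out the exact pairing scheme, so treat that choice as an assumption:

```python
import numpy as np

def soft_sharing_penalty(task_params):
    """L2 distance regularizer for Soft parameter sharing.

    task_params: list over subtasks; element i is the list of shared
    hidden-layer weight arrays w_j^(i) of subtask i, in matching order.
    Sums the squared L2 distance between corresponding layers of
    consecutive subtask models.
    """
    omega = 0.0
    n = len(task_params)
    for i in range(n - 1):
        for wa, wb in zip(task_params[i], task_params[i + 1]):
            omega += float(np.sum((wa - wb) ** 2))  # ||w_j^(i) - w_j^(i+1)||_2^2
    return omega
```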
2) loss function design
Note that the final outputs of the 4 LSTM network models are the same quantity, namely the predicted error, so the same loss function is set to train the networks, and optimization proceeds by sharing gradients in the hidden layers of the network models; unlike conventional Soft-parameter-sharing multi-task learning, which has one main task and several auxiliary tasks, in this invention each subtask has the other 3 subtasks as auxiliary tasks during training and likewise serves as an auxiliary task for the other 3; in addition, the magnitudes of the gradients back-propagated from the losses of the 4 subtasks may differ, and when these gradients reach the shared hidden-layer part, a subtask with a small gradient magnitude contributes less to the update of the overall error prediction model parameters, causing insufficient learning; the gradient of each subtask is therefore balanced by introducing weights; the overall loss function is:

$$L = \sum_{i=1}^{n} \alpha_i L_i$$

where $\alpha_i$ is the loss weight of the $i$-th subtask and a parameter to be learned, and $L_i$ is the loss function of the $i$-th subtask, given by:

$$L_i = \frac{1}{M_i} \sum_{m=1}^{M_i} \big(\hat{y}_m - y_m\big)^2$$

where $M_i$ is the number of samples of the $i$-th subtask, $\hat{y}_m$ is the predicted positioning error, and $y_m$ is the true error value; the overall loss function expands to:

$$L = \sum_{i=1}^{n} \frac{\alpha_i}{M_i} \sum_{m=1}^{M_i} \big(\hat{y}_m^{(i)} - y_m^{(i)}\big)^2$$
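The expanded overall loss is a weighted sum of per-subtask mean squared errors, and can be sketched directly; the array layout (one list entry per subtask) is an assumption for illustration:

```python
import numpy as np

def overall_loss(preds, trues, alphas):
    """Weighted multi-task loss  L = sum_i alpha_i * MSE_i.

    preds, trues: lists over subtasks; element i holds the predicted /
    true positioning errors of subtask i (M_i values each).
    alphas: per-subtask loss weights alpha_i.
    """
    total = 0.0
    for y_hat, y, a in zip(preds, trues, alphas):
        y_hat, y = np.asarray(y_hat, float), np.asarray(y, float)
        total += a * float(np.mean((y_hat - y) ** 2))  # alpha_i * (1/M_i) sum (y_hat - y)^2
    return total
```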
3) optimization algorithm
The model is trained with the back-propagation algorithm; in a back-propagation network, adding tasks affects the update of the deep learning network parameters; for example, adding extra tasks increases the effective learning rate of the hidden layers, which depends on the error feedback weight of each task's output; back-propagation networks often add noise to prevent overfitting, for example by adding regularization terms, and adding a task can likewise be regarded as adding noise, improving the generalization performance of the network; the weight $\alpha_i$ of the $i$-th subtask is computed as:
$$\alpha_i(t) = \frac{K \exp\big(w_i(t-1)/T\big)}{\sum_{k} \exp\big(w_k(t-1)/T\big)}$$

where $t$ is the current iteration number, $T$ is a parameter controlling the softness of the subtask weights, and $K$ is a hyperparameter satisfying $K = \sum_i \alpha_i(t)$; $w_i(t-1)$ is computed as:

$$w_i(t-1) = \frac{\bar{L}_i(t-1)}{\bar{L}_i(t-2)}$$

where $\bar{L}_i(t)$ is the average loss value of the $i$-th subtask in the $t$-th training iteration;
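The subtask weight computation can be sketched as follows; the history layout and the default values of `T` and `K` are assumptions (here `K = 4`, one unit of weight per subtask):

```python
import numpy as np

def task_weights(avg_loss_hist, T=2.0, K=4.0):
    """Dynamic loss weights alpha_i(t) from recent average losses.

    avg_loss_hist: array-like of shape (>=2, n_tasks); row -1 holds the
    average loss of each subtask at iteration t-1, row -2 at t-2.
    T controls how soft the weighting is; K equals the sum of weights.
    """
    hist = np.asarray(avg_loss_hist, float)
    w = hist[-1] / hist[-2]            # w_i(t-1) = Lbar_i(t-1) / Lbar_i(t-2)
    e = np.exp(w / T)
    return K * e / e.sum()             # alpha_i(t), summing to K
```

A subtask whose loss shrank more slowly than the others receives a larger weight, counteracting the insufficient-learning problem described above.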
(3) model validation
The positioning error prediction model is verified by cross-validation; 10 rounds of 10-fold cross-validation are used to obtain the model with the best generalization performance; the model verification steps are as follows:
1) partitioning a data set
Cross-validation is a model selection method in which the full data set is divided into a training set, a validation set, and a test set: the training set trains the model, the validation set evaluates and selects the model, and the test set evaluates the overall training method; this method uses 10 rounds of 10-fold cross-validation, where "10-fold" means each division randomly splits the data set required for model training into 10 subsets, of which 9 serve as the training set and 1 as the validation set in one model training, and "10 rounds" means the whole data set is divided 10 times in different ways;
2) verification model
For each division of the data set, the 9 training subsets are used in each training, the same model training is performed 10 times (once per fold), and 10 positioning error prediction models and the average validation error of each are finally obtained; with 10 different divisions of the model data set, 100 trainings are required in total;
3) model selection
For each division and training of the data set, the selected positioning error prediction model is the one with the minimum average validation error among the 10 trainings; after the 10 divisions of the data set, the finally selected model is the one with the minimum validation error among these 10 models; the loss value $L$ of the finally selected model is:

$$L = \min_{1 \le i \le 10} L^{(i)}$$

where $L^{(i)}$ is the loss value for the $i$-th division of the data set;
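The 10-round, 10-fold selection procedure can be sketched as follows; `train_fn`, which performs one training of a subtask model and returns the trained model with its validation loss, is an assumption for illustration:

```python
import numpy as np

def select_model(X, y, train_fn, n_rounds=10, n_folds=10, seed=0):
    """n_rounds of n_folds cross-validation; return best model and its loss.

    train_fn(X_tr, y_tr, X_val, y_val) -> (model, val_loss) stands in
    for one training of a subtask LSTM.
    """
    rng = np.random.default_rng(seed)
    best_model, best_loss = None, np.inf
    for _ in range(n_rounds):                      # 10 different divisions of the data set
        idx = rng.permutation(len(X))
        folds = np.array_split(idx, n_folds)
        round_best, round_loss = None, np.inf
        for k in range(n_folds):                   # 10 trainings per division
            val = folds[k]
            tr = np.concatenate([folds[j] for j in range(n_folds) if j != k])
            model, loss = train_fn(X[tr], y[tr], X[val], y[val])
            if loss < round_loss:                  # best model of this division
                round_best, round_loss = model, loss
        if round_loss < best_loss:                 # L = min over divisions of L^(i)
            best_model, best_loss = round_best, round_loss
    return best_model, best_loss
```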
step four: model implementation
After the trained model passes the test, it is saved in the SavedModel format; in actual deployment, the saved models are loaded and applied, the corresponding subtask model is selected according to the number of input quantities, determined by the actual number of visible satellites, and forward computation of the deployed subtask model finally yields the predicted value of the positioning error.
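The deployment-time selection of a subtask model can be sketched as follows; the `subtask_models` mapping stands in for the four loaded SavedModel graphs (wrapped as Python callables), and all names here are assumptions for illustration:

```python
def predict_positioning_error(subtask_models, visible_sats, t_fail):
    """Select the subtask model by visible-satellite count and run it.

    subtask_models: dict mapping 0..3 visible satellites to a callable
        features -> predicted positioning error (e.g. a loaded
        SavedModel signature wrapped in a Python function).
    visible_sats: list of (azimuth, elevation) pairs for the satellites
        still visible during the GNSS failure.
    t_fail: elapsed failure time T, always part of the input quantities.
    """
    n = len(visible_sats)
    if n not in subtask_models:
        raise ValueError(f"no subtask model for {n} visible satellites")
    features = [t_fail]
    for az, el in visible_sats:      # with 0 satellites, only T remains as input
        features += [az, el]
    return subtask_models[n](features)
```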

Claims (1)

1. A GNSS (global navigation satellite system) multiple-failure-state-oriented intelligent vehicle positioning error prediction method, characterized in that: the GNSS failure states are classified according to the number of received satellites, positioning error prediction models for the different failure states are established with LSTM deep learning networks, and a multi-task learning mechanism is used to improve the generalization performance of each subtask model, the trained positioning error prediction model determining from the actual number of visible satellites the subtask model category to which the current working condition belongs and obtaining an accurate positioning error prediction value; the method comprises the following specific steps:
the method comprises the following steps: determining inputs and outputs of a model
According to the number of visible satellites when the GNSS fails, the failure states are divided into 4 types (0, 1, 2, or 3 satellites), and the positioning error prediction model is correspondingly divided into 4 subtask models for the positioning error prediction tasks with 0, 1, 2, and 3 satellites; according to the training task of each subtask model, the output quantity is the positioning error in the GNSS failure state; the input quantities are the main factors influencing the positioning error, including the azimuth and elevation angles that characterize the satellite distribution, and, considering the time-dependent character of the positioning error, the failure time T is also taken as an input quantity; note that the number of input quantities differs between subtasks, that is, with 0 visible satellites there are no satellite azimuth and elevation angles as inputs, and with several visible satellites the azimuth and elevation angles of each satellite are all used as inputs;
step two: making training samples
The input quantities of the 4 subtask models change with the number of visible satellites, but the outputs are all the positioning error to be predicted; the ground truth of a training sample is the difference between the positioning result when the GNSS has not failed and the positioning result produced under each of the 4 failure states; once ground-truth data are obtained with the GNSS fully functional, the data samples required by the 4 subtask models can be generated by simulating the different GNSS failure states; to ensure the generalization performance of the trained model, the numbers of samples of the 4 subtask models should be approximately equal;
step three: model structural design and training
Respectively carrying out optimization design on a subtask model structure and a multi-task coordination mechanism according to the characteristics of the positioning error prediction task;
(1) design of subtasks
The LSTM adds gating so that the deep learning network can accumulate information over a longer duration and reach convergence; concatenating the current input of the LSTM with the state passed from the previous time step and multiplying by the weights yields the following formulas:
$$\tilde{s}_i^{(t)} = \tanh\Big(b_i + \sum_j U_{i,j}\,x_j^{(t)} + \sum_j W_{i,j}\,h_j^{(t-1)}\Big) \quad (1)$$

$$g_i^{(t)} = \sigma\Big(b_i^{g} + \sum_j U_{i,j}^{g}\,x_j^{(t)} + \sum_j W_{i,j}^{g}\,h_j^{(t-1)}\Big) \quad (2)$$

$$o_i^{(t)} = \sigma\Big(b_i^{o} + \sum_j U_{i,j}^{o}\,x_j^{(t)} + \sum_j W_{i,j}^{o}\,h_j^{(t-1)}\Big) \quad (3)$$

$$f_i^{(t)} = \sigma\Big(b_i^{f} + \sum_j U_{i,j}^{f}\,x_j^{(t)} + \sum_j W_{i,j}^{f}\,h_j^{(t-1)}\Big) \quad (4)$$
where $x^{(t)}$ is the input vector at time $t$, corresponding to the time-varying GNSS observations during training, and $h^{(t)}$ is the current hidden-layer vector, containing the outputs of all LSTM units; in formula (1), $\tilde{s}_i^{(t)}$ is the computed input candidate, $i$ denotes the $i$-th LSTM unit, and $b$, $U$, $W$ are the bias, input weight, and recurrent weight of the LSTM unit; in formula (2), $g_i^{(t)}$ is the external input gate, and $b^{g}$, $U^{g}$, $W^{g}$ are its bias, input weight, and recurrent weight; in formula (3), $o_i^{(t)}$ is the output gate, which controls the output of the LSTM cell, and $b^{o}$, $U^{o}$, $W^{o}$ are its bias, input weight, and recurrent weight; in formula (4), $b^{f}$, $U^{f}$, $W^{f}$ are the bias, input weight, and recurrent weight of the forget gate, $\sigma$ is the activation function, and the forget gate $f_i^{(t)}$ is the self-loop weight of unit $i$ at time $t$, controlling which information is forgotten; the long-term memory state memorizes the influence of earlier error states on the current error during a long GNSS outage, and the short-term memory state does so during a short outage; the LSTM achieves selective memory through the following equation:

$$s_i^{(t)} = f_i^{(t)}\, s_i^{(t-1)} + g_i^{(t)}\, \tilde{s}_i^{(t)}$$

where the term $g_i^{(t)}\tilde{s}_i^{(t)}$ selects and memorizes the input $x^{(t)}$, and $s_i^{(t)}$ is the information passed on to the next state;
(2) multitask learning
Since 4 LSTM networks with similar structures must be trained, their input quantities change with the number of visible satellites: with 0 satellites there are no GNSS observations at all, with 1 satellite the elevation and azimuth angles of that satellite are the inputs, and with several satellites the elevation and azimuth of each satellite are inputs; the outputs are all positioning errors, which means the same loss function can be used to train the 4 LSTM networks, whose input quantities differ but which are otherwise similar; the multi-task learning adopts a Soft parameter sharing mechanism and is designed as follows:
1) soft parameter sharing mechanism
According to the characteristics of the 4 subtask models, the multi-task learning adopts Soft parameter sharing: 4 LSTM networks with similar structures are designed, the input quantities of each network correspond to a different subtask model, and each network has its own parameters; the distance between the parameters of the subtask models is regularized to keep the subtask parameters similar, using L2 distance regularization; let $w_j^{(i)}$ denote the parameters of the $j$-th shared hidden layer of the $i$-th subtask model; then the regularization term is:

$$\Omega = \sum_{j}\sum_{i=1}^{n-1}\big\|w_j^{(i)} - w_j^{(i+1)}\big\|_2^2$$

where $n$ is the number of subtasks in the multi-task positioning error prediction model, here 4, and $j$ indexes the hidden parameter layers that the subtask models need to share;
2) loss function design
Note that the final outputs of the 4 LSTM deep learning networks are the same quantity, namely the predicted error, so the same loss function is set to train the networks, and optimization proceeds by sharing gradients in the hidden layers of the networks; each subtask has the other 3 subtasks as auxiliary tasks during training and likewise serves as an auxiliary task for the other 3; in addition, the magnitudes of the gradients back-propagated from the losses of the 4 subtasks may differ, and when these gradients reach the shared hidden-layer part, a subtask with a small gradient magnitude contributes less to the update of the overall positioning error prediction model parameters, causing insufficient learning, so the gradient of each subtask is balanced by introducing weights; the overall loss function is:

$$L = \sum_{i=1}^{n} \alpha_i L_i$$

where $\alpha_i$ is the loss weight of the $i$-th subtask and a parameter to be learned, and $L_i$ is the loss function of the $i$-th subtask, given by:

$$L_i = \frac{1}{M_i} \sum_{m=1}^{M_i} \big(\hat{y}_m - y_m\big)^2$$

where $M_i$ is the number of samples of the $i$-th subtask, $\hat{y}_m$ is the predicted positioning error, and $y_m$ is the true error value; the overall loss function expands to:

$$L = \sum_{i=1}^{n} \frac{\alpha_i}{M_i} \sum_{m=1}^{M_i} \big(\hat{y}_m^{(i)} - y_m^{(i)}\big)^2$$
3) optimization algorithm
The model is trained with the back-propagation algorithm; in a back-propagation network, adding tasks affects the update of the deep learning network parameters; for example, adding extra tasks increases the effective learning rate of the hidden layers, which depends on the error feedback weight of each task's output; back-propagation networks often add noise to prevent overfitting, for example by adding regularization terms, and adding a task can likewise be regarded as adding noise, improving the generalization performance of the network; the weight $\alpha_i$ of the $i$-th subtask is computed as:
$$\alpha_i(t) = \frac{K \exp\big(w_i(t-1)/T\big)}{\sum_{k} \exp\big(w_k(t-1)/T\big)}$$

where $t$ is the current iteration number, $T$ is a parameter controlling the softness of the subtask weights, and $K$ is a hyperparameter satisfying $K = \sum_i \alpha_i(t)$; $w_i(t-1)$ is computed as:

$$w_i(t-1) = \frac{\bar{L}_i(t-1)}{\bar{L}_i(t-2)}$$

where $\bar{L}_i(t)$ is the average loss value of the $i$-th subtask in the $t$-th training iteration;
(3) model validation
The positioning error prediction model is verified by cross-validation; 10 rounds of 10-fold cross-validation are used to obtain the model with the best generalization performance; the model verification steps are as follows:
1) partitioning a data set
10 rounds of 10-fold cross-validation are adopted, where "10-fold" means each division randomly splits the data set required for model training into 10 subsets, of which 9 serve as the training set and 1 as the validation set in one model training, and "10 rounds" means the whole data set is divided 10 times in different ways;
2) verification model
For each division of the data set, the 9 training subsets are used in each training, the same model training is performed 10 times, and 10 positioning error prediction models and the average validation error of each are finally obtained; with 10 different divisions of the model data set, 100 trainings are required in total;
3) model selection
For each division and training of the data set, the selected positioning error prediction model is the one with the minimum average validation error among the 10 trainings; after the 10 divisions of the data set, the finally selected positioning error prediction model is the one with the minimum validation error among these 10 trained models; the loss value $L$ of the finally selected model is:

$$L = \min_{1 \le i \le 10} L^{(i)}$$

where $L^{(i)}$ is the loss value for the $i$-th division of the data set;
step four: model implementation
After the trained model passes the test, the positioning error prediction model is saved in the SavedModel format; in actual deployment, the saved models are loaded and applied, the corresponding subtask model is selected according to the number of input quantities determined by the actual number of visible satellites, and forward computation of the deployed subtask model finally yields the predicted value of the positioning error.
CN202110972521.3A 2021-08-23 2021-08-23 GNSS (Global navigation satellite System) multiple failure state-oriented intelligent vehicle positioning error prediction method Active CN113703025B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110972521.3A CN113703025B (en) 2021-08-23 2021-08-23 GNSS (Global navigation satellite System) multiple failure state-oriented intelligent vehicle positioning error prediction method


Publications (2)

Publication Number Publication Date
CN113703025A true CN113703025A (en) 2021-11-26
CN113703025B CN113703025B (en) 2023-11-07

Family

ID=78654249

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110972521.3A Active CN113703025B (en) 2021-08-23 2021-08-23 GNSS (Global navigation satellite System) multiple failure state-oriented intelligent vehicle positioning error prediction method

Country Status (1)

Country Link
CN (1) CN113703025B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109521454A (en) * 2018-12-06 2019-03-26 中北大学 A kind of GPS/INS Combinated navigation method based on self study volume Kalman filtering
US20200132861A1 (en) * 2018-10-31 2020-04-30 Mitsubishi Electric Research Laboratories, Inc. Position Estimation Under Multipath Transmission
CN111913175A (en) * 2020-07-02 2020-11-10 哈尔滨工程大学 Water surface target tracking method with compensation mechanism under transient failure of sensor
CN113156473A (en) * 2021-03-04 2021-07-23 中国北方车辆研究所 Self-adaptive discrimination method for satellite signal environment of information fusion positioning system


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
QIMIN XU et al.: "A Reliable Hybrid Positioning Methodology for Land Vehicles Using Low-Cost Sensors", IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, vol. 17, no. 3, pages 834-847, XP011608301, DOI: 10.1109/TITS.2015.2487518 *
XU Qimin: "Research on Intelligent Fusion Positioning Technology for Vehicles in Urban Environments", China Doctoral Dissertations Full-text Database, Engineering Science and Technology II, no. 3 *
GUO Yujie: "Theoretical Analysis and Experimental Research on Positioning Errors of Mobile Robots under GPS/INS Integrated Navigation", China Master's Theses Full-text Database, Information Science and Technology, no. 9 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220355805A1 (en) * 2021-05-04 2022-11-10 Hyundai Motor Company Vehicle position correction apparatus and method thereof
US11821995B2 (en) * 2021-05-04 2023-11-21 Hyundai Motor Company Vehicle position correction apparatus and method thereof

Also Published As

Publication number Publication date
CN113703025B (en) 2023-11-07

Similar Documents

Publication Publication Date Title
Shen et al. Seamless GPS/inertial navigation system based on self-learning square-root cubature Kalman filter
Shen et al. Observability analysis and adaptive information fusion for integrated navigation of unmanned ground vehicles
CN108319293B (en) UUV real-time collision avoidance planning method based on LSTM network
Chen et al. A hybrid prediction method for bridging GPS outages in high-precision POS application
CN108334677B (en) UUV real-time collision avoidance planning method based on GRU network
CN111272174B (en) Combined navigation method and system
Bonnifait et al. Cooperative localization with reliable confidence domains between vehicles sharing GNSS pseudoranges errors with no base station
CN111190211B (en) GPS failure position prediction positioning method
CN102445902A (en) System and method for conditional multi-output regression for machine condition monitoring
CN112698646B (en) Aircraft path planning method based on reinforcement learning
CN110954132A (en) Method for carrying out navigation fault identification through GRNN (generalized regression neural network) assisted adaptive Kalman filtering
CN109615860A (en) A kind of signalized intersections method for estimating state based on nonparametric Bayes frame
CN113703025B (en) GNSS (Global navigation satellite System) multiple failure state-oriented intelligent vehicle positioning error prediction method
CN114689047A (en) Deep learning-based integrated navigation method, device, system and storage medium
CN113504728B (en) Method, device and equipment for generating task instruction and storage medium
Sarychev et al. Optimal regressors search subjected to vector autoregression of unevenly spaced TLE series
CN112268564B (en) Unmanned aerial vehicle landing space position and attitude end-to-end estimation method
Cohen et al. A-KIT: Adaptive Kalman-informed transformer
CN116910534A (en) Space-time intelligent prediction method and device for ocean environmental elements in different sea areas
CN115993077A (en) Optimal decision method and optimal decision system for inertial navigation system under complex road condition transportation condition
Tuohy et al. Map based navigation for autonomous underwater vehicles
Xu et al. A novel heading angle estimation methodology for land vehicles based on deep learning and enhanced digital map
KR20200074660A (en) A method for predicting satellite events embedded in satellite on-board software
CN113190632B (en) Model building method and system of track restoration algorithm
CN114826337A (en) Pre-compensation method and system for Doppler frequency offset of satellite communication signal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant