CN111709517B - Method and device for enhancing redundancy fusion positioning based on confidence prediction system - Google Patents

Info

Publication number
CN111709517B
CN111709517B (application CN202010537807.4A)
Authority
CN
China
Prior art keywords
confidence
positioning
moment
calculating
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010537807.4A
Other languages
Chinese (zh)
Other versions
CN111709517A (en)
Inventor
漆梦梦
杨贵
陶靖琦
施忠继
刘奋
Current Assignee
Heading Data Intelligence Co Ltd
Original Assignee
Heading Data Intelligence Co Ltd
Priority date
Filing date
Publication date
Application filed by Heading Data Intelligence Co Ltd
Priority to CN202010537807.4A
Publication of CN111709517A
Application granted
Publication of CN111709517B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/26: Navigation specially adapted for navigation in a road network
    • G01C 21/28: Navigation in a road network with correlation of data from several navigational instruments
    • G01C 21/30: Map- or contour-matching
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/26: Navigation specially adapted for navigation in a road network
    • G01C 21/34: Route searching; Route guidance
    • G01C 21/3446: Details of route searching algorithms, e.g. Dijkstra, A*, arc-flags, using precalculated routes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/049: Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06N 3/084: Backpropagation, e.g. using gradient descent

Abstract

The invention relates to a redundancy fusion positioning enhancement method based on a confidence prediction system, comprising the following steps: calculating positioning deviation data between a redundant fusion positioning system and a high-precision inertial navigation system, feeding the deviation data into a trained deep-learning confidence-fitting network, and computing the confidence value corresponding to that deviation; predicting trajectory points at future moments from the vehicle's current speed and acceleration, and retrieving the high-precision map attribute values corresponding to those points; and feeding the confidence-related observations of the positioning subsystems at the current moment, the current confidence value, and the map attribute values of the future trajectory points into a trained deep-learning temporal neural network to compute the confidence values at future moments. The method enables an autonomous vehicle to estimate its positioning confidence in real time and to predict the confidence of positioning over a future period from the current positioning state and the attributes of the high-precision map ahead.

Description

Redundancy fusion positioning enhancement method and device based on confidence prediction system
Technical Field
The invention relates to the fields of autonomous driving and high-precision map applications, and in particular to a redundancy fusion positioning enhancement method and device based on a confidence prediction system.
Background
An autonomous vehicle performs navigation guidance based on its own position. This process must not only attend to the current position but also predict how the position estimate will evolve over a future period, so as to support the self-checking and robustness of the positioning algorithm. Predicting positioning confidence is therefore an essential function for autonomous-driving navigation guidance and decision-assistance systems.
Fig. 1 and Fig. 2 show the hardware and software architecture of the redundant fusion positioning system. A redundant fusion positioning system for autonomous vehicles comprises multiple positioning subsystems: a low-precision combined inertial navigation system (INS, Inertial Navigation System), a GNSS positioning system (Global Navigation Satellite System), dead-reckoning (DR) trajectory recursion, high-precision map-matching positioning, and so on; the redundancy of sensor inputs increases the robustness of system positioning. The accuracy of the redundant fusion positioning system is affected by many factors, such as GNSS confidence, GNSS outage duration, INS performance, the map data of the current and forward scenes (which determine the scene type), the vehicle's motion intent (frequent lane changes affect lateral accuracy), wheel-speed-sensor performance, and magnetic interference on the magnetometer. The relationship between the weights of these influence factors and the positioning confidence is nonlinear, so it is difficult for an expert system to produce a confidence estimate that holds across all scenes, and difficult to effectively predict future positioning confidence from the current and forward scenes.
Disclosure of Invention
In view of the above technical problems in the prior art, the invention provides a redundancy fusion positioning enhancement method and device based on a confidence prediction system, which estimate the positioning confidence of a multi-sensor redundant fusion positioning system in real time and enable an autonomous vehicle to effectively predict its positioning confidence over a future period from the current positioning state and the attributes of the high-precision map ahead; this information can be used to assist autonomous-driving decision making.
The technical solution of the invention is as follows:
In a first aspect, the invention provides a method for enhancing redundant fusion positioning based on a confidence prediction system, comprising the following steps:
calculating positioning deviation data between the redundant fusion positioning system and a high-precision inertial navigation system, feeding the deviation data into a trained deep-learning confidence-fitting network, and computing the confidence value corresponding to that deviation;
predicting trajectory points at future moments from the vehicle's current speed and acceleration, and retrieving the high-precision map attribute values corresponding to those points; and feeding the confidence-related observations of the positioning subsystems at the current moment, the current confidence value, and the map attribute values of the future trajectory points into a trained deep-learning temporal neural network to compute the confidence values at future moments.
In a second aspect, the invention provides a redundant fusion positioning enhancement device based on a confidence prediction system, comprising:
a confidence evaluation module, configured to calculate positioning deviation data between the redundant fusion positioning system and a high-precision inertial navigation system, feed the deviation data into a trained deep-learning confidence-fitting network, and compute the confidence value corresponding to that deviation;
a confidence prediction module, configured to predict trajectory points at future moments from the vehicle's current speed and acceleration and retrieve the corresponding high-precision map attribute values, and to feed the confidence-related observations of the positioning subsystems at the current moment, the current confidence value, and the map attribute values of the future trajectory points into a trained deep-learning temporal neural network to compute the confidence values at future moments.
The beneficial effects of the invention are as follows. The method evaluates the positioning confidence of a multi-sensor redundant fusion positioning system in real time, and the autonomous vehicle can effectively predict its positioning confidence over a future period from the current positioning state and the attributes of the high-precision map ahead; this information can be used to assist autonomous-driving decision making. Because a deep-learning neural network performs the confidence prediction for fusion positioning, the computational requirements of the confidence prediction part are low: it can be embedded in the hardware of the positioning module and computed in real time on the vehicle. The confidence values used by the prediction part are easy to obtain at controllable cost, require no manual labeling, and can be collected continuously, so the network's predicted confidence gradually approaches the true value. Fusing the high-precision map attributes of future trajectory points into the prediction makes the confidence prediction more accurate.
On the basis of the technical scheme, the invention can be further improved as follows.
Further, calculating the positioning deviation data between the redundant fusion positioning system and the high-precision inertial navigation system includes:
converting the longitude-latitude positioning results of the redundant fusion positioning system and of the high-precision inertial navigation system into planar east-north coordinates by coordinate projection;
obtaining the rotation matrix of the yaw angle from the high-precision inertial navigation system, multiplying the east-north coordinate vector by the rotation matrix to obtain vehicle-body coordinates, and calculating the x-direction deviation, the y-direction deviation, and the elevation difference in the vehicle-body frame; the combination of these three values is the positioning deviation data.
Further, the training process of the deep-learning confidence-fitting network includes:
assigning an empirical confidence value to each sampled group of positioning deviation data, training the confidence-fitting network on multiple groups of deviation data and their empirical confidence values, and obtaining a trained weight file that is then loaded by the confidence-fitting network at inference time.
Further, the training process of the deep-learning temporal neural network includes:
training the temporal neural network on the positioning deviation data at the current moment, the confidence value corresponding to that deviation data, and the high-precision map attribute values corresponding to the vehicle's future trajectory points, to obtain a temporal-network weight file;
during training, back-propagation continually updates the network weights so as to minimize a loss function, making the network output iteratively approach the true value; the optimized weight file is then loaded by the temporal neural network at inference time.
Further, the loss function is the harmonic mean of the absolute percentage errors between the predicted and true confidence values at moments T+1, ..., T+n, where T is the current moment, T+1 the next moment, and T+n the n-th future moment.
Further, calculating the confidence values at future moments includes:
computing a feature-fusion representation vector: forming input vectors, via feature representation, from the observation data of the positioning subsystems and the confidence-related variables, to obtain the feature vectors of the positioning system at the current moment, and fusing these feature vectors through a neural-network attention mechanism to obtain the feature-fusion representation vector V_fusion;
calculating the position confidence value at a future moment: concatenating V_fusion with the feature vector V_localization directly related to the position confidence at the current moment, then splicing in the high-precision map attribute representation vector of the next moment, and passing the result through two BiLSTM (bidirectional long short-term memory) network layers and a fully connected layer to output the position confidence value of the next moment; by analogy, position confidence values for successive future moments are output;
calculating the vehicle-attitude confidence value at a future moment: concatenating V_fusion with the feature vector V_yaw directly related to the attitude confidence at the current moment, then splicing in the high-precision map attribute representation vector of the next moment, and passing the result through two BiLSTM network layers and a fully connected layer to output the attitude confidence value of the next moment; by analogy, attitude confidence values for successive future moments are output.
Further, the high-precision map attribute values include the scene-type information of the road.
In a third aspect, the present invention provides an electronic device comprising:
a memory for storing a computer software program;
a processor, configured to read and execute the computer software program to implement the redundancy fusion positioning enhancement method based on the confidence prediction system described above.
In a fourth aspect, the invention provides a computer-readable storage medium in which a computer software program is stored for implementing the redundancy fusion positioning enhancement method based on a confidence prediction system described above.
Drawings
FIG. 1 is a hardware block diagram of a redundant fusion positioning system according to the present invention;
FIG. 2 is a software configuration diagram of the redundant fusion positioning system of the present invention;
FIG. 3 is a flowchart of an embodiment of the present invention;
FIG. 4 is an input summary chart of the confidence prediction system of the present invention;
FIG. 5 shows the network structure of the confidence prediction model of the present invention;
FIG. 6 is a block diagram of the device according to Embodiment Two of the present invention;
FIG. 7 is a block diagram of the electronic device according to Embodiment Three of the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
Embodiment One:
as shown in fig. 3, an embodiment of the present invention provides a method for enhancing redundant fusion localization based on a confidence prediction system, including the following steps:
S1, calculating positioning deviation data between a redundant fusion positioning system and a high-precision inertial navigation system, feeding the deviation data into a trained deep-learning confidence-fitting network, and computing the confidence value corresponding to that deviation;
S2, predicting trajectory points at future moments from the vehicle's current speed and acceleration, and retrieving the high-precision map attribute values corresponding to those points; and feeding the confidence-related observations of the positioning subsystems at the current moment, the current confidence value, and the map attribute values of the future trajectory points into a trained deep-learning temporal neural network to compute the confidence values at future moments.
Further, calculating the positioning deviation data between the redundant fusion positioning system and the high-precision inertial navigation system includes:
converting the longitude-latitude positioning results of the redundant fusion positioning system and of the high-precision inertial navigation system into planar east-north coordinates by coordinate projection;
obtaining the rotation matrix of the yaw angle from the high-precision inertial navigation system, multiplying the east-north coordinate vector by the rotation matrix to obtain vehicle-body coordinates, and calculating the x-direction deviation, the y-direction deviation, and the elevation difference in the vehicle-body frame; the combination of these three values is the positioning deviation data.
In the confidence evaluation stage, the object of evaluation is the deviation between the positioning result (longitude, latitude, elevation, yaw angle) output by the redundant fusion positioning system and the positioning result (longitude, latitude, elevation, yaw angle) output by the high-precision inertial navigation system; the confidence value is calculated from this deviation. Two problems must therefore be solved: calculating the positioning deviation data, and calculating the confidence value at the current moment from that deviation. The high-precision inertial navigation system is in fact a high-precision combined satellite-inertial navigation system that outputs a positioning result and yaw angle of higher absolute accuracy, and it is therefore used as the positioning-accuracy reference. The inputs to the inference stage of the trained confidence-fitting network are the trained weight file and a group of (x-deviation, y-deviation, elevation-difference) values whose confidence is to be predicted; the output is the confidence value corresponding to that group of values.
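As an illustration of the deviation computation described above, the following sketch converts two latitude/longitude/elevation fixes into a body-frame deviation triple. This is an assumption, not code from the patent: the equirectangular projection, the function names, and the sign convention of the yaw rotation are all illustrative choices.

```python
import math

EARTH_RADIUS_M = 6378137.0  # WGS-84 semi-major axis (illustrative choice)

def to_local_en(lat_deg, lon_deg, ref_lat_deg, ref_lon_deg):
    """Project lat/lon to planar east/north metres about a reference point
    (simple equirectangular approximation; the patent only says
    'coordinate projection', so this particular projection is assumed)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    rlat, rlon = math.radians(ref_lat_deg), math.radians(ref_lon_deg)
    east = (lon - rlon) * math.cos(rlat) * EARTH_RADIUS_M
    north = (lat - rlat) * EARTH_RADIUS_M
    return east, north

def body_frame_deviation(fusion_fix, ins_fix, yaw_deg):
    """Deviation (dx, dy, dz) of the fusion fix from the INS fix,
    rotated into the vehicle-body frame by the INS yaw angle.
    Each fix is (lat_deg, lon_deg, elevation_m)."""
    e, n = to_local_en(fusion_fix[0], fusion_fix[1], ins_fix[0], ins_fix[1])
    yaw = math.radians(yaw_deg)
    # Rotate the east/north deviation vector by the yaw rotation matrix
    dx = math.cos(yaw) * e + math.sin(yaw) * n
    dy = -math.sin(yaw) * e + math.cos(yaw) * n
    dz = fusion_fix[2] - ins_fix[2]  # elevation difference
    return dx, dy, dz
```

The returned triple (dx, dy, dz) is the value combination that the text calls the positioning deviation data.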
In the confidence prediction stage, a deep-learning temporal neural network, trained by supervised learning, predicts the positioning confidence of the real vehicle at future moments during inference. The inputs of the confidence prediction system, summarized in Fig. 4, comprise the confidence-related observations of the positioning subsystems of the redundant fusion positioning system (for example, the three-axis magnetometer yaw observation, the DR odometer pose estimate, the barometer elevation observation, the vehicle speed, the steering-wheel angle, the INS observations, the GNSS positioning observations, the map-matching positioning-accuracy observation, and so on), the confidence value output by the confidence-fitting network, and the high-precision map attribute values of the future trajectory points output by the map-engine module; the output is the predicted confidence value at the future moment.
The method evaluates the positioning confidence of a multi-sensor redundant fusion positioning system in real time, and the autonomous vehicle can effectively predict its positioning confidence over a future period from the current positioning state and the attributes of the high-precision map ahead; this information can be used to assist autonomous-driving decision making. Because a deep-learning neural network performs the confidence prediction for fusion positioning, the computational requirements of the confidence prediction part are low: it can be embedded in the hardware of the positioning module and computed in real time on the vehicle. The confidence values used by the prediction part are easy to obtain at controllable cost, require no manual labeling, and can be collected continuously, so the network's predicted confidence gradually approaches the true value. Fusing the high-precision map attributes of future trajectory points into the prediction makes the confidence prediction more accurate.
This embodiment further includes training the confidence-fitting network and the temporal neural network.
Before training, training data covering a variety of road scenes must be sampled. The training data in this embodiment mainly comprise the positioning data of the redundant fusion positioning system, the positioning data of the high-precision inertial navigation system, and the high-precision map attribute data output by the map-engine module.
During data collection, the redundant fusion positioning system, the high-precision inertial navigation system, and the map-engine module (i.e., the high-precision map) run simultaneously. Data from a variety of scenes are collected, cleaned, and preprocessed (e.g., statistically analyzed by scene) to obtain the positioning data of the two systems; the positioning results of the redundant fusion positioning system and of the high-precision inertial navigation system at the same sampling moment are then differenced to obtain the positioning deviation data, i.e., the combination of x-direction deviation, y-direction deviation, and elevation difference. At the same time, the high-precision map attribute values, which contain the scene-type information, are obtained from the map-engine module.
Specifically, the training data sampling process comprises training data sampling, training data acquisition and training data preprocessing.
Sampling the training data: the collected highway, urban, and ring-road highway scene data are sampled in fixed proportions, so that each scene sequence covers raw data under different positioning conditions.
The sampled scenes mainly comprise open areas, viaduct underpasses, underground passages, tunnels, radio stations, high-voltage substations, urban high-rise canyons, and other special location scenes; data at the junctions between scenes are also collected and made into samples.
Training data acquisition:
and acquiring training data according to the training data sampling requirement. And in the data acquisition process, a redundant fusion positioning system, a high-precision inertial navigation system and a map engine module (namely a high-precision map) are started at the same time, and input data required by a confidence value fitting network and a time sequence neural network are stored.
And calculating a track point at the moment T +1 (namely the next moment) according to the current speed v _ T and the current acceleration a _ T of the vehicle. And the map engine module outputs the high-precision map attribute value of the track point at the T +1 moment.
Training data preprocessing:
Preprocessing the training data: the collected data are cleaned to remove outliers, and the cleaned data are then statistically analyzed by scene.
In the training phase the numerical confidence measure is set primarily by experience: for a given combination of x-direction deviation, y-direction deviation, and elevation difference, an empirical confidence value is assigned. The invention fits the confidence value with a neural network. Because a neural network can model the nonlinear relationships among several variables well, the confidence-fitting network avoids both the impossibility of enumerating every combination of x-deviation, y-deviation, and elevation difference and the cost of looking such combinations up.
The training data of the confidence-fitting network are multiple sampled groups of (x-deviation, y-deviation, elevation-difference) values together with the empirical confidence value for each group. The empirical confidence values are reference data given by positioning experts (prior knowledge from statistical analysis of historical positioning data).
In this embodiment, training the confidence-fitting network in step S1 specifically includes: assigning an empirical confidence value to each sampled group of positioning deviation data, training the network on multiple groups of deviation data with their empirical confidence values, and obtaining a trained weight file that is loaded by the confidence-fitting network at inference time.
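The patent does not disclose the fitting network's architecture. The sketch below fits a small one-hidden-layer network to (dx, dy, dz) → confidence pairs by gradient descent, which is the kind of nonlinear mapping the text describes; the layer sizes, learning rate, and the synthetic "expert" labels are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic expert labels: confidence decays with deviation magnitude
# (illustrative stand-in for the empirical confidence values).
X = rng.uniform(-2.0, 2.0, size=(512, 3))              # (dx, dy, dz) samples
y = np.exp(-np.linalg.norm(X, axis=1, keepdims=True))  # targets in (0, 1]

# One hidden layer of 16 tanh units, sigmoid output in (0, 1).
W1 = rng.normal(0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, 1.0 / (1.0 + np.exp(-(H @ W2 + b2)))

losses, lr = [], 0.1
for _ in range(500):
    H, p = forward(X)
    err = p - y
    g = err * p * (1 - p) / len(X)   # grad of 0.5*MSE w.r.t. the logit
    gW2 = H.T @ g; gb2 = g.sum(0)
    gH = g @ W2.T * (1 - H**2)       # back-propagate through tanh layer
    gW1 = X.T @ gH; gb1 = gH.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
    losses.append(float(np.mean(err**2)))
```

After training, saving `W1, b1, W2, b2` would play the role of the weight file that the fitting network loads at inference time.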
This embodiment further includes training the temporal neural network in step S2, which comprises:
training the temporal neural network on the confidence-related observations of the positioning subsystems at the current moment, the confidence value at the current moment, and the high-precision map attribute values of the future trajectory points, to obtain a temporal-network weight file;
during training, back-propagation continually updates the network weights so as to minimize a loss function, making the network output iteratively approach the true value; the optimized weight file is then loaded by the temporal neural network at inference time.
Further, the loss function is the harmonic mean of the absolute percentage errors between the predicted and true confidence values at moments T+1, ..., T+n, where T is the current moment, T+1 the next moment, and T+n the n-th future moment.
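Read literally, that loss is the harmonic mean of the per-step absolute percentage errors; a direct sketch follows. The exact normalization is an assumption, and a small epsilon keeps the harmonic mean defined when an individual error is zero.

```python
def harmonic_mean_ape_loss(pred, truth, eps=1e-8):
    """Harmonic mean of absolute percentage errors over moments T+1..T+n.
    pred, truth: equal-length sequences of confidence values (truth != 0)."""
    apes = [abs(p - t) / abs(t) for p, t in zip(pred, truth)]
    n = len(apes)
    return n / sum(1.0 / (a + eps) for a in apes)
```

Minimizing this quantity during back-propagation drives every per-moment percentage error down, since the harmonic mean is dominated by the smallest terms only when all terms are small.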
In step S2, calculating the confidence values at future moments includes:
(1) computing the feature-fusion representation vector: as summarized in the input list of Fig. 4, input vectors are formed, via feature representation, from the observation data of the positioning subsystems and the confidence-related variables, yielding the current-moment feature vectors V_gps, V_ins, V_had, V_mm, and V_dr of the positioning system; these serve as inputs to a confidence-vector fusion computed with a neural-network attention mechanism, which produces the feature-fusion representation vector V_fusion. This fusion operation learns the weights of the positioning subsystems from the training data and can capture and express the different influence each subsystem has on fusion positioning in different scenes. V_fusion is the fused feature representation of the positioning subsystems and is the common input of the position-confidence prediction branch and the vehicle-attitude-confidence prediction branch.
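The attention fusion can be sketched as a learned softmax weighting of the subsystem feature vectors. The patent only states that an attention mechanism fuses the vectors, so the common feature dimension, the use of a single learned query, and all names below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 8  # assumed common feature dimension

# Current-moment feature vectors of the five subsystems (stand-in values).
subsystems = {name: rng.normal(size=D)
              for name in ("V_gps", "V_ins", "V_had", "V_mm", "V_dr")}

def attention_fuse(vectors, query):
    """Fuse subsystem feature vectors into V_fusion with softmax attention:
    score each vector against a query, then take the weighted sum."""
    V = np.stack(list(vectors.values()))        # (5, D)
    scores = V @ query / np.sqrt(len(query))    # scaled dot-product scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                    # softmax over subsystems
    return weights @ V, weights                 # V_fusion and the weights

query = rng.normal(size=D)  # in training this would be a learned parameter
V_fusion, w = attention_fuse(subsystems, query)
```

The learned weights play the role described in the text: per-scene, per-subsystem influence factors that the fusion acquires from training data.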
(2) Calculating a position confidence value at a future time, comprising: the feature fusion expression vector V _ fusion is connected in series with a feature vector V _ localization directly related to the position confidence at the current moment, and then sequentially spliced with a high-precision map attribute expression vector at the T +1 moment (namely the next moment), and a position confidence value at the T +1 moment (namely the next moment) is output through two layers of BilSTM two-way long-short term memory network layers and a full connection layer;
as shown in fig. 5, the time T +2 is similar to the time T +1, and after the feature fusion expression vector V _ fusion is connected in series with the feature vector V _ localization vector directly related to the position confidence of the time T +1, the feature fusion expression vector V _ fusion is spliced with the high-precision map attribute expression vector of the time T +2, and then the high-precision map attribute expression vector is input into two layers of BiLSTM bidirectional long-short term memory network layers, and then the position confidence value of the time T +2 is output after passing through a full connection layer.
Times T+3 through T+10 are handled in the same way as time T+2; in this manner, a continuous sequence of position confidence values for future times is output. The BiLSTM network layers have a bidirectional memory over the time series and can express the temporal continuity of positioning. In addition to the BiLSTM layers, the input is followed by a fully connected layer, which outputs the positioning confidence at each future time.
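The prediction branch above can be sketched in plain NumPy as follows. This is a minimal, untrained illustration of the data flow only: the layer sizes, random weights, and helper names (`lstm_pass`, `bilstm`, `make_params`) are hypothetical, and a real implementation would use a deep learning framework with trained parameters:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_pass(xs, Wx, Wh, b):
    """One LSTM direction over a sequence xs of shape (T, d_in) -> (T, d_h)."""
    d_h = Wh.shape[1]
    h, c, out = np.zeros(d_h), np.zeros(d_h), []
    for x in xs:
        z = Wx @ x + Wh @ h + b            # all four gates at once
        i, f, g, o = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
        out.append(h)
    return np.stack(out)

def bilstm(xs, pf, pb):
    """Bidirectional layer: forward and backward passes, concatenated."""
    fwd = lstm_pass(xs, *pf)
    bwd = lstm_pass(xs[::-1], *pb)[::-1]
    return np.concatenate([fwd, bwd], axis=1)

rng = np.random.default_rng(1)

def make_params(d_in, d_h):
    return (0.1 * rng.standard_normal((4 * d_h, d_in)),
            0.1 * rng.standard_normal((4 * d_h, d_h)),
            np.zeros(4 * d_h))

d_fusion, d_loc, d_map, d_h, horizon = 8, 4, 3, 16, 10
v_fusion = rng.standard_normal(d_fusion)           # fused subsystem features
v_loc = rng.standard_normal(d_loc)                 # position-confidence features
map_attr = rng.standard_normal((horizon, d_map))   # map attributes for T+1..T+10

# Per-step input: V_fusion ++ V_localization ++ map attributes at that step
xs = np.stack([np.concatenate([v_fusion, v_loc, m]) for m in map_attr])

# Two stacked BiLSTM layers, then a fully connected sigmoid output head
h1 = bilstm(xs, make_params(xs.shape[1], d_h), make_params(xs.shape[1], d_h))
h2 = bilstm(h1, make_params(2 * d_h, d_h), make_params(2 * d_h, d_h))
w_fc = 0.1 * rng.standard_normal(2 * d_h)
conf = sigmoid(h2 @ w_fc)   # one confidence value in (0, 1) per future time
```

The bidirectional pass is what gives each future step's confidence a view of both earlier and later steps in the prediction window, matching the temporal-continuity property described above.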
(3) Calculating a vehicle body attitude confidence value at a future time, comprising: the feature fusion representation vector V_fusion is concatenated with the feature vector V_yaw directly related to the attitude confidence at the current time, and then spliced with the high-precision map attribute representation vector at time T+1 (i.e., the next time); the result passes through the two BiLSTM bidirectional long short-term memory network layers and the fully connected layer, which outputs the attitude confidence value at time T+1.
As shown in fig. 5, time T+2 is handled like time T+1: the feature fusion representation vector V_fusion is concatenated with the feature vector V_yaw related to the attitude confidence at time T+1, spliced with the high-precision map attribute representation vector at time T+2, fed into the BiLSTM network layers, and passed through the fully connected layer to output the attitude confidence value at time T+2.
Times T+3 through T+10 are handled in the same way as time T+2; in this manner, attitude confidence values for continuous future times are output.
Further, the high-precision map attribute value includes various scene type information of a road.
Example two:
As shown in fig. 6, the present invention provides a redundancy fusion positioning enhancement apparatus based on a confidence prediction system, which includes:
the confidence evaluation module is used for calculating positioning deviation data of the redundant fusion positioning system and the high-precision inertial navigation system, inputting the positioning deviation data into a trained confidence value fitting network based on deep learning, and calculating a confidence value corresponding to the positioning deviation data;
The confidence prediction module is used for predicting track points at a future moment through the speed and the acceleration of the vehicle at the current moment and acquiring high-precision map attribute values corresponding to the track points; and for inputting the observed quantity related to the confidence of the positioning subsystem at the current moment, the confidence value at the current moment and the high-precision map attribute value corresponding to the track point at the future moment into the trained time-series neural network based on deep learning, and calculating the confidence value at the future moment.
Example three:
as shown in fig. 7, the present embodiment provides an electronic apparatus including:
a memory for storing a computer software program;
a processor for reading and executing the computer software program to implement the redundancy fusion positioning enhancement method based on the confidence prediction system, for example: calculating positioning deviation data of a redundant fusion positioning system and a high-precision inertial navigation system, inputting the positioning deviation data into a trained confidence value fitting network based on deep learning, and calculating a confidence value corresponding to the positioning deviation data; predicting track points at a future moment through the speed and the acceleration of the vehicle at the current moment, and acquiring high-precision map attribute values corresponding to the track points; and inputting the observed quantity related to the confidence of the positioning subsystem at the current moment, the confidence value at the current moment and the high-precision map attribute value corresponding to the track point at the future moment into the trained time-series neural network based on deep learning, and calculating the confidence value at the future moment.
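The positioning-deviation computation recited above (east-north-up conversion, then rotation by the yaw angle into the vehicle body frame, as detailed in claim 2) can be sketched as follows. The map projection from latitude/longitude to ENU is omitted and ENU coordinates are assumed as inputs; the rotation sign convention and function name are illustrative assumptions:

```python
import numpy as np

def body_frame_deviation(enu_fusion, enu_ins, yaw):
    """Deviation of the fusion positioning result w.r.t. high-precision
    inertial navigation, expressed in the vehicle body frame.

    enu_fusion, enu_ins: (3,) east-north-up coordinates (assumed already
                         projected from latitude/longitude)
    yaw: vehicle yaw angle in radians from the inertial navigation system
    Returns (dx, dy, dz): x-direction deviation, y-direction deviation,
    and elevation difference under the body coordinate system.
    """
    diff = np.asarray(enu_fusion) - np.asarray(enu_ins)
    c, s = np.cos(yaw), np.sin(yaw)
    # Rotation about the vertical axis maps the horizontal ENU deviation
    # into the body frame; the elevation difference is unchanged. The sign
    # convention here is one common choice, not the patent's.
    R = np.array([[c, s, 0.0],
                  [-s, c, 0.0],
                  [0.0, 0.0, 1.0]])
    return R @ diff

dx, dy, dz = body_frame_deviation([101.0, 52.0, 10.3], [100.0, 50.0, 10.0], np.pi / 2)
```

The triple (dx, dy, dz) is the positioning deviation data fed to the confidence value fitting network.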
In addition, the logic instructions in the memory may be implemented in the form of software functional units and, when sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Example four:
an embodiment of the present invention provides a computer-readable storage medium, in which a computer software program for implementing the above-mentioned method for enhancing redundant fusion positioning based on a confidence prediction system is stored. Examples include: calculating positioning deviation data of a redundant fusion positioning system and a high-precision inertial navigation system, inputting the positioning deviation data into a trained confidence value fitting network based on deep learning, and calculating a confidence value corresponding to the positioning deviation data; predicting track points at a future moment through the speed and the acceleration of the vehicle at the current moment, and acquiring high-precision map attribute values corresponding to the track points; and inputting the observed quantity related to the confidence coefficient of the positioning subsystem at the current moment, the confidence coefficient value at the current moment and the high-precision map attribute value corresponding to the track point at the future moment into the trained time sequence neural network based on deep learning, and calculating the confidence coefficient value at the future moment.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment may be implemented by software plus a necessary general hardware platform, and may also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (9)

1. A method for enhancing redundant fusion positioning based on a confidence coefficient prediction system is characterized by comprising the following steps:
calculating positioning deviation data of a redundant fusion positioning system and a high-precision inertial navigation system, inputting the positioning deviation data into a trained confidence value fitting network based on deep learning, and calculating a confidence value corresponding to the positioning deviation data;
Predicting track points at a future moment through the speed and the acceleration of the vehicle at the current moment, and acquiring high-precision map attribute values corresponding to the track points, wherein the high-precision map attribute values comprise various scene type information of roads;
inputting the observed quantity related to the confidence coefficient of the positioning subsystem at the current moment, the confidence coefficient value at the current moment and the high-precision map attribute value corresponding to the track point at the future moment into a trained time sequence neural network based on deep learning, and calculating the confidence coefficient value at the future moment; the method comprises the following steps:
calculating a feature fusion expression vector through the observed quantity related to the confidence of the positioning subsystem at the current moment, calculating a positioning confidence value at the future moment through the feature fusion expression vector, the position confidence value at the current moment and the high-precision map attribute at the next moment, and calculating a posture confidence value at the future moment through the feature fusion expression vector, the posture confidence value at the current moment and the high-precision map attribute at the next moment.
2. The method for enhancing redundant fusion positioning based on confidence coefficient prediction system according to claim 1, wherein the calculating the positioning deviation data of the redundant fusion positioning system and the high-precision inertial navigation system includes:
Respectively converting a longitude and latitude coordinate positioning result of the redundant fusion positioning system and a longitude and latitude coordinate positioning result of the high-precision inertial navigation system into plane northeast coordinates in a coordinate projection mode;
and acquiring a rotation matrix of a yaw angle through a high-precision inertial navigation system, multiplying the northeast coordinate vector by the rotation matrix to obtain a vehicle body coordinate, and calculating the x-direction deviation, the y-direction deviation and the elevation difference under a vehicle body coordinate system, wherein the value combination of the x-direction deviation, the y-direction deviation and the elevation difference is positioning deviation data.
3. The method for redundant fusion localization enhancement based on confidence prediction system according to claim 1 or 2, wherein the training process of the confidence value fitting network based on deep learning comprises:
setting a corresponding experience confidence value for each group of sampled positioning deviation data, training a confidence value fitting network by taking a plurality of groups of positioning deviation data and the corresponding experience confidence values as input, and obtaining a confidence value fitting network training weight file which is taken as the input of the confidence value fitting network.
4. The method of claim 1, wherein the deep learning based training process for the time-series neural network comprises:
Training a time sequence neural network through positioning deviation data at the current moment, a confidence coefficient value corresponding to the positioning deviation data and a high-precision map attribute value corresponding to a vehicle track point at the future moment to obtain a time sequence neural network weight file;
through back propagation in the training process, the network weight is continuously changed in a mode of minimizing a loss function to enable the network output iteration to approach a true value, and an optimized time sequence neural network weight file is obtained and used as the input of the time sequence neural network.
5. The method of claim 4, wherein the loss function is the harmonic mean of the absolute percentage errors between the confidence estimates and the confidence truth values at times T+1, …, T+n, where T is the current moment, T+1 is the next moment, and T+n is the nth future moment.
6. The method of claim 1, wherein the calculating the confidence value at the future time comprises:
computing a feature fusion representation vector: respectively forming input vectors for the observation data of the multiple positioning subsystems and variables related to the confidence degrees through feature representation to obtain feature vectors of the positioning system at the current moment, and performing confidence vector fusion calculation on the feature vectors through a neural network attention mechanism to obtain a feature fusion representation vector V_fusion;
Calculating a position confidence value at a future time, comprising: the feature fusion expression vector V_fusion is connected in series with a feature vector V_localization directly related to the position confidence of the current moment, and then is sequentially spliced with the high-precision map attribute expression vector of the next moment, passes through two BiLSTM bidirectional long short-term memory network layers, and then passes through a full connection layer to output the position confidence value of the next moment; by analogy, continuous position confidence values of future moments are output;
calculating a body attitude confidence value at a future time, comprising: the feature fusion expression vector V_fusion is connected in series with the feature vector V_yaw directly related to the attitude confidence of the current moment, and then is sequentially spliced with the high-precision map attribute expression vector of the next moment, passes through two BiLSTM bidirectional long short-term memory network layers, and then passes through a full connection layer to output the attitude confidence value of the next moment; in the same way, attitude confidence values of continuous future moments are output.
7. A redundant fusion positioning enhancement device based on a confidence prediction system is characterized by comprising:
the confidence evaluation module is used for calculating positioning deviation data of the redundant fusion positioning system and the high-precision inertial navigation system, inputting the positioning deviation data into a trained confidence value fitting network based on deep learning, and calculating a confidence value corresponding to the positioning deviation data;
The confidence coefficient prediction module is used for predicting track points at a future moment through the speed and the acceleration of the vehicle at the current moment and acquiring high-precision map attribute values corresponding to the track points, wherein the high-precision map attribute values comprise various scene type information of roads; inputting the observed quantity related to the confidence coefficient of the positioning subsystem at the current moment, the confidence coefficient value at the current moment and the high-precision map attribute value corresponding to the track point at the future moment into a trained time sequence neural network based on deep learning, and calculating the confidence coefficient value at the future moment; the method comprises the following steps:
calculating a feature fusion expression vector through the observed quantity related to the confidence of the positioning subsystem at the current moment, calculating a positioning confidence value at the future moment through the feature fusion expression vector, the position confidence value at the current moment and the high-precision map attribute at the next moment, and calculating a posture confidence value at the future moment through the feature fusion expression vector, the posture confidence value at the current moment and the high-precision map attribute at the next moment.
8. An electronic device, comprising:
a memory for storing a computer software program;
a processor for reading and executing said computer software program to implement a method of redundant fusion localization enhancement based on confidence prediction system of any of claims 1 to 6.
9. A computer-readable storage medium having stored thereon a computer software program for implementing the method of redundant fusion localization enhancement based on confidence prediction system of any of claims 1 to 6.
CN202010537807.4A 2020-06-12 2020-06-12 Method and device for enhancing redundancy fusion positioning based on confidence prediction system Active CN111709517B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010537807.4A CN111709517B (en) 2020-06-12 2020-06-12 Method and device for enhancing redundancy fusion positioning based on confidence prediction system


Publications (2)

Publication Number Publication Date
CN111709517A CN111709517A (en) 2020-09-25
CN111709517B true CN111709517B (en) 2022-07-29

Family

ID=72540324


Country Status (1)

Country Link
CN (1) CN111709517B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112798020B (en) * 2020-12-31 2023-04-07 中汽研(天津)汽车工程研究院有限公司 System and method for evaluating positioning accuracy of intelligent automobile
CN112927256A (en) * 2021-03-16 2021-06-08 杭州萤石软件有限公司 Boundary fusion method and device for partitioned area and mobile robot
CN113096175B (en) * 2021-03-24 2023-10-24 苏州中科广视文化科技有限公司 Depth map confidence estimation method based on convolutional neural network
CN112859131B (en) * 2021-04-12 2021-09-07 北京三快在线科技有限公司 Positioning method and device of unmanned equipment
CN113093255A (en) * 2021-05-07 2021-07-09 深圳市前海智车科技有限公司 Multi-signal true fusion positioning calculation method, device, equipment and storage medium
CN113776538A (en) * 2021-09-16 2021-12-10 中国人民解放军91388部队 Real-time data fusion method for target track based on indication platform
CN114485681B (en) * 2021-12-30 2023-10-10 武汉光庭信息技术股份有限公司 Method for evaluating consistency rate of precision map data by utilizing DR track
CN115238836B (en) * 2022-09-23 2023-04-28 中国空气动力研究与发展中心计算空气动力研究所 Fusion method based on correlation degree of pneumatic data and physical model
CN117002538A (en) * 2023-10-07 2023-11-07 格陆博科技有限公司 Automatic driving control system based on deep learning algorithm

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106885576B (en) * 2017-02-22 2020-02-14 哈尔滨工程大学 AUV (autonomous Underwater vehicle) track deviation estimation method based on multipoint terrain matching positioning
CN111133447B (en) * 2018-02-18 2024-03-19 辉达公司 Method and system for object detection and detection confidence for autonomous driving
CN109001787B (en) * 2018-05-25 2022-10-21 北京大学深圳研究生院 Attitude angle resolving and positioning method and fusion sensor thereof
US11169531B2 (en) * 2018-10-04 2021-11-09 Zoox, Inc. Trajectory prediction on top-down scenes
CN109521454B (en) * 2018-12-06 2020-06-09 中北大学 GPS/INS integrated navigation method based on self-learning volume Kalman filtering
CN109978026B (en) * 2019-03-11 2021-03-09 浙江新再灵科技股份有限公司 Elevator position detection method and system based on LSTM network
CN110118560B (en) * 2019-05-28 2023-03-10 东北大学 Indoor positioning method based on LSTM and multi-sensor fusion
CN110208740A (en) * 2019-07-09 2019-09-06 北京智芯微电子科技有限公司 TDOA-IMU data adaptive merges positioning device and method
CN111156987B (en) * 2019-12-18 2022-06-28 东南大学 Inertia/astronomy combined navigation method based on residual compensation multi-rate CKF


Similar Documents

Publication Publication Date Title
CN111709517B (en) Method and device for enhancing redundancy fusion positioning based on confidence prediction system
Shen et al. Seamless GPS/inertial navigation system based on self-learning square-root cubature Kalman filter
Zhang A fusion methodology to bridge GPS outages for INS/GPS integrated navigation system
Hasberg et al. Simultaneous localization and mapping for path-constrained motion
Zhang et al. Increasing GPS localization accuracy with reinforcement learning
Liu et al. Deep learning-enabled fusion to bridge GPS outages for INS/GPS integrated navigation
CN109866752A (en) Double mode parallel vehicles track following driving system and method based on PREDICTIVE CONTROL
CN113642633A (en) Method, apparatus, device and medium for classifying driving scene data
CN111190211B (en) GPS failure position prediction positioning method
CN103884340A (en) Information fusion navigation method for detecting fixed-point soft landing process in deep space
CN110906933A (en) AUV (autonomous Underwater vehicle) auxiliary navigation method based on deep neural network
Geragersian et al. An INS/GNSS fusion architecture in GNSS denied environment using gated recurrent unit
CN111257853B (en) Automatic driving system laser radar online calibration method based on IMU pre-integration
Wang et al. DeepSpeedometer: Vehicle speed estimation from accelerometer and gyroscope using LSTM model
Gwak et al. Neural-network multiple models filter (NMM)-based position estimation system for autonomous vehicles
Tang et al. OdoNet: Untethered speed aiding for vehicle navigation without hardware wheeled odometer
CN114386599B (en) Method and device for training trajectory prediction model and trajectory planning
CN113920198B (en) Coarse-to-fine multi-sensor fusion positioning method based on semantic edge alignment
CN116337045A (en) High-speed map building navigation method based on karto and teb
Du et al. A hybrid fusion strategy for the land vehicle navigation using MEMS INS, odometer and GNSS
Yan et al. Integration of vehicle dynamic model and system identification model for extending the navigation service under sensor failures
El Sabbagh et al. Promoting navigation system efficiency during GPS outage via cascaded neural networks: A novel AI based approach
Toro et al. Particle Filter technique for position estimation in GNSS-based localisation systems
CN114777762B (en) Inertial navigation method based on Bayesian NAS
CN114004406A (en) Vehicle track prediction method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant