CN113483769B - Vehicle self-positioning method, system, equipment and medium based on particle filter - Google Patents

Vehicle self-positioning method, system, equipment and medium based on particle filter

Info

Publication number
CN113483769B
CN113483769B (application CN202110941721.2A)
Authority
CN
China
Prior art keywords
vehicle
model
motion system
system model
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110941721.2A
Other languages
Chinese (zh)
Other versions
CN113483769A (en)
Inventor
Jiang Kun (江昆)
Yang Diange (杨殿阁)
Feng Runze (冯润泽)
Yu Weiguang (于伟光)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN202110941721.2A priority Critical patent/CN113483769B/en
Publication of CN113483769A publication Critical patent/CN113483769A/en
Application granted granted Critical
Publication of CN113483769B publication Critical patent/CN113483769B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G01C21/1656Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42Determining position
    • G01S19/45Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42Determining position
    • G01S19/45Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • G01S19/47Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement the supplementary measurement being an inertial measurement, e.g. tightly coupled inertial
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/277Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Automation & Control Theory (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Navigation (AREA)

Abstract

The invention relates to a particle-filter-based vehicle self-positioning method, system, equipment and medium, comprising the following steps: designing a measurement model of the particle filter, the measurement model comprising a direct measurement model and a designed measurement model; training a pre-constructed vehicle motion system model on collected real-vehicle data to obtain a trained vehicle motion system model; and acquiring the vehicle motion state from the combined inertial navigation unit and the vision sensor, and estimating the vehicle pose from the vehicle motion system model and the measurement model to position the vehicle. The invention introduces the lateral intercepts from the vehicle to the left and right lane lines, observed by the vision module, as additional observations, and can estimate both the position and the heading of the vehicle. The invention can be widely applied in the field of vehicle self-positioning.

Description

Vehicle self-positioning method, system, equipment and medium based on particle filter
Technical Field
The invention relates to the field of vehicle self-positioning, and in particular to a method, system, equipment and storage medium for self-positioning a vehicle with a particle filter, using sensors already present in mass-production vehicles (a global navigation satellite system, an inertial measurement unit and a vision sensor).
Background
Accurate self-positioning is a prerequisite for L3 (Level 3) and higher levels of automated driving. Current mature high-accuracy vehicle self-positioning schemes fall mainly into four categories: 1. a combined inertial navigation positioning system consisting of a GNSS (Global Navigation Satellite System) and an IMU (Inertial Measurement Unit); 2. a fusion positioning system combining inertial navigation with RTK (Real-Time Kinematic); 3. a lidar positioning system matched against a point-cloud high-precision map; 4. a vision camera matched against a high-precision map. Because the high-precision IMU, the combined inertial navigation with RTK, and the lidar used in the first three schemes are all expensive, many automobile manufacturers avoid using them on mass-production vehicles, so these schemes have mostly remained at the research stage without practical application to mass-production models. The vision sensor used in scheme 4 is inexpensive, provides rich semantic and visual information, and enables high-precision positioning when matched against a high-precision map; since the cost of using a high-precision map is also low, scheme 4 is well suited to mass-production vehicle models.
Current matching-based positioning methods combining a vision camera with a high-precision map can be divided into optimization-based methods and filtering-based methods, described below in turn:
the method is based on the optimization method, namely, a vehicle pose with the smallest difference between a visual detection feature and a map feature is found, a large number of visual feature points are firstly required to be extracted from visual information, then the extracted visual feature points are matched with a point cloud map, and the pose relation between the two point clouds is continuously and iteratively calculated so as to solve the position and the pose of the vehicle. However, most of the visual feature points are extracted by using a deep learning method, and a large number of point clouds are iteratively solved, and the two steps have larger calculation force requirements.
The filtering-based method obtains the prior probability distribution of the ego-vehicle pose from a vehicle motion model, computes the posterior probability distribution of the pose once an observation arrives, and then estimates the pose. Filtering requires little computation; however, the currently popular Kalman filter is only suitable for linear observation models, whereas observation models built on vision sensors are nonlinear. Cai Hao et al. used a Kalman filter with a linear observation model built from a vision sensor, which can only estimate the position of the vehicle, not its attitude. In addition, the filtering-based method requires a vehicle motion model; kinematic models such as the constant-velocity constant-turn-rate (CVCT) model or the constant-acceleration model are commonly used, but they break down as the sampling interval grows. For example, the visual map-matching positioning method proposed by Jae Kyu Suhr sets the filtering frequency to 100 Hz, yet many vision sensors sample at only about 30 Hz, so the failure of the vehicle motion model is another obstacle to practical application of such algorithms.
Disclosure of Invention
In view of the above problems, the present invention aims to provide a particle-filter-based vehicle self-positioning method, system, equipment and medium, which use the common sensors of a mass-production vehicle (GNSS, IMU and camera) together with a vector high-precision map to position the vehicle with high accuracy.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
in a first aspect of the present invention, a method for self-positioning a vehicle based on a particle filter is provided, comprising the steps of:
designing a measurement model of the particle filter, the measurement model comprising a direct measurement model and a designed measurement model;
training a pre-constructed vehicle motion system model based on the collected real vehicle data to obtain a trained vehicle motion system model;
and acquiring the vehicle motion state from the combined inertial navigation unit and the vision sensor, and estimating the vehicle pose from the vehicle motion system model and the measurement model to position the vehicle.
Preferably, the direct measurement model is based on the combined inertial navigation unit formed by fusing GPS and IMU, and outputs vehicle positioning information at the same frequency as the IMU sampling frequency; the designed measurement model is based on the vision sensor, and its output is the lateral intercepts from the vehicle to the adjacent left and right lane lines.
Preferably, the method for training the pre-constructed vehicle motion system model based on the collected real vehicle data comprises the following steps:
constructing a vehicle motion system model, and defining input and output of the vehicle motion system model;
defining a loss function adopted in the training process;
and processing the pre-acquired real vehicle data, taking the processed data as a training set, and training the constructed vehicle motion system model based on the determined loss function to obtain a trained vehicle motion system model.
Preferably, the input of the vehicle motion system model is: the times t_0, t_1, ..., t_{n-1} of n consecutive sampling moments, together with the vehicle position abscissa x, the vehicle position ordinate y, the angle θ between the vehicle heading and due north, the vehicle speed v, and the component a of the vehicle acceleration along the heading direction observed at each moment; the output of the vehicle motion system model is: the prediction of the vehicle position and heading at the next moment.
Preferably, preprocessing the pre-acquired real-vehicle data comprises:
converting the longitude-latitude coordinates output by the vehicle inertial navigation system in the WGS84 coordinate system into the UTM coordinate system;
for a data section lasting from t_0 to t_K, taking, from each t_i, i ∈ [0, K-n], the n consecutive sampling moments t_i, t_{i+1}, ..., t_{i+n-1}; using the time interval from each sampling moment to t_i as that moment's sampling time, taking the ego-vehicle position at t_i as the coordinate origin, updating the ego-vehicle position at every moment, and combining the angle θ between the vehicle heading and due north, the vehicle speed v and the component a of the vehicle acceleration along the heading direction to obtain one input segment;
repeating the above steps to obtain, from one section lasting from t_0 to t_K, K-n+1 segments for training.
Preferably, the method for acquiring the vehicle motion state from the combined inertial navigation unit and the vision sensor and estimating the vehicle pose from the vehicle motion system model and the measurement model to position the vehicle comprises the following steps:
(1) judging whether the sampling duration meets the input requirement of the vehicle motion system model; if not, going to step (2); otherwise, going to step (3);
(2) when the number of sampling moments is less than n, generating particles from the observations of the combined inertial navigation unit;
(3) when the number of sampling moments is greater than or equal to n, feeding the vehicle observations into the vehicle motion system model to obtain the prediction of the vehicle state at the next moment, and generating particles from that prediction;
(4) for the particles obtained in step (2) or step (3), calculating the lateral intercepts Inc_left and Inc_right from each particle to the left and right adjacent lane lines using the vector high-precision map M, and updating the particle weights based on these intercepts and the intercepts observed by the vision sensor;
(5) after calculating the particle weights, normalizing them and taking the weighted sum of all particle states to obtain the estimate of the ego-vehicle pose.
In a second aspect, the present invention provides a vehicle self-positioning system based on a particle filter, comprising:
the measurement model design module is used for designing a measurement model of the particle filter, and the measurement model comprises a direct measurement model and a designed measurement model;
the vehicle motion system model training module is used for training a pre-constructed vehicle motion system model based on the collected real-vehicle data to obtain a trained vehicle motion system model;
and the vehicle pose estimation module is used for acquiring the motion state of the vehicle based on the combined inertial navigation and visual sensor, estimating the vehicle pose based on the vehicle motion system model and the measurement model and realizing the positioning of the vehicle.
Preferably, the direct measurement model is based on the combined inertial navigation unit formed by fusing GPS and IMU, and the model outputs vehicle positioning information at the same frequency as the IMU sampling frequency; the designed measurement model is based on the vision sensor, and the model outputs the lateral intercepts from the vehicle to the adjacent left and right lane lines.
In a third aspect of the present invention, a processing device is provided, comprising at least a processor and a memory, the memory storing a computer program; when running the computer program, the processor executes the steps of the particle-filter-based vehicle self-positioning method.
In a fourth aspect of the present invention, there is provided a computer storage medium having stored thereon computer readable instructions executable by a processor to implement the steps of the particle filter based vehicle self-positioning method.
Due to the adoption of the technical scheme, the invention has the following advantages:
1. The invention introduces the lateral intercepts from the vehicle to the left and right lane lines, observed by the vision module, as additional observations, enabling estimation of both the position and the heading of the vehicle.
2. The invention builds the ego-vehicle motion system model with a recurrent neural network, which solves the filter failure that occurs at low filtering frequency when a kinematics-based model is used as the vehicle motion model.
3. The invention uses a particle filter algorithm, which accommodates the nonlinear measurement model and the vehicle motion system model, and achieves accurate vehicle self-positioning with mass-production vehicle sensors.
Therefore, the invention can be widely applied to the field of vehicle self-positioning.
Drawings
FIG. 1 is a flow chart of an algorithm for implementing vehicle self-positioning in accordance with the present invention;
FIG. 2 is a diagram of a metrology model designed in the present invention;
FIG. 3 is a network model for predicting the state of a vehicle in the present invention;
FIG. 4 is a model diagram of the present invention for calculating left and right lateral intercepts of particles to lane lines.
Detailed Description
The present invention will be described in detail with reference to the accompanying drawings and examples.
The vehicle self-positioning method provided by the invention, based on a particle filter algorithm, takes the lateral intercepts from the vehicle to the left and right lane lines observed by the vision module as additional observations and models the vehicle motion system with a recurrent neural network; it still filters well when the filtering frequency is low, and overcomes the unsuitability of common filters for nonlinear measurement models. Compared with optimization-based matching positioning algorithms, the method requires little computation and achieves low-cost, high-precision self-positioning for mass-production vehicles.
Example 1
As shown in fig. 1, the present embodiment provides a vehicle self-positioning method based on a particle filter, which includes the following steps:
1) A measurement model of the particle filter is designed, the measurement model comprising a direct measurement model and a designed measurement model.
As shown in fig. 2, the direct measurement model is based on the combined inertial navigation unit formed by fusing GPS and IMU; its output is vehicle positioning information (longitude, latitude and heading) at the same frequency as the IMU sampling frequency;
the designed measurement model is based on the vision sensor (in this embodiment, a Mobileye™ ADAS camera), whose output is the lateral intercepts from the vehicle to the adjacent left and right lane lines.
2) Training the pre-constructed vehicle motion system model on the collected real-vehicle data to obtain a trained vehicle motion system model, so that it fits the actual motion system of the vehicle as closely as possible.
Specifically, the method for training the pre-constructed vehicle motion system model based on the collected real vehicle data comprises the following steps:
2.1 A vehicle motion system model is constructed and inputs and outputs of the vehicle motion system model are defined.
Since an RNN (Recurrent Neural Network) is suited to extracting features from serialized information and predicting trends, this embodiment constructs the vehicle motion system model as an RNN followed by a fully connected network.
As shown in fig. 3, the vehicle motion system model constructed in this embodiment connects a 5-layer LSTM network to a 2-layer fully connected network. The inputs of the 5-layer LSTM network are: the times t_0, t_1, ..., t_{n-1} of n consecutive sampling moments, together with the vehicle position abscissa x, the vehicle position ordinate y, the angle θ between the vehicle heading and due north, the vehicle speed v, and the component a of the vehicle acceleration along the heading direction observed at each moment; its output is the hidden-layer information h_t obtained at the last moment. The input of the 2-layer fully connected network is the hidden-layer information h_t from the last moment of the LSTM network, and its output is the prediction of the vehicle position and heading at time t_n.
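As a concrete illustration of this architecture, the following is a minimal sketch, assuming PyTorch; the hidden width of 64 and the exact head layout are illustrative assumptions, since the text above does not specify layer widths.

```python
import torch
import torch.nn as nn

class VehicleMotionModel(nn.Module):
    """5-layer LSTM followed by a 2-layer fully connected head, as in Fig. 3."""
    def __init__(self, input_dim: int = 6, hidden_dim: int = 64):
        # input_dim = 6: sampling time t, position x, y, heading theta, speed v, acceleration a
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, num_layers=5, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 3),  # predicted x, y, theta at t_n
        )

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        # seq: (batch, n, input_dim), one row per sampling moment
        _, (h, _) = self.lstm(seq)  # h: (num_layers, batch, hidden_dim)
        return self.head(h[-1])     # hidden state of the last layer at the last moment

model = VehicleMotionModel()
pred = model(torch.randn(8, 10, 6))  # e.g. n = 10 past moments -> pred shape (8, 3)
```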
2.2 A loss function used in the training process is defined.
In this embodiment, the loss function of the vehicle motion system model is the mean square loss

L(y, ŷ) = (1/N) Σ_{i=1}^{N} (y_i − ŷ_i)²    (1)

where y denotes the vehicle state predicted by the network and ŷ the true vehicle state; y_i is the i-th component of y and ŷ_i the i-th component of ŷ; N is the dimension of y and ŷ, with N = 3 in this embodiment (abscissa, ordinate and heading).
2.3 Processing the pre-acquired real vehicle data, taking the processed real vehicle data as a training set, and training the vehicle motion system model constructed in the step 2.1) based on the loss function determined in the step 2.2) to obtain a final vehicle motion system model.
Preprocessing the real-vehicle data comprises the following steps:
First, the longitude-latitude coordinates output by the vehicle inertial navigation system in the WGS84 coordinate system are converted into the UTM (Universal Transverse Mercator) coordinate system.
Then, for a data section lasting from t_0 to t_K, from each t_i (i ∈ [0, K-n]) the n consecutive sampling moments (t_i, t_{i+1}, ..., t_{i+n-1}) are taken; the time interval from each sampling moment to t_i is used as that moment's sampling time; additionally, the ego-vehicle position at t_i is taken as the coordinate origin and the ego-vehicle position at every moment is updated; combined with the angle θ between the vehicle heading and due north, the vehicle speed v and the component a of the vehicle acceleration along the heading direction, this forms one network input, while the position abscissa, ordinate and heading of the vehicle at time t_{i+n} serve as the prediction ground truth. In this way, a section lasting from t_0 to t_K yields K-n+1 segments usable for training.
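A sketch of this preprocessing and of one training pass with the mean-square loss of Eq. (1) might look as follows; it assumes the pyproj library for the WGS84-to-UTM conversion, the VehicleMotionModel sketched above, and an illustrative UTM zone (EPSG:32650) and window length n = 10.

```python
import numpy as np
import torch
from pyproj import Transformer

# WGS84 longitude/latitude -> UTM; the zone chosen here (EPSG:32650) is an assumption
to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32650", always_xy=True)

def build_segments(t, lon, lat, theta, v, a, n=10):
    """Sliding-window training data: inputs (K-n+1, n, 6), targets (K-n+1, 3)."""
    x, y = to_utm.transform(lon, lat)
    inputs, targets = [], []
    for i in range(len(t) - n):                # i in [0, K-n]
        dt = t[i:i+n] - t[i]                   # interval to t_i as the sampling time
        dx = x[i:i+n] - x[i]                   # position at t_i as the coordinate origin
        dy = y[i:i+n] - y[i]
        inputs.append(np.stack([dt, dx, dy, theta[i:i+n], v[i:i+n], a[i:i+n]], axis=1))
        targets.append([x[i+n] - x[i], y[i+n] - y[i], theta[i+n]])  # truth at t_{i+n}
    return (torch.tensor(np.array(inputs), dtype=torch.float32),
            torch.tensor(np.array(targets), dtype=torch.float32))

def train_step(model, inputs, targets, opt):
    """One optimization step with the mean-square loss of Eq. (1)."""
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(inputs), targets)
    loss.backward()
    opt.step()
    return loss.item()
```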
3) Acquiring the vehicle motion state from the combined inertial navigation unit and the vision sensor, and estimating the vehicle pose from the vehicle motion system model and the measurement model to position the vehicle.
Specifically, the method comprises the following steps:
3.1) Judging whether the sampling duration meets the input requirement of the vehicle motion system model of step 2); if not, going to step 3.2); otherwise, going to step 3.3);
3.2) Algorithm start-up phase: when the number of sampling moments is less than n, particles are generated from the observations of the combined inertial navigation unit; the generated particle states are

x_t = x̂_t^obs + N(0, v_pos²)
y_t = ŷ_t^obs + N(0, v_pos²)    (2)
θ_t = θ̂_t^obs + N(0, v_θ²)

where x_t, y_t, θ_t denote the abscissa, ordinate and heading angle of a particle at time t; x̂_t^obs, ŷ_t^obs, θ̂_t^obs denote the observations of the combined inertial navigation unit at time t; and N(0, v_pos²), N(0, v_θ²) denote Gaussian noise with mean 0 and variance v_pos² and v_θ², respectively.
3.3) When the number of sampling moments is greater than or equal to n, the vehicle observations are fed into the vehicle motion system model to obtain the prediction of the vehicle state at the next moment, and particles are generated from that prediction; the particle states are

x_t = x̂_t^pred + N(0, v_pos²)
y_t = ŷ_t^pred + N(0, v_pos²)    (3)
θ_t = θ̂_t^pred + N(0, v_θ²)

where x_t, y_t, θ_t denote the abscissa, ordinate and heading angle of a generated particle at time t; x̂_t^pred, ŷ_t^pred, θ̂_t^pred denote the vehicle motion system model's prediction of the vehicle state at time t; N(0, v²) denotes Gaussian noise with mean 0 and variance v²; v_pos is the position observation noise, v_pos = 1 m in this embodiment; v_θ is the heading observation noise, v_θ = 0.1 rad in this embodiment; the number of particles generated is P = 400 in this embodiment.
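Both particle-generation branches reduce to drawing Gaussian perturbations around a center state; a minimal NumPy sketch under that reading follows (the state layout and variable names are illustrative):

```python
import numpy as np

P, V_POS, V_THETA = 400, 1.0, 0.1  # particle count and noise levels of this embodiment

def spawn_particles(center, rng, p=P):
    """center = (x, y, theta): the combined-inertial-navigation observation in the
    start-up phase (Eq. (2)), or the motion model's prediction afterwards (Eq. (3))."""
    x, y, theta = center
    parts = np.empty((p, 3))
    parts[:, 0] = x + rng.normal(0.0, V_POS, p)        # abscissa
    parts[:, 1] = y + rng.normal(0.0, V_POS, p)        # ordinate
    parts[:, 2] = theta + rng.normal(0.0, V_THETA, p)  # heading angle
    return parts

rng = np.random.default_rng(0)
x_obs, y_obs, theta_obs = 0.0, 0.0, 0.1  # hypothetical observation values
particles = spawn_particles((x_obs, y_obs, theta_obs), rng)
```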
3.4) Using the particle states obtained in step 3.2) or step 3.3) and the vector high-precision map M, the lateral intercepts Inc_left and Inc_right from each particle to the left and right adjacent lane lines are calculated, and the particle weights are updated based on these intercepts and the intercepts observed by the vision sensor.
The vector high-precision map M is composed as follows:

M = {L_1, L_2, ..., L_n}    (4)
L_i = {P_1, P_2, ..., P_n}    (5)
P_j = (x_j, y_j)    (6)

where L_i denotes the i-th lane line, P_j the j-th point on a lane line, and (x_j, y_j) the coordinates of P_j.
As shown in fig. 4, the points of the vector high-precision map M are transformed into a coordinate system whose origin is the particle position and whose positive vertical axis points along the particle heading, and the map point closest to the particle is found in each quadrant of this coordinate system. The distance from the particle to the intersection of the horizontal axis with the line connecting the nearest points of the first and third quadrants is the left lateral intercept Inc_left, and the distance from the particle to the intersection of the horizontal axis with the line connecting the nearest points of the second and fourth quadrants is the right lateral intercept Inc_right.
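A sketch of this intercept computation, assuming NumPy; the quadrant pairing follows the text above, the lane-line points are flattened into a single (N, 2) array as in Eqs. (4)-(6), and at least one map point is assumed to lie in each quadrant.

```python
import numpy as np

def lateral_intercepts(particle, lane_points):
    """particle = (x, y, theta), theta measured from due north;
    lane_points: (N, 2) array of map points. Returns (Inc_left, Inc_right)."""
    x, y, theta = particle
    d = lane_points - np.array([x, y])
    # particle frame: +y' along the heading (sin t, cos t), +x' to the right
    yp = d[:, 0] * np.sin(theta) + d[:, 1] * np.cos(theta)
    xp = d[:, 0] * np.cos(theta) - d[:, 1] * np.sin(theta)

    def nearest(mask):
        idx = np.flatnonzero(mask)  # assumes the quadrant is non-empty
        return idx[np.argmin(xp[idx] ** 2 + yp[idx] ** 2)]

    def axis_crossing(i, j):
        # distance from the particle (origin) to where segment (i, j) meets the x-axis
        t = yp[i] / (yp[i] - yp[j])
        return abs(xp[i] + t * (xp[j] - xp[i]))

    q1 = nearest((xp > 0) & (yp > 0)); q2 = nearest((xp < 0) & (yp > 0))
    q3 = nearest((xp < 0) & (yp < 0)); q4 = nearest((xp > 0) & (yp < 0))
    inc_left = axis_crossing(q1, q3)   # first and third quadrants, per the text
    inc_right = axis_crossing(q2, q4)  # second and fourth quadrants, per the text
    return inc_left, inc_right
```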
After the particle state quantities are obtained, the particle weights are updated using the observation of the designed measurement model; the weight of the k-th particle, k ∈ [0, K], can be calculated as

w_k = exp(−((Înc_left − Inc_left^k)² + (Înc_right − Inc_right^k)²) / (2 v_Inc²))    (7)

where Înc_left and Înc_right denote the lateral intercepts observed by the camera, Inc_left^k and Inc_right^k the intercepts computed for the k-th particle, and v_Inc the observation error of the Mobileye™ ADAS camera on the lateral intercept, v_Inc = 0.1 m in this embodiment.
3.5) After the particle weights are calculated, they are normalized, and the weighted sum of all particle states gives the estimate of the ego-vehicle pose.
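Steps 3.4) and 3.5) combine into the following sketch, assuming NumPy, the lateral_intercepts helper above, and the Gaussian weight as reconstructed in Eq. (7); a circular mean would be more careful for the heading component, but the plain weighted sum mirrors the text.

```python
import numpy as np

V_INC = 0.1  # camera observation error of the lateral intercept (m)

def update_and_estimate(particles, lane_points, inc_obs):
    """particles: (P, 3) states; inc_obs = (Inc_left, Inc_right) observed by the camera.
    Returns the weighted-mean pose estimate (x, y, theta)."""
    w = np.empty(len(particles))
    for k, p in enumerate(particles):
        inc_l, inc_r = lateral_intercepts(p, lane_points)
        err = (inc_obs[0] - inc_l) ** 2 + (inc_obs[1] - inc_r) ** 2
        w[k] = np.exp(-err / (2.0 * V_INC ** 2))  # Eq. (7)
    w /= w.sum()          # normalization, step 3.5)
    return w @ particles  # weighted sum of all particle states
```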
Example 2
Corresponding to the particle-filter-based vehicle self-positioning method of Embodiment 1, this embodiment provides a vehicle self-positioning system. The system can implement the method of Embodiment 1 and may be realized by software, hardware, or a combination of the two. For example, the system may comprise integrated or separate functional modules or functional units that perform the corresponding steps of the method of Embodiment 1. Since this system is substantially similar to the method embodiment, its description is relatively brief; for relevant details, refer to the description of Embodiment 1. The embodiment of the vehicle self-positioning system described here is merely illustrative.
This embodiment provides a particle-filter-based vehicle self-positioning system, which includes:
the measurement model design module is used for designing a measurement model of the particle filter, and the measurement model comprises a direct measurement model and a design measurement model;
the vehicle motion system model training module is used for training a pre-constructed vehicle motion system model based on the collected real-vehicle data to obtain a trained vehicle motion system model;
and the vehicle pose estimation module is used for acquiring the motion state of the vehicle based on the combined inertial navigation and visual sensor, estimating the vehicle pose based on the vehicle motion system model and the measurement model and realizing the positioning of the vehicle.
Preferably, in the measurement model design module, the direct measurement model is based on the combined inertial navigation unit formed by fusing GPS and IMU, and the model outputs vehicle positioning information at the same frequency as the IMU sampling frequency; the designed measurement model is based on the vision sensor, and the model outputs the lateral intercepts from the vehicle to the adjacent left and right lane lines.
Preferably, the vehicle motion system model training module includes:
the model construction module is used for constructing a vehicle motion system model and defining the input and the output of the vehicle motion system model;
the loss function definition module is used for defining a loss function adopted in the training process;
the model training module is used for processing the pre-acquired real-vehicle data, taking the processed data as the training set, and training the constructed vehicle motion system model based on the determined loss function and hyperparameters to obtain the final vehicle motion model.
Preferably, the vehicle pose estimation module includes:
the sampling duration judging module is used for judging whether the sampling duration meets the input requirement of the vehicle motion system model; if not, it signals the first particle generation module, otherwise it signals the second particle generation module;
the first particle generation module is used for generating particles from the observations of the combined inertial navigation unit;
the second particle generation module is used for inputting the observed quantity of the vehicle into the vehicle motion system model to obtain the prediction of the vehicle state at the next moment, and generating particles based on the prediction of the vehicle state at the next moment;
the particle weight updating module is used for calculating, from the obtained particles and the vector high-precision map M, the lateral intercepts Inc_left and Inc_right from each particle to the left and right adjacent lane lines, and updating the particle weights based on these intercepts;
and the estimation module is used for carrying out normalization processing on the weights of the particles after calculating the weights of the particles, and carrying out weighted summation on the states of all the particles to obtain the estimation of the vehicle pose.
Example 3
This embodiment provides a processing device corresponding to the particle-filter-based vehicle self-positioning method of Embodiment 1. The processing device may be a client processing device, for example a mobile phone, a notebook computer, a tablet computer or a desktop computer, which executes the vehicle self-positioning method of Embodiment 1.
The processing device comprises a processor, a memory, a communication interface and a bus; the processor, the memory and the communication interface are connected through the bus to communicate with one another. The memory stores a computer program that can run on the processor, and when running the computer program the processor executes the vehicle self-positioning method provided in Embodiment 1.
In some implementations, the memory may be high-speed random access memory (RAM), and may also include non-volatile memory, such as at least one disk memory.
In other implementations, the processor may be a Central Processing Unit (CPU), a Digital Signal Processor (DSP), or other general-purpose processor, which is not limited herein.
Example 4
The vehicle self-positioning method of Embodiment 1 may be embodied as a computer program product, which may include a computer-readable storage medium loaded with computer-readable program instructions for performing the particle-filter-based vehicle self-positioning method of Embodiment 1.
The computer readable storage medium may be a tangible device that retains and stores instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any combination of the preceding.
Example 5
This embodiment evaluates, on real-vehicle data, the performance of the proposed vehicle self-positioning method based on the camera and the vector high-precision map against the traditional particle filter.
The traditional particle-filter positioning method uses only the vehicle position and heading as observations, and a constant-velocity constant-turn-rate model as the vehicle motion model. On a section of real-vehicle data, with the parameters of this embodiment and a filtering frequency of 10 Hz, the proposed algorithm was compared with the traditional particle filter. The experimental results (Table 1) show that when the mean absolute distance error of the observations is 1.25 m, that of the traditional particle filter is 1.75 m while that of the proposed algorithm is 0.39 m; when the mean absolute angle error of the observations is 0.08 rad, that of the traditional particle filter is 0.18 rad while that of the proposed algorithm is 0.07 rad. Thus, at a sampling frequency of 10 Hz, the error of the traditional filter exceeds the direct observation error, i.e. the traditional filter fails, whereas the proposed algorithm considerably improves positioning accuracy over the raw observations.
Table 1 Performance comparison between the proposed algorithm and the traditional particle filter

                                   Mean absolute distance error (m)   Mean absolute angle error (rad)
Observations                       1.25                               0.08
Traditional particle filter        1.75                               0.18
Proposed algorithm                 0.39                               0.07
It is noted that the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
Finally, it should be noted that: the above embodiments are only for illustrating the technical aspects of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the above embodiments, it should be understood by those of ordinary skill in the art that: modifications and equivalents may be made to the specific embodiments of the invention without departing from the spirit and scope of the invention, which is intended to be covered by the claims.
The foregoing embodiments are only for illustrating the present invention, wherein the structures, connection modes, manufacturing processes, etc. of the components may be changed, and all equivalent changes and modifications performed on the basis of the technical solutions of the present invention should not be excluded from the protection scope of the present invention.

Claims (4)

1. A particle-filter-based vehicle self-positioning method, characterized by comprising the following steps:
designing a measurement model of the particle filter, the measurement model comprising a direct measurement model and a designed measurement model;
training a pre-constructed vehicle motion system model based on the collected real vehicle data to obtain a trained vehicle motion system model;
acquiring the vehicle motion state from the combined inertial navigation unit and the vision sensor, and estimating the vehicle pose from the vehicle motion system model and the measurement model, so that the vehicle is positioned;
the direct measurement model is based on the combined inertial navigation unit formed by fusing GPS and IMU, and outputs vehicle positioning information at the same frequency as the IMU sampling frequency; the designed measurement model is based on the vision sensor, and its output is the lateral intercepts from the vehicle to the adjacent left and right lane lines;
the method for training the pre-constructed vehicle motion system model based on the collected real vehicle data comprises the following steps:
constructing a vehicle motion system model, and defining input and output of the vehicle motion system model;
defining a loss function adopted in the training process;
processing the pre-acquired real vehicle data, taking the processed data as a training set, and training the constructed vehicle motion system model based on the determined loss function to obtain a trained vehicle motion system model;
the input of the vehicle motion system model is: the times t_0, t_1, ..., t_{n-1} of n consecutive sampling moments, together with the vehicle position abscissa x, the vehicle position ordinate y, the angle θ between the vehicle heading and due north, the vehicle speed v, and the component a of the vehicle acceleration along the heading direction observed at each moment; the output of the vehicle motion system model is: the prediction of the vehicle position and heading at the next moment;
preprocessing the pre-acquired real-vehicle data comprises the following steps:
first, converting the longitude-latitude coordinates output by the vehicle inertial navigation system in the WGS84 coordinate system into the UTM coordinate system;
then, for a data section lasting from t_0 to t_K, taking, from each t_i, i ∈ [0, K-n], the n consecutive sampling moments t_i, t_{i+1}, ..., t_{i+n-1}; using the time interval from each sampling moment to t_i as that moment's sampling time, taking the ego-vehicle position at t_i as the coordinate origin, updating the ego-vehicle position at every moment, and combining the angle θ between the vehicle heading and due north, the vehicle speed v and the component a of the vehicle acceleration along the heading direction to obtain one input segment; repeating the above steps to obtain, from one section lasting from t_0 to t_K, K-n+1 segments for training;
the method for acquiring the vehicle motion state from the combined inertial navigation unit and the vision sensor and estimating the vehicle pose from the vehicle motion system model and the measurement model to position the vehicle comprises the following steps:
(1) judging whether the sampling duration meets the input requirement of the vehicle motion system model; if not, going to step (2); otherwise, going to step (3);
(2) when the number of sampling moments is less than n, generating particles from the observations of the combined inertial navigation unit;
(3) when the number of sampling moments is greater than or equal to n, feeding the vehicle observations into the vehicle motion system model to obtain the prediction of the vehicle state at the next moment, and generating particles from that prediction;
(4) for the particles obtained in step (2) or step (3), calculating the lateral intercepts Inc_left and Inc_right from each particle to the left and right adjacent lane lines using the vector high-precision map M, and updating the particle weights based on these intercepts and the intercepts observed by the vision sensor;
(5) after calculating the particle weights, normalizing them and taking the weighted sum of all particle states to obtain the estimate of the ego-vehicle pose.
2. A particle-filter-based vehicle self-positioning system, characterized by comprising:
the measurement model design module, used for designing a measurement model of the particle filter, the measurement model comprising a direct measurement model and a designed measurement model;
the vehicle motion system model training module, used for training a pre-constructed vehicle motion system model based on the collected real-vehicle data to obtain a trained vehicle motion system model;
the vehicle pose estimation module is used for acquiring the motion state of the vehicle based on the combined inertial navigation and visual sensor, estimating the vehicle pose based on the vehicle motion system model and the measurement model and realizing the positioning of the vehicle;
the direct measurement model is based on the combined inertial navigation unit formed by fusing GPS and IMU, and outputs vehicle positioning information at the same frequency as the IMU sampling frequency; the designed measurement model is based on the vision sensor, and its output is the lateral intercepts from the vehicle to the adjacent left and right lane lines;
the method for training the pre-constructed vehicle motion system model based on the collected real vehicle data comprises the following steps:
constructing a vehicle motion system model, and defining input and output of the vehicle motion system model;
defining a loss function adopted in the training process;
processing the pre-acquired real vehicle data, taking the processed data as a training set, and training the constructed vehicle motion system model based on the determined loss function to obtain a trained vehicle motion system model;
the input of the vehicle motion system model is: the times t_0, t_1, ..., t_{n-1} of n consecutive sampling moments, together with the vehicle position abscissa x, the vehicle position ordinate y, the angle θ between the vehicle heading and due north, the vehicle speed v, and the component a of the vehicle acceleration along the heading direction observed at each moment; the output of the vehicle motion system model is: the prediction of the vehicle position and heading at the next moment;
preprocessing the pre-acquired real-vehicle data comprises the following steps:
first, converting the longitude-latitude coordinates output by the vehicle inertial navigation system in the WGS84 coordinate system into the UTM coordinate system;
then, for a data section lasting from t_0 to t_K, taking, from each t_i, i ∈ [0, K-n], the n consecutive sampling moments t_i, t_{i+1}, ..., t_{i+n-1}; using the time interval from each sampling moment to t_i as that moment's sampling time, taking the ego-vehicle position at t_i as the coordinate origin, updating the ego-vehicle position at every moment, and combining the angle θ between the vehicle heading and due north, the vehicle speed v and the component a of the vehicle acceleration along the heading direction to obtain one input segment; repeating the above steps to obtain, from one section lasting from t_0 to t_K, K-n+1 segments for training;
the method for acquiring the vehicle motion state from the combined inertial navigation unit and the vision sensor and estimating the vehicle pose from the vehicle motion system model and the measurement model to position the vehicle comprises the following steps:
(1) judging whether the sampling duration meets the input requirement of the vehicle motion system model; if not, going to step (2); otherwise, going to step (3);
(2) when the number of sampling moments is less than n, generating particles from the observations of the combined inertial navigation unit;
(3) when the number of sampling moments is greater than or equal to n, feeding the vehicle observations into the vehicle motion system model to obtain the prediction of the vehicle state at the next moment, and generating particles based on that prediction;
(4) for the particles obtained in step (2) or step (3), calculating the lateral intercepts Inc_left and Inc_right from each particle to the left and right adjacent lane lines using the vector high-precision map M, and updating the particle weights based on these intercepts and the intercepts observed by the vision sensor;
(5) after calculating the particle weights, normalizing them and taking the weighted sum of all particle states to obtain the estimate of the ego-vehicle pose.
3. A processing device comprising at least a processor and a memory, the memory having stored thereon a computer program, characterized in that the processor executes the steps of the particle filter based vehicle self-positioning method according to claim 1 when running the computer program.
4. A computer storage medium having stored thereon computer readable instructions executable by a processor to perform the steps of the particle filter based vehicle self-positioning method according to claim 1.
CN202110941721.2A 2021-08-17 2021-08-17 Vehicle self-positioning method, system, equipment and medium based on particle filter Active CN113483769B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110941721.2A CN113483769B (en) 2021-08-17 2021-08-17 Vehicle self-positioning method, system, equipment and medium based on particle filter

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110941721.2A CN113483769B (en) 2021-08-17 2021-08-17 Vehicle self-positioning method, system, equipment and medium based on particle filter

Publications (2)

Publication Number Publication Date
CN113483769A CN113483769A (en) 2021-10-08
CN113483769B true CN113483769B (en) 2024-03-29

Family ID: 77946643

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110941721.2A Active CN113483769B (en) 2021-08-17 2021-08-17 Vehicle self-positioning method, system, equipment and medium based on particle filter

Country Status (1)

Country Link
CN (1) CN113483769B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114323020B (en) * 2021-12-06 2024-02-06 纵目科技(上海)股份有限公司 Vehicle positioning method, system, equipment and computer readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110095116A (en) * 2019-04-29 2019-08-06 桂林电子科技大学 A kind of localization method of vision positioning and inertial navigation combination based on LIFT
CN110865403A (en) * 2019-10-18 2020-03-06 浙江天尚元科技有限公司 Positioning method based on neural network pre-learning and wheel-type odometer fusion
CN111174782A (en) * 2019-12-31 2020-05-19 智车优行科技(上海)有限公司 Pose estimation method and device, electronic equipment and computer readable storage medium
CN111862672A (en) * 2020-06-24 2020-10-30 北京易航远智科技有限公司 Parking lot vehicle self-positioning and map construction method based on top view
CN113075713A (en) * 2021-03-29 2021-07-06 北京理工大学重庆创新中心 Vehicle relative pose measuring method, system, equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20200044420A (en) * 2018-10-19 2020-04-29 삼성전자주식회사 Method and device to estimate position


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Positioning and velocity estimation of autonomous driving vehicles based on multi-sensor information fusion; Peng Wenzheng; Ao Yinhui; Huang Xiaotao; Wang Pengfei; Chinese Journal of Sensors and Actuators (No. 08); full text *

Also Published As

Publication number Publication date
CN113483769A (en) 2021-10-08

Similar Documents

Publication Publication Date Title
CN112347840B (en) Vision sensor laser radar integrated unmanned aerial vehicle positioning and image building device and method
Zhang A fusion methodology to bridge GPS outages for INS/GPS integrated navigation system
Li et al. Deep sensor fusion between 2D laser scanner and IMU for mobile robot localization
CN107246876B (en) Method and system for autonomous positioning and map construction of unmanned automobile
CN112639502A (en) Robot pose estimation
Qiu et al. RGB-DI images and full convolution neural network-based outdoor scene understanding for mobile robots
Thormann et al. Extended target tracking using Gaussian processes with high-resolution automotive radar
Wei et al. Vision-based lane-changing behavior detection using deep residual neural network
CN111402328B (en) Pose calculation method and device based on laser odometer
Wu et al. Robust LiDAR-based localization scheme for unmanned ground vehicle via multisensor fusion
CN113920198B (en) Coarse-to-fine multi-sensor fusion positioning method based on semantic edge alignment
Zhang et al. A robust SINS/VO integrated navigation algorithm based on RHCKF for unmanned ground vehicles
CN114323033A (en) Positioning method and device based on lane lines and feature points and automatic driving vehicle
Golroudbari et al. Recent advancements in deep learning applications and methods for autonomous navigation: A comprehensive review
CN113483769B (en) Vehicle self-positioning method, system, equipment and medium based on particle filter
Gwak et al. Neural-network multiple models filter (NMM)-based position estimation system for autonomous vehicles
Chen et al. Continuous occupancy mapping in dynamic environments using particles
CN110780325A (en) Method and device for positioning moving object and electronic equipment
CN113759364A (en) Millimeter wave radar continuous positioning method and device based on laser map
US20240094343A1 (en) Method, device, system, and storage medium for tracking moving target
Liu et al. Precise Positioning and Prediction System for Autonomous Driving Based on Generative Artificial Intelligence
CN112560571A (en) Intelligent autonomous visual navigation method based on convolutional neural network
CN117075158A (en) Pose estimation method and system of unmanned deformation motion platform based on laser radar
CN114485607B (en) Method, operation equipment, device and storage medium for determining motion trail
CN112747752B (en) Vehicle positioning method, device, equipment and storage medium based on laser odometer

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant