CN113155121B - Vehicle positioning method and device and electronic equipment - Google Patents
Vehicle positioning method and device and electronic equipment
- Publication number
- CN113155121B CN113155121B CN202110305408.XA CN202110305408A CN113155121B CN 113155121 B CN113155121 B CN 113155121B CN 202110305408 A CN202110305408 A CN 202110305408A CN 113155121 B CN113155121 B CN 113155121B
- Authority
- CN
- China
- Prior art keywords
- data
- sensor data
- positioning
- obtaining
- positioning uncertainty
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/3407—Route searching; Route guidance specially adapted for specific applications
- G01C21/3415—Dynamic re-routing, e.g. recalculating the route when the user deviates from calculated route or after detecting real-time traffic data or accidents
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention provides a vehicle positioning method and device and an electronic device. The vehicle positioning method comprises the following steps: acquiring first sensor data; obtaining first positioning uncertainty data according to the first sensor data; when the first positioning uncertainty data does not meet a first preset condition, acquiring second sensor data and obtaining second positioning uncertainty data according to the second sensor data and the first sensor data; and when the second positioning uncertainty data meets a second preset condition, positioning the vehicle according to the first sensor data and the second sensor data. By adding sensor information step by step, the method reduces the computing power required of the vehicle; since each addition of sensor data is triggered only when the positioning uncertainty fails to meet its condition, the staged approach further reduces the required computing power while ensuring the positioning accuracy.
Description
Technical Field
The invention relates to the field of automatic driving, in particular to a vehicle positioning method, a vehicle positioning device and electronic equipment.
Background
An automatic driving vehicle generally carries sensor modules such as a GPS (global positioning system), a camera, a radar and an IMU (inertial measurement unit) for positioning. Different sensors provide different positioning information for the vehicle, and the automatic driving vehicle executes the corresponding automatic driving operation according to the positioning information. Existing single-sensor vehicle-mounted positioning for automatic driving includes GPS positioning, visual positioning, IMU positioning, radar positioning and the like. GPS positioning is the most traditional and the simplest direct positioning mode: a satellite system resolves the vehicle positioning signal through a base station and then feeds the positioning information back to the vehicle. Visual positioning is a currently popular positioning mode: information near the vehicle is obtained through a camera, and the vehicle position information is then obtained through scene reconstruction and semantic understanding. Compared with GPS information, its positioning accuracy is markedly better and the positioning uncertainty is reduced to a great extent, because the vehicle-mounted camera system can effectively exploit the scene information around the vehicle. However, visual positioning depends on the processing accuracy and speed of its algorithms, and because the provided position information depends on the scene, the positioning accuracy drops greatly in visually similar scenes. IMU positioning is usually used to assist positioning: since the gyroscope and accelerometer measurements in the IMU must be integrated to obtain position information, IMU positioning is usually fused with visual and GPS positioning; its advantage is that it is not affected by weather or scene factors, and its disadvantage is an error that grows with time. Radar positioning is usually used in short-range special scenes; because of its high accuracy, strong penetrating power and insensitivity to environmental factors, it is widely used for positioning in mid- and high-end vehicles. In environments such as underground garages, where visual and GPS positioning are strongly affected, radar positioning plays a large role, but it cannot be used over long distances and is costly. Other sensors are usually used for auxiliary positioning.
In order to overcome the shortcomings of single-sensor positioning and improve the positioning accuracy, a multi-sensor fusion scheme has been proposed in the related art: the pose information (including position and direction) of the vehicle is obtained through sensors of different sources such as a GPS, a camera and an IMU. However, fusing multiple kinds of sensor data places a great burden on the computing power of the vehicle.
Disclosure of Invention
In view of the above, the embodiments of the present invention provide a vehicle positioning method, apparatus and electronic device, so as to overcome the defect of large computing-power consumption during vehicle positioning in the prior art.
According to a first aspect, an embodiment of the present invention provides a vehicle positioning method, including the steps of: acquiring first sensor data; obtaining first positioning uncertainty data according to the first sensor data; when the first positioning uncertainty data does not meet a first preset condition, second sensor data are obtained, and second positioning uncertainty data are obtained according to the second sensor data and the first sensor data; and when the second positioning uncertainty data meets a second preset condition, vehicle positioning is carried out according to the first sensor data and the second sensor data.
Optionally, the method further comprises: when the second positioning uncertainty data does not meet a second preset condition, acquiring third sensor data; and obtaining third positioning uncertainty data and positioning the vehicle according to the third sensor data, the second sensor data and the first sensor data.
Optionally, the method further comprises: and when the first positioning uncertainty data meets a first preset condition, positioning the vehicle according to the first sensor data.
Optionally, the first sensor data is a vision sensor, and the obtaining, according to the first sensor data, first positioning uncertainty data includes: and inputting the image data obtained by the vision sensor into a target neural network to obtain first positioning uncertainty data.
Optionally, the second sensor data is a gyroscope, and obtaining second positioning uncertainty data according to the second sensor data and the first sensor data includes: obtaining a first motion relation in the running process of the vehicle according to the second sensor data and the first sensor data; obtaining a first motion residual error according to the first motion relation; and obtaining second positioning uncertainty data according to the first motion residual error.
Optionally, the third sensor data is an accelerometer, and obtaining third positioning uncertainty data according to the third sensor data, the second sensor data, and the first sensor data includes: obtaining a second motion relation in the running process of the vehicle according to the second sensor data, the third sensor data and the first sensor data; obtaining a second motion residual error according to the second motion relation; and obtaining third positioning uncertainty data according to the second motion residual error.
Optionally, the obtaining second positioning uncertainty data according to the first motion residual includes: obtaining a first deviation covariance according to the first motion residual; and obtaining second positioning uncertainty data according to the first deviation covariance. The obtaining a first deviation covariance according to the first motion residual comprises:

$$E_{1}=\rho\Big(\big(e_{R}^{i,j}\big)^{\mathsf T}\,\Sigma_{R,i,j}^{-1}\,e_{R}^{i,j}\Big)+\rho\Big(r_{i}^{\mathsf T}\,\Lambda_{i}^{-1}\,r_{i}\Big),\qquad \Lambda_{i}=\Big(\sum_{k}J_{k}^{\mathsf T}\,\Sigma_{i}^{-1}\,J_{k}\Big)^{-1}$$

wherein $E_{1}$ is the first deviation covariance; $\rho$ denotes the two residual scaling coefficients; $e_{R}^{i,j}$ is the directional residual from the $i$-th frame image to the $j$-th frame image; $\Sigma_{R,i,j}$ is the covariance matrix obtained after adding the gyroscope; $\Sigma_{i,j}$ and $\Sigma_{i}$ respectively represent the covariance matrices of the pre-integration and reprojection errors; $\Lambda_{i}$ is the covariance matrix of the marginalized reprojection errors in the $i$-th frame image; $J_{k}$ is the Jacobian matrix of the reprojection errors; $r_{i}$ is the residual vector of the marginalized reprojection error in the $i$-th frame image; and $r_{i}^{\mathsf T}$ is its transpose.
Optionally, the obtaining third positioning uncertainty data according to the second motion residual includes: obtaining a second deviation covariance according to the second motion residual; and obtaining third positioning uncertainty data according to the second deviation covariance. The obtaining a second deviation covariance according to the second motion residual comprises:

$$E_{2}=\rho\Big(\big[e_{R}^{i,j}\ \ e_{v}^{i,j}\ \ e_{p}^{i,j}\big]\,\Sigma_{I,i,j}^{-1}\,\big[e_{R}^{i,j}\ \ e_{v}^{i,j}\ \ e_{p}^{i,j}\big]^{\mathsf T}\Big)+\rho\Big(r_{i}^{\mathsf T}\,\Lambda_{i}^{-1}\,r_{i}\Big)$$

wherein $E_{2}$ is the second deviation covariance; $e_{R}^{i,j}$, $e_{v}^{i,j}$ and $e_{p}^{i,j}$ respectively represent the residuals of the direction, the speed and the position between the $i$-th frame image and the $j$-th frame image; $r_{i}$, obtained by reprojection, is the residual vector of the marginalized reprojection error in the $i$-th frame image and $r_{i}^{\mathsf T}$ its transpose; $\rho$ represents the two residual scaling coefficients; and $\Sigma_{I,i,j}$ is the covariance matrix obtained after pre-integration of the gyroscope data and the acceleration data.
Optionally, the target neural network is constructed according to a PoseNet model, and the target neural network training process includes: inputting the samples into a pre-trained neural network to obtain the posterior probability distribution of each network weight; determining, according to the posterior probability distribution, the relative entropy between the approximate value of each network layer and the posterior probability distribution; and training the pre-trained neural network with the goal of minimizing the relative entropy between the approximate value and the posterior probability distribution, so as to obtain the target neural network.
According to a second aspect, an embodiment of the present invention provides a vehicle positioning device including: the first sensor data acquisition module is used for acquiring first sensor data; the first positioning uncertainty data determining module is used for obtaining first positioning uncertainty data according to the first sensor data; the second positioning uncertainty data determining module is used for acquiring second sensor data when the first positioning uncertainty data does not meet a first preset condition, and acquiring second positioning uncertainty data according to the second sensor data and the first sensor data; and the first positioning module is used for positioning the vehicle according to the first sensor data and the second sensor data when the second positioning uncertainty data meets a second preset condition.
According to a third aspect, an embodiment of the present invention provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the vehicle positioning method according to the first aspect or any implementation manner of the first aspect when the program is executed.
According to a fourth aspect, an embodiment of the present invention provides a storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the vehicle locating method according to the first aspect or any implementation manner of the first aspect.
The technical scheme of the invention has the following advantages:
According to the vehicle positioning method/device, the positioning uncertainty is first judged from the first sensor data, and the second sensor data is added only when the uncertainty does not meet the condition. Adding sensor information step by step reduces the computing power required of the vehicle; since each addition of sensor data is triggered only when the positioning uncertainty fails to meet its condition, the staged approach further reduces the required computing power while ensuring the positioning accuracy.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that other drawings can be derived from them by a person skilled in the art without inventive effort.
FIG. 1 is a flowchart of a specific example of a vehicle positioning method according to an embodiment of the present invention;
FIG. 2 is a schematic block diagram of a specific example of a vehicle positioning device in accordance with an embodiment of the present invention;
fig. 3 is a schematic block diagram of a specific example of an electronic device in an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely below with reference to the accompanying drawings, in which some, but not all, embodiments of the invention are shown. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
In the description of the present invention, it should be noted that the directions or positional relationships indicated by the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. are based on the directions or positional relationships shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless explicitly specified and limited otherwise, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be either fixedly connected, detachably connected, or integrally connected, for example; can be mechanically or electrically connected; the two components can be directly connected or indirectly connected through an intermediate medium, or can be communicated inside the two components, or can be connected wirelessly or in a wired way. The specific meaning of the above terms in the present invention will be understood in specific cases by those of ordinary skill in the art.
In addition, the technical features of the different embodiments of the present invention described below may be combined with each other as long as they do not collide with each other.
The embodiment provides a vehicle positioning method, as shown in fig. 1, including the following steps:
s101, acquiring first sensor data; the first sensor may be a visual sensor, such as a camera. The first sensor data may be image information photographed by a camera during the running of the vehicle.
S102, obtaining first positioning uncertainty data according to first sensor data;
Illustratively, the positioning uncertainty characterizes the degree of deviation between the vehicle position coordinates predicted through multi-source information fusion and the actual vehicle position coordinates during positioning, i.e., whether the predicted position falls within an acceptable accuracy range of the actual position after the vehicle position is predicted through multi-source information fusion.
The first positioning uncertainty data may be obtained from the first sensor data by inputting the image information captured by the camera into a target neural network. The target neural network may be constructed from a PoseNet model: the posterior probability distribution of the current network node weights is obtained from a sample set and the corresponding sample labels, and this distribution is approximated using variational inference to obtain the positioning error. After training on a large amount of data from the Cambridge Landmarks data set, the positioning error and the uncertainty show a strong correlation: the higher the positioning uncertainty, the larger the positioning error. Because the uncertainty has a strong linear relation with the positioning error, the first positioning uncertainty data can be obtained from the positioning error.
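As an illustrative sketch of this step (not the patented implementation), the uncertainty of a PoseNet-style network can be estimated with Monte-Carlo dropout; the model interface and all names below are assumptions:

```python
import torch

def first_positioning_uncertainty(model: torch.nn.Module,
                                  image: torch.Tensor,
                                  n_samples: int = 32) -> float:
    """Monte-Carlo dropout estimate of the positioning uncertainty.

    `model` is assumed to be a PoseNet-style network returning a tuple
    (x_hat, q_hat) of position and heading; dropout layers stay active
    at test time so repeated passes sample the approximate posterior q(W).
    """
    model.train()  # keep dropout active for MC sampling
    with torch.no_grad():
        xs = torch.stack([model(image.unsqueeze(0))[0].squeeze(0)
                          for _ in range(n_samples)])
    # The spread of the sampled positions is the uncertainty measure;
    # the embodiment correlates this value with the positioning error.
    return xs.var(dim=0).sum().item()
```

In the staged scheme described next, this value is what gets compared against the first preset condition.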
S103, when the first positioning uncertainty data does not meet the first preset condition, acquiring second sensor data, and acquiring the second positioning uncertainty data according to the second sensor data and the first sensor data;
illustratively, the first preset condition may be that the first positioning uncertainty data is less than or equal to 0.12 (corresponding to a positioning error of 8 meters). The second sensor may be a gyroscope or an accelerometer, and because the gyroscope is accurate in a shorter time and has drift in a longer time, and the accelerometer is opposite, the method of acquiring gyroscope data in the vehicle-mounted IMU may be to acquire gyroscope data in this embodiment using the second sensor as a gyroscope for illustration.
The second positioning uncertainty data may be obtained according to the second sensor data and the first sensor data by:
First, data preprocessing is carried out, and the coordinate system corresponding to the IMU is converted into the camera coordinate system. To estimate the pose, sparse key point information is needed; the key points are derived from preprocessing the image, specifically as follows:
for preprocessing of the image, the re-projection error of the key point is mainly calculated, and the 3D position information of the kth road sign is set as X k (the position information can be derived from GPS), and the coordinates of the ith frame image in the two-dimensional coordinate system corresponding to the position information areMinimizing reprojection error->
Obtaining optimal camera direction pose parameters
Where pi () is an operator that projects a 3D point onto an image.
According to the above formula, obtaining the position and direction of the IMU in the ith frame image includes:
wherein,respectively representing the direction pose parameters of the camera and the IMU in the ith frame image, R CB For the transformation from the IMU coordinate system to the rotation matrix of the camera coordinate system>The positions of the camera and IMU at the i-th frame image, C p B and (3) converting the position coordinates of the camera into metric units for the position of the IMU in the camera coordinate system, wherein s is a scale factor.
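A minimal numerical sketch of this camera-to-IMU conversion, assuming the calibration quantities $R_{CB}$, ${}_{C}p_{B}$ and the scale factor $s$ are known (NumPy, illustrative names):

```python
import numpy as np

def imu_pose_from_camera(R_WC: np.ndarray, p_WC: np.ndarray,
                         R_CB: np.ndarray, p_CB: np.ndarray,
                         s: float):
    """Convert the camera pose (R_WC, p_WC) of frame i to the IMU pose.

    R_CB : rotation matrix from the IMU coordinate system to the camera
           coordinate system (calibration).
    p_CB : position of the IMU expressed in the camera coordinate system.
    s    : scale factor converting camera position units to metric units.
    """
    R_WB = R_WC @ R_CB              # direction of the IMU in frame i
    p_WB = R_WC @ p_CB + s * p_WC   # position of the IMU in frame i
    return R_WB, p_WB
```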
And secondly, obtaining a first motion relation in the running process of the vehicle according to the second sensor data and the first sensor data, wherein the first motion relation comprises a motion direction.
$$R_{WB}^{i+1}=R_{WB}^{i}\,\Delta R_{i,i+1}\,\mathrm{Exp}\big(J_{\Delta R}^{g}\,b^{g}\big)$$

wherein $R_{WB}^{i+1}$ is the direction of the IMU in the $(i+1)$-th frame; $R_{WB}^{i}$ is the direction of the IMU in the $i$-th frame; $\Delta R_{i,i+1}$ is the pre-integrated prediction of the direction of the IMU; $b^{g}$ denotes the deviation (bias) of the gyroscope; $J_{\Delta R}^{g}$ is a Jacobian matrix, which can be obtained by pre-integration; and $\mathrm{Exp}(\cdot)$ is the exponential map of the Lie group $SO(3)$.
Thirdly, according to the first motion relation, obtaining a first motion residual error;
$$e_{R}^{i,i+1}=\mathrm{Log}\Big(\big(\Delta R_{i,i+1}\,\mathrm{Exp}(J_{\Delta R}^{g}\,b^{g})\big)^{\mathsf T}\,\big(R_{WB}^{i}\big)^{\mathsf T}\,R_{WB}^{i+1}\Big)$$

wherein $e_{R}^{i,i+1}$ is the directional residual (the first motion residual) from the $i$-th frame image to the $(i+1)$-th frame image; $\Delta R_{i,i+1}$ is the pre-integrated prediction of the direction of the IMU; $J_{\Delta R}^{g}$ is a Jacobian matrix, which can be obtained by pre-integration; $b^{g}$ denotes the deviation of the gyroscope; $R_{WB}^{i+1}$ and $R_{WB}^{i}$ are the directions of the IMU in the $(i+1)$-th and $i$-th frames; $(\cdot)^{\mathsf T}$ denotes the transpose; and $\mathrm{Exp}(\cdot)$ and $\mathrm{Log}(\cdot)$ are the exponential and logarithm maps of the Lie group $SO(3)$.
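For illustration, the $SO(3)$ maps and the resulting directional residual can be sketched as follows (NumPy; the helper names are assumptions, not the embodiment's code):

```python
import numpy as np

def so3_exp(phi: np.ndarray) -> np.ndarray:
    """Exponential map of SO(3) (Rodrigues' formula)."""
    theta = np.linalg.norm(phi)
    if theta < 1e-9:
        return np.eye(3)
    a = phi / theta
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def so3_log(R: np.ndarray) -> np.ndarray:
    """Logarithm map of SO(3), returning a rotation vector."""
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < 1e-9:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta * w / (2.0 * np.sin(theta))

def direction_residual(R_i, R_ip1, dR_pre, J_g, b_g):
    """First motion residual e_R between frames i and i+1."""
    corrected = dR_pre @ so3_exp(J_g @ b_g)  # bias-corrected pre-integration
    return so3_log(corrected.T @ R_i.T @ R_ip1)
```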
Then, obtaining second positioning uncertainty data according to the first motion residual, wherein the second positioning uncertainty data comprises: step 1, obtaining a first deviation covariance (a deviation covariance of visual information and a gyroscope) according to a first motion residual; and step 2, obtaining second positioning uncertainty data according to the first deviation covariance.
The first deviation covariance (the deviation covariance of the visual information and the gyroscope) in step 1 is solved as follows:

$$E_{1}=\rho\Big(\big(e_{R}^{i,j}\big)^{\mathsf T}\,\Sigma_{R,i,j}^{-1}\,e_{R}^{i,j}\Big)+\rho\Big(r_{i}^{\mathsf T}\,\Lambda_{i}^{-1}\,r_{i}\Big),\qquad \Lambda_{i}=\Big(\sum_{k}J_{k}^{\mathsf T}\,\Sigma_{i}^{-1}\,J_{k}\Big)^{-1}\qquad(1)$$

wherein $E_{1}$ is the deviation covariance of the visual information and the gyroscope (the first deviation covariance); $e_{R}^{i,j}$ is the directional residual from the $i$-th frame image to the $j$-th frame image and $(e_{R}^{i,j})^{\mathsf T}$ its transpose; $\Sigma_{R,i,j}$ is the covariance matrix obtained after adding the gyroscope; $\rho$ denotes the two residual scaling coefficients and can be expressed by the Huber function; $\Lambda_{i}$ is the covariance matrix of the marginalized reprojection errors in the $i$-th frame image; $J_{k}$ is the Jacobian matrix of the reprojection error; $\Sigma_{i,j}$ and $\Sigma_{i}$ respectively represent the covariance matrices of the pre-integration and reprojection errors; and $r_{i}$ is the residual vector of the marginalized reprojection error in the $i$-th frame image, evaluated at the linearization point, i.e., at the direction and position of the camera at the $i$-th frame image given by the optimal camera pose parameters.
In step 2, in order to quantify the uncertainty, the trace of the deviation covariance of the visual information and the gyroscope (the first deviation covariance) for the last frame image may be calculated, and the mean square error (MSE) with respect to the expected value may be used as the second positioning uncertainty data.
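A minimal sketch of this quantification step, assuming the deviation covariance of the last frame is available as a matrix:

```python
import numpy as np

def second_positioning_uncertainty(E1_last: np.ndarray) -> float:
    """Quantify uncertainty as the trace of the first deviation covariance
    of the last frame image, i.e. the mean square error about the mean."""
    return float(np.trace(E1_last))
```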
And S104, when the second positioning uncertainty data meets a second preset condition, positioning the vehicle according to the first sensor data and the second sensor data.
Illustratively, the second preset condition may be that the second positioning uncertainty data is less than or equal to 0.1 (corresponding to a positioning error of 5 meters). This embodiment does not limit the second preset condition, which may be determined by those skilled in the art as needed. The vehicle positioning according to the first sensor data and the second sensor data may be performed by minimizing equation (1) to obtain the positioning information of the vehicle.
According to the vehicle positioning method provided by this embodiment, the positioning uncertainty is first judged from the first sensor data, and the second sensor data is added only when the uncertainty does not meet the condition. Adding sensor information step by step reduces the computing power required of the vehicle; since each addition of sensor data is triggered only when the positioning uncertainty fails to meet its condition, the staged approach further reduces the required computing power while ensuring the positioning accuracy.
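To make the staged flow of S101–S104 concrete, the decision logic can be sketched as follows; the thresholds come from the examples above, while the sensor interface and the `locate_from_*` helpers are purely illustrative assumptions:

```python
FIRST_THRESHOLD = 0.12   # corresponds to a positioning error of about 8 m
SECOND_THRESHOLD = 0.10  # corresponds to a positioning error of about 5 m

def locate_vehicle(camera, imu, net):
    image = camera.read()                              # S101
    u1 = first_positioning_uncertainty(net, image)     # S102
    if u1 <= FIRST_THRESHOLD:                          # first preset condition
        return locate_from_vision(net, image)          # vision alone suffices
    gyro = imu.read_gyroscope()                        # S103: add 2nd sensor
    u2 = uncertainty_from_vision_and_gyro(image, gyro)
    if u2 <= SECOND_THRESHOLD:                         # second preset condition
        return locate_from_vision_and_gyro(image, gyro)   # S104: minimize (1)
    accel = imu.read_accelerometer()                   # add 3rd sensor
    return locate_from_all(image, gyro, accel)         # minimize (2)
```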
As an optional implementation manner of this embodiment, the vehicle positioning method further includes:
firstly, when the second positioning uncertainty data does not meet a second preset condition, acquiring third sensor data;
the third sensor may be a gyroscope or an accelerometer, and the third sensor is taken as an accelerometer in this embodiment, and the manner of acquiring accelerometer data may be to acquire accelerometer data in the vehicle-mounted IMU.
And secondly, obtaining third positioning uncertainty data and positioning the vehicle according to the third sensor data, the second sensor data and the first sensor data.
For example, the way of deriving the third positioning uncertainty data from the third sensor data, the second sensor data and the first sensor data may comprise the steps of:
the first step: and obtaining a second motion relation in the running process of the vehicle according to the second sensor data, the third sensor data and the first sensor data, wherein the motion relation comprises a direction, a speed and a position.
$$R_{WB}^{i+1}=R_{WB}^{i}\,\Delta R_{i,i+1}\,\mathrm{Exp}\big(J_{\Delta R}^{g}\,b^{g}\big)$$
$$v_{WB}^{i+1}=v_{WB}^{i}+g_{W}\,\Delta t+R_{WB}^{i}\big(\Delta v_{i,i+1}+J_{\Delta v}^{g}\,b^{g}+J_{\Delta v}^{a}\,b^{a}\big)$$
$${}_{W}p_{B}^{i+1}={}_{W}p_{B}^{i}+v_{WB}^{i}\,\Delta t+\tfrac{1}{2}\,g_{W}\,\Delta t^{2}+R_{WB}^{i}\big(\Delta p_{i,i+1}+J_{\Delta p}^{g}\,b^{g}+J_{\Delta p}^{a}\,b^{a}\big)$$

wherein $R_{WB}^{i+1}$ and $R_{WB}^{i}$ are the directions of the IMU in the $(i+1)$-th and $i$-th frames; $\Delta R_{i,i+1}$ is the pre-integrated prediction of the direction of the IMU; $b^{g}$ denotes the deviation of the gyroscope and $b^{a}$ the deviation of the accelerometer; the $J$ terms are Jacobian matrices, which can be obtained by pre-integration; $\mathrm{Exp}(\cdot)$ is the exponential map of the Lie group $SO(3)$; $v_{WB}^{i+1}$ and $v_{WB}^{i}$ are the speeds of the IMU in the $(i+1)$-th and $i$-th frames; $g_{W}$ represents the gravity vector; $\Delta t$ represents the time difference between the $i$-th frame and the $(i+1)$-th frame; $\Delta v_{i,i+1}$ represents the pre-integrated prediction of the speed of the IMU; ${}_{W}p_{B}^{i+1}$ and ${}_{W}p_{B}^{i}$ are the positions of the IMU in the $(i+1)$-th and $i$-th frames; and $\Delta p_{i,i+1}$ represents the pre-integrated prediction of the position of the IMU.
Secondly, according to a second motion relation, obtaining a second motion residual error comprising residual errors of direction, speed and position:
$$e_{R}^{i,i+1}=\mathrm{Log}\Big(\big(\Delta R_{i,i+1}\,\mathrm{Exp}(J_{\Delta R}^{g}\,b^{g})\big)^{\mathsf T}\,\big(R_{WB}^{i}\big)^{\mathsf T}\,R_{WB}^{i+1}\Big)$$
$$e_{v}^{i,i+1}=\big(R_{WB}^{i}\big)^{\mathsf T}\big(v_{WB}^{i+1}-v_{WB}^{i}-g_{W}\,\Delta t\big)-\big(\Delta v_{i,i+1}+J_{\Delta v}^{g}\,b^{g}+J_{\Delta v}^{a}\,b^{a}\big)$$
$$e_{p}^{i,i+1}=\big(R_{WB}^{i}\big)^{\mathsf T}\big({}_{W}p_{B}^{i+1}-{}_{W}p_{B}^{i}-v_{WB}^{i}\,\Delta t-\tfrac{1}{2}\,g_{W}\,\Delta t^{2}\big)-\big(\Delta p_{i,i+1}+J_{\Delta p}^{g}\,b^{g}+J_{\Delta p}^{a}\,b^{a}\big)$$

wherein $e_{R}^{i,i+1}$, $e_{v}^{i,i+1}$ and $e_{p}^{i,i+1}$ are respectively the direction, speed and position residuals from the $i$-th frame image to the $(i+1)$-th frame image.
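Continuing the NumPy sketch from the gyroscope-only case, the speed and position residuals can be computed as (names illustrative):

```python
import numpy as np

def velocity_residual(R_i, v_i, v_ip1, g_w, dt,
                      dv_pre, J_v_g, b_g, J_v_a, b_a):
    """Speed residual e_v between frames i and i+1."""
    return R_i.T @ (v_ip1 - v_i - g_w * dt) - (
        dv_pre + J_v_g @ b_g + J_v_a @ b_a)

def position_residual(R_i, p_i, p_ip1, v_i, g_w, dt,
                      dp_pre, J_p_g, b_g, J_p_a, b_a):
    """Position residual e_p between frames i and i+1."""
    return R_i.T @ (p_ip1 - p_i - v_i * dt - 0.5 * g_w * dt**2) - (
        dp_pre + J_p_g @ b_g + J_p_a @ b_a)
```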
Thirdly, obtaining third positioning uncertainty data according to the second motion residual, wherein the third positioning uncertainty data comprises the following steps: step 1, obtaining a second deviation covariance (deviation covariance of visual information, a gyroscope and an accelerometer) according to the second motion residual, and step 2, obtaining third positioning uncertainty data according to the second deviation covariance.
The solving method of the second deviation covariance (the deviation covariance of the visual information, the gyroscope and the accelerometer) in the step 1 comprises the following steps:
$$E_{2}=\rho\Big(\big[e_{R}^{i,j}\ \ e_{v}^{i,j}\ \ e_{p}^{i,j}\big]\,\Sigma_{I,i,j}^{-1}\,\big[e_{R}^{i,j}\ \ e_{v}^{i,j}\ \ e_{p}^{i,j}\big]^{\mathsf T}\Big)+\rho\Big(r_{i}^{\mathsf T}\,\Lambda_{i}^{-1}\,r_{i}\Big)\qquad(2)$$

wherein $E_{2}$ is the deviation covariance of the visual information, the gyroscope and the accelerometer (the second deviation covariance); $e_{R}^{i,j}$, $e_{v}^{i,j}$ and $e_{p}^{i,j}$ respectively represent the residuals of the direction, the speed and the position between the $i$-th frame image and the $j$-th frame image; $r_{i}$, obtained by reprojection, is the residual vector of the marginalized reprojection error in the $i$-th frame image and $r_{i}^{\mathsf T}$ its transpose; $\rho$ represents the two residual scaling coefficients; $\Sigma_{I,i,j}$ is the covariance matrix obtained after pre-integration of the gyroscope data and the acceleration data; and $\Lambda_{i}$ is the covariance matrix of the marginalized reprojection errors in the $i$-th frame image.
In step 2, in order to quantify the uncertainty, the trace of the deviation covariance of the visual information, the gyroscope and the accelerometer (the second deviation covariance) for the last frame image may be calculated, and the mean square error (MSE) with respect to the expected value may be used as the third positioning uncertainty data.
Based on the third sensor data, the second sensor data, and the first sensor data, the vehicle location may be performed in a manner that minimizes equation (2) to obtain location information. According to the vehicle positioning method, the accuracy of positioning is improved by gradually adding various sensor data.
As an optional implementation manner of this embodiment, the vehicle positioning method further includes: and when the first positioning uncertainty data meets a first preset condition, positioning the vehicle according to the first sensor data. The vehicle positioning may be performed according to the first sensor data by inputting the first sensor data into the target neural network, thereby obtaining the vehicle positioning information.
As an optional implementation manner of this embodiment, the target neural network is constructed according to a PoseNet model, and the target neural network training process includes: inputting the samples into a pre-trained neural network to obtain the posterior probability distribution of each network weight; determining, according to the posterior probability distribution, the relative entropy between the approximate value of each network layer and the posterior probability distribution; and training the pre-trained neural network with the goal of minimizing the relative entropy between the approximate value and the posterior probability distribution, so as to obtain the target neural network.
Illustratively, the image information is acquired by the on-board camera and input into the constructed PoseNet neural network for training. The model defines the motion state of the vehicle by x (the vehicle position) and q (the vehicle heading direction), and the loss function of the network is:

$$\mathcal{L}=\big\|\hat{x}-x\big\|_{2}+\theta\,\Big\|\hat{q}-\frac{q}{\|q\|}\Big\|_{2}$$

wherein $\hat{x}$ and $\hat{q}$ are the predicted position and heading, and $\theta$ is a parameter that balances the position information and the direction information so that both are optimized simultaneously during training. The model is trained by stochastic gradient descent, so that a good result can be obtained even with a small number of samples.
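A sketch of this loss in PyTorch; the balance weight is written as `theta` to match the formula above, and its default value is an assumption:

```python
import torch

def posenet_loss(x_hat, x, q_hat, q, theta: float = 500.0):
    """PoseNet-style loss: position term plus weighted orientation term.

    theta balances position against orientation (unit quaternion); its
    value here is an illustrative assumption.
    """
    pos_err = torch.norm(x_hat - x, p=2, dim=-1)
    ori_err = torch.norm(q_hat - q / torch.norm(q, dim=-1, keepdim=True),
                         p=2, dim=-1)
    return (pos_err + theta * ori_err).mean()
```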
The posterior probability distribution of the current network node weights is obtained from the data set obtained through training and the corresponding labels; dropout sampling is used during training, and the uncertainty of the current vehicle position is obtained through the Bayesian model.
First, the posterior probability distribution of the current network weights W is obtained from the data set X and the labels Y obtained through training, namely:
p(W|X,Y);
the variance inference is then applied to minimize the relative entropy between the approximation q (W) and the posterior probability distribution:
KL(q(W)||p(W|X,Y));
wherein the approximation of each layer satisfies:
$$b_{i,j}\sim\mathrm{Bernoulli}(p_{i}),\qquad j=1,2,\ldots,n-1$$
$$W_{i}=M_{i}\,\mathrm{diag}(b_{i})$$
wherein $M_{i}$ is the variational coefficient (the weight matrix of the $i$-th layer), and the approximate distribution of each layer satisfies the Bernoulli distribution.
Finally, the objective loss function is minimized; this objective is exactly the relative entropy between the approximation and the posterior probability distribution.
The detailed pseudocode is as follows:
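A minimal Python-style sketch of that procedure, reconstructed from the steps described above (PoseNet-style model with dropout kept active; every identifier and hyperparameter is an illustrative assumption):

```python
import torch

def train_bayesian_posenet(model, loader, epochs=50, lr=1e-4, theta=500.0):
    """Train the PoseNet-style network with stochastic gradient descent;
    dropout layers remain in the network so that test-time dropout
    sampling approximates the posterior p(W | X, Y), i.e. minimizing
    the loss corresponds to minimizing KL(q(W) || p(W | X, Y))."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for image, x, q in loader:        # data set X with labels Y
            x_hat, q_hat = model(image)
            loss = posenet_loss(x_hat, x, q_hat, q, theta)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```

At inference time, the `first_positioning_uncertainty` sketch given earlier performs the dropout sampling over the trained model.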
the present embodiment provides a vehicle positioning device, as shown in fig. 2, including:
a first sensor data acquisition module 201 for acquiring first sensor data; reference is specifically made to the corresponding parts of the above method embodiments, and no further description is given here.
A first positioning uncertainty data determining module 202, configured to obtain first positioning uncertainty data according to the first sensor data; reference is specifically made to the corresponding parts of the above method embodiments, and no further description is given here.
A second positioning uncertainty data determining module 203, configured to obtain second sensor data when the first positioning uncertainty data does not meet a first preset condition, and obtain second positioning uncertainty data according to the second sensor data and the first sensor data; reference is specifically made to the corresponding parts of the above method embodiments, and no further description is given here.
And the first positioning module 204 is configured to perform vehicle positioning according to the first sensor data and the second sensor data when the second positioning uncertainty data meets a second preset condition. Reference is specifically made to the corresponding parts of the above method embodiments, and no further description is given here.
As an alternative implementation of the present embodiment, the vehicle positioning device further includes:
the third sensor data determining module is used for acquiring third sensor data when the second positioning uncertainty data does not meet a second preset condition; reference is specifically made to the corresponding parts of the above method embodiments, and no further description is given here.
And the second positioning module is used for obtaining third positioning uncertainty data and positioning the vehicle according to the third sensor data, the second sensor data and the first sensor data. Reference is specifically made to the corresponding parts of the above method embodiments, and no further description is given here.
As an alternative implementation of this embodiment, the method further includes: and the third positioning module is used for positioning the vehicle according to the first sensor data when the first positioning uncertainty data meets a first preset condition. Reference is specifically made to the corresponding parts of the above method embodiments, and no further description is given here.
As an alternative implementation of this embodiment, the first sensor data is a vision sensor, and the first positioning uncertainty data determining module 202 includes: and the first positioning uncertainty data determining sub-module is used for inputting the image data obtained by the vision sensor into a target neural network to obtain first positioning uncertainty data. Reference is specifically made to the corresponding parts of the above method embodiments, and no further description is given here.
As an optional implementation manner of this embodiment, the second sensor data is a gyroscope, and the second positioning uncertainty data determining module 203 includes:
the first motion relation determining module is used for obtaining a first motion relation in the running process of the vehicle according to the second sensor data and the first sensor data; reference is specifically made to the corresponding parts of the above method embodiments, and no further description is given here.
The first motion residual determination module is used for obtaining a first motion residual according to the first motion relation; reference is specifically made to the corresponding parts of the above method embodiments, and no further description is given here.
And the second positioning uncertainty data sub-module is used for obtaining second positioning uncertainty data according to the first motion residual error. Reference is specifically made to the corresponding parts of the above method embodiments, and no further description is given here.
As an alternative implementation manner of this embodiment, the second positioning module includes:
the second motion relation determining module is used for obtaining a second motion relation in the running process of the vehicle according to the second sensor data, the third sensor data and the first sensor data; reference is specifically made to the corresponding parts of the above method embodiments, and no further description is given here.
The second motion residual determination module is used for obtaining a second motion residual according to the second motion relation; reference is specifically made to the corresponding parts of the above method embodiments, and no further description is given here.
And the second positioning sub-module is used for obtaining third positioning uncertainty data according to the second motion residual error. Reference is specifically made to the corresponding parts of the above method embodiments, and no further description is given here.
As an alternative implementation of this embodiment, the second positioning uncertainty data sub-module includes:
the first deviation covariance determining module is used for obtaining a first deviation covariance according to the first motion residual; reference is specifically made to the corresponding parts of the above method embodiments, and no further description is given here.
The second positioning uncertainty data calculation module is used for obtaining second positioning uncertainty data according to the first deviation covariance; reference is specifically made to the corresponding parts of the above method embodiments, and no further description is given here.
The first bias covariance determination module performs operations comprising:
$$E_{1}=\rho\Big(\big(e_{R}^{i,j}\big)^{\mathsf T}\,\Sigma_{R,i,j}^{-1}\,e_{R}^{i,j}\Big)+\rho\Big(r_{i}^{\mathsf T}\,\Lambda_{i}^{-1}\,r_{i}\Big),\qquad \Lambda_{i}=\Big(\sum_{k}J_{k}^{\mathsf T}\,\Sigma_{i}^{-1}\,J_{k}\Big)^{-1}$$

wherein $E_{1}$ is the first deviation covariance; $e_{R}^{i,j}$ is the directional residual from the $i$-th frame image to the $j$-th frame image; $\Sigma_{R,i,j}$ is the covariance matrix obtained after adding the gyroscope; and $r_{i}$ is the residual vector of the marginalized reprojection error in the $i$-th frame image, with $r_{i}^{\mathsf T}$ its transpose. Reference is specifically made to the corresponding parts of the above method embodiments, and no further description is given here.
As an alternative implementation of this embodiment, the second positioning sub-module includes:
the second deviation covariance determining module is used for obtaining a second deviation covariance according to the second motion residual; reference is specifically made to the corresponding parts of the above method embodiments, and no further description is given here.
The third positioning uncertainty data calculation module is used for obtaining third positioning uncertainty data according to the second deviation covariance; reference is specifically made to the corresponding parts of the above method embodiments, and no further description is given here.
A second bias covariance determination module, comprising:
$$E_{2}=\rho\Big(\big[e_{R}^{i,j}\ \ e_{v}^{i,j}\ \ e_{p}^{i,j}\big]\,\Sigma_{I,i,j}^{-1}\,\big[e_{R}^{i,j}\ \ e_{v}^{i,j}\ \ e_{p}^{i,j}\big]^{\mathsf T}\Big)+\rho\Big(r_{i}^{\mathsf T}\,\Lambda_{i}^{-1}\,r_{i}\Big)$$

wherein $E_{2}$ is the second deviation covariance; $e_{R}^{i,j}$, $e_{v}^{i,j}$ and $e_{p}^{i,j}$ respectively represent the residuals of the direction, the speed and the position between the $i$-th frame image and the $j$-th frame image; $r_{i}$, obtained by reprojection, is the residual vector of the marginalized reprojection error in the $i$-th frame image and $r_{i}^{\mathsf T}$ its transpose; $\rho$ represents the two residual scaling coefficients; and $\Sigma_{I,i,j}$ is the covariance matrix obtained after pre-integration of the gyroscope data and the acceleration data. Reference is specifically made to the corresponding parts of the above method embodiments, and no further description is given here.
As an optional implementation manner of this embodiment, the target neural network is constructed according to a pousent model, and the first positioning uncertainty data determining submodule includes:
the posterior probability distribution determining module is used for inputting the samples into the pre-trained neural network to obtain posterior probability distribution of each network weight; reference is specifically made to the corresponding parts of the above method embodiments, and no further description is given here.
The relative entropy determining module is used for determining the relative entropy between the approximate value of each layer of network and the posterior probability distribution according to the posterior probability distribution; reference is specifically made to the corresponding parts of the above method embodiments, and no further description is given here.
And the training module is used for training the pre-trained neural network by taking the relative entropy between the minimized approximate value and the posterior probability distribution as a target, so as to obtain a target neural network. Reference is specifically made to the corresponding parts of the above method embodiments, and no further description is given here.
Embodiments of the present application also provide an electronic device which, as shown in fig. 3, includes a processor 310 and a memory 320; the processor 310 and the memory 320 may be connected by a bus or by other means.
The processor 310 may be a central processing unit (Central Processing Unit, CPU). The processor 310 may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSPs), application specific integrated circuits (Application Specific Integrated Circuit, ASICs), field programmable gate arrays (Field-Programmable Gate Array, FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or a combination of the above.
The memory 320 is used as a non-transitory computer readable storage medium for storing non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the vehicle positioning method in the embodiment of the invention. The processor executes various functional applications of the processor and data processing by running non-transitory software programs, instructions, and modules stored in memory.
Memory 320 may include a storage program area that may store an operating system, at least one application program required for functionality, and a storage data area; the storage data area may store data created by the processor, etc. In addition, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, memory 320 may optionally include memory located remotely from the processor, which may be connected to the processor via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory 320, which when executed by the processor 310, performs the vehicle positioning method in the embodiment shown in fig. 1.
The details of the above electronic device may be understood correspondingly with respect to the corresponding related descriptions and effects in the embodiment shown in fig. 1, which are not repeated herein.
The present embodiment also provides a computer storage medium storing computer-executable instructions that can perform the vehicle positioning method of any of the above method embodiments. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the storage medium may also comprise a combination of the above kinds of memories.
It is apparent that the above embodiments are merely examples given for clarity of illustration and are not intended to limit the implementations. Other variations or modifications based on the above description will be apparent to those of ordinary skill in the art; it is neither necessary nor possible to enumerate all implementations here. Obvious variations or modifications derived from the above remain within the protection scope of the invention.
Claims (8)
1. A vehicle positioning method, characterized by comprising the steps of:
acquiring first sensor data;
obtaining first positioning uncertainty data according to the first sensor data;
when the first positioning uncertainty data meets a first preset condition, vehicle positioning is carried out according to the first sensor data;
when the first positioning uncertainty data does not meet a first preset condition, second sensor data are obtained, and second positioning uncertainty data are obtained according to the second sensor data and the first sensor data;
when the second positioning uncertainty data meets a second preset condition, vehicle positioning is carried out according to the first sensor data and the second sensor data;
the first sensor data is a visual sensor, and the obtaining first positioning uncertainty data according to the first sensor data includes: inputting the image data obtained by the vision sensor into a target neural network to obtain first positioning uncertainty data;
the second sensor data is a gyroscope, and the second positioning uncertainty data is obtained according to the second sensor data and the first sensor data, including:
obtaining a first motion relation in the running process of the vehicle according to the second sensor data and the first sensor data;
obtaining a first motion residual error according to the first motion relation;
obtaining second positioning uncertainty data according to the first motion residual error;
obtaining second positioning uncertainty data according to the first motion residual error comprises the following steps:
obtaining a first deviation covariance according to the first motion residual;
obtaining second positioning uncertainty data according to the first deviation covariance;
the obtaining a first deviation covariance according to the motion residual comprises the following steps:
$$E_{1}=\rho\Big(\big(e_{R}^{i,j}\big)^{\mathsf T}\,\Sigma_{R,i,j}^{-1}\,e_{R}^{i,j}\Big)+\rho\Big(r_{i}^{\mathsf T}\,\Lambda_{i}^{-1}\,r_{i}\Big),\qquad \Lambda_{i}=\Big(\sum_{k}J_{k}^{\mathsf T}\,\Sigma_{i}^{-1}\,J_{k}\Big)^{-1}$$

wherein $E_{1}$ is the first deviation covariance; $\rho$ denotes the two residual scaling coefficients; $e_{R}^{i,j}$ is the directional residual from the $i$-th frame image to the $j$-th frame image; $\Sigma_{R,i,j}$ is the covariance matrix obtained after adding the gyroscope; $\Sigma_{i,j}$ and $\Sigma_{i}$ respectively represent the covariance matrices of the pre-integration and reprojection errors; $\Lambda_{i}$ is the covariance matrix of the marginalized reprojection errors in the $i$-th frame image; $J_{k}$ is the Jacobian matrix of the reprojection errors; $r_{i}$ is the residual vector of the marginalized reprojection error in the $i$-th frame image; and $r_{i}^{\mathsf T}$ is its transpose.
2. The method as recited in claim 1, further comprising:
when the second positioning uncertainty data does not meet a second preset condition, acquiring third sensor data;
and obtaining third positioning uncertainty data and positioning the vehicle according to the third sensor data, the second sensor data and the first sensor data.
3. The method of claim 2, wherein the third sensor data is an accelerometer, and the obtaining third positioning uncertainty data from the third sensor data, the second sensor data, and the first sensor data comprises:
obtaining a second motion relation in the running process of the vehicle according to the second sensor data, the third sensor data and the first sensor data;
obtaining a second motion residual error according to the second motion relation;
and obtaining third positioning uncertainty data according to the second motion residual error.
4. A method according to claim 3, wherein said deriving third positioning uncertainty data from said second motion residual comprises:
obtaining a second deviation covariance according to the second motion residual;
obtaining third positioning uncertainty data according to the second deviation covariance;
obtaining a second deviation covariance according to the second motion residual, wherein the second deviation covariance comprises the following steps:
$$E_{2}=\rho\Big(\big[e_{R}^{i,j}\ \ e_{v}^{i,j}\ \ e_{p}^{i,j}\big]\,\Sigma_{I,i,j}^{-1}\,\big[e_{R}^{i,j}\ \ e_{v}^{i,j}\ \ e_{p}^{i,j}\big]^{\mathsf T}\Big)+\rho\Big(r_{i}^{\mathsf T}\,\Lambda_{i}^{-1}\,r_{i}\Big)$$

wherein $E_{2}$ is the second deviation covariance; $e_{R}^{i,j}$, $e_{v}^{i,j}$ and $e_{p}^{i,j}$ respectively represent the residuals of the direction, the speed and the position between the $i$-th frame image and the $j$-th frame image; $r_{i}$, obtained by reprojection, is the residual vector of the marginalized reprojection error in the $i$-th frame image and $r_{i}^{\mathsf T}$ its transpose; $\rho$ represents the two residual scaling coefficients; and $\Sigma_{I,i,j}$ is the covariance matrix obtained after pre-integration of the gyroscope data and the acceleration data.
5. The method of claim 1, wherein the target neural network is constructed according to a PoseNet model, and wherein the target neural network training process comprises:
inputting the sample into a pre-trained neural network to obtain posterior probability distribution of each network weight;
determining the relative entropy between the approximate value of each layer of network and the posterior probability distribution according to the posterior probability distribution;
and training the pre-trained neural network by taking the relative entropy between the minimized approximate value and the posterior probability distribution as a target, so as to obtain the target neural network.
6. A vehicle positioning device, characterized by comprising:
the first sensor data acquisition module is used for acquiring first sensor data;
the first positioning uncertainty data determining module is used for obtaining first positioning uncertainty data according to the first sensor data;
when the first positioning uncertainty data meets a first preset condition, vehicle positioning is carried out according to the first sensor data;
the second positioning uncertainty data determining module is used for acquiring second sensor data when the first positioning uncertainty data does not meet a first preset condition, and acquiring second positioning uncertainty data according to the second sensor data and the first sensor data;
the first positioning module is used for positioning the vehicle according to the first sensor data and the second sensor data when the second positioning uncertainty data meets a second preset condition;
the first sensor data is a visual sensor, and the first positioning uncertainty data determining module comprises: the first positioning uncertainty data determining sub-module is used for inputting the image data obtained by the vision sensor into a target neural network to obtain first positioning uncertainty data;
the second sensor data is a gyroscope, and the second positioning uncertainty data determining module comprises:
the first motion relation determining module is used for obtaining a first motion relation in the running process of the vehicle according to the second sensor data and the first sensor data;
the first motion residual determination module is used for obtaining a first motion residual according to the first motion relation;
the second positioning uncertainty data sub-module is used for obtaining second positioning uncertainty data according to the first motion residual error;
obtaining second positioning uncertainty data according to the first motion residual error comprises the following steps:
obtaining a first deviation covariance according to the first motion residual;
obtaining second positioning uncertainty data according to the first deviation covariance;
the obtaining a first deviation covariance according to the motion residual comprises the following steps:
wherein,for the first bias covariance ++>For two residual scaling factors, < >>Directional residual error expressed as i-th frame image to j-th frame image,/and (x)>Is covariance matrix obtained after adding gyroscope, < ->And +.>Covariance matrix representing pre-fusion and re-projection errors, respectively,>covariance matrix representing marginalized reprojection errors in the ith frame image, wherein +.>Represented is a jacobian matrix of reprojection errors,>residual vector representing marginalized reprojection error in the ith frame image, +.>Is->Transpose of->Is->Is a transpose of (a).
7. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the vehicle localization method of any one of claims 1-5 when executing the program.
8. A storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the vehicle locating method of any of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110305408.XA CN113155121B (en) | 2021-03-22 | 2021-03-22 | Vehicle positioning method and device and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110305408.XA CN113155121B (en) | 2021-03-22 | 2021-03-22 | Vehicle positioning method and device and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113155121A (en) | 2021-07-23
CN113155121B (en) | 2024-04-02
Family
ID=76887947
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110305408.XA Active CN113155121B (en) | 2021-03-22 | 2021-03-22 | Vehicle positioning method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113155121B (en) |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10295651B2 (en) * | 2016-09-21 | 2019-05-21 | Pinhas Ben-Tzvi | Linear optical sensor arrays (LOSA) tracking system for active marker based 3D motion tracking |
US10317214B2 (en) * | 2016-10-25 | 2019-06-11 | Massachusetts Institute Of Technology | Inertial odometry with retroactive sensor calibration |
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2966477A1 (en) * | 2014-07-09 | 2016-01-13 | ANavS GmbH | Method for determining the position and attitude of a moving object using low-cost receivers |
CN107869989A (en) * | 2017-11-06 | 2018-04-03 | 东北大学 | A positioning method and system based on visual-inertial navigation information fusion
WO2020048623A1 (en) * | 2018-09-07 | 2020-03-12 | Huawei Technologies Co., Ltd. | Estimation of a pose of a robot |
WO2020155616A1 (en) * | 2019-01-29 | 2020-08-06 | 浙江省北大信息技术高等研究院 | Digital retina-based photographing device positioning method |
CN109991636A (en) * | 2019-03-25 | 2019-07-09 | 启明信息技术股份有限公司 | Map construction method and system based on GPS, IMU and binocular vision
CN111210477A (en) * | 2019-12-26 | 2020-05-29 | 深圳大学 | Method and system for positioning moving target |
CN111595333A (en) * | 2020-04-26 | 2020-08-28 | 武汉理工大学 | Modularized unmanned vehicle positioning method and system based on visual inertial laser data fusion |
CN111609868A (en) * | 2020-05-29 | 2020-09-01 | 电子科技大学 | Visual-inertial odometry method based on an improved optical flow method
CN111795686A (en) * | 2020-06-08 | 2020-10-20 | 南京大学 | Method for positioning and mapping mobile robot |
CN111739063A (en) * | 2020-06-23 | 2020-10-02 | 郑州大学 | Electric power inspection robot positioning method based on multi-sensor fusion |
CN111750853A (en) * | 2020-06-24 | 2020-10-09 | 国汽(北京)智能网联汽车研究院有限公司 | Map establishing method, device and storage medium |
Non-Patent Citations (5)
Title |
---|
Design and application of an integrated navigation system for rotor UAVs; Liu Hongjian; Wang Yaonan; Tan Jianhao; Li Shushuai; Zhong Hang; Chinese Journal of Sensors and Actuators (传感技术学报); 2017-02-15 (No. 02); full text *
Robot positioning based on inertial sensors and visual odometry; Xia Lingnan; Zhang Bo; Wang Yingguan; Wei Jianming; Chinese Journal of Scientific Instrument (仪器仪表学报); 2013 (No. 01); full text *
Stereo visual-inertial fusion positioning in indoor environments; Ao Longhui; Guo Hang; Bulletin of Surveying and Mapping (测绘通报); 2019 (No. 12); full text *
Peter O. TypeScript Project Development in Practice (TypeScript项目开发实战). China Machine Press, 2020, pp. 247-248. *
Luo Siwei. Introduction to Inverse Problems in Computer Vision Inspection (计算机视觉检测逆问题导论). Beijing Jiaotong University Press, 2017, pp. 110-111. *
Also Published As
Publication number | Publication date |
---|---|
CN113155121A (en) | 2021-07-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113945206B (en) | Positioning method and device based on multi-sensor fusion | |
CN109991636B (en) | Map construction method and system based on GPS, IMU and binocular vision | |
CN109887057B (en) | Method and device for generating high-precision map | |
US10860871B2 (en) | Integrated sensor calibration in natural scenes | |
Alonso et al. | Accurate global localization using visual odometry and digital maps on urban environments | |
US11144770B2 (en) | Method and device for positioning vehicle, device, and computer readable storage medium | |
US20200364883A1 (en) | Localization of a mobile unit by means of a multi-hypothesis Kalman filter method | |
KR20190082071A (en) | Method, apparatus, and computer readable storage medium for updating electronic map | |
US20200355513A1 (en) | Systems and methods for updating a high-definition map | |
CN104729506A (en) | Unmanned aerial vehicle autonomous navigation positioning method with assistance of visual information | |
US10996337B2 (en) | Systems and methods for constructing a high-definition map based on landmarks | |
CN111241224B (en) | Method, system, computer device and storage medium for target distance estimation | |
CN110766760B (en) | Method, device, equipment and storage medium for camera calibration | |
WO2023059497A1 (en) | Camera calibration for lane detection and distance estimation using single-view geometry and deep learning | |
CN114755662A (en) | Calibration method and device for laser radar and GPS with road-vehicle fusion perception | |
CN110596741A (en) | Vehicle positioning method and device, computer equipment and storage medium | |
CN113252051A (en) | Map construction method and device | |
WO2020113425A1 (en) | Systems and methods for constructing high-definition map | |
CN113155121B (en) | Vehicle positioning method and device and electronic equipment | |
CN116625359A (en) | Visual inertial positioning method and device for self-adaptive fusion of single-frequency RTK | |
CN113034538B (en) | Pose tracking method and device of visual inertial navigation equipment and visual inertial navigation equipment | |
Verentsov et al. | Bayesian framework for vehicle localization using crowdsourced data | |
CN113874681B (en) | Evaluation method and system for point cloud map quality | |
Shami et al. | Geo-locating Road Objects using Inverse Haversine Formula with NVIDIA Driveworks | |
Zhao et al. | L-VIWO: Visual-Inertial-Wheel Odometry based on Lane Lines |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||