CN114777797B - High-precision map visual positioning method for automatic driving and automatic driving method


Publication number
CN114777797B
Authority
CN
China
Prior art keywords
result
positioning
image
feature map
automatic driving
Prior art date
Legal status: Active
Application number
CN202210659637.6A
Other languages: Chinese (zh)
Other versions: CN114777797A
Inventor
孙德安
王锦山
易炯
方锐涌
于泠汰
王宇雨
冼志怀
李俊璋
Current Assignee
Changsha Jinwei Information Technology Co ltd
Original Assignee
Changsha Jinwei Information Technology Co ltd
Application filed by Changsha Jinwei Information Technology Co., Ltd.
Priority to CN202210659637.6A
Publication of CN114777797A
Application granted
Publication of CN114777797B

Classifications

    • G01C21/28 — Navigation specially adapted for navigation in a road network, with correlation of data from several navigational instruments
    • G01S19/45 — Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • G01S19/47 — Determining position by combining satellite measurements with a supplementary measurement, the supplementary measurement being an inertial measurement, e.g. tightly coupled inertial
    • G06F18/253 — Pattern recognition; fusion techniques of extracted features
    • G06F18/256 — Pattern recognition; fusion techniques of classification results relating to different input data, e.g. multimodal recognition

Abstract

The invention discloses a high-precision map visual positioning method for automatic driving, which comprises: acquiring sensor information and confirming an initial position of the vehicle; acquiring an environment image and extracting multi-scale spatial features of the image; converting the image from the pixel plane to the BEV plane and performing image segmentation; and extracting data from the image segmentation result and fusing it with the GNSS positioning result to obtain a high-precision positioning result for the current vehicle. The invention also discloses an automatic driving method comprising this high-precision map visual positioning method. To counter the position drift that occurs when positioning in weak-GNSS or GNSS-denied areas, the current high-precision position of the vehicle is obtained by filtering and fusing the visual matching positioning result with the GNSS positioning result; the method can therefore provide a low-cost, high-precision positioning result for the automatic driving vehicle, improves positioning robustness, and offers high reliability, good continuity, precision and stability.

Description

High-precision map visual positioning method for automatic driving and automatic driving method
Technical Field
The invention belongs to the field of digital signal processing, and particularly relates to a high-precision map visual positioning method for automatic driving and an automatic driving method.
Background
With economic and technological development and the improvement of living standards, automatic driving technology is widely applied in people's production and daily life, bringing great convenience. Ensuring the stable and reliable operation of automatic driving is therefore important.
The core technical system of automatic driving is divided into three modules: perception, decision and execution. Positioning is a very important part of the perception module; positioning techniques are used to determine the precise location of the vehicle in a map. Only when the vehicle is accurately positioned in the map can the system better assist the vehicle's perception and ultimately support the decisions and corresponding actions of the execution module.
The commonly used positioning method today is direct GNSS (Global Navigation Satellite System) positioning. In actual driving scenes, however, there are many areas with weak or no GNSS signal (near urban high-rises, in tunnels, under overpasses, etc.); in these areas, pure GNSS positioning may drift, resulting in inaccurate positioning. Inaccurate positioning directly affects the subsequent planning decisions and the control actions of the execution module in automatic driving, and in serious cases can cause traffic accidents.
Currently, to overcome the above drawbacks of single direct GNSS positioning, lidar or fusion of visual positioning results with GNSS positioning results is generally adopted for auxiliary or integrated positioning. However, the current technology still has the following drawbacks: (1) lidar sensors are expensive, and their performance degrades in tunnels, snowy weather and similar conditions, which affects the positioning result; lidar therefore cannot provide a low-cost, robust positioning result; (2) matching-based positioning with a visual sensor requires projecting a positioning layer from the pixel plane to a BEV (Bird's Eye View) plane; this projection is currently mostly done with IPM (Inverse Perspective Mapping), but IPM suffers from perspective noise, with error growing as scene distance increases; moreover, IPM generally assumes the ground is a plane, which does not hold in many scenes, so its projection error is large, positioning accuracy is low, and a high-accuracy positioning result cannot be provided.
Disclosure of Invention
The invention aims to provide a high-precision map visual positioning method for automatic driving with high reliability, good continuity, precision and stability.
The invention also aims to provide an automatic driving method comprising the high-precision map visual positioning method for automatic driving.
The invention provides a high-precision map visual positioning method for automatic driving, which comprises the following steps:
S1, acquiring sensor information, and confirming the initial position of the vehicle based on the acquired information;
S2, acquiring an environment image around the vehicle, and extracting multi-scale spatial features of the acquired image;
S3, converting the image acquired in step S2 from the pixel plane to the BEV plane;
S4, performing image segmentation on the BEV plane image obtained in step S3;
S5, extracting data from the image segmentation result obtained in step S4, and fusing the GNSS positioning result to obtain the high-precision positioning result of the current vehicle.
In step S1, the GNSS position information, the position information of the inertial navigation IMU, and the wheel speed meter information of the vehicle are obtained, and data synthesis is performed on the obtained information, so as to confirm the initial position of the vehicle.
The step S2 specifically includes the following steps:
acquiring an environment image around the vehicle with a monocular camera, and performing distortion correction on the acquired environment image to obtain the corrected image in $\mathbb{R}^{3 \times H \times W}$, where $\mathbb{R}^{3 \times H \times W}$ is the three-dimensional set of real numbers, $H$ is the original image height and $W$ is the original image width;
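For illustration only (not from the patent text), the distortion-correction step might look as follows with OpenCV; the camera matrix and distortion coefficients are hypothetical placeholders that would come from a prior calibration of the monocular camera.

```python
import cv2
import numpy as np

# Hypothetical intrinsics and distortion coefficients from a prior calibration.
K = np.array([[1000.0,    0.0, 960.0],
              [   0.0, 1000.0, 540.0],
              [   0.0,    0.0,   1.0]])
dist = np.array([-0.30, 0.10, 0.0, 0.0, 0.0])   # k1, k2, p1, p2, k3

frame = cv2.imread("frame.png")        # environment image from the monocular camera
I = cv2.undistort(frame, K, dist)      # corrected image, H x W x 3
```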
extracting multi-scale feature maps from the corrected image with a feature extraction network, fusing the feature information of the multi-scale feature maps with a Feature Pyramid Network (FPN), and finally obtaining the feature map of each layer $f_s \in \mathbb{R}^{c \times h_s \times w_s}$, where $\mathbb{R}^{c \times h_s \times w_s}$ is the three-dimensional set of real numbers, $h_s$ is the feature map height and $w_s$ is the feature map width.
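A minimal sketch of the multi-scale extraction and FPN fusion, assuming a ResNet-18 trunk and torchvision's FeaturePyramidNetwork; the patent does not name a specific backbone or channel widths, so these choices are illustrative.

```python
import torch
from torchvision.models import resnet18
from torchvision.models._utils import IntermediateLayerGetter
from torchvision.ops import FeaturePyramidNetwork

# Multi-scale features from a ResNet trunk (backbone choice is illustrative).
trunk = IntermediateLayerGetter(resnet18(), return_layers={
    "layer1": "0", "layer2": "1", "layer3": "2", "layer4": "3"})
fpn = FeaturePyramidNetwork(in_channels_list=[64, 128, 256, 512], out_channels=128)

x = torch.randn(1, 3, 480, 640)   # corrected image (batch of 1)
feats = fpn(trunk(x))             # OrderedDict of per-layer maps f_s, all 128-channel
for name, f in feats.items():
    print(name, tuple(f.shape))   # (1, 128, h_s, w_s) at each scale
```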
Step S3 specifically treats the mapping of the image as a set of sequence-to-sequence conversions, based on the one-to-one correspondence between vertical scan lines of the image and polar rays in the BEV map: each layer's feature map is encoded by an encoder and decoded by a corresponding decoder, thereby converting the image from the pixel plane to the BEV plane.
The step S3 specifically includes the following steps:
A. encoding each layer feature map with an encoder:
A1. adjusting the size of the feature map to be processed to obtain the feature map $f_s$;
A2. position-encoding the feature map $f_s$ obtained in step A1: adding position-encoding information to the feature map $f_s$ to obtain the encoded position feature map $f_{s2} = f_s + PE$, where $PE$ is the one-dimensional sinusoidal position encoding, with

$$PE_{(pos,\,2i)} = \sin\!\left(\frac{pos}{10000^{2i/d}}\right), \qquad PE_{(pos,\,2i+1)} = \cos\!\left(\frac{pos}{10000^{2i/d}}\right)$$

where $PE_{(pos,\,2i)}$ is the position encoding of the even columns, $pos$ is the position index, $PE_{(pos,\,2i+1)}$ is the position encoding of the odd columns, $i$ is a dimension of the vector, and $d$ is the dimension length of the vector;
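A sketch of the one-dimensional sinusoidal position encoding of step A2, assuming the standard transformer formulation with base 10000 (matching the formula above) and an even dimension length d.

```python
import numpy as np

def sinusoidal_pe(length: int, d: int) -> np.ndarray:
    """One-dimensional sinusoidal position encoding: sin on even indices, cos on odd."""
    pe = np.zeros((length, d))
    pos = np.arange(length)[:, None]      # position index pos
    two_i = np.arange(0, d, 2)[None, :]   # even dimension indices (the 2i in the formula)
    pe[:, 0::2] = np.sin(pos / 10000 ** (two_i / d))
    pe[:, 1::2] = np.cos(pos / 10000 ** (two_i / d))
    return pe

# f_s2 = f_s + PE, with PE added along each feature-map column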
A3. based on the multi-head attention model, processing the encoded position feature map $f_{s2}$ obtained in step A2 with

$$Q(h_i) = W_Q\,h_i, \qquad K(h_i) = W_K\,h_i, \qquad V(h_i) = W_V\,h_i$$

where $Q(h_i)$ is the query vector; $h_i$ is the $i$-th column of the feature map $f_{s2}$; $W_Q$ is the query matrix of the multi-head attention model; $K(h_i)$ is the key vector; $W_K$ is the key matrix of the multi-head attention model; $V(h_i)$ is the value vector; $W_V$ is the value matrix of the multi-head attention model;
A4. calculating the score value $score_{ij}$ with

$$score_{ij} = \frac{Q(h_i) \cdot K(h_j)}{\sqrt{d_k}}$$

where $d_k$ is the dimension length of the vector $K(h_i)$, and the symbol $\cdot$ denotes the dot product;
A5. applying the softmax function to the score values $score_{ij}$ obtained in step A4 to obtain the post-softmax intermediate variable $\alpha_{ij}$:

$$\alpha_{ij} = \frac{\exp(score_{ij})}{\sum_{k=1}^{H_1} \exp(score_{ik})}$$

where $H_1$ is the height value of the feature map column;
A6. weighting the post-softmax intermediate variable $\alpha_{ij}$ obtained in step A5 with the value vectors obtained in step A3 to obtain the final result $c_i$:

$$c_i = \sum_{j=1}^{H_1} \alpha_{ij}\,V(h_j)$$

where $c_i$ is the encoded result and $j$ is a counting variable;
A7. combining the results of steps A4-A6 into the overall calculation formula of the Attention mechanism:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{T}}{\sqrt{d_k}}\right) V$$

where $d_k$ is the dimension length of the vector $K(h_i)$;
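Steps A3-A7 together amount to standard scaled dot-product attention over a feature-map column; a compact single-head sketch (the shapes and the shared projection below are illustrative simplifications):

```python
import torch
import torch.nn.functional as F

def attention(Q: torch.Tensor, K: torch.Tensor, V: torch.Tensor) -> torch.Tensor:
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, as in steps A4-A7."""
    d_k = K.shape[-1]
    scores = Q @ K.transpose(-2, -1) / d_k ** 0.5   # score_ij (step A4)
    alpha = F.softmax(scores, dim=-1)               # softmax over j (step A5)
    return alpha @ V                                # weighted sum c_i (step A6)

# Example: one feature-map column of height H1 = 32 with d = 128 channels
h = torch.randn(32, 128)
W_Q = W_K = W_V = torch.nn.Linear(128, 128, bias=False)  # one shared projection,
c = attention(W_Q(h), W_K(h), W_V(h))                    # for brevity only; the model
print(c.shape)                                           # uses separate W_Q, W_K, W_V
```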
A8. passing the result through a fully connected layer to obtain the final encoded feature map $f_u \in \mathbb{R}^{c \times h_u \times w_u}$, where $w_u$ is the encoded feature map width and $h_u$ is the encoded feature map height;
B. decoding the encoding result obtained in step A with a decoder:
B1. position-encoding the encoded feature map $f_u$ obtained in step A: adding position-encoding information to the feature map $f_u$ to obtain the decoded position feature map $f_{u2} = f_u + PE$, where $PE$ is the position encoding;
B2. superimposing the polar angle values on the decoded position feature map $f_{u2}$ obtained in step B1 to obtain the angle-position feature map $f_{u3} = f_{u2} + AE$, where $AE$ is the angle-position encoding;
B3. based on the multi-head attention model, processing the angle-position feature map $f_{u3}$ obtained in step B2 with

$$Q(r_{ii}) = W_{QQ}\,r_{ii}, \qquad K(h_{ii}) = W_{KK}\,h_{ii}, \qquad V(h_{ii}) = W_{VV}\,h_{ii}$$

where $Q(r_{ii})$ is the query vector; $r_{ii}$ is the $ii$-th polar ray of the BEV plane; $W_{QQ}$ is the query matrix; $h_{ii}$ is the $ii$-th column of the feature map $f_{u3}$; $K(h_{ii})$ is the key vector; $W_{KK}$ is the key matrix; $V(h_{ii})$ is the value vector; and $W_{VV}$ is the value matrix;
B4. using the following formula as the overall Attention equation:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{T}}{\sqrt{d_{k1}}}\right) V$$

where $d_{k1}$ is the dimension length of the vector $K(h_{ii})$;
B5. passing the result through a fully connected layer to obtain the decoded feature map in $\mathbb{R}^{c \times h_u \times r_u}$, where $h_u$ is the encoded feature map height and $r_u$ is the radial length of the feature map;
B6. passing the decoded feature map obtained in step B5 through several further Attention conversions to obtain the final decoding result;
C. converting the feature map obtained in step B from polar coordinates to rectangular coordinates, obtaining a feature map in $\mathbb{R}^{c \times X \times Y}$, where $X$ is the converted feature map width and $Y$ is the converted feature map height.
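For step C, one way to sketch the polar-to-rectangular conversion is OpenCV's warpPolar in inverse mode; the angle coverage, map sizes and center used here are illustrative assumptions (the patent does not specify them).

```python
import cv2
import numpy as np

# Hypothetical decoded polar map: rows = polar angle, cols = radial distance r_u.
polar = np.random.rand(360, 256).astype(np.float32)

X, Y = 512, 512                          # converted feature-map width and height
center = (X / 2, Y - 1.0)                # camera at the bottom-center of the BEV
cart = cv2.warpPolar(polar, (X, Y), center, maxRadius=256,
                     flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
```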
Step S4 specifically obtains the segmentation result by passing the BEV plane image obtained in step S3 through an image segmentation neural network, giving a result in $\mathbb{R}^{classes \times x \times y}$, where $classes$ is the number of segmentation categories and $x, y$ are the plane coordinate values.
Step S5 specifically extracts positioning-layer point cloud data from the image segmentation result obtained in step S4; then, according to the initialized vehicle position obtained in step S1, extracts the positioning-layer point cloud data of the area corresponding to that position; computes a visual positioning result from the obtained point cloud data with the ICP (Iterative Closest Point) matching positioning method; and finally fuses the visual positioning result with the GNSS positioning result to obtain the final high-precision positioning result of the current vehicle.
The step S5 specifically includes the following steps:
a. extracting the positioning-layer point cloud data $P_s$ from the image segmentation result obtained in step S4;
b. according to the initialized vehicle position obtained in step S1, extracting the high-precision positioning-layer point cloud data $P_t$ of the area corresponding to that position;
c. computing a visual positioning result from the obtained point cloud data $P_s$ and $P_t$ with the ICP matching positioning method, using

$$\left(R^{*}, t^{*}\right) = \arg\min_{R,\,t} \frac{1}{n} \sum_{i=1}^{n} \left\| P_{si} - \left(R\,P_{ti} + t\right) \right\|^{2}$$

where $\left(R^{*}, t^{*}\right)$ is the final result; $\arg\min$ denotes finding the minimum; $n$ is the number of points; $P_{si}$ is the $i$-th point cloud datum on the segmented plane; $R$ is the rotation matrix; $P_{ti}$ is the $i$-th point cloud datum of the high-precision positioning layer; $t$ is the translation vector; and $\|\cdot\|$ denotes the norm;
d. fusing the visual positioning result and the GNSS positioning result with an extended Kalman filter framework to obtain the final high-precision positioning result of the current vehicle.
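For step c, a minimal sketch of the closed-form alignment solved inside one ICP iteration (Kabsch/SVD), assuming point correspondences between P_s and P_t are already established; a full ICP would re-estimate nearest-neighbour correspondences and iterate until convergence.

```python
import numpy as np

def icp_step(P_s: np.ndarray, P_t: np.ndarray):
    """One point-to-point alignment step: find R, t minimizing
    (1/n) * sum_i || P_si - (R @ P_ti + t) ||^2, correspondences assumed given."""
    mu_s, mu_t = P_s.mean(axis=0), P_t.mean(axis=0)
    H = (P_t - mu_t).T @ (P_s - mu_s)       # cross-covariance of centered clouds
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_s - R @ mu_t
    return R, t
```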
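For step d, a toy sketch of the fusion filter, assuming both the GNSS fix and the ICP visual fix directly observe the planar position (which makes the update linear; a full EKF would linearize a nonlinear measurement model through its Jacobian). All noise values are illustrative.

```python
import numpy as np

class FusionEKF:
    """Toy Kalman-style fusion of the visual-matching and GNSS position fixes."""

    def __init__(self):
        self.x = np.zeros(2)            # fused planar position estimate
        self.P = np.eye(2) * 1e3        # state covariance

    def predict(self, Q=np.eye(2) * 0.1):
        self.P = self.P + Q             # identity motion model, for brevity

    def update(self, z, R_meas):
        H = np.eye(2)                   # measurement Jacobian (direct observation)
        S = H @ self.P @ H.T + R_meas
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - H @ self.x)
        self.P = (np.eye(2) - K @ H) @ self.P

ekf = FusionEKF()
ekf.predict()
ekf.update(np.array([10.0, 5.0]), np.eye(2) * 4.0)    # GNSS fix, larger noise
ekf.update(np.array([10.2, 5.1]), np.eye(2) * 0.04)   # ICP visual fix, smaller noise
print(ekf.x)
```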
The invention also discloses an automatic driving method comprising the high-precision map visual positioning method for automatic driving, which comprises the following steps:
(1) the high-precision map visual positioning method for automatic driving is adopted to position the vehicle in real time;
(2) performing real-time decision making according to the real-time positioning result obtained in the step (1);
(3) controlling the execution module to execute according to the real-time decision result of step (2), completing the automatic driving of the vehicle.
With the high-precision map visual positioning method for automatic driving and the automatic driving method provided by the invention, the position drift that occurs when positioning in weak-GNSS or GNSS-denied areas is countered by filtering and fusing the visual matching positioning result with the GNSS positioning result to obtain the current high-precision position of the vehicle; the method can therefore provide a low-cost, high-precision positioning result for the automatic driving vehicle, improves positioning robustness, and offers high reliability, good continuity, precision and stability.
Drawings
Fig. 1 is a schematic method flow diagram of a positioning method of the present invention.
Fig. 2 is a schematic method flow diagram of the automatic driving method of the present invention.
Fig. 3 is a schematic diagram of a road segment track test situation of an embodiment of the positioning method of the present invention.
Fig. 4 is a schematic track diagram of a GNSS signal before and after interruption in the positioning method according to the embodiment of the present invention.
Fig. 5 is a schematic diagram of an error statistic based on the method of the present invention according to the embodiment of the positioning method of the present invention.
Fig. 6 is a schematic diagram of error statistics of the IPM-based method in the embodiment of the positioning method of the present invention.
Detailed Description
Fig. 1 is a schematic flow chart of the positioning method of the present invention. The invention provides a high-precision map visual positioning method for automatic driving, which comprises the following steps:
S1, acquiring sensor information, and confirming the initial position of the vehicle based on the acquired information; specifically, acquiring GNSS position information, position information of the inertial navigation IMU and wheel speed meter information of the vehicle, and performing data synthesis on the acquired information to confirm the initial position of the vehicle;
S2, acquiring an environment image around the vehicle, and extracting the multi-scale spatial features of the acquired image; specifically comprising the following steps:
acquiring an environment image around the vehicle with a monocular camera, and performing distortion correction on the acquired environment image to obtain the corrected image in $\mathbb{R}^{3 \times H \times W}$, where $\mathbb{R}^{3 \times H \times W}$ is the three-dimensional set of real numbers, $H$ is the original image height and $W$ is the original image width;
extracting multi-scale feature maps from the corrected image with a feature extraction network, fusing the feature information of the multi-scale feature maps with a Feature Pyramid Network (FPN), and finally obtaining the feature map of each layer $f_s \in \mathbb{R}^{c \times h_s \times w_s}$, where $\mathbb{R}^{c \times h_s \times w_s}$ is the three-dimensional set of real numbers, $h_s$ is the feature map height and $w_s$ is the feature map width;
S3, converting the image acquired in step S2 from the pixel plane to the BEV plane; specifically, treating the mapping of the image as a set of sequence-to-sequence conversions based on the one-to-one correspondence between vertical scan lines of the image and polar rays in the BEV map: each layer's feature map is encoded by an encoder and decoded by a corresponding decoder, thereby converting the image from the pixel plane to the BEV plane;
in implementation, this comprises the following steps:
A. encoding each layer feature map with an encoder:
A1. adjusting the size of the feature map to be processed to obtain the feature map $f_s$;
A2. position-encoding the feature map $f_s$ obtained in step A1: adding position-encoding information to the feature map $f_s$ to obtain the encoded position feature map $f_{s2} = f_s + PE$, where $PE$ is the one-dimensional sinusoidal position encoding, with

$$PE_{(pos,\,2i)} = \sin\!\left(\frac{pos}{10000^{2i/d}}\right), \qquad PE_{(pos,\,2i+1)} = \cos\!\left(\frac{pos}{10000^{2i/d}}\right)$$

where $PE_{(pos,\,2i)}$ is the position encoding of the even columns, $pos$ is the position index, $PE_{(pos,\,2i+1)}$ is the position encoding of the odd columns, $i$ is a dimension of the vector, and $d$ is the dimension length of the vector;
A3. based on the multi-head attention model, processing the encoded position feature map $f_{s2}$ obtained in step A2 with

$$Q(h_i) = W_Q\,h_i, \qquad K(h_i) = W_K\,h_i, \qquad V(h_i) = W_V\,h_i$$

where $Q(h_i)$ is the query vector; $h_i$ is the $i$-th column of the feature map $f_{s2}$; $W_Q$ is the query matrix of the multi-head attention model; $K(h_i)$ is the key vector; $W_K$ is the key matrix of the multi-head attention model; $V(h_i)$ is the value vector; $W_V$ is the value matrix of the multi-head attention model;
A4. calculating the score value $score_{ij}$ with

$$score_{ij} = \frac{Q(h_i) \cdot K(h_j)}{\sqrt{d_k}}$$

where $d_k$ is the dimension length of the vector $K(h_i)$, and the symbol $\cdot$ denotes the dot product;
A5. applying the softmax function to the score values $score_{ij}$ obtained in step A4 to obtain the post-softmax intermediate variable $\alpha_{ij}$:

$$\alpha_{ij} = \frac{\exp(score_{ij})}{\sum_{k=1}^{H_1} \exp(score_{ik})}$$

where $H_1$ is the height value of the feature map column;
A6. weighting the post-softmax intermediate variable $\alpha_{ij}$ obtained in step A5 with the value vectors obtained in step A3 to obtain the final result $c_i$:

$$c_i = \sum_{j=1}^{H_1} \alpha_{ij}\,V(h_j)$$

where $c_i$ is the encoded result and $j$ is a counting variable;
A7. combining the results of steps A4-A6 into the overall calculation formula of the Attention mechanism:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{T}}{\sqrt{d_k}}\right) V$$

where $d_k$ is the dimension length of the vector $K(h_i)$;
A8. passing the result through a fully connected layer to obtain the final encoded feature map $f_u \in \mathbb{R}^{c \times h_u \times w_u}$, where $w_u$ is the encoded feature map width and $h_u$ is the encoded feature map height;
B. decoding the encoding result obtained in step A with a decoder:
B1. position-encoding the encoded feature map $f_u$ obtained in step A: adding position-encoding information to the feature map $f_u$ to obtain the decoded position feature map $f_{u2} = f_u + PE$, where $PE$ is the position encoding;
B2. superimposing the polar angle values on the decoded position feature map $f_{u2}$ obtained in step B1 to obtain the angle-position feature map $f_{u3} = f_{u2} + AE$, where $AE$ is the angle-position encoding;
B3. based on the multi-head attention model, processing the angle-position feature map $f_{u3}$ obtained in step B2 with

$$Q(r_{ii}) = W_{QQ}\,r_{ii}, \qquad K(h_{ii}) = W_{KK}\,h_{ii}, \qquad V(h_{ii}) = W_{VV}\,h_{ii}$$

where $Q(r_{ii})$ is the query vector; $r_{ii}$ is the $ii$-th polar ray of the BEV plane; $W_{QQ}$ is the query matrix; $h_{ii}$ is the $ii$-th column of the feature map $f_{u3}$; $K(h_{ii})$ is the key vector; $W_{KK}$ is the key matrix; $V(h_{ii})$ is the value vector; and $W_{VV}$ is the value matrix;
B4. using the following formula as the overall Attention equation:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{T}}{\sqrt{d_{k1}}}\right) V$$

where $d_{k1}$ is the dimension length of the vector $K(h_{ii})$;
B5. passing the result through a fully connected layer to obtain the decoded feature map in $\mathbb{R}^{c \times h_u \times r_u}$, where $h_u$ is the encoded feature map height and $r_u$ is the radial length of the feature map;
B6. passing the decoded feature map obtained in step B5 through several further Attention conversions to obtain the final decoding result;
C. converting the feature map obtained in step B from polar coordinates to rectangular coordinates, obtaining a feature map in $\mathbb{R}^{c \times X \times Y}$, where $X$ is the converted feature map width and $Y$ is the converted feature map height;
S4, performing image segmentation on the BEV plane image obtained in step S3; specifically, passing the BEV plane image obtained in step S3 through an image segmentation neural network to obtain the segmentation result in $\mathbb{R}^{classes \times x \times y}$, where $classes$ is the number of segmentation categories and $x, y$ are the plane coordinate values;
S5, extracting data from the image segmentation result obtained in step S4, and fusing the GNSS positioning result to obtain the high-precision positioning result of the current vehicle; specifically, extracting positioning-layer point cloud data from the image segmentation result obtained in step S4; then, according to the initialized vehicle position obtained in step S1, extracting the positioning-layer point cloud data of the area corresponding to that position; computing a visual positioning result from the obtained point cloud data with the ICP (Iterative Closest Point) matching positioning method; and finally fusing the visual positioning result with the GNSS positioning result to obtain the final high-precision positioning result of the current vehicle;
in implementation, this comprises the following steps:
a. extracting the positioning-layer point cloud data $P_s$ from the image segmentation result obtained in step S4;
b. according to the initialized vehicle position obtained in step S1, extracting the high-precision positioning-layer point cloud data $P_t$ of the area corresponding to that position;
c. computing a visual positioning result from the obtained point cloud data $P_s$ and $P_t$ with the ICP matching positioning method, using

$$\left(R^{*}, t^{*}\right) = \arg\min_{R,\,t} \frac{1}{n} \sum_{i=1}^{n} \left\| P_{si} - \left(R\,P_{ti} + t\right) \right\|^{2}$$

where $\left(R^{*}, t^{*}\right)$ is the final result; $\arg\min$ denotes finding the minimum; $n$ is the number of points; $P_{si}$ is the $i$-th point cloud datum on the segmented plane; $R$ is the rotation matrix; $P_{ti}$ is the $i$-th point cloud datum of the high-precision positioning layer; $t$ is the translation vector; and $\|\cdot\|$ denotes the norm;
d. fusing the visual positioning result and the GNSS positioning result with an extended Kalman filter framework to obtain the final high-precision positioning result of the current vehicle.
Fig. 2 is a schematic flow chart of the automatic driving method of the present invention. The automatic driving method comprising the high-precision map visual positioning method for automatic driving disclosed by the invention comprises the following steps:
(1) the high-precision map visual positioning method for automatic driving is adopted to carry out real-time positioning on the vehicle;
(2) performing real-time decision making according to the real-time positioning result obtained in the step (1);
(3) controlling the execution module to execute according to the real-time decision result of step (2), completing the automatic driving of the vehicle.
The positioning method of the present invention is further described below with reference to an embodiment:
In the embodiment, development is based on the open-source automatic driving platform Autoware.ai 1.14, with simulation in the LGSVL simulator; a road section in a map of a certain area is selected for testing, and, to simulate weak-GNSS or GNSS-denied scenes such as tunnels and under-overpass environments, the GNSS satellite signal is directly interrupted in the second half of the section. Visual positioning based on IPM projection (hereinafter the IPM method) is compared with visual positioning based on the invention (hereinafter the method of the invention). Fig. 3 shows the test track of the current road section, and Fig. 4 shows the positioning tracks after the GNSS signal is interrupted and lock is lost (point A in the figure marks the interruption): the curve fluctuating up and down is the IPM-method track, while the nearly fluctuation-free track is that of the method of the invention, which almost coincides with the real track.
Outputting the track coordinate values of the IPM method and the method of the invention and subtracting the true values gives the error values in the X/Y/Z directions; Fig. 5 is the error statistics chart for the method of the invention, and Fig. 6 is the error statistics chart for the IPM method. As can be seen from Figs. 5 and 6, when the GNSS satellite signals are good, GNSS positioning accuracy is high and the two positioning methods are close, basically within 6 cm; but when the GNSS satellite signal loses lock, the method of the invention keeps positioning accuracy within 16 cm, while the IPM method only achieves accuracy within 60 cm. The method of the invention therefore greatly improves positioning accuracy in complex environments and can better serve the perception and decision functions of automatic driving.

Claims (7)

1. A high-precision map visual positioning method for automatic driving, characterized by comprising the following steps:
S1, acquiring sensor information, and confirming the initial position of the vehicle based on the acquired information;
S2, acquiring an environment image around the vehicle, and extracting multi-scale spatial features of the acquired image;
S3, converting the image acquired in step S2 from the pixel plane to the BEV plane; specifically, treating the mapping of the image as a set of sequence-to-sequence conversions based on the one-to-one correspondence between vertical scan lines of the image and polar rays in the BEV map: each layer's feature map is encoded by an encoder and decoded by a corresponding decoder, thereby converting the image from the pixel plane to the BEV plane; specifically comprising the following steps:
A. encoding each layer feature map with an encoder:
A1. adjusting the size of the feature map to be processed to obtain the feature map $f_s$;
A2. position-encoding the feature map $f_s$ obtained in step A1: adding position-encoding information to the feature map $f_s$ to obtain the encoded position feature map $f_{s2} = f_s + PE$, where $PE$ is the one-dimensional sinusoidal position encoding, with

$$PE_{(pos,\,2i)} = \sin\!\left(\frac{pos}{10000^{2i/d}}\right), \qquad PE_{(pos,\,2i+1)} = \cos\!\left(\frac{pos}{10000^{2i/d}}\right)$$

where $PE_{(pos,\,2i)}$ is the position encoding of the even columns, $pos$ is the position index, $PE_{(pos,\,2i+1)}$ is the position encoding of the odd columns, $i$ is a dimension of the vector, and $d$ is the dimension length of the vector;
A3. based on the multi-head attention model, processing the encoded position feature map $f_{s2}$ obtained in step A2 with

$$Q(h_i) = W_Q\,h_i, \qquad K(h_i) = W_K\,h_i, \qquad V(h_i) = W_V\,h_i$$

where $Q(h_i)$ is the query vector; $h_i$ is the $i$-th column of the feature map $f_{s2}$; $W_Q$ is the query matrix of the multi-head attention model; $K(h_i)$ is the key vector; $W_K$ is the key matrix of the multi-head attention model; $V(h_i)$ is the value vector; $W_V$ is the value matrix of the multi-head attention model;
A4. calculating the score value $score_{ij}$ with

$$score_{ij} = \frac{Q(h_i) \cdot K(h_j)}{\sqrt{d_k}}$$

where $d_k$ is the dimension length of the vector $K(h_i)$, and the symbol $\cdot$ denotes the dot product;
A5. applying the softmax function to the score values $score_{ij}$ obtained in step A4 to obtain the post-softmax intermediate variable $\alpha_{ij}$:

$$\alpha_{ij} = \frac{\exp(score_{ij})}{\sum_{k=1}^{H_1} \exp(score_{ik})}$$

where $H_1$ is the height value of the feature map column;
A6. weighting the post-softmax intermediate variable $\alpha_{ij}$ obtained in step A5 with the value vectors obtained in step A3 to obtain the final result $c_i$:

$$c_i = \sum_{j=1}^{H_1} \alpha_{ij}\,V(h_j)$$

where $c_i$ is the encoded result and $j$ is a counting variable;
A7. combining the results of steps A4-A6 into the overall calculation formula of the Attention mechanism:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{T}}{\sqrt{d_k}}\right) V$$

where $d_k$ is the dimension length of the vector $K(h_i)$;
A8. passing the result through a fully connected layer to obtain the final encoded feature map $f_u \in \mathbb{R}^{c \times h_u \times w_u}$, where $w_u$ is the encoded feature map width and $h_u$ is the encoded feature map height;
B. decoding the encoding result obtained in step A with a decoder:
B1. position-encoding the encoded feature map $f_u$ obtained in step A: adding position-encoding information to the feature map $f_u$ to obtain the decoded position feature map $f_{u2} = f_u + PE$, where $PE$ is the position encoding;
B2. superimposing the polar angle values on the decoded position feature map $f_{u2}$ obtained in step B1 to obtain the angle-position feature map $f_{u3} = f_{u2} + AE$, where $AE$ is the angle-position encoding;
B3. based on the multi-head attention model, processing the angle-position feature map $f_{u3}$ obtained in step B2 with

$$Q(r_{ii}) = W_{QQ}\,r_{ii}, \qquad K(h_{ii}) = W_{KK}\,h_{ii}, \qquad V(h_{ii}) = W_{VV}\,h_{ii}$$

where $Q(r_{ii})$ is the query vector; $r_{ii}$ is the $ii$-th polar ray of the BEV plane; $W_{QQ}$ is the query matrix; $h_{ii}$ is the $ii$-th column of the feature map $f_{u3}$; $K(h_{ii})$ is the key vector; $W_{KK}$ is the key matrix; $V(h_{ii})$ is the value vector; and $W_{VV}$ is the value matrix;
B4. using the following formula as the overall Attention equation:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{T}}{\sqrt{d_{k1}}}\right) V$$

where $d_{k1}$ is the dimension length of the vector $K(h_{ii})$;
B5. passing the result through a fully connected layer to obtain the decoded feature map in $\mathbb{R}^{c \times h_u \times r_u}$, where $h_u$ is the encoded feature map height and $r_u$ is the radial length of the feature map;
B6. passing the decoded feature map obtained in step B5 through several further Attention conversions to obtain the final decoding result;
C. converting the feature map obtained in step B from polar coordinates to rectangular coordinates, obtaining a feature map in $\mathbb{R}^{c \times X \times Y}$, where $X$ is the converted feature map width and $Y$ is the converted feature map height;
S4, performing image segmentation on the BEV plane image obtained in step S3;
S5, extracting data from the image segmentation result obtained in step S4, and fusing the GNSS positioning result to obtain the high-precision positioning result of the current vehicle.
2. The method as claimed in claim 1, wherein the step S1 is implemented by acquiring GNSS position information, inertial navigation IMU position information and vehicle wheel speed information, and performing data synthesis on the acquired information to determine the initial position of the vehicle.
3. The visual positioning method for the high-precision map used for automatic driving according to claim 2, wherein the step S2 specifically comprises the following steps:
acquiring an environment image around the vehicle with a monocular camera, and performing distortion correction on the acquired environment image to obtain the corrected image in $\mathbb{R}^{3 \times H \times W}$, where $\mathbb{R}^{3 \times H \times W}$ is the three-dimensional set of real numbers, $H$ is the original image height and $W$ is the original image width;
extracting multi-scale feature maps from the corrected image with a feature extraction network, fusing the feature information of the multi-scale feature maps with a feature pyramid network, and finally obtaining the feature map of each layer $f_s \in \mathbb{R}^{c \times h_s \times w_s}$, where $\mathbb{R}^{c \times h_s \times w_s}$ is the three-dimensional set of real numbers, $h_s$ is the feature map height and $w_s$ is the feature map width.
4. The high-precision map visual positioning method for automatic driving as claimed in claim 3, wherein step S4 is performed by passing the BEV plane image obtained in step S3 through an image segmentation neural network to obtain the segmentation result in $\mathbb{R}^{classes \times x \times y}$, where $classes$ is the number of segmentation categories and $x, y$ are the plane coordinate values.
5. The high-precision map visual positioning method for automatic driving as claimed in claim 4, wherein step S5 extracts positioning-layer point cloud data from the image segmentation result obtained in step S4; then, according to the initialized vehicle position obtained in step S1, extracts the positioning-layer point cloud data of the area corresponding to that position; computes a visual positioning result from the obtained point cloud data with the ICP (Iterative Closest Point) matching positioning method; and finally fuses the visual positioning result with the GNSS positioning result to obtain the final high-precision positioning result of the current vehicle.
6. The visual positioning method for the high-precision map used for the automatic driving as claimed in claim 5, wherein the step S5 specifically comprises the following steps:
a. extracting the positioning-layer point cloud data $P_s$ from the image segmentation result obtained in step S4;
b. according to the initialized vehicle position obtained in step S1, extracting the high-precision positioning-layer point cloud data $P_t$ of the area corresponding to that position;
c. computing a visual positioning result from the obtained point cloud data $P_s$ and $P_t$ with the ICP matching positioning method, using

$$\left(R^{*}, t^{*}\right) = \arg\min_{R,\,t} \frac{1}{n} \sum_{i=1}^{n} \left\| P_{si} - \left(R\,P_{ti} + t\right) \right\|^{2}$$

where $\left(R^{*}, t^{*}\right)$ is the final result; $\arg\min$ denotes finding the minimum; $n$ is the number of points; $P_{si}$ is the $i$-th point cloud datum on the segmented plane; $R$ is the rotation matrix; $P_{ti}$ is the $i$-th point cloud datum of the high-precision positioning layer; $t$ is the translation vector; and $\|\cdot\|$ denotes the norm;
d. fusing the visual positioning result and the GNSS positioning result with an extended Kalman filter framework to obtain the final high-precision positioning result of the current vehicle.
7. An automatic driving method comprising a high-precision map visual positioning method for automatic driving according to any one of claims 1 to 6, characterized by comprising the steps of:
(1) the high-precision map visual positioning method for automatic driving according to any one of claims 1 to 6 is adopted to carry out real-time positioning of the vehicle;
(2) performing real-time decision making according to the real-time positioning result obtained in the step (1);
(3) controlling the execution module to execute according to the real-time decision result of step (2), completing the automatic driving of the vehicle.
CN202210659637.6A — filed 2022-06-13 — High-precision map visual positioning method for automatic driving and automatic driving method — Active — CN114777797B

Priority Applications (1)

CN202210659637.6A (CN114777797B) — High-precision map visual positioning method for automatic driving and automatic driving method


Publications (2)

CN114777797A — 2022-07-22
CN114777797B — 2022-09-30

Family

ID=82421732

Family Applications (1)

CN202210659637.6A (CN114777797B) — Active — filed 2022-06-13 — High-precision map visual positioning method for automatic driving and automatic driving method

Country Status (1)

Country Link
CN (1) CN114777797B (en)

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7962285B2 (en) * 1997-10-22 2011-06-14 Intelligent Technologies International, Inc. Inertial measurement unit for aircraft
EP2243125B1 (en) * 2007-12-13 2020-04-29 Clemson University Research Foundation Vision based real time traffic monitoring
US10037689B2 (en) * 2015-03-24 2018-07-31 Donald Warren Taylor Apparatus and system to manage monitored vehicular flow rate
US9933269B2 (en) * 2015-06-22 2018-04-03 Here Global B.V. Midpoint-based map-agnostic navigation routing
CN109766878B (en) * 2019-04-11 2019-06-28 深兰人工智能芯片研究院(江苏)有限公司 A kind of method and apparatus of lane detection
CN110569704B (en) * 2019-05-11 2022-11-22 北京工业大学 Multi-strategy self-adaptive lane line detection method based on stereoscopic vision
US11788861B2 (en) * 2019-08-31 2023-10-17 Nvidia Corporation Map creation and localization for autonomous driving applications
CN114092751A (en) * 2020-07-09 2022-02-25 北京图森未来科技有限公司 Trajectory prediction method and device
CN112001352A (en) * 2020-09-02 2020-11-27 山东大学 Textile operation workbench identification and positioning method and device based on Apriltag
CN113139446B (en) * 2021-04-12 2024-02-06 长安大学 End-to-end automatic driving behavior decision method, system and terminal equipment
CN113221975B (en) * 2021-04-26 2023-07-11 中国科学技术大学先进技术研究院 Working condition construction method based on improved Markov analysis method and storage medium
CN114199259B (en) * 2022-02-21 2022-06-17 南京航空航天大学 Multi-source fusion navigation positioning method based on motion state and environment perception

Also Published As

CN114777797A — 2022-07-22


Legal Events

PB01 — Publication
SE01 — Entry into force of request for substantive examination
GR01 — Patent grant