CN115468576A - Automatic driving positioning method and system based on multi-mode data fusion - Google Patents

Automatic driving positioning method and system based on multi-mode data fusion

Info

Publication number
CN115468576A
Authority
CN
China
Prior art keywords
positioning
point cloud
rgb image
laser point
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211202088.6A
Other languages
Chinese (zh)
Inventor
何薇
胡博伦
郭启翔
陈晖
刘磊
高宠智
屈紫君
李嫩
晏萌
付浩
赵金波
于子康
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dongfeng Automobile Co Ltd
Original Assignee
Dongfeng Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dongfeng Automobile Co Ltd filed Critical Dongfeng Automobile Co Ltd
Priority to CN202211202088.6A priority Critical patent/CN115468576A/en
Publication of CN115468576A publication Critical patent/CN115468576A/en
Pending legal-status Critical Current


Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 - Route searching; Route guidance
    • G01C21/3407 - Route searching; Route guidance specially adapted for specific applications
    • G01C21/3415 - Dynamic re-routing, e.g. recalculating the route when the user deviates from calculated route or after detecting real-time traffic data or accidents
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 - Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 - Lidar systems specially adapted for specific applications
    • G01S17/89 - Lidar systems specially adapted for specific applications for mapping or imaging
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 - Lidar systems specially adapted for specific applications
    • G01S17/93 - Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931 - Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Automation & Control Theory (AREA)
  • Navigation (AREA)

Abstract

The invention discloses an automatic driving positioning method and system based on multi-modal data fusion. The method comprises: obtaining coarse positioning coordinates of the vehicle position through a positioning system; determining a positioning space range from the coarse positioning coordinates; acquiring an RGB image and a laser point cloud of the road ahead of the vehicle; fusing the RGB image and laser point cloud data within the positioning space range; and matching the fused RGB image and laser point cloud data against a high-precision map to determine the vehicle position coordinates. Based on multi-modal data fusion and an environment-adaptive optimized positioning algorithm, the method copes with complex urban road conditions and the loss of satellite lock caused by occlusion from buildings or tunnels, meets the positioning requirements of complex dynamic environments, and improves the positioning accuracy of the autonomous vehicle.

Description

Automatic driving positioning method and system based on multi-mode data fusion
Technical Field
The invention relates to the technical field of vehicle positioning, in particular to an automatic driving positioning method and system based on multi-mode data fusion.
Background
Accurate real-time vehicle positioning is a core technology of autonomous driving. The common positioning methods for autonomous vehicles are satellite-inertial integrated navigation, differential GPS positioning, BeiDou positioning, visual positioning and radar positioning. Under the influence of complex urban road conditions, signal occlusion by buildings or tunnels, and multipath effects, BeiDou and GPS positioning suffer reduced accuracy or even loss of lock; visual positioning is accurate, but may fail in rainy weather; lidar positioning accuracy depends on the lidar's resolution, and in complex environments the point cloud data volume is huge, which affects real-time performance.
With the development of vehicle-road-cloud integration, cross-modal fusion positioning has become feasible. Deep fusion of multi-source information such as BeiDou, inertial navigation, vision and high-precision maps can improve positioning accuracy and meet the positioning requirements of complex dynamic environments.
Disclosure of Invention
The technical problem to be solved by the invention is to provide an automatic driving positioning method based on multi-mode data fusion, which can meet the positioning requirement in a complex dynamic environment and can improve the positioning precision of an automatic driving vehicle.
In order to solve the technical problem, the invention provides an automatic driving positioning method based on multi-modal data fusion, which comprises the following steps:
Step 1: acquiring coarse positioning coordinates of the vehicle position through a positioning system;
Step 2: determining a positioning space range according to the coarse positioning coordinates;
Step 3: acquiring an RGB image and a laser point cloud of the road ahead of the vehicle;
Step 4: fusing the RGB image and the laser point cloud data within the positioning space range;
Step 5: matching the fused RGB image and laser point cloud data against a high-precision map, and determining the vehicle position coordinates.
Preferably, the RGB image and the laser point cloud in step three are data from the same moment.
Further, in step four, the fusing the RGB image and the laser point cloud includes:
acquiring position information of elements in the RGB image in a first coordinate system;
and acquiring semantic codes of the elements in the laser point cloud in the first coordinate system.
Preferably, the origin of coordinates of the first coordinate system is located on the vehicle body.
Further, in step five, determining the vehicle position coordinates comprises:
step b1: constructing a factor graph, wherein the nodes are the positioning information to be solved, the position coordinates at time t_i are adopted as the nodes, and the factors represent spatial constraints between the nodes, comprising a high-precision map factor, an RGB image positioning factor and a laser point cloud positioning factor;
step b2: constructing an optimization objective function from the nodes and factors, and deriving its Jacobian matrix to complete the optimization;
step b3: solving with the Gauss-Newton method to obtain the optimum of the objective function.
In order to solve the above technical problem, the invention further provides a system based on the automatic driving positioning method based on multi-modal data fusion, comprising:
a coarse positioning module: acquiring coarse positioning coordinates of the vehicle position through a positioning system;
a determination module: determining a positioning space range according to the coarse positioning coordinates;
an acquisition module: acquiring an RGB image and a laser point cloud of a road in front of a vehicle;
a fusion module: fusing data of the RGB image and the laser point cloud within a positioning space range;
a positioning matching module: and positioning and matching the fused RGB image and laser point cloud data with a high-precision map, and determining the position coordinates of the vehicle.
The beneficial effects of the invention are: based on multi-modal data fusion and an environment-adaptive optimized positioning algorithm, the method copes with complex urban road conditions and the loss of satellite lock caused by occlusion from buildings or tunnels, meets the positioning requirements of complex dynamic environments, and improves the positioning accuracy of the autonomous vehicle.
Drawings
In the drawings:
FIG. 1 is a block flow diagram of an automated driving location method based on multimodal data fusion.
Detailed Description
The following further describes embodiments of the present invention with reference to the drawings. It should be noted that the description of the embodiments is provided to help understanding of the present invention, but the present invention is not limited thereto.
Example 1
As shown in FIG. 1, the invention relates to an automatic driving positioning method based on multi-mode data fusion, which comprises the following steps:
step S10: and acquiring coarse positioning coordinates of the vehicle position through a positioning system. First, a GPS (Global Positioning System), a beidou or a GNSS (Global navigation Satellite System) or inertial navigation is used to perform coarse Positioning, thereby reducing a Positioning range.
Step S20: determining a positioning space range according to the coarse positioning coordinates. Based on the coarse positioning range, the RGB image and the laser point cloud are each screened, data beyond the positioning space range are eliminated, and a high-precision map of the positioning space range is acquired.
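A minimal sketch of this screening step, assuming a circular search region around the coarse fix (the function name and the 2-D circular-range model are illustrative assumptions, not from the patent):

```python
import math

def screen_by_range(points, center, radius):
    """Keep only 2-D points that fall inside the positioning space
    range, modelled here as a circle of `radius` metres around the
    coarse positioning coordinate `center`."""
    cx, cy = center
    return [p for p in points if math.hypot(p[0] - cx, p[1] - cy) <= radius]

# Coarse fix at (100, 200) with a 50 m search radius: the far-away
# point is eliminated, mirroring the screening described above.
inside = screen_by_range([(110, 210), (400, 400)], (100, 200), 50.0)
```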
Step S30: acquiring an RGB image and a laser point cloud of the road ahead of the vehicle. These are generally obtained from a camera and a lidar on the vehicle; because different hardware has time offsets, time calibration is performed so that the RGB image of the road ahead collected in step three corresponds in time to the laser point cloud.
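The time calibration just described can be sketched as nearest-timestamp pairing between the two sensor streams; this is a hedged illustration, and the tolerance value and function name are assumptions:

```python
def pair_by_timestamp(image_stamps, cloud_stamps, tolerance=0.05):
    """Match each RGB image timestamp to the nearest laser point cloud
    timestamp, keeping the pair only if they differ by no more than
    `tolerance` seconds."""
    pairs = []
    for t_img in image_stamps:
        t_pc = min(cloud_stamps, key=lambda t: abs(t - t_img))
        if abs(t_pc - t_img) <= tolerance:
            pairs.append((t_img, t_pc))
    return pairs

# Two frames at 0.00 s and 0.10 s; the 0.50 s cloud is never paired.
pairs = pair_by_timestamp([0.00, 0.10], [0.01, 0.09, 0.50])
```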
Step S40: fusing the RGB image and laser point cloud data within the positioning space range. Combining the outputs of the lidar and the camera helps overcome their respective limitations, so their advantages are complementary: the camera is a very good tool for detecting roads, reading signs and identifying vehicles, while the lidar is better at accurately estimating positions and ranges.
The specific fusion method can be as follows:
(1) Position information of elements in the RGB image in the first coordinate system is acquired. The elements of the RGB image include lane lines and road signs. The lane lines are usually located first; when the lane lines cannot be located, the lane line identification and positioning process being comparatively complex, the road signs are used as a substitute, which reduces computation.
Optionally, the position information of the road signs is acquired by a monocular visual positioning method based on three road signs.
Optionally, within the positioning space range, lane lines are first detected on the RGB image by a lane line detection algorithm based on a fully convolutional network (FCN); the position information of the detected lane lines in the first coordinate system is then calculated through a lane line positioning model.
(2) Semantic codes of elements in the laser point cloud in the first coordinate system are acquired. The specific steps are as follows:
(1) Semantic segmentation. The segmented semantic targets consist of the ground, traffic signs and rod-shaped objects. First, the point cloud is preprocessed with conditional filtering and statistical filtering to remove outliers, and the ground point cloud is segmented based on pitch-angle evaluation. Then, traffic signs are screened based on reflection intensity and shape features: Euclidean clustering is applied to the high-reflection-intensity points, and a multi-stage filter built from prior knowledge completes the accurate segmentation. Finally, rod-shaped targets, which mainly comprise rod-mounted traffic devices and tree trunks, are segmented from the remaining point cloud using an object-analysis-based method: after the ground and traffic sign points are filtered out, Euclidean clustering generates a set of point clusters from the remaining points.
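The Euclidean clustering used to form the point clusters can be sketched as follows; this greedy version is only illustrative, since production pipelines typically use a k-d-tree-accelerated implementation (e.g. PCL's EuclideanClusterExtraction), and the distance threshold is an assumption:

```python
import math

def euclidean_cluster(points, dist_thresh):
    """Greedy Euclidean clustering of 2-D points: a point joins a
    cluster if it lies within dist_thresh of any member; otherwise
    it starts a new cluster."""
    clusters = []
    for p in points:
        placed = False
        for c in clusters:
            if any(math.dist(p, q) <= dist_thresh for q in c):
                c.append(p)
                placed = True
                break
        if not placed:
            clusters.append([p])
    return clusters

# Two nearby points form one cluster; the distant point stands alone.
clusters = euclidean_cluster([(0, 0), (0.2, 0), (5, 5)], 0.5)
```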
(2) Semantic projection. The overall point cloud centroid of each semantic target replaces the actual object, and the segmented traffic signs and rod-shaped semantic targets are projected onto a top view.
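The centroid substitution and top-view projection of step (2) amount to averaging each cluster's points and dropping the height coordinate; a minimal sketch with illustrative names:

```python
def centroid_top_view(cluster):
    """Replace a segmented semantic target (a cluster of 3-D points)
    by its centroid, then discard the height coordinate to obtain its
    top-view (bird's-eye) position."""
    n = len(cluster)
    cx = sum(p[0] for p in cluster) / n
    cy = sum(p[1] for p in cluster) / n
    return (cx, cy)  # z is discarded by the overhead projection

pos = centroid_top_view([(1.0, 2.0, 0.1), (3.0, 4.0, 2.3)])
```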
(3) Semantic coding. A weighted finite graph is generated from the top view, where the weight between semantic targets is their Euclidean distance. All information along each path is recorded, including the target node sequence, the distances between target nodes, the size of each target's minimum bounding box and each target's ground clearance, generating one semantic path. For example: starting from point S, traverse distance d_Sp1 to the rod-shaped target p1, then distance d_p1s1 to the traffic sign s1, and finally reach the end point E.
(The semantic path is defined by a formula, reproduced in the original only as equation image BDA0003872355730000041, combining the node sequence, the inter-node distances and the bounding-box quantities defined below.)
In the formula, l_pl, w_pl, h_pl and z_pl^max are respectively the length, width and height of the minimum bounding box and the ground clearance; p and s denote a rod (pole) and a traffic sign respectively; the subscripts 1 and 2 index the individual rods and traffic signs. Finally, all semantic paths together form the semantic code of the scene.
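As a hedged illustration of the weighted finite graph in step (3), the following builds edges weighted by the Euclidean distance between projected semantic targets; the labels and the dictionary encoding are assumptions, not the patent's data structure:

```python
import math

def semantic_graph(targets):
    """Build a weighted graph over projected semantic targets.
    Each target is (label, (x, y)); each edge weight is the Euclidean
    distance between the two targets' top-view positions."""
    edges = {}
    for i, (la, pa) in enumerate(targets):
        for j, (lb, pb) in enumerate(targets):
            if i < j:
                edges[(la, lb)] = math.dist(pa, pb)
    return edges

# A rod p1 and a traffic sign s1 at distance 5: one weighted edge.
g = semantic_graph([("p1", (0.0, 0.0)), ("s1", (3.0, 4.0))])
```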
Optionally, the coordinate origin of the first coordinate system is set on the vehicle body; the RGB image output by the camera and the laser point cloud output by the lidar can be aligned through calibration, so that the same physical location has the same coordinates.
Step S50: and positioning and matching the fused RGB image and laser point cloud data with a high-precision map, and determining the position coordinates of the vehicle.
Optionally, for lane line positioning and matching of the RGB image: lane-level matching is carried out using a hidden Markov model.
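Lane-level matching with a hidden Markov model is typically decoded with the Viterbi algorithm; the sketch below is a generic HMM decoder rather than the patent's specific model, and all probabilities are illustrative:

```python
def viterbi(obs_scores, trans):
    """Minimal Viterbi decoding for lane-level matching: states are
    candidate lanes, obs_scores[t][lane] is the per-frame likelihood
    that the detected lane line belongs to that lane, and trans[a][b]
    penalises implausible lane jumps."""
    n_lanes = len(obs_scores[0])
    prob = list(obs_scores[0])
    path = [[i] for i in range(n_lanes)]
    for scores in obs_scores[1:]:
        new_prob, new_path = [], []
        for j in range(n_lanes):
            best = max(range(n_lanes), key=lambda i: prob[i] * trans[i][j])
            new_prob.append(prob[best] * trans[best][j] * scores[j])
            new_path.append(path[best] + [j])
        prob, path = new_prob, new_path
    return path[max(range(n_lanes), key=lambda j: prob[j])]

# Two lanes over three frames; staying in the same lane is favoured,
# so the decoder keeps the vehicle in lane 0 despite noisy evidence.
lanes = viterbi([[0.9, 0.1], [0.8, 0.2], [0.6, 0.4]],
                [[0.9, 0.1], [0.1, 0.9]])
```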
Optionally, for semantic-code positioning and matching of the laser point cloud: the anchor point position is output after target type sequence matching, minimum bounding-box and ground clearance matching, path weight matching, and a matching uniqueness test.
Optionally, determining the vehicle position coordinates from the positioning and matching results of each element comprises factor graph construction based on multiple modalities.
(1) Factor graph construction. This configures the nodes and factors of the factor graph: the nodes are the positioning information to be solved, with the pose at time t_i adopted as node x_i; the factors represent spatial constraints between the nodes. In this scheme the factors comprise a high-precision map factor, an RGB image positioning factor and a laser point cloud positioning factor.
(2) Factor graph optimization. An optimization objective function is constructed from the nodes and factors, and its Jacobian matrix is derived to complete the optimization. The optimization objective function F(x_i) of the i-th node x_i can be divided into three parts:

F(x_i) = a*||x_i - x_i^RGB||^2 + b*||x_i - x_i^map||^2 + c*||x_i - x_i^L||^2

where a, b and c are the confidences of the three factors, x_i^RGB is the image positioning coordinate, x_i^map is the high-precision map coordinate, and x_i^L is the laser point cloud positioning coordinate.
(3) Solving with the Gauss-Newton method to estimate the optimum of the objective function.
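Under the assumption that the objective sums confidence-weighted squared distances to the image, map and lidar coordinates, the residuals are linear in x, so Gauss-Newton reduces to solving the normal equations per axis; the sketch below reflects that assumed form, not the patent's exact derivation:

```python
def gauss_newton_fuse(x0, anchors, iters=10):
    """Gauss-Newton minimisation of F(x) = sum_k w_k * ||x - z_k||^2,
    with anchors z_k (image, HD-map and lidar coordinates) and
    confidences w_k. The Jacobian is constant, so each iteration's
    normal-equation solution is the confidence-weighted average."""
    x = list(x0)
    for _ in range(iters):
        for d in range(len(x)):
            num = sum(w * z[d] for w, z in anchors)
            den = sum(w for w, _ in anchors)
            x[d] = num / den  # normal-equation solution for this axis
    return tuple(x)

# Image, map and lidar fixes with confidences a=1, b=2, c=1; the
# estimate settles at the confidence-weighted mean of the three.
est = gauss_newton_fuse((0.0, 0.0),
                        [(1.0, (10.0, 0.0)),
                         (2.0, (13.0, 0.0)),
                         (1.0, (10.0, 4.0))])
```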
The invention also relates to an automatic driving positioning system based on multi-modal data fusion, which is based on the above automatic driving positioning method and comprises:
a coarse positioning module: acquiring a coarse positioning coordinate of the vehicle position through a positioning system;
a determination module: determining a positioning space range according to the coarse positioning coordinates;
an acquisition module: acquiring an RGB image and a laser point cloud of a road in front of a vehicle;
a fusion module: fusing the RGB image and the laser point cloud data in a positioning space range;
a positioning matching module: and positioning and matching the fused RGB image and laser point cloud data with a high-precision map, and determining the position coordinates of the vehicle.
Finally, it should be noted that: although the present invention has been described in detail with reference to the above embodiments, it should be understood by those skilled in the art that: various alterations, modifications and equivalents may be introduced into the embodiments of the invention by those skilled in the art after reading this disclosure, and such alterations, modifications and equivalents are intended to be within the scope of the invention as defined in the appended claims.

Claims (10)

1. An automatic driving positioning method based on multi-modal data fusion is characterized by comprising the following steps:
Step 1: acquiring coarse positioning coordinates of the vehicle position through a positioning system;
Step 2: determining a positioning space range according to the coarse positioning coordinates;
Step 3: acquiring an RGB image and a laser point cloud of the road ahead of the vehicle;
Step 4: fusing the RGB image and the laser point cloud data within the positioning space range;
Step 5: matching the fused RGB image and laser point cloud data against a high-precision map, and determining the vehicle position coordinates.
2. The method of claim 1, wherein fusing the RGB images and the laser point cloud in step four comprises:
acquiring position information of elements in the RGB image in a first coordinate system;
and acquiring semantic codes of elements in the laser point cloud in a first coordinate system.
3. The method of claim 2, wherein obtaining the position information of the element in the RGB image in the first coordinate system comprises:
in the positioning space range, detecting lane lines on the RGB images by adopting a lane line detection algorithm based on FCN;
and calculating the position information of the lane line on the RGB image in the first coordinate system through the lane line positioning model.
4. The automatic driving positioning method based on multi-modal data fusion as claimed in claim 3, wherein in step five, the positioning and matching of the fused RGB image and laser point cloud data with the high-precision map comprises: and carrying out lane level matching on lane lines of the RGB images by adopting a hidden Markov model.
5. The method of claim 2, wherein obtaining the position information of the elements in the RGB image in the first coordinate system comprises: acquiring the position information of the road signs by a visual positioning method based on three road signs.
6. The method of claim 2, wherein obtaining semantic codes of elements in the laser point cloud in the first coordinate system comprises:
step a1: performing semantic segmentation on the laser point cloud, wherein segmented semantic objects comprise the ground, traffic signboards and rod-shaped objects;
step a2: the segmented traffic signboards and the rod-shaped semantic targets are subjected to overlook projection, and the whole point cloud centroid of the semantic targets is used for replacing an actual object during projection;
step a3: and carrying out semantic coding.
7. The automated driving positioning method based on multi-modal data fusion as claimed in claim 1, characterized in that in step five: determining the vehicle position coordinates includes:
step b1: constructing a factor graph, wherein the nodes are the positioning information to be solved, the position coordinates at time t_i are adopted as the nodes, and the factors represent spatial constraints between the nodes, comprising high-precision map factors, RGB image positioning factors and laser point cloud positioning factors;
step b2: constructing an optimization objective function from the nodes and factors, and deriving its Jacobian matrix to complete the optimization;
step b3: and solving by adopting a Gauss-Newton method to obtain the optimal value of the objective function.
8. The automatic driving positioning method based on multi-modal data fusion of claim 1, wherein in step one, the positioning system comprises GPS, BeiDou or inertial navigation.
9. An automatic driving positioning system based on multi-modal data fusion, based on the automatic driving positioning method based on multi-modal data fusion of any one of claims 1 to 8, characterized by comprising:
a coarse positioning module: acquiring a coarse positioning coordinate of the vehicle position through a positioning system;
a determination module: determining a positioning space range according to the coarse positioning coordinates;
an acquisition module: acquiring an RGB image and a laser point cloud of a road in front of a vehicle;
a fusion module: fusing the RGB image and the laser point cloud data in a positioning space range;
a positioning matching module: and positioning and matching the fused RGB image and laser point cloud data with a high-precision map, and determining the position coordinates of the vehicle.
10. The system of claim 9, wherein the acquisition module comprises a lidar and a camera.
CN202211202088.6A 2022-09-29 2022-09-29 Automatic driving positioning method and system based on multi-mode data fusion Pending CN115468576A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211202088.6A CN115468576A (en) 2022-09-29 2022-09-29 Automatic driving positioning method and system based on multi-mode data fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211202088.6A CN115468576A (en) 2022-09-29 2022-09-29 Automatic driving positioning method and system based on multi-mode data fusion

Publications (1)

Publication Number Publication Date
CN115468576A 2022-12-13

Family

ID=84334519

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211202088.6A Pending CN115468576A (en) 2022-09-29 2022-09-29 Automatic driving positioning method and system based on multi-mode data fusion

Country Status (1)

Country Link
CN (1) CN115468576A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116027375A (en) * 2023-03-29 2023-04-28 智道网联科技(北京)有限公司 Positioning method and device for automatic driving vehicle, electronic equipment and storage medium

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108121346A (en) * 2017-12-20 2018-06-05 东风汽车集团有限公司 A kind of automatic Pilot heading control loop and method based on high-precision navigation positioning system
CN108254776A (en) * 2017-12-25 2018-07-06 东风汽车集团有限公司 Tunnel placement system and method based on curb fluorescent reflection and binocular camera
CN111899162A (en) * 2019-05-06 2020-11-06 上海交通大学 Point cloud data processing method and system based on segmentation
CN111949943A (en) * 2020-07-24 2020-11-17 北京航空航天大学 Vehicle fusion positioning method for V2X and laser point cloud registration for advanced automatic driving
CN112347840A (en) * 2020-08-25 2021-02-09 天津大学 Vision sensor laser radar integrated unmanned aerial vehicle positioning and image building device and method
CN112819711A (en) * 2021-01-20 2021-05-18 电子科技大学 Monocular vision-based vehicle reverse positioning method utilizing road lane line
CN113093254A (en) * 2021-04-12 2021-07-09 南京速度软件技术有限公司 Multi-sensor fusion based vehicle positioning method in viaduct with map features
CN113885062A (en) * 2021-09-28 2022-01-04 中国科学技术大学先进技术研究院 Data acquisition and fusion equipment, method and system based on V2X
CN114111811A (en) * 2021-12-17 2022-03-01 奇瑞万达贵州客车股份有限公司 Navigation control system and method for automatically driving public bus
CN114184200A (en) * 2022-02-14 2022-03-15 南京航空航天大学 Multi-source fusion navigation method combined with dynamic mapping
CN114199240A (en) * 2022-02-18 2022-03-18 武汉理工大学 Two-dimensional code, laser radar and IMU fusion positioning system and method without GPS signal
CN114428259A (en) * 2021-12-13 2022-05-03 武汉中海庭数据技术有限公司 Automatic vehicle extraction method in laser point cloud of ground library based on map vehicle acquisition
CN114674314A (en) * 2022-03-31 2022-06-28 天津城建大学 Factor graph indoor positioning method based on fusion of multiple sensors
CN114754782A (en) * 2022-04-14 2022-07-15 智道网联科技(北京)有限公司 Map construction method and device, electronic equipment and computer readable storage medium
CN115031744A (en) * 2022-05-31 2022-09-09 电子科技大学 Cognitive map positioning method and system based on sparse point cloud-texture information


Similar Documents

Publication Publication Date Title
CN110988912B (en) Road target and distance detection method, system and device for automatic driving vehicle
CN111144388B (en) Monocular image-based road sign line updating method
CN111882612B (en) Vehicle multi-scale positioning method based on three-dimensional laser detection lane line
CN108802785B (en) Vehicle self-positioning method based on high-precision vector map and monocular vision sensor
CN105667518B (en) The method and device of lane detection
CN110146910B (en) Positioning method and device based on data fusion of GPS and laser radar
CN108171131B (en) Improved MeanShift-based method for extracting Lidar point cloud data road marking line
CN114526745B (en) Drawing construction method and system for tightly coupled laser radar and inertial odometer
CN111652179A (en) Semantic high-precision map construction and positioning method based on dotted line feature fusion laser
CN112740225B (en) Method and device for determining road surface elements
CN103377476A (en) Image registration of multimodal data using 3d geoarcs
CN112162297B (en) Method for eliminating dynamic obstacle artifacts in laser point cloud map
CN110197173B (en) Road edge detection method based on binocular vision
CN112362072A (en) High-precision point cloud map creation system and method in complex urban area environment
CN114325634A (en) Method for extracting passable area in high-robustness field environment based on laser radar
CN113640822A (en) High-precision map construction method based on non-map element filtering
CN113392169A (en) High-precision map updating method and device and server
CN115980765A (en) Vehicle positioning method based on environment perception auxiliary particle filtering
CN115205803A (en) Automatic driving environment sensing method, medium and vehicle
KR20230003803A (en) Automatic calibration through vector matching of the LiDAR coordinate system and the camera coordinate system
Xiong et al. Road-Model-Based road boundary extraction for high definition map via LIDAR
CN115468576A (en) Automatic driving positioning method and system based on multi-mode data fusion
CN113838129B (en) Method, device and system for obtaining pose information
CN113988197A (en) Multi-camera and multi-laser radar based combined calibration and target fusion detection method
WO2023222671A1 (en) Position determination of a vehicle using image segmentations

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination