CN110609311B - Intelligent vehicle positioning method based on fusion of vehicle-mounted panoramic image and millimeter wave radar - Google Patents

Intelligent vehicle positioning method based on fusion of vehicle-mounted panoramic image and millimeter wave radar

Info

Publication number
CN110609311B
Authority
CN
China
Prior art keywords
vehicle
image
millimeter wave
positioning
node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910960074.2A
Other languages
Chinese (zh)
Other versions
CN110609311A (en)
Inventor
胡钊政
周哲
陶倩文
肖汉彪
刘佳蕙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University of Technology WUT
Original Assignee
Wuhan University of Technology WUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University of Technology (WUT)
Priority to CN201910960074.2A
Publication of CN110609311A
Application granted
Publication of CN110609311B
Legal status: Active

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 - Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/02 - Systems using reflection of radio waves, e.g. primary radar systems; Analogous systems
    • G01S13/06 - Systems determining position data of a target
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 - Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system, the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42 - Determining position
    • G01S19/45 - Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement

Abstract

The invention discloses an intelligent vehicle positioning method based on the fusion of vehicle-mounted panoramic (surround-view) images and millimeter wave radar, comprising the following steps: 1) collect GPS information, vehicle-mounted surround-view images, millimeter wave radar data and UWB data with a vehicle-mounted device, and generate a high-precision map from the collected data; 2) perform coarse positioning by comparing the GPS data acquired by the vehicle to be positioned with the GPS data stored at each node of the map; 3) determine a candidate region around the selected node according to the GPS accuracy and perform image-level positioning; 4) calculate the pose relationship between the vehicle to be positioned and the mapping vehicle, and thereby determine the pose of the vehicle to be positioned; 5) fuse the pose obtained from the millimeter wave radar data with the pose obtained in step 4) by Kalman filtering, and take the result as the final positioning result. The method achieves vehicle positioning with the surround-view cameras and millimeter wave radar already fitted to the vehicle, without additional hardware cost, and improves positioning accuracy and robustness.

Description

Intelligent vehicle positioning method based on fusion of vehicle-mounted panoramic image and millimeter wave radar
Technical Field
The invention relates to intelligent vehicle technology, and in particular to an intelligent vehicle positioning method based on the fusion of vehicle-mounted panoramic (surround-view) images and millimeter wave radar.
Background
Over roughly 60 years of development, GPS (Global Positioning System) has become dominant outdoors in both coverage and accuracy, but its performance is severely degraded indoors, and especially in underground parking lots. As cities develop, underground parking lots keep growing, and indoor positioning systems for intelligent vehicles are widely needed. Common indoor positioning technologies currently include RFID (radio frequency identification) positioning, WiFi positioning, Bluetooth positioning, ZigBee positioning and ultrasonic positioning. RFID and WiFi positioning are easily interfered with and poorly stable, Bluetooth and millimeter wave radar positioning have short working ranges, and ZigBee positioning is relatively costly, so these methods are not suitable for large-scale application.
At present, mainstream vehicle models in China are already equipped with surround-view cameras and millimeter wave radar, which provides a ready implementation platform for the invention. UWB is a novel wireless communication technology with strong interference resistance, high multipath resolution and low power consumption, and it has broad development prospects. With the continuous development of computer vision, vision-based vehicle positioning has been widely applied to intelligent vehicles; although visual information is rich, its positioning error is relatively large, so it can be fused with other positioning methods to achieve hierarchical positioning and improve positioning efficiency and stability.
Disclosure of Invention
The technical problem to be solved by the invention is to provide, in view of the defects of the prior art, an intelligent vehicle positioning method based on the fusion of vehicle-mounted panoramic images and millimeter wave radar.
The technical solution adopted by the invention to solve this problem is an intelligent vehicle positioning method based on the fusion of vehicle-mounted panoramic images and millimeter wave radar, comprising the following steps:
1) Collect GPS information, vehicle-mounted surround-view images, millimeter wave radar data and UWB data with a vehicle-mounted device, and generate a high-precision map from the collected data; the high-precision map is node-based, and each node stores the GPS position, the surround-view image information and the millimeter wave radar data corresponding to that position;
2) Drive the vehicle to be positioned onto a road section for which the high-precision map has been collected and built, select any position as the starting point, calculate the Euclidean distance between the GPS data acquired by the vehicle to be positioned and the GPS data stored at each node of the map, and take the node with the minimum distance as the coarse positioning result;
3) Determine a candidate region around the selected node according to the GPS accuracy, take all nodes in this region as image-level positioning candidates, extract global features of the surround-view image using an ORB descriptor, match the global descriptor of the surround-view image against the global descriptors of the candidate nodes, and select the node with the minimum Hamming distance to the image to be positioned as the prediction result;
Since GPS accuracy in an indoor environment is about 10 m, and to ensure robustness of the system, the 10 nodes before and the 10 nodes after the node of the coarse positioning result are selected as image-level positioning candidates according to the spacing between nodes; global features of the surround-view image are extracted using ORB descriptors, the global descriptor of the surround-view image is matched against the global descriptors of the 21 candidate nodes, and the node with the minimum Hamming distance to the image to be positioned is selected as the prediction result;
3.1) Extract the surround-view image captured by the vehicle to be positioned and resize the resulting image to 63 × 63 pixels; this size is determined by the patch size used for ORB feature extraction in OpenCV.
3.2) Compute the global feature τ of the image as a description of the whole image. A descriptor length of 256 bits is used here because the BRIEF paper showed that 256-bit descriptors perform best.
3.3) Match the global descriptor of the surround-view image against the global descriptors of the 21 candidate nodes and select the node with the minimum Hamming distance to the image to be positioned;
4) Extract local feature points and descriptors of the original image to be positioned, match them by Hamming distance against the descriptors of the surround-view image stored in the map at the node resulting from step 3.3), remove wrong matches with RANSAC (random sample consensus), thereby determining the feature points common to the image to be positioned and the map image, and use these feature points to calculate the pose relationship between the vehicle to be positioned and the mapping vehicle, determining the pose of the vehicle to be positioned;
5) Extract the millimeter wave radar data stored at the node located in step 3.3) and match it against the millimeter wave radar data collected by the vehicle to be positioned, calculate the transformation between the vehicle to be positioned and the radar data stored in the map, solve the pose of the current vehicle relative to the mapping vehicle, fuse the pose obtained from the millimeter wave radar data with the pose obtained in step 4) by Kalman filtering, and take the result as the final positioning result.
The invention has the following beneficial effects: it realizes vehicle positioning using the surround-view cameras and millimeter wave radar already configured on the vehicle, without additional hardware cost, and it improves positioning accuracy and robustness.
Drawings
The invention will be further described with reference to the following drawings and examples, in which:
FIG. 1 is a flow chart of a method of an embodiment of the present invention;
FIG. 2 is a UWB positioning schematic of an embodiment of the invention;
FIG. 3 is a schematic diagram of map node stored information according to an embodiment of the present invention;
fig. 4 is a schematic view of the pose determination of a vehicle to be positioned according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in FIG. 1, the intelligent vehicle positioning method based on vehicle-mounted panoramic images and millimeter wave radar is characterized by a multi-scale positioning scheme that integrates INS, vehicle-mounted surround-view images, millimeter wave radar and UWB, and specifically comprises the following steps:
1) Information collection
The equipment used for information acquisition comprises vehicle-mounted cameras, an INS receiver, a millimeter wave radar and UWB tags. Four vehicle-mounted cameras are used: the front camera is embedded at the front logo of the vehicle; the left and right cameras are mounted below the left and right rear-view mirrors, tilted from the vertical with their views facing outward; the rear camera is mounted below the trunk lip, tilted from the vertical with its view facing backward. The INS receiver is mounted at the middle of the rear axle of the vehicle. One millimeter wave radar and the UWB tags are installed below the front and rear cameras.
2) Data processing
The data processing comprises camera image rectification and stitching, ORB descriptor extraction, and data association.
The camera image rectification and stitching module rectifies the images captured by the front, rear, left and right cameras and then stitches the rectified images into one feature image used for ORB descriptor extraction. The ORB descriptor extraction module extracts the global ORB descriptor, the local ORB feature points and the local ORB descriptors of the stitched feature image. The data association module realizes spatio-temporal synchronization of the image information, the INS data and the millimeter wave radar data.
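The patent does not spell out how the data association module synchronizes the streams. As one illustration only, the sketch below pairs each image timestamp with the nearest-in-time INS and radar samples; the function and field names are assumptions, not part of the patent.

import bisect

def associate(image_stamps, ins_records, radar_records):
    """Pair every image timestamp with the INS and radar samples closest in time.

    image_stamps : sorted list of float timestamps (seconds)
    ins_records, radar_records : sorted lists of (timestamp, data) tuples
    """
    def nearest(records, t):
        times = [r[0] for r in records]
        i = bisect.bisect_left(times, t)
        # pick the neighbour (i-1 or i) whose timestamp is closest to t
        candidates = [j for j in (i - 1, i) if 0 <= j < len(records)]
        return min(candidates, key=lambda j: abs(records[j][0] - t))

    associated = []
    for t in image_stamps:
        associated.append((t,
                           ins_records[nearest(ins_records, t)][1],
                           radar_records[nearest(radar_records, t)][1]))
    return associated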
Auxiliary equipment includes fixing brackets, data cables, a collection vehicle, a display, and a regulated DC power supply.
The intelligent vehicle positioning method based on the fusion of vehicle-mounted panoramic images and millimeter wave radar comprises the following steps:
s1, collecting GPS information, a vehicle-mounted all-round view image, millimeter wave radar data and UWB data to generate a high-precision map, wherein the high-precision map is node-based, and each node stores the GPS position information, all-round view image information and millimeter wave radar data corresponding to the position of the node. Since only one UWB tag can obtain position, two UWB tags are required to obtain the attitude relationship, as shown in fig. 2.
Coarse positioning
Drive the vehicle to be positioned onto the road section for which the high-precision map has been collected and built, and select any position as the starting point. Calculate the Euclidean distance between the GPS data acquired by the vehicle to be positioned and the GPS data stored at each node of the map, and select the node with the minimum distance as the coarse positioning result.
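A minimal sketch of this coarse-positioning search, assuming the node GPS coordinates have already been projected into a local planar metric frame; the function name and array layout are illustrative.

import numpy as np

def coarse_positioning(query_xy, node_xy):
    """Return the index of the map node whose stored GPS position is closest
    (smallest Euclidean distance) to the GPS fix of the vehicle to be positioned.

    query_xy : array of shape (2,)   - current GPS fix, metres in a local frame
    node_xy  : array of shape (N, 2) - stored node positions in the same frame
    """
    d = np.linalg.norm(node_xy - query_xy, axis=1)
    return int(np.argmin(d))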
Image level localization
a) Select the 10 nodes before and the 10 nodes after the node of the coarse positioning result as image-level positioning candidates, extract global features of the surround-view image using ORB descriptors, and match the global descriptor of the surround-view image against the global descriptors of the 21 candidate nodes. The feature matching is realized by computing the Hamming distance, given by:
D(X_1, X_2) = \sum_{i=1}^{256} X_1^i \oplus X_2^i   (1)
where X_1 and X_2 are two different global features and the superscript i denotes the i-th bit of X_j (j = 1, 2). The node with the minimum Hamming distance to the image to be positioned is selected as the prediction result.
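The global descriptor and formula (1) can be illustrated with the following sketch: the image is resized to 63 × 63, grayed and histogram-equalized, a 256-bit BRIEF-style global feature τ is built from intensity comparisons of point pairs, and candidates are ranked by Hamming distance. The sampling pattern of the point pairs is not given in the patent and is assumed here; all names are illustrative.

import cv2
import numpy as np

rng = np.random.default_rng(0)
# fixed set of 256 point pairs inside a 63x63 patch (an assumed sampling pattern;
# the patent only states that a 256-bit BRIEF-style global descriptor is used)
PAIRS = rng.integers(0, 63, size=(256, 4))

def global_descriptor(bgr_image):
    """256-bit global feature tau of one surround-view image."""
    img = cv2.resize(bgr_image, (63, 63))          # step 3.1: resize to 63 x 63
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # step 3.2: graying
    gray = cv2.equalizeHist(gray)                  # step 3.2: histogram equalization
    bits = np.array([gray[r1, c1] < gray[r2, c2] for r1, c1, r2, c2 in PAIRS],
                    dtype=np.uint8)
    return bits

def hamming(a, b):
    """Formula (1): number of differing bits between two global features."""
    return int(np.count_nonzero(a != b))

def image_level_positioning(query_desc, candidate_descs):
    """Step 3.3: pick the candidate node whose descriptor is closest to the query."""
    dists = [hamming(query_desc, d) for d in candidate_descs]
    return int(np.argmin(dists))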
b) Extract the local ORB descriptors of the image to be positioned, match them against the local ORB descriptors stored at the map node located in a), and remove wrong matches with RANSAC (random sample consensus), thereby determining the feature points common to the image to be positioned and the map image. Use these feature points to calculate the pose relationship R and T between the vehicle to be positioned and the mapping vehicle, and thereby determine the pose of the vehicle to be positioned. The formula is as follows:
[u_0 \; v_0]^T = R \, [u_1 \; v_1]^T + T   (2)
where [u_0 v_0]^T are the feature points of the image to be positioned, [u_1 v_1]^T are the corresponding feature points of the map image, R is the rotation matrix and T is the translation vector to be solved between the vehicle to be positioned and the mapping vehicle.
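One plausible OpenCV-based reading of step b) is sketched below: local ORB features are matched by Hamming distance, wrong matches are rejected by RANSAC, and a planar rigid transform (R, T) between the query and map surround-view images is recovered. The use of cv2.estimateAffinePartial2D is an assumption; the patent only states that R and T are solved from the RANSAC-filtered matches.

import cv2
import numpy as np

def visual_pose(query_img, map_img):
    """Estimate rotation R (2x2) and translation T (2x1) of the vehicle to be
    positioned relative to the mapping vehicle from two 8-bit grayscale
    surround-view images."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_q, des_q = orb.detectAndCompute(query_img, None)
    kp_m, des_m = orb.detectAndCompute(map_img, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_q, des_m)

    pts_q = np.float32([kp_q[m.queryIdx].pt for m in matches])
    pts_m = np.float32([kp_m[m.trainIdx].pt for m in matches])

    # RANSAC removes wrong matches while fitting a rotation+translation model
    M, inliers = cv2.estimateAffinePartial2D(pts_m, pts_q, method=cv2.RANSAC,
                                             ransacReprojThreshold=3.0)
    R = M[:, :2]   # 2x2 rotation (scale ~1 for a metric top view)
    T = M[:, 2:]   # 2x1 translation
    return R, T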
c) Extract the millimeter wave radar data stored at the map node located in a), match it against the millimeter wave radar data collected by the vehicle to be positioned, calculate the transformation between the vehicle to be positioned and the radar data stored in the map, solve the pose of the current vehicle relative to the mapping vehicle, and fuse the pose obtained from the millimeter wave radar data with the pose obtained in b) to produce the final pose relationship.
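The patent fuses the radar-derived pose and the vision-derived pose by Kalman filtering without giving the filter equations; the sketch below shows one common form, a single Kalman measurement update combining two (x, y, yaw) estimates under assumed covariances, purely as an illustration.

import numpy as np

def kalman_fuse(pose_vis, P_vis, pose_mmw, P_mmw):
    """Fuse two pose estimates (x, y, yaw) with a Kalman measurement update.

    pose_vis, pose_mmw : arrays of shape (3,) - vision / radar pose estimates
    P_vis, P_mmw       : 3x3 covariance matrices (assumed known or tuned)
    """
    # Treat the vision pose as the prior and the radar pose as the measurement.
    # (yaw angles are assumed already wrapped to a common range)
    K = P_vis @ np.linalg.inv(P_vis + P_mmw)        # Kalman gain
    fused = pose_vis + K @ (pose_mmw - pose_vis)    # updated state
    P = (np.eye(3) - K) @ P_vis                     # updated covariance
    return fused, P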
Global level positioning
Extract the UWB data of the located node and use it to transform the pose of the vehicle to be positioned into the global coordinate system.
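As an illustration of the global-level step, the relative pose (R, T) can be lifted into the global frame using the two UWB tag positions stored with the node: the tag midpoint gives the mapping vehicle's global position and the tag-to-tag direction gives its heading. The tag geometry and the planar pose composition below are assumptions about details the patent leaves unspecified.

import numpy as np

def global_pose(tag_front, tag_rear, R_rel, T_rel):
    """Convert the relative pose (R_rel, T_rel) of the vehicle to be positioned,
    expressed in the mapping vehicle's frame, into the global (UWB) frame.

    tag_front, tag_rear : global 2D positions of the node's two UWB tags
    R_rel (2x2), T_rel (2x1) : relative pose from image / radar positioning
    """
    # Global pose of the mapping vehicle from its two tags
    p_map = (np.asarray(tag_front) + np.asarray(tag_rear)) / 2.0
    heading = np.arctan2(tag_front[1] - tag_rear[1], tag_front[0] - tag_rear[0])
    R_map = np.array([[np.cos(heading), -np.sin(heading)],
                      [np.sin(heading),  np.cos(heading)]])

    # Compose to obtain the global pose of the vehicle to be positioned
    p_vehicle = p_map + R_map @ np.asarray(T_rel).reshape(2)
    yaw_vehicle = heading + np.arctan2(R_rel[1, 0], R_rel[0, 0])
    return p_vehicle, yaw_vehicle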
One specific embodiment:
1) Discretize the passable area of the parking lot, creating a node every fixed distance. The map information stored at each node includes GPS, surround-view image features, millimeter wave radar data and UWB data, as shown in FIG. 3.
2) Drive the vehicle to be positioned onto the road section for which the high-precision map has been collected and built, and select any position as the starting point. Traverse all nodes in the map with the collected GPS information, compute the Euclidean distance to the GPS data of each map node, and select the node with the minimum distance as the initial result.
3) Since GPS positioning accuracy is 3-10 m, the 10 nodes before and the 10 nodes after the initial result node are selected as image-level positioning candidates.
3.1) Extract the surround-view image collected by the vehicle to be positioned and resize the resulting image to 63 × 63 pixels.
3.2) Apply graying and histogram equalization to each resized frame, then compute the global feature τ of the image as the description of the whole image, for example:
τ=[66,185,151,152,137,205,160,207,174,61,138,85,164,242,100,
78,225,35,106,16,19,73,219,113,157,18,95,123,152,8,165,30]
3.3) Match the descriptor τ of the image to be positioned against the descriptors τ of the 21 candidate nodes selected in step 3), and take the node with the minimum Hamming distance as the result; the matching method is given by formula (1).
3.4) Extract the local feature points and descriptors of the original image to be positioned, match them by Hamming distance against the descriptors of the surround-view image stored in the map at the node resulting from 3.3), remove wrong matches with RANSAC, determine the feature points common to the image to be positioned and the map image, and use these feature points and formula (2) to calculate the pose relationship between the vehicle to be positioned and the mapping vehicle, determining the pose of the vehicle to be positioned.
3.5) Extract the millimeter wave radar data stored at the node located in 3.3), match it against the millimeter wave radar data collected by the vehicle to be positioned, calculate the transformation between the vehicle to be positioned and the radar data stored in the map, solve the pose of the current vehicle relative to the mapping vehicle, fuse the pose obtained from the millimeter wave radar data with the pose obtained in step 3.4) by Kalman filtering, and take the result as the final positioning result.
3.6) Extract the UWB data of the node located in 3.3) and transform the pose relationship of step 3.5) into the global coordinate system (as shown in FIG. 4), obtaining the pose of the vehicle in the global coordinate system and completing the positioning process.
It will be understood that modifications and variations can be made by persons skilled in the art in light of the above teachings and all such modifications and variations are intended to be included within the scope of the invention as defined in the appended claims.

Claims (3)

1. An intelligent vehicle positioning method based on the fusion of vehicle-mounted panoramic images and millimeter wave radar, characterized by comprising the following steps:
1) collecting GPS information, vehicle-mounted surround-view images, millimeter wave radar data and UWB data with a vehicle-mounted device, and generating a high-precision map from the collected data, wherein the high-precision map is node-based and each node stores the GPS position, the surround-view image information and the millimeter wave radar data corresponding to that position;
2) driving the vehicle to be positioned onto a road section for which the high-precision map has been collected and built, selecting any position as the starting point, calculating the Euclidean distance between the GPS data acquired by the vehicle to be positioned and the GPS data stored at each node of the map, and selecting the node with the minimum distance as the coarse positioning result;
3) determining a candidate region around the selected node according to the GPS accuracy, taking all nodes in this region as image-level positioning candidate nodes, extracting global features of the surround-view image using an ORB descriptor, matching the global descriptor of the surround-view image against the global descriptors of the candidate nodes, and selecting the node with the minimum Hamming distance to the image to be positioned as the image-level positioning prediction result;
4) extracting local feature points and descriptors of the original image to be positioned, matching them by Hamming distance against the descriptors of the surround-view image stored in the map at the image-level positioning result node, removing wrong matches with RANSAC (random sample consensus), determining the feature points common to the image to be positioned and the map image, calculating the pose relationship between the vehicle to be positioned and the mapping vehicle from these feature points, and determining the pose of the vehicle to be positioned;
5) extracting the millimeter wave radar data stored at the node of the image-level positioning prediction result and matching it against the millimeter wave radar data collected by the vehicle to be positioned, calculating the transformation between the vehicle to be positioned and the radar data stored in the map, solving the pose of the current vehicle relative to the mapping vehicle, fusing the pose obtained from the millimeter wave radar data with the pose obtained in step 4) by Kalman filtering, and taking the result as the final positioning result.
2. The intelligent vehicle positioning method based on the fusion of vehicle-mounted panoramic images and millimeter wave radar according to claim 1, wherein step 3) specifically comprises: selecting the 10 nodes before and the 10 nodes after the node of the coarse positioning result as image-level positioning candidates, extracting global features of the surround-view image using ORB descriptors, matching the global descriptor of the surround-view image against the global descriptors of the 21 candidate nodes, and selecting the node with the minimum Hamming distance to the image to be positioned as the prediction result;
the global descriptor is a description of the global features of the image and is obtained by the following steps:
3.1) extracting the surround-view image collected by the vehicle to be positioned and resizing the resulting image to 63 × 63 pixels, the size being determined by the patch size used for ORB feature extraction in OpenCV;
3.2) applying graying and histogram equalization to each resized frame, then computing the global feature τ of the image as the description of the whole image;
3.3) matching the global descriptor of the surround-view image against the global descriptors of the 21 candidate nodes and selecting the node with the minimum Hamming distance to the image to be positioned.
3. The intelligent vehicle positioning method based on the fusion of vehicle-mounted panoramic images and millimeter wave radar according to claim 1, wherein in step 3) the Hamming distance is calculated by the following formula:
D(X_1, X_2) = \sum_{i=1}^{256} X_1^i \oplus X_2^i   (1)
where X_1 and X_2 represent two different global features and the superscript i denotes the i-th bit of X_j (j = 1, 2).
CN201910960074.2A 2019-10-10 2019-10-10 Intelligent vehicle positioning method based on fusion of vehicle-mounted panoramic image and millimeter wave radar Active CN110609311B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910960074.2A CN110609311B (en) 2019-10-10 2019-10-10 Intelligent vehicle positioning method based on fusion of vehicle-mounted panoramic image and millimeter wave radar

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910960074.2A CN110609311B (en) 2019-10-10 2019-10-10 Intelligent vehicle positioning method based on fusion of vehicle-mounted panoramic image and millimeter wave radar

Publications (2)

Publication Number Publication Date
CN110609311A CN110609311A (en) 2019-12-24
CN110609311B (en) 2022-12-23

Family

ID=68894337

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910960074.2A Active CN110609311B (en) 2019-10-10 2019-10-10 Intelligent vehicle positioning method based on fusion of vehicle-mounted panoramic image and millimeter wave radar

Country Status (1)

Country Link
CN (1) CN110609311B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111142145A (en) * 2019-12-31 2020-05-12 武汉中海庭数据技术有限公司 Vehicle positioning method and device
CN111191596B (en) * 2019-12-31 2023-06-02 武汉中海庭数据技术有限公司 Closed area drawing method, device and storage medium
CN111649724B (en) * 2020-06-04 2022-09-06 百度在线网络技术(北京)有限公司 Visual positioning method and device based on mobile edge calculation
CN111964665B (en) * 2020-07-23 2022-07-12 武汉理工大学 Intelligent vehicle positioning method and system based on vehicle-mounted all-around image and storage medium
CN111890373A (en) * 2020-09-29 2020-11-06 常州唯实智能物联创新中心有限公司 Sensing and positioning method of vehicle-mounted mechanical arm
CN112230211A (en) * 2020-10-15 2021-01-15 长城汽车股份有限公司 Vehicle positioning method and device, storage medium and vehicle
CN112946436A (en) * 2021-02-02 2021-06-11 成都国铁电气设备有限公司 Online intelligent detection method for arc extinction and disconnection of vehicle-mounted contact net insulator
CN113465619A (en) * 2021-06-01 2021-10-01 上海追势科技有限公司 Vehicle fusion positioning method based on detection data of vehicle-mounted looking-around system
CN114199240B (en) * 2022-02-18 2022-06-21 武汉理工大学 Two-dimensional code, laser radar and IMU fusion positioning system and method without GPS signal

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109031304A (en) * 2018-06-06 2018-12-18 上海国际汽车城(集团)有限公司 Vehicle positioning method in view-based access control model and the tunnel of millimetre-wave radar map feature
CN109870689A (en) * 2019-01-08 2019-06-11 武汉中海庭数据技术有限公司 Millimetre-wave radar and the matched lane grade localization method of high-precision map vector and system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109031304A (en) * 2018-06-06 2018-12-18 上海国际汽车城(集团)有限公司 Vehicle positioning method in view-based access control model and the tunnel of millimetre-wave radar map feature
CN109870689A (en) * 2019-01-08 2019-06-11 武汉中海庭数据技术有限公司 Millimetre-wave radar and the matched lane grade localization method of high-precision map vector and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
High-precision positioning algorithm for intelligent vehicles based on GPS and image fusion; Li Cheng et al.; Journal of Transportation Systems Engineering and Information Technology; 2017-06-30 (No. 03); pp. 112-119 *

Also Published As

Publication number Publication date
CN110609311A (en) 2019-12-24

Similar Documents

Publication Publication Date Title
CN110609311B (en) Intelligent vehicle positioning method based on fusion of vehicle-mounted panoramic image and millimeter wave radar
CN111882612B (en) Vehicle multi-scale positioning method based on three-dimensional laser detection lane line
CN111983639B (en) Multi-sensor SLAM method based on Multi-Camera/Lidar/IMU
CN108802785B (en) Vehicle self-positioning method based on high-precision vector map and monocular vision sensor
CN111830953B (en) Vehicle self-positioning method, device and system
CN111862672A (en) Parking lot vehicle self-positioning and map construction method based on top view
CN108801274B (en) Landmark map generation method integrating binocular vision and differential satellite positioning
CN109583409A (en) A kind of intelligent vehicle localization method and system towards cognitive map
CN105608693A (en) Vehicle-mounted panoramic around view calibration system and method
CN104732518A (en) PTAM improvement method based on ground characteristics of intelligent robot
CN103679674A (en) Method and system for splicing images of unmanned aircrafts in real time
CN111862673B (en) Parking lot vehicle self-positioning and map construction method based on top view
CN112862881B (en) Road map construction and fusion method based on crowd-sourced multi-vehicle camera data
CN112362072A (en) High-precision point cloud map creation system and method in complex urban area environment
CN110032965A (en) Vision positioning method based on remote sensing images
CN115717894A (en) Vehicle high-precision positioning method based on GPS and common navigation map
CN104535047A (en) Multi-agent target tracking global positioning system and method based on video stitching
CN110736472A (en) indoor high-precision map representation method based on fusion of vehicle-mounted all-around images and millimeter wave radar
Guo et al. Coarse-to-fine semantic localization with HD map for autonomous driving in structural scenes
WO2021017211A1 (en) Vehicle positioning method and device employing visual sensing, and vehicle-mounted terminal
CN113358112A (en) Map construction method and laser inertia odometer
CN115564865A (en) Construction method and system of crowdsourcing high-precision map, electronic equipment and vehicle
CN114782729A (en) Real-time target detection method based on laser radar and vision fusion
Tao et al. Automated processing of mobile mapping image sequences
CN114264297B (en) Positioning and mapping method and system for UWB and visual SLAM fusion algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant