CN116182833A - Construction method of small-scene high-precision map - Google Patents

Construction method of small-scene high-precision map

Info

Publication number
CN116182833A
Authority
CN
China
Prior art keywords
road
road section
point
scene
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211672134.9A
Other languages
Chinese (zh)
Inventor
魏超
王鹏
曹兴雨
钱歆昊
王励志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yangtze River Delta Research Institute Of Beijing University Of Technology Jiaxing
Beijing Institute of Technology BIT
Original Assignee
Yangtze River Delta Research Institute Of Beijing University Of Technology Jiaxing
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yangtze River Delta Research Institute Of Beijing University Of Technology Jiaxing, Beijing Institute of Technology BIT filed Critical Yangtze River Delta Research Institute Of Beijing University Of Technology Jiaxing
Priority to CN202211672134.9A
Publication of CN116182833A
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804Creation or updating of map data
    • G01C21/3807Creation or updating of map data characterised by the type of data
    • G01C21/3815Road data
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804Creation or updating of map data
    • G01C21/3833Creation or updating of map data characterised by the source of data
    • G01C21/3841Data obtained from two or more sources, e.g. probe vehicles
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3863Structures of map data
    • G01C21/387Organisation of map data, e.g. version management or database structures
    • G01C21/3878Hierarchical structures, e.g. layering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/344Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention relates to a method for constructing a small-scene high-precision map, which comprises the following steps: performing scene mapping with a trolley carrying a laser radar and an inertial measurement unit, and acquiring a point cloud map of the scene; labeling the point cloud map with a labeling tool, and constructing a road network model to complete the map of the scene. The method applies to any scene; because the high-precision map is drawn manually after the environmental data are collected, flexibility is high, and the work of each sub-module of an automatic driving system can be effectively assisted.

Description

Construction method of small-scene high-precision map
Technical Field
The invention relates to the technical field of automatic driving, in particular to a method for constructing a small-scene high-precision map.
Background
The high-precision map provides necessary prior information for each sub-module of an automatic driving system and is indispensable for autonomous vehicles of level L4 and above. In the environment perception module, the high-precision map provides a complete static scene, and the region of interest (ROI) required by the perception module can be determined in advance, so that the perception module detects the dynamic environment and traffic signs only within specific regions; this greatly reduces the computational load of the perception module and ensures real-time performance. In the positioning module, scene information provided by the high-precision map is fused and matched with information received by the sensors to obtain the pose of the vehicle on the high-precision map. In the path planning module, the high-precision map provides navigation information, road parameters and passable areas; it helps the planning module obtain a globally better initial path, which is then combined with the vehicle motion constraints to obtain a more reasonable driving trajectory.
At present, producing a high-precision map is costly and the workflow extremely cumbersome: a large number of workers must collect data with dedicated acquisition vehicles. Moreover, mainstream domestic map vendors provide only partial high-precision maps of expressways or urban roads, and the data are heavily encapsulated, so users cannot use them efficiently or conveniently; it is difficult to customize one's own high-precision map for an actual task scene, and flexibility is poor. For some special small-scene automatic driving tasks, no source of high-precision maps is available at all, which makes it difficult to put automatic driving into practice.
Disclosure of Invention
The invention aims to provide a method for constructing a small-scene high-precision map that organizes road elements with a brand-new architecture and establishes a road network structure model. The method can achieve lane-level navigation and centimeter-level positioning precision, applies to any scene, draws the high-precision map manually after the environmental data are acquired, offers high flexibility, and can effectively assist the work of each sub-module of an automatic driving system.
In order to achieve the above object, the present invention provides the following solutions:
a construction method of a small scene high-precision map comprises the following steps:
performing scene mapping by using a laser radar and an inertial measurement device, and acquiring a point cloud map of the scene;
and marking the point cloud map based on a marking tool, and constructing a road network model to complete the construction of the scene.
Preferably, acquiring the point cloud map of the scene includes:
and acquiring the point cloud map through a laser synchronous positioning and mapping technology, wherein the point cloud map is used as a high-precision map point cloud layer.
Preferably, the high-precision map includes:
a point cloud layer, a road topology layer, an obstacle layer and a vehicle model layer;
the point cloud layer is a three-dimensional point cloud map of the scene, the road topology layer is a road network topology structure in the scene, the obstacle layer is a mapping of static obstacles and real-time dynamic obstacles of the scene on the high-precision map, and the vehicle model layer comprises a vehicle motion model, a vehicle running track and a planning track in the scene.
Preferably, labeling the point cloud map based on a labeling tool includes:
and labeling the point cloud map by using an automatic tool to obtain a road topology layer, and labeling a road network model, traffic marks and parking areas in the point cloud map.
Preferably, labeling the road network model, the traffic sign and the parking area in the point cloud map comprises:
importing the point cloud map, sampling left road points and right road points at the road edge, merging the left road points into left road edges, merging the right road points into right road edges, merging the left road edges and the right road edges, obtaining a road section model, obtaining and constructing the road network model based on the road section model, and setting the road direction, the road pavement characteristics, the road speed limit and the road reference line of the road section;
setting a traffic sign label at a road intersection or an area with traffic signs, and associating the traffic sign label with a road section nearest to the road intersection to acquire the traffic signs;
the parking areas are marked manually in the parking allowed areas.
Preferably, constructing the road network model includes:
s1.1, sequentially distributing unique id identification numbers to each road section;
S1.2, performing longitudinal connectivity matching on any road section, with a one-to-many matching mode; take the end-point coordinate C_t of any road section, traverse the start-point coordinates C′_s,i of the remaining road sections, and calculate distance(C_t, C′_s,i); if the distance is less than 0.1 meter, the match succeeds and the id of the road section whose start point is C′_s,i is stored in the next-candidate road-section set of the road section with end point C_t; otherwise the next-candidate road-section set is set empty; wherein the matching condition is distance(C_t, C′_s,i) < 0.1;
S1.3, performing lateral connectivity matching on the road sections, with a one-to-one matching mode; take the left road edge of any road section (the first road section); if among the other road sections there is one whose right road edge is identical to the left road edge of the first road section, set that road section as the left adjacent section of the first road section, otherwise set the left adjacent section empty; the right adjacent section of the first road section is set in the same way;
s1.4, repeating the steps S1.2-S1.3 until all road sections are matched, and completing construction of the road network model.
Preferably, setting the road reference line includes:
s2.1, respectively reading a left boundary point sequence and a right boundary point sequence of the road section;
S2.2, calculating the total length of the left boundary and the total length of the right boundary of the road section based on the left and right boundary point sequences, and taking the larger of the two as the effective sampling length;
s2.3, determining a sampling step lambda according to the road direction of the road section;
s2.4, determining the number of the road section sampling points according to the effective sampling length and the sampling step length;
S2.5, resampling the left and right road edges of the road section according to the number of sampling points, and sampling road points at equal intervals on both sides to obtain the left reference-line edge points V′_l and the right reference-line edge points V′_r;
S2.6, using the resampled left reference-line edge points V′_l and right reference-line edge points V′_r, calculating the road-section reference-line sequence points V_c in one-to-one correspondence:
V_c,i = (V′_l,i + V′_r,i) / 2
V_c = [V_c,1 V_c,2 … V_c,num]^T
wherein V_c,i is the i-th coordinate of the road-section reference-line sequence points, V′_l,i is the i-th coordinate of the left reference-line edge points, V′_r,i is the i-th coordinate of the right reference-line edge points, and V_c,num is the last coordinate of the reference-line sequence points;
s2.7, repeating the steps until all passable area labels are completed.
Preferably, the number of road-section sampling points is determined as follows:
num = ⌊len / λ⌋
wherein num is the number of road-section sampling points, len is the effective sampling length, and λ is the sampling step length.
Preferably, the road point is the laser point coordinate closest to the manual sampling point in the point cloud map, and the road edge is a set of a plurality of road points in the traffic direction of the lane.
The beneficial effects of the invention are as follows:
(1) The invention improves on the lanelet2 high-precision map format, making it lighter, so that a high-precision map of a specific scene can be obtained at lower production cost; it is especially suitable for small-scene automatic driving tasks;
(2) The method can greatly lower the barrier to deploying automatic driving in practice and improves the deployment flexibility of an automatic driving system. With a modest amount of manual work it provides effective prior information for the automatic driving vehicle, reduces the computational load of the perception system, provides redundancy for it, and improves its reliability; the point cloud map supports the positioning module, so that even in areas without GPS signal, positioning accuracy can remain at centimeter level through point cloud registration; and the high-precision map provides high-quality reference-line and traffic road-network information for the path planning module, making trajectory planning more reasonable and automatic driving safer.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings that are needed in the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a high-precision map according to an embodiment of the present invention;
FIG. 2 is a flow chart of road topology layer fabrication according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a road section structure according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a road network model according to an embodiment of the present invention;
fig. 5 is a flowchart of road reference line calculation according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
The invention provides a method for constructing a small-scene high-precision map, which specifically comprises the following steps:
s1, mapping the scene by using a trolley carrying a laser radar and an Inertial Measurement Unit (IMU), obtaining a point cloud map of the scene by a laser synchronous positioning and mapping (SLAM) technology, and taking the point cloud map as a high-precision map point cloud layer.
As shown in fig. 1, the high-precision map abstracts the actual scene into four layers, namely:
point cloud layer: a three-dimensional point cloud map of the scene;
road topology layer: road network topology of the scene;
barrier layer: mapping the static obstacle and the real-time dynamic obstacle of the scene on a high-precision map;
vehicle model layer: the method comprises three parts of a vehicle motion model, a vehicle running track and a planning track.
S2, manually labeling the point cloud map using the labeling tool to obtain the road topology layer, marking the road network model, traffic signs and parking areas at the appropriate positions of the point cloud map; the labeling steps are as follows:
s2.1, importing a point cloud map;
s2.2, manually sampling left and right road points at the road edge, combining the left road edge points to be left road edges, combining the right road edge points to be right road edges, and combining the left road edges and the right road edges to be road sections;
s2.3, setting the road direction, the road surface characteristics, the road speed limit and the road reference line of the road section generated by the S2.2, setting the first point coordinate of the road reference line as the starting point coordinate of the road section, setting the last point coordinate of the road reference line as the end point coordinate of the road section, and calculating the road reference line comprises the following steps:
S2.3.1, reading the left boundary point sequence V_l of the road section, representing n sets of road-point coordinates along the left road edge, and the right boundary point sequence V_r, representing m sets of road-point coordinates along the right road edge:
V_l = [V_l,1 V_l,2 … V_l,n]^T
V_r = [V_r,1 V_r,2 … V_r,m]^T
S2.3.2, calculating the total length len_l of the left boundary and the total length len_r of the right boundary of the road section, and taking the larger of the two as the effective sampling length len:
len_l = Σ_{i=1}^{n−1} ‖V_l,i+1 − V_l,i‖
len_r = Σ_{i=1}^{m−1} ‖V_r,i+1 − V_r,i‖
len = max(len_l, len_r)
S2.3.3, determining the sampling step length λ according to the road direction of the road section: if the road section is a straight traffic lane, the sampling step is set to 5 meters, otherwise it is set to 3 meters;
λ = 5 m for a straight traffic lane, λ = 3 m otherwise
S2.3.4, determining the number num of road-section sampling points from the effective sampling length len and the sampling step λ, by dividing len by λ and rounding down:
num = ⌊len / λ⌋
S2.3.5, resampling the left and right road edges of the road section according to the number of sampling points, computing the samples by linear interpolation, and sampling num road points at equal intervals on both sides to obtain the left reference-line edge points V′_l and the right reference-line edge points V′_r:
V′_l = [V′_l,1 V′_l,2 … V′_l,num]^T
V′_r = [V′_r,1 V′_r,2 … V′_r,num]^T
S2.3.6, using the resampled left reference-line edge points V′_l and the right reference-line edge points V′_r, calculating the road-section reference-line sequence points V_c in one-to-one correspondence:
V_c,i = (V′_l,i + V′_r,i) / 2
V_c = [V_c,1 V_c,2 … V_c,num]^T
S2.4, repeating the steps S2.2 and S2.3 to finish all passable area labeling; FIG. 5 is a flow chart of road reference line calculation;
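Steps S2.3.2 through S2.3.6 amount to: measure both edge polylines, take num = ⌊len/λ⌋ samples at equal arc-length spacing along each edge by linear interpolation, and average the paired samples to obtain the centerline. A minimal sketch under the assumption of 2-D road points; the function names are illustrative, not from the patent:

```python
import math


def polyline_length(pts):
    """Total arc length of a polyline given as [(x, y), ...]."""
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))


def resample(pts, num):
    """Return `num` points at equal arc-length intervals along the polyline,
    computed by linear interpolation (step S2.3.5)."""
    num = max(num, 2)  # always keep at least both endpoints
    step = polyline_length(pts) / (num - 1)
    out, walked, target = [pts[0]], 0.0, step
    for a, b in zip(pts, pts[1:]):
        seg = math.dist(a, b)
        if seg == 0.0:  # skip duplicated points
            continue
        while walked + seg >= target - 1e-12 and len(out) < num:
            t = (target - walked) / seg
            out.append((a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])))
            target += step
        walked += seg
    while len(out) < num:  # guard against floating-point shortfall
        out.append(pts[-1])
    return out


def reference_line(left, right, lam):
    """Steps S2.3.2-S2.3.6: effective length = max of the two edge lengths,
    num = floor(len / lam), resample both edges, then average the paired
    points to get the reference-line sequence V_c."""
    eff_len = max(polyline_length(left), polyline_length(right))
    num = int(eff_len // lam)
    vl, vr = resample(left, num), resample(right, num)
    return [((xl + xr) / 2.0, (yl + yr) / 2.0)
            for (xl, yl), (xr, yr) in zip(vl, vr)]
```

For a 20 m straight section with λ = 5 m this yields num = 4 reference-line points midway between the two edges.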
s2.5, manually setting a traffic sign label at a road intersection or an area with traffic signs, and associating the traffic sign label with the nearest road section;
s2.6, manually marking a parking area in the parking allowed area;
S2.7, exporting the raw data of the road topology layer, as shown in Fig. 2.
S3, constructing a road network model according to the road section model;
S3.1, sequentially assigning a unique id number to each road section, starting from 0;
S3.2, performing longitudinal connectivity matching on a road section, with a one-to-many matching mode, i.e., one road section may have several next candidate road sections: take the end-point coordinate C_t of the road section, traverse the start-point coordinates C′_s,i of the remaining road sections, and calculate the distance between the two; if the distance is less than 0.1 meter, the match succeeds and the id of the matched road section is stored in the next-candidate set of the current road section; otherwise the next-candidate set is set empty;
distance(C_t, C′_s,i) < 0.1
Lateral connectivity matching of road sections uses a one-to-one matching mode, i.e., a road section has at most one left adjacent section and at most one right adjacent section: take the left road edge of the road section; if among the remaining road sections there is one whose right road edge is identical to it, set that section as the left adjacent section of the road section, otherwise set the left adjacent section empty; the right adjacent section is set in the same way;
s3.3, repeating the step S3.2 until all road sections are matched. Fig. 3 is a schematic diagram of a road section structure; fig. 4 is a schematic diagram of a road network model.
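The longitudinal (one-to-many, 0.1 m end-to-start threshold) and lateral (one-to-one, shared-edge) matching of steps S3.1 to S3.3 can be sketched as follows. The dictionary field names (`start`, `end`, `left_edge`, `right_edge`, `next`, …) are assumptions chosen for illustration:

```python
import math


def build_road_network(sections):
    """Connectivity matching, steps S3.1-S3.3.

    `sections` maps id -> dict with 'start' and 'end' coordinates (x, y) and
    'left_edge' / 'right_edge' point lists. Adds to each section:
      'next'           - ids of longitudinal successors (one-to-many)
      'left_neighbor'  - id of the lateral left neighbour, or None (one-to-one)
      'right_neighbor' - id of the lateral right neighbour, or None
    """
    for sid, sec in sections.items():
        # Longitudinal matching: successors whose start point lies within
        # 0.1 m of this section's end point.
        sec['next'] = [oid for oid, other in sections.items()
                       if oid != sid and math.dist(sec['end'], other['start']) < 0.1]
        # Lateral matching: at most one neighbour sharing a road edge.
        sec['left_neighbor'] = next(
            (oid for oid, other in sections.items()
             if oid != sid and other['right_edge'] == sec['left_edge']), None)
        sec['right_neighbor'] = next(
            (oid for oid, other in sections.items()
             if oid != sid and other['left_edge'] == sec['right_edge']), None)
    return sections
```

Because lateral neighbours are identified by an exactly shared edge, adjacent lanes must be annotated with a common boundary polyline, which the labeling procedure in S2.2 guarantees.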
The invention improves on the lanelet2 high-precision map format, making it lighter, so that a high-precision map of a specific scene can be obtained at lower production cost; it is especially suitable for small-scene automatic driving tasks. The method can greatly lower the barrier to deploying automatic driving in practice and improves the deployment flexibility of an automatic driving system. With a modest amount of manual work it provides effective prior information for the automatic driving vehicle, reduces the computational load of the perception system, provides redundancy for it, and improves its reliability; the point cloud map supports the positioning module, so that even in areas without GPS signal, positioning accuracy can remain at centimeter level through point cloud registration; and the high-precision map provides high-quality reference-line and traffic road-network information for the path planning module, making trajectory planning more reasonable and automatic driving safer.
The above embodiments are merely illustrative of the preferred embodiments of the present invention, and the scope of the present invention is not limited thereto, but various modifications and improvements made by those skilled in the art to which the present invention pertains are made without departing from the spirit of the present invention, and all modifications and improvements fall within the scope of the present invention as defined in the appended claims.

Claims (9)

1. The method for constructing the small-scene high-precision map is characterized by comprising the following steps of:
performing scene mapping by using a laser radar and an inertial measurement device, and acquiring a point cloud map of the scene;
and marking the point cloud map based on a marking tool, and constructing a road network model to complete the construction of the scene.
2. The method of constructing a high-precision map of a small scene as recited in claim 1, wherein obtaining a point cloud map of the scene comprises:
and acquiring the point cloud map through a laser synchronous positioning and mapping technology, wherein the point cloud map is used as a high-precision map point cloud layer.
3. The method for constructing a small-scene high-precision map according to claim 2, characterized in that the high-precision map comprises:
a point cloud layer, a road topology layer, an obstacle layer and a vehicle model layer;
the point cloud layer is a three-dimensional point cloud map of the scene, the road topology layer is a road network topology structure in the scene, the obstacle layer is a mapping of static obstacles and real-time dynamic obstacles of the scene on the high-precision map, and the vehicle model layer comprises a vehicle motion model, a vehicle running track and a planning track in the scene.
4. The method for constructing a high-precision map of a small scene according to claim 1, wherein labeling the point cloud map based on a labeling tool comprises:
and labeling the point cloud map by using an automatic tool to obtain a road topology layer, and labeling a road network model, traffic marks and parking areas in the point cloud map.
5. The method for constructing a high-precision map of a small scene according to claim 4, wherein labeling a road network model, traffic marks and parking areas in the point cloud map comprises:
importing the point cloud map, sampling left road points and right road points at the road edge, merging the left road points into left road edges, merging the right road points into right road edges, merging the left road edges and the right road edges, obtaining a road section model, obtaining and constructing the road network model based on the road section model, and setting the road direction, the road pavement characteristics, the road speed limit and the road reference line of the road section;
setting a traffic sign label at a road intersection or an area with traffic signs, and associating the traffic sign label with a road section nearest to the road intersection to acquire the traffic signs;
the parking areas are marked manually in the parking allowed areas.
6. The method for constructing a high-precision map of a small scene as recited in claim 5, wherein constructing the road network model comprises:
s1.1, sequentially distributing unique id identification numbers to each road section;
S1.2, performing longitudinal connectivity matching on any road section, with a one-to-many matching mode; take the end-point coordinate C_t of any road section, traverse the start-point coordinates C′_s,i of the remaining road sections, and calculate distance(C_t, C′_s,i); if the distance is less than 0.1 meter, the match succeeds and the id of the road section whose start point is C′_s,i is stored in the next-candidate road-section set of the road section with end point C_t; otherwise the next-candidate road-section set is set empty; wherein the matching condition is distance(C_t, C′_s,i) < 0.1;
S1.3, performing lateral connectivity matching on the road sections, with a one-to-one matching mode; take the left road edge of any road section (the first road section); if among the other road sections there is one whose right road edge is identical to the left road edge of the first road section, set that road section as the left adjacent section of the first road section, otherwise set the left adjacent section empty; the right adjacent section of the first road section is set in the same way;
s1.4, repeating the steps S1.2-S1.3 until all road sections are matched, and completing construction of the road network model.
7. The method of constructing a small scene high precision map according to claim 5, wherein setting the road reference line comprises:
s2.1, respectively reading a left boundary point sequence and a right boundary point sequence of the road section;
S2.2, calculating the total length of the left boundary and the total length of the right boundary of the road section based on the left and right boundary point sequences, and taking the larger of the two as the effective sampling length;
s2.3, determining a sampling step lambda according to the road direction of the road section;
s2.4, determining the number of the road section sampling points according to the effective sampling length and the sampling step length;
S2.5, resampling the left and right road edges of the road section according to the number of sampling points, and sampling road points at equal intervals on both sides to obtain the left reference-line edge points V′_l and the right reference-line edge points V′_r;
S2.6, using the resampled left reference-line edge points V′_l and right reference-line edge points V′_r, calculating the road-section reference-line sequence points V_c in one-to-one correspondence:
V_c,i = (V′_l,i + V′_r,i) / 2
V_c = [V_c,1 V_c,2 … V_c,num]^T
wherein V_c,i is the i-th coordinate of the road-section reference-line sequence points, V′_l,i is the i-th coordinate of the left reference-line edge points, V′_r,i is the i-th coordinate of the right reference-line edge points, and V_c,num is the last coordinate of the reference-line sequence points;
s2.7, repeating the steps until all passable area labels are completed.
8. The method for constructing a high-precision map of a small scene as recited in claim 7, wherein the method for determining the number of road segment sampling points is as follows:
num = ⌊len / λ⌋
wherein num is the number of road section sampling points, len is the effective sampling length, and lambda is the sampling step length.
9. The method for constructing a high-precision map of a small scene according to claim 8, wherein the road points are laser point coordinates closest to a manual sampling point in the point cloud map, and the road edges are a set of a plurality of road points in a lane passing direction.
CN202211672134.9A 2022-12-26 2022-12-26 Construction method of small-scene high-precision map Pending CN116182833A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211672134.9A CN116182833A (en) 2022-12-26 2022-12-26 Construction method of small-scene high-precision map


Publications (1)

Publication Number Publication Date
CN116182833A (en) 2023-05-30

Family

ID=86433513



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination