CN116452852A - Automatic generation method of high-precision vector map

Info

Publication number: CN116452852A
Application number: CN202310267553.2A
Authority: CN (China)
Prior art keywords: point cloud, image, ground, traffic sign, extracting
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 罗玉涛, 单勇
Assignee (current and original): South China University of Technology (SCUT)
Priority date / filing date: 2023-03-20
Publication date: 2023-07-18

Classifications

    • G06V 10/764 — Image or video recognition using pattern recognition or machine learning: classification, e.g. of video objects
    • G06N 3/0464 — Convolutional networks [CNN, ConvNet]
    • G06N 3/08 — Neural network learning methods
    • G06T 17/05 — Three-dimensional [3D] modelling: geographic models
    • G06T 5/30 — Image enhancement or restoration: erosion or dilatation, e.g. thinning
    • G06T 7/136 — Image analysis: segmentation; edge detection involving thresholding
    • G06T 7/30 — Image analysis: determination of transform parameters for the alignment of images (image registration)
    • G06T 7/80 — Image analysis: camera calibration
    • G06V 10/25 — Image preprocessing: determination of region of interest [ROI] or volume of interest [VOI]
    • G06T 2207/10028 — Range image; depth image; 3D point clouds
    • G06T 2207/20081 — Training; learning
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • G06T 2207/30244 — Camera pose
    • G06T 2207/30256 — Lane; road marking
    • Y02T 10/40 — Engine management systems


Abstract

The invention discloses an automatic generation method for a high-precision vector map. Lane markings are extracted from the intensity and geometric structure of the laser point cloud: ground filtering separates the ground and non-ground point clouds; block rasterization converts the three-dimensional point cloud into a two-dimensional projection image; road marking pixels are extracted with image filtering, edge extraction operators, dynamic adaptive threshold segmentation, and morphological analysis, and the marking point clouds are recovered through the point-cloud-to-image mapping; the markings are then classified by point cloud contour matching against a knowledge template base. Traffic signs are detected and classified in road images with a deep learning convolutional neural network, and a point cloud region of interest is established by image-point-cloud fusion; clustering screens the candidate point clouds, local projection is performed based on principal component analysis, and the traffic signs are extracted by point cloud matching against the knowledge template base. Finally, calibration against GPS data, parameter fitting, vectorization, and data association complete the construction of the high-precision vector map.

Description

Automatic generation method of high-precision vector map
Technical Field
The invention relates to data processing, in particular to an automatic generation method of a high-precision vector map.
Background
With the development of unmanned driving, its applications have become more and more widespread. A high-precision vector map, defined relative to an ordinary map, provides map information of higher precision and more dimensions, and has become an indispensable part of unmanned driving. Road markings and traffic signs are the most basic and core elements of the traffic road network, so extracting vectorized road marking lines and signposts is a key technology in high-precision map drawing, intelligent vehicle driving, three-dimensional modeling of the road environment, and related fields. Current research focuses mainly on detecting and tracking road traffic signs in video images, but marking lines and signs detected from images record two-dimensional plane information rather than the three-dimensional information of the traffic elements, and a map constructed from video images is sparse, so its precision cannot meet the requirements of automatic driving for accurate navigation, positioning, and traffic management. A vehicle-mounted mobile laser measurement system can rapidly measure the three-dimensional coordinates of points on a target's surface and also record the target's laser reflection intensity, providing a reliable data source for extracting high-precision three-dimensional road marking information. However, vehicle-mounted laser point cloud data are massive, unevenly distributed in space and density, and cover complex terrain, which makes road information extraction and high-precision vector map drawing from laser point clouds challenging. In the prior art, high-precision maps are constructed from point cloud data, but three-dimensional point cloud processing has a low degree of automation and depends on manual editing. A processing method that fuses the laser point cloud, the image, and the GPS positioning information can therefore be adopted.
Disclosure of Invention
In order to overcome the defects and shortcomings of the prior art, the invention aims to provide an automatic generation method of a high-precision vector map that improves the degree of automation of vector map construction.
The aim of the invention is achieved by the following technical scheme:
an automatic generation method of a high-precision vector map comprises the following steps:
processing the original road laser point cloud data and separating it into a ground point cloud and a non-ground point cloud, then fitting the ground point cloud to obtain the ground;
performing block rasterization on the ground point cloud, establishing and storing a mapping between pixels and point cloud indices, computing pixel gray values from the point cloud intensity within each raster cell, from the elevation image of the ground point cloud, and from the density image of the non-ground point cloud, and converting them into a two-dimensional projection image;
extracting lane marking pixels from the two-dimensional projection image and back-projecting them into the point cloud space for segmentation, yielding the lane marking point cloud;
classifying the lane marking point clouds;
acquiring road image data and extracting the traffic signs in it with a traffic sign detection and classification neural network;
fusing the non-ground road point cloud, the camera and radar calibration parameter files, and the traffic signs to establish a point cloud region of interest;
processing the point cloud region of interest to extract candidate traffic sign point clouds, establishing a local projection system, and projecting the candidate traffic sign point clouds onto a two-dimensional projection plane to generate a gray image;
extracting the three-dimensional center position of each traffic sign from the generated gray image with an image matching method based on a template knowledge base;
performing data calibration from the GPS high-precision positioning data and the three-dimensional coordinates of the point cloud, applying polynomial parameter fitting to the calibrated lane marking point cloud and vectorizing the fit; then performing data association with the extracted center positions and class information of the traffic signs, and exporting the high-precision vector map for automatic driving.
Further, an elevation-difference dual-threshold ground filtering method separates the original road laser point cloud data into a ground point cloud and a non-ground road point cloud; a random sample consensus (RANSAC) algorithm then fits and refines the ground point cloud to obtain the ground: all points whose fitting error to a candidate plane is smaller than a set threshold are counted as inliers, the area of the convex hull enclosing the inliers is computed, and the fitted plane with the largest inlier area is selected as the ground.
Further, the block rasterization of the ground point cloud, the establishment and storage of the pixel-to-point-cloud-index mapping, and the computation of pixel gray values from the point cloud intensity in each raster cell, the elevation image of the ground point cloud, and the density image of the non-ground point cloud, converted into a two-dimensional projection image, proceed as follows:
first, the ground point cloud is divided into blocks and rasterized, and the mapping between pixels and point cloud indices is stored in a two-dimensional matrix;
then, gray enhancement is performed with the image gray value formula $G_{i,j} = \min(av\_I_{i,j}/I_{\max},\,1) \times 255$ (defined in detail in step S2.2 below);
and the pixel gray values of the elevation image of the ground point cloud and the density image of the non-ground points are computed in the same way.
Further, extracting the lane marking pixels from the two-dimensional projection image and back-projecting them into the point cloud space for segmentation to obtain the lane marking point cloud specifically comprises:
applying median filtering to the two-dimensional projection image to remove salt-and-pepper noise; performing edge detection with the Canny operator to generate a ground intensity gradient image; generating density and elevation binary images by maximum-entropy threshold segmentation and combining them with the ground intensity gradient map to extract road-surface pixels; then finely extracting lane marking pixels by dynamic adaptive threshold segmentation and morphological erosion and dilation;
performing optimization analysis based on connected components, then labeling them;
and back-projecting the pixels into the point cloud space according to the pixel-to-point-cloud-index mapping to extract the lane marking point cloud.
Further, the method further comprises extracting the contour information of each lane marking point cloud with the alpha-shape rolling-circle algorithm and determining the minimum bounding rectangle of each marking point cloud contour.
Further, classifying the lane marking point clouds specifically comprises:
first, using the geometric features of the minimum bounding rectangle of the outer contour as coarse classification criteria to separate regular solid and dashed lane lines from other irregular road markings; then classifying the irregular road marking point clouds with a point cloud contour matching method based on the template knowledge base, namely: the candidate point clouds are matched in turn against the template library point cloud contours by incremental-rotation ICP and unified into one coordinate frame; a KNN algorithm then counts the nearest-neighbor points, and the ratio of this count to the total number of template points is used as the classification criterion.
Further, the detection and classification of traffic signs is performed with the deep learning convolutional neural network, and each detected traffic sign carries a confidence and position comprising bounding box information and class information.
Further, processing the point cloud region of interest to extract candidate traffic sign point clouds, establishing a local projection system, and projecting the candidate traffic sign point clouds onto a two-dimensional projection plane to generate a gray image specifically comprises:
computing the center pixel coordinates (u, v) of the sign from the bounding box and class information, and adding a buffer zone around the center pixel to establish the point cloud region of interest; then performing cluster analysis on the region of interest and finely extracting candidate traffic sign point clouds with intensity filtering; for each candidate cluster, taking the normal vector obtained by principal component analysis as the Z axis, the vertical direction as the X axis, and the direction perpendicular to the ZOX plane as the third axis, establishing a local projection system c-xyz with the centroid c of the candidate point cloud block as origin, and projecting the candidate sign point cloud in the region of interest onto the two-dimensional plane to generate a gray image.
Further, extracting the three-dimensional center position of a traffic sign from the generated gray image with the image matching method based on the template knowledge base specifically comprises:
first enhancing the image with a dilation operation and obtaining a gradient image with the Sobel edge detection algorithm; then convolving the constructed planar template, used as a convolution kernel, with the gradient image; when the template exactly covers the sign, the response is maximal, and the corresponding center is the precise center position of the traffic sign.
Further, the traffic sign detection and classification neural network is a deep learning convolutional neural network YOLOv5.
Compared with the prior art, the invention has the following advantages and beneficial effects:
the intensity information of the point cloud is converted into gray information by the gray enhancement-based method, so that the three-dimensional point cloud is better converted into a two-dimensional projection image, the three-dimensional point cloud is processed relatively and directly, the dimension reduction is realized, the processing speed is improved, the lane marking of the gray image is obvious, and the subsequent image processing effect is facilitated;
the Canny edge detection operator and dynamic adaptive threshold segmentation identify as many potential markings as possible while remaining robust to noise, giving a good processing effect with low missed-extraction and false-extraction rates;
the lane marking classification based on the template knowledge base performs well, and the extracted lane markings are essentially all classified correctly;
after the extracted lane markings are calibrated with high-precision GPS data, they are composed into vectors oriented to the automatic driving navigation map, with good results.
Drawings
FIG. 1 is a workflow diagram of the present invention;
FIG. 2 is a schematic illustration of the elevation-difference dual-threshold ground filtering of the present invention;
FIG. 3 is a graph of the point cloud matching effect of the template knowledge base of the present invention;
FIG. 4 is a flow chart of an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to examples, but embodiments of the present invention are not limited thereto.
As shown in figs. 1-4, the automatic generation method of a high-precision vector map is based mainly on high-precision positioning and traffic element recognition and comprises the following steps:
s1, processing original road laser point cloud data, separating to obtain a ground point cloud and a non-ground road point cloud, and then fitting the ground point cloud to obtain the ground;
the method specifically comprises the following steps:
S1.1, first, the minimum bounding box size and the maximum and minimum coordinate values of the original road laser point cloud data are computed, and the cloud is rasterized based on this bounding box; the grid resolution (cell size) is set to 1.5 m.
S1.2, elevation-difference dual-threshold ground filtering is performed based on the point cloud elevation information within each grid cell. The two thresholds $\Delta h_1$ and $\Delta h_2$ bound, respectively, the difference between a point's elevation and the minimum elevation of its grid cell, and the difference between the cell's minimum elevation and the minimum elevation of its 3×3 neighborhood; both are set empirically to 0.3–0.5 m. Each point $p_k$ in a cell is classified into the coarse ground point cloud $G$ or the non-ground road point cloud $NG$ as follows:

$$p_k \in \begin{cases} G, & \text{if } z_k - z_{\min} \le \Delta h_1 \ \text{and}\ z_{\min} - z_{\min}^{3\times 3} \le \Delta h_2 \\ NG, & \text{otherwise} \end{cases}$$

where $z_k$ is the elevation of $p_k$, $z_{\min}$ is the minimum elevation in the cell, and $z_{\min}^{3\times 3}$ is the minimum elevation over the cell's 3×3 neighborhood.
S1.3, a random sample consensus (RANSAC) algorithm is used to fit and refine the ground point cloud: all points whose distance to the fitted plane is smaller than a set threshold are counted as inliers, and the area of the convex hull enclosing the inliers is computed; these steps are repeated, and the fitted plane with the largest inlier area is selected as the ground.
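As a concrete illustration of steps S1.1–S1.3, the following sketch implements the elevation-difference dual-threshold grid filter and the RANSAC refinement in Python with NumPy, Open3D, and SciPy; the grid size, thresholds, iteration counts, and function names are illustrative assumptions rather than values fixed by the patent.

```python
import numpy as np
import open3d as o3d
from scipy.spatial import ConvexHull

def ground_filter(points, cell=1.5, dh1=0.4, dh2=0.4):
    """Elevation-difference dual-threshold ground filter (S1.2).

    points: (N, 3) array; dh1/dh2 in metres (empirically 0.3-0.5)."""
    ij = np.floor((points[:, :2] - points[:, :2].min(0)) / cell).astype(int)
    rows, cols = ij.max(0) + 1
    zmin = np.full((rows, cols), np.inf)
    np.minimum.at(zmin, (ij[:, 0], ij[:, 1]), points[:, 2])
    # Minimum elevation over each cell's 3x3 neighbourhood.
    pad = np.pad(zmin, 1, constant_values=np.inf)
    nmin = np.min([pad[r:r + rows, c:c + cols]
                   for r in range(3) for c in range(3)], axis=0)
    cz, nz = zmin[ij[:, 0], ij[:, 1]], nmin[ij[:, 0], ij[:, 1]]
    ground = (points[:, 2] - cz <= dh1) & (cz - nz <= dh2)
    return points[ground], points[~ground]

def refine_ground(ground_pts, dist=0.1, rounds=5):
    """RANSAC plane refinement (S1.3): keep the plane whose inlier
    convex-hull area is largest."""
    remaining = ground_pts.copy()
    best_area, best_pts = -1.0, None
    for _ in range(rounds):
        if len(remaining) < 3:
            break
        pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(remaining))
        model, idx = pcd.segment_plane(dist, 3, 1000)
        inliers = remaining[idx]
        area = ConvexHull(inliers[:, :2]).volume  # for a 2D hull, volume == area
        if area > best_area:
            best_area, best_pts = area, inliers
        remaining = np.delete(remaining, idx, axis=0)  # look for other planes
    return best_pts
```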
S2, block rasterization is performed on the ground point cloud, and the mapping between pixels and point cloud indices is established and stored; pixel gray values are computed from the point cloud intensity within each raster cell and from the elevation image of the ground point cloud and the density image of the non-ground point cloud, and converted into two-dimensional projection images;
the method comprises the following steps:
S2.1, first, the ground point cloud is divided into 200 m × 200 m blocks and rasterized at 0.05 m × 0.05 m; the image resolution is given by the raster rows and columns, i.e. each raster cell becomes one pixel, and the mapping between pixels and point cloud indices is stored in a two-dimensional matrix;
S2.2, the gray value of each pixel of the intensity image is computed from the intensities of the points in each raster cell, with gray-scale enhancement: specifically, the pixel gray value is determined by the ratio of the cell's mean intensity to the maximum intensity of the whole block, as follows:

$$G_{i,j} = \min\!\left(\frac{av\_I_{i,j}}{I_{\max}},\, 1\right) \times 255$$

where $G_{i,j}$ is the gray value corresponding to cell $(i,j)$, $av\_I_{i,j}$ is the mean intensity of all points in cell $(i,j)$, and $I_{\max}$ is the truncated maximum intensity of the whole point cloud, computed as $I_{\max} = av\_I + \delta_{scal} \times std$, in which $av\_I$ and $std$ are the mean and standard deviation of the intensity over all cells of the block point cloud and $\delta_{scal}$ is a set proportional threshold.
S2.3, pixel gray values are computed in the same way as S2.2 for the elevation image of the ground point cloud and the density image of the non-ground point cloud, using normalization and cutoff-threshold normalization respectively; the results are converted into two-dimensional projection images, i.e. bird's-eye-view ground images.
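The sketch below shows one possible realization of S2.1–S2.2 with NumPy: rasterizing a ground block, retaining the two-dimensional pixel-to-point-index mapping matrix, and computing the gray-enhanced intensity image. The helper names and the 8-bit clamp are assumptions for illustration.

```python
import numpy as np

def rasterize_block(points, intensity, res=0.05, delta_scal=3.0):
    """Rasterize one ground block (S2.1) and build the intensity
    gray image (S2.2). Returns the image and a pixel->point-index map."""
    xy0 = points[:, :2].min(0)
    ij = np.floor((points[:, :2] - xy0) / res).astype(int)
    rows, cols = ij.max(0) + 1
    # Two-dimensional mapping matrix: list of point indices per pixel.
    index_map = [[[] for _ in range(cols)] for _ in range(rows)]
    sum_i = np.zeros((rows, cols))
    cnt = np.zeros((rows, cols))
    for k, (r, c) in enumerate(ij):
        index_map[r][c].append(k)
        sum_i[r, c] += intensity[k]
        cnt[r, c] += 1
    av_i = np.divide(sum_i, cnt, out=np.zeros_like(sum_i), where=cnt > 0)
    # Truncated maximum intensity: I_max = av_I + delta_scal * std.
    occ = cnt > 0
    i_max = av_i[occ].mean() + delta_scal * av_i[occ].std()
    gray = np.clip(av_i / i_max, 0, 1) * 255
    return gray.astype(np.uint8), index_map
```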
S3, lane marking pixels are extracted from the two-dimensional projection image and back-projected into the point cloud space for segmentation to obtain the lane marking point cloud;
the method comprises the following steps:
Median filtering is applied to the two-dimensional projection image obtained in the previous step to remove salt-and-pepper noise; edge detection with the Canny operator generates an intensity gradient image; density and elevation binary images are generated by maximum-entropy threshold segmentation and combined with the ground intensity gradient map to extract road-surface pixels; lane marking pixels are then finely extracted by dynamic adaptive threshold segmentation and morphological erosion and dilation.
The dynamic adaptive threshold segmentation first divides the intensity gradient image into several smaller sub-images to mitigate uneven intensity distribution, then applies adaptive threshold segmentation to each sub-image to obtain a binary image of the road markings.
S4, classifying the lane marking point clouds by adopting a point cloud contour matching method based on a template knowledge base.
The template knowledge base matching is specifically template library matching with incremental-rotation ICP.
The method comprises the following steps:
S4.1, optimization analysis is performed on the connected components: a k-d tree is built over the lane markings, and markings whose closest mutual distance is below a threshold are merged; the components are then labeled with a seed-filling algorithm: the pixel map is traversed and, starting from a foreground pixel, that pixel is pushed onto a stack; a 3×3 neighborhood traversal search is performed; the top pixel is popped from the stack (its foreground neighbors being pushed) and assigned the same label, and this repeats until the stack is empty.
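The seed-filling labeling described in S4.1 can be sketched directly as a stack-based flood fill over the binary marking image; this is a straightforward transcription for illustration, not the patent's exact implementation.

```python
import numpy as np

def seed_fill_label(binary):
    """Stack-based seed filling (S4.1): label 8-connected components."""
    labels = np.zeros(binary.shape, dtype=np.int32)
    current = 0
    for seed in zip(*np.nonzero(binary)):
        if labels[seed]:
            continue
        current += 1
        stack = [seed]
        labels[seed] = current
        while stack:                       # pop until the stack is empty
            r, c = stack.pop()
            for dr in (-1, 0, 1):          # 3x3 neighbourhood traversal
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if (0 <= rr < binary.shape[0] and 0 <= cc < binary.shape[1]
                            and binary[rr, cc] and not labels[rr, cc]):
                        labels[rr, cc] = current
                        stack.append((rr, cc))
    return labels, current
```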
S4.2, for the lane marking point clouds extracted via the mapping of step S2.1, the contour information of each marking point cloud is extracted with the alpha-shape rolling-circle (concave hull) algorithm, with the empirical value α = 0.7, and the minimum bounding rectangle of each contour is computed.
S4.3, geometric features of the bounding rectangle are first computed from the minimum bounding rectangle: the lengths of the long and short sides, the aspect ratio, the angle between the diagonals, and the contour area are used as coarse classification criteria to separate regular solid and dashed lane lines from other irregular road markings.
The diagonal angle follows from the quadrilateral identity

$$b^2 + d^2 - a^2 - c^2 = 2\,m\,n\,\cos\alpha$$

where $a, b, c, d$ are the four side lengths, $m, n$ are the diagonal lengths, and $\alpha$ is the angle between the diagonals, which characterizes how flat (elongated) the shape is.
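For reference, a direct transcription of this identity into a hypothetical helper:

```python
import math

def diagonal_angle(a, b, c, d, m, n):
    """Angle between the diagonals of a quadrilateral with sides a, b, c, d
    and diagonals m, n, from b^2 + d^2 - a^2 - c^2 = 2 m n cos(alpha)."""
    cos_alpha = (b**2 + d**2 - a**2 - c**2) / (2 * m * n)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_alpha))))
```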
S4.4, the irregular road marking point clouds are classified with a point cloud matching method based on the template knowledge base: the contour of each irregular candidate marking point cloud is registered in turn against the contours of the road markings in the template library with incremental-rotation ICP to solve a transformation matrix (4 DoF), and the template point cloud is then transformed with the solved optimal pose.
S4.5, a k-d tree is built over the road marking point cloud with KdTreeFLANN; a KNN search computes, for the transformed template, the ratio of the nearest-neighbor point count to the total template point count as a matching overlap rate; if the overlap rate exceeds a set threshold, the candidate is assigned to that template's class.
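A hedged Open3D sketch of the incremental-rotation ICP matching and KNN overlap scoring of S4.4–S4.5; the rotation step, search radius, and overlap threshold are illustrative assumptions.

```python
import numpy as np
import open3d as o3d

def match_template(candidate, template, angles=range(0, 360, 30),
                   radius=0.05, overlap_thresh=0.8):
    """Incremental-rotation ICP matching: try several initial yaw angles,
    register the template to the candidate, then score the overlap rate
    via nearest-neighbour search on a k-d tree."""
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(template))
    dst = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(candidate))
    tree = o3d.geometry.KDTreeFlann(dst)
    best = 0.0
    for deg in angles:                                   # rotation increments
        th = np.deg2rad(deg)
        init = np.eye(4)
        init[:2, :2] = [[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]]
        reg = o3d.pipelines.registration.registration_icp(
            src, dst, radius, init,
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        moved = (np.asarray(src.points) @ reg.transformation[:3, :3].T
                 + reg.transformation[:3, 3])
        hits = sum(tree.search_radius_vector_3d(p, radius)[0] > 0 for p in moved)
        best = max(best, hits / len(moved))              # matching overlap rate
    return best >= overlap_thresh, best
```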
S5, road image data are acquired, and the traffic signs in them are extracted with the traffic sign detection and classification neural network, yielding for each sign a confidence and position comprising bounding box information and class information.
The traffic sign detection and classification neural network is specifically a deep learning convolutional neural network YOLOv5.
The bounding box information is the maximum enclosing rectangle (bounding box) of each traffic sign identified in the image by the deep learning convolutional neural network; the class information is the detected category of the vertical traffic sign, e.g. signal light, speed limit sign, prohibition sign, and other sign types.
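A minimal sketch of step S5 using the public YOLOv5 hub interface; the patent only specifies that YOLOv5 is used, so the `ultralytics/yolov5` hub path, the `yolov5s` weights, and the absence of traffic sign fine-tuning here are assumptions — in practice a model fine-tuned on a traffic sign dataset would be loaded instead.

```python
import torch

# Load a YOLOv5 model from the public hub (assumption: generic weights;
# substitute weights fine-tuned on traffic signs for real use).
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

def detect_signs(image_bgr):
    """Return an (N, 6) array of [x1, y1, x2, y2, confidence, class_id]."""
    results = model(image_bgr[:, :, ::-1])         # YOLOv5 expects RGB
    return results.xyxy[0].cpu().numpy()           # bounding boxes + conf + cls
```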
S6, the non-ground road point cloud, the camera and radar calibration parameter files, and the traffic signs are fused to establish a point cloud region of interest;
the method specifically comprises the following steps:
the conversion of the pixel point coordinates and the laser point cloud coordinates of the image can be realized by using calibration parameter files of the camera and the radar, so that the fusion of the point cloud data and the image data is realized, and the fusion is mainly to add RGB information, confidence position and category information of traffic signs in the image into the point cloud data.
The projection from the world coordinate system to the camera coordinate system is:

$$P_c = R \, P_w + T$$

and the projection from the camera coordinate system to the image coordinate system is:

$$u = I_{width} - \frac{\alpha}{2\pi}\, I_{width}, \qquad v = \frac{\beta}{2\pi}\, I_{height}$$

where $R$ and $T$ are the calibrated rotation matrix and translation vector of the lidar; $\alpha$ and $\beta$ are, respectively, the angle between the line joining a point to the origin of the camera coordinate system and the Z axis, and the angle between the projection of that line onto the XOY plane and the X axis; and $I_{width}$ and $I_{height}$ are the width and height of the image.
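Transcribing the two projection formulas above literally (with the angle conventions as stated) gives the following sketch; the modulo wrapping and integer rounding are assumptions.

```python
import numpy as np

def project_to_panorama(points_world, R, T, i_width, i_height):
    """Project lidar points into the image (S6).

    R, T: extrinsic rotation matrix and translation vector from the
    calibration files. Returns integer pixel coordinates (u, v)."""
    pc = points_world @ R.T + T                       # world -> camera frame
    x, y, z = pc[:, 0], pc[:, 1], pc[:, 2]
    alpha = np.arctan2(np.hypot(x, y), z) % (2 * np.pi)  # angle to the Z axis
    beta = np.arctan2(y, x) % (2 * np.pi)                # azimuth in XOY vs X
    u = (i_width - alpha / (2 * np.pi) * i_width).astype(int)
    v = (beta / (2 * np.pi) * i_height).astype(int)
    return u, v
```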
S7, the point cloud region of interest is processed to extract candidate traffic sign point clouds; a local projection system is established, and the candidate traffic sign point clouds are projected onto a two-dimensional projection plane to generate a gray image.
The center pixel coordinates (u, v) of a sign are computed from its bounding box and class information, and a buffer zone is added around the center pixel to establish the point cloud region of interest. Cluster analysis is then performed on the region of interest, and candidate traffic sign point clouds are finely extracted with intensity filtering. For each candidate cluster, the normal vector obtained by principal component analysis is taken as the Z axis, the vertical direction as the X axis, and the direction perpendicular to the ZOX plane as the third axis; a local projection system c-xyz is established with the centroid c of the candidate point cloud block as origin, and the candidate sign point cloud in the region of interest is projected onto the two-dimensional projection plane to generate a gray image.
Specifically, the two-dimensional projection plane is defined by taking the normal vector of the traffic sign obtained by principal component analysis as the projection direction, together with the vertical direction and the direction perpendicular to both axes; the local projection coordinate system with the traffic sign centroid as origin yields the two-dimensional image of the traffic sign.
The projection of a point from the world coordinate system into the local coordinate system c-xyz is:

$$p_{local} = M^{T}\,(p_w - c)$$

where the columns of $M$ are the local X, Y, and Z axis directions and $c$ is the centroid; the projection of a point from the local coordinate system onto the XOY plane is:

$$u = \left[x_{\max} - (x - x_{\min})\right] \cdot L_{pixel}, \qquad v = \left[y_{\max} - (y - y_{\min})\right] \cdot L_{pixel}$$

where $x$ and $y$ are the local coordinates, $x_{\min}, x_{\max}, y_{\min}, y_{\max}$ are their extremes over the candidate cloud, and $L_{pixel}$ is the pixel scale factor.
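A NumPy sketch of the PCA-based local projection of S7; the pixel scale is an assumption, and the pixel-index convention is adapted slightly from the formulas above so that array indices stay non-negative.

```python
import numpy as np

def sign_projection_image(points, l_pixel=200):
    """Build the local frame c-xyz via PCA and project the candidate sign
    points onto its XOY plane as a gray image. l_pixel (pixels per metre)
    is an illustrative assumption."""
    c = points.mean(0)
    # PCA: the singular vector of the smallest singular value is the normal.
    _, _, vt = np.linalg.svd(points - c)
    z_axis = vt[2]
    x_axis = np.array([0.0, 0.0, 1.0])                # vertical direction
    x_axis -= x_axis.dot(z_axis) * z_axis             # keep X orthogonal to Z
    x_axis /= np.linalg.norm(x_axis)
    y_axis = np.cross(z_axis, x_axis)                 # perpendicular to ZOX
    m = np.stack([x_axis, y_axis, z_axis], axis=1)
    local = (points - c) @ m                          # world -> local frame
    x, y = local[:, 0], local[:, 1]
    u = ((x.max() - x) * l_pixel).astype(int)         # adapted: indices >= 0
    v = ((y.max() - y) * l_pixel).astype(int)
    img = np.zeros((u.max() + 1, v.max() + 1), dtype=np.uint8)
    img[u, v] = 255
    return img
```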
S8, the three-dimensional center position of the traffic sign is extracted from the generated gray image with an image matching method based on the template knowledge base.
Specifically: the image is first enhanced by a dilation operation, a gradient image is obtained with the Sobel edge detection algorithm, and the constructed planar template, used as a convolution kernel, is convolved with the gradient image; when the template exactly covers the sign, the response is maximal, and the corresponding center is the precise center position of the traffic sign.
S9, data calibration is performed from the GPS high-precision positioning data and the three-dimensional coordinates of the point cloud; polynomial parameter fitting is then applied to the calibrated lane marking point cloud, and the fit is vectorized; finally, data association is performed with the extracted center positions and class information of the traffic signs and the relevant traffic rules, and the high-precision vector map for automatic driving is exported.
The three-dimensional point cloud coordinates are those measured by the lidar, expressed in the lidar coordinate system.
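A minimal sketch of the polynomial parameter fitting and vectorization of S9; the polynomial degree and sample count are illustrative assumptions.

```python
import numpy as np

def vectorize_lane(marking_points, degree=3, samples=50):
    """Polynomial parameter fitting and vectorisation (S9): fit y = f(x)
    to a calibrated lane marking point cloud and resample it as an
    ordered vertex list for the vector map."""
    x, y = marking_points[:, 0], marking_points[:, 1]
    coeffs = np.polyfit(x, y, degree)           # polynomial parameters
    xs = np.linspace(x.min(), x.max(), samples)
    ys = np.polyval(coeffs, xs)
    return coeffs, np.column_stack([xs, ys])    # parameters + polyline
```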
In summary, the method classifies and extracts lane markings from the laser point cloud intensity information and geometric structure features: ground filtering separates the ground and non-ground point clouds, and block rasterization converts the three-dimensional point cloud space into a two-dimensional projection image; road marking pixels are extracted with image filtering, edge extraction operators, dynamic adaptive threshold segmentation, morphological analysis, and related image processing operations, and the marking point clouds are recovered by back-projection through the point-cloud-to-image mapping; marking classification and semantic recognition use the rotation-incremental point cloud contour matching method based on the knowledge template base. Traffic signs are then detected and classified in the road images with a neural network, and a region of interest in the point cloud space is established by image-point-cloud fusion; clustering screens the candidate point clouds, principal component analysis establishes a local projection system, and point cloud matching against the knowledge template base extracts and classifies the traffic signs. Finally, lane marking calibration, parameter fitting, and vectorization are performed with the high-precision positioning data, and the traffic sign data are associated and fused to complete the construction of the high-precision vector map. The invention thus constructs the high-precision vector map automatically from data acquired by the vehicle-mounted camera, lidar, and GPS.
The embodiments described above are preferred embodiments of the present invention, but the embodiments of the present invention are not limited thereto; any other change, modification, substitution, combination, or simplification made without departing from the spirit and principles of the present invention is an equivalent replacement and is included within the scope of the present invention.

Claims (10)

1. An automatic generation method of a high-precision vector map, characterized by comprising the following steps:
processing the original road laser point cloud data and separating it into a ground point cloud and a non-ground point cloud, then fitting the ground point cloud to obtain the ground;
performing block rasterization on the ground point cloud, establishing and storing a mapping between pixels and point cloud indices, computing pixel gray values from the point cloud intensity within each raster cell, from the elevation image of the ground point cloud, and from the density image of the non-ground point cloud, and converting them into a two-dimensional projection image;
extracting lane marking pixels from the two-dimensional projection image and back-projecting them into the point cloud space for segmentation, yielding the lane marking point cloud;
classifying the lane marking point clouds;
acquiring road image data and extracting the traffic signs in it with a traffic sign detection and classification neural network;
fusing the non-ground point cloud, the camera and radar calibration parameter files, and the traffic signs to establish a point cloud region of interest;
processing the point cloud region of interest to extract candidate traffic sign point clouds, establishing a local projection system, and projecting the candidate traffic sign point clouds onto a two-dimensional projection plane to generate a gray image;
extracting the three-dimensional center position of each traffic sign from the generated gray image with an image matching method based on a template knowledge base;
performing data calibration from the GPS high-precision positioning data and the three-dimensional coordinates of the point cloud, applying polynomial parameter fitting to the calibrated lane marking point cloud and vectorizing the fit; then performing data association with the extracted center positions and class information of the traffic signs, and exporting the high-precision vector map for automatic driving.
2. The automatic generation method according to claim 1, wherein an elevation-difference dual-threshold ground filtering method separates the original road laser point cloud data into a ground point cloud and a non-ground road point cloud, and a random sample consensus algorithm then fits and refines the ground point cloud to obtain the ground: all points whose fitting error to a candidate plane is smaller than a set threshold are counted as inliers, the area of the convex hull enclosing the inliers is computed, and the fitted plane with the largest inlier area is selected as the ground.
3. The automatic generation method according to claim 1, wherein the block rasterization of the ground point cloud, the establishment and storage of the pixel-to-point-cloud-index mapping, and the computation of pixel gray values from the point cloud intensity in each raster cell, the elevation image of the ground point cloud, and the density image of the non-ground point cloud, converted into a two-dimensional projection image, specifically comprise:
first, performing blocking and rasterization on the ground point cloud and storing the mapping between pixels and point cloud indices in a two-dimensional matrix;
then performing gray enhancement based on the image gray value formula $G_{i,j} = \min(av\_I_{i,j}/I_{\max},\,1) \times 255$, where $av\_I_{i,j}$ is the mean intensity of raster cell $(i,j)$ and $I_{\max}$ is the truncated maximum intensity of the block;
and computing the pixel gray values of the elevation image of the ground point cloud and the density image of the non-ground points in the same way.
4. The automatic generation method according to claim 1, wherein extracting the lane marking pixels from the two-dimensional projection image and back-projecting them into the point cloud space for segmentation to obtain the lane marking point cloud specifically comprises:
applying median filtering to the two-dimensional projection image to remove salt-and-pepper noise; performing edge detection with the Canny operator to generate a ground intensity gradient image; generating density and elevation binary images by maximum-entropy threshold segmentation and combining them with the ground intensity gradient map to extract road-surface pixels; then finely extracting lane marking pixels by dynamic adaptive threshold segmentation and morphological erosion and dilation;
performing optimization analysis based on connected components, then labeling them;
and back-projecting the pixels into the point cloud space according to the pixel-to-point-cloud-index mapping to extract the lane marking point cloud.
5. The method of claim 4, further comprising extracting the contour information of each lane marking point cloud with the alpha-shape rolling-circle algorithm and determining the minimum bounding rectangle of each marking point cloud contour.
6. The automatic generation method according to claim 1, wherein classifying the lane marking point clouds specifically comprises:
first, using the geometric features of the minimum bounding rectangle of the outer contour as coarse classification criteria to separate regular solid and dashed lane lines from other irregular road markings; then classifying the irregular road marking point clouds with a point cloud contour matching method based on the template knowledge base, namely: the candidate point clouds are matched in turn against the template library point cloud contours by incremental-rotation ICP and unified into one coordinate frame; a KNN algorithm then counts the nearest-neighbor points, and the ratio of this count to the total number of template points is used as the classification criterion.
7. The automatic generation method according to claim 1, wherein the detection and classification of traffic signs is performed with the deep learning convolutional neural network, and each detected traffic sign carries a confidence and position comprising bounding box information and class information.
8. The automatic generation method according to claim 1, wherein processing the point cloud region of interest to extract candidate traffic sign point clouds, establishing a local projection system, and projecting the candidate traffic sign point clouds onto a two-dimensional projection plane to generate a gray image specifically comprises:
computing the center pixel coordinates (u, v) of the sign from the bounding box and class information, and adding a buffer zone around the center pixel to establish the point cloud region of interest; then performing cluster analysis on the region of interest and finely extracting candidate traffic sign point clouds with intensity filtering; for each candidate cluster, taking the normal vector obtained by principal component analysis as the Z axis, the vertical direction as the X axis, and the direction perpendicular to the ZOX plane as the third axis, establishing a local projection system c-xyz with the centroid c of the candidate point cloud block as origin, and projecting the candidate sign point cloud in the region of interest onto the two-dimensional plane to generate a gray image.
9. The automatic generation method according to claim 1, wherein extracting the three-dimensional center position of a traffic sign from the generated gray image with the image matching method based on the template knowledge base specifically comprises:
first enhancing the image with a dilation operation and obtaining a gradient image with the Sobel edge detection algorithm; then convolving the constructed planar template, used as a convolution kernel, with the gradient image; when the template exactly covers the sign, the response is maximal, and the corresponding center is the precise center position of the traffic sign.
10. The automatic generation method according to claim 1, wherein the traffic sign detection and classification neural network is a deep learning convolutional neural network YOLOv5.
CN202310267553.2A (priority 2023-03-20, filed 2023-03-20) — Automatic generation method of high-precision vector map — Pending — CN116452852A (en)

Priority Applications (1)

Application Number: CN202310267553.2A — Priority/Filing Date: 2023-03-20 — Title: Automatic generation method of high-precision vector map

Publications (1)

Publication Number: CN116452852A — Publication Date: 2023-07-18

Family

ID=87122919

Country Status (1): CN — CN116452852A (en)


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116625385A (en) * 2023-07-25 2023-08-22 高德软件有限公司 Road network matching method, high-precision map construction method, device and equipment
CN116625385B (en) * 2023-07-25 2024-01-26 高德软件有限公司 Road network matching method, high-precision map construction method, device and equipment
CN116950429A (en) * 2023-07-31 2023-10-27 中建八局发展建设有限公司 Quick positioning and splicing method, medium and system for large spliced wall
CN116758518A (en) * 2023-08-22 2023-09-15 安徽蔚来智驾科技有限公司 Environment sensing method, computer device, computer-readable storage medium and vehicle
CN116758518B (en) * 2023-08-22 2023-12-01 安徽蔚来智驾科技有限公司 Environment sensing method, computer device, computer-readable storage medium and vehicle
CN117828737A (en) * 2024-01-04 2024-04-05 济南市园林绿化工程质量与安全中心 Digital twin landscape design method


Legal Events

Code: PB01 — Publication
Code: SE01 — Entry into force of request for substantive examination