CN117876984A - Road sign line detection and identification method based on MLS point cloud - Google Patents


Publication number
CN117876984A
Authority
CN
China
Prior art keywords
point
road
point cloud
image
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311815103.9A
Other languages
Chinese (zh)
Inventor
宁小娟
赵昊罡
梁杰炜
石争浩
金海燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Technology
Original Assignee
Xian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Technology
Priority to CN202311815103.9A
Publication of CN117876984A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/155 Segmentation; Edge detection involving morphological operators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20132 Image cropping
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems


Abstract

The invention discloses a road marking line detection and identification method based on MLS point clouds. To address the problem that road boundaries cannot be extracted effectively, a progressive road point cloud extraction method is adopted to extract the road point cloud. To address the over-segmentation and under-segmentation produced by road marking extraction, the jagged edges of the point cloud projection intensity image, and naturally incomplete road markings, a road marking extraction method based on adaptive threshold segmentation and morphological optimization is designed. Finally, a road marking classification method combining multi-scale clustering and PointNet++ addresses the low classification accuracy caused by feature similarity, effectively improving the detection and identification accuracy of road markings.

Description

Road sign line detection and identification method based on MLS point cloud
Technical Field
The invention belongs to the technical field of computer vision and artificial intelligence, and relates to a road marking line detection and identification method based on MLS point cloud.
Background
Road sign line detection is one of the important perception modules of high-precision maps and unmanned driving. Road sign line detection methods generally fall into three categories: image-based, imagery-based, and point-cloud-based, each with its own advantages and disadvantages. Image-based road sign line detection has been studied extensively, but it places high demands on the illumination conditions of the data set and depends too heavily on road-surface texture features. Imagery-based road sign line detection is likewise affected by illumination, and because of image-stitching problems, errors arise when the imagery is projected into a common coordinate system, which degrades the accuracy of subsequent detection. Researchers have also used remote-sensing imagery instead of conventional images and imagery to overcome the stitching errors, but its resolution is insufficient, so it remains difficult to extract road elements accurately and comprehensively. By comparison, point-cloud-based methods show unique advantages for road sign line detection: 1) they are unaffected by illumination; 2) data acquisition is more efficient; 3) the acquired data have better continuity; 4) the precision and acquisition density are higher; 5) the special material of road sign lines has good echo reflection intensity. However, outdoor scene data are very large and the point density is unevenly distributed, so the research remains challenging.
Disclosure of Invention
The invention aims to provide a road marking line detection and identification method based on MLS point cloud, which effectively improves the detection and identification precision of the road marking line.
The technical scheme adopted by the invention is that the road marking line detection and identification method based on the MLS point cloud is implemented according to the following steps:
step 1, extracting a progressive road point cloud;
step 2, extracting road sign lines optimized based on a self-adaptive threshold segmentation and morphology method;
and 3, classifying road sign lines based on the improved PointNet++.
The present invention is also characterized in that,
the step 1 specifically comprises the following steps:
step 1.1, performing rough road extraction through an improved CSF algorithm;
step 1.2, carrying out fine extraction of road point clouds by using a road point cloud extraction algorithm of the pseudo scanning line;
step 1.3, road point cloud boundary optimization: clustering road points based on a DBSCAN clustering algorithm, setting constraint rules, and re-fitting boundaries by adopting a RANSAC method;
the step 2 is specifically as follows:
step 2.1, road sign line segmentation;
step 2.2, road sign line segmentation optimization;
step 2.3, generating point cloud through the back projection of the depth image stored during the point cloud projection and the binary image segmented by the self-adaptive threshold value, and realizing the extraction of the point cloud of the road sign line;
The step 3 is specifically as follows:
step 3.1, clustering and dividing the road sign lines;
step 3.2, classifying lane lines and zebra crossings;
step 3.3, small-sized road sign line classification based on modified PointNet++.
Step 1.1 is specifically implemented according to the following steps:
step 1.1.1, scene preprocessing: preprocess the scene with a statistical-filter-based discrete point removal method. Index all point clouds in the scene, and for any point p_i first compute the average distance d_n to its n nearest neighbors. Assuming that the d_n values of all points obey a Gaussian distribution, compute its mean μ and standard deviation σ and establish a standard range from them; any point whose average distance d_n exceeds the set standard range is defined as a discrete point and removed;
d_n is computed as:
d_n = (1/n) Σ_{j=1}^{n} ||p_i − p_j||   (1)
where p_j (j = 1, …, n) are the n nearest neighbors of p_i.
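As an illustration of the discrete point removal described in step 1.1.1, a minimal numpy sketch follows; the function name, the brute-force neighbor search, and the 2σ cutoff are assumptions for illustration, not values fixed by the invention:

```python
import numpy as np

def remove_outliers(points, n_neighbors=8, std_ratio=2.0):
    """Statistical-filter outlier removal (illustrative sketch of step 1.1.1).

    For each point p_i, compute the average distance d_n to its n nearest
    neighbors; assuming the d_n values are Gaussian with mean mu and
    standard deviation sigma, drop points whose d_n exceeds
    mu + std_ratio * sigma.
    """
    # Brute-force pairwise distances; a KD-tree would be used in practice.
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(dist, np.inf)          # exclude each point from its own neighbors
    nearest = np.sort(dist, axis=1)[:, :n_neighbors]
    d_n = nearest.mean(axis=1)              # formula (1)
    mu, sigma = d_n.mean(), d_n.std()
    keep = d_n <= mu + std_ratio * sigma    # the "standard range"
    return points[keep], keep
```

On a dense cluster with one distant stray point, the stray point's d_n falls far outside the standard range and is removed while the cluster survives.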
step 1.1.2, coarse extraction of the road point cloud: filter the road with an improved CSF algorithm to preliminarily extract the road point cloud, specifically:
Step 1.1.2.1, inputting and reversing denoised road point cloud data;
step 1.1.2.2, establishing a simulated distribution grid of the road point cloud, and customizing the size of the grid, wherein the grid is positioned above all the road point clouds at the initial moment;
Step 1.1.2.3, projecting all point clouds and grid particles in a scene onto the same horizontal plane, searching nearest neighbors of each particle at the same time, and recording elevation values IHV before projection;
step 1.1.2.4, for each movable grid particle, calculating the displacement generated by the influence of the internal factors, comparing with the elevation value of the current nearest neighbor, and resetting the elevation value of the particle to IHV and changing the particle to an immovable point if the elevation value of the particle is lower than or equal to IHV at this time;
step 1.1.2.5, for each mesh particle, calculating the displacement thereof due to the internal factors;
step 1.1.2.6, repeating step 1.1.2.4 and step 1.1.2.5, and stopping the simulation process when the iteration number is greater than the user set value;
step 1.1.2.7, calculating the elevation difference between the point cloud and the grid particles;
in step 1.1.2.8, if the elevation difference between a point and the grid particles is smaller than the preset threshold HCC, the point is considered a ground point; otherwise it is a non-ground point, and the extracted ground points form the road point cloud.
Step 1.2 is specifically implemented according to the following steps:
step 1.2.1, replace the trajectory direction with the scan-line direction of the road point cloud and establish pseudo scan lines along the scan-line direction, i.e. slice the road point cloud. Let any point P_ri on the scan-line trajectory be (X_i, Y_i); the mileage value S_i corresponding to point P_ri is then:
S_i = Σ_{k=1}^{i} sqrt((X_k − X_{k−1})² + (Y_k − Y_{k−1})²)   (2)
Let c_n(X'_n, Y'_n) denote an arbitrary slice point position; the coordinates of c_n are:
X'_n = X_i + δ_n·cos(α_i),  Y'_n = Y_i + δ_n·sin(α_i)   (3)
In formula (3), α_i denotes the coordinate azimuth of the trajectory point P_ri nearest to the slice position, δ_n denotes the distance between the slice position and its nearest neighbor on the trajectory, and (X_i, Y_i) denotes the coordinates of that trajectory point;
The slice position, slice direction, and slice length are determined by c_n(X'_n, Y'_n), α'_n, and D, respectively. Compute the distance d between each point and the slice segment; when this distance is smaller than half the slice length, the point lies on the slice. Let L denote the projection of the slice segment on the xoy plane of the coordinate system, T the normal of the slice segment, and P(X_p, Y_p) the coordinates of a point; the relationship between the vector from c_n to P and the normal vector T gives:
d = |(X_p − X'_n)·cos(α'_n) + (Y_p − Y'_n)·sin(α'_n)|   (4);
step 1.2.2, reconstruct a local coordinate system of each pseudo scan line to replace the geodetic coordinate system of the radar system, and convert the scene point cloud into the local coordinate system of the corresponding pseudo scan line. Define the coordinate origin as the center of each road segment, take the X axis as the trajectory direction and the Y axis perpendicular to the X axis, and let P_i(X_i, Y_i, Z_i) denote any point on the segment. The local coordinates of P_i are:
X'_i = (X_i − X_C)·cos(α) + (Y_i − Y_C)·sin(α),
Y'_i = −(X_i − X_C)·sin(α) + (Y_i − Y_C)·cos(α),
Z'_i = Z_i   (5)
In formula (5), (X_C, Y_C) denotes the coordinates on the slice, and α denotes the azimuth angle of the trajectory point;
Project the three-dimensional point onto the YOZ plane, with O' denoting the origin of the two-dimensional coordinate system; a point P'_i on the pseudo scan line after projection has coordinates:
P'_i = (Y'_i, Z'_i)   (6)
step 1.2.3, extract the road point cloud with a sliding window filter. Divide the projected points on the cross-section slice into several cells and set the cell threshold to M; the point cloud on the pseudo scan line is thereby divided into k intervals, and the pseudo scan point coordinates generated in the last interval are taken as the end of the line. Apply the RANSAC algorithm for straight-line detection: randomly select a subset of points for fitting, count the deviation of the other points from the fitted line, and keep the points that satisfy the threshold. Establish a buffer around the kept points, and remove the non-road point cloud within the buffer using the sliding window filter.
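The RANSAC straight-line detection used in step 1.2.3 can be sketched as follows; this is illustrative numpy code, and the iteration count, inlier threshold, and two-point sampling scheme are assumed defaults rather than values fixed by the invention:

```python
import numpy as np

def ransac_line(points, n_iter=300, threshold=0.05, seed=1):
    """RANSAC 2-D line fit (illustrative sketch of the step 1.2.3 detection).

    Repeatedly samples two points, forms the line through them, and keeps
    the model with the most inliers within `threshold` distance.
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        norm = np.hypot(d[0], d[1])
        if norm == 0:
            continue
        # Unit normal of the line through p and q; |(x - p) . n| is the
        # point-to-line distance.
        n = np.array([-d[1], d[0]]) / norm
        dist = np.abs((points - p) @ n)
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

The returned inlier set corresponds to the points that "satisfy the threshold" and would then be wrapped in a buffer for the sliding window filter.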
Step 2.1 is specifically:
step 2.1.1, road point cloud projection: select the XOY plane as the horizontal plane and project the road point cloud data onto it along the direction of the normal vector of the XOY plane. Let the original road point cloud set be P_n = {P_1, P_2, …, P_n} and the projected road point cloud set be P_n' = {P_1', P_2', …, P_n'}. Take any point t(x_0, y_0, z_0) on the plane π with normal vector n. The projected height H of a point P_i in the normal direction is:
H = (P_i − t) · n / |n|   (7)
At this time H·n/|n| is the projection vector of P_i − t in the normal direction; since it has the same direction as n, the coordinates of the projected point P_i' are:
P_i' = P_i − H·n/|n|   (8)
step 2.1.2, two-dimensional projection image rasterization: convert the projected point cloud obtained in step 2.1.1 into a two-dimensional raster image. Using the inverse distance weighted interpolation method (IDW), compute the inverse-distance-weighted influence of the control points on the surrounding pixels from the given control points and the displacement vectors of the control point pairs, and thereby the intensity value of each pixel in the image. Let P_i(X_i, Y_i, Z_i, I_i) denote an arbitrary road point with reflection intensity I_i, R_pixel the resolution of the generated raster image, and R_IDW the interpolation radius. When point P_i is projected onto the plane π, P_i becomes p_j(X_j, Y_j, I_j), and the size of the raster image is computed as:
width = (maxX − minX) / R_pixel,  height = (maxY − minY) / R_pixel   (9)
In formula (9), maxX and minX denote the maximum and minimum X values in the road point cloud, maxY and minY the maximum and minimum Y values, and width and height the size of the raster;
The intensity value of each point is obtained through the IDW algorithm. The points P_k lying within the interpolation radius are determined by the position of the interpolation pixel Pixel_(m,n) and the interpolation radius, and d_(m,n,k) denotes the distance from P_k to the pixel center Pixel_(m,n), computed as:
d_(m,n,k) = sqrt((X_k − X_(m,n))² + (Y_k − Y_(m,n))²)   (10)
In formula (10), X_k and Y_k denote the X and Y coordinates of the k-th point, and (X_(m,n), Y_(m,n)) denotes the coordinates of the pixel center;
The intensity value I_(m,n) of the interpolation point is then:
I_(m,n) = Σ_k (I_k / d_(m,n,k)²) / Σ_k (1 / d_(m,n,k)²)   (11)
the intensity value in each grid is judged and mapped to the gray scale interval of the image, and finally the road point cloud is converted into an intensity image of the road point cloud;
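The rasterization and IDW interpolation of step 2.1.2 can be sketched as follows; the nested pixel loop and the power-2 weighting are illustrative assumptions (a real implementation would vectorize and bin the points first):

```python
import numpy as np

def rasterize_intensity(points, intensity, r_pixel=0.5, r_idw=0.5, power=2):
    """Project road points to a 2-D intensity raster (sketch of step 2.1.2).

    Raster size follows the bounding box of the points divided by the pixel
    resolution (formula (9)); each pixel intensity is an inverse-distance
    weighted average of the points within r_idw of the pixel centre
    (formulas (10) and (11)).
    """
    x, y = points[:, 0], points[:, 1]
    min_x, min_y = x.min(), y.min()
    width = int(np.ceil((x.max() - min_x) / r_pixel)) + 1
    height = int(np.ceil((y.max() - min_y) / r_pixel)) + 1
    img = np.zeros((height, width))
    for m in range(height):
        for n in range(width):
            # Centre of pixel (m, n) in point-cloud coordinates.
            cx = min_x + (n + 0.5) * r_pixel
            cy = min_y + (m + 0.5) * r_pixel
            d = np.hypot(x - cx, y - cy)
            near = d < r_idw
            if not near.any():
                continue          # no point within the interpolation radius
            w = 1.0 / np.maximum(d[near], 1e-9) ** power
            img[m, n] = (w * intensity[near]).sum() / w.sum()
    return img
```

A pixel whose neighborhood contains a single point simply inherits that point's intensity, which is the expected IDW limit.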
step 2.1.3, eliminating abnormal areas of the intensity image: removing most of abnormal points of intensity in the intensity image of the road point cloud by adopting an IDW algorithm, and processing the intensity image of the road point cloud by adopting improved self-adaptive median filtering aiming at the condition of small amount of salt and pepper noise still existing;
step 2.1.4, adaptive threshold segmentation based on the integral image: the integral image at a pixel is the sum of the pixel values in the rectangular region between the upper-left corner of the original image and that pixel, so the sum of the pixel values over any rectangular region of the original image can be computed from the integral image, and the adaptive threshold of each pixel is determined by the mean gray value of the surrounding pixels. Let the original pixel be p_i(x_i, y_i), with corresponding gray value I(x_i, y_i), and let the integral image be intI(x_i, y_i). Defining a background radius s and a partition coefficient t, the sum of the pixel gray values in the square of side 2s+1 centered at p_i is:
sum = intI(x_i+s, y_i+s) − intI(x_i−s−1, y_i+s) − intI(x_i+s, y_i−s−1) + intI(x_i−s−1, y_i−s−1)   (12)
When the rectangular region exceeds the image boundary, the image boundary is used as the reference. The gray value of the binary image is then:
B(x_i, y_i) = 255 if I(x_i, y_i)·(2s+1)² > sum·(100 − t)/100, and 0 otherwise   (13)
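A minimal sketch of the integral-image adaptive thresholding of step 2.1.4 follows (Bradley-style local mean thresholding; the border handling clips the window at the image boundary as stated above, while the s and t defaults are assumptions):

```python
import numpy as np

def adaptive_threshold(gray, s=2, t=15):
    """Integral-image adaptive thresholding (illustrative sketch of step 2.1.4).

    Each pixel is compared against the mean gray value of the
    (2s+1) x (2s+1) window around it, computed in O(1) per pixel from an
    integral image; pixels more than t percent below the local mean become
    background (0), the rest foreground (255).
    """
    h, w = gray.shape
    # Integral image with a zero row/column prepended for easy indexing.
    ii = np.zeros((h + 1, w + 1))
    ii[1:, 1:] = gray.cumsum(0).cumsum(1)
    out = np.zeros_like(gray, dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            # Clip the window at the image border, as in the method.
            y0, y1 = max(y - s, 0), min(y + s, h - 1)
            x0, x1 = max(x - s, 0), min(x + s, w - 1)
            count = (y1 - y0 + 1) * (x1 - x0 + 1)
            total = ii[y1 + 1, x1 + 1] - ii[y0, x1 + 1] - ii[y1 + 1, x0] + ii[y0, x0]
            if gray[y, x] * count > total * (100 - t) / 100.0:
                out[y, x] = 255
    return out
```

Because the comparison is against the local mean, a bright marking stripe is kept white even if the global intensity of the image varies.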
the step 2.2 is specifically that of,
step 2.2.1: optimizing a road marking line by using a morphological-based closing operation, repairing and filling hole defects in the middle of the road marking line, establishing an original image Mask and a marked image Mask as reconstruction references, continuously expanding the image obtained in the step 2.1.4 by using a morphological method, converging according to the constraint of the Mask, namely, the Mask is more than or equal to the Mask, and finally performing a compensation operation on the Mask, and subtracting the Mask to finish filling the white holes, wherein the expression of the Mask is as follows:
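The mask-constrained dilation of step 2.2.1 amounts to morphological reconstruction; a numpy-only sketch of hole filling by reconstruction from the image border follows (boolean images and 4-connectivity are illustrative choices, not requirements of the invention):

```python
import numpy as np

def fill_holes(binary):
    """Hole filling by morphological reconstruction (sketch of step 2.2.1).

    Expects a boolean image. The background (inverted image) is the mask;
    the marker is seeded on the image border and dilated under the mask
    until convergence, so interior holes are never reached. Complementing
    the reconstructed background yields the foreground with holes filled.
    """
    mask = ~binary
    marker = np.zeros_like(binary, dtype=bool)
    marker[0, :] = mask[0, :]
    marker[-1, :] = mask[-1, :]
    marker[:, 0] = mask[:, 0]
    marker[:, -1] = mask[:, -1]
    while True:
        # One 4-connected dilation step, constrained by the mask.
        grown = marker.copy()
        grown[1:, :] |= marker[:-1, :]
        grown[:-1, :] |= marker[1:, :]
        grown[:, 1:] |= marker[:, :-1]
        grown[:, :-1] |= marker[:, 1:]
        grown &= mask
        if (grown == marker).all():
            return ~grown
        marker = grown
```

A black pixel enclosed by the marking cannot be reached from the border, so it ends up white in the result, which is exactly the hole-filling behavior described.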
step 2.2.2, optimizing the road marking line by a morphological method: the aliasing phenomenon of the binary image boundary is processed by a morphological MLAA method, specifically:
step 2.2.2.1, firstly searching a part with obvious pixel discontinuity in the picture filled with the white holes in step 2.2.1 as an edge to be processed;
Step 2.2.2.2, classifying the edges to be processed into different modes, and re-vectorizing the marked edges;
and 2.2.2.3, reserving a smooth part, calculating weights through edge vectorization, and mixing edge pixels with surrounding pixels according to the weights to realize edge smoothing.
Step 3.1, dividing the road mark line point cloud into independent objects by adopting European clustering, wherein the method specifically comprises the following steps:
step 3.1.1, let the set of road marking points in the scene be P = {P_i(X_i, Y_i) | i = 1, 2, 3, …}, let the parameter D denote the cluster radius and C a cluster circle with radius D; take any point p from the point set P and place it into a point-set cluster;
step 3.1.2, taking D as a radius, taking p as a circle center to make a circle, and merging points in the circle into a cluster defined before;
step 3.1.3, using other points in the point cluster as new circle centers, repeating the step 3.1.2 until no new points are added into the cluster;
step 3.1.4, repeat steps 3.1.1 to 3.1.3, generating new clusters from the remaining points in the point set P; each cluster is an independent object.
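Steps 3.1.1 to 3.1.4 describe radius-based Euclidean clustering, which can be sketched as follows (brute-force distance computation for clarity; a KD-tree would be used on real scenes):

```python
import numpy as np

def euclidean_cluster(points, d=0.5):
    """Radius-based Euclidean clustering (sketch of step 3.1).

    Grows each cluster by repeatedly adding every unlabeled point within
    distance d of any point already in the cluster, matching steps
    3.1.1 through 3.1.4; returns one integer label per point.
    """
    n = len(points)
    labels = np.full(n, -1)
    current = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue                      # already assigned to a cluster
        frontier = [seed]
        labels[seed] = current
        while frontier:
            p = frontier.pop()
            dist = np.linalg.norm(points - points[p], axis=1)
            for q in np.flatnonzero((dist <= d) & (labels == -1)):
                labels[q] = current       # merge the point into the cluster
                frontier.append(q)
        current += 1
    return labels
```

Each resulting label corresponds to one independent marking object handed to the classification of step 3.2.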
Step 3.2 is specifically:
step 3.2.1, road marking feature extraction: establish a minimum bounding rectangle (MBR) for each independent object obtained in step 3.1;
step 3.2.2, lane line classification: define the aspect ratio β of the MBR obtained in step 3.2.1 as the geometric feature of the object, and extract solid and dashed lane lines from the clusters according to the aspect ratio. When neither the solid nor the dashed lane line condition is satisfied, the marking is considered a small-size road marking (an arrow marking) or a zebra-crossing stripe;
step 3.2.3, zebra crossing classification: judge the arrangement of the point cloud clusters with a PCA-based method and extract rectangles arranged side by side as zebra stripes. For any rectangular marking R, compute the coordinates of its centroid on the xoy plane and its covariance matrix C_R; eigen-decompose C_R and select the eigenvector associated with the largest eigenvalue as the distribution feature of the marking R, denoted v_r. To judge the arrangement of the markings, search along both sides of R in the direction of the rectangle with search width w. If a marking in the search region has centroid q and distribution feature v_q, and satisfies (v_r ∥ v_q) ∧ (v_r ∥ v_rq) ∧ (v_q ∥ v_rq), where v_rq is the vector from the centroid of R to q, the marking R is classified as a zebra crossing.
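The PCA portion of step 3.2.3 (eigen-decomposition of the covariance matrix C_R and the parallelism tests) can be sketched as follows; the 0.99 cosine tolerance is an assumed value for illustration:

```python
import numpy as np

def principal_direction(points):
    """Dominant direction of a marking via PCA (sketch of step 3.2.3).

    Eigen-decomposes the 2-D covariance matrix C_R of the marking's points
    and returns the centroid together with the eigenvector of the largest
    eigenvalue, i.e. the distribution feature v_r.
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    cov = centered.T @ centered / len(points)   # covariance matrix C_R
    vals, vecs = np.linalg.eigh(cov)            # eigh returns ascending eigenvalues
    return centroid, vecs[:, np.argmax(vals)]

def is_parallel(u, v, tol=0.99):
    """Parallelism test (v_r // v_q style) via the cosine of the angle."""
    u = np.asarray(u, float)
    v = np.asarray(v, float)
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    return abs(float(u @ v)) > tol
```

Chaining `is_parallel` over v_r, v_q, and the centroid-to-centroid vector v_rq reproduces the conjunction used to group side-by-side stripes into one zebra crossing.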
Step 3.3 is specifically:
step 3.3.1, data set preparation: for the unclassified small-size road markings in the scene, acquire their data, manually frame it, manually select the point-set categories, label the data, and divide the data set;
step 3.3.2, construct a top-down network beside the main branch of the original PointNet++ network, continuously extract features with the PointNet network in a locally iterative manner, and classify the data set from step 3.3.1.
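The patented improved PointNet++ is not reproduced here; the following minimal numpy sketch only illustrates the shared-MLP plus symmetric max-pooling idea that PointNet-style networks iterate at multiple scales (the weight shapes and random initialization are arbitrary):

```python
import numpy as np

def pointnet_features(points, w1, w2):
    """Shared-MLP + max-pool descriptor (illustrative, not the patented network).

    The same small MLP (weights w1, w2) is applied to every point, and a
    max-pool over the point dimension yields a descriptor that is invariant
    to the ordering of the input points.
    """
    h = np.maximum(points @ w1, 0.0)   # shared layer 1 (ReLU), per point
    h = np.maximum(h @ w2, 0.0)        # shared layer 2 (ReLU), per point
    return h.max(axis=0)               # symmetric pooling over all points
```

Because the pooling is symmetric, shuffling the input points leaves the descriptor unchanged, which is the property that makes such features usable for classifying unordered marking point sets.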
The beneficial effects of the invention are as follows:
the invention discloses a road marking line detection and identification method based on MLS point clouds, which aims at solving the problem that road boundaries cannot be extracted effectively, adopts a progressive road point cloud extraction method to extract road point clouds, aims at solving the problems of over-segmentation and under-segmentation, jaggy of a point cloud projection intensity image and natural incomplete road marking lines generated by road marking line extraction, designs a road marking line extraction method based on self-adaptive threshold segmentation and morphological optimization, and finally solves the problem of lower classification precision caused by similar characteristics by realizing a road marking line classification method of multiscale clustering and PointNet++, and effectively improves the detection and identification precision of the road marking lines.
Drawings
Fig. 1 is a road point cloud extraction result diagram obtained after the original scene data is processed in step 1 of the present invention, wherein fig. 1 (a) is an original scene diagram, fig. 1 (b) is a preprocessed scene diagram, fig. 1 (c) is a scene diagram subjected to ground point filtering, fig. 1 (d) is a rough scene road point cloud extraction result diagram, fig. 1 (e) is a non-scene road point cloud extraction result diagram, and fig. 1 (f) is a fine scene road point cloud extraction result diagram.
Fig. 2 is a final road marking point cloud extraction result diagram of three selected scenes optimized in the step 2 of the present invention, in which fig. 2 (a) is a final road marking point cloud extraction result diagram of one selected scene optimized, fig. 2 (b) is a final road marking point cloud extraction result diagram of one selected scene optimized, and fig. 2 (c) is a final road marking point cloud extraction result diagram of another selected scene optimized.
Fig. 3 is a diagram of the road sign line classification result obtained after the processing of step 3 of the present invention.
Detailed Description
The invention will be described in detail below with reference to the drawings and the detailed description.
Example 1
The embodiment provides a road sign line detection and identification method based on MLS (Mobile Laser Scanning) point clouds, which is implemented according to the following steps:
step 1, extracting a progressive road point cloud;
step 1.1, performing rough road extraction through an improved Cloth Simulation Filter (CSF) algorithm;
step 1.2, carrying out fine extraction of road point clouds by using a road point cloud extraction algorithm of the pseudo scanning line;
step 1.3, road point cloud boundary optimization: cluster road points with the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm, set constraint rules, and re-fit the boundary with the Random Sample Consensus (RANSAC) method, so that the extracted road boundary is closer to the real boundary and the extraction accuracy of the road point cloud is improved;
Step 2, extracting road sign lines optimized based on a self-adaptive threshold segmentation and morphology method;
step 2.1, road sign line segmentation;
step 2.2, road sign line segmentation optimization;
step 2.3, generating point cloud through the back projection of the depth image stored during the point cloud projection and the binary image segmented by the self-adaptive threshold value, and realizing the extraction of the point cloud of the road sign line;
step 3, classifying road sign lines based on the improved PointNet++;
step 3.1, clustering and dividing the road sign lines;
step 3.2, classifying lane lines and zebra crossings;
step 3.3, small-sized road sign line classification based on modified PointNet++.
Example 2
The embodiment provides a road sign line detection and identification method based on MLS point clouds, and based on embodiment 1, step 1 progressive road point cloud extraction specifically comprises:
step 1.1 is specifically implemented according to the following steps:
step 1.1.1, scene preprocessing: preprocess the original scene shown in fig. 1(a) with a statistical-filter-based discrete point removal method; the preprocessed scene is shown in fig. 1(b). Specifically, index all point clouds in the scene, and for any point p_i first compute the average distance d_n to its n nearest neighbors. Assuming that the d_n values of all points obey a Gaussian distribution, compute its mean μ and standard deviation σ and establish a standard range from them; any point whose average distance d_n exceeds the set standard range is defined as a discrete point and removed;
d_n is computed as:
d_n = (1/n) Σ_{j=1}^{n} ||p_i − p_j||   (1)
where p_j (j = 1, …, n) are the n nearest neighbors of p_i.
step 1.1.2, coarse extraction of the road point cloud: filter the road with the improved CSF algorithm to preliminarily extract the road point cloud; the scene after ground-point filtering is shown in fig. 1(c). Specifically:
Step 1.1.2.1, inputting and reversing denoised road point cloud data;
step 1.1.2.2, establishing a simulated distribution grid of the road point cloud, and customizing the size of the grid, wherein the grid is positioned above all the road point clouds at the initial moment;
step 1.1.2.3, projecting all point clouds and grid particles in a scene onto the same horizontal plane, searching nearest neighbors of each particle at the same time, and recording elevation values IHV before projection;
step 1.1.2.4, for each movable grid particle, calculate the displacement produced by internal factors (external factors are ignored in the improved model) and compare the particle's elevation with that of its current nearest neighbor; if the particle's elevation is lower than or equal to IHV, reset the particle's elevation to IHV and change the particle to an immovable point;
Step 1.1.2.5, for each mesh particle, calculating the displacement thereof due to the internal factors;
step 1.1.2.6, repeating step 1.1.2.4 and step 1.1.2.5, and stopping the simulation process when the iteration number is greater than the user set value;
step 1.1.2.7, calculating the elevation difference between the point cloud and the grid particles;
in step 1.1.2.8, if the elevation difference between a point and the grid particles is smaller than the preset threshold HCC, the point is considered a ground point; otherwise it is a non-ground point. The extracted ground points form the road point cloud, completing the rough extraction of the road; the rough road point cloud extraction result of the scene is shown in fig. 1(d), and the non-road point cloud extraction result in fig. 1(e).
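The cloth simulation of step 1.1.2 involves iterative particle displacements that are too elaborate to show briefly; the following minimal sketch replaces the particle simulation with a rigid per-cell settling (each cell's "particle" rests at the lowest elevation found in the cell) and keeps only the final HCC classification rule of step 1.1.2.8. All names and defaults are illustrative assumptions:

```python
import numpy as np

def csf_ground_filter(points, cell=1.0, hcc=0.2):
    """Greatly simplified CSF-style ground extraction (sketch of step 1.1.2).

    Bins points into a horizontal grid, lets each cell's surface settle at
    the minimum elevation in the cell (a rigid-cloth limit of the full
    simulation), and labels as ground every point whose elevation lies
    within hcc of that settled surface.
    """
    x, y, z = points.T
    ix = ((x - x.min()) / cell).astype(int)
    iy = ((y - y.min()) / cell).astype(int)
    # Settled surface elevation per occupied grid cell.
    surface = {}
    for cx, cy, cz in zip(ix, iy, z):
        key = (cx, cy)
        surface[key] = min(surface.get(key, np.inf), cz)
    # Step 1.1.2.8: elevation difference below HCC -> ground point.
    ground = np.array([z_i - surface[(cx, cy)] < hcc
                       for cx, cy, z_i in zip(ix, iy, z)])
    return ground
```

On a flat patch with one elevated point (e.g. a pole return) in the same cell, the elevated point exceeds the HCC threshold and is rejected as non-ground.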
Step 1.2 is specifically implemented according to the following steps:
step 1.2.1, establishing a road point cloud pseudo-scanning line, wherein only plane information of a running track, namely x and y coordinates of the point cloud, is considered to extract the road point cloud, the direction of the road point cloud scanning line is used for replacing the track direction, the pseudo-scanning line is established along the scanning line direction, namely, slicing is carried out on the road point cloud, and any point P on the scanning line track is assumed ri Is (X) i ,Y i ) So P ri Mileage value S corresponding to point i The formula of (2) is:
Let c_n(X'_n, Y'_n) denote the position of an arbitrary slice point; the coordinates of c_n are given by formula (3):
in formula (3), α'_n denotes the coordinate azimuth of the trajectory segment at the slice, Δ_n denotes the distance between the slice position and its nearest neighbor on the trajectory, and (X_n, Y_n) denotes the coordinates of that nearest trajectory point;
The slice position, slice direction, and slice length are determined by c_n(X'_n, Y'_n), α'_n, and D, respectively. Segmenting the point cloud into slices also requires the distance d between a point and the slice segment: when this distance is less than half the slice length, the point belongs to that slice. Let L denote the projection of the slice segment on the xoy plane of the coordinate system, T the normal of the slice segment, and P(X_p, Y_p) the coordinates of a point; the distance d between the slice segment and the point, expressed through the normal vector T, is formulated as:
d=|(X p -X' n )cos(α' n )+(Y p -Y′ n )sin(α' n )| (4)
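Formula (4) is straightforward to implement; a hedged sketch (parameter names are illustrative):

```python
import numpy as np

def point_to_slice_distance(p, c, azimuth):
    """Formula (4): unsigned distance from point P(X_p, Y_p) to the slice
    segment through c_n(X'_n, Y'_n), whose normal direction is given by
    the azimuth angle alpha'_n (in radians)."""
    return abs((p[0] - c[0]) * np.cos(azimuth)
               + (p[1] - c[1]) * np.sin(azimuth))
```

Points with a distance below half the slice length D/2 are assigned to that slice.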
step 1.2.2, reconstructing a local coordinate system for each pseudo-scan line to replace the geodetic coordinate system of the radar system via the projection of the road cross-section slice: the scene point cloud is converted into the local coordinate system of its corresponding pseudo-scan line, with the coordinate origin defined as the center of each road segment, the X axis along the trajectory direction, and the Y axis perpendicular to the X axis. Let P_i(X_i, Y_i, Z_i) denote any point on the segment; the coordinates of P_i are given by formula (5):
in formula (5), (X_C, Y_C) denotes the coordinates on the slice and α denotes the azimuth angle of the trajectory point;
The three-dimensional points are then projected onto the YOZ plane, where O' denotes the origin of the two-dimensional coordinate system; the coordinates of a projected point P'_i on the pseudo-scan line are given by formula (6):
step 1.2.3, extracting the road point cloud with a sliding window filter: the points on the projected cross-section slice are divided into a number of cells with cell threshold M, dividing the point cloud on the pseudo-scan line into k intervals; the last point P'_k is then the pseudo-scan point generated for the last interval. A RANSAC algorithm performs straight-line detection on these point coordinates: a subset of points is randomly selected for fitting, the data deviation between them and the remaining points is counted, and the points meeting a threshold are retained. A buffer zone is built around the retained points, after which a sliding window filter removes the non-road points inside the buffer, achieving fine extraction of the road point cloud; the extraction result is shown in fig. 1(f).
The sliding window filter exploits the elevation difference between the road surface and the curb, building several small windows to remove non-road points. First, a rectangular window is built in each of the k intervals; each window is the minimum enclosing window of the points in its interval, with height dh_i. A road threshold is then set, representing the highest elevation value of a road point; when dh_i exceeds this threshold, the window contains noise points with elevations higher than the road surface and is marked as a key window;
step 1.3, road point cloud boundary optimization: and (3) clustering the road points based on a DBSCAN clustering algorithm, setting constraint rules, and then re-fitting the boundaries by adopting a RANSAC method, so that the extracted road boundaries are closer to the real boundaries, and the extraction accuracy of the road point cloud is improved.
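Step 1.3 relies on DBSCAN density clustering; a compact, self-contained sketch of the core algorithm (parameter names follow the usual eps/min_pts convention rather than the patent's notation):

```python
import numpy as np
from collections import deque

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns one label per point, -1 meaning noise.
    A core point has at least min_pts neighbours within radius eps;
    clusters grow by expanding from core points."""
    n = len(points)
    labels = np.full(n, -1)
    d = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    neighbors = [np.flatnonzero(d[i] <= eps) for i in range(n)]
    visited = np.zeros(n, bool)
    cluster = 0
    for i in range(n):
        if visited[i]:
            continue
        visited[i] = True
        if len(neighbors[i]) < min_pts:
            continue                      # not a core point (maybe noise)
        labels[i] = cluster
        q = deque(neighbors[i])
        while q:                          # expand the cluster
            j = q.popleft()
            if labels[j] == -1:
                labels[j] = cluster
            if not visited[j]:
                visited[j] = True
                if len(neighbors[j]) >= min_pts:
                    q.extend(neighbors[j])
        cluster += 1
    return labels
```

The resulting clusters of road boundary points would then be re-fitted with RANSAC, as the step describes.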
Example 3
This embodiment provides a road marking detection and identification method based on MLS point clouds. Building on embodiments 1-2, the road marking extraction optimized by the adaptive threshold segmentation and morphology method of step 2 is specifically implemented as follows:
step 2.1 is specifically implemented according to the following steps:
step 2.1.1, road point cloud projection: the XOY plane is selected as the horizontal plane, and the road point cloud data are projected onto it along the plane's normal vector direction. Let the original road point cloud set be P_n = {P_1, P_2, ..., P_n} and the projected set be P'_n = {P'_1, P'_2, ..., P'_n}. Select an arbitrary point t(x_0, y_0, z_0) on the plane π, where n denotes the normal vector of the plane π; the projected height H of the vector tP_i in the direction of n is given by formula (7):
Here the projection of tP_i onto the normal direction has the same direction as the normal vector n, so the coordinates of the projected point P'_i are computed by formula (8):
step 2.1.2, two-dimensional projection image rasterization: the projected point cloud from step 2.1.1 is converted into a two-dimensional raster image. Using inverse distance weighted interpolation (Inverse Distance Weighting, IDW), the inverse-distance-weighted influence of the control points on surrounding pixels is computed from the given control points and their displacement vectors, yielding the intensity value of each pixel in the image. Let P_i(X_i, Y_i, Z_i, I_i) denote the coordinates of an arbitrary road point together with its reflection intensity I_i, R_pixel the resolution of the generated raster image, and R_IDW the interpolation radius. When point P_i is projected onto the plane π, it becomes p_i(X_j, Y_j, I_j); the size of the raster image is then computed by formula (9):
in the formula (9), maxX represents the maximum value of the X axis in the road point cloud; minX represents the minimum value of the X axis in the road point cloud; maxY represents the maximum value of the Y axis in the road point cloud; minY represents the minimum value of the Y axis in the road point cloud; width, height represent the size of the grating;
The intensity value of each cell is obtained via the IDW algorithm. The points P_k lying within the interpolation radius are determined by the position of the interpolation pixel Pixel_(m,n) and the interpolation radius; d_(m,n,k) denotes the distance from P_k to the pixel center Pixel_(m,n), computed by formula (10):
in the formula (10), X k An X coordinate representing a kth point; y is Y k A Y coordinate representing a kth point;
intensity value I of interpolation point at this time (m,n) Expressed as:
the intensity value in each grid cell is then judged and mapped to the gray-scale interval of the image, finally converting the road point cloud into an intensity image;
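The per-cell IDW interpolation of formulas (10)-(11) can be sketched as follows (a simplified stand-in: the power parameter of 2 and the zero default for empty cells are assumptions, not specified above):

```python
import numpy as np

def idw_intensity(pixel_center, pts_xy, pts_i, radius, power=2.0):
    """Formulas (10)-(11): inverse-distance-weighted intensity of one
    raster cell, using only the road points inside the interpolation
    radius R_IDW around the cell centre Pixel_(m,n)."""
    pts_xy = np.asarray(pts_xy, float)
    pts_i = np.asarray(pts_i, float)
    d = np.linalg.norm(pts_xy - np.asarray(pixel_center, float), axis=1)
    inside = d <= radius
    if not inside.any():
        return 0.0                      # empty cell: no intensity
    d, vals = d[inside], pts_i[inside]
    if np.any(d == 0):                  # a point exactly at the centre wins
        return float(vals[d == 0][0])
    w = d ** -power                     # inverse-distance weights
    return float(np.sum(w * vals) / np.sum(w))
```

Applying this to every cell of the raster defined by formula (9) yields the intensity image.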
step 2.1.3, eliminating abnormal areas of the intensity image: removing most of abnormal points of intensity in the intensity image of the road point cloud by adopting an IDW algorithm, and processing the intensity image of the road point cloud by adopting improved self-adaptive median filtering aiming at the condition of small amount of salt and pepper noise still existing;
The improved adaptive median filter processes the intensity image of the road point cloud as follows: 1) let (x, y) be a pixel of the intensity image, f(x, y) its pixel value, and W_ij the moving window; four indices are attached to the moving window: W_max, f_max, f_min, f_mid, denoting the maximum window size, the maximum gray value, the minimum gray value, and the median gray value, respectively; 2) if f_min < f(x, y) < f_max, the pixel is judged not to be noise and is output; otherwise it is judged to be noise; 3) for a pixel judged to be noise, determine whether the window median f_mid is itself noise: if f_min < f_mid < f_max, the median is not noise and is output; if f_mid is noise, the window is enlarged, W_ij = W_ij + 1, and while W_ij < W_max operation 2) is repeated; 4) if W_ij ≥ W_max, all maximum and minimum points in the window are removed and the weighted average of the remaining pixels is output;
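The four steps above can be sketched directly; a minimal implementation (the step-4 weighted average is simplified to a plain mean of the non-extreme pixels, an assumption):

```python
import numpy as np

def adaptive_median(img, w_max=7):
    """Improved adaptive median filter of step 2.1.3 (steps 1-4):
    a pixel strictly between the window extremes is kept; a suspected
    salt-and-pepper pixel is replaced by the window median, the window
    growing until the median itself is not an extreme value."""
    img = np.asarray(img)
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            k = 1
            while True:
                win = img[max(0, y - k):y + k + 1, max(0, x - k):x + k + 1]
                f_min, f_max = win.min(), win.max()
                f_mid = np.median(win)
                if f_min < img[y, x] < f_max:
                    break                          # step 2: not noise
                if f_min < f_mid < f_max:
                    out[y, x] = f_mid              # step 3: median is clean
                    break
                k += 1                             # step 3: enlarge window
                if 2 * k + 1 > w_max:              # step 4: window exhausted
                    rest = win[(win > f_min) & (win < f_max)]
                    out[y, x] = rest.mean() if rest.size else f_mid
                    break
    return out
```

A lone salt pixel surrounded by uniform road intensity is replaced, while uniform regions pass through unchanged.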
step 2.1.4, adaptive threshold segmentation based on the integral image: the integral image stores, at each pixel, the sum of all pixel values in the rectangle between the original image's top-left corner and that pixel, so the sum of pixel values over any rectangular region of the original image can be computed from it. The adaptive threshold of each pixel is determined by the mean gray value of its surrounding pixels. Let p_i(x_i, y_i) be an original pixel with gray value I(x_i, y_i), and let intI(x_i, y_i) denote the integral image. Defining a background radius s and a partition coefficient t, the sum of gray values over a square of side 2s+1 centered on p_i is expressed as:
when the rectangular area exceeds the boundary of the image, the image boundary is considered, and the gray value calculation formula of the binary image is as follows:
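A Bradley-style sketch of the integral-image adaptive threshold (the foreground test pixel·n > sum·(1−t) is one common reading of formula (13); the exact patent formula is not reproduced above):

```python
import numpy as np

def adaptive_threshold(gray, s=2, t=0.15):
    """Integral-image adaptive threshold of step 2.1.4: a pixel is set
    to 255 when its gray value exceeds (1 - t) times the mean of its
    (2s+1) x (2s+1) neighbourhood, clipped at the image border."""
    h, w = gray.shape
    integ = gray.astype(np.int64).cumsum(0).cumsum(1)
    integ = np.pad(integ, ((1, 0), (1, 0)))     # intI with a zero row/col
    out = np.zeros((h, w), np.uint8)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - s), min(h - 1, y + s)
            x0, x1 = max(0, x - s), min(w - 1, x + s)
            n = (y1 - y0 + 1) * (x1 - x0 + 1)   # pixels in clipped window
            total = (integ[y1 + 1, x1 + 1] - integ[y0, x1 + 1]
                     - integ[y1 + 1, x0] + integ[y0, x0])
            out[y, x] = 255 if gray[y, x] * n > total * (1 - t) else 0
    return out
```

Bright road markings on darker asphalt survive the threshold as foreground; the integral image keeps each window sum O(1).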
the step 2.2 is specifically that of,
Step 2.2.1: the road markings are optimized with a morphology-based closing operation to repair and fill hole defects inside the markings. The original image Mask and a marker image mask are established as reconstruction references; the image obtained in step 2.1.4 is repeatedly dilated by the morphological method and converges under the constraint of the Mask, i.e. Mask ≥ mask; finally, a complement operation is applied to the Mask and the mask is subtracted, completing the filling of the white holes. The expression of the mask is:
step 2.2.2, optimizing the road marking line by a morphological method: the aliasing of binary image boundaries is handled by morphological antialiasing methods (Morphological Antialiasing, MLAA), specifically:
step 2.2.2.1, firstly searching a part with obvious pixel discontinuity in the picture filled with the white holes in step 2.2.1 as an edge to be processed;
step 2.2.2.2, classifying the edges to be processed into different modes, and re-vectorizing the marked edges;
and 2.2.2.3, reserving a smooth part, calculating weights through edge vectorization, and mixing edge pixels with surrounding pixels according to the weights to realize edge smoothing.
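For the hole-filling of step 2.2.1, a plain morphological closing (dilation followed by erosion) is a simplified stand-in for the reconstruction-based filling described above:

```python
import numpy as np

def binary_closing(mask, k=1):
    """Simplified stand-in for the hole filling of step 2.2.1:
    morphological closing with a (2k+1) x (2k+1) square structuring
    element fills holes smaller than the structuring element."""
    def sliding(m, reduce_fn):
        # Stack every shifted copy of the padded image and reduce.
        p = np.pad(m, k)
        stack = [p[dy:dy + m.shape[0], dx:dx + m.shape[1]]
                 for dy in range(2 * k + 1) for dx in range(2 * k + 1)]
        return reduce_fn(np.stack(stack), axis=0)
    dilated = sliding(mask, np.max)     # dilation: any neighbour set
    return sliding(dilated, np.min)     # erosion: all neighbours set
```

Unlike true morphological reconstruction, a closing also rounds concavities of the same scale; for the small white holes inside marking strokes the effect is comparable.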
The road marking extraction results obtained in this step are shown in fig. 2, where figs. 2(a), 2(b) and 2(c) show the final road marking point cloud extraction results for three different selected scenes.
Example 4
This embodiment provides a road marking detection and identification method based on MLS point clouds. Building on embodiments 1-3, the improved-PointNet++-based road marking classification of step 3 is specifically implemented as follows:
step 3.1, dividing the road mark line point cloud into independent objects by adopting European clustering, wherein the method specifically comprises the following steps:
step 3.1.1, assume the set of road marking points in the scene is P = {P_i(X_i, Y_i) | i = 1, 2, 3, ...}; the parameter D denotes the cluster radius and C denotes a cluster circle of radius D. Take an arbitrary point p from the point set P and place it into a point cluster;
step 3.1.2, taking D as a radius, taking p as a circle center to make a circle, and merging points in the circle into a cluster defined before;
step 3.1.3, using other points in the point cluster as new circle centers, repeating the step 3.1.2 until no new points are added into the cluster;
And 3.1.4, repeating the step 3.1.1 and the step 3.1.3, and generating new cluster clusters by using other points in the point set P, wherein each cluster is an independent object.
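Steps 3.1.1-3.1.4 describe region-growing Euclidean clustering; a minimal sketch (the circle of radius D becomes a neighborhood query):

```python
import numpy as np
from collections import deque

def euclidean_cluster(points, d):
    """Steps 3.1.1-3.1.4: grow each cluster by repeatedly absorbing
    every unlabeled point within radius D of a point already in the
    cluster; each resulting cluster is one independent object."""
    n = len(points)
    labels = np.full(n, -1)
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    cluster = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue                     # already absorbed by a cluster
        labels[seed] = cluster
        q = deque([seed])
        while q:
            i = q.popleft()
            for j in np.flatnonzero((dist[i] <= d) & (labels == -1)):
                labels[j] = cluster      # step 3.1.2-3.1.3: absorb point
                q.append(j)
        cluster += 1                     # step 3.1.4: start a new cluster
    return labels
```

Each label value then corresponds to one independent marking object passed on to step 3.2.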
The step 3.2 is specifically that,
step 3.2.1, extracting road sign line characteristics: establishing a minimum bounding rectangle (Minimum bounding rectangle, MBR) for the independent object separated in the step 3.1;
The MBR is established as follows: 1) build a simple axis-aligned bounding rectangle of the object and find its center point. Assume the point set of the object is p = {p_i(x_i, y_i) | i = 1, 2, 3, ...}; the center point coordinates are given by:
2) Rotating the circumscribed rectangle by taking the central point as an original point, setting a rotation threshold value to be 1, solving the area of the circumscribed rectangle at the moment when the circumscribed rectangle rotates by 1 degree, and recording the rotation angle and the vertex coordinates of the rectangle at the moment;
3) Comparing the areas of all the circumscribed rectangles obtained by the rotation in the step 2), wherein the circumscribed rectangle with the smallest area is the MBR of the object;
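Steps 1)-3) of the MBR construction can be sketched by scanning rotation angles and keeping the minimum-area axis-aligned box (for brevity this returns only the best angle and area; the vertex bookkeeping of step 2) is omitted):

```python
import numpy as np

def min_bounding_rect_area(pts, step_deg=1.0):
    """Steps 1)-3): rotate the point set about its centroid in 1-degree
    increments, compute the axis-aligned bounding-box area at each
    angle, and keep the smallest, returning (best_angle_deg, best_area)."""
    pts = np.asarray(pts, float)
    c = pts.mean(axis=0)                 # rotation origin (center point)
    best = (0.0, np.inf)
    for deg in np.arange(0.0, 90.0, step_deg):
        a = np.radians(deg)
        r = np.array([[np.cos(a), -np.sin(a)],
                      [np.sin(a),  np.cos(a)]])
        q = (pts - c) @ r.T              # rotate points by `deg`
        w, h = q.max(axis=0) - q.min(axis=0)
        if w * h < best[1]:
            best = (float(deg), w * h)
    return best
```

For a rectangle the scan recovers its orientation: the minimum-area box coincides with the rectangle itself.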
step 3.2.2, lane line classification: the aspect ratio β of the MBR obtained in step 3.2.1 is defined as the geometric feature of the object, and solid and dashed lane lines are extracted from the clusters according to it: when β > 37, the point cluster is considered a solid lane line; when 13 < β < 27, it is considered a dashed lane line segment; when neither condition is satisfied, the marking is considered a small-size road marking, i.e. an arrow or zebra-crossing marking;
Step 3.2.3, zebra crossing classification: a method based on principal component analysis (Principal Component Analysis, PCA) judges the arrangement of the point clusters so as to extract rectangles arranged side by side and classify them as zebra crossings. For any rectangular marking R, the centroid coordinates on the xoy plane are computed, followed by its covariance matrix C_R; eigenvalue decomposition of C_R is performed and the eigenvector associated with the largest eigenvalue is selected as the distribution feature of marking R, denoted v_r. To judge the arrangement of marking R, a search is first carried out along both sides of R in its distribution direction with search width w; if the search area contains a rectangular marking with centroid q and distribution feature v_q satisfying (v_r ∥ v_q) ∧ (v_r ∥ v_rq) ∧ (v_q ∥ v_rq), where v_rq denotes the direction from the centroid of R to q, the marking R is classified as a zebra crossing.
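The distribution feature v_r of step 3.2.3 is the dominant eigenvector of the marking's covariance matrix; a minimal sketch:

```python
import numpy as np

def principal_direction(pts):
    """Step 3.2.3: the distribution feature v_r of a rectangular marking
    is the eigenvector of its 2-D covariance matrix C_R associated with
    the largest eigenvalue."""
    pts = np.asarray(pts, float)
    centered = pts - pts.mean(axis=0)
    cov = centered.T @ centered / len(pts)   # covariance matrix C_R
    vals, vecs = np.linalg.eigh(cov)         # eigh: symmetric matrices
    return vecs[:, np.argmax(vals)]          # unit vector, sign arbitrary
```

Two markings are "parallel" in the sense of the formula when their v vectors agree up to sign, which is how the side-by-side zebra stripes are detected.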
The step 3.3 is specifically that,
step 3.3.1, data set preparation, specifically: for unclassified small-size road mark lines in a scene, acquiring data of the unclassified small-size road mark lines, manually framing the data, manually selecting point set categories, labeling the data, and dividing a data set;
step 3.3.2, a top-down branch network is built beside the main branch of the original PointNet++ network; features are continuously extracted by applying the PointNet network in a locally iterative manner, and the data set from step 3.3.1 is classified, yielding the road marking classification results shown in fig. 3.

Claims (9)

1. The road sign line detection and identification method based on the MLS point cloud is characterized by comprising the following steps of:
step 1, extracting a progressive road point cloud;
step 2, extracting road sign lines optimized based on a self-adaptive threshold segmentation and morphology method;
and 3, classifying road sign lines based on the improved PointNet++.
2. The road sign line detection and identification method based on the MLS point cloud according to claim 1, wherein the step 1 specifically comprises:
step 1.1, performing rough road extraction through an improved CSF algorithm;
step 1.2, carrying out fine extraction of road point clouds by using a road point cloud extraction algorithm of the pseudo scanning line;
step 1.3, road point cloud boundary optimization: clustering road points based on a DBSCAN clustering algorithm, setting constraint rules, and re-fitting boundaries by adopting a RANSAC method;
the step 2 specifically comprises the following steps:
step 2.1, road sign line segmentation;
step 2.2, road sign line segmentation optimization;
step 2.3, generating point cloud through the back projection of the depth image stored during the point cloud projection and the binary image segmented by the self-adaptive threshold value, and realizing the extraction of the point cloud of the road sign line;
the step 3 specifically comprises the following steps:
Step 3.1, clustering and dividing the road sign lines;
step 3.2, classifying lane lines and zebra crossings;
step 3.3, small-sized road sign line classification based on modified PointNet++.
3. The road sign line detection and identification method based on the MLS point cloud according to claim 2, wherein the step 1.1 is specifically implemented according to the following steps:
step 1.1.1, scene preprocessing: the scene is preprocessed with a statistical-filter-based discrete point removal method, and all points in the scene are indexed; first, for any point p_i, the average distance d_n to its n nearest neighbors is computed; assuming the d_n of every point obeys a Gaussian distribution, the mean μ and standard deviation σ of that distribution are computed and a standard range is established from them; if a point's average distance d_n exceeds the set standard range, the point is defined as a discrete point and removed;
d_n is computed as:
step 1.1.2, coarse extraction of road point cloud: filtering the road by adopting an improved CSF algorithm, thereby preliminarily extracting the road point cloud, in particular
Step 1.1.2.1, inputting and reversing denoised road point cloud data;
step 1.1.2.2, establishing a simulated distribution grid of the road point cloud, and customizing the size of the grid, wherein the grid is positioned above all the road point clouds at the initial moment;
Step 1.1.2.3, projecting all point clouds and grid particles in a scene onto the same horizontal plane, searching nearest neighbors of each particle at the same time, and recording elevation values IHV before projection;
step 1.1.2.4, for each movable grid particle, calculating the displacement generated by the influence of the internal factors, comparing with the elevation value of the current nearest neighbor, and resetting the elevation value of the particle to IHV and changing the particle to an immovable point if the elevation value of the particle is lower than or equal to IHV at this time;
step 1.1.2.5, for each mesh particle, calculating the displacement thereof due to the internal factors;
step 1.1.2.6, repeating step 1.1.2.4 and step 1.1.2.5, and stopping the simulation process when the iteration number is greater than the user set value;
step 1.1.2.7, calculating the elevation difference between the point cloud and the grid particles;
in step 1.1.2.8, if the elevation difference between a point and its grid particle is smaller than the preset threshold HCC, the point is considered a ground point; otherwise it is a non-ground point; the extracted ground points constitute the road point cloud.
4. The road sign line detection and identification method based on the MLS point cloud according to claim 2, wherein the step 1.2 is specifically implemented according to the following steps:
Step 1.2.1, the scan-line direction of the road point cloud replaces the trajectory direction, and pseudo-scan lines are established along the scan-line direction, i.e. the road point cloud is sliced; assuming any point P_ri on the scan-line trajectory has coordinates (X_i, Y_i), the mileage value S_i corresponding to point P_ri is given by formula (2):
let c_n(X'_n, Y'_n) denote the position of an arbitrary slice point; the coordinates of c_n are given by formula (3):
in formula (3), α'_n denotes the coordinate azimuth of the trajectory segment at the slice, Δ_n denotes the distance between the slice position and its nearest neighbor on the trajectory, and (X_n, Y_n) denotes the coordinates of that nearest trajectory point;
the slice position, slice direction, and slice length are determined by c_n(X'_n, Y'_n), α'_n, and D, respectively; the distance d between a point and the slice segment is computed, and when this distance is less than half the slice length, the point belongs to that slice; let L denote the projection of the slice segment on the xoy plane of the coordinate system, T the normal of the slice segment, and P(X_p, Y_p) the coordinates of a point; the distance d between the slice segment and the point, expressed through the normal vector T, is formulated as:
d=|(X p -X′ n )cos(α′ n )+(Y p -Y′ n )sin(α′ n )| (4);
step 1.2.2, reconstructing a local coordinate system for each pseudo-scan line to replace the geodetic coordinate system of the radar system via the projection of the road cross-section slice: the scene point cloud is converted into the local coordinate system of its corresponding pseudo-scan line, with the coordinate origin defined as the center of each road segment, the X axis along the trajectory direction, and the Y axis perpendicular to the X axis; let P_i(X_i, Y_i, Z_i) denote any point on the segment; the coordinates of P_i are given by formula (5):
in formula (5), (X_C, Y_C) denotes the coordinates on the slice and α denotes the azimuth angle of the trajectory point;
the three-dimensional points are then projected onto the YOZ plane, where O' denotes the origin of the two-dimensional coordinate system; the coordinates of a projected point P'_i on the pseudo-scan line are given by formula (6):
step 1.2.3, extracting the road point cloud with a sliding window filter: the points on the projected cross-section slice are divided into a number of cells with cell threshold M, dividing the point cloud on the pseudo-scan line into k intervals; the last point P'_k is then the pseudo-scan point generated for the last interval; a RANSAC algorithm performs straight-line detection on these point coordinates, randomly selecting a subset of points for fitting, counting the data deviation with the remaining points, and retaining the points meeting a threshold; a buffer zone is built around the retained points, and a sliding window filter removes the non-road points inside the buffer.
5. The method for detecting and identifying road sign lines based on MLS point clouds according to claim 4, wherein said step 2.1 is specifically,
step 2.1.1, road point cloud projection: the XOY plane is selected as the horizontal plane, and the road point cloud data are projected onto it along the plane's normal vector direction; let the original road point cloud set be P_n = {P_1, P_2, ..., P_n} and the projected set be P'_n = {P'_1, P'_2, ..., P'_n}; select an arbitrary point t(x_0, y_0, z_0) on the plane π, where n denotes the normal vector of the plane π; the projected height H of the vector tP_i in the direction of n is given by formula (7):
here the projection of tP_i onto the normal direction has the same direction as the normal vector n, so the coordinates of the projected point P'_i are computed by formula (8):
step 2.1.2, two-dimensional projection image rasterization: the projected point cloud from step 2.1.1 is converted into a two-dimensional raster image; using the inverse distance weighted interpolation method IDW, the inverse-distance-weighted influence of the control points on surrounding pixels is computed from the given control points and their displacement vectors, yielding the intensity value of each pixel in the image; let P_i(X_i, Y_i, Z_i, I_i) denote the coordinates of an arbitrary road point together with its reflection intensity I_i, R_pixel the resolution of the generated raster image, and R_IDW the interpolation radius; when point P_i is projected onto the plane π, it becomes p_i(X_j, Y_j, I_j); the size of the raster image is then computed by formula (9):
in the formula (9), maxX represents the maximum value of the X axis in the road point cloud; minX represents the minimum value of the X axis in the road point cloud; maxY represents the maximum value of the Y axis in the road point cloud; minY represents the minimum value of the Y axis in the road point cloud; width, height represent the size of the grating;
The intensity value of each cell is obtained via the IDW algorithm; the points P_k lying within the interpolation radius are determined by the position of the interpolation pixel Pixel_(m,n) and the interpolation radius; d_(m,n,k) denotes the distance from P_k to the pixel center Pixel_(m,n), computed by formula (10):
in the formula (10), X k An X coordinate representing a kth point; y is Y k A Y coordinate representing a kth point;
intensity value I of interpolation point at this time (m,n) Expressed as:
the intensity value in each grid is judged and mapped to the gray scale interval of the image, and finally the road point cloud is converted into an intensity image of the road point cloud;
step 2.1.3, eliminating abnormal areas of the intensity image: removing most of abnormal points of intensity in the intensity image of the road point cloud by adopting an IDW algorithm, and processing the intensity image of the road point cloud by adopting improved self-adaptive median filtering aiming at the condition of small amount of salt and pepper noise still existing;
step 2.1.4, adaptive threshold segmentation based on the integral image: the integral image stores, at each pixel, the sum of all pixel values in the rectangle between the original image's top-left corner and that pixel, so the sum of pixel values over any rectangular region of the original image can be computed from it; the adaptive threshold of each pixel is determined by the mean gray value of its surrounding pixels; let p_i(x_i, y_i) be an original pixel with gray value I(x_i, y_i), and let intI(x_i, y_i) denote the integral image; defining a background radius s and a partition coefficient t, the sum of gray values over a square of side 2s+1 centered on p_i is expressed as:
when the rectangular area exceeds the boundary of the image, taking the boundary of the image as the reference, the gray value calculation formula of the binary image is as follows:
6. the method for detecting and identifying road sign lines based on MLS point clouds according to claim 5, wherein said step 2.2 is specifically,
step 2.2.1: the road markings are optimized with a morphology-based closing operation to repair and fill hole defects inside the markings; the original image Mask and a marker image mask are established as reconstruction references; the image obtained in step 2.1.4 is repeatedly dilated by the morphological method and converges under the constraint of the Mask, i.e. Mask ≥ mask; finally, a complement operation is applied to the Mask and the mask is subtracted, completing the filling of the white holes; the expression of the mask is:
step 2.2.2, optimizing the road marking line by a morphological method: the aliasing phenomenon of the binary image boundary is processed by a morphological MLAA method, specifically:
Step 2.2.2.1, firstly searching a part with obvious pixel discontinuity in the picture filled with the white holes in step 2.2.1 as an edge to be processed;
step 2.2.2.2, classifying the edges to be processed into different modes, and re-vectorizing the marked edges;
and 2.2.2.3, reserving a smooth part, calculating weights through edge vectorization, and mixing edge pixels with surrounding pixels according to the weights to realize edge smoothing.
7. The method for detecting and identifying the road marking based on the MLS point cloud according to claim 6, wherein the step 3.1 adopts European clustering to divide the road marking point cloud into individual objects, specifically:
step 3.1.1, assume the set of road marking points in the scene is P = {P_i(X_i, Y_i) | i = 1, 2, 3, ...}; the parameter D denotes the cluster radius and C denotes a cluster circle of radius D; take an arbitrary point p from the point set P and place it into a point cluster;
step 3.1.2, taking D as a radius, taking p as a circle center to make a circle, and merging points in the circle into a cluster defined before;
step 3.1.3, using other points in the point cluster as new circle centers, repeating the step 3.1.2 until no new points are added into the cluster;
And 3.1.4, repeating the step 3.1.1 and the step 3.1.3, and generating new cluster clusters by using other points in the point set P, wherein each cluster is an independent object.
8. The method for detecting and identifying road sign lines based on MLS point clouds according to claim 3, wherein said step 3.2 is specifically,
step 3.2.1, extracting road sign line characteristics: establishing MBR for the independent object obtained by the separation in the step 3.1;
step 3.2.2, lane line classification: the aspect ratio β of the MBR obtained in step 3.2.1 is defined as the geometric feature of the object, and solid and dashed lane lines are extracted from the clusters according to the aspect ratio; when neither the solid nor the dashed lane line condition is satisfied, the marking is considered a small-size road marking, i.e. an arrow or zebra-crossing marking;
step 3.2.3, zebra crossing classification: a PCA-based method judges the arrangement of the point clusters, extracting rectangles arranged side by side and classifying them as zebra crossings; for any rectangular marking R, the centroid coordinates on the xoy plane are computed, followed by its covariance matrix C_R; eigenvalue decomposition of C_R is performed and the eigenvector associated with the largest eigenvalue is selected as the distribution feature of marking R, denoted v_r; to judge the arrangement of marking R, a search is first carried out along both sides of R in its distribution direction with search width w; if the search area contains a rectangular marking with centroid q and distribution feature v_q satisfying (v_r ∥ v_q) ∧ (v_r ∥ v_rq) ∧ (v_q ∥ v_rq), the marking R is classified as a zebra crossing.
9. The method for detecting and identifying road sign lines based on MLS point clouds according to claim 2, wherein the step 3.3 is specifically,
step 3.3.1, data set preparation, specifically: for unclassified small-size road mark lines in a scene, acquiring data of the unclassified small-size road mark lines, manually framing the data, manually selecting point set categories, labeling the data, and dividing a data set;
and step 3.3.2, constructing a top-down branch alongside the main branch of the original PointNet++ network, continuously extracting features with the PointNet network in a locally iterative manner, and classifying the data set of step 3.3.1.
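The "locally iterative" feature extraction in step 3.3.2 builds on PointNet++-style set abstraction: sample well-spread centroids, group their local neighborhoods, and feed each group to a shared PointNet. The sketch below shows only the sampling and grouping stages in NumPy; it is a generic illustration of the standard PointNet++ operations, not the patent's modified network, and the radius/group-size parameters are assumptions.

```python
import numpy as np

def farthest_point_sampling(points, n_samples):
    """Greedy FPS: repeatedly pick the point farthest from all points
    chosen so far, giving well-spread centroids for local grouping."""
    pts = np.asarray(points, dtype=float)
    chosen = [0]  # start from an arbitrary point
    dist = np.linalg.norm(pts - pts[0], axis=1)
    for _ in range(n_samples - 1):
        idx = int(np.argmax(dist))  # farthest from the chosen set
        chosen.append(idx)
        dist = np.minimum(dist, np.linalg.norm(pts - pts[idx], axis=1))
    return np.array(chosen)

def ball_query(points, centroid_idx, radius, k):
    """Group up to k neighbors within `radius` of each sampled centroid;
    each group would then pass through a shared PointNet feature extractor."""
    pts = np.asarray(points, dtype=float)
    groups = []
    for ci in centroid_idx:
        d = np.linalg.norm(pts - pts[ci], axis=1)
        groups.append(np.flatnonzero(d <= radius)[:k])
    return groups
```

Iterating this sample-group-extract cycle over progressively coarser point sets is what produces the hierarchical local features that the added top-down branch then consumes for classification.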
CN202311815103.9A 2023-12-27 2023-12-27 Road sign line detection and identification method based on MLS point cloud Pending CN117876984A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311815103.9A CN117876984A (en) 2023-12-27 2023-12-27 Road sign line detection and identification method based on MLS point cloud

Publications (1)

Publication Number Publication Date
CN117876984A true CN117876984A (en) 2024-04-12

Family

ID=90591149

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311815103.9A Pending CN117876984A (en) 2023-12-27 2023-12-27 Road sign line detection and identification method based on MLS point cloud

Country Status (1)

Country Link
CN (1) CN117876984A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118522004A (en) * 2024-06-13 2024-08-20 大连慕泽科技有限公司 Multifunctional road detection vehicle

Similar Documents

Publication Publication Date Title
Cheng et al. Extraction and classification of road markings using mobile laser scanning point clouds
CN104008553B Crack detection method fusing image gradient information and the watershed method
Pérez et al. Jetstream: Probabilistic contour extraction with particles
Sahar et al. Using aerial imagery and GIS in automated building footprint extraction and shape recognition for earthquake risk assessment of urban inventories
CN103049763B (en) Context-constraint-based target identification method
CN103034863B Remote sensing image road extraction method combining kernel Fisher discriminant and multi-scale extraction
WO2021195697A1 (en) Feature extraction from mobile lidar and imagery data
Vasuki et al. An interactive image segmentation method for lithological boundary detection: A rapid mapping tool for geologists
CN104361582B (en) Method of detecting flood disaster changes through object-level high-resolution SAR (synthetic aperture radar) images
CN110992381A (en) Moving target background segmentation method based on improved Vibe + algorithm
GB2532948A (en) Objection recognition in a 3D scene
US11804025B2 (en) Methods and systems for identifying topographic features
CN106846322B SAR image segmentation method based on curvelet filters and convolutional-structure learning
Sun et al. Classification of MLS point clouds in urban scenes using detrended geometric features from supervoxel-based local contexts
CN109215112B (en) Method for marking single-side point cloud model
CN117876984A (en) Road sign line detection and identification method based on MLS point cloud
CN103971377A (en) Building extraction method based on prior shape level set segmentation
CN111091071B (en) Underground target detection method and system based on ground penetrating radar hyperbolic wave fitting
Femiani et al. Shadow-based rooftop segmentation in visible band images
Omidalizarandi et al. Segmentation and classification of point clouds from dense aerial image matching
CN117636268A (en) Unmanned aerial vehicle aerial natural driving data set construction method oriented to ice and snow environment
Straub Automatic extraction of trees from aerial images and surface models
Barrett et al. Houghing the hough: Peak collection for detection of corners, junctions and line intersections
CN114862883A (en) Target edge extraction method, image segmentation method and system
Liu et al. Identification of Damaged Building Regions from High-Resolution Images Using Superpixel-Based Gradient and Autocorrelation Analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination