CN117876984A - Road sign detection and recognition method based on MLS point cloud - Google Patents
- Publication number
- CN117876984A (application CN202311815103.9A)
- Authority
- CN
- China
- Prior art keywords
- point
- road
- point cloud
- image
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
- G06T7/136—Segmentation; Edge detection involving thresholding
- G06T7/155—Segmentation; Edge detection involving morphological operators
- G06V10/40—Extraction of image or video features
- G06V10/762—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
- G06T2207/20112—Image segmentation details
- G06T2207/20132—Image cropping
- Y02T10/40—Engine management systems
Description
Technical Field
The present invention belongs to the technical field of computer vision and artificial intelligence, and relates to a road sign line detection and recognition method based on MLS point clouds.
Background
Road sign line detection is one of the key perception modules for high-precision maps and autonomous driving. Existing approaches fall into three main categories: image-based methods, imagery-based methods, and point-cloud-based methods, whose advantages and disadvantages are as follows. Image-based road sign line detection has been studied extensively, but it places high demands on the lighting conditions of the dataset and relies heavily on the texture features of the road surface. Imagery-based detection is also affected by illumination and, because of image-stitching problems, projecting the imagery into a common coordinate system introduces errors that degrade the accuracy of subsequent detection. Some researchers have replaced conventional images and imagery with remote sensing imagery to avoid stitching errors, but its resolution is insufficient, so accurately and comprehensively extracting road features remains difficult. Compared with these methods, point-cloud-based methods offer unique advantages for road sign line detection: 1) they are unaffected by illumination; 2) data acquisition is more efficient; 3) the acquired data is more continuous; 4) the accuracy and sampling density are higher; 5) the retro-reflective material of road sign lines produces strong echo reflection intensity. However, the very large volume of outdoor scene data and the uneven point density distribution still make this research challenging.
Summary of the Invention
The object of the present invention is to provide a road sign line detection and recognition method based on MLS point clouds, which effectively improves the detection and recognition accuracy of road sign lines.
The technical solution adopted by the present invention is a road sign line detection and recognition method based on MLS point clouds, implemented according to the following steps:
Step 1, progressive road point cloud extraction;
Step 2, road sign line extraction based on adaptive threshold segmentation and morphological optimization;
Step 3, road sign line classification based on an improved PointNet++.
The present invention is further characterized in that:
Step 1 is as follows:
Step 1.1, coarse road extraction with an improved CSF algorithm;
Step 1.2, fine road point cloud extraction with a pseudo-scan-line road point cloud extraction algorithm;
Step 1.3, road point cloud boundary optimization: cluster the road points with the DBSCAN clustering algorithm, set constraint rules, and refit the boundary with the RANSAC method.
Step 2 is as follows:
Step 2.1, road sign line segmentation;
Step 2.2, road sign line segmentation optimization;
Step 2.3, generate a point cloud by back-projecting the depth image saved during point cloud projection together with the binary image produced by adaptive threshold segmentation, thereby extracting the road sign line point cloud.
Step 3 is as follows:
Step 3.1, road sign line clustering and segmentation;
Step 3.2, lane line and zebra crossing classification;
Step 3.3, small-size road sign line classification based on the improved PointNet++.
Step 1.1 is implemented according to the following steps:
Step 1.1.1, scene preprocessing: the scene is preprocessed with a statistical-filter-based discrete point removal method, and all point clouds in the scene are indexed. First, the average distance dn between any point pi and its n nearest neighbouring points is computed. Assuming that the average distance dn of any pi follows a Gaussian distribution, the mean μ and standard deviation σ of this distribution are computed and a standard range is established from them; when the average distance dn of a point exceeds the set standard range, that point is defined as a discrete point and removed.
The formula for dn is:
dn = (1/n)·∑j=1..n ‖pi - pj‖ (1)
where pj (j = 1, ..., n) are the n nearest neighbours of pi and ‖·‖ denotes the Euclidean distance.
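The statistical outlier removal of step 1.1.1 can be sketched as follows. This is a brute-force illustration under our own parameter names and defaults; a practical implementation would use a KD-tree for the neighbour search:

```python
import numpy as np

def remove_outliers(points, n_neighbors=8, std_ratio=2.0):
    """Flag points whose mean distance dn to their n nearest neighbours
    exceeds mu + std_ratio * sigma of the dn distribution (illustrative)."""
    # brute-force pairwise distances between all points
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    np.fill_diagonal(dist, np.inf)          # a point is not its own neighbour
    # mean distance dn to the n nearest neighbours of each point
    dn = np.sort(dist, axis=1)[:, :n_neighbors].mean(axis=1)
    mu, sigma = dn.mean(), dn.std()
    keep = dn <= mu + std_ratio * sigma     # points inside the standard range
    return points[keep], keep
```

Points whose mean neighbour distance falls outside the Gaussian range are dropped; the rest of the cloud passes to the CSF filtering stage.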
Step 1.1.2, coarse extraction of the road point cloud: the improved CSF algorithm filters the road to preliminarily extract the road point cloud, specifically:
Step 1.1.2.1, input the denoised road point cloud data and invert it;
Step 1.1.2.2, establish a simulated cloth grid for the road point cloud with a user-defined grid size; at the initial moment, the grid lies above the entire road point cloud;
Step 1.1.2.3, project all point clouds in the scene and all grid particles onto the same horizontal plane, find the nearest neighbour point of each particle, and record its pre-projection elevation value IHV;
Step 1.1.2.4, for each movable grid particle, compute the displacement produced by internal factors and compare it with the elevation value of its current nearest neighbour point; if the particle's elevation becomes lower than or equal to IHV, reset the particle's elevation to IHV, and the particle becomes an immovable point;
Step 1.1.2.5, for each grid particle, compute the displacement produced by internal factors;
Step 1.1.2.6, repeat steps 1.1.2.4 and 1.1.2.5; when the number of iterations exceeds the user-set value, stop the simulation;
Step 1.1.2.7, compute the elevation difference between the point cloud and the grid particles;
Step 1.1.2.8, if the distance between a point and the grid particles is smaller than the preset threshold HCC, the point is regarded as a ground point; otherwise it is regarded as a non-ground point. The extracted ground points form the road point cloud.
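A heavily simplified, gravity-only sketch of the classification idea in steps 1.1.2.3 to 1.1.2.8 is shown below. The internal particle forces of step 1.1.2.5 are omitted, the cloud inversion is emulated by letting each particle settle at the lowest elevation in its grid cell, and all names and parameter values are illustrative:

```python
import numpy as np

def csf_ground(points, cell=1.0, hcc=0.2, step=0.1, max_iter=500):
    """Gravity-only CSF sketch: one cloth particle per grid cell falls until
    it reaches the elevation IHV recorded for its cell; points within hcc
    of the settled cloth are labelled ground (step 1.1.2.8 test)."""
    xy = points[:, :2]
    idx = np.floor((xy - xy.min(axis=0)) / cell).astype(int)
    keys = idx[:, 0] * (idx[:, 1].max() + 1) + idx[:, 1]  # one key per cell
    cloth = {}
    for k in np.unique(keys):
        ihv = points[keys == k, 2].min()   # cell's recorded elevation (IHV)
        z = points[:, 2].max() + 1.0       # cloth starts above all points
        for _ in range(max_iter):          # gravity displacement only
            z -= step
            if z <= ihv:
                z = ihv                    # particle becomes immovable
                break
        cloth[k] = z
    cloth_z = np.array([cloth[k] for k in keys])
    return np.abs(points[:, 2] - cloth_z) < hcc
```

The full CSF additionally moves neighbouring particles toward each other (the internal forces), which is what lets the cloth bridge isolated above-ground objects.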
Step 1.2 is implemented according to the following steps:
Step 1.2.1, use the scan-line direction of the road point cloud instead of the trajectory direction, and establish pseudo scan lines along the scan-line direction, i.e. slice the road point cloud. Let the coordinates of any point ti on the scan-line trajectory be (Xi, Yi); the mileage value Si corresponding to this point is:
Si = ∑j=1..i √((Xj - X(j-1))² + (Yj - Y(j-1))²) (2)
Let cn(X'n, Y'n) denote the position of any slice point, whose coordinates are given by:
X'n = Xi + Δn·cos(αi), Y'n = Yi + Δn·sin(αi) (3)
In formula (3), αi denotes the coordinate azimuth at ti, Δn denotes the distance between the slice position and its nearest neighbouring point on the trajectory, and (Xi, Yi) are the coordinates of that nearest trajectory point.
The position, direction, and length of a slice are determined by cn(X'n, Y'n), α'n, and D respectively. The distance d between a point and a slice segment is then computed; when this distance is smaller than half the slice length, the point lies on that slice. Let L denote the projection of the slice segment onto the xoy plane of the coordinate system, T the normal of the slice segment, P(Xp, Yp) the coordinates of a point, and d the distance from the slice segment to the point; the relationship between the vector cnP and the normal vector T is expressed as:
d = |(Xp - X'n)cos(α'n) + (Yp - Y'n)sin(α'n)| (4)
Step 1.2.2, road cross-section slice projection: a local coordinate system is rebuilt for each pseudo scan line to replace the geodetic coordinate system of the laser scanning system, and the scene point cloud is transformed into the local coordinate system of the corresponding pseudo scan line. The coordinate origin is defined as the centre of each road segment, the X axis is the trajectory direction, and the Y axis is perpendicular to the X axis. Let Pi(Xi, Yi, Zi) denote any point on a slice segment; its coordinates in the local system are:
X'i = (Xi - XC)cos α + (Yi - YC)sin α
Y'i = -(Xi - XC)sin α + (Yi - YC)cos α
Z'i = Zi (5)
In formula (5), (XC, YC) are the coordinates of the slice centre and α is the azimuth of the trajectory point.
The three-dimensional points are then projected onto the YOZ plane, with O' denoting the origin of the two-dimensional coordinate system; the coordinates of a projected point p'i on the pseudo scan line are:
p'i = (Y'i, Z'i) (6)
Step 1.2.3, extract the road point cloud with a sliding-window filter: the points on the projected cross-section slice are divided into several small intervals with interval threshold M, so the point cloud on a pseudo scan line is divided into k intervals, and a pseudo scan point is generated for each interval, including the interval containing the last point. The RANSAC algorithm is used for straight-line detection: a subset of points is randomly selected for fitting, the data deviations of the other points are computed, and the points that satisfy the threshold are screened out. A buffer is established around the screened points, and a sliding-window filter then removes the non-road points inside the buffer.
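The RANSAC straight-line detection used in step 1.2.3 can be illustrated with a minimal two-point sampling loop. This is a sketch under our own parameter choices and names, not the patent's implementation:

```python
import numpy as np

def ransac_line(points_2d, n_iter=200, tol=0.05, seed=0):
    """Minimal RANSAC line detector for a projected cross-section slice:
    fit a line through two random points, count inliers within tol, and
    keep the model with the most inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points_2d), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(points_2d), size=2, replace=False)
        p, q = points_2d[i], points_2d[j]
        d = q - p
        norm = np.hypot(*d)
        if norm == 0:
            continue
        # unit normal of the candidate line; |(x - p) . normal| is the
        # perpendicular distance of every point to that line
        normal = np.array([-d[1], d[0]]) / norm
        dist = np.abs((points_2d - p) @ normal)
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

The inlier set corresponds to the road cross-section profile; a buffer around it is then filtered with the sliding window.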
Step 2.1 is as follows:
Step 2.1.1, road point cloud projection: select the XOY plane as the horizontal plane and project the road point cloud onto the plane along the normal vector of the XOY plane. Let the original road point cloud set be Pn = {P1, P2, ..., Pn} and the projected set be Pn' = {P1', P2', ..., Pn'}. Select an arbitrary point t(x0, y0, z0) on plane π, whose normal vector is n. The height H of the projection of the vector tPi in the direction of n is computed as:
H = (tPi·n)/|n| (7)
H·n/|n| is then the projection vector of tPi in the direction of n; since it has the same direction as n, the coordinates of the projection point Pi' are computed as:
Pi' = Pi - H·n/|n| (8)
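The plane projection of step 2.1.1 amounts to subtracting the height along the unit normal from each point; a minimal sketch (function and variable names are ours):

```python
import numpy as np

def project_to_plane(points, t, n):
    """Project 3D points onto the plane through point t with normal n:
    H is the signed height of each point above the plane along n, and
    the projected point is P - H * n/|n|."""
    t = np.asarray(t, dtype=float)
    n = np.asarray(n, dtype=float)
    n_hat = n / np.linalg.norm(n)          # unit normal n/|n|
    h = (points - t) @ n_hat               # signed height H of each point
    return points - np.outer(h, n_hat)     # drop each point onto the plane
```

With the XOY plane (normal along z), this simply zeroes the z coordinate, as expected.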
Step 2.1.2, rasterization of the two-dimensional projection image: the projected point cloud obtained in step 2.1.1 is converted into a two-dimensional raster image with inverse distance weighted (IDW) interpolation; given the control points and the displacement vectors of the control point pairs, the inverse-distance-weighted influence of the control points on the surrounding pixels is computed, which yields the intensity value of every pixel in the image. Let Pi(Xi, Yi, Zi, Ii) denote the coordinates of any road point and its reflection intensity Ii, Rpixel the resolution of the generated raster image, and RIDW the interpolation radius. When point Pi is projected onto plane π, it becomes pj(Xj, Yj, Ij). The size of the raster image is computed as:
Width = (maxX - minX)/Rpixel, Height = (maxY - minY)/Rpixel (9)
In formula (9), maxX and minX are the maximum and minimum X values of the road point cloud, maxY and minY are the maximum and minimum Y values, and Width and Height are the size of the raster.
The intensity value of each pixel is obtained with the IDW algorithm. Whether a point Pk lies inside the interpolation radius is determined by the position of the interpolation point Pixel(m,n) and the interpolation radius; d(m,n,k) denotes the distance from Pk to the pixel centre Pixel(m,n), computed as:
d(m,n,k) = √((Xk - X(m,n))² + (Yk - Y(m,n))²) (10)
In formula (10), Xk and Yk are the coordinates of the k-th point and (X(m,n), Y(m,n)) is the centre of pixel Pixel(m,n).
The intensity value I(m,n) of the interpolation point is then expressed as:
I(m,n) = ∑k (Ik/d(m,n,k)²) / ∑k (1/d(m,n,k)²) (11)
By evaluating the intensity value of each grid cell and mapping it to the grey-level range of the image, the road point cloud is finally converted into an intensity image of the road point cloud.
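The IDW rasterization of step 2.1.2 can be sketched as a brute-force loop over pixels. Inverse-square weights and the resolution and radius defaults are our assumptions for illustration:

```python
import numpy as np

def idw_raster(points, r_pixel=0.5, r_idw=1.0):
    """Rasterize (x, y, intensity) points: each pixel value is the
    inverse-distance-weighted mean of the point intensities that fall
    within r_idw of the pixel centre."""
    x, y, inten = points[:, 0], points[:, 1], points[:, 2]
    width = int(np.ceil((x.max() - x.min()) / r_pixel))
    height = int(np.ceil((y.max() - y.min()) / r_pixel))
    img = np.zeros((height, width))
    for m in range(height):
        for n in range(width):
            cx = x.min() + (n + 0.5) * r_pixel   # pixel centre
            cy = y.min() + (m + 0.5) * r_pixel
            d = np.hypot(x - cx, y - cy)
            mask = d < r_idw                      # points inside the radius
            if not mask.any():
                continue
            w = 1.0 / np.maximum(d[mask], 1e-9) ** 2
            img[m, n] = (w * inten[mask]).sum() / w.sum()
    return img
```

A production version would bin points into cells first instead of scanning all points per pixel.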
Step 2.1.3, elimination of abnormal regions in the intensity image: the IDW algorithm removes most abnormal intensity points from the intensity image of the road point cloud; the small amount of remaining salt-and-pepper noise is processed with an improved adaptive median filter.
Step 2.1.4, adaptive threshold segmentation based on the integral image: in an integral image, each pixel value equals the sum of the rectangular region between that pixel and the top-left corner of the original image, so the sum of pixel values over any rectangular region of the original image can be computed from the integral image, and the adaptive threshold of each pixel is determined by the average grey value of its surrounding pixels. Let pi(xi, yi) be an original pixel with grey value I(xi, yi), and let intI(xi, yi) denote the integral image. Given the background radius s and the segmentation coefficient t, the sum of pixel grey values over the square window of side length 2s + 1 centred at pi is computed as:
sum = intI(xi+s, yi+s) - intI(xi-s-1, yi+s) - intI(xi+s, yi-s-1) + intI(xi-s-1, yi-s-1) (12)
When the rectangular region extends beyond the image boundary, it is clipped at the boundary. The grey value of the binary image is then computed as:
B(xi, yi) = 255 if I(xi, yi)·count > sum·(100 - t)/100, and 0 otherwise (13)
where count is the number of pixels in the (clipped) window.
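The integral-image adaptive thresholding of step 2.1.4 follows the well-known Bradley scheme; a minimal sketch with our own parameter defaults:

```python
import numpy as np

def adaptive_threshold(gray, s=2, t=15):
    """Mark a pixel foreground when its grey value exceeds (1 - t/100)
    times the mean of the (2s+1)x(2s+1) window around it, with the window
    sum taken from an integral image."""
    h, w = gray.shape
    # integral image with a zero top/left border row and column
    ii = np.zeros((h + 1, w + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(gray, axis=0), axis=1)
    out = np.zeros_like(gray, dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - s), min(h - 1, y + s)   # clip at the border
            x0, x1 = max(0, x - s), min(w - 1, x + s)
            count = (y1 - y0 + 1) * (x1 - x0 + 1)
            total = (ii[y1 + 1, x1 + 1] - ii[y0, x1 + 1]
                     - ii[y1 + 1, x0] + ii[y0, x0])
            if gray[y, x] * count > total * (100 - t) / 100:
                out[y, x] = 255
    return out
```

The integral image makes each window sum O(1), so the threshold adapts per pixel at linear overall cost.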
Step 2.2 is as follows:
Step 2.2.1, use a morphology-based closing operation to optimize the road sign lines and repair and fill the holes inside them. An original image Mask and a marker image Marker are established; Mask serves as the reconstruction reference, while Marker is obtained by repeatedly dilating the image from step 2.1.4 with morphological operations and converging under the constraint of Mask, i.e. Mask ≥ Marker. Finally, Marker is complemented and then subtracted from Mask to complete the filling of the white holes.
Step 2.2.2, morphological optimization of the road sign lines: the jagging of the binary image boundaries is processed with the morphological antialiasing (MLAA) method, specifically:
Step 2.2.2.1, first find the obviously discontinuous pixel regions in the hole-filled image from step 2.2.1; these are the edges to be processed;
Step 2.2.2.2, classify the edges to be processed into different patterns and re-vectorize the marked edges;
Step 2.2.2.3, keep the smooth parts, compute blending weights from the edge vectorization, and blend each edge pixel with its surrounding pixels according to the weights, thereby smoothing the edges.
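The hole-filling idea behind the closing operation in step 2.2.1 can be sketched in plain NumPy with a 3x3 structuring element on a binary {0, 1} image. This single-pass closing is only an illustration, not the full Mask/Marker morphological reconstruction the patent describes:

```python
import numpy as np

def dilate(img):
    """3x3 binary dilation (max over each pixel's neighbourhood)."""
    p = np.pad(img, 1)
    h, w = img.shape
    return np.max([p[i:i + h, j:j + w] for i in range(3) for j in range(3)], axis=0)

def erode(img):
    """3x3 binary erosion (zero padding, so objects shrink at the border)."""
    p = np.pad(img, 1)
    h, w = img.shape
    return np.min([p[i:i + h, j:j + w] for i in range(3) for j in range(3)], axis=0)

def close_holes(img):
    """Morphological closing (dilate then erode): fills small holes
    inside road-marking regions."""
    return erode(dilate(img))
```

Larger holes require either a bigger structuring element or the iterative reconstruction with the Mask constraint.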
Step 3.1 uses Euclidean clustering to divide the road sign line point cloud into independent objects, specifically:
Step 3.1.1, let the road sign line point set in the scene be P = {Pi(Xi, Yi), i = 1, 2, 3, ...}, let the parameter D denote the clustering radius and C the clustering circle of radius D; take an arbitrary point p from P and put it into a point-set cluster;
Step 3.1.2, draw a circle of radius D centred at p and merge all points inside the circle into the previously defined cluster;
Step 3.1.3, using the other points of the cluster as new circle centres, repeat step 3.1.2 until no new point joins the cluster;
Step 3.1.4, repeat steps 3.1.1 to 3.1.3 with the remaining points of P to generate new clusters; each cluster is an independent object.
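The clustering loop of steps 3.1.1 to 3.1.4 can be sketched as region growing over brute-force distances (a KD-tree would be used in practice; names are ours):

```python
import numpy as np

def euclidean_cluster(points, radius=1.0):
    """Grow each cluster by repeatedly absorbing every unlabelled point
    within `radius` of a cluster member; returns one label per point."""
    n = len(points)
    labels = np.full(n, -1)
    cluster = 0
    for start in range(n):
        if labels[start] != -1:
            continue                      # already assigned to a cluster
        queue = [start]
        labels[start] = cluster
        while queue:                      # steps 3.1.2 and 3.1.3
            p = queue.pop()
            d = np.linalg.norm(points - points[p], axis=1)
            for q in np.where((d <= radius) & (labels == -1))[0]:
                labels[q] = cluster
                queue.append(q)
        cluster += 1                      # step 3.1.4: next independent object
    return labels
```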
Step 3.2 is as follows:
Step 3.2.1, road sign line feature extraction: build the minimum bounding rectangle (MBR) of each independent object obtained in step 3.1;
Step 3.2.2, lane line classification: define the aspect ratio β of the MBR obtained in step 3.2.1 as the geometric feature of the object, and extract solid and dashed lane lines from the clusters according to the aspect ratio; when neither the solid nor the dashed lane line condition is satisfied, the marking is regarded as a small-size road sign line (an arrow marking) or a zebra crossing;
Step 3.2.3, zebra crossing classification: a PCA-based method determines the arrangement of the point cloud clusters in order to extract rectangles arranged side by side and classify them as zebra crossings. For an arbitrary rectangular marking R, compute the coordinates of its centroid r on the xoy plane, then compute its covariance matrix CR; by eigenvalue decomposition of CR, select the eigenvector associated with the largest eigenvalue as the distribution feature of R and denote it vr, which represents the direction of the rectangular marking R. To determine the arrangement of R, first search along both sides of R with search width w; if a rectangular marking Q with centroid q and distribution feature vq is found inside the search region, and the condition (vr ∥ vq) ∧ (vr ∥ vrq) ∧ (vq ∥ vrq) holds, where vrq denotes the direction of the vector from r to q, then R is classified as a zebra crossing.
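The distribution feature of step 3.2.3 and the parallelism tests in the zebra-crossing condition can be sketched as follows (function names and the tolerance are ours):

```python
import numpy as np

def principal_direction(points_2d):
    """Distribution feature v_r of a rectangular marking: the eigenvector
    of the covariance matrix associated with the largest eigenvalue."""
    centred = points_2d - points_2d.mean(axis=0)
    cov = centred.T @ centred / len(points_2d)
    vals, vecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    return vecs[:, np.argmax(vals)]

def parallel(u, v, tol=1e-6):
    """True when 2D vectors u and v are parallel (zero cross product)."""
    return abs(u[0] * v[1] - u[1] * v[0]) < tol
```

For a candidate pair R, Q one would evaluate `parallel` on (vr, vq), (vr, vrq), and (vq, vrq), with vrq the normalized vector between the two centroids.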
Step 3.3 is as follows:
Step 3.3.1, dataset preparation: for the unclassified small-size road sign lines in the scene, acquire their data, manually box-select the data, manually select the point-set categories, annotate the data, and split the dataset;
Step 3.3.2, build a top-down network beside the backbone branch of the original PointNet++ network, apply the PointNet network locally and iteratively to keep extracting features, and classify the dataset from step 3.3.1.
The beneficial effects of the present invention are:
Aiming at the problem that road boundaries cannot be extracted effectively, the road sign line detection and recognition method based on MLS point clouds uses a progressive road point cloud extraction method to extract the road point cloud. Aiming at the over-segmentation and under-segmentation produced during road sign line extraction, the jagging of the point cloud projection intensity image, and the natural wear of road sign lines, a road sign line extraction method optimized by adaptive threshold segmentation and morphological methods is designed. Finally, a road sign line classification method combining multi-scale clustering and PointNet++ solves the problem of low classification accuracy caused by similar features, effectively improving the detection and recognition accuracy of road sign lines.
附图说明BRIEF DESCRIPTION OF THE DRAWINGS
图1是原始场景数据经过本发明步骤1处理后所获得的道路点云提取结果图,其中,图1(a)是原始场景图,图1(b)是预处理后的场景图,图1(c)是经过地面点滤波的场景图,图1(d)是场景道路点云粗提取结果图,图1(e)是场景非道路点云提取结果图,图1(f)是场景道路点云精细提取结果图。Figure 1 is a road point cloud extraction result diagram obtained after the original scene data is processed by step 1 of the present invention, wherein Figure 1(a) is the original scene diagram, Figure 1(b) is the preprocessed scene diagram, Figure 1(c) is the scene diagram after ground point filtering, Figure 1(d) is the scene road point cloud coarse extraction result diagram, Figure 1(e) is the scene non-road point cloud extraction result diagram, and Figure 1(f) is the scene road point cloud fine extraction result diagram.
图2是选取的三个场景经过本发明步骤2优化后的最终道路标志线点云提取结果图,其中,图2(a)是选取的一个场景优化后的最终道路标志线点云提取结果图,图2(b)是选取的一个场景优化后的最终道路标志线点云提取结果图,图2(c)是选取的另一个场景优化后的最终道路标志线点云提取结果图。Figure 2 is a diagram of the final road sign line point cloud extraction results of the three selected scenes after optimization by step 2 of the present invention, wherein Figure 2(a) is a diagram of the final road sign line point cloud extraction results after optimization of a selected scene, Figure 2(b) is a diagram of the final road sign line point cloud extraction results after optimization of a selected scene, and Figure 2(c) is a diagram of the final road sign line point cloud extraction results after optimization of another selected scene.
Figure 3 shows the road sign line classification results obtained after processing by step 3 of the invention.
DETAILED DESCRIPTION
The invention is described in detail below with reference to the accompanying drawings and specific embodiments.
Embodiment 1
This embodiment provides a road sign line detection and recognition method based on MLS (Mobile Laser Scanning) point clouds, implemented according to the following steps:
Step 1, progressive road point cloud extraction;
Step 1.1, coarse road extraction using an improved Cloth Simulation Filter (CSF) algorithm;
Step 1.2, fine road point cloud extraction using a pseudo-scan-line road point cloud extraction algorithm;
Step 1.3, road point cloud boundary optimization: road points are clustered with the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm, constraint rules are set, and the boundary is refitted with the Random Sample Consensus (RANSAC) method, bringing the extracted road boundary closer to the true boundary and improving the extraction accuracy of the road point cloud;
Step 2, road sign line extraction based on adaptive threshold segmentation and morphological optimization;
Step 2.1, road sign line segmentation;
Step 2.2, road sign line segmentation optimization;
Step 2.3, point cloud generation by back-projecting the depth image saved during point cloud projection and the binary image obtained by adaptive threshold segmentation, realizing the extraction of the road sign line point cloud;
Step 3, road sign line classification based on improved PointNet++;
Step 3.1, road sign line clustering and segmentation;
Step 3.2, lane line and zebra crossing classification;
Step 3.3, small-size road sign line classification based on improved PointNet++.
Embodiment 2
This embodiment provides a road sign line detection and recognition method based on MLS point clouds. Building on Embodiment 1, step 1, progressive road point cloud extraction, is specified as follows:
Step 1.1 is implemented according to the following steps:
Step 1.1.1, scene preprocessing: a discrete point removal method based on a statistical filter is applied to the original scene shown in Figure 1(a); the preprocessed scene is shown in Figure 1(b). Specifically, all points in the scene are indexed, and the average distance d_n between any point p_i and its n nearest neighbors is first computed. Assuming that d_n follows a Gaussian distribution, its mean μ and standard deviation σ are computed and used to establish a standard range; when the average distance d_n of a point exceeds this range, the point is defined as a discrete point and removed.
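As an illustrative sketch (not the patent's implementation; the neighbor count and the standard range mu + std_ratio * sigma are assumptions), the statistical outlier removal of step 1.1.1 can be written as:

```python
import numpy as np

def remove_outliers(points, n_neighbors=8, std_ratio=1.0):
    # Mean distance d_n of every point to its n nearest neighbours
    # (brute force here; a KD-tree would replace this on real MLS data).
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    np.fill_diagonal(dist, np.inf)
    dn = np.sort(dist, axis=1)[:, :n_neighbors].mean(axis=1)
    # Gaussian assumption: keep points with d_n <= mu + std_ratio * sigma.
    mu, sigma = dn.mean(), dn.std()
    keep = dn <= mu + std_ratio * sigma
    return points[keep], keep
```

A point far from the scanned surface gets a large d_n relative to the population and is discarded; dense road points survive.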
The average distance d_n of point p_i to its n nearest neighbors p_j is:

d_n = (1/n) Σ_{j=1}^{n} ‖p_i − p_j‖ (1)
Step 1.1.2, coarse extraction of the road point cloud: the improved CSF algorithm is used to filter the ground and preliminarily extract the road point cloud; the scene after ground point filtering is shown in Figure 1(c). Specifically:
Step 1.1.2.1, input and invert the denoised road point cloud data;
Step 1.1.2.2, build a simulated cloth grid over the road point cloud with a user-defined grid size; at the initial moment the grid lies above the entire point cloud;
Step 1.1.2.3, project all scene points and grid particles onto the same horizontal plane, find the nearest neighbor point of each particle, and record its pre-projection elevation value IHV;
Step 1.1.2.4, for each movable grid particle, since the improved model ignores external factors, compute the displacement produced by internal factors and compare the particle's elevation with that of its current nearest neighbor; if the particle's elevation is lower than or equal to IHV, reset it to IHV and mark the particle as immovable;
Step 1.1.2.5, for each grid particle, compute the displacement produced by internal factors;
Step 1.1.2.6, repeat steps 1.1.2.4 and 1.1.2.5, stopping the simulation when the number of iterations exceeds the user-set value;
Step 1.1.2.7, compute the elevation difference between the point cloud and the grid particles;
Step 1.1.2.8, if the distance between a point and the grid particles is smaller than the preset threshold HCC, the point is considered a ground point, otherwise a non-ground point; the extracted ground points constitute the road point cloud, completing the coarse road extraction. The coarse road point cloud extraction result is shown in Figure 1(d), and the non-road point cloud extraction result in Figure 1(e).
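The final classification test of step 1.1.2.8 can be sketched without the full cloth simulation by approximating the settled cloth with the per-cell minimum elevation (a strong simplification; the cell size and HCC value are assumptions, and the real CSF iterates particle displacements as described above):

```python
import numpy as np

def classify_ground(points, cell=1.0, hcc=0.2):
    # Approximate the settled cloth by the minimum elevation per XY cell,
    # then apply the step 1.1.2.8 test: ground if height above cloth < HCC.
    ij = [tuple(c) for c in np.floor(points[:, :2] / cell).astype(int)]
    cloth = {}
    for c, z in zip(ij, points[:, 2]):
        cloth[c] = min(cloth.get(c, np.inf), z)
    return np.array([z - cloth[c] < hcc for c, z in zip(ij, points[:, 2])])
```

Points on the pavement sit within HCC of the cloth surface and are kept; curbs, poles, and vegetation rise above it and are rejected.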
Step 1.2 is implemented according to the following steps:
Step 1.2.1, build pseudo scan lines for the road point cloud. Extracting the road point cloud only requires the planar information of the driving trajectory, i.e., the x and y coordinates of the points; the scan line direction of the road point cloud is used instead of the trajectory direction, and pseudo scan lines are built along it, i.e., the road point cloud is sliced. Let the coordinates of any point P_ri on the scan line trajectory be (X_i, Y_i); the mileage value S_i of point P_ri is the cumulative planar distance along the trajectory:

S_i = Σ_{k=2}^{i} √((X_k − X_{k−1})² + (Y_k − Y_{k−1})²), with S_1 = 0 (2)
Let c_n(X'_n, Y'_n) denote the position of any slice point; from the nearest trajectory point (X_i, Y_i), its coordinates are

X'_n = X_i + Δn·cos(α'_n), Y'_n = Y_i + Δn·sin(α'_n) (3)

In formula (3), α'_n denotes the coordinate azimuth at the slice position, Δn denotes the distance between the slice position and its nearest neighbor point on the trajectory, and (X_i, Y_i) denotes the coordinates of that nearest trajectory point.
The slice position, direction, and length are determined by c_n(X'_n, Y'_n), α'_n, and D respectively. Slicing the point cloud also requires computing the distance d from a point to the slice segment; when this distance is less than half the slice length, the point belongs to that slice. Let L denote the projection of the slice segment onto the xoy plane of the coordinate system, T the normal of the slice segment, and P(X_p, Y_p) the coordinates of a point; the distance d between the slice segment and the point is

d = |(X_p − X'_n)cos(α'_n) + (Y_p − Y'_n)sin(α'_n)| (4)
Step 1.2.2, road cross-section slice projection: a local coordinate system of the pseudo scan line is constructed to replace the geodetic coordinate system of the LiDAR system, and the scene point cloud is transformed into the local coordinate system of its corresponding pseudo scan line. The coordinate origin is defined as the center of each road segment, the X axis follows the trajectory direction, and the Y axis is perpendicular to the X axis. Let P_i(X_i, Y_i, Z_i) denote any point on a slice segment; its local coordinates are given by formula (5), where (X_C, Y_C) denotes the coordinates of the slice center and α denotes the azimuth of the trajectory point.
The three-dimensional points are then projected onto the YOZ plane, with the slice center taken as the origin of the two-dimensional coordinate system; the coordinates of a projected point on the pseudo scan line are given by formula (6).
Step 1.2.3, extract the road point cloud with a sliding window filter. The points on each projected cross-section slice are divided into small intervals with interval threshold M, partitioning the point cloud on the pseudo scan line into k intervals, the last of which yields the pseudo scan point coordinates generated for the final interval. The RANSAC algorithm is used for line detection: a subset of points is randomly selected and fitted, the deviation of the remaining points is measured, and points within the threshold are retained. A buffer is built around the retained points, after which the sliding window filter removes the non-road points inside the buffer, achieving fine extraction of the road point cloud; the result is shown in Figure 1(f).
The sliding window filter exploits the elevation difference between the road point cloud and curbs by building small windows to remove non-road points. First, rectangular windows are built within the k intervals; each window is the minimum enclosing window of the points in its interval, with height dh_i. A road threshold representing the highest elevation of road points is then set; when dh_i exceeds this threshold, the window contains noise points higher than the road surface and is marked as a key window.
Step 1.3, road point cloud boundary optimization: road points are clustered with the DBSCAN algorithm, constraint rules are set, and the boundary is then refitted with the RANSAC method, bringing the extracted road boundary closer to the true boundary and improving the extraction accuracy of the road point cloud.
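The RANSAC boundary refit of step 1.3 can be sketched for a 2D line as follows (iteration count, tolerance, and seed are illustrative assumptions, not the patent's parameters):

```python
import numpy as np

def ransac_line(points, n_iter=200, tol=0.1, seed=0):
    # Repeatedly sample two points, count inliers within tol of the line
    # they define, and keep the largest consensus set (boundary refit).
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, d = points[i], points[j] - points[i]
        norm = np.hypot(d[0], d[1])
        if norm == 0.0:
            continue
        # Perpendicular distance of every point to the sampled line.
        dist = np.abs((points - p) @ np.array([-d[1], d[0]])) / norm
        inliers = dist < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best
```

Because the final line is chosen by consensus, stray boundary points produced by over- or under-segmentation do not pull the fit away from the true curb line.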
Embodiment 3
This embodiment provides a road sign line detection and recognition method based on MLS point clouds. Building on Embodiments 1-2, step 2, road sign line extraction based on adaptive threshold segmentation and morphological optimization, is specified as follows:
Step 2.1 is implemented according to the following steps:
Step 2.1.1, road point cloud projection: the XOY plane is taken as the horizontal plane, and the road point cloud is projected onto it along the plane's normal vector. Let the original road point cloud set be P_n = {P_1, P_2, ..., P_n} and the projected set P_n' = {P_1', P_2', ..., P_n'}. An arbitrary point t(x_0, y_0, z_0) is chosen on plane π, whose normal vector is n; the height H of point P_i above the plane is the projection of the vector from t to P_i onto n:

H = (tP_i · n) / |n| (7)

Since this projection vector has the same direction as n, the coordinates of the projected point P_i' are

P_i' = P_i − H·n/|n| (8)
Step 2.1.2, rasterization of the two-dimensional projection image: the projected point cloud from step 2.1.1 is converted into a two-dimensional raster image. Inverse Distance Weighting (IDW) interpolation is used: given the control points and their displacement vectors, the inverse-distance-weighted influence of each control point on the surrounding pixels is computed, yielding the intensity value of every pixel in the image. Let P_i(X_i, Y_i, Z_i, I_i) denote the coordinates of any road point and its reflection intensity I_i, R_pixel the resolution of the generated raster image, and R_IDW the interpolation radius. When point P_i is projected onto plane π it becomes p_i(X_j, Y_j, I_j), and the raster image size is

Width = (maxX − minX) / R_pixel, Height = (maxY − minY) / R_pixel (9)

In formula (9), maxX and minX denote the maximum and minimum X values of the road point cloud, maxY and minY the maximum and minimum Y values, and Width and Height the size of the raster.
The intensity value of each pixel is obtained with the IDW algorithm. The points P_k lying within the interpolation radius are determined by the position of the interpolation point Pixel(m,n) and the radius; d(m,n,k) denotes the distance from P_k to the pixel center Pixel(m,n):

d(m,n,k) = √((X_k − X_(m,n))² + (Y_k − Y_(m,n))²) (10)

In formula (10), X_k and Y_k denote the X and Y coordinates of the k-th point, and (X_(m,n), Y_(m,n)) the coordinates of the pixel center.
The intensity value I(m,n) of the interpolation point is then the inverse-distance-weighted average of the intensities I_k of the points within the radius:

I(m,n) = Σ_k (I_k / d(m,n,k)²) / Σ_k (1 / d(m,n,k)²) (11)
By evaluating the intensity value of every grid cell and mapping it to the gray-level range of the image, the road point cloud is finally converted into an intensity image of the road point cloud;
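Steps 2.1.1-2.1.2 can be sketched together as one rasterization routine (cell-center convention, default radii, and the weight exponent p = 2 are assumptions for illustration):

```python
import numpy as np

def idw_raster(xy, intensities, r_pixel=1.0, r_idw=1.5, p=2):
    # Grid size from the XY extent (cf. formula (9)); one IDW query per cell.
    minx, miny = xy.min(axis=0)
    maxx, maxy = xy.max(axis=0)
    width = int(np.ceil((maxx - minx) / r_pixel)) + 1
    height = int(np.ceil((maxy - miny) / r_pixel)) + 1
    img = np.zeros((height, width))
    for m in range(height):
        for n in range(width):
            cx = minx + (n + 0.5) * r_pixel   # pixel centre Pixel(m, n)
            cy = miny + (m + 0.5) * r_pixel
            d = np.hypot(xy[:, 0] - cx, xy[:, 1] - cy)
            near = d < r_idw
            if near.any():
                w = 1.0 / np.maximum(d[near], 1e-9) ** p
                img[m, n] = (w * intensities[near]).sum() / w.sum()
    return img
```

Cells with no point inside the interpolation radius stay zero; everything else receives a smooth, distance-weighted intensity, which is what suppresses most isolated intensity outliers before the median filtering of step 2.1.3.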
Step 2.1.3, elimination of abnormal regions in the intensity image: the IDW step removes most abnormal intensity points from the intensity image of the road point cloud; for the small amount of remaining salt-and-pepper noise, an improved adaptive median filter is applied to the intensity image.
The improved adaptive median filtering of the intensity image proceeds as follows: 1) let (x, y) be a pixel of the intensity image with value f(x, y), and let W_ij be the moving window, with four associated indicators W_max, f_max, f_min, and f_mid denoting the maximum window size, the maximum gray value, the minimum gray value, and the median gray value of the window; 2) if f_min < f(x, y) < f_max, the pixel is judged not to be a noise point and is output; otherwise it is judged to be a noise point; 3) for a judged noise point, test whether the window median f_mid is itself noise: if f_min < f_mid < f_max, the median is not noise and is output; if f_mid is noise, enlarge the window, W_ij = W_ij + 1, and while W_ij < W_max repeat operation 2); 4) if W_ij ≥ W_max, remove all maxima and minima within the window and output the weighted average of the remaining pixels.
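A minimal sketch of this adaptive median filter follows (the window-growth loop is my reading of operations 2)-4); the final weighted-average fallback is simplified to outputting the last median):

```python
import numpy as np

def adaptive_median(img, w_max=7):
    # Improved adaptive median filter: pixels strictly between the local
    # extremes pass through; suspected noise is replaced by the window
    # median, growing the window while the median is itself an extreme.
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            k = 1
            while True:
                win = img[max(0, y - k):y + k + 1, max(0, x - k):x + k + 1]
                fmin, fmax, fmid = win.min(), win.max(), np.median(win)
                if fmin < img[y, x] < fmax:   # not a noise pixel: keep it
                    break
                if fmin < fmid < fmax:        # clean median: output it
                    out[y, x] = fmid
                    break
                k += 1                        # median still extreme: grow
                if 2 * k + 1 > w_max:
                    out[y, x] = fmid
                    break
    return out
```

A lone saturated pixel fails the f_min < f < f_max test, while the window median comes from its clean neighbours, so the spike is replaced without blurring marking edges the way a fixed-size median filter would.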
Step 2.1.4, adaptive threshold segmentation based on the integral image: in an integral image, each pixel value is the sum of the rectangular region above and to the left of it in the original image, so the sum of pixel values over any rectangular region of the original image can be computed from the integral image; the adaptive threshold of each pixel is determined by the average gray value of its surrounding pixels. Let the original pixel be p_i(x_i, y_i) with gray value I(x_i, y_i), and let the integral image be intI(x_i, y_i). Given the background radius s and segmentation coefficient t, the sum of the gray values of the pixels in the square region of side 2s+1 centered on p_i is

sum(x_i, y_i) = intI(x_i+s, y_i+s) − intI(x_i−s−1, y_i+s) − intI(x_i+s, y_i−s−1) + intI(x_i−s−1, y_i−s−1) (12)
When the rectangular region extends beyond the image boundary, it is clipped to the image boundary; the gray value of the binary image is then obtained by comparing each pixel with its local mean: a pixel is set to 255 when its gray value times the number of pixels in the region exceeds the region sum scaled by the segmentation coefficient t, and to 0 otherwise (formula (13)).
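The integral-image thresholding of step 2.1.4 can be sketched as follows. Note one assumption: because road paint is bright on dark asphalt, this sketch keeps pixels that *exceed* the local mean by the factor (1 + t); the patent's exact comparison in formula (13) may be stated differently:

```python
import numpy as np

def integral_threshold(img, s=2, t=0.15):
    # Integral image: ii[y+1, x+1] = sum of img[:y+1, :x+1]; any window sum
    # then costs four lookups. Bright pixels (road paint) are kept when
    # they exceed the local mean by the factor (1 + t).
    h, w = img.shape
    ii = np.pad(img.astype(np.int64).cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - s), min(h - 1, y + s)   # clip at the border
            x0, x1 = max(0, x - s), min(w - 1, x + s)
            area = (y1 - y0 + 1) * (x1 - x0 + 1)
            total = ii[y1 + 1, x1 + 1] - ii[y0, x1 + 1] - ii[y1 + 1, x0] + ii[y0, x0]
            if img[y, x] * area > total * (1 + t):
                out[y, x] = 255
    return out
```

Because the threshold is local, a marking survives even when overall illumination or reflectance drifts across the scan, which a single global threshold would not handle.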
Step 2.2 is specified as follows:
Step 2.2.1: the road sign lines are optimized with a morphology-based closing operation, repairing and filling the holes inside the sign lines. An original image Mask and a marker image Marker are built; the Mask serves as the reconstruction reference, while the Marker is obtained by repeatedly dilating the image from step 2.1.4 with morphological operations and converging under the constraint of the Mask, i.e., Mask ≥ Marker. Finally, the Marker is complemented and subtracted from the Mask to complete the filling of the white holes.
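The Mask/Marker reconstruction can be sketched in pure NumPy as border-seeded hole filling (an equivalent formulation of the step above under the assumption that holes are background regions not reachable from the image border; 4-connectivity is assumed):

```python
import numpy as np

def fill_holes(mask):
    # Morphological reconstruction of the background from the image border:
    # background pixels not reachable from the border are holes, so the
    # complement of the reconstruction is the hole-filled sign line.
    inv = ~mask
    marker = np.zeros_like(mask)
    marker[0, :], marker[-1, :] = inv[0, :], inv[-1, :]
    marker[:, 0], marker[:, -1] = inv[:, 0], inv[:, -1]
    while True:
        grown = marker.copy()
        grown[1:, :] |= marker[:-1, :]
        grown[:-1, :] |= marker[1:, :]
        grown[:, 1:] |= marker[:, :-1]
        grown[:, :-1] |= marker[:, 1:]
        grown &= inv                      # Marker constrained by the Mask
        if np.array_equal(grown, marker):
            return ~grown
        marker = grown
    # (scipy.ndimage.binary_fill_holes performs the same operation.)
```

The dilation grows the Marker only inside the Mask complement, exactly the "continuous dilation then convergence under Mask ≥ Marker" behavior described above.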
Step 2.2.2, morphological optimization of the road sign lines: the jagged boundary of the binary image is processed with Morphological Antialiasing (MLAA), specifically:
Step 2.2.2.1, first find the visibly discontinuous pixel runs in the hole-filled image from step 2.2.1; these form the edges to be processed;
Step 2.2.2.2, classify the edges to be processed into different patterns and re-vectorize the marked edges;
Step 2.2.2.3, keep the smooth parts, compute blending weights from the vectorized edges, and blend each edge pixel with its surrounding pixels according to the weights, achieving edge smoothing.
The road sign line extraction results of this step are shown in Figure 2, where Figures 2(a), 2(b), and 2(c) show the optimized final road sign line point cloud extraction results for the three selected scenes.
Embodiment 4
This embodiment provides a road sign line detection and recognition method based on MLS point clouds. Building on Embodiments 1-3, step 3, road sign line classification based on improved PointNet++, is implemented according to the following steps:
Step 3.1 uses Euclidean clustering to partition the road sign line point cloud into independent objects, specifically:
Step 3.1.1, let the road sign line point set in the scene be P = {P_i(X_i, Y_i), i = 1, 2, 3, ...}, let the parameter D denote the clustering radius and C the clustering circle of radius D; take an arbitrary point p from P and place it in a cluster;
Step 3.1.2, draw a circle of radius D centered on p and merge all points inside the circle into the previously defined cluster;
Step 3.1.3, using the other points of the cluster as new circle centers, repeat step 3.1.2 until no new points join the cluster;
Step 3.1.4, repeat steps 3.1.1 through 3.1.3, generating new clusters from the remaining points of P; each cluster is an independent object.
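Steps 3.1.1-3.1.4 amount to region growing and can be sketched as (brute-force neighbor search for clarity; a KD-tree would be used on full scenes):

```python
import numpy as np

def euclidean_cluster(points, radius):
    # Region growing: seed a cluster with an unvisited point, absorb every
    # point within `radius` of any member, repeat until the frontier empties.
    labels = -np.ones(len(points), dtype=int)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        labels[seed] = current
        frontier = [seed]
        while frontier:
            i = frontier.pop()
            d = np.hypot(points[:, 0] - points[i, 0], points[:, 1] - points[i, 1])
            for j in np.nonzero((d <= radius) & (labels == -1))[0]:
                labels[j] = current
                frontier.append(int(j))
        current += 1
    return labels
```

Markings separated by more than the clustering radius D end up with different labels, which is exactly the independent-object split the later MBR and PCA features operate on.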
Step 3.2 is specified as follows:
Step 3.2.1, road sign line feature extraction: a minimum bounding rectangle (MBR) is built for each independent object separated in step 3.1;
The MBR is built as follows: 1) build the simple axis-aligned bounding rectangle of the object and find the object's center point. Let the point set of the object be p = {p_i(x_i, y_i), i = 1, 2, 3, ...}; the center point is the mean of the points: x_0 = (1/n)Σ x_i, y_0 = (1/n)Σ y_i;
2) rotate the bounding rectangle about the center point with a rotation step of 1 degree; at each step, compute the area of the bounding rectangle and record its rotation angle and vertex coordinates;
3) compare the areas of all bounding rectangles obtained in 2); the one with the smallest area is the object's MBR;
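The rotating-rectangle search of steps 1)-3) can be sketched as (sweeping 0-90 degrees suffices because the box repeats with swapped sides every 90 degrees; the return signature is an illustrative choice):

```python
import numpy as np

def min_bounding_rect(points, step_deg=1.0):
    # Rotate about the centroid in 1-degree steps; the axis-aligned box of
    # the rotated points with the smallest area is the (approximate) MBR.
    c = points.mean(axis=0)
    best_area, best_dims = np.inf, (0.0, 0.0)
    for deg in np.arange(0.0, 90.0, step_deg):
        a = np.radians(deg)
        rot = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
        q = (points - c) @ rot.T
        w = np.ptp(q[:, 0])
        h = np.ptp(q[:, 1])
        if w * h < best_area:
            best_area, best_dims = w * h, (w, h)
    return best_area, best_dims
```

The aspect ratio β used in step 3.2.2 is then simply max(w, h) / min(w, h) of the returned dimensions.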
Step 3.2.2, lane line classification: the aspect ratio β of the MBR obtained in step 3.2.1 is defined as the geometric feature of the object, and solid and dashed lane lines are extracted from the clusters according to it: when β > 37 the point cloud cluster is considered a solid lane line; when 13 < β < 27 it is considered a dashed lane line; when neither condition holds, the sign is considered a small-size road sign line, i.e., an arrow sign or a zebra crossing;
Step 3.2.3, zebra crossing classification: a method based on Principal Component Analysis (PCA) is used to determine the arrangement of the point cloud clusters, extracting the rectangles arranged side by side and classifying them as zebra crossings. For any rectangular sign R, the coordinates of its centroid on the xoy plane are computed, and its covariance matrix C_R is formed; by eigenvalue decomposition of C_R, the eigenvector associated with the largest eigenvalue is selected as the distribution feature of R and denoted v_r, which represents the direction of the rectangular sign R. To judge the arrangement of sign R, a search is first performed along both sides of R with search width w; if a rectangular sign Q with centroid q and distribution feature v_q is found in the search area and satisfies (v_r // v_q) ∧ (v_r // v_rq) ∧ (v_q // v_rq), sign R is classified as a zebra crossing.
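The distribution feature v_r and the parallelism test can be sketched as (the 10-degree tolerance is an assumption; the patent does not state one):

```python
import numpy as np

def principal_direction(points):
    # Distribution feature v_r of one rectangular marking: the unit
    # eigenvector of the XY covariance matrix with the largest eigenvalue.
    c = points.mean(axis=0)
    cov = np.cov((points - c).T)
    vals, vecs = np.linalg.eigh(cov)
    v = vecs[:, np.argmax(vals)]
    return c, v / np.linalg.norm(v)

def roughly_parallel(u, v, tol_deg=10.0):
    # The v_r // v_q test, with sign ignored (stripes have no orientation).
    return abs(u @ v) > np.cos(np.radians(tol_deg))
```

Applying `roughly_parallel` to the three pairs in the rule (v_r, v_q), (v_r, v_rq), and (v_q, v_rq) reproduces the zebra-crossing decision for a pair of neighboring stripes.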
Step 3.3 is specified as follows:
Step 3.3.1, data set construction: for the unclassified small-size road sign lines present in the scene, their data are acquired, the data are manually box-selected, point-set categories are manually assigned, the data are annotated, and the data set is partitioned;
Step 3.3.2, a top-down network is built alongside the backbone of the original PointNet++ network, and the PointNet network is applied locally and iteratively to keep extracting features; the data set from step 3.3.1 is classified, yielding the road sign line classification results shown in Figure 3.
Claims (9)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311815103.9A CN117876984A (en) | 2023-12-27 | 2023-12-27 | Road sign detection and recognition method based on MLS point cloud |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117876984A true CN117876984A (en) | 2024-04-12 |
Family
ID=90591149
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118522004A (en) * | 2024-06-13 | 2024-08-20 | 大连慕泽科技有限公司 | Multifunctional road detection vehicle |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||