CN117876984A - Road sign detection and recognition method based on MLS point cloud - Google Patents


Info

Publication number
CN117876984A
Authority
CN
China
Prior art keywords
point
road
point cloud
image
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311815103.9A
Other languages
Chinese (zh)
Inventor
宁小娟
赵昊罡
梁杰炜
石争浩
金海燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Technology
Original Assignee
Xian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Technology filed Critical Xian University of Technology
Priority to CN202311815103.9A priority Critical patent/CN117876984A/en
Publication of CN117876984A publication Critical patent/CN117876984A/en
Pending legal-status Critical Current


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 - Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/136 - Segmentation; Edge detection involving thresholding
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/155 - Segmentation; Edge detection involving morphological operators
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20112 - Image segmentation details
    • G06T2207/20132 - Image cropping
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a road marking line detection and recognition method based on MLS point clouds. To address the problem that road boundaries cannot be extracted effectively, a progressive road point cloud extraction method is adopted to extract the road point cloud. To address over-segmentation and under-segmentation during road marking line extraction, the jagged edges of the point cloud projection intensity image, and naturally incomplete road marking lines, a road marking line extraction method based on adaptive threshold segmentation and morphological optimization is designed. Finally, a road marking line classification method combining multi-scale clustering and PointNet++ solves the problem of low classification accuracy caused by feature similarity, effectively improving the detection and recognition accuracy of road marking lines.

Description

Road marking line detection and recognition method based on MLS point cloud

Technical Field

The present invention belongs to the technical field of computer vision and artificial intelligence, and relates to a road marking line detection and recognition method based on MLS point clouds.

Background Art

Road marking line detection is one of the key perception modules for high-precision maps and autonomous driving. Road marking line detection methods generally fall into three categories: image-based methods, imagery-based methods, and point-cloud-based methods, with the following advantages and disadvantages. Image-based road marking line detection has been widely studied, but it places high demands on the lighting conditions of the dataset and depends too heavily on the texture features of the road surface. Imagery-based road marking line detection is also affected by lighting and, because of imagery stitching problems, projecting the imagery into a common coordinate system introduces errors that degrade the accuracy of subsequent detection. Some researchers have overcome the stitching errors by using remote sensing imagery instead of conventional images and imagery, but the resolution of remote sensing imagery is insufficient, so it remains difficult to extract road elements accurately and comprehensively. Compared with the above methods, point-cloud-based methods show unique advantages for road marking line detection: 1) they are unaffected by lighting; 2) data acquisition is more efficient; 3) the acquired data have better continuity; 4) they offer higher accuracy and sampling density; and 5) the special material of road marking lines yields a strong echo reflection intensity. However, the sheer size of outdoor scene data and the uneven distribution of point density keep this research challenging.

Summary of the Invention

The purpose of the present invention is to provide a road marking line detection and recognition method based on MLS point clouds, which effectively improves the detection and recognition accuracy of road marking lines.

The technical solution adopted by the present invention is a road marking line detection and recognition method based on MLS point clouds, implemented according to the following steps:

Step 1, progressive road point cloud extraction;

Step 2, road marking line extraction based on adaptive threshold segmentation and morphological optimization;

Step 3, road marking line classification based on improved PointNet++.

The present invention is further characterized in that:

Step 1 is as follows:

Step 1.1, rough road extraction with the improved CSF algorithm;

Step 1.2, fine road point cloud extraction with a pseudo-scan-line road point cloud extraction algorithm;

Step 1.3, road point cloud boundary optimization: the road points are clustered with the DBSCAN clustering algorithm, constraint rules are set, and the boundary is refitted with the RANSAC method.

Step 2 is as follows:

Step 2.1, road marking line segmentation;

Step 2.2, road marking line segmentation optimization;

Step 2.3, a point cloud is generated by back-projecting the depth image saved during point cloud projection together with the binary image obtained from adaptive threshold segmentation, thereby extracting the road marking line point cloud.

Step 3 is as follows:

Step 3.1, road marking line clustering and segmentation;

Step 3.2, lane line and zebra crossing classification;

Step 3.3, small-size road marking classification based on improved PointNet++.

Step 1.1 is implemented according to the following steps:

Step 1.1.1, scene preprocessing: a statistical-filter-based outlier removal method preprocesses the scene. All points in the scene are indexed, and for an arbitrary point pi the average distance dn to its n nearest neighboring points q1, ..., qn is computed. Assuming that dn follows a Gaussian distribution, the mean μ and standard deviation σ of that distribution are computed and a standard range is established from them; when the average distance dn of a point exceeds the standard range, the point is declared an outlier and removed.

The formula for dn is:

dn = (1/n)·Σj=1..n ||pi − qj||   (1)
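As an illustration, the outlier-removal rule of step 1.1.1 can be sketched in Python; the neighbor count and the width of the standard range (here μ + std_ratio·σ) are assumed tuning parameters, and a brute-force distance matrix stands in for the KD-tree index a real implementation would use:

```python
import numpy as np

def remove_outliers(points, n_neighbors=8, std_ratio=1.0):
    """Statistical outlier removal (step 1.1.1): drop points whose average
    distance to their n nearest neighbors exceeds mu + std_ratio * sigma.
    n_neighbors and std_ratio are assumed tuning parameters."""
    # brute-force distance matrix; a KD-tree would be used on real scenes
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    dist.sort(axis=1)
    d_n = dist[:, 1:n_neighbors + 1].mean(axis=1)  # column 0 is the self-distance 0
    mu, sigma = d_n.mean(), d_n.std()
    keep = d_n <= mu + std_ratio * sigma           # the "standard range"
    return points[keep]

# demo: a dense cluster plus one far-away point that should be discarded
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0.0, 0.1, size=(50, 3)),
                   [10.0, 10.0, 10.0]])
filtered = remove_outliers(cloud)
```

On this toy cloud the single distant point has a far larger average neighbor distance than the cluster points, so only it is removed.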

Step 1.1.2, rough extraction of the road point cloud: the improved CSF algorithm filters the road to preliminarily extract the road point cloud, as follows:

Step 1.1.2.1, input and invert the denoised road point cloud data;

Step 1.1.2.2, build a simulated cloth grid for the road point cloud with a user-defined grid size; at the initial moment the grid lies above the entire road point cloud;

Step 1.1.2.3, project all points in the scene and all grid particles onto the same horizontal plane, find the nearest neighbor of each particle, and record its pre-projection elevation value IHV;

Step 1.1.2.4, for each movable grid particle, compute the displacement produced by internal factors and compare the particle's elevation with the elevation of its current nearest neighbor; if the particle's elevation is lower than or equal to IHV, reset it to IHV and mark the particle as immovable;

Step 1.1.2.5, for each grid particle, compute the displacement produced by internal factors;

Step 1.1.2.6, repeat steps 1.1.2.4 and 1.1.2.5, stopping the simulation once the number of iterations exceeds the user-set value;

Step 1.1.2.7, compute the elevation difference between the point cloud and the grid particles;

Step 1.1.2.8, if the distance between a point and the grid particles is smaller than the preset threshold HCC, the point is considered a ground point, otherwise a non-ground point; the extracted ground points form the road point cloud.
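The final decision of steps 1.1.2.7 and 1.1.2.8 can be sketched as follows; for brevity the settled cloth is approximated here by the lowest point in each XY grid cell rather than by the full cloth simulation of steps 1.1.2.2 to 1.1.2.6, and the cell size and HCC value are illustrative:

```python
import numpy as np

def classify_ground(points, cell=1.0, hcc=0.2):
    """Simplified stand-in for the CSF decision step (1.1.2.7-1.1.2.8):
    approximate the settled cloth by the lowest point in each XY grid cell,
    then keep points within the height threshold hcc of that surface.
    A full CSF would iterate the cloth simulation instead."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    surface = {}
    for (i, j), z in zip(map(tuple, ij), points[:, 2]):
        surface[(i, j)] = min(surface.get((i, j), np.inf), z)
    diff = np.array([p[2] - surface[tuple(c)] for p, c in zip(points, ij)])
    return points[diff < hcc]

# flat road patch near z = 0 plus one clearly elevated non-ground point
pts = np.array([[0.1, 0.1, 0.00], [0.4, 0.2, 0.02],
                [0.3, 0.8, 0.01], [0.2, 0.5, 1.50]])
ground = classify_ground(pts)
```

The three near-ground points survive the HCC test; the elevated point is classified as non-ground.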

Step 1.2 is implemented according to the following steps:

Step 1.2.1, the scan line direction of the road point cloud is used instead of the trajectory direction, and pseudo scan lines are built along the scan line direction, i.e. the road point cloud is sliced. Let ti be an arbitrary point on the scan line trajectory with coordinates (Xi, Yi); the mileage value Si corresponding to ti is then:

Si = Σk=2..i sqrt((Xk − Xk−1)² + (Yk − Yk−1)²)   (2)

Let cn(X'n, Y'n) denote the position of an arbitrary slice point; the coordinates of cn are given by:

X'n = Xi + Δn·cos(α'n), Y'n = Yi + Δn·sin(α'n)   (3)

In formula (3), α'n denotes the coordinate azimuth of the trajectory at the slice position, Δn denotes the distance between the slice position and its nearest neighboring point on the trajectory, and (Xi, Yi) denotes the coordinates of that nearest neighboring point.

The slice position, direction and length are determined by cn(X'n, Y'n), α'n and D respectively. The distance d from a point to the slice segment is then computed; when this distance is smaller than half the slice length, the point belongs to that slice. Let L denote the projection of the slice segment on the xoy plane of the coordinate system, T the normal of the slice segment, and P(Xp, Yp) the coordinates of a point; the distance d between the slice segment and the point, expressed through the vector from cn to P and the normal vector T, is:

d = |(Xp − X'n)cos(α'n) + (Yp − Y'n)sin(α'n)|   (4)
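Formula (4) can be checked directly in code; the coordinates below are made-up sample values:

```python
import math

def slice_distance(p, c, azimuth):
    """Distance from point P = (Xp, Yp) to the slice line through
    c = (X'n, Y'n) with azimuth alpha'n, as in formula (4)."""
    return abs((p[0] - c[0]) * math.cos(azimuth)
               + (p[1] - c[1]) * math.sin(azimuth))

# a point offset 2 m along x from a slice line with azimuth 0 lies at distance 2
d = slice_distance((2.0, 5.0), (0.0, 0.0), 0.0)
```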

Step 1.2.2, road cross-section slice projection: a local coordinate system is reconstructed for each pseudo scan line to replace the geodetic coordinate system of the laser scanning system, and the scene point cloud is transformed into the local coordinate system of its corresponding pseudo scan line. The coordinate origin is defined as the center of each road segment, the X axis as the trajectory direction, and the Y axis perpendicular to the X axis. Let Pi(Xi, Yi, Zi) be an arbitrary point on the slice segment; its local coordinates are:

X'i = (Xi − XC)cos(α) + (Yi − YC)sin(α), Y'i = −(Xi − XC)sin(α) + (Yi − YC)cos(α), Z'i = Zi   (5)

In formula (5), (XC, YC) denotes the coordinates of the slice center and α denotes the azimuth of the trajectory point.

The three-dimensional points are projected onto the YOZ plane, with O' denoting the origin of the two-dimensional coordinate system; the projected point p'i on the pseudo scan line has coordinates:

p'i = (Y'i, Z'i)   (6)

Step 1.2.3, a sliding window filter extracts the road point cloud. The points on each projected cross-section slice are divided into small intervals with an interval threshold M, so that the point cloud on a pseudo scan line is divided into k intervals, the last (possibly incomplete) interval generating its pseudo scan point from the remaining points. The RANSAC algorithm is used for straight-line detection: a subset of points is randomly selected for fitting, the data deviations of the other points are evaluated, and the points satisfying the threshold are kept. A buffer is established around the selected points, and the sliding window filter then removes the non-road points inside the buffer.
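A minimal 2-D version of the RANSAC line detection used in step 1.2.3 follows; the iteration count and inlier tolerance are illustrative, and the buffer and sliding window stages are omitted:

```python
import numpy as np

def ransac_line(points, n_iters=200, tol=0.1, seed=0):
    """Minimal 2-D RANSAC line detector in the spirit of step 1.2.3:
    repeatedly fit a line through two random points and keep the model
    with the most inliers within `tol` (parameter values are illustrative)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        a, b = points[rng.choice(len(points), 2, replace=False)]
        direction = b - a
        norm = np.linalg.norm(direction)
        if norm == 0:
            continue                                  # degenerate sample
        normal = np.array([-direction[1], direction[0]]) / norm
        dist = np.abs((points - a) @ normal)          # point-to-line distance
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# 30 points on the line y = 0 plus 3 off-line outliers
xs = np.linspace(0, 10, 30)
pts = np.column_stack([xs, np.zeros_like(xs)])
pts = np.vstack([pts, [[2, 3], [5, 4], [8, 2]]])
mask = ransac_line(pts)
```

The returned boolean mask marks the 30 collinear points as inliers and rejects the 3 outliers.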

Step 2.1 is as follows:

Step 2.1.1, road point cloud projection: the XOY plane is selected as the horizontal plane and the road point cloud data are projected onto the plane along its normal direction. Let the original road point cloud set be Pn = {P1, P2, ..., Pn} and the projected set be Pn' = {P1', P2', ..., Pn'}. An arbitrary point t(x0, y0, z0) is selected on plane π, whose normal vector is n. The height H of the projection of the vector from t to Pi onto the direction of n is computed as:

H = (tPi · n) / |n|   (7)

The vector H·n/|n| is the projection of tPi onto the direction of n; since this projection is parallel to n, the coordinates of the projected point Pi' are computed as:

Pi' = Pi − H·(n/|n|)   (8)

Step 2.1.2, rasterization of the two-dimensional projection image: the projected point cloud obtained in step 2.1.1 is converted into a two-dimensional raster image using inverse distance weighted interpolation (IDW). Given the control points and the displacement vectors of the control point pairs, the inverse-distance weights of each control point on its surrounding pixels are computed to obtain the intensity value of every pixel in the image. Let Pi(Xi, Yi, Zi, Ii) denote the coordinates of an arbitrary road point together with its reflection intensity Ii, Rpixel the resolution of the generated raster image, and RIDW the interpolation radius. When point Pi is projected onto plane π it becomes pj(Xj, Yj, Ij), and the size of the raster image is computed as:

Width = (maxX − minX)/Rpixel, Height = (maxY − minY)/Rpixel   (9)

In formula (9), maxX denotes the maximum X value of the road point cloud; minX the minimum X value; maxY the maximum Y value; minY the minimum Y value; and Width and Height the size of the raster.

The intensity value of each pixel is obtained with the IDW algorithm. The points Pk lying inside the interpolation radius are determined by the position of the interpolation pixel Pixel(m,n) and the interpolation radius; d(m,n,k) denotes the distance from Pk to the pixel center Pixel(m,n), computed as:

d(m,n,k) = sqrt((Xk − Xm)² + (Yk − Yn)²)   (10)

In formula (10), Xk denotes the X coordinate of the k-th point, Yk its Y coordinate, and (Xm, Yn) the coordinates of the pixel center.

The intensity value I(m,n) of the interpolation pixel is then expressed as:

I(m,n) = Σk (Ik / d(m,n,k)²) / Σk (1 / d(m,n,k)²)   (11)

By evaluating the intensity value of each grid cell and mapping it to the grayscale range of the image, the road point cloud is finally converted into an intensity image of the road point cloud;
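Steps 2.1.1 and 2.1.2 (projection, raster sizing per formula (9), and IDW interpolation per formula (10)) can be sketched as below; the grid resolution, interpolation radius and power-2 weighting are illustrative choices:

```python
import numpy as np

def rasterize_intensity(points, r_pixel=1.0, r_idw=1.5, power=2):
    """Project (X, Y, intensity) points onto an XY grid and interpolate each
    pixel's intensity by inverse-distance weighting; the grid size follows
    the point-cloud bounding box as in formula (9)."""
    xy, inten = points[:, :2], points[:, 2]
    min_xy = xy.min(axis=0)
    width, height = np.ceil((xy.max(axis=0) - min_xy) / r_pixel).astype(int) + 1
    img = np.zeros((height, width))
    for n in range(height):
        for m in range(width):
            center = min_xy + np.array([m, n]) * r_pixel
            d = np.linalg.norm(xy - center, axis=1)
            near = d < r_idw
            if not near.any():
                continue                               # no points in radius
            if np.any(d[near] < 1e-12):                # point on the pixel center
                img[n, m] = inten[near][d[near].argmin()]
            else:
                w = 1.0 / d[near] ** power
                img[n, m] = (w * inten[near]).sum() / w.sum()
    return img

# four corner points of a 2 m x 2 m patch, one bright marking point
pts = np.array([[0, 0, 10.0], [2, 0, 10.0], [0, 2, 10.0], [2, 2, 255.0]])
img = rasterize_intensity(pts)
```

The resulting 3x3 raster keeps the measured intensities at the corner pixels and blends all four points at the central pixel.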

Step 2.1.3, removal of abnormal regions in the intensity image: the IDW algorithm removes most of the intensity outliers in the intensity image of the road point cloud; the small amount of remaining salt-and-pepper noise is handled by processing the intensity image with an improved adaptive median filter;

Step 2.1.4, adaptive threshold segmentation based on the integral image: in an integral image, each pixel value is the sum of the pixels of the rectangular region between the original image's upper-left corner and that pixel, so the sum of the pixel values over any rectangular region of the original image can be computed from the integral image, and the adaptive threshold of each pixel is determined by the mean gray value of its surrounding pixels. Let pi(xi, yi) be an original pixel with gray value I(xi, yi), and let the integral image be intI(xi, yi). With a background radius s and a segmentation coefficient t, the sum of gray values over the square region of side length 2s+1 centered on pi is computed as:

sum(xi, yi) = intI(xi+s, yi+s) − intI(xi−s−1, yi+s) − intI(xi+s, yi−s−1) + intI(xi−s−1, yi−s−1)   (12)

When the rectangular region exceeds the image boundary, it is clamped to the boundary. In the standard integral-image (Bradley) formulation, the gray value of the binary image is then computed as:

B(xi, yi) = 0 if I(xi, yi)·count ≤ sum(xi, yi)·(100 − t)/100, otherwise 255   (13)

where count is the number of pixels in the (clamped) rectangular region.
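A compact integral-image thresholding sketch for step 2.1.4 follows. Since road markings are brighter than the asphalt, this sketch keeps pixels that exceed their local mean by t percent; the direction of the comparison and the defaults s=2 and t=15 are assumptions, not the patent's exact rule:

```python
import numpy as np

def adaptive_threshold(gray, s=2, t=15):
    """Integral-image adaptive thresholding sketch for step 2.1.4: each pixel
    is compared against the mean of its (2s+1)x(2s+1) neighborhood, clamped
    at the image border; pixels t percent brighter than the mean become 255."""
    h, w = gray.shape
    integral = gray.cumsum(axis=0).cumsum(axis=1)
    pad = np.zeros((h + 1, w + 1))          # zero row/column so border sums work
    pad[1:, 1:] = integral
    out = np.zeros_like(gray, dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - s), min(h - 1, y + s)
            x0, x1 = max(0, x - s), min(w - 1, x + s)
            count = (y1 - y0 + 1) * (x1 - x0 + 1)
            total = (pad[y1 + 1, x1 + 1] - pad[y0, x1 + 1]
                     - pad[y1 + 1, x0] + pad[y0, x0])
            if gray[y, x] * count > total * (100 + t) / 100.0:
                out[y, x] = 255
    return out

# dark road surface with a bright 2x2 marking patch
gray = np.full((6, 6), 30.0)
gray[2:4, 2:4] = 200.0
binary = adaptive_threshold(gray)
```

Only the four bright marking pixels exceed their local mean by the required margin, so exactly they are set to 255.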

Step 2.2 is as follows:

Step 2.2.1: a morphology-based closing operation optimizes the road marking lines, repairing and filling the holes inside them. An original image Mask and a marker image Marker are established: Mask serves as the reconstruction reference, while Marker is obtained by repeatedly dilating the image from step 2.1.4 with morphological operations and converging under the constraint of Mask, i.e. Mask ≥ Marker. Finally Marker is complemented and then subtracted from Mask to complete the filling of the white holes. The expression of Mask is:
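The hole-filling effect of the Mask/Marker reconstruction in step 2.2.1 can be approximated with a border-constrained flood fill; this sketch produces the equivalent result (filling background regions not connected to the image border) rather than reproducing the dilation-and-complement sequence itself:

```python
import numpy as np

def fill_holes(mask):
    """Fill enclosed holes in a binary marking mask: flood the background
    from the image border (iterated 4-neighbor dilation constrained to the
    background); background pixels the flood cannot reach are holes."""
    background = ~mask
    reach = np.zeros_like(mask)
    reach[0, :], reach[-1, :] = background[0, :], background[-1, :]
    reach[:, 0], reach[:, -1] = background[:, 0], background[:, -1]
    while True:
        grown = reach.copy()
        grown[1:, :] |= reach[:-1, :]
        grown[:-1, :] |= reach[1:, :]
        grown[:, 1:] |= reach[:, :-1]
        grown[:, :-1] |= reach[:, 1:]
        grown &= background              # constrain the dilation to the background
        if (grown == reach).all():
            break
        reach = grown
    return mask | (background & ~reach)

# a ring-shaped marking with a one-pixel hole in the middle
ring = np.zeros((5, 5), dtype=bool)
ring[1:4, 1:4] = True
ring[2, 2] = False
filled = fill_holes(ring)
```

The central pixel, unreachable from the border, is filled; the surrounding background stays untouched.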

Step 2.2.2, morphological optimization of the road marking lines: the morphological anti-aliasing (MLAA) method processes the jagged boundaries of the binary image, as follows:

Step 2.2.2.1, first locate the obviously discontinuous pixel runs in the hole-filled image from step 2.2.1; these are the edges that need to be processed;

Step 2.2.2.2, classify the edges to be processed into different patterns and re-vectorize the marked edges;

Step 2.2.2.3, keep the smooth parts, compute blending weights from the edge vectorization, and blend each edge pixel with its surrounding pixels according to those weights to smooth the edges.

Step 3.1 uses Euclidean clustering to divide the road marking point cloud into independent objects, as follows:

Step 3.1.1, let the road marking point set in the scene be P = {Pi(Xi, Yi), i = 1, 2, 3, ...}, let the parameter D denote the clustering radius and C the clustering circle of radius D; take an arbitrary point p from the point set P and put p into a point-set cluster;

Step 3.1.2, draw a circle of radius D centered on p and merge all points inside the circle into the previously defined cluster;

Step 3.1.3, use the other points of the cluster as new circle centers and repeat step 3.1.2 until no new point joins the cluster;

Step 3.1.4, repeat steps 3.1.1 to 3.1.3, generating new clusters from the remaining points of P; each cluster is an independent object.
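Steps 3.1.1 to 3.1.4 amount to region growing with a fixed radius; a small sketch (the radius value and sample coordinates are illustrative):

```python
import numpy as np

def euclidean_cluster(points, radius=0.5):
    """Region-growing Euclidean clustering as in steps 3.1.1-3.1.4: seed a
    cluster with an unvisited point, absorb every point within `radius` of
    any cluster member, and repeat until the point set is exhausted."""
    labels = np.full(len(points), -1)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        frontier = [seed]
        labels[seed] = current
        while frontier:
            p = frontier.pop()
            d = np.linalg.norm(points - points[p], axis=1)
            for q in np.flatnonzero((d <= radius) & (labels == -1)):
                labels[q] = current
                frontier.append(q)
        current += 1
    return labels

# two dashed-line segments about 5 m apart
pts = np.array([[0.0, 0.0], [0.3, 0.0], [0.6, 0.1],
                [5.0, 0.0], [5.4, 0.1]])
labels = euclidean_cluster(pts)
```

The first three points chain together within the radius and form one cluster; the two distant points form a second.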

Step 3.2 is as follows:

Step 3.2.1, road marking feature extraction: build the minimum bounding rectangle (MBR) for each independent object separated in step 3.1;

Step 3.2.2, lane line classification: define the aspect ratio β of the MBR obtained in step 3.2.1 as the geometric feature of the object, and extract solid and dashed lane lines from the clusters according to the aspect ratio; when the conditions for solid and dashed lane lines are not met, the marking is regarded as a small-size road marking (an arrow marking) or a zebra crossing;

Step 3.2.3, zebra crossing classification: a PCA-based method determines the arrangement of the point cloud clusters so as to extract rectangles arranged side by side and classify them as zebra crossings. For an arbitrary rectangular marking R, compute the coordinates of its centroid in the xoy plane, then compute its covariance matrix CR; by eigenvalue decomposition of CR, select the eigenvector associated with the largest eigenvalue as the distribution feature of R and denote it vr, this parameter representing the direction of the rectangular marking R. To judge the arrangement of R, first search along both sides of R with a search width w; if a rectangular marking Q with centroid q and distribution feature vq is found in the search area, and the condition (vr ∥ vq) ∧ (vr ∥ vrq) ∧ (vq ∥ vrq) is satisfied, where vrq denotes the vector between the centroids of R and Q, then R is classified as a zebra crossing.
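The distribution feature vr of step 3.2.3 is the leading PCA eigenvector of the cluster; a sketch with a made-up elongated stripe:

```python
import numpy as np

def principal_direction(points):
    """Dominant axis of a point cluster via PCA, as used in step 3.2.3 to
    obtain the distribution feature v_r of a rectangular marking: the
    eigenvector of the covariance matrix with the largest eigenvalue."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return eigvecs[:, -1]                    # eigenvector of the largest one

# an elongated stripe lying roughly along the x-axis
stripe = np.array([[0.0, 0.00], [1.0, 0.05], [2.0, -0.05],
                   [3.0, 0.00], [4.0, 0.02]])
v = principal_direction(stripe)
```

For the stripe above, the returned unit vector is essentially the x-axis (up to sign), matching the stripe's long axis.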

Step 3.3 is as follows:

Step 3.3.1, dataset construction: for the unclassified small-size road markings present in the scene, acquire their data, manually box-select the data, manually assign the point-set categories, annotate the data, and split the dataset;

Step 3.3.2, build a top-down network beside the backbone branch of the original PointNet++ network, and locally and iteratively apply the PointNet network to keep extracting features, classifying the dataset from step 3.3.1.

The beneficial effects of the present invention are as follows:

The road marking line detection and recognition method based on MLS point clouds of the present invention addresses the problem that road boundaries cannot be effectively extracted by using a progressive road point cloud extraction method; addresses the over-segmentation and under-segmentation produced during road marking extraction, the jagged edges of the point cloud projection intensity image, and the natural incompleteness of road markings by designing a road marking extraction method based on adaptive threshold segmentation and morphological optimization; and finally, by implementing a road marking classification method combining multi-scale clustering and PointNet++, solves the problem of low classification accuracy caused by feature similarity, effectively improving the detection and recognition accuracy of road markings.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 shows the road point cloud extraction results obtained after the original scene data are processed by step 1 of the present invention: Figure 1(a) is the original scene, Figure 1(b) the preprocessed scene, Figure 1(c) the scene after ground point filtering, Figure 1(d) the coarse road point cloud extraction result, Figure 1(e) the non-road point cloud extraction result, and Figure 1(f) the fine road point cloud extraction result.

Figure 2 shows the final road marking point cloud extraction results for three selected scenes after the optimization of step 2 of the present invention, with Figures 2(a), 2(b) and 2(c) each showing the optimized final road marking point cloud extraction result for one of the scenes.

Figure 3 shows the road marking classification result obtained after the processing of step 3 of the present invention.

DETAILED DESCRIPTION OF THE EMBODIMENTS

The present invention is described in detail below with reference to the accompanying drawings and specific embodiments.

Embodiment 1

This embodiment provides a road marking line detection and recognition method based on an MLS (mobile laser scanning) point cloud, implemented according to the following steps:

Step 1, progressive road point cloud extraction;

Step 1.1, rough road extraction with the improved Cloth Simulation Filter (CSF) algorithm;

步骤1.2,使用伪扫描线的道路点云提取算法进行道路点云精细提取;Step 1.2, using the pseudo-scan line road point cloud extraction algorithm to perform fine extraction of road point cloud;

步骤1.3,道路点云边界优化:基于密度的聚类(Density-Based SpatialClustering of Applications with Noise,DBSCAN)聚类算法对道路点聚类,设定约束规则,采用随机抽样一致(Random Sample Consensus,RANSAC)方法对边界重新拟合,使提取的道路边界更接近真实边界,并且提高道路点云的提取精度;Step 1.3, road point cloud boundary optimization: Density-Based Spatial Clustering of Applications with Noise (DBSCAN) clustering algorithm is used to cluster road points, set constraint rules, and use the Random Sample Consensus (RANSAC) method to refit the boundary, so that the extracted road boundary is closer to the real boundary and the extraction accuracy of the road point cloud is improved;

步骤2,基于自适应阈值分割与形态学方法优化的道路标志线提取;Step 2: Road sign line extraction based on adaptive threshold segmentation and morphological method optimization;

步骤2.1,道路标志线分割;Step 2.1, road sign line segmentation;

步骤2.2,道路标志线分割优化;Step 2.2, road sign line segmentation optimization;

步骤2.3,通过点云投影时保存的深度图像和自适应阈值分割后的二值图像逆投影生成点云,实现对道路标志线点云的提取;Step 2.3, generating a point cloud by reversely projecting the depth image saved during point cloud projection and the binary image after adaptive threshold segmentation, thereby realizing the extraction of the road sign line point cloud;

步骤3,基于改进PointNet++的道路标志线分类;Step 3: Road sign line classification based on improved PointNet++;

步骤3.1,道路标志线聚类分割;Step 3.1, road sign line clustering segmentation;

步骤3.2,车道线与斑马线分类;Step 3.2, lane line and zebra crossing classification;

步骤3.3,基于改进PointNet++的小尺寸道路标志线分类。Step 3.3, small-size road sign line classification based on improved PointNet++.

实施例2Example 2

本实施例提供一种基于MLS点云的道路标志线检测与识别方法,在实施例1的基础上,步骤1渐进式道路点云提取具体为:This embodiment provides a method for detecting and recognizing road signs based on MLS point cloud. Based on Embodiment 1, step 1 of progressive road point cloud extraction is specifically as follows:

步骤1.1具体按照以下步骤实施:Step 1.1 is implemented according to the following steps:

步骤1.1.1,场景预处理:采用基于统计滤波器的离散点去除方法对图1(a)所示的原始场景进行预处理,预处理后的场景如图1(b)所示,具体为索引场景中的所有点云,首先计算任意点pi与其周围n个最近邻点的平均距离dn,假设任意pi的平均距离dn服从高斯分布,通过计算高斯分布的均值μ和标准差σ,根据所得均值和标准差建立标准范围,当平均距离dn大于设定的标准范围时,则将该点定义为离散点并去除;Step 1.1.1, scene preprocessing: The original scene shown in Figure 1(a) is preprocessed by using a discrete point removal method based on a statistical filter. The preprocessed scene is shown in Figure 1(b), which specifically indexes all point clouds in the scene. First, the average distance dn between any point p i and its n nearest neighboring points is calculated. Assuming that the average distance dn of any point p i follows a Gaussian distribution, the mean μ and standard deviation σ of the Gaussian distribution are calculated, and a standard range is established based on the obtained mean and standard deviation. When the average distance dn is greater than the set standard range, the point is defined as a discrete point and removed.

dn的计算公式为：The formula for dn is:

$$d_n=\frac{1}{n}\sum_{j=1}^{n}\sqrt{(x_i-x_j)^2+(y_i-y_j)^2+(z_i-z_j)^2}\quad(1)$$
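As an illustrative aside (not part of the claimed method), the statistical-filter preprocessing of step 1.1.1 can be sketched in Python. The function names and the parameters n (neighbor count) and k (width of the standard range in standard deviations) are assumptions for this sketch, and the neighbor search is brute force for clarity:

```python
import math

def mean_knn_distance(points, i, n):
    """Average distance d_n from point i to its n nearest neighbours (brute force)."""
    dists = sorted(
        math.dist(points[i], q) for j, q in enumerate(points) if j != i
    )
    return sum(dists[:n]) / n

def remove_outliers(points, n=4, k=1.0):
    """Drop points whose mean n-NN distance exceeds mu + k*sigma (statistical filter)."""
    d = [mean_knn_distance(points, i, n) for i in range(len(points))]
    mu = sum(d) / len(d)
    sigma = math.sqrt(sum((x - mu) ** 2 for x in d) / len(d))
    return [p for p, di in zip(points, d) if di <= mu + k * sigma]
```

In a production pipeline the brute-force search would be replaced by a k-d tree, but the filtering rule is the same.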

步骤1.1.2,道路点云的粗提取:采用改进的CSF算法对道路进行滤波,从而初步提取道路点云,经过地面点滤波的场景如图1(c)所示,具体为Step 1.1.2, rough extraction of road point cloud: The improved CSF algorithm is used to filter the road to preliminarily extract the road point cloud. The scene after ground point filtering is shown in Figure 1(c).

步骤1.1.2.1,输入并反转去噪后的道路点云数据;Step 1.1.2.1, input and invert the denoised road point cloud data;

步骤1.1.2.2,建立道路点云的模拟布料格网,自定义网格的大小,初始时刻时,格网位于全部道路点云的上方;Step 1.1.2.2, establish a simulated cloth grid of the road point cloud, customize the size of the grid, and at the initial moment, the grid is located above all road point clouds;

步骤1.1.2.3,将场景中所有点云与格网粒子投影至同一水平面上,同时寻找每一个粒子的最近邻点,记录其投影前的高程值IHV;Step 1.1.2.3, project all point clouds and grid particles in the scene onto the same horizontal plane, find the nearest neighbor of each particle, and record its elevation value IHV before projection;

步骤1.1.2.4,对于每一个可移动的格网粒子,由于改进模型忽略了其外部因素,因此计算其内部因素影响所产生的位移,与当前最近邻点的高程值比较,如果此时该粒子的高程值低于或等于IHV,则将该粒子的高程值重新设置为IHV,并且该粒子变为不可移动点;Step 1.1.2.4, for each movable grid particle, since the improved model ignores its external factors, the displacement caused by its internal factors is calculated and compared with the elevation value of the current nearest neighbor point. If the elevation value of the particle is lower than or equal to IHV at this time, the elevation value of the particle is reset to IHV, and the particle becomes an immovable point;

步骤1.1.2.5,对于每个格网粒子,计算其受内部因素所产生的位移;Step 1.1.2.5, for each grid particle, calculate the displacement caused by internal factors;

步骤1.1.2.6,重复步骤1.1.2.4、步骤1.1.2.5,当迭代次数大于用户设置值时,则停止模拟过程;Step 1.1.2.6, repeat steps 1.1.2.4 and 1.1.2.5. When the number of iterations is greater than the user-set value, stop the simulation process;

步骤1.1.2.7,计算点云与格网粒子之间的高程差;Step 1.1.2.7, calculate the elevation difference between the point cloud and the grid particles;

步骤1.1.2.8,如果点云与格网粒子间距离小于预先设置的阈值HCC,则认为该点是地面点,反之则认为该点是非地面点,提取地面点即为道路点云,实现道路粗提取,场景道路点云粗提取结果如图1(d)所示,场景非道路点云提取结果如图1(e)所示。In step 1.1.2.8, if the distance between the point cloud and the grid particle is less than the preset threshold HCC, the point is considered to be a ground point, otherwise it is considered to be a non-ground point. The extracted ground point is the road point cloud, and the rough extraction of the road is realized. The rough extraction result of the scene road point cloud is shown in Figure 1(d), and the extraction result of the scene non-road point cloud is shown in Figure 1(e).
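For illustration only, the ground/non-ground decision of step 1.1.2.8 can be sketched as follows. A full cloth simulation is lengthy, so this sketch approximates the settled cloth surface by the minimum elevation in each XY grid cell (an assumption, not the patented CSF model); the HCC threshold test of the original step is kept as-is:

```python
from collections import defaultdict

def classify_ground(points, cell=1.0, hcc=0.2):
    """Simplified stand-in for CSF step 1.1.2.8: approximate the settled cloth by the
    minimum elevation per XY grid cell, then call a point 'ground' (road) when its
    height above that surface is below the threshold HCC."""
    cell_min = defaultdict(lambda: float("inf"))
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        cell_min[key] = min(cell_min[key], z)
    ground, non_ground = [], []
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        (ground if z - cell_min[key] < hcc else non_ground).append((x, y, z))
    return ground, non_ground
```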

步骤1.2具体按照以下步骤实施:Step 1.2 is implemented as follows:

步骤1.2.1，建立道路点云伪扫描线：提取道路点云只需要考虑行驶轨迹的平面信息，即点云的x和y坐标，使用道路点云扫描线方向代替轨迹方向，并且沿扫描线方向建立伪扫描线，即对道路点云进行切片处理。假设扫描线轨迹上任意一点Pri的坐标是(Xi,Yi)，则Pri点对应的里程值Si的公式为：Step 1.2.1, establish a pseudo scan line of the road point cloud. To extract the road point cloud, only the plane information of the driving trajectory is needed, that is, the x and y coordinates of the point cloud. The road point cloud scan line direction is used instead of the trajectory direction, and a pseudo scan line is established along the scan line direction, that is, the road point cloud is sliced. Assuming the coordinates of any point P_ri on the scan line trajectory are (X_i, Y_i), the formula for the mileage value S_i corresponding to point P_ri is:

$$S_i=\sum_{j=1}^{i}\sqrt{(X_j-X_{j-1})^2+(Y_j-Y_{j-1})^2}\quad(2)$$

设cn(X'n,Y′n)表示任意切片点位置，其中cn的坐标值的公式为：Let c_n(X'_n, Y'_n) represent the position of any slice point, where the formula for the coordinate values of c_n is:

$$X'_n=X_i+\Delta_n\cos\alpha'_n,\qquad Y'_n=Y_i+\Delta_n\sin\alpha'_n\quad(3)$$

式(3)中，α'n表示轨迹段的坐标方位角，Δn表示切片位置与轨迹上切片位置的最近邻点Pri(Xi,Yi)之间的距离。In Eq. (3), α'_n denotes the coordinate azimuth of the trajectory segment, and Δ_n denotes the distance between the slice position and its nearest neighbor point P_ri(X_i, Y_i) on the trajectory.

由cn(X'n,Y′n)、α'n和D分别确定切片的位置、切片的方向和切片的长度；对点云进行切片分割处理还需要计算点云到切片段间的距离d，当距离小于切片长度的一半时，则点云位于同一切片上。L表示切片段在坐标系xoy面的投影，T表示切片段的法线，P(Xp,Yp)表示点云的坐标，d表示切片段到点云间的距离，则点云到切片段的距离d，即向量 $\overrightarrow{c_nP}$ 在法向量T方向上投影的长度，公式表示为：The position, direction and length of the slice are determined by c_n(X'_n, Y'_n), α'_n and D respectively. Slicing the point cloud also requires calculating the distance d between the point cloud and the slice segment; when the distance is smaller than half the slice length, the point cloud lies on the same slice. L represents the projection of the slice segment on the xoy plane of the coordinate system, T represents the normal of the slice segment, P(X_p, Y_p) represents the coordinates of the point cloud, and d represents the distance between the slice segment and the point cloud; the distance d, i.e. the length of the projection of the vector $\overrightarrow{c_nP}$ onto the normal vector T, is expressed as:

d=|(Xp-X'n)cos(α'n)+(Yp-Y′n)sin(α'n)| (4)d=|( Xp - X'n )cos( α'n )+( Yp - Y'n )sin( α'n )| (4)

步骤1.2.2，道路横截面切片投影：重新构建伪扫描线的局部坐标系代替雷达系统的大地坐标系，将场景点云转换到相对应的伪扫描线的局部坐标系内，将坐标原点定义为每个道路分段的中心，X轴为轨迹方向，Y轴垂直于X轴，Pi(Xi,Yi,Zi)表示切片段上任意一个点，Pi在局部坐标系下坐标的公式为：Step 1.2.2, road cross-section slice projection: reconstruct the local coordinate system of the pseudo scan line to replace the geodetic coordinate system of the radar system, transform the scene point cloud into the local coordinate system of the corresponding pseudo scan line, define the coordinate origin as the center of each road segment, with the X-axis as the trajectory direction and the Y-axis perpendicular to the X-axis; P_i(X_i, Y_i, Z_i) represents any point on the slice segment, and the formula for the P_i coordinates in the local frame is:

$$X'_i=(X_i-X_C)\cos\alpha+(Y_i-Y_C)\sin\alpha,\quad Y'_i=-(X_i-X_C)\sin\alpha+(Y_i-Y_C)\cos\alpha,\quad Z'_i=Z_i\quad(5)$$

式(5)中,(XC,YC)表示切片上的坐标,α表示轨迹点的方位角;In formula (5), (X C , Y C ) represents the coordinates on the slice, and α represents the azimuth of the trajectory point;

将三维点投影至YOZ平面，其中O'表示二维坐标系下的原点，投影后伪扫描线上的点P'i的坐标公式为：Project the three-dimensional point onto the YOZ plane, where O' denotes the origin of the two-dimensional coordinate system; the coordinate formula of the projected point P'_i on the pseudo scan line is:

$$P'_i=(Y'_i,\,Z'_i)\quad(6)$$

步骤1.2.3，使用滑动窗口滤波器对道路点云进行提取，将投影后的横截面切片上的点划分成若干小区间，设定区间阈值为M，此时将伪扫描线上的点云划分为k个区间，并为每个区间（包括最后一个区间）生成伪扫描点坐标；采用RANSAC算法进行直线检测，通过随机选择一部分点进行拟合，统计与其他点的数据偏差，筛选出符合阈值的点，对于筛选出的点建立缓冲区，建立缓冲区后，再使用滑动窗口滤波器对缓冲区内的非道路点云进行去除，实现道路点云精细提取，提取结果如图1(f)所示。Step 1.2.3, use a sliding window filter to extract the road point cloud. The points on the projected cross-sectional slice are divided into several small intervals with interval threshold M, so the point cloud on the pseudo scan line is divided into k intervals, and pseudo scan point coordinates are generated for each interval including the last one. The RANSAC algorithm is used for line detection: a subset of points is randomly selected for fitting, the data deviation from the other points is counted, and the points meeting the threshold are screened out. A buffer is established for the screened points, after which the sliding window filter removes the non-road point cloud in the buffer to achieve fine extraction of the road point cloud. The extraction result is shown in Figure 1(f).

滑动窗口滤波器的原理是根据道路点云和路缘石存在的高程差，通过建立若干个小窗口去除非道路点云。首先，在k个区间内建立若干个矩形窗口，每个窗口是区间内点云的最小包围窗口，窗口的高度为dhi；此时设置道路阈值 $dh_{road}$，该阈值表示道路点的最高高程值，当 $dh_i > dh_{road}$ 时，表明该窗口存在高程值高于路面的噪声点，则将该窗口标记为关键窗口；The principle of the sliding window filter is to remove non-road point clouds by establishing several small windows based on the elevation difference between the road point cloud and the curb. First, several rectangular windows are established in the k intervals; each window is the minimum enclosing window of the point cloud in the interval, and the height of the window is dh_i. A road threshold $dh_{road}$ is then set, representing the highest elevation value of a road point. When $dh_i > dh_{road}$, the window contains noise points with elevations above the road surface and is marked as a key window;
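A minimal sketch of the key-window test described above, assuming the road-height threshold is named dh_road and each interval is given as a list of (x, y, z) points (both names are assumptions for this sketch):

```python
def mark_key_windows(intervals, dh_road=0.1):
    """For each interval, take the height dh_i of the minimal bounding window
    (max z - min z) and mark the interval as a 'key window' when dh_i exceeds
    the road-height threshold, i.e. it contains points above the road surface."""
    flags = []
    for pts in intervals:
        zs = [p[2] for p in pts]
        dh = max(zs) - min(zs)
        flags.append(dh > dh_road)
    return flags
```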

步骤1.3,道路点云边界优化:基于DBSCAN聚类算法对道路点聚类,设定了约束规则,然后采用RANSAC方法对边界重新拟合,使提取的道路边界更接近真实边界,并且提高道路点云的提取精度。Step 1.3, road point cloud boundary optimization: cluster the road points based on the DBSCAN clustering algorithm, set constraint rules, and then use the RANSAC method to re-fit the boundary, so that the extracted road boundary is closer to the real boundary and the extraction accuracy of the road point cloud is improved.

实施例3Example 3

本实施例提供一种基于MLS点云的道路标志线检测与识别方法,在实施例1-2的基础上,步骤2基于自适应阈值分割与形态学方法优化的道路标志线提取具体为:This embodiment provides a method for detecting and recognizing road sign lines based on MLS point clouds. On the basis of Embodiments 1-2, step 2 is based on the extraction of road sign lines optimized by adaptive threshold segmentation and morphological methods as follows:

步骤2.1具体按照以下步骤实施:Step 2.1 is implemented as follows:

步骤2.1.1，道路点云投影：选取XOY面作为水平面，将道路点云数据沿着XOY平面的法向量方向投影至平面，设原始道路点云集合为Pn={P1,P2,...,Pn}，投影后道路点云集合为Pn'={P1',P2',...,Pn'}，在平面π上任意选取点t(x0,y0,z0)，其中平面π的法向量为 $\vec{n}$，此时 $\overrightarrow{tP_i}$ 在 $\vec{n}$ 方向上的投影的高度H计算公式为：Step 2.1.1, road point cloud projection: select the XOY plane as the horizontal plane and project the road point cloud data onto the plane along the normal vector direction of the XOY plane. Let the original road point cloud set be P_n = {P_1, P_2, ..., P_n} and the projected set be P'_n = {P'_1, P'_2, ..., P'_n}; arbitrarily select a point t(x_0, y_0, z_0) on plane π, whose normal vector is $\vec{n}$. The height H of the projection of $\overrightarrow{tP_i}$ in the $\vec{n}$ direction is then:

$$H=\frac{\overrightarrow{tP_i}\cdot\vec{n}}{|\vec{n}|}\quad(7)$$

此时 $\overrightarrow{P'_iP_i}$ 是 $\overrightarrow{tP_i}$ 在 $\vec{n}$ 方向上的投影向量，由于 $\overrightarrow{P'_iP_i}$ 与 $\vec{n}$ 的方向相同，因此 $\overrightarrow{P'_iP_i}=H\cdot\frac{\vec{n}}{|\vec{n}|}$，那么此时投影点Pi'的坐标计算公式为：At this time $\overrightarrow{P'_iP_i}$ is the projection vector of $\overrightarrow{tP_i}$ in the $\vec{n}$ direction; since $\overrightarrow{P'_iP_i}$ and $\vec{n}$ have the same direction, $\overrightarrow{P'_iP_i}=H\cdot\frac{\vec{n}}{|\vec{n}|}$, and the coordinate calculation formula of the projection point P'_i is:

$$P'_i=P_i-H\cdot\frac{\vec{n}}{|\vec{n}|}\quad(8)$$

步骤2.1.2，二维投影图像光栅化：将步骤2.1.1得到的投影后的点云转换为二维光栅图像，使用反距离权重插值法(Inverse Distance Weighting，IDW)，根据给定的控制点和控制点对的位移矢量，计算控制点对周围像素的反距离加权权重影响，从而计算出图像中每一个像素点的强度值。假设Pi(Xi,Yi,Zi,Ii)表示任意道路点的坐标和该点的反射强度Ii，Rpixel表示生成光栅图像的分辨率，RIDW表示插值的半径，当点Pi投影至平面π上时，此时投影后Pi变化为pi(Xj,Yj,Ij)，此时光栅图像的大小的计算公式为：Step 2.1.2, rasterization of the two-dimensional projection image: convert the projected point cloud from step 2.1.1 into a two-dimensional raster image using Inverse Distance Weighting (IDW) interpolation, which computes the inverse-distance-weighted influence of the control points on surrounding pixels from the given control points and the displacement vectors of the control point pairs, yielding the intensity value of each pixel. Assume P_i(X_i, Y_i, Z_i, I_i) represents the coordinates of any road point and its reflection intensity I_i, R_pixel the resolution of the generated raster image, and R_IDW the interpolation radius; when point P_i is projected onto plane π, it becomes p_i(X_j, Y_j, I_j), and the size of the raster image is calculated as:

$$Width=\frac{maxX-minX}{R_{pixel}},\qquad Height=\frac{maxY-minY}{R_{pixel}}\quad(9)$$

式(9)中,maxX表示道路点云中X轴的最大值;minX表示道路点云中X轴的最小值;maxY表示道路点云中Y轴的最大值;minY表示道路点云中Y轴的最小值;Width、Height表示光栅的大小;In formula (9), maxX represents the maximum value of the X axis in the road point cloud; minX represents the minimum value of the X axis in the road point cloud; maxY represents the maximum value of the Y axis in the road point cloud; minY represents the minimum value of the Y axis in the road point cloud; Width and Height represent the size of the grating;

通过IDW算法求取每个点的强度值，位于内插半径内部的点Pk由插值点Pixel(m,n)的位置和插值半径确定，d(m,n,k)则表示Pk到像素中心Pixel(m,n)（中心坐标记为(X(m,n),Y(m,n))）的距离，计算公式为：The intensity value of each point is obtained by the IDW algorithm. A point P_k inside the interpolation radius is determined by the position of the interpolation point Pixel_(m,n) and the interpolation radius; d_(m,n,k) denotes the distance from P_k to the pixel center Pixel_(m,n) (with center coordinates (X_(m,n), Y_(m,n))), calculated as:

$$d_{(m,n,k)}=\sqrt{(X_k-X_{(m,n)})^2+(Y_k-Y_{(m,n)})^2}\quad(10)$$

式(10)中,Xk表示第k个点的X坐标;Yk表示第k个点的Y坐标;In formula (10), Xk represents the X coordinate of the kth point; Yk represents the Y coordinate of the kth point;

此时插值点的强度值I(m,n)表示为：At this time, the intensity value I_(m,n) of the interpolation point is expressed as:

$$I_{(m,n)}=\frac{\sum_{k} I_k \big/ d_{(m,n,k)}^{2}}{\sum_{k} 1 \big/ d_{(m,n,k)}^{2}}\quad(11)$$

通过判断每个格网内的强度值,并将其映射至图像的灰度区间,最终将道路点云转换为道路点云的强度图像;By determining the intensity value in each grid and mapping it to the grayscale interval of the image, the road point cloud is finally converted into an intensity image of the road point cloud;
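A minimal sketch of the IDW intensity interpolation for one raster pixel, as used above to fill the intensity image; the function name and the power-2 weighting are assumptions of this sketch, and points are given as (x, y, intensity) triples:

```python
import math

def idw_intensity(points, cx, cy, radius, power=2.0):
    """IDW estimate of a pixel intensity: weighted mean of the reflectances I_k of
    points P_k within the interpolation radius, weights 1/d^power; a point exactly
    on the pixel centre returns its own intensity, an empty neighbourhood yields 0."""
    num = den = 0.0
    for x, y, inten in points:
        d = math.hypot(x - cx, y - cy)
        if d > radius:
            continue          # outside the interpolation radius R_IDW
        if d == 0.0:
            return inten      # point sits on the pixel centre
        w = 1.0 / d ** power
        num += w * inten
        den += w
    return num / den if den else 0.0
```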

步骤2.1.3,消除强度图像异常区域:采用IDW算法去除道路点云的强度图像中大部分强度异常点,针对仍然存在的少量椒盐噪声情况,采用改进的自适应中值滤波对道路点云的强度图像进行处理;Step 2.1.3, eliminate abnormal areas in the intensity image: use the IDW algorithm to remove most of the abnormal intensity points in the intensity image of the road point cloud. For the small amount of salt and pepper noise that still exists, use the improved adaptive median filter to process the intensity image of the road point cloud;

采用改进的自适应中值滤波对道路点云的强度图像进行处理具体为:1)假设强度图像的像素点为(x,y),则f(x,y)表示强度图像的像素值,Wij为移动窗口,其中加入移动窗口的四个指标:Wmax、fmax、fmin、fmid分别表示移动窗口的最大值、灰度值的极大值、灰度值的极小值和灰度值的中值;2)当满足fmin<f(x,y)<fmax,则判定该像素点不是噪声点并且输出,否则判定该点为噪声点;3)对于判定的噪声点,判断其窗口fmid是否为噪声,如果fmin<fmid<fmax,则该窗口中值不是噪声,并且输出该窗口中值;如果fmid为噪声,此时扩大窗口且Wij=Wij+1。当Wij<Wmax,重复上述2)的操作;4)若Wij≥Wmax,则去除窗口内所有极大值点和极小值点,强度图像内剩余的像素做加权平均后输出;The improved adaptive median filter is used to process the intensity image of the road point cloud as follows: 1) Assuming that the pixel point of the intensity image is (x, y), then f(x, y) represents the pixel value of the intensity image, Wij is a moving window, and four indicators of the moving window are added: Wmax , fmax , fmin , and fmid , which respectively represent the maximum value of the moving window, the maximum value of the grayscale value, the minimum value of the grayscale value, and the median value of the grayscale value; 2) When fmin <f(x, y)< fmax , the pixel point is determined to be not a noise point and is output, otherwise the point is determined to be a noise point; 3) For the determined noise point, determine whether its window fmid is noise. If fminfmidfmax , the median value in the window is not noise, and the median value in the window is output; if fmid is noise, the window is expanded and WijWij +1. When Wij < Wmax , repeat the above operation 2); 4) if Wij ≥Wmax , remove all the maximum and minimum points in the window, and output the remaining pixels in the intensity image after weighted average;
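The adaptive median filter of step 2.1.3 can be sketched per pixel as follows. This is an interpretation of the text, not the patented implementation: the window is grown by 2 per step to stay odd-sized, and the last resort averages the non-extreme pixels of the final window:

```python
def adaptive_median(img, x, y, w_max=7):
    """Adaptive median filter for one pixel of a grayscale image (list of rows).
    Keep f(x,y) if f_min < f(x,y) < f_max in the window; otherwise output the
    window median when it is not an extreme, enlarging the window up to w_max;
    as a last resort return the mean of the non-extreme pixels in the window."""
    h, width = len(img), len(img[0])
    w = 3
    while w <= w_max:
        r = w // 2
        vals = [img[j][i]
                for j in range(max(0, y - r), min(h, y + r + 1))
                for i in range(max(0, x - r), min(width, x + r + 1))]
        f_min, f_max = min(vals), max(vals)
        vals.sort()
        f_mid = vals[len(vals) // 2]
        if f_min < img[y][x] < f_max:
            return img[y][x]            # not a noise point
        if f_min < f_mid < f_max:
            return f_mid                # window median is clean
        w += 2                          # f_mid is noise: enlarge window
    inner = [v for v in vals if f_min < v < f_max]
    return sum(inner) // len(inner) if inner else img[y][x]
```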

步骤2.1.4，基于积分图像自适应阈值分割：积分图像是指每个像素值都是原始图像左上角矩形区域的总和，通过积分图像计算原始图像任意矩形区域的像素值的和，而每个像素的自适应阈值由其周围像素灰度值的平均值确定。原始像素点为pi(xi,yi)，该点的对应灰度值为I(xi,yi)，积分图像表示为intI(xi,yi)，通过定义背景半径s和分割系数t，以pi为中心、边长为2s+1的矩形区域中像素灰度值之和公式表示为：Step 2.1.4, adaptive threshold segmentation based on the integral image: in the integral image, each pixel value is the sum of the rectangular area above and to the left of it in the original image, so the sum of pixel values over any rectangular area can be computed from it, and the adaptive threshold of each pixel is determined by the average grayscale value of its surrounding pixels. Let the original pixel be p_i(x_i, y_i) with grayscale value I(x_i, y_i) and integral image intI(x_i, y_i); defining a background radius s and a segmentation coefficient t, the sum of the pixel grayscale values in the rectangle of side 2s+1 centered on p_i is:

$$sum(x_i,y_i)=intI(x_i{+}s,y_i{+}s)-intI(x_i{-}s{-}1,y_i{+}s)-intI(x_i{+}s,y_i{-}s{-}1)+intI(x_i{-}s{-}1,y_i{-}s{-}1)\quad(12)$$

当矩形区域超出图像的边界时，应该以图像边界为准，此时二值图像灰度值计算公式为：When the rectangular area exceeds the boundary of the image, the image boundary should prevail; the grayscale value of the binary image is then calculated as:

$$I_b(x_i,y_i)=\begin{cases}0, & I(x_i,y_i)\cdot(2s+1)^2\le sum(x_i,y_i)\cdot\dfrac{100-t}{100}\\255, & \text{otherwise}\end{cases}\quad(13)$$
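A runnable sketch of integral-image adaptive thresholding in the Bradley-Roth style described by step 2.1.4, with the background radius s and percentage segmentation coefficient t as in the text; boundary handling clamps the window to the image (an assumption consistent with the rule above):

```python
def bradley_threshold(img, s=2, t=15):
    """Integral-image adaptive thresholding: a pixel becomes foreground (255) when
    its grey value exceeds the neighbourhood mean reduced by t percent, where the
    (2s+1)x(2s+1) neighbourhood sum is read from an integral image in O(1)."""
    h, w = len(img), len(img[0])
    # integral image with one row/column of zero padding
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        y0, y1 = max(0, y - s), min(h - 1, y + s)
        for x in range(w):
            x0, x1 = max(0, x - s), min(w - 1, x + s)
            count = (x1 - x0 + 1) * (y1 - y0 + 1)
            total = (ii[y1 + 1][x1 + 1] - ii[y0][x1 + 1]
                     - ii[y1 + 1][x0] + ii[y0][x0])
            if img[y][x] * count * 100 > total * (100 - t):
                out[y][x] = 255
    return out
```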

步骤2.2具体为,Step 2.2 is as follows:

步骤2.2.1：使用基于形态学的闭合运算对道路标志线进行优化，对道路标志线中间的孔洞残缺进行修复填充。建立原图像Mask和标记图像Marker：Mask作为重建参考，Marker是使用形态学方法对步骤2.1.4中得到的图像连续膨胀、再根据Mask的约束收敛（即Mask≥Marker）的结果；最后对Marker进行取补操作再与Mask相减，完成对白色孔洞的填充，其中Mask即取步骤2.1.4分割得到的原二值图像。Step 2.2.1: Use a morphology-based closing operation to optimize the road marking lines and repair and fill the holes inside them. An original image Mask and a marker image Marker are established: Mask serves as the reconstruction reference, while Marker is obtained by continuously dilating the image from step 2.1.4 with morphological operations and letting it converge under the constraint of Mask (i.e., Mask ≥ Marker); finally, Marker is complemented and then subtracted from Mask to fill the white holes, where Mask is taken as the original binary image obtained from the segmentation in step 2.1.4.

步骤2.2.2，形态学方法优化道路标志线：通过形态学抗锯齿方法(Morphological Antialiasing，MLAA)处理二值图像边界的锯齿化现象，具体为：Step 2.2.2, morphological optimization of road marking lines: the aliasing of the binary image boundaries is handled by Morphological Antialiasing (MLAA), specifically:

步骤2.2.2.1,首先查找步骤2.2.1已经进行白色孔洞填充的图片中明显像素不连续的部分,作为需要被处理的边缘;Step 2.2.2.1, first find the part with obvious pixel discontinuity in the image that has been filled with white holes in step 2.2.1 as the edge that needs to be processed;

步骤2.2.2.2,将需要被处理的边缘分类为不同的模式,对标记的边缘重新矢量化;Step 2.2.2.2, classify the edges to be processed into different modes and re-vectorize the marked edges;

步骤2.2.2.3,保留平滑部分,通过边缘矢量化计算权重,将边缘像素与其周围像素按照权重进行混合,实现边缘的平滑处理。Step 2.2.2.3, keep the smooth part, calculate the weight through edge vectorization, mix the edge pixels with the surrounding pixels according to the weight, and achieve edge smoothing.

该步骤得到的道路标志线提取结果如图2所示,其中,图2(a)是选取的一个场景优化后的最终道路标志线点云提取结果图,图2(b)是选取的一个场景优化后的最终道路标志线点云提取结果图,图2(c)是选取的另一个场景优化后的最终道路标志线点云提取结果图。The road sign line extraction result obtained in this step is shown in Figure 2, wherein Figure 2(a) is a final road sign line point cloud extraction result diagram after optimization of a selected scene, Figure 2(b) is a final road sign line point cloud extraction result diagram after optimization of a selected scene, and Figure 2(c) is a final road sign line point cloud extraction result diagram after optimization of another selected scene.

实施例4Example 4

本实施例提供一种基于MLS点云的道路标志线检测与识别方法,在实施例1-3的基础上,步骤3基于改进PointNet++的道路标志线分类具体按照以下步骤实施:This embodiment provides a method for detecting and recognizing road sign lines based on MLS point clouds. On the basis of Embodiments 1-3, step 3 is based on the road sign line classification of improved PointNet++ and is specifically implemented according to the following steps:

步骤3.1采用欧式聚类将道路标志线点云划分各个独立对象,具体为:Step 3.1 uses Euclidean clustering to divide the road sign point cloud into independent objects, specifically:

步骤3.1.1,假设场景内道路标志线点集为P={Pi(Xi,Yi),i=1,2,3...},参数D表示聚类半径,C表示以D为半径的聚类圆,从点集P中取任意一点p,并且将p点放入点集聚类簇中;Step 3.1.1, assuming that the road sign line point set in the scene is P = {P i (X i ,Y i ), i = 1, 2, 3...}, parameter D represents the clustering radius, C represents the clustering circle with D as the radius, select any point p from the point set P, and put point p into the point set cluster;

步骤3.1.2,以D为半径,以p为圆心作圆,将圆内的点都并入之前定义的聚类簇中;Step 3.1.2, draw a circle with D as radius and p as center, and merge all points in the circle into the previously defined cluster;

步骤3.1.3,使用点集聚类簇中的其他点作为新的圆心,重复步骤3.1.2,直到没有新的点加入该聚类簇为止;Step 3.1.3, use other points in the point set cluster as the new center of the circle, and repeat step 3.1.2 until no new points are added to the cluster;

步骤3.1.4,重复步骤3.1.1和步骤3.1.3,使用点集P中的其他点生成新的聚类簇,每个聚类簇即为独立对象。Step 3.1.4, repeat steps 3.1.1 and 3.1.3, and use other points in the point set P to generate new clusters, each of which is an independent object.
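Steps 3.1.1-3.1.4 describe region-growing Euclidean clustering; a brute-force sketch follows, where the function name and the radius parameter d are placeholders for this illustration:

```python
import math

def euclidean_cluster(points, d=1.0):
    """Region-growing Euclidean clustering: seed a cluster with an unvisited point,
    repeatedly absorb every point within radius d of any cluster member, then start
    a new cluster; brute-force neighbour search for clarity."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            c = frontier.pop()
            near = [j for j in unvisited
                    if math.dist(points[c], points[j]) <= d]
            for j in near:
                unvisited.remove(j)
            cluster.extend(near)
            frontier.extend(near)
        clusters.append([points[i] for i in cluster])
    return clusters
```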

步骤3.2具体为,Step 3.2 is as follows:

步骤3.2.1,道路标志线特征提取:为步骤3.1分离得到的独立对象建立最小外包矩形(Minimum bounding rectangle,MBR);Step 3.2.1, road sign line feature extraction: establish a minimum bounding rectangle (MBR) for the independent object separated in step 3.1;

建立MBR具体为：1)建立对象的简单外接矩形，并求取对象的中心点，假设此时对象内的点集为p={pi(xi,yi),i=1,2,3...}，中心点坐标公式为：The specific steps of establishing the MBR are as follows: 1) Establish a simple circumscribed rectangle of the object and find the center point of the object. Assume the point set in the object is p = {p_i(x_i, y_i), i = 1, 2, 3...}; the center point coordinates are:

$$x_c=\frac{1}{n}\sum_{i=1}^{n}x_i,\qquad y_c=\frac{1}{n}\sum_{i=1}^{n}y_i$$

2)以中心点为原点旋转外接矩形,旋转阈值设置为1,每旋转1度,求解此时外接矩形面积,并且记录此时矩形的旋转角度和顶点坐标;2) Rotate the circumscribed rectangle with the center point as the origin, set the rotation threshold to 1, calculate the area of the circumscribed rectangle at each rotation of 1 degree, and record the rotation angle and vertex coordinates of the rectangle at this time;

3)比较上述2)旋转所得全部外接矩形的面积,面积最小的外接矩形即为对象的MBR;3) Compare the areas of all the circumscribed rectangles obtained by the rotation in 2) above, and the circumscribed rectangle with the smallest area is the MBR of the object;
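The rotation-based MBR search of steps 1)-3) can be sketched as follows. Sweeping only 0-89 degrees is an assumption of the sketch (an axis-aligned bounding box repeats with period 90°), and names are placeholders:

```python
import math

def mbr_area_and_angle(points, step_deg=1):
    """Minimum bounding rectangle by exhaustive rotation: rotate the point set about
    its centroid in 1-degree increments, take the axis-aligned bounding box at each
    angle, and keep the angle with the smallest area."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    best = (float("inf"), 0)
    for deg in range(0, 90, step_deg):   # bounding box repeats every 90 degrees
        a = math.radians(deg)
        xs, ys = [], []
        for x, y in points:
            dx, dy = x - cx, y - cy
            xs.append(dx * math.cos(a) - dy * math.sin(a))
            ys.append(dx * math.sin(a) + dy * math.cos(a))
        area = (max(xs) - min(xs)) * (max(ys) - min(ys))
        if area < best[0]:
            best = (area, deg)
    return best  # (minimal area, rotation angle in degrees)
```

The aspect ratio β used in step 3.2.2 then follows from the bounding box side lengths at the returned angle.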

步骤3.2.2，车道线分类：定义3.2.1中得到的MBR的长宽比β为对象的几何特征，根据长宽比从聚类簇中提取实型车道线与虚型车道线：当β>37时，认为该点云簇为实型车道线；当13<β<27时，认为该点云簇为虚型车道线；当不满足实型车道线与虚型车道线的这两种条件时，则认为该标志为小尺寸道路标志线（箭头标志）或斑马线；Step 3.2.2, lane line classification: the aspect ratio β of the MBR obtained in step 3.2.1 is defined as the geometric feature of the object, and solid and dashed lane lines are extracted from the clusters according to the aspect ratio: when β > 37, the point cloud cluster is considered a solid lane line; when 13 < β < 27, it is considered a dashed lane line; when neither condition is met, the marking is considered a small-sized road marking (an arrow marking) or a zebra crossing;

步骤3.2.3，斑马线分类：采用基于主成分分析(Principal Component Analysis，PCA)的方法判别点云簇的排列方式，从而提取其中并排排列的矩形，并将其划分为斑马线。对于任意矩形标志R，计算其质心r在xoy平面上的坐标，然后针对R计算其协方差矩阵CR，通过对CR进行特征值分解，选择与最大特征值相关的特征向量作为标志R的分布特征，并将其设为vr，该参数代表矩形标志R的方向。为了判断标志R的排列方式，首先沿着标志R的两侧进行搜索，搜索的宽度为w，如果在搜索区域内搜索到质心为q、分布特征为vq的矩形标志Q，且满足公式(vr//vq)∧(vr//vrq)∧(vq//vrq)（其中vrq为由质心r指向质心q的向量），则将该标志R分类为斑马线。Step 3.2.3, zebra crossing classification: a Principal Component Analysis (PCA) based method is used to determine the arrangement of the point cloud clusters, so as to extract the rectangles arranged side by side and classify them as zebra crossings. For any rectangular marking R, the coordinates of its centroid r on the xoy plane are calculated, then the covariance matrix C_R is computed for R; by eigenvalue decomposition of C_R, the eigenvector associated with the largest eigenvalue is selected as the distribution feature of the marking R, denoted v_r, which represents the direction of the rectangular marking R. To determine the arrangement of marking R, a search is first carried out along both sides of R with search width w; if a rectangular marking Q with centroid q and distribution feature v_q is found in the search area satisfying (v_r // v_q) ∧ (v_r // v_rq) ∧ (v_q // v_rq), where v_rq is the vector from centroid r to centroid q, then the marking R is classified as a zebra crossing.
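For the distribution feature v_r, the dominant eigenvector of the 2x2 XY covariance matrix has a closed form; this sketch computes it directly (function name assumed, degenerate isotropic case resolved to the axis of larger variance):

```python
import math

def principal_direction(points):
    """Dominant PCA direction of a 2D point set: eigenvector of the 2x2 covariance
    matrix [[sxx, sxy], [sxy, syy]] for the larger eigenvalue, unit-normalized."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # larger eigenvalue via the closed form for a symmetric 2x2 matrix
    lam = (sxx + syy) / 2 + math.sqrt(((sxx - syy) / 2) ** 2 + sxy ** 2)
    # eigenvector (sxy, lam - sxx) is valid when sxy != 0
    vx, vy = (sxy, lam - sxx) if abs(sxy) > 1e-12 else \
             ((1.0, 0.0) if sxx >= syy else (0.0, 1.0))
    norm = math.hypot(vx, vy)
    return vx / norm, vy / norm
```

Testing parallelism of v_r, v_q and v_rq then reduces to comparing such unit vectors up to sign.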

步骤3.3具体为,Step 3.3 is as follows:

步骤3.3.1,数据集制作,具体为:对于场景中存在的未分类的小尺寸道路标志线,获取其数据,手动框选数据,手动选取点集类别,标注数据,划分数据集;Step 3.3.1, data set preparation, specifically: for the unclassified small-sized road sign lines in the scene, obtain their data, manually select the data, manually select the point set category, annotate the data, and divide the data set;

步骤3.3.2,在原始PointNet++网络的主干分支旁构建自顶向下的网络,局部迭代性的使用PointNet网络不断提取特征,对步骤3.3.1中的数据集进行分类,得到如图3所示的道路标志线分类结果图。In step 3.3.2, a top-down network is built next to the main branch of the original PointNet++ network. The PointNet network is used locally and iteratively to continuously extract features and classify the data set in step 3.3.1 to obtain the road sign line classification result diagram shown in Figure 3.

Claims (9)

1. The road sign line detection and identification method based on the MLS point cloud is characterized by comprising the following steps of:
step 1, extracting a progressive road point cloud;
step 2, extracting road sign lines optimized based on a self-adaptive threshold segmentation and morphology method;
and 3, classifying road sign lines based on the improved PointNet++.
2. The road sign line detection and identification method based on the MLS point cloud according to claim 1, wherein the step 1 specifically comprises:
step 1.1, performing rough road extraction through an improved CSF algorithm;
step 1.2, carrying out fine extraction of road point clouds by using a road point cloud extraction algorithm of the pseudo scanning line;
step 1.3, road point cloud boundary optimization: clustering road points based on a DBSCAN clustering algorithm, setting constraint rules, and re-fitting boundaries by adopting a RANSAC method;
the step 2 specifically comprises the following steps:
step 2.1, road sign line segmentation;
step 2.2, road sign line segmentation optimization;
step 2.3, generating point cloud through the back projection of the depth image stored during the point cloud projection and the binary image segmented by the self-adaptive threshold value, and realizing the extraction of the point cloud of the road sign line;
the step 3 specifically comprises the following steps:
Step 3.1, clustering and dividing the road sign lines;
step 3.2, classifying lane lines and zebra crossings;
step 3.3, small-sized road sign line classification based on modified PointNet++.
3. The road sign line detection and identification method based on the MLS point cloud according to claim 2, wherein the step 1.1 is specifically implemented according to the following steps:
step 1.1.1, scene preprocessing: preprocessing a scene by adopting a discrete point removal method based on a statistical filter, indexing all point clouds in the scene, firstly calculating the average distance d_n between any point p_i and its n nearest neighbors, assuming the average distance d_n of any p_i obeys a Gaussian distribution, calculating the mean value μ and standard deviation σ of the Gaussian distribution, establishing a standard range according to the obtained mean value and standard deviation, and, when the average distance d_n is larger than the set standard range, defining the point as a discrete point and removing it;
the formula for d_n is:

$$d_n=\frac{1}{n}\sum_{j=1}^{n}\sqrt{(x_i-x_j)^2+(y_i-y_j)^2+(z_i-z_j)^2}\quad(1)$$
step 1.1.2, coarse extraction of road point cloud: filtering the road by adopting an improved CSF algorithm, thereby preliminarily extracting the road point cloud, in particular
Step 1.1.2.1, inputting and reversing denoised road point cloud data;
step 1.1.2.2, establishing a simulated cloth grid of the road point cloud, and customizing the size of the grid, wherein the grid is positioned above all the road point clouds at the initial moment;
Step 1.1.2.3, projecting all point clouds and grid particles in a scene onto the same horizontal plane, searching nearest neighbors of each particle at the same time, and recording elevation values IHV before projection;
step 1.1.2.4, for each movable grid particle, calculating the displacement generated by the influence of the internal factors, comparing with the elevation value of the current nearest neighbor, and resetting the elevation value of the particle to IHV and changing the particle to an immovable point if the elevation value of the particle is lower than or equal to IHV at this time;
step 1.1.2.5, for each mesh particle, calculating the displacement thereof due to the internal factors;
step 1.1.2.6, repeating step 1.1.2.4 and step 1.1.2.5, and stopping the simulation process when the iteration number is greater than the user set value;
step 1.1.2.7, calculating the elevation difference between the point cloud and the grid particles;
in step 1.1.2.8, if the distance between the point cloud and the grid particles is smaller than the preset threshold HCC, the point is considered to be a ground point, otherwise, the point is considered to be a non-ground point, and the extracted ground point is the road point cloud.
4. The road sign line detection and identification method based on the MLS point cloud according to claim 2, wherein the step 1.2 is specifically implemented according to the following steps:
Step 1.2.1, using the scanning line direction of the road point cloud to replace the track direction, and establishing a pseudo scanning line along the scanning line direction, namely slicing the road point cloud, and assuming any point on the track of the scanning lineIs (X) i ,Y i ) Therefore->Mileage value S corresponding to point i The formula of (2) is:
let c n (X′ n ,Y′ n ) Represents arbitrary slice point position, where c n The formula of the coordinate value of (a) is:
in the formula (3), the amino acid sequence of the compound,representation->Coordinate azimuth, delta n Representing the distance between the slice position and the nearest neighbor of the slice position on the track,/o>Representation->Coordinate values of (2);
from c n (X′ n ,Y′ n )、α′ n And D, determining the position of the slice, the direction of the slice and the length of the slice respectively; calculating the distance d between the point cloud and the slice segment, wherein when the distance is smaller than half of the slice length, the point cloud is positioned on the same slice, L represents the projection of the slice segment on the xoy plane of the coordinate system, T represents the normal line of the slice segment, and P (X) p ,Y p ) Representing the coordinates of the point cloud, and d representing the distance between the slice segment and the point cloud, then the vectorThe relationship between the normal vector T is formulated as:
d=|(X p -X′ n )cos(α′ n )+(Y p -Y′ n )sin(α′ n )| (4);
step 1.2.2, reconstructing a local coordinate system of a pseudo-scanning line to replace a geodetic coordinate system of a radar system by the projection of the road cross section slice, converting scene point clouds into the local coordinate system of the corresponding pseudo-scanning line, defining a coordinate origin as the center of each road segment, taking an X axis as a track direction, and taking a Y axis perpendicular to the X axis and P i (X i ,Y i ,Z i ) Represents any point on the segment, P i The formula of the coordinates is:
in formula (5), (X_C, Y_C) denotes the coordinates on the slice, and α denotes the azimuth angle of the track point;
the three-dimensional point is then projected onto the YOZ plane, where O′ denotes the origin of the two-dimensional coordinate system; the coordinates of a point P′_i on the pseudo scanning line after projection are given by formula (6).
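Formulas (5) and (6) are not reproduced in the extracted text; the following sketch shows the standard rotation-by-azimuth transform that such a local slice frame implies, as an assumption rather than the patent's exact formula:

```python
import math

def to_local(p, center, alpha):
    """Transform a 3D point into the local slice frame (assumption): the
    local X axis points along the track azimuth alpha, Y is perpendicular
    to it, Z is unchanged; the point is then projected onto the local
    YOZ plane, keeping only (Y', Z')."""
    dx, dy = p[0] - center[0], p[1] - center[1]
    y_local = -dx * math.sin(alpha) + dy * math.cos(alpha)  # across-track offset
    return (y_local, p[2])  # the point on the pseudo scanning line
```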
Step 1.2.3, extracting the road point cloud with a sliding-window filter: the points on the projected cross-section slice are divided into cells with a cell threshold M, so that the point cloud on the pseudo scanning line is divided into k intervals, and a pseudo scanning point coordinate is generated for each interval; the RANSAC algorithm is then used for straight-line detection, randomly selecting a subset of points for fitting, counting the data deviation of the remaining points from the fit, and retaining the points that satisfy a threshold; a buffer area is established around the retained points, and the non-road point cloud inside the buffer area is removed with the sliding-window filter.
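The RANSAC straight-line detection described above can be sketched as follows; the iteration count and tolerance are illustrative placeholders, not values from the patent:

```python
import math
import random

def ransac_line(points, n_iter=200, tol=0.1, seed=0):
    """RANSAC line fit sketch: repeatedly pick two points, count how many
    points lie within `tol` of the line through them, and keep the
    hypothesis with the most inliers."""
    rng = random.Random(seed)
    best = []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        norm = math.hypot(x2 - x1, y2 - y1)
        if norm == 0:
            continue  # degenerate sample, skip
        # perpendicular distance of each point from the candidate line
        inliers = [p for p in points
                   if abs((x2 - x1) * (y1 - p[1]) - (x1 - p[0]) * (y2 - y1)) / norm <= tol]
        if len(inliers) > len(best):
            best = inliers
    return best
```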
5. The method for detecting and identifying road sign lines based on MLS point clouds according to claim 4, wherein said step 2.1 specifically comprises:
Step 2.1.1, road point cloud projection: the XOY plane is selected as the horizontal plane, and the road point cloud data are projected onto this plane along the direction of its normal vector; let the original road point cloud set be P_n = {P_1, P_2, ..., P_n} and the projected road point cloud set be P′_n = {P′_1, P′_2, ..., P′_n}; a point t(x_0, y_0, z_0) is arbitrarily selected on the plane π, whose normal vector is denoted n; the height H of the projection of P_i in the direction of n is calculated by formula (7);
H·n is then the projection vector of the vector from t to P_i in the direction of n; since this projection vector and n have the same direction, the coordinates of the projected point P′_i are calculated by formula (8);
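Formulas (7) and (8) describe an orthogonal projection onto the plane π; assuming a unit normal n, a minimal sketch:

```python
def project_to_plane(p, t, n=(0.0, 0.0, 1.0)):
    """Project point p onto the plane through t with unit normal n:
    P' = P - H * n, where H = (P - t) . n is the signed height of P
    above the plane (the default n is the XOY-plane normal)."""
    h = sum((pi - ti) * ni for pi, ti, ni in zip(p, t, n))
    return tuple(pi - h * ni for pi, ni in zip(p, n))
```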
Step 2.1.2, two-dimensional projection image rasterization: the projected point cloud obtained in step 2.1.1 is converted into a two-dimensional raster image; using inverse distance weighted interpolation (IDW), the inverse-distance-weighted influence of the control points on surrounding pixels is calculated from the given control points and the displacement vectors of the control point pairs, thereby obtaining the intensity value of each pixel in the image; assuming P_i(X_i, Y_i, Z_i, I_i) denotes the coordinates of an arbitrary road point and its reflection intensity I_i, R_pixel denotes the resolution of the generated raster image, and R_IDW denotes the interpolation radius, then when point P_i is projected onto the plane π it becomes p_i(X_j, Y_j, I_j), and the size of the raster image is calculated by formula (9);
in formula (9), maxX denotes the maximum X value in the road point cloud; minX the minimum X value; maxY the maximum Y value; minY the minimum Y value; Width and Height denote the size of the raster image;
The intensity value of each pixel is obtained through the IDW algorithm; whether a point P_k lies within the interpolation radius is determined by the position of the interpolation pixel Pixel_(m,n) and the interpolation radius, and d_(m,n,k) denotes the distance from P_k to the pixel centre Pixel_(m,n), calculated by formula (10);
in formula (10), X_k denotes the X coordinate of the k-th point and Y_k the Y coordinate of the k-th point;
the intensity value I_(m,n) of the interpolation point is then expressed by formula (11);
the intensity value in each grid cell is evaluated and mapped to the grayscale interval of the image, finally converting the road point cloud into an intensity image of the road point cloud;
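The IDW interpolation of formulas (10)-(11) at a single pixel centre can be sketched as follows; the distance exponent (here 2) is assumed, since it is not stated in the extracted text:

```python
import math

def idw_intensity(points, center, r_idw, power=2):
    """IDW intensity at one pixel centre: each point within the
    interpolation radius r_idw contributes with weight 1/d**power
    (power = 2 is an assumption, not stated in the patent)."""
    num = den = 0.0
    for x, y, inten in points:
        d = math.hypot(x - center[0], y - center[1])
        if d > r_idw:
            continue  # outside the interpolation radius
        if d == 0:
            return inten  # a point sits exactly at the pixel centre
        w = 1.0 / d ** power
        num += w * inten
        den += w
    return num / den if den else 0.0
```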
Step 2.1.3, eliminating abnormal areas of the intensity image: most of the intensity outliers in the intensity image of the road point cloud are removed by the IDW algorithm, and the small amount of salt-and-pepper noise that remains is handled by an improved adaptive median filter;
Step 2.1.4, adaptive threshold segmentation based on the integral image: the integral image at a pixel is the sum of the pixel values in the rectangular area above and to the left of it in the original image, so the sum of the pixel values of any rectangular area of the original image can be calculated from the integral image; the adaptive threshold of each pixel is determined by the mean gray value of its surrounding pixels; let the original pixel be p_i(x_i, y_i), the corresponding gray value of the point be I(x_i, y_i), and the integral image be intI(x_i, y_i); the sum of pixel gray values in a rectangular region is calculated by defining a background radius s and a partition coefficient t: centred on p_i, the sum of gray values over a side length of 2s+1 is calculated, and the sum of pixel gray values in the rectangular area is expressed by formula (12);
when the rectangular area exceeds the image boundary, the boundary of the image is used as the limit; the gray value of the binary image is then calculated by formula (13):
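The integral-image rectangle-sum trick underlying formulas (12)-(13) can be sketched as follows (pure Python, inclusive coordinates):

```python
def integral_image(img):
    """Summed-area table: intI[y][x] = sum of img over the rectangle
    from (0, 0) to (x, y) inclusive."""
    h, w = len(img), len(img[0])
    intI = [[0] * w for _ in range(h)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            intI[y][x] = row + (intI[y - 1][x] if y else 0)
    return intI

def rect_sum(intI, x0, y0, x1, y1):
    """Sum of pixels in the inclusive rectangle (x0, y0)-(x1, y1)
    using four look-ups into the integral image."""
    s = intI[y1][x1]
    if x0:
        s -= intI[y1][x0 - 1]
    if y0:
        s -= intI[y0 - 1][x1]
    if x0 and y0:
        s += intI[y0 - 1][x0 - 1]
    return s
```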
6. The method for detecting and identifying road sign lines based on MLS point clouds according to claim 5, wherein said step 2.2 specifically comprises:
Step 2.2.1, optimizing the road marking line with a morphology-based closing operation to repair and fill hole defects inside the marking line: an original image Mask and a marker image Marker are established as the reconstruction references; the image obtained in step 2.1.4 is repeatedly dilated by the morphological method and converges under the constraint of the Mask, namely Mask ≥ Marker; finally a complement operation is performed and the result is subtracted from the Mask to finish filling the white holes, wherein the expression of the Marker is:
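Hole filling by morphological reconstruction is equivalent to flood-filling the background from the image border and turning every unreached background pixel into foreground; a pure-Python sketch of that equivalence (4-connectivity assumed, which the patent does not specify):

```python
from collections import deque

def fill_holes(binary):
    """Fill holes inside white regions: flood-fill the background from
    the image border; any background pixel not reached is enclosed by
    foreground, i.e. a hole, and is turned white."""
    h, w = len(binary), len(binary[0])
    reached = [[False] * w for _ in range(h)]
    q = deque((y, x) for y in range(h) for x in range(w)
              if (y in (0, h - 1) or x in (0, w - 1)) and not binary[y][x])
    for y, x in q:                     # seed: background pixels on the border
        reached[y][x] = True
    while q:                           # grow the reachable background
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not binary[ny][nx] and not reached[ny][nx]:
                reached[ny][nx] = True
                q.append((ny, nx))
    return [[bool(binary[y][x]) or not reached[y][x] for x in range(w)]
            for y in range(h)]
```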
Step 2.2.2, optimizing the road marking line by a morphological method: the aliasing of the binary image boundary is processed with the morphological anti-aliasing (MLAA) method, specifically:
Step 2.2.2.1, first searching the hole-filled image from step 2.2.1 for parts with obvious pixel discontinuities, which are taken as the edges to be processed;
Step 2.2.2.2, classifying the edges to be processed into different patterns, and re-vectorizing the marked edges;
Step 2.2.2.3, keeping the smooth parts, calculating weights from the edge vectorization, and blending edge pixels with surrounding pixels according to these weights to achieve edge smoothing.
7. The method for detecting and identifying the road marking based on the MLS point cloud according to claim 6, wherein step 3.1 adopts Euclidean clustering to divide the road marking point cloud into individual objects, specifically:
Step 3.1.1, assume that the set of road marking points in the scene is P = {P_i(X_i, Y_i) | i = 1, 2, 3, ...}; the parameter D denotes the clustering radius and C denotes a clustering circle with radius D; take any point p from the point set P and place p into a point cluster;
Step 3.1.2, draw a circle of radius D centred at p, and merge the points inside the circle into the previously defined cluster;
Step 3.1.3, taking the other points in the point cluster as new circle centres, repeat step 3.1.2 until no new point is added to the cluster;
Step 3.1.4, repeat steps 3.1.1 to 3.1.3 to generate new clusters from the remaining points of the point set P; each cluster is an independent object.
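Steps 3.1.1 to 3.1.4 amount to fixed-radius region growing; a direct sketch:

```python
import math
from collections import deque

def euclidean_cluster(points, D):
    """Euclidean clustering as in steps 3.1.1-3.1.4: seed a cluster with
    an unvisited point, repeatedly absorb every point within radius D of
    any cluster member, then start a new cluster from a remaining point."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, q = [seed], deque([seed])
        while q:
            i = q.popleft()
            near = [j for j in unvisited
                    if math.dist(points[i], points[j]) <= D]
            for j in near:
                unvisited.remove(j)
                cluster.append(j)
                q.append(j)
        clusters.append(cluster)
    return clusters
```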
8. The method for detecting and identifying road sign lines based on MLS point clouds according to claim 3, wherein said step 3.2 specifically comprises:
Step 3.2.1, extracting road marking line features: a minimum bounding rectangle (MBR) is established for each independent object obtained by the separation in step 3.1;
Step 3.2.2, lane line classification: the aspect ratio β of the MBR obtained in step 3.2.1 is defined as the geometric feature of the object; solid lane lines and dashed lane lines are extracted from the clusters according to the aspect ratio, and when neither the solid-line nor the dashed-line condition is satisfied, the marking is considered a small-size road marking, i.e. an arrow marking or a zebra-crossing element;
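A sketch of the aspect-ratio decision in step 3.2.2; the numeric thresholds and label strings are illustrative placeholders, since the extracted text does not give the actual values:

```python
def classify_by_aspect(width, height, solid_min=10.0, dashed_range=(2.0, 10.0)):
    """Classify a marking by the aspect ratio beta of its MBR.
    The thresholds solid_min and dashed_range are placeholders,
    not values from the patent."""
    beta = max(width, height) / max(min(width, height), 1e-9)
    if beta >= solid_min:
        return "solid lane line"
    if dashed_range[0] <= beta < dashed_range[1]:
        return "dashed lane line"
    return "small-size marking (arrow or zebra element)"
```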
Step 3.2.3, zebra crossing classification: a PCA-based method is adopted to judge the arrangement of the point cloud clusters and extract rectangles arranged side by side, which are classified as zebra crossings; the centroid coordinates of any rectangular marking R on the xoy plane are calculated, then its covariance matrix C_R; eigenvalue decomposition is performed on C_R, and the eigenvector associated with the largest eigenvalue is selected as the distribution feature of the marking R, denoted v_r; to judge the arrangement of the markings, a search is first carried out along both sides of the rectangular marking R in the direction of v_r with search width w; if a marking with centroid q and distribution feature v_q exists in the search area and satisfies (v_r ∥ v_q) ∧ (v_r ∥ v_rq) ∧ (v_q ∥ v_rq), where v_rq denotes the vector between the centroids of R and q, the marking R is classified as a zebra crossing.
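The distribution feature v_r of step 3.2.3 is the leading eigenvector of the 2x2 covariance matrix on the xoy plane; for the symmetric 2x2 case this has a closed form, sketched here:

```python
import math

def principal_direction(points):
    """Distribution feature v_r: the unit eigenvector of the 2x2
    xoy-plane covariance matrix associated with the largest eigenvalue,
    computed in closed form for the symmetric 2x2 case."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    if abs(sxy) < 1e-12:               # covariance already axis-aligned
        return (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    # largest eigenvalue of [[sxx, sxy], [sxy, syy]]
    lam = (sxx + syy) / 2 + math.sqrt(((sxx - syy) / 2) ** 2 + sxy ** 2)
    vx, vy = lam - syy, sxy            # eigenvector for the largest eigenvalue
    norm = math.hypot(vx, vy)
    return (vx / norm, vy / norm)
```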
9. The method for detecting and identifying road sign lines based on MLS point clouds according to claim 2, wherein said step 3.3 specifically comprises:
Step 3.3.1, data set preparation, specifically: for the unclassified small-size road marking lines in the scene, their data are collected and manually framed, the point set categories are manually selected, the data are labelled, and the data set is partitioned;
Step 3.3.2, a top-down branch is constructed beside the main branch of the original PointNet++ network, features are continuously extracted with the PointNet network in a locally iterative manner, and the data set of step 3.3.1 is classified.
CN202311815103.9A 2023-12-27 2023-12-27 Road sign detection and recognition method based on MLS point cloud Pending CN117876984A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311815103.9A CN117876984A (en) 2023-12-27 2023-12-27 Road sign detection and recognition method based on MLS point cloud


Publications (1)

Publication Number Publication Date
CN117876984A true CN117876984A (en) 2024-04-12

Family

ID=90591149

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311815103.9A Pending CN117876984A (en) 2023-12-27 2023-12-27 Road sign detection and recognition method based on MLS point cloud

Country Status (1)

Country Link
CN (1) CN117876984A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118522004A (en) * 2024-06-13 2024-08-20 大连慕泽科技有限公司 Multifunctional road detection vehicle



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination