CN113379711B - An Image-Based Method for Obtaining the Adhesion Coefficient of Urban Road Pavement - Google Patents

Info

Publication number
CN113379711B
CN113379711B (application number CN202110683924.6A)
Authority
CN
China
Prior art keywords
image
road surface
road
size
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110683924.6A
Other languages
Chinese (zh)
Other versions
CN113379711A (en)
Inventor
刘俊
郭洪艳
刘惠
赵旭
陈虹
高振海
胡云峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin University
Original Assignee
Jilin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin University
Priority to CN202110683924.6A
Publication of CN113379711A
Application granted
Publication of CN113379711B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/0002: Inspection of images, e.g. flaw detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50: Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583: Retrieval characterised by using metadata automatically derived from the content
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/187: Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20081: Training; Learning
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T90/00: Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Library & Information Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an image-based method for obtaining the adhesion coefficient of urban road pavement. The method first establishes a road surface image information base, then builds a road surface image data set, constructs and trains a road image region extraction network, constructs and trains a pavement type recognition network, and finally obtains the road adhesion coefficient information. The method can supply road adhesion coefficient information for the development of intelligent driving assistance systems and unmanned driving systems. Because the adhesion coefficient is obtained from images of the road ahead, the road adhesion information is available in advance. By running the image-based road region extraction network in series with the pavement recognition network, and by simplifying the structure of the recognition network, the adhesion information of the road ahead can be obtained quickly and in real time.

Description

An Image-Based Method for Obtaining the Adhesion Coefficient of Urban Road Pavement

Technical Field

The invention belongs to the technical field of intelligent vehicles and relates to a method for obtaining the road surface adhesion coefficient, and more particularly to an image-based method for obtaining the adhesion coefficient of urban road pavement.

Background Art

With the development of automobile intelligence, users place ever higher demands on the performance of in-vehicle intelligent driving assistance systems and unmanned driving systems, and the performance of most intelligent driving assistance systems depends on the accuracy of dynamics control. The design of a high-performance dynamics control system requires real-time, accurate road surface information. An estimator based on a dynamics model can produce real-time, accurate estimates of the road adhesion coefficient, but such dynamics-based estimation depends heavily on the accuracy of the vehicle model and the tire model, and requires certain driving excitation conditions to be met. In addition, the dynamics estimate reflects the adhesion coefficient at the contact patch between tire and road surface, so it lags behind the road ahead and cannot provide a predicted value of the adhesion coefficient in advance.

At present, more and more intelligent vehicles are equipped with cameras and other devices to obtain road information and information about surrounding vehicles, which brings new opportunities for research on road adhesion coefficient identification. The advantage of such sensing is that the road surface condition ahead can be perceived in advance, giving a degree of predictive capability: an intelligent vehicle can adjust its control strategy ahead of a sudden change in the road surface and cope better with dangerous conditions. How to use image sensing information to obtain road adhesion coefficient information, however, remains a challenge.

Summary of the Invention

To overcome the above problems in the prior art, the present invention provides an image-based method for obtaining the adhesion coefficient of urban road pavement.

The method is realized through the following technical solution:

An image-based method for obtaining the adhesion coefficient of urban road pavement, with the following specific steps:

Step 1. Establish a road surface image information base

The precondition for image-based acquisition of the road adhesion coefficient is a complete road surface image information base, with the sample images properly processed so that the feature information in the images is fully captured.

Road surface image data must first be collected, compensating during collection for factors that degrade imaging quality. The image acquisition device is not limited to any particular type; the performance and installation requirements are: a video resolution of 1280×720 or above, a frame rate of 30 frames per second or above, a maximum effective shooting distance of more than 70 meters, and wide dynamic range technology to adapt quickly to changes in light intensity. The installation position should ensure that the road surface captured in the image occupies more than half of the whole image area.

According to urban road surface conditions under different weather, and through comparative analysis combined with the types of urban road pavement in China, the pavement types to be identified are defined as five types: asphalt, cement, loose snow, compacted snow, and ice. The video files from the data collection process are decomposed into pictures at intervals of 10 frames. Following the pavement characteristics in GB/T 920-2002 "Highway Pavement Grade and Surface Type Code" and the "Investigation and Analysis of Pavement Adhesion Coefficient in Cold Regions", the pictures are sorted into the five categories above, and images of the same pavement type are stored in the same folder, completing the establishment of the road surface image information base.
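
For illustration only, the frame-extraction step above could be scripted as follows; the patent does not name a tool for this step, so OpenCV and all paths and identifiers here are assumptions:

```python
# Sketch of the Step 1 frame extraction: decompose a collected video into
# pictures every 10 frames. OpenCV and all names are assumptions.
from pathlib import Path

import cv2


def extract_frames(video_path: str, out_dir: str, every: int = 10) -> None:
    """Save every `every`-th frame of `video_path` into `out_dir`."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video
            break
        if idx % every == 0:
            cv2.imwrite(f"{out_dir}/frame_{saved:06d}.png", frame)
            saved += 1
        idx += 1
    cap.release()


# e.g. extract_frames("asphalt_run1.mp4", "dataset/asphalt")
```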

Step 2. Establish a road surface image data set

The originally collected images still contain a large number of non-road elements, which seriously affect the accuracy of adhesion coefficient acquisition, so an image-based method needs image samples together with pixel-level labels of the road region. The images in the information base collected in Step 1 are therefore annotated with the road extent, using Labelme in the Anaconda environment as the annotation tool. Each picture in the sample set is annotated manually one by one: click the "create polygons" button and trace points along the boundary of the road region in the picture so that the annotation polygon completely covers it; the annotation class is named road. Saving after annotation generates a json file, which is converted with the bundled json_to_dataset.py script in the Anaconda environment to obtain a _json folder containing five files named img.png, label.png, label_viz.png, info.yaml, and label_names.txt. Only label.png needs to be converted into an 8-bit grayscale label map. Applying this annotation process with Labelme in Anaconda to every picture in the road surface image information base yields the grayscale label maps of those pictures; this set of grayscale label maps constitutes the road surface image data set.
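
As an illustrative sketch of the last conversion step (the folder layout and file names follow the text, but the code itself is an assumption, not part of the patent):

```python
# Turn each label.png exported by Labelme's json_to_dataset.py into an
# 8-bit grayscale label map, as described in Step 2.
from pathlib import Path

import numpy as np
from PIL import Image


def to_8bit_gray_label(label_png: Path, out_png: Path) -> None:
    # label.png is a palette image whose pixel values are already the
    # class indices (0 = background, 1 = road), so reading it as an
    # array directly yields the label map.
    arr = np.asarray(Image.open(label_png)).astype(np.uint8)
    Image.fromarray(arr, mode="L").save(out_png)


for json_dir in Path("dataset").glob("*_json"):  # hypothetical root folder
    to_8bit_gray_label(json_dir / "label.png", json_dir / "label_gray.png")
```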

Step 3. Establish and train the road image region extraction network

The road image region extraction is realized by a semantic segmentation network implemented in the Anaconda environment. The whole semantic segmentation network has an encoder-decoder structure, designed as follows:

3.1. First scale the image to be recognized to a size of 769×769×3 as the input of the semantic segmentation network.

3.2. Set the first layer as a convolutional layer with 32 filters of size 3×3, stride 2, and padding 1, followed by batch normalization and a ReLU activation; the output feature map of this layer has size 385×385×32.

3.3. Feed the convolutional layer's output feature map into a max pooling layer of size 3×3 with stride 2, giving an output feature map of size 193×193×32.

3.4. Use the pooling layer's output feature map as the input of the bottleneck module, implemented as follows: first the input feature channels are copied to increase the feature dimension. One branch passes directly through a 3×3 depthwise convolution with stride 2. The other branch is split evenly into two sub-branches by channel splitting: one sub-branch passes through a 3×3 depthwise convolution and a 1×1 pointwise convolution, while the other sub-branch reuses its features directly. The two sub-branches are then joined by channel concatenation, the channel order is shuffled by a channel-shuffle operation, and after the same 3×3 stride-2 depthwise convolution the result is concatenated with the copied branch. Finally, a 1×1 pointwise convolution exchanges information between groups. The whole bottleneck module thus halves the spatial size of the feature map and doubles the number of channels; after one bottleneck module the output feature map has size 97×97×64 (a code sketch of this module is given after step 3.8).

3.5. Take the output of 3.4 as input and pass it through the bottleneck module again, giving an output feature map of size 49×49×128 after two bottleneck modules; take this as input and pass it through the bottleneck module once more, giving an output feature map of size 25×25×256 after three bottleneck modules, 32 times smaller than the original image. All of the above forms the encoder part of the semantic segmentation network.

3.6. The decoder part adopts a skip structure. Bilinear interpolation is used to upsample the output feature map of the three bottleneck modules from 3.5 by a factor of 2, giving a feature map of size 49×49×256, which is added pixel by pixel to the output feature map of the two bottleneck modules from 3.5; in this process the output feature channels of the two-bottleneck feature map must be duplicated so that the result still has 256 channels.

3.7. Bilinear interpolation is used again to upsample the result of 3.6 by a factor of 2, giving a feature map of size 97×97×256, which is added pixel by pixel to the output feature map of the single bottleneck module from 3.4.

3.8. Pass the result of 3.7 through a 1×1 convolutional layer so that the number of output channels equals the number of semantic classes, add a Dropout layer to reduce overfitting, and finally obtain a feature map of the same size as the original image by 8× upsampling. The Argmax function assigns each pixel the semantic class with the highest probability, completing the semantic segmentation network.
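
By way of illustration only, a minimal PyTorch sketch of the bottleneck module of step 3.4 is given below; the patent prescribes the operations but not an implementation, so the layer ordering, normalization placement, and all identifiers are assumptions:

```python
# Illustrative sketch of the step-3.4 bottleneck module: halves the
# spatial size and doubles the channel count (e.g. 193x193x32 in,
# 97x97x64 out). Normalization/activation placement is an assumption.
import torch
import torch.nn as nn


class Bottleneck(nn.Module):
    def __init__(self, c: int):
        super().__init__()
        half = c // 2
        # copied branch: 3x3 depthwise convolution, stride 2
        self.copy_dw = nn.Sequential(
            nn.Conv2d(c, c, 3, stride=2, padding=1, groups=c, bias=False),
            nn.BatchNorm2d(c),
        )
        # first sub-branch of the split: 3x3 depthwise + 1x1 pointwise
        self.sub1 = nn.Sequential(
            nn.Conv2d(half, half, 3, padding=1, groups=half, bias=False),
            nn.BatchNorm2d(half),
            nn.Conv2d(half, half, 1, bias=False),
            nn.BatchNorm2d(half),
            nn.ReLU(inplace=True),
        )
        # the "same" 3x3 stride-2 depthwise conv applied after the shuffle
        self.merge_dw = nn.Sequential(
            nn.Conv2d(c, c, 3, stride=2, padding=1, groups=c, bias=False),
            nn.BatchNorm2d(c),
        )
        # final 1x1 pointwise conv: information exchange between groups
        self.pw = nn.Sequential(
            nn.Conv2d(2 * c, 2 * c, 1, bias=False),
            nn.BatchNorm2d(2 * c),
            nn.ReLU(inplace=True),
        )

    @staticmethod
    def channel_shuffle(x: torch.Tensor, groups: int = 2) -> torch.Tensor:
        n, c, h, w = x.shape
        return (x.view(n, groups, c // groups, h, w)
                 .transpose(1, 2).reshape(n, c, h, w))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a = self.copy_dw(x)                    # copied-channel branch
        s1, s2 = x.chunk(2, dim=1)             # channel split
        b = torch.cat([self.sub1(s1), s2], 1)  # conv path + feature reuse
        b = self.merge_dw(self.channel_shuffle(b))
        return self.pw(torch.cat([a, b], 1))   # 1/2 size, 2x channels


# Bottleneck(32)(torch.randn(1, 32, 193, 193)).shape -> (1, 64, 97, 97)
```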

The road surface image data set established in Step 2 is shuffled randomly; 80% of the sample pictures are taken as the training set and 20% as the validation set. During training of the semantic segmentation network, each training image tensor that is read is randomly rescaled between 0.5× and 2× in steps of 0.25, randomly cropped to 769×769 pixels, and randomly flipped horizontally, for data augmentation and better adaptability of the segmentation network; pixel values are normalized from 0-255 to 0-1.
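
A hedged sketch of this augmentation pipeline using torchvision's functional API follows; the patent names the transformations but no library, and the padding behaviour and the ignore value 255 are assumptions:

```python
# Illustrative training-time augmentation: random rescale in [0.5, 2.0]
# with step 0.25, random 769x769 crop, random horizontal flip, and 0-1
# normalization. Library choice and padding are assumptions.
import random

import torch
import torchvision.transforms.functional as TF
from torchvision.transforms import InterpolationMode


def augment(img: torch.Tensor, label: torch.Tensor):
    """img: uint8 (3, H, W); label: int64 (H, W) class-index map."""
    scale = random.choice([0.5 + 0.25 * i for i in range(7)])  # 0.5 .. 2.0
    h, w = img.shape[-2:]
    nh, nw = int(h * scale), int(w * scale)
    img = TF.resize(img, [nh, nw])
    label = TF.resize(label.unsqueeze(0), [nh, nw],
                      interpolation=InterpolationMode.NEAREST).squeeze(0)
    # pad if the rescaled image is smaller than the 769x769 crop window
    pad_w, pad_h = max(0, 769 - nw), max(0, 769 - nh)
    if pad_w or pad_h:
        img = TF.pad(img, [0, 0, pad_w, pad_h])
        label = TF.pad(label.unsqueeze(0), [0, 0, pad_w, pad_h],
                       fill=255).squeeze(0)  # 255 = ignored pixels (assumed)
    top = random.randint(0, img.shape[-2] - 769)
    left = random.randint(0, img.shape[-1] - 769)
    img = TF.crop(img, top, left, 769, 769)
    label = TF.crop(label.unsqueeze(0), top, left, 769, 769).squeeze(0)
    if random.random() < 0.5:  # random left-right flip
        img = TF.hflip(img)
        label = TF.hflip(label.unsqueeze(0)).squeeze(0)
    return img.float() / 255.0, label
```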

The Poly learning rate rule is used when training the semantic segmentation network; the learning rate decay is given by expression (1), with an initial learning rate of 0.001, training iteration counter iter, maximum number of training steps max_iter set to 20K, and power set to 0.9. The Adam optimization algorithm is used, dynamically adjusting the learning rate of each parameter with first- and second-moment estimates of the gradient. According to the computer hardware, the batch size is set to 16, the model parameters are saved every 10-30 min, and the validation set is used to evaluate network performance.

$$\mathrm{lr} = \mathrm{lr}_0 \times \left(1 - \frac{\mathrm{iter}}{\mathrm{max\_iter}}\right)^{\mathrm{power}} \tag{1}$$
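
For clarity, expression (1) can be realized as a standard PyTorch learning-rate schedule; this is a sketch, not part of the patent:

```python
# Poly rule of Eq. (1): lr = lr0 * (1 - iter/max_iter)^power, with
# lr0 = 0.001, max_iter = 20000, power = 0.9, under Adam as described.
import torch


def poly_factor(step: int, max_iter: int = 20_000, power: float = 0.9) -> float:
    return (1.0 - step / max_iter) ** power


# usage sketch, assuming `model` is the segmentation network above:
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=poly_factor)
# ... call scheduler.step() once per training iteration.
```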

After network training is completed, a suitable semantic segmentation metric must be chosen to evaluate model performance. First, the confusion matrix is introduced: as shown in Table 1, each row of the binary confusion matrix represents a predicted class and each column represents the true class of the data; each entry is the number of samples predicted as the given class.

Table 1. Binary confusion matrix (rows: predicted class; columns: true class)

                        Actually positive       Actually negative
Predicted positive      TP (true positive)      FP (false positive)
Predicted negative      FN (false negative)     TN (true negative)

The evaluation metric of the semantic segmentation network is the mean intersection over union, MIoU: for each class, the ratio of the intersection to the union of the predicted result and the ground truth is computed, then summed and averaged over classes, as shown in formula (2):

$$\mathrm{MIoU} = \frac{1}{k+1}\sum_{i=0}^{k}\frac{p_{ii}}{\sum_{j=0}^{k} p_{ij} + \sum_{j=0}^{k} p_{ji} - p_{ii}} \tag{2}$$

where $p_{ij}$ is the number of pixels predicted as class $i$ whose true class is $j$, and $k+1$ is the number of classes.
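
A small NumPy sketch of how formula (2) can be computed from a confusion matrix laid out as in Table 1 (function names are illustrative, not from the patent):

```python
# Illustrative computation of Eq. (2) from a confusion matrix with
# rows = predicted class and columns = true class, matching Table 1.
import numpy as np


def confusion_matrix(pred: np.ndarray, true: np.ndarray, k: int) -> np.ndarray:
    m = np.zeros((k, k), dtype=np.int64)
    np.add.at(m, (pred.ravel(), true.ravel()), 1)  # count (pred, true) pairs
    return m


def mean_iou(m: np.ndarray) -> float:
    tp = np.diag(m)                             # per-class intersections
    union = m.sum(axis=1) + m.sum(axis=0) - tp  # per-class unions
    return float(np.mean(tp / np.maximum(union, 1)))
```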

Training can be considered complete when MIoU > 60%; saving the trained model and its parameters yields the road image region extraction network. Feeding an actually collected original image into this network completes the extraction of the road region in the image.

Step 4. Establish and train the pavement type recognition network

The network of Step 3 completes the extraction of the road region from real-time image information; Step 4 identifies the pavement type on the basis of the road region extraction result of Step 3.

After the road image data set has been processed by the semantic segmentation network, an image set containing only the road region is obtained; this serves as the final data set for training and evaluating the pavement type recognition network. The recognition network is built in the Anaconda environment with the following structure:

4.1. First scale the image to be classified to a size of 224×224×3 as the input of the convolutional neural network.

4.2. Set the first layer as a convolutional layer with 32 filters of size 3×3, stride 2, and padding 1, followed by batch normalization and a ReLU activation; the output feature map of this layer has size 112×112×32.

4.3. Feed the convolutional layer's output feature map into a max pooling layer of size 3×3 with stride 2, giving an output feature map of size 56×56×32.

4.4. Use the pooling layer's output feature map as the input of the bottleneck module, implemented exactly as in step 3.4: the input feature channels are copied to increase the feature dimension; one branch passes directly through a 3×3 depthwise convolution with stride 2, while the other branch is split evenly into two sub-branches by channel splitting, one passing through a 3×3 depthwise convolution and a 1×1 pointwise convolution and the other reusing its features directly; the two sub-branches are joined by channel concatenation, the channel order is shuffled, and after the same 3×3 stride-2 depthwise convolution the result is concatenated with the copied branch; finally, a 1×1 pointwise convolution exchanges information between groups. The bottleneck module thus halves the feature map size and doubles the number of channels; after one bottleneck module the output feature map has size 28×28×64.

4.5. Take the output of 4.4 as input and pass it through the bottleneck module again, giving an output feature map of size 14×14×128 after two bottleneck modules; take this as input and pass it through the bottleneck module once more, giving an output feature map of size 7×7×256 after three bottleneck modules.

4.6. Use a global average pooling layer of size 7×7 to convert the output of 4.5 into a feature map of size 1×1×256.

4.7. Use one fully connected layer with a Softmax function as the network classifier, converting the output feature map of 4.6 into probabilities of belonging to each class, and use the Argmax function to take the class with the highest probability as the network classification result.
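
An illustrative PyTorch sketch of steps 4.6-4.7 follows; the patent fixes only the operations, so all identifiers are assumptions:

```python
# Sketch of steps 4.6-4.7: 7x7 global average pooling, one fully
# connected layer, Softmax probabilities, Argmax class decision.
import torch
import torch.nn as nn


class RoadTypeHead(nn.Module):
    def __init__(self, in_ch: int = 256, n_classes: int = 5):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)    # 7x7x256 -> 1x1x256
        self.fc = nn.Linear(in_ch, n_classes)  # the network classifier

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(self.pool(x).flatten(1))       # class logits

    @torch.no_grad()
    def predict(self, x: torch.Tensor) -> torch.Tensor:
        probs = torch.softmax(self.forward(x), dim=1)  # Softmax step
        return probs.argmax(dim=1)                     # Argmax decision
```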

The images containing only the road region obtained in Step 3 are then made into the data set for training the pavement type recognition network. Images of different pavement types are still stored under the folder names established in Step 1. The image data are read folder by folder and tagged with 5-bit 0/1 label information and road adhesion coefficient information (see Table 2). The pictures are resized to 224×224 pixels by bilinear interpolation and pixel values are normalized from 0-255 to 0-1; the data set is shuffled, 20% of each class is drawn at random as the validation set, and the remainder is used as the training set.

Table 2. Pavement image category labels. [The table image is not reproduced; it lists, for each of the five pavement types, the 5-bit 0/1 label and the corresponding road adhesion coefficient range.]

After the training and validation sets for the pavement type recognition network have been prepared, training and evaluation of the network model begin: the batch size is set to 64, the cross-entropy loss function is chosen, the Adam optimization algorithm is used, and the base learning rate is 0.0001. Training can be considered complete when MIoU > 80%; saving the model and training results at each epoch yields the trained pavement type recognition network.
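
As a sketch of the training configuration just described (batch size 64 is set in the DataLoader; all names are illustrative assumptions):

```python
# Illustrative training loop for the pavement type recognition network:
# cross-entropy loss, Adam at a base rate of 1e-4, model saved per epoch.
import torch
import torch.nn as nn


def train_classifier(model: nn.Module,
                     train_loader: torch.utils.data.DataLoader,
                     epochs: int = 50) -> None:
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for epoch in range(epochs):
        for imgs, labels in train_loader:  # batch size 64 in the loader
            opt.zero_grad()
            loss = loss_fn(model(imgs), labels)  # model returns logits
            loss.backward()
            opt.step()
        # save model and results per epoch, as described in the text
        torch.save(model.state_dict(), f"road_type_epoch{epoch:03d}.pt")
```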

Step 5. Obtain the road adhesion coefficient information

The road adhesion coefficient information is obtained as follows: while the vehicle is driving, the camera captures images of the road ahead; each captured image is passed to the road image region extraction network to obtain the road region, and the image containing only the road region is then passed to the pavement type recognition network for classification. Once the pavement type is recognized, the adhesion coefficient range of the current road is determined according to the corresponding vehicle speed (see Table 2), and the midpoint of the upper and lower limits of that range is taken as the current road adhesion coefficient, completing the acquisition of the road adhesion coefficient information.
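
The Step 5 flow can be summarized in a few lines of Python; seg_net, cls_net, and mu_ranges are placeholders, and the concrete Table 2 adhesion ranges are not reproduced here:

```python
# Illustrative end-to-end flow of Step 5. mu_ranges maps a pavement
# class (and, per the text, the vehicle-speed band) to the (lower,
# upper) adhesion coefficient range of Table 2.
def adhesion_coefficient(frame, seg_net, cls_net, mu_ranges) -> float:
    road_only = seg_net(frame)          # image with non-road pixels removed
    pavement_type = cls_net(road_only)  # one of the 5 pavement classes
    lo, hi = mu_ranges[pavement_type]
    return (lo + hi) / 2.0              # midpoint of the range
```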

Compared with the prior art, the beneficial effects of the present invention are:

The invention discloses an image-based method for obtaining the road adhesion coefficient, which can provide road adhesion coefficient information for the development of intelligent driving assistance systems and unmanned driving systems. The method obtains the adhesion coefficient from images of the road ahead and can therefore obtain road adhesion information in advance. Because the method runs the road image region extraction network and the pavement type recognition network in series, the adhesion information of the road ahead can be obtained quickly and in real time.

Description of the Drawings

Figure 1 is a flow diagram of the image-based method for obtaining the adhesion coefficient of urban road pavement in this method;

Figure 2 is the structure diagram of the road image region extraction network in this method;

Figure 3 is the structure diagram of the bottleneck module in this method;

Figure 4 is the structure diagram of the pavement type recognition network in this method;

Detailed Description of the Embodiments

Addressing the acquisition of the road adhesion coefficient information required for the development of intelligent vehicle driving assistance and unmanned driving technology, the present invention proposes an image-based method for obtaining the adhesion coefficient of urban road pavement.

The specific steps of the image-based method for obtaining the adhesion coefficient of urban road pavement according to the present invention are as follows:

Step 1. Establish a road surface image information base

The precondition for image-based acquisition of the road adhesion coefficient is a complete road surface image information base, with the sample images properly processed so that the feature information in the images is fully captured.

Road surface image data must first be collected, compensating during collection for factors that degrade imaging quality. The image acquisition device is not limited to any particular type; the performance and installation requirements are: a video resolution of 1280×720 or above, a frame rate of 30 frames per second or above, a maximum effective shooting distance of more than 70 meters, and wide dynamic range technology to adapt quickly to changes in light intensity. The installation position should ensure that the road surface captured in the image occupies more than half of the whole image area.

According to urban road surface conditions under different weather, and through comparative analysis combined with the types of urban road pavement in China, the pavement types to be identified are defined as five types: asphalt, cement, loose snow, compacted snow, and ice. The video files from the data collection process are decomposed into pictures at intervals of 10 frames. Following the pavement characteristics in GB/T 920-2002 "Highway Pavement Grade and Surface Type Code" and the "Investigation and Analysis of Pavement Adhesion Coefficient in Cold Regions", the pictures are sorted into the five categories above, and images of the same pavement type are stored in the same folder, completing the establishment of the road surface image information base.

Step 2. Establish a road surface image data set

The originally collected images still contain a large number of non-road elements, which seriously affect the accuracy of adhesion coefficient acquisition, so an image-based method needs image samples together with pixel-level labels of the road region. The images in the information base collected in Step 1 are therefore annotated with the road extent, using Labelme in the Anaconda environment as the annotation tool. Each picture in the sample set is annotated manually one by one: click the "create polygons" button and trace points along the boundary of the road region in the picture so that the annotation polygon completely covers it; the annotation class is named road. Saving after annotation generates a json file, which is converted with the bundled json_to_dataset.py script in the Anaconda environment to obtain a _json folder containing five files named img.png, label.png, label_viz.png, info.yaml, and label_names.txt. Only label.png needs to be converted into an 8-bit grayscale label map. Applying this annotation process with Labelme in Anaconda to every picture in the road surface image information base yields the grayscale label maps of those pictures; this set of grayscale label maps constitutes the road surface image data set.

Step 3. Training of the road image region extraction network

The road image region extraction network is realized by a semantic segmentation network implemented in the Anaconda environment; the structure of the semantic segmentation network is shown in Figure 2. The whole network has an encoder-decoder structure, designed as follows:

3.1. First scale the image to be recognized to a size of 769×769×3 as the input of the semantic segmentation network.

3.2. Set the first layer as a convolutional layer with 32 filters of size 3×3, stride 2, and padding 1, followed by batch normalization and a ReLU activation; the output feature map of this layer has size 385×385×32.

3.3. Feed the convolutional layer's output feature map into a max pooling layer of size 3×3 with stride 2, giving an output feature map of size 193×193×32.

3.4. Use the pooling layer's output feature map as the input of the bottleneck module; the bottleneck module structure is shown in Figure 3 and implemented as follows: first the input feature channels are copied to increase the feature dimension. One branch passes directly through a 3×3 depthwise convolution with stride 2. The other branch is split evenly into two sub-branches by channel splitting: one sub-branch passes through a 3×3 depthwise convolution and a 1×1 pointwise convolution, while the other sub-branch reuses its features directly. The two sub-branches are then joined by channel concatenation, the channel order is shuffled by a channel-shuffle operation, and after the same 3×3 stride-2 depthwise convolution the result is concatenated with the copied branch. Finally, a 1×1 pointwise convolution exchanges information between groups. The whole bottleneck module thus halves the spatial size of the feature map and doubles the number of channels; after one bottleneck module the output feature map has size 97×97×64.

3.5. Take the output of 3.4 as input and pass it through the bottleneck module again, giving an output feature map of size 49×49×128 after two bottleneck modules; take this as input and pass it through the bottleneck module once more, giving an output feature map of size 25×25×256 after three bottleneck modules, 32 times smaller than the original image. All of the above forms the encoder part of the semantic segmentation network.

3.6. The decoder part adopts a skip structure. Bilinear interpolation is used to upsample the output feature map of the three bottleneck modules from 3.5 by a factor of 2, giving a feature map of size 49×49×256, which is added pixel by pixel to the output feature map of the two bottleneck modules from 3.5; in this process the output feature channels of the two-bottleneck feature map must be duplicated so that the result still has 256 channels.

3.7. Bilinear interpolation is used again to upsample the result of 3.6 by a factor of 2, giving a feature map of size 97×97×256, which is added pixel by pixel to the output feature map of the single bottleneck module from 3.4.

3.8. Pass the result of 3.7 through a 1×1 convolutional layer so that the number of output channels equals the number of semantic classes, add a Dropout layer to reduce overfitting, and finally obtain a feature map of the same size as the original image by 8× upsampling. The Argmax function assigns each pixel the semantic class with the highest probability, completing the semantic segmentation network.

The road surface image data set established in Step 2 is shuffled randomly; 80% of the sample pictures are taken as the training set and 20% as the validation set. During training of the semantic segmentation network, each training image tensor that is read is randomly rescaled between 0.5× and 2× in steps of 0.25, randomly cropped to 769×769 pixels, and randomly flipped horizontally, for data augmentation and better adaptability of the segmentation network; pixel values are normalized from 0-255 to 0-1.

The Poly learning rate rule is used when training the semantic segmentation network; the learning rate decay is given by expression (1), with an initial learning rate of 0.001, training iteration counter iter, maximum number of training steps max_iter set to 20K, and power set to 0.9. The Adam optimization algorithm is used, dynamically adjusting the learning rate of each parameter with first- and second-moment estimates of the gradient. According to the computer hardware, the batch size is set to 16, the model parameters are saved every 10-30 min, and the validation set is used to evaluate network performance.

$$\mathrm{lr} = \mathrm{lr}_0 \times \left(1 - \frac{\mathrm{iter}}{\mathrm{max\_iter}}\right)^{\mathrm{power}} \tag{1}$$

After network training is completed, a suitable semantic segmentation metric must be chosen to evaluate model performance. First, the confusion matrix is introduced: as shown in Table 1, each row of the binary confusion matrix represents a predicted class and each column represents the true class of the data; each entry is the number of samples predicted as the given class.

Table 1. Binary confusion matrix (rows: predicted class; columns: true class)

                        Actually positive       Actually negative
Predicted positive      TP (true positive)      FP (false positive)
Predicted negative      FN (false negative)     TN (true negative)

The evaluation metric of the semantic segmentation network is the mean intersection over union, MIoU: for each class, the ratio of the intersection to the union of the predicted result and the ground truth is computed, then summed and averaged over classes, as shown in formula (2):

$$\mathrm{MIoU} = \frac{1}{k+1}\sum_{i=0}^{k}\frac{p_{ii}}{\sum_{j=0}^{k} p_{ij} + \sum_{j=0}^{k} p_{ji} - p_{ii}} \tag{2}$$

where $p_{ij}$ is the number of pixels predicted as class $i$ whose true class is $j$, and $k+1$ is the number of classes.

Training can be considered complete when MIoU > 60%; saving the trained model and its parameters yields the road image region extraction network. Feeding an actually collected original image into this network completes the extraction of the road region in the image.

Step 4. Training the pavement recognition network

The network of Step 3 completes the extraction of the road region from real-time image information; Step 4 identifies the pavement type on the basis of the road region extraction result of Step 3.

After the road image data set has been processed by the semantic segmentation network, an image set containing only the road region is obtained; this serves as the final data set for training and evaluating the pavement type recognition network. The recognition network is built in the Anaconda environment, as shown in Figure 4, with the following structure:

4.1. First scale the image to be classified to a size of 224×224×3 as the input of the convolutional neural network.

4.2. Set the first layer as a convolutional layer with 32 filters of size 3×3, stride 2, and padding 1, followed by batch normalization and a ReLU activation; the output feature map of this layer has size 112×112×32.

4.3. Feed the convolutional layer's output feature map into a max pooling layer of size 3×3 with stride 2, giving an output feature map of size 56×56×32.

4.4. Use the pooling layer's output feature map as the input of the bottleneck module, implemented exactly as in step 3.4: the input feature channels are copied to increase the feature dimension; one branch passes directly through a 3×3 depthwise convolution with stride 2, while the other branch is split evenly into two sub-branches by channel splitting, one passing through a 3×3 depthwise convolution and a 1×1 pointwise convolution and the other reusing its features directly; the two sub-branches are joined by channel concatenation, the channel order is shuffled, and after the same 3×3 stride-2 depthwise convolution the result is concatenated with the copied branch; finally, a 1×1 pointwise convolution exchanges information between groups. The bottleneck module thus halves the feature map size and doubles the number of channels; after one bottleneck module the output feature map has size 28×28×64.

4.5. Take the output of 4.4 as input and pass it through the bottleneck module again, giving an output feature map of size 14×14×128 after two bottleneck modules; take this as input and pass it through the bottleneck module once more, giving an output feature map of size 7×7×256 after three bottleneck modules.

4.6. Use a global average pooling layer of size 7×7 to convert the output of 4.5 into a feature map of size 1×1×256.

4.7. Use one fully connected layer with a Softmax function as the network classifier, converting the output feature map of 4.6 into probabilities of belonging to each class, and use the Argmax function to take the class with the highest probability as the network classification result.

The images containing only the road region obtained in Step 3 are then made into the data set for training the pavement type recognition network. Images of different pavement types are still stored under the folder names established in Step 1. The image data are read folder by folder and tagged with 5-bit 0/1 label information and road adhesion coefficient information (see Table 2). The pictures are resized to 224×224 pixels by bilinear interpolation and pixel values are normalized from 0-255 to 0-1; the data set is shuffled, 20% of each class is drawn at random as the validation set, and the remainder is used as the training set.

Table 2. Pavement image category labels. [The table image is not reproduced; it lists, for each of the five pavement types, the 5-bit 0/1 label and the corresponding road adhesion coefficient range.]

After the training and validation sets for the pavement type recognition network have been prepared, training and evaluation of the network model begin: the batch size is set to 64, the cross-entropy loss function is chosen, the Adam optimization algorithm is used, and the base learning rate is 0.0001. Training can be considered complete when MIoU > 80%; saving the model and training results at each epoch yields the trained pavement type recognition network.

Step 5. Obtain the road adhesion coefficient information

The road adhesion coefficient information is obtained as follows: while the vehicle is driving, the camera captures images of the road ahead; each captured image is passed to the road image region extraction network to obtain the road region, and the image containing only the road region is then passed to the pavement type recognition network for classification. Once the pavement type is recognized, the adhesion coefficient range of the current road is determined according to the corresponding vehicle speed (see Table 2), and the midpoint of the upper and lower limits of that range is taken as the current road adhesion coefficient, completing the acquisition of the road adhesion coefficient information.

Claims (1)

1. An image-based urban road pavement adhesion coefficient acquisition method is characterized by comprising the following specific steps:
step one, establishing a road surface image information base
The precondition for obtaining the road adhesion coefficient based on images is that a complete road surface image information base can be established, and the sample images are properly processed to ensure that the feature information in the images is fully obtained;
firstly, acquiring road surface image data, wherein factors adverse to the imaging effect need to be compensated for in the road surface image acquisition process, the image acquisition equipment is not limited to one or a certain type of image acquisition equipment, and the requirements on equipment performance and installation position are as follows: a video resolution of 1280 multiplied by 720 and above, a video frame rate of 30 frames per second and above, a maximum effective shooting distance of more than 70 meters, and wide dynamic technology to adapt quickly to light intensity changes; the installation position of the equipment should ensure that the road surface captured in the acquired image information occupies more than half of the whole image area;
according to the conditions of urban road surfaces under different weather conditions, through comparative analysis and in combination with the types of urban road surfaces in China, the road surface types to be identified are specifically defined as 5 road surface types, namely an asphalt road surface, a cement road surface, a loose snow road surface, a compacted snow road surface and an ice plate road surface; video files from the data acquisition process are decomposed into pictures at intervals of 10 frames, the pictures are sorted into the above 5 attribution categories according to the road surface characteristics in GB/T 920-2002 "Highway Pavement Grade and Surface Layer Type Code" and "Investigation and Analysis of Pavement Adhesion Coefficient in Cold Regions", and road surface images of the same type are uniformly stored under the same folder, completing the establishment of the road surface image information base;
step two, establishing a pavement image data set
The method comprises the steps that an originally acquired image still contains a large number of non-road-surface elements, which seriously influence the acquisition precision of the road surface adhesion coefficient, so that image samples and pixel-level labels of the region corresponding to the road surface are needed in the image-based road surface adhesion coefficient acquisition method; the images in the road surface image information base acquired in step one need to be labeled with the road surface range, Labelme in the software Anaconda is selected as a labeling tool, the labeling tool is used for manually labeling each image in the sample set one by one, a create polygons button is clicked in the labeling process, and points are drawn along the boundary of the road surface region in the image so that the labeling frame can completely cover the road surface region, the labeling category being named road; after the labeling is finished, saving generates a json file, which is converted by the json_to_dataset.py script program provided in the software Anaconda to obtain a _json folder containing five files named img.png, label.png, label_viz.png, info.yaml and label_names.txt; only the label.png picture-format file needs to be converted into an 8-bit gray label image, the above labeling process is sequentially carried out on the pictures in the road surface image information base by using Labelme in Anaconda to obtain a gray label image set of the pictures in the road surface image information base, and the gray label image set of the pictures in the road surface image information base is the road surface image data set;
step three, establishing and training a road surface image area extraction network
The road surface image region extraction network is realized in an Anaconda environment through a semantic segmentation network, the whole semantic segmentation network is of an encoder-decoder structure, and the specific design is as follows:
3.1, firstly, zooming the image to be recognized into a picture with a size of 769 multiplied by 769 multiplied by 3, and taking the picture as the input of the semantic segmentation network;
3.2, then setting the first layer as a convolutional layer, adopting 32 filters with a size of 3 × 3, a step length of 2 and a padding of 1, and obtaining a convolutional layer output characteristic graph with a size of 385 × 385 × 32 after batch normalization and a ReLU activation function;
3.3, inputting the convolutional layer output characteristic diagram into a maximum value pooling layer with a size of 3 multiplied by 3 and a step length of 2, the size of the output characteristic diagram of the pooling layer being 193 multiplied by 193 multiplied by 32;
3.4, taking the output characteristic diagram of the pooling layer as the input of the bottleneck module structure, wherein the bottleneck module structure is realized in detail as follows: firstly, copying input characteristic channel to increase characteristic dimension, one branch directly passing through depth convolution with size of 3X 3 and step length of 2, another branch equally dividing into two sub-branches by channel splitting, one sub-branch is subjected to 3 x 3 depth convolution and 1 x 1 point-by-point convolution, the other sub-branch is directly subjected to a characteristic multiplexing mode, then the two sub-branches are connected through channel splicing, the channel arrangement sequence is disturbed through channel cleaning, after the same depth convolution with the size of 3 multiplied by 3 and the step length of 2, the data are spliced with a copy channel, finally the information exchange between groups is realized through the point-by-point convolution with the size of 1 multiplied by 1, therefore, the size of the output characteristic diagram of the whole bottleneck module structure is reduced by half, the number of channels is doubled, and the size of the output characteristic diagram passing through the bottleneck module structure once is 97 multiplied by 64;
3.5, taking the output of 3.4 as input and passing through the bottleneck module again gives an output feature map of size 49 × 49 × 128 after two passes; taking that result as input and passing through the bottleneck module once more gives an output feature map of size 25 × 25 × 256 after three passes, 32 times smaller than the original image; this whole part serves as the encoder of the semantic segmentation network;
3.6, the decoder adopts a skip structure: the output feature map after three bottleneck passes from 3.5 is upsampled 2× by bilinear interpolation into a feature map of size 49 × 49 × 256 and added pixel by pixel to the output feature map after two bottleneck passes from 3.5; in this process the output channels of the two-pass feature map must be copied so that the result still has 256 output channels;
3.7, the result of 3.6 is again upsampled 2× by bilinear interpolation into a feature map of size 97 × 97 × 256 and added pixel by pixel to the output feature map after one bottleneck pass from 3.4, whose channels are copied to match in the same way as in 3.6;
3.8, the result of 3.7 is passed through a 1 × 1 convolutional layer that converts the number of output channels into the number of semantic categories, and a Dropout layer is set to reduce overfitting; 8× upsampling finally yields a feature map of the same size as the original image, and an Argmax function gives each pixel the semantic category prediction with the maximum probability, completing the whole semantic segmentation network;
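The patent does not name a deep learning framework, so the following is a minimal PyTorch sketch of the bottleneck module described in 3.4; the exact placement of batch normalization, ReLU and padding are assumptions where the text is silent:

```python
import torch
import torch.nn as nn

def channel_shuffle(x: torch.Tensor, groups: int = 2) -> torch.Tensor:
    """Disturb the channel arrangement order (shuffle across groups)."""
    n, c, h, w = x.size()
    x = x.view(n, groups, c // groups, h, w).transpose(1, 2).contiguous()
    return x.view(n, c, h, w)

class Bottleneck(nn.Module):
    """Halves the feature map size and doubles the channel count."""
    def __init__(self, c: int):
        super().__init__()
        half = c // 2
        # copied-channel branch: 3x3 depthwise convolution, stride 2
        self.copy_dw = nn.Sequential(
            nn.Conv2d(c, c, 3, 2, 1, groups=c, bias=False), nn.BatchNorm2d(c))
        # sub-branch 1: 3x3 depthwise + 1x1 pointwise convolution
        self.sub1 = nn.Sequential(
            nn.Conv2d(half, half, 3, 1, 1, groups=half, bias=False),
            nn.BatchNorm2d(half),
            nn.Conv2d(half, half, 1, bias=False),
            nn.BatchNorm2d(half), nn.ReLU(inplace=True))
        # shared 3x3 depthwise convolution, stride 2, after the shuffle
        self.main_dw = nn.Sequential(
            nn.Conv2d(c, c, 3, 2, 1, groups=c, bias=False), nn.BatchNorm2d(c))
        # final 1x1 pointwise convolution exchanges information between groups
        self.pw = nn.Sequential(
            nn.Conv2d(2 * c, 2 * c, 1, bias=False),
            nn.BatchNorm2d(2 * c), nn.ReLU(inplace=True))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        copy = self.copy_dw(x)                    # branch on the copied input
        b1, b2 = x.chunk(2, dim=1)                # channel split
        main = torch.cat([self.sub1(b1), b2], 1)  # b2: direct feature reuse
        main = self.main_dw(channel_shuffle(main))
        return self.pw(torch.cat([main, copy], 1))

# e.g. the pooling output of 3.3: 193x193x32 -> 97x97x64
out = Bottleneck(32)(torch.randn(1, 32, 193, 193))
assert out.shape == (1, 64, 97, 97)
```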
the road surface image data set established in step two is randomly shuffled, with 80% of the sample pictures selected as the training set and 20% as the verification set. During semantic segmentation network training, the tensor of each training set picture read in is randomly scaled between 0.5× and 2× in steps of 0.25, randomly cropped to 769 × 769 pixels and randomly flipped left-right, achieving data enhancement and improving the adaptability of the segmentation network; pixel values are normalized from 0–255 to 0–1;
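A sketch of this augmentation pipeline, assuming the images and their grayscale labels are loaded as PIL images; the zero-padding policy used when a downscaled image is smaller than the crop is an assumption:

```python
import random
import numpy as np
from PIL import Image

CROP = 769
SCALES = [0.5 + 0.25 * i for i in range(7)]         # 0.5, 0.75, ..., 2.0

def augment(img: Image.Image, label: Image.Image):
    """Random scale, random 769x769 crop, random flip, normalize to 0-1."""
    s = random.choice(SCALES)
    w, h = img.size
    img = img.resize((int(w * s), int(h * s)), Image.BILINEAR)
    label = label.resize(img.size, Image.NEAREST)   # keep labels integral
    im, lb = np.asarray(img), np.asarray(label)
    # zero-pad to at least 769x769 (padding policy is an assumption)
    ph, pw = max(0, CROP - im.shape[0]), max(0, CROP - im.shape[1])
    im = np.pad(im, ((0, ph), (0, pw), (0, 0)))
    lb = np.pad(lb, ((0, ph), (0, pw)))
    # random 769x769 crop
    y = random.randint(0, im.shape[0] - CROP)
    x = random.randint(0, im.shape[1] - CROP)
    im, lb = im[y:y + CROP, x:x + CROP], lb[y:y + CROP, x:x + CROP]
    # random left-right flip
    if random.random() < 0.5:
        im, lb = im[:, ::-1], lb[:, ::-1]
    return im.astype(np.float32) / 255.0, lb.copy()
```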
a Poly learning rate rule is selected when training the semantic segmentation network; the learning rate decay expression is formula (1), where the initial learning rate is 0.001, iters is the training iteration step, the maximum training step max_iter is set to 20K steps and power is set to 0.9. The Adam optimization algorithm is used, which dynamically adjusts the learning rate of each parameter with first- and second-moment estimates of the gradient; according to the performance of the computer hardware the batch size is set to 16, the model parameters are saved every 10–30 min, and the verification set is used to evaluate the performance of the network;
$lr = lr_{0} \times \left(1 - \frac{iters}{max\_iter}\right)^{power}$  (1)

where $lr_{0}$ = 0.001 is the initial learning rate.
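A sketch of this schedule under the stated settings, assuming PyTorch and a placeholder model; LambdaLR multiplies the initial learning rate 0.001 by the factor from formula (1) at each step:

```python
import torch

model = torch.nn.Conv2d(3, 2, 1)   # placeholder for the segmentation network
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

max_iter, power = 20_000, 0.9
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer,
    lr_lambda=lambda iters: (1 - iters / max_iter) ** power)  # formula (1)

# call scheduler.step() once per training iteration, after optimizer.step()
```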
after network training is finished, a suitable semantic segmentation evaluation index must be selected to evaluate the performance of the model. Before that, the confusion matrix is introduced: as shown in Table 1, each row of the binary confusion matrix represents a predicted class, each column represents the true class of the data, and each entry is the number of samples predicted as the corresponding class;
TABLE 1 two-class confusion matrix schematic
                         Actually positive      Actually negative
Predicted positive       TP (true positive)     FP (false positive)
Predicted negative       FN (false negative)    TN (true negative)
The evaluation index of the semantic segmentation network is the mean intersection-over-union MIoU, which takes the ratio of the intersection to the union of each class's prediction result and the ground truth, sums over the classes and averages, as shown in formula (2):
$MIoU = \frac{1}{k+1}\sum_{i=0}^{k}\frac{p_{ii}}{\sum_{j=0}^{k} p_{ij} + \sum_{j=0}^{k} p_{ji} - p_{ii}}$  (2)

where k + 1 is the number of classes and $p_{ij}$ is the number of pixels of true class i predicted as class j.
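A sketch of this metric computed from a confusion matrix (NumPy assumed); for the two classes here, road and background, k + 1 = 2:

```python
import numpy as np

def miou(pred: np.ndarray, truth: np.ndarray, num_classes: int) -> float:
    """Mean intersection-over-union, following formula (2)."""
    # confusion[i, j]: pixels of true class i predicted as class j
    confusion = np.bincount(
        truth.reshape(-1) * num_classes + pred.reshape(-1),
        minlength=num_classes ** 2).reshape(num_classes, num_classes)
    intersection = np.diag(confusion)                             # p_ii
    union = confusion.sum(0) + confusion.sum(1) - intersection
    return float(np.mean(intersection / np.maximum(union, 1)))   # avoid /0
```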
when training reaches an MIoU above 60%, the training can be considered complete; the trained model and model parameters are saved to obtain the road surface image area extraction network, and inputting an actually acquired original image into this network completes the extraction of the road surface area in the image;
step four, establishing and training a road surface type recognition network
The network in step three completes the extraction of the road surface area from the real-time image information, and the identification of the road surface type is completed on the basis of that extraction result;
after the image road surface data set is processed by the semantic segmentation network, an image set containing only the road surface area is obtained and used as the final data set for training and evaluating the road surface type recognition network; the recognition network is therefore built in the Anaconda environment with the following structure (a sketch assembling these layers follows this list):
4.1, the image to be classified is first scaled to a picture of size 224 × 224 × 3 and used as the input of the convolutional neural network;
4.2, the first layer is set as a convolutional layer with 32 filters of size 3 × 3, stride 2 and padding 1; after batch normalization and a ReLU activation function, the output feature map of this convolutional layer has size 112 × 112 × 32;
4.3, the convolutional layer output feature map is fed into a max pooling layer of size 3 × 3 with stride 2, and the output feature map of the pooling layer has size 56 × 56 × 32;
4.4, the output feature map of the pooling layer serves as the input of the bottleneck module, identical to the module described in 3.4 and sketched after the step-three list: the input feature channels are copied to increase the feature dimension; one branch passes directly through a 3 × 3 depthwise convolution with stride 2, while the other branch is divided equally into two sub-branches by a channel split — one sub-branch undergoes a 3 × 3 depthwise convolution and a 1 × 1 pointwise convolution, and the other is passed through directly for feature reuse; the two sub-branches are joined by channel concatenation, the channel arrangement order is disturbed by a channel shuffle, and after the same 3 × 3 depthwise convolution with stride 2 the result is concatenated with the copied channels, with a final 1 × 1 pointwise convolution exchanging information between groups. The module again halves the feature map size and doubles the channel count, and one pass gives an output feature map of size 28 × 28 × 64;
4.5, taking the output of 4.4 as input and passing through the bottleneck module again gives an output feature map of size 14 × 14 × 128 after two passes, and passing that result through the bottleneck module once more gives an output feature map of size 7 × 7 × 256 after three passes;
4.6, a global average pooling layer of size 7 × 7 converts the output of 4.5 into a feature map of size 1 × 1 × 256;
4.7, one fully connected layer and a Softmax function serve as the network classifier, converting the output feature map of 4.6 into probability values of belonging to each category, and an Argmax function determines the network classification result from the maximum probability value;
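A sketch assembling the layers of 4.1–4.7, reusing the Bottleneck class from the step-three sketch; the framework (PyTorch) and the number of categories (5, inferred from the 5-bit labels of Table 2) are assumptions:

```python
import torch
import torch.nn as nn

class RoadTypeNet(nn.Module):
    """Road surface type recognition network of 4.1-4.7 (sketch)."""
    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1, bias=False),  # 112x112x32
            nn.BatchNorm2d(32), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2, padding=1))                  # 56x56x32
        self.stages = nn.Sequential(
            Bottleneck(32),     # 28x28x64
            Bottleneck(64),     # 14x14x128
            Bottleneck(128))    # 7x7x256
        self.pool = nn.AdaptiveAvgPool2d(1)   # global average pooling, 1x1x256
        self.fc = nn.Linear(256, num_classes)

    def forward(self, x):
        x = self.stages(self.stem(x))
        return self.fc(self.pool(x).flatten(1))   # class logits

logits = RoadTypeNet()(torch.randn(1, 3, 224, 224))
pred = logits.softmax(1).argmax(1)   # Softmax + Argmax classifier of 4.7
```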
then the images containing only the road surface area obtained in step three are made into the data set for training the road surface type recognition network. The road surface images of different types are still stored by category under the folder names established in step one; the image data in the different folders are read in turn, 5-bit 0/1 label information and road surface adhesion coefficient information are added (see Table 2), the image size is adjusted to 224 × 224 pixels by bilinear interpolation, pixel values are normalized from 0–255 to 0–1, the road surface image data set is shuffled, 20% of each category of images is randomly extracted as the verification set, and the remainder serves as the training set (a sketch of this data set assembly follows Table 2);
TABLE 2 pavement image category labels
[Table 2: road surface image categories, each with its 5-bit 0/1 label and the upper and lower limits of the corresponding road adhesion coefficient range]
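A sketch of the data set assembly described above; the root folder name is hypothetical, and the category order (hence the 5-bit 0/1 labels) is assumed to follow Table 2:

```python
import numpy as np
from pathlib import Path
from PIL import Image

ROOT = Path("road_dataset")   # hypothetical; folders named as in step one
train, val = [], []

for idx, cls_dir in enumerate(sorted(ROOT.iterdir())):
    onehot = np.eye(5, dtype=np.float32)[idx]            # 5-bit 0/1 label
    samples = []
    for f in sorted(cls_dir.glob("*.png")):
        img = Image.open(f).convert("RGB").resize((224, 224), Image.BILINEAR)
        samples.append((np.asarray(img, np.float32) / 255.0, onehot))
    np.random.shuffle(samples)                           # shuffle the data set
    k = int(0.2 * len(samples))                          # 20% of each category
    val.extend(samples[:k])
    train.extend(samples[k:])
```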
After the training set and verification set of the road surface type recognition network are made, training and evaluation of the network model begin: the batch size is set to 64, a cross-entropy loss function is selected, and the Adam optimization algorithm is used with a base learning rate of 0.0001; when training reaches an MIoU above 80%, the training can be considered complete, the model and training results are saved by iteration count epoch, and the trained road surface type recognition network is obtained;
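A minimal training-loop sketch under these settings, reusing RoadTypeNet and the train list from the sketches above; the epoch count is an assumption:

```python
import numpy as np
import torch

net = RoadTypeNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)   # base learning rate 0.0001
loss_fn = torch.nn.CrossEntropyLoss()               # cross-entropy loss

for epoch in range(50):                             # epoch count is an assumption
    for i in range(0, len(train), 64):              # batch size 64
        imgs, onehots = zip(*train[i:i + 64])
        x = torch.as_tensor(np.stack(imgs)).permute(0, 3, 1, 2)   # NCHW
        y = torch.as_tensor(np.stack(onehots)).argmax(1)          # class index
        opt.zero_grad()
        loss = loss_fn(net(x), y)
        loss.backward()
        opt.step()
    torch.save(net.state_dict(), f"roadtype_epoch{epoch}.pt")     # save per epoch
```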
step five, obtaining the road surface adhesion coefficient information
The road surface adhesion coefficient information acquisition process is as follows: while the vehicle is driving, a camera shoots an image of the road ahead; the image is passed to the road surface image area extraction network to obtain the road surface area, and the image containing only the road surface area is then passed to the road surface type recognition network for classification and identification. After the road surface is identified, the adhesion coefficient range of the current road surface is determined according to the corresponding vehicle speed (see Table 2), and the midpoint between the upper and lower limits of that range is taken as the current road surface adhesion coefficient, completing the acquisition of the road surface adhesion coefficient information. An end-to-end sketch of this pipeline follows.
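An end-to-end sketch under stated assumptions: seg_net and cls_net are the two trained networks from the sketches above (returning per-pixel and per-image class logits), road pixels are assumed to be segmentation class 1, the adhesion ranges are placeholders standing in for Table 2, and the vehicle-speed dependence of the table lookup is omitted:

```python
import numpy as np
import torch
from PIL import Image

# Placeholder values; Table 2 defines the actual per-category ranges.
ADHESION_RANGES = {0: (0.8, 1.0), 1: (0.5, 0.8), 2: (0.3, 0.5),
                   3: (0.1, 0.3), 4: (0.05, 0.1)}

def adhesion_coefficient(frame: Image.Image, seg_net, cls_net) -> float:
    # 1. extract the road surface area with the segmentation network
    x = torch.as_tensor(np.asarray(frame.resize((769, 769), Image.BILINEAR),
                                   np.float32) / 255.0).permute(2, 0, 1)[None]
    mask = seg_net(x).argmax(1, keepdim=True).float()   # assumes road = class 1
    road_only = x * mask                                # zero out non-road pixels
    # 2. classify the road-only image at the recognition network's input size
    road_img = torch.nn.functional.interpolate(
        road_only, size=(224, 224), mode="bilinear", align_corners=False)
    cls = int(cls_net(road_img).argmax(1))
    # 3. midpoint of the category's adhesion coefficient range (per Table 2)
    lo, hi = ADHESION_RANGES[cls]
    return (lo + hi) / 2
```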
CN202110683924.6A 2021-06-21 2021-06-21 An Image-Based Method for Obtaining the Adhesion Coefficient of Urban Road Pavement Active CN113379711B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110683924.6A CN113379711B (en) 2021-06-21 2021-06-21 An Image-Based Method for Obtaining the Adhesion Coefficient of Urban Road Pavement

Publications (2)

Publication Number Publication Date
CN113379711A CN113379711A (en) 2021-09-10
CN113379711B true CN113379711B (en) 2022-07-08

Family

ID=77577937

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110683924.6A Active CN113379711B (en) 2021-06-21 2021-06-21 An Image-Based Method for Obtaining the Adhesion Coefficient of Urban Road Pavement

Country Status (1)

Country Link
CN (1) CN113379711B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114170584A (en) * 2021-12-15 2022-03-11 Beijing Zhongke Huiyan Technology Co., Ltd. Driving road surface classification and recognition method and system for assisted driving, and intelligent terminal
CN114648750A (en) * 2022-03-29 2022-06-21 Guojiao Space Information Technology (Beijing) Co., Ltd. Image-based pavement material type identification method and device
CN117390475A (en) * 2022-06-30 2024-01-12 Research Institute of Highway, Ministry of Transport Method for detecting slippery condition of tunnel pavement based on mobile detection equipment
CN116653889A (en) * 2023-06-20 2023-08-29 China FAW Co., Ltd. Vehicle parking brake control method, apparatus, device and medium

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE502007002821D1 (en) * 2007-08-10 2010-03-25 Sick Ag Recording of equalized images of moving objects with uniform resolution by line sensor
CN202351162U (en) * 2011-11-01 2012-07-25 Chang'an University Road pavement adhesion coefficient detection device
CN107491736A (en) * 2017-07-20 2017-12-19 Chongqing University of Posts and Telecommunications Pavement adhesion coefficient identification method based on convolutional neural networks
CN109455178A (en) * 2018-11-13 2019-03-12 Jilin University Binocular-vision-based active control system and method for road vehicle driving
CN109460738A (en) * 2018-11-14 2019-03-12 Jilin University Road surface type evaluation method using a deep convolutional neural network based on a lossless function
CN110378416A (en) * 2019-07-19 2019-10-25 Beijing Zhongke Yuandongli Technology Co., Ltd. Vision-based road adhesion coefficient estimation method
CN111688706A (en) * 2020-05-26 2020-09-22 Tongji University Road adhesion coefficient interactive estimation method based on vision and dynamics
CN111723849A (en) * 2020-05-26 2020-09-29 Tongji University Method and system for online estimation of road adhesion coefficient based on a vehicle-mounted camera
CN112706728A (en) * 2020-12-30 2021-04-27 Jilin University Automatic emergency braking control method based on vision-based road adhesion coefficient estimation

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
"A federated filter design of electronic stability control for electric-wheel vehicle";Cheng Wang;《2015 8th International Congress on Image and Signal Processing (CISP)》;20160218;第1105-1110页 *
"基于道路图像对比度-区域均匀性图分析的自适应阈值算法";管欣等;《吉林大学学报(工学版)》;20080715;第758-763页 *
"汽车防抱死制动系统的滑模变结构控制器设计";刘柏楠等;《吉林大学学报(信息科学版)》;20150115;第19-25页 *
"视觉与动力学信息融合的智能车辆路面附着系数估计";刘惠;《中国优秀硕士学位论文全文数据库 工程科技Ⅱ辑》;20220115;C035-367 *
"高速公路车辆智能驾驶仿真平台";王萍等;《系统仿真学报》;20121208;第2473-2478页 *

Also Published As

Publication number Publication date
CN113379711A (en) 2021-09-10


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant