CN112329533A - A method for estimating local pavement adhesion coefficient based on image segmentation - Google Patents


Info

Publication number
CN112329533A
Authority
CN
China
Prior art keywords
road
adhesion coefficient
image segmentation
local
images
Prior art date
Legal status
Granted
Application number
CN202011067813.4A
Other languages
Chinese (zh)
Other versions
CN112329533B (en)
Inventor
王海
蔡柏湘
蔡英凤
李祎承
陈龙
陈小波
刘擎超
孙晓强
Current Assignee
Jiangsu University
Original Assignee
Jiangsu University
Priority date
Filing date
Publication date
Application filed by Jiangsu University filed Critical Jiangsu University
Priority to CN202011067813.4A priority Critical patent/CN112329533B/en
Publication of CN112329533A publication Critical patent/CN112329533A/en
Application granted granted Critical
Publication of CN112329533B publication Critical patent/CN112329533B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588: Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/26: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, by performing operations on regions, e.g. growing, shrinking or watersheds


Abstract


The invention discloses a local road adhesion coefficient estimation method based on image segmentation. Step 1: pre-train an image segmentation network offline, specifically: a. use CARLA software to collect road surface images under different weather conditions; b. locally label the collected images to form a data set for local road adhesion coefficient estimation; c. build a deep learning network model for image segmentation; d. train the network framework end-to-end on this data set. Step 2: acquire real-time road surface images and estimate the local adhesion coefficient of the road in real time, specifically: a. collect real-time road surface images with a vehicle-mounted camera; b. use the pre-trained image segmentation network to classify the acquired images and locate the different categories, forming a real-time road condition map; c. estimate the local road adhesion coefficient on the road condition map according to the road surface type.


Description

Local pavement adhesion coefficient estimation method based on image segmentation
Technical Field
The invention belongs to the field of image segmentation, and particularly relates to a local pavement adhesion coefficient estimation method based on image segmentation.
Background
Accurate estimation of the road adhesion coefficient has always been a very challenging problem. The road adhesion coefficient influences not only the dynamic and braking performance of a vehicle but also its handling stability while driving; distinguishing the road adhesion coefficient accurately and in real time can greatly improve driving safety and comfort. As the industry continues to advance toward intelligent systems, accurate estimation of the road adhesion coefficient also strongly affects the path planning and decision-making of intelligent vehicles, robots, and similar systems. Accurate, highly real-time estimation of the road adhesion coefficient therefore both improves driving safety and raises the planning and decision accuracy of intelligent systems.
At present, methods for estimating the road adhesion coefficient fall into three main categories: (1) traditional indirect estimation based on identification of vehicle dynamics parameters; (2) acquisition of road surface data through sensors (acoustic, optical, temperature, and the like) and estimation from the relationship between the sensor data and the adhesion coefficient; and (3) direct estimation of the road adhesion coefficient from vision-sensor images of the road surface by means of deep learning.
Although adhesion coefficient estimation based on dynamics modeling is accurate and reliable, the model is complex, many vehicle dynamics parameters must be acquired, and real-time performance cannot be guaranteed. In addition, dynamics-based estimation requires contact between the tires and the road surface, so only the adhesion coefficient of the road currently in contact with the tires can be estimated; the adhesion coefficient of the road ahead cannot be predicted, and the vehicle cannot be controlled in time. Vision-based methods using deep learning can compensate for these defects and offer a degree of look-ahead predictability, but existing methods estimate only the adhesion coefficient of the road surface as a whole and do not estimate local conditions such as local water accumulation, local snow accumulation, or local icing.
Disclosure of Invention
In order to overcome the defects of the existing road adhesion coefficient estimation method, the invention provides a local road adhesion coefficient estimation method based on image segmentation.
The technical steps of the invention are as follows:
Step 1: pre-train the image segmentation network offline, specifically: a. collect road surface images under different weather conditions using CARLA software; b. locally label the collected images to form a data set for local road adhesion coefficient estimation; c. build a deep learning network model for image segmentation; d. train the image segmentation network framework end-to-end on this data set.
Step 2: acquire real-time road surface images and estimate the local adhesion coefficient of the road in real time, specifically: a. collect real-time road surface images with a vehicle-mounted camera; b. classify the acquired images with the pre-trained image segmentation network and locate the different categories to form a real-time road condition map; c. estimate the local road adhesion coefficient on the road condition map according to the road surface type.
Further, in step 1a, road surface images under different weather conditions are collected with CARLA simulation software; the collected images include images of local water accumulation, local snow accumulation, and local icing on the road, as well as normal asphalt road surfaces.
Further, the deep learning network model for image segmentation comprises a base network built from residual structures, with a height-driven attention module added on top. This semantic segmentation neural network model is named H-ResNet, where H denotes the height-driven attention module and ResNet denotes the base network built from residual structures.
Further, the network model is a semantic segmentation network built with the TensorFlow, Keras, Caffe2, PyTorch, or MXNet deep learning framework.
Further, the training method is back-propagation with batch gradient descent, stochastic gradient descent, or mini-batch gradient descent, using a single GPU or multiple GPUs.
Further, the vehicle-mounted camera in step 2a is a web camera or a USB camera.
Further, in step 2b, classifying and locating the different categories means taking dry asphalt as the background road surface, distinguishing water accumulation areas, snow accumulation areas, icing areas, and the like (or any random combination thereof), and obtaining their distribution on the asphalt road surface.
The invention has the beneficial effects that:
1. The method can predict the type of road surface before the tires contact it and can obtain the distribution of different local surface conditions on the same asphalt road, compensating by means of deep learning for the shortcomings of both dynamics-based and whole-surface vision-based methods, and providing a prerequisite for the path planning and decision-making of intelligent vehicles and intelligent machines.
2. Because the image segmentation network is pre-trained offline, real-time performance is good, which improves system safety to a great extent.
3. CARLA software is used to generate the data set required for pre-training, greatly reducing the time and economic cost of acquiring image data.
Drawings
FIG. 1 is a schematic overall flow diagram of the process of the present invention;
FIG. 2 is a schematic view of the height-driven attention module;
FIG. 3 is a schematic diagram of a residual structure;
fig. 4 is a schematic diagram of the overall structure of H-ResNet.
Detailed Description
The invention will be further explained with reference to the drawings. It should be understood that the specific examples described herein are intended to be illustrative only and are not intended to be limiting.
The general technical process of the local road surface adhesion coefficient estimation method based on image segmentation is shown in the attached figure 1, and comprises the following steps:
Step 1: pre-train the image segmentation network offline, specifically: a. collect road surface images under different weather conditions using CARLA software; b. locally label the collected images to form a data set for local road adhesion coefficient estimation; c. build a deep learning network framework for image segmentation; d. train the framework end-to-end on this data set; e. deploy the trained network model to the on-board computer of an intelligent vehicle or other intelligent machine via ROS software.
Step 2: acquire real-time road surface images and estimate the local adhesion coefficient of the road in real time, specifically: a. collect real-time road surface images with a vehicle-mounted camera; b. classify the acquired images with the pre-trained image segmentation network and locate the different categories to form a real-time road condition map; c. estimate the local road adhesion coefficient on the road condition map according to the road surface type.
Further, in step 1a, road surface images under different weather conditions are collected with CARLA simulation software; the collected images include images of local water accumulation, local snow accumulation, and local icing on the road, as well as normal asphalt road surfaces.
The specific implementation procedure for steps 1 and 2 above is as follows:
The weather conditions of the scene are set in CARLA software, the data-collection vehicle is driven in first-person view on asphalt roads under the different weather conditions, and road surface images are collected, ensuring that each single-frame image contains two or more of the surface types listed in Table 1. At this stage, the data are made as diverse as possible with respect to road scene, illumination, weather conditions, and road structure. Road scenes include, but are not limited to, highway asphalt, urban downtown asphalt, and suburban asphalt; weather conditions include, but are not limited to, sunny, rainy, and snowy weather; lighting conditions include, but are not limited to, morning, noon, dusk, and night.
Local labeling of the images: each road surface image is locally labeled with the road surface type and the corresponding adhesion coefficient according to Table 1, forming the data set required for pre-training.
TABLE 1 (road surface types and corresponding adhesion coefficients; reproduced only as an image in the original document)
The semantic segmentation neural network model comprises a base network built from residual structures, with a height-driven attention module added on top; the network is named H-ResNet, where H denotes the height-driven attention module and ResNet denotes the base network built from residual structures. The algorithm pipeline comprises a data import module, a data preprocessing module, a neural network forward propagation module, activation functions, a loss function, a back-propagation module, and an optimization module.
As shown in Fig. 3, the residual unit consists mainly of a shortcut connection and an identity mapping. X denotes the feature input, which is mapped identically to the output by the shortcut connection on the right; "weight layer" denotes a convolution weight layer, "relu" the activation function, and F(X) the residual of the feature representation learned by passing X through the convolution weight layers. Let H(X) denote the feature that the input X finally learns; the unit is designed to learn the residual, so the feature that actually needs to be learned becomes F(X) = H(X) − X. This is done because learning the residual is easier than learning the original features directly. When the residual F(X) = 0, the convolution weight layers perform only an identity mapping, so the network's performance at least does not degrade; in practice F(X) is not 0, which lets the convolution weight layers learn new features on top of the input features.
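The residual computation described above can be sketched in a few lines of NumPy. This is an illustrative sketch only: fully-connected weight layers stand in for the patent's convolution weight layers.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_unit(x, w1, w2):
    # F(X): the residual learned by two weight layers with ReLU in between
    f = w2 @ relu(w1 @ x)
    # The shortcut connection adds the identity mapping X back: output = relu(F(X) + X)
    return relu(f + x)

# When the weight layers contribute nothing, F(X) = 0 and the unit reduces to
# the identity for non-negative inputs, so performance at least does not degrade.
x = np.array([1.0, 2.0, 3.0])
w_zero = np.zeros((3, 3))
assert np.allclose(residual_unit(x, w_zero, w_zero), x)
```

With non-zero weights the same unit learns a correction on top of the input rather than the full mapping, which is what makes very deep stacks of such units trainable.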
In this example, the residual structure is adopted to prevent vanishing gradients during pre-training and thus train the network better; multiple convolution operations with different kernel sizes are performed within the residual structure to better extract image features, and the ReLU nonlinear activation function is used to speed up the convergence of neural network training.
In this example, the convolution operation is formulated as follows, where w(x, y) denotes a convolution kernel of size m × n, f(x, y) an image, and · the convolution operation:

w(x, y) · f(x, y) = Σ_{s = −a..a} Σ_{t = −b..b} w(s, t) f(x − s, y − t)    (1)

where a and b are the kernel half-extents in the image width and length directions (a = (m − 1)/2, b = (n − 1)/2), and s and t index the kernel position in the image width and length directions, respectively.
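Equation (1) can be transcribed directly into NumPy; zero padding at the borders and an odd-sized kernel are assumed here for illustration.

```python
import numpy as np

def conv2d(f, w):
    """Convolve image f with an m-by-n kernel w, following equation (1)."""
    m, n = w.shape
    a, b = (m - 1) // 2, (n - 1) // 2                # kernel half-extents
    H, W = f.shape
    out = np.zeros((H, W))
    fp = np.pad(f.astype(float), ((a, a), (b, b)))   # zero padding at the borders
    wf = w[::-1, ::-1]                               # kernel flip gives f(x - s, y - t)
    for x in range(H):
        for y in range(W):
            out[x, y] = np.sum(wf * fp[x:x + m, y:y + n])
    return out

# A delta kernel (1 at the center, 0 elsewhere) leaves the image unchanged
img = np.arange(9.0).reshape(3, 3)
delta = np.zeros((3, 3)); delta[1, 1] = 1.0
assert np.allclose(conv2d(img, delta), img)
```

Real networks use optimized library kernels rather than explicit loops, but the arithmetic is the same term-by-term sum as in equation (1).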
In this example, the ReLU activation function is the nonlinear function given below: it is 0 when the input x is less than or equal to 0 and equal to x when x is greater than 0.

ReLU(x) = max(0, x), i.e. ReLU(x) = 0 for x ≤ 0 and ReLU(x) = x for x > 0.
In this embodiment, a height-driven attention module is added to the base network to improve recognition accuracy while maintaining real-time performance.
The height-driven attention module is motivated by the observation that, in real urban scene images, different scene categories occupy different proportions of pixels at different height levels of a single frame. As shown in Fig. 2, a single frame is divided into three height levels: upper, middle, and lower. The dominant pixels in the upper level are sky; in the middle level, vehicles, pedestrians, buildings, and the like; and in the lower level, the road surface, i.e., the region of interest. The height-driven attention module can therefore attend directly to the region of interest, improving recognition accuracy and speeding up recognition.
The height-driven attention module comprises width pooling, down-sampling, computation of the height-driven attention feature map, and insertion of feature position encodings. Width pooling obtains a feature map along the image width direction, using average pooling. Not all feature maps obtained by width pooling are necessary, so down-sampling removes the unnecessary ones; the down-sampled feature maps are processed with a convolution operation, as in equation (1), to obtain the relationships between adjacent positions. The final feature position encoding operation supplies prior information about the vertical position of specific objects: position encodings are generated with sine and cosine functions of different frequencies and then added element-wise to the feature vectors at the corresponding positions.
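The width pooling and position encoding operations just described can be sketched as follows. This is a simplified NumPy illustration under assumed tensor shapes, not the patent's exact module; the sigmoid weighting is one plausible way to turn the pooled height profile into attention weights.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def width_pool_attention(feat):
    """feat: (C, H, W) feature map.
    Width pooling (average) yields a (C, H) height profile, which is turned
    into per-height attention weights that reweight each row of the map."""
    profile = feat.mean(axis=2)            # average pooling along the width
    attn = sigmoid(profile)                # height-driven attention weights
    return feat * attn[:, :, None]

def height_position_encoding(H, C):
    """Sine/cosine encodings of the vertical position, to be added
    element-wise to the feature vectors at the corresponding heights."""
    pos = np.arange(H)[:, None]
    i = np.arange(C)[None, :]
    angle = pos / np.power(10000.0, (2 * (i // 2)) / C)
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))  # shape (H, C)
```

The sine/cosine scheme gives each image row a distinct, frequency-coded signature, which is the prior information about vertical position that the module injects.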
In this embodiment, the overall structure of H-ResNet is shown in Fig. 4, where each ResNet stage represents a residual structure and H represents a height-driven attention module. Recognition becomes more accurate as the number of ResNet stages grows; however, more stages also increase the computational load and reduce real-time performance. The number of height-driven attention modules grows with the number of ResNet stages; this example uses 4 ResNet stages and 3 height-driven attention modules. Each attention module is inserted between two residual modules, so a vertical-position prior is obtained after each residual module; acquiring this prior repeatedly yields more accurate road surface state information.
In this example, the deep learning framework is TensorFlow, Keras, Caffe2, PyTorch, or MXNet.
In this example, the experimental platform for training the semantic segmentation model uses a GeForce GTX 1080Ti GPU, an i7-9700K CPU, and 64 GB of memory; the software environment is 64-bit Ubuntu 18.04. The network model is built in Python with the mainstream deep learning framework PyTorch, and high-performance parallel computation is performed with the CUDA parallel computing architecture and the cuDNN GPU acceleration library.
In this example, training uses Focal Loss, shown in equation (7), as the loss function. Focal Loss modifies the standard cross-entropy loss to down-weight well-classified samples and up-weight hard-to-classify samples, so that during training the model quickly focuses on hard (i.e., relatively rare) samples, addressing class imbalance.
FL = −(1/N) Σ_{i = 1..N} α (1 − p_{t,i})^γ log(p_{t,i})    (7)

where N is the number of samples used for network training, i the sample index, y_i the label of training sample i, α a weighting hyperparameter taking values in [0, 1], γ a focusing hyperparameter, and p_i ∈ [0, 1] the predicted probability that y_i = +1, with

p_{t,i} = p_i when y_i = +1, and p_{t,i} = 1 − p_i otherwise.
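A binary-classification form of the focal loss can be written in NumPy as below; the α and γ default values are common illustrative choices, not values stated in the patent.

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """p: predicted probability that the label is +1, per sample; y in {0, 1}."""
    p = np.clip(p, 1e-7, 1.0 - 1e-7)              # numerical safety for log
    pt = np.where(y == 1, p, 1.0 - p)             # probability of the true class
    at = np.where(y == 1, alpha, 1.0 - alpha)     # class weighting
    return float(np.mean(-at * (1.0 - pt) ** gamma * np.log(pt)))

# Hard samples (low probability of the true class) are weighted far more
# heavily than well-classified ones, which is the point of the (1 - pt)^gamma factor.
easy = focal_loss(np.array([0.99]), np.array([1]))
hard = focal_loss(np.array([0.10]), np.array([1]))
assert hard > 100 * easy
```

Setting γ = 0 and α = 1 recovers the plain cross-entropy loss, which shows how Focal Loss is a modification of the standard criterion rather than a different objective.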
In this example, to keep the model stable while learning the low-level and high-level semantic information of the image as quickly as possible, an exponentially decaying learning rate is used for model training, as shown in equation (8).
decayed_lr = init_lr × decay_rate^(global_step / decay_steps)    (8)

where: init_lr - initial learning rate
decay_rate - decay coefficient
global_step - iteration number
decay_steps - decay speed
In this example, 8 images are input in each training iteration, and the input images are augmented with random scaling, random rotation, image flipping, and the like. The initial learning rate init_lr is set to 0.001, the decay coefficient decay_rate to 0.95, the number of iterations global_step to 5400, and the decay speed decay_steps to 50.
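Equation (8) with these hyperparameters gives the final learning rate, as the pure-Python sketch below shows.

```python
def decayed_lr(init_lr, decay_rate, global_step, decay_steps):
    # Equation (8): exponentially decaying learning rate
    return init_lr * decay_rate ** (global_step / decay_steps)

# With init_lr = 0.001, decay_rate = 0.95, global_step = 5400, decay_steps = 50,
# the rate has decayed through 5400 / 50 = 108 multiplications by 0.95.
final_lr = decayed_lr(0.001, 0.95, 5400, 50)
assert abs(final_lr - 0.001 * 0.95 ** 108) < 1e-12
```

In practice the same schedule is available off the shelf in deep learning frameworks, but the closed form above makes the decay behavior easy to verify.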
The collected data set is fed into the constructed semantic segmentation network for end-to-end training, and the trained model is integrated into the on-board computer of an intelligent vehicle or other intelligent machine via ROS software.
Real-time images are acquired with a vehicle-mounted network camera or USB camera and input to the semantic segmentation model integrated in the on-board computer, which distinguishes the categories of the different road surface areas in real time and locates their distribution, yielding a road surface condition distribution map.
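The last step, converting the per-pixel classes of the road condition map into local adhesion coefficient estimates, can be sketched as follows. The coefficient values here are illustrative placeholders only, since the patent's actual type-to-coefficient mapping (Table 1) is reproduced only as an image.

```python
import numpy as np

# Hypothetical class -> adhesion coefficient lookup (placeholder values,
# NOT the patent's Table 1)
MU = {
    0: 0.8,  # dry asphalt (background)
    1: 0.5,  # local water accumulation
    2: 0.2,  # local snow accumulation
    3: 0.1,  # local icing
}

def coefficient_map(mask):
    """Map a per-pixel class mask from the segmentation network to a
    per-pixel local adhesion coefficient estimate."""
    mu = np.zeros(mask.shape, dtype=float)
    for cls, coeff in MU.items():
        mu[mask == cls] = coeff
    return mu

mask = np.array([[0, 1], [2, 3]])   # toy 2x2 road condition map
mu = coefficient_map(mask)
assert mu[0, 0] == 0.8 and mu[1, 1] == 0.1
```

The resulting coefficient map is what a planner would consume: each pixel carries a local adhesion estimate rather than a single value for the whole road.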
In this example, the camera is mounted at the windshield so as to avoid environmental interference that would degrade the quality of the collected images.
The above-listed series of detailed descriptions are merely specific illustrations of possible embodiments of the present invention, and they are not intended to limit the scope of the present invention, and all equivalent means or modifications that do not depart from the technical spirit of the present invention are intended to be included within the scope of the present invention.

Claims (9)

1.一种基于图像分割的局部路面附着系数估计方法,其特征在于,包括如下步骤:1. a local road adhesion coefficient estimation method based on image segmentation, is characterized in that, comprises the steps: 步骤1:离线预训练图像分割网络模型,具体包括:Step 1: Offline pre-training image segmentation network model, including: 1.1.采集不同天气状况的路面图像,1.1. Collect road images of different weather conditions, 1.2.对采集的不同天气状况的路面图像进行局部标注,形成局部路面附着系数估计的数据集,1.2. Locally label the collected road images of different weather conditions to form a data set for local road adhesion coefficient estimation, 1.3.搭建图像分割的深度学习网络模型,1.3. Build a deep learning network model for image segmentation, 1.4.利用局部路面附着系数估计的数据集对图像分割的深度学习算法网络框架进行端到端的训练;1.4. End-to-end training of the deep learning algorithm network framework for image segmentation using the data set of local road adhesion coefficient estimation; 步骤2:获取实时路面图像,对路面局部附着系数实时估计,具体包括:Step 2: Obtain a real-time road image and estimate the local adhesion coefficient of the road in real time, including: 2.1.采集路面图像,2.1. Collect road images, 2.2.用预训练好的图像分割网络模型对实时获取的图像进行分类并定位不同类别,形成实时路况图,2.2. Use the pre-trained image segmentation network model to classify the images obtained in real time and locate different categories to form a real-time road map, 2.3.根据路面类型对实时路况图进行局部路面附着系数估计。2.3. Estimate the local road adhesion coefficient on the real-time road map according to the road type. 2.根据权利要求1所述的一种基于图像分割的局部路面附着系数估计方法,其特征在于,所述1.1中,采集不同天气状况的路面图像,所使用的采集工具为CARLA模拟软件,采集的图像包括路面局部积水图像,路面局部积雪图像,路面局部结冰图像及正常沥青路面。2. a kind of local road adhesion coefficient estimation method based on image segmentation according to claim 1, is characterized in that, in described 1.1, collect road surface images of different weather conditions, the collection tool used is CARLA simulation software, collects The images include local water images on the road surface, local snow images on the road surface, local icing images on the road surface and normal asphalt roads. 
3.根据权利要求1所述的一种基于图像分割的局部路面附着系数估计方法,其特征在于,所述1.3中,所述图像分割的深度学习网络模型包括利用残差结构搭建的基础网络结构,在基础网络上加上高度驱动的注意力模块,将此语义分割神经网络模型命名为H-ResNet模型,其中H代表高度驱动的注意力模块,ResNet代表由残差结构搭建的基础网络结构。3. A method for estimating local road adhesion coefficient based on image segmentation according to claim 1, wherein in said 1.3, the deep learning network model of said image segmentation comprises a basic network structure built by using a residual structure , add a highly driven attention module to the basic network, and name this semantic segmentation neural network model as the H-ResNet model, where H represents the highly driven attention module, and ResNet represents the basic network structure built by the residual structure. 4.根据权利要求3所述的一种基于图像分割的局部路面附着系数估计方法,其特征在于,所述H-ResNet模型中每个ResNet stage代表一个残差结构,H代表高度驱动的注意力模块;采用4个ResNet stage,3个高度驱动注意力模块,每个高度驱动注意力模块,插入在两个残差模块之间,每次经过残差模块后都获得一次垂直位置的先验信息,经过多次获得垂直位置的先验信息后更能准确的获得路面的状态信息。4. A method for estimating local road adhesion coefficient based on image segmentation according to claim 3, wherein each ResNet stage in the H-ResNet model represents a residual structure, and H represents highly driven attention Module; using 4 ResNet stages, 3 highly-driven attention modules, each highly-driven attention module, inserted between two residual modules, each time through the residual module to obtain a priori information of the vertical position , the state information of the road surface can be more accurately obtained after obtaining the prior information of the vertical position for many times. 5.根据权利要求4所述的一种基于图像分割的局部路面附着系数估计方法,其特征在于,所述残差单元包括:快捷连接和恒等映射;设X代表特征输入,通过快捷连接、恒等映射到输出;weight layer代表卷积权重层,relu表示激活函数,F(X)表示X经过卷积权重层所学习到的特征表示的残差,H(X)表示输入X最终学习到的特征,将需要学习的特征等价为F(X)=H(X)-X;5. 
A method for estimating local road adhesion coefficient based on image segmentation according to claim 4, wherein the residual unit comprises: a shortcut connection and an identity mapping; The identity is mapped to the output; weight layer represents the convolution weight layer, relu represents the activation function, F(X) represents the residual of the feature representation learned by X through the convolution weight layer, and H(X) represents the input X finally learned The features that need to be learned are equivalent to F(X)=H(X)-X; 所述激活函数采用ReLU非线性激活函数:当横坐标x小于等于0的时候函数为0,当x大于0的时候,函数值等于x:
Figure RE-FDA0002860511330000021
The activation function adopts the ReLU nonlinear activation function: when the abscissa x is less than or equal to 0, the function is 0, and when x is greater than 0, the function value is equal to x:
Figure RE-FDA0002860511330000021
所述卷积层的操作公式:
Figure RE-FDA0002860511330000022
The operation formula of the convolutional layer:
Figure RE-FDA0002860511330000022
其中w(x,y)代表大小为m×n的卷积核,f(x,y)为一幅图像,·代表卷积运算。where w(x, y) represents a convolution kernel of size m×n, f(x, y) is an image, and · represents the convolution operation.
6.根据权利要求4所述的一种基于图像分割的局部路面附着系数估计方法,其特征在于,所述高度驱动的注意力模块是针对实际城市场景图像中单帧图像在不同高度层次中不同景象类别所占的像素不同而设计的,具体是把单帧图像分为上中下三个高度层次,在上层次中占据图像主导像素为天空,在中间层次中占据图像主导像素为车辆,行人,建筑等,而在下层次中占据图像主导像素为路面,既感兴趣区域。6 . The method for estimating local road adhesion coefficient based on image segmentation according to claim 4 , wherein the height-driven attention module is aimed at different height levels of single-frame images in actual urban scene images. 7 . It is designed for the different pixels occupied by the scene category. Specifically, the single frame image is divided into three height levels: upper, middle and lower levels. In the upper level, the dominant pixel of the image is the sky, and in the middle level, the dominant pixel of the image is vehicles and pedestrians. , buildings, etc., while the dominant pixels occupying the image in the lower layers are the pavement, that is, the region of interest. 该高度驱动的注意力模块包括宽度池化、下采样、计算高度驱动注意力特征图、插入特征位置编码;其中,宽度池化操作是为了获得图像宽度方向的特征图,采用的池化方式是平均池化;经过宽度池化操作活得的特征地图并不是都是必须的,通过下采样去除不必要的特征地图,对下采样的特征地图计算是利用卷积操作,如公式1,获得其相邻位置关系,最后的插入特征位置编码操作能够获得特定物体的垂直位置的先验信息,位置编码通过不同频率的正弦,余弦函数生成,然后和对应的位置的特征向量进行逐元素相加。The height-driven attention module includes width pooling, downsampling, calculating height-driven attention feature maps, and inserting feature location codes; wherein, the width pooling operation is to obtain the feature map in the width direction of the image, and the pooling method used is Average pooling; not all feature maps obtained through the width pooling operation are necessary, and unnecessary feature maps are removed by downsampling, and the convolution operation is used to calculate the downsampled feature map, such as formula 1, to obtain its Adjacent position relationship, the final insertion feature position encoding operation can obtain the prior information of the vertical position of a specific object, the position code is generated by sine and cosine functions of different frequencies, and then added element by element with the corresponding 
7. The method for estimating local road adhesion coefficient based on image segmentation according to claim 1, wherein the training method in step 1.4 is backpropagation using single-GPU or multi-GPU Batch Gradient Descent, Stochastic Gradient Descent, or Mini-Batch Gradient Descent; the Focal Loss used for training is obtained by modifying the standard cross-entropy loss function:
[Formula image RE-FDA0002860511330000023: the Focal Loss, whose standard form is FL(p_t) = −α_t(1 − p_t)^γ log(p_t)]
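In its standard form, the Focal Loss on the true-class probability p_t is FL(p_t) = −α_t(1 − p_t)^γ log(p_t), which reduces to plain cross-entropy when γ = 0 and α_t = 1. A minimal single-sample sketch (the γ and α defaults are illustrative, not values fixed by the patent):

```python
import math

def focal_loss(p_t, gamma=2.0, alpha=1.0):
    """Focal Loss for one sample, where p_t is the predicted probability
    of the true class. The (1 - p_t)**gamma factor down-weights
    well-classified samples relative to plain cross-entropy."""
    return -alpha * (1.0 - p_t) ** gamma * math.log(p_t)

def cross_entropy(p_t):
    """Standard cross-entropy on the true-class probability."""
    return -math.log(p_t)
```

For a well-classified sample (p_t = 0.9), the focal factor (1 − 0.9)² = 0.01 shrinks the loss a hundredfold relative to cross-entropy, concentrating the gradient on hard examples such as small water or ice patches.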
An exponentially decaying learning rate is chosen as the learning rate for model training, as shown in Equation 8:

decayed_lr = init_lr × decay_rate^(global_step/decay_steps)  (8)

where:
init_lr — the initially set learning rate
decay_rate — the decay coefficient
global_step — the number of iteration rounds
decay_steps — the decay speed (the step interval over which one decay is applied).
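Equation 8 is directly computable; a one-line sketch with illustrative parameter values (the patent does not fix them):

```python
def decayed_lr(init_lr, decay_rate, global_step, decay_steps):
    """Exponentially decayed learning rate, Equation 8:
    decayed_lr = init_lr * decay_rate ** (global_step / decay_steps)."""
    return init_lr * decay_rate ** (global_step / decay_steps)

# With init_lr = 0.1 and decay_rate = 0.5, the rate halves
# every decay_steps = 1000 iterations.
lr_at_start = decayed_lr(0.1, 0.5, 0, 1000)
lr_after_1k = decayed_lr(0.1, 0.5, 1000, 1000)
```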
8. The method for estimating local road adhesion coefficient based on image segmentation according to claim 1, wherein the tool for collecting road images in step 2.1 is a vehicle-mounted camera, which may be a network camera or a USB camera.
9. The method for estimating local road adhesion coefficient based on image segmentation according to claim 1, wherein classifying and locating different categories as described in step 2.2 means, against a background road surface of dry asphalt, identifying water-covered, snow-covered, or icy regions, or any random combination thereof, and obtaining their distribution on the asphalt road surface.
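Once segmentation has labeled each region as dry asphalt, water, snow, or ice, a local adhesion coefficient map follows from a per-class lookup. The sketch below uses typical textbook adhesion values (dry asphalt ≈ 0.8, wet ≈ 0.5, snow ≈ 0.2, ice ≈ 0.1) purely as illustrative assumptions; the claims themselves do not fix these numbers.

```python
# Illustrative class -> adhesion coefficient lookup
# (assumed typical values, not taken from the patent claims).
ADHESION = {"dry_asphalt": 0.8, "water": 0.5, "snow": 0.2, "ice": 0.1}

def adhesion_map(label_grid):
    """Map a grid of segmentation labels to local adhesion coefficients."""
    return [[ADHESION[label] for label in row] for row in label_grid]

def min_adhesion(label_grid):
    """Worst-case (minimum) coefficient in the region of interest --
    the value a braking or stability controller would plan against."""
    return min(min(row) for row in adhesion_map(label_grid))
```

This separation mirrors the claim's structure: segmentation localizes the low-adhesion patches, and the coefficient estimate is then local rather than a single value for the whole road.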
CN202011067813.4A 2020-10-07 2020-10-07 Local road surface adhesion coefficient estimation method based on image segmentation Active CN112329533B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011067813.4A CN112329533B (en) 2020-10-07 2020-10-07 Local road surface adhesion coefficient estimation method based on image segmentation

Publications (2)

Publication Number Publication Date
CN112329533A true CN112329533A (en) 2021-02-05
CN112329533B CN112329533B (en) 2024-05-14

Family

ID=74314547

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011067813.4A Active CN112329533B (en) 2020-10-07 2020-10-07 Local road surface adhesion coefficient estimation method based on image segmentation

Country Status (1)

Country Link
CN (1) CN112329533B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110263844A * 2019-06-18 2019-09-20 北京中科原动力科技有限公司 A method for online learning and real-time estimation of road surface state
CN110378416A * 2019-07-19 2019-10-25 北京中科原动力科技有限公司 A vision-based road adhesion coefficient estimation method
CN111723849A (en) * 2020-05-26 2020-09-29 同济大学 A method and system for online estimation of road adhesion coefficient based on vehicle camera

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113312976A (en) * 2021-04-30 2021-08-27 淮阴工学院 Braking distance calculation method based on combination of image processing and road adhesion coefficient
CN113435409A (en) * 2021-07-23 2021-09-24 北京地平线信息技术有限公司 Training method and device of image recognition model, storage medium and electronic equipment
CN113887532A (en) * 2021-11-17 2022-01-04 安徽省公共气象服务中心 Method for identifying and correcting accumulated snow images of expressway based on scene classification
CN114332715A (en) * 2021-12-30 2022-04-12 武汉华信联创技术工程有限公司 Method, device and equipment for identifying snow through automatic meteorological observation and storage medium
CN114332722A (en) * 2021-12-31 2022-04-12 吉林大学 Real-time estimation method for adhesion coefficient of mixed ice and snow road surface based on video data
CN114820819A (en) * 2022-05-26 2022-07-29 广东机电职业技术学院 Expressway automatic driving method and system
CN114820819B (en) * 2022-05-26 2023-03-31 广东机电职业技术学院 Expressway automatic driving method and system
CN118691838A (en) * 2024-08-22 2024-09-24 云途信息科技(杭州)有限公司 Urban water depth estimation method and system based on multi-reference system
CN118691838B (en) * 2024-08-22 2024-11-15 云途信息科技(杭州)有限公司 Urban water depth estimation method and system based on multi-reference system

Also Published As

Publication number Publication date
CN112329533B (en) 2024-05-14

Similar Documents

Publication Publication Date Title
CN112329533B (en) Local road surface adhesion coefficient estimation method based on image segmentation
CN111310574B (en) A vehicle vision real-time multi-target multi-task joint perception method and device
CN109101907B (en) A Vehicle Image Semantic Segmentation System Based on Bilateral Segmentation Network
CN110147763A Video semantic segmentation method based on convolutional neural networks
CN108830171B (en) Intelligent logistics warehouse guide line visual detection method based on deep learning
CN113126115B (en) Semantic SLAM method and device based on point cloud, electronic equipment and storage medium
CN106228125B Lane line detection method based on ensemble learning cascade classifier
CN113902915A (en) Semantic segmentation method and system based on low-illumination complex road scene
CN106599827A (en) Small target rapid detection method based on deep convolution neural network
CN113065578A (en) Image visual semantic segmentation method based on double-path region attention coding and decoding
CN113506300A (en) A method and system for image semantic segmentation based on complex road scenes in rainy days
Zhang et al. End to end video segmentation for driving: Lane detection for autonomous car
CN110717886A (en) Pavement pool detection method based on machine vision in complex environment
JP4420512B2 (en) Moving object motion classification method and apparatus, and image recognition apparatus
CN117351702A (en) Intelligent traffic management method based on adjustment of traffic flow
CN116630702A (en) Pavement adhesion coefficient prediction method based on semantic segmentation network
CN117763423A (en) Intelligent automobile laser radar point cloud anomaly detection method based on deep learning
Saeed et al. Gravel road classification based on loose gravel using transfer learning
CN116503834A (en) Traffic road surface attachment coefficient prediction model and prediction method based on semantic segmentation
CN117218858B (en) Traffic safety warning system and method for highway
CN112597996A (en) Task-driven natural scene-based traffic sign significance detection method
CN115147450B (en) Moving target detection method and detection device based on motion frame difference image
CN110765900A (en) DSSD-based automatic illegal building detection method and system
CN110738113B (en) An Object Detection Method Based on Neighboring Scale Feature Filtering and Transfer
CN113312976A (en) Braking distance calculation method based on combination of image processing and road adhesion coefficient

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant