CN113379711A - Image-based urban road pavement adhesion coefficient acquisition method - Google Patents


Info

Publication number
CN113379711A
CN113379711A (application CN202110683924.6A)
Authority
CN
China
Prior art keywords
image
road surface
size
road
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110683924.6A
Other languages
Chinese (zh)
Other versions
CN113379711B (en
Inventor
刘俊
郭洪艳
刘惠
赵旭
陈虹
高振海
胡云峰
Current Assignee
Jilin University
Original Assignee
Jilin University
Priority date
Filing date
Publication date
Application filed by Jilin University filed Critical Jilin University
Priority to CN202110683924.6A priority Critical patent/CN113379711B/en
Publication of CN113379711A publication Critical patent/CN113379711A/en
Application granted granted Critical
Publication of CN113379711B publication Critical patent/CN113379711B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T90/00Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Library & Information Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract


The invention provides an image-based method for obtaining the adhesion coefficient of urban road pavement. First, a pavement image information base is established; then a pavement image data set is established; a pavement image area extraction network is established and trained; a pavement type recognition network is established and trained; finally, road adhesion coefficient information is obtained. The method can provide road adhesion coefficient information for the development of intelligent driving assistance systems and unmanned driving systems. Because it obtains the adhesion coefficient from images of the road ahead, road adhesion information can be acquired in advance; and because the method places the image-based pavement area extraction network and the pavement type recognition network in series, with the recognition network structurally simplified, adhesion information for the road ahead can be obtained quickly and in real time.


Description

Image-based urban road pavement adhesion coefficient acquisition method
Technical Field
The invention belongs to the technical field of intelligent automobiles, relates to a road surface adhesion coefficient acquisition method, and more particularly relates to an image-based urban road surface adhesion coefficient acquisition method.
Background
With the development of automobile intelligence, users place ever-higher performance demands on vehicle-mounted intelligent driving assistance systems and unmanned driving systems. Improving the performance of most intelligent driving assistance systems depends on the accuracy of vehicle dynamics control, and designing a high-performance dynamics control system requires accurate, real-time road surface information; an estimator based on a dynamics model can then produce a real-time, accurate estimate of the road surface adhesion coefficient.
At the present stage, more and more intelligent vehicles are equipped with cameras and other devices to acquire road and surrounding-vehicle information, bringing new opportunities to research on road adhesion coefficient identification. The advantage of a camera is that the condition of the road surface ahead can be sensed in advance, providing a degree of prediction: an intelligent vehicle can adjust its control strategy before a sudden change in the road surface, improving its ability to respond to dangerous operating conditions. The challenge is how to obtain road adhesion coefficient information from image sensing information.
Disclosure of Invention
In order to overcome the problems in the prior art, the invention provides an urban road pavement adhesion coefficient acquisition method based on an image.
The method is realized by the following technical scheme:
an urban road pavement adhesion coefficient obtaining method based on images comprises the following specific steps:
step one, establishing a road surface image information base
Obtaining the road adhesion coefficient from images presupposes a well-constructed road surface image information base, with sample images properly processed so that the feature information they contain is fully exploited.
First, road surface image data are acquired. Factors that degrade imaging quality must be compensated for during acquisition. The image acquisition equipment is not limited to any single device or type, but must meet the following performance and installation requirements: a video resolution of 1280 × 720 or above, a video frame rate of 30 frames per second or above, a maximum effective shooting distance of more than 70 meters, and wide dynamic range capability to adapt quickly to changes in light intensity. The device should be mounted so that the road surface occupies more than half of the captured image area.
Based on the conditions of urban road surfaces under different weather conditions, and through comparative analysis combined with the urban road surface types found in China, the road surface types to be identified are defined as five categories: asphalt, cement, loose snow, compacted snow, and ice sheet. The video files from the data acquisition process are decomposed into pictures at intervals of 10 frames; the pictures are sorted into the five categories according to the road surface characteristics given in GB/T 920-2002 "Road surface grade and surface layer type code" and surveys of pavement adhesion coefficients in cold regions. Images of the same road surface type are stored together in the same folder, completing the establishment of the road surface image information base.
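The 10-frame sampling step above can be sketched as follows. The index helper captures the sampling rule itself; `video_to_frames` is an illustrative wrapper that assumes OpenCV (`cv2.VideoCapture`), a dependency not named in the patent, and its file-naming scheme is likewise an assumption:

```python
import os

def kept_frame_indices(total_frames, step=10):
    """Indices of the frames kept when decomposing a video every `step` frames."""
    return list(range(0, total_frames, step))

def video_to_frames(video_path, out_dir, step=10):
    """Decompose a video file into pictures at intervals of `step` frames.

    Assumes OpenCV is available (imported lazily so the sampling logic
    above stays dependency-free); the frame file names are illustrative.
    """
    import cv2  # assumed dependency, not specified in the patent
    cap = cv2.VideoCapture(video_path)
    os.makedirs(out_dir, exist_ok=True)
    index = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:  # keep every 10th frame by default
            cv2.imwrite(os.path.join(out_dir, f"frame_{index:06d}.png"), frame)
            saved += 1
        index += 1
    cap.release()
    return saved
```

The kept frames are then sorted manually into the five per-type folders.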
step two, establishing a pavement image data set
The originally collected images still contain a large number of non-road-surface elements, which seriously degrades the accuracy of adhesion coefficient acquisition. The image-based method therefore needs image samples together with pixel-level labels of the road surface region, so each image in the road surface image information base collected in step one must be annotated with its road surface extent. Labelme, run in the Anaconda software environment, is chosen as the annotation tool and is used to manually label every image in the sample set one by one: during annotation the create polygon button is clicked, points are drawn along the boundary of the road surface region in the image so that the annotation polygon completely covers the road surface region, and the annotation category is named road. When annotation is finished, a json file is generated and converted with the bundled json_to_dataset.py script in Anaconda, producing a json folder containing five files: img.png, label.png, label_viz.png, info.yaml, and label_names. Only the label.png image file needs to be converted, yielding an 8-bit grayscale label image. Applying this annotation process with Labelme in Anaconda to each picture in the road surface image information base in turn yields a set of grayscale label images, and this set of grayscale label images is the road surface image data set.
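The conversion of a label image into an 8-bit grayscale class-index map can be sketched as below. This is a minimal stand-in for the json_to_dataset.py workflow, under the assumption of a single annotated class (road), so any non-background pixel maps to index 1; real Labelme output with several classes would need a color-to-index lookup instead:

```python
import numpy as np

def to_gray_label(label_img):
    """Collapse a label image to an 8-bit class-index map.

    Assumption: only one annotated class ('road'), so background pixels
    (all channels zero) become 0 and any labeled pixel becomes 1.
    """
    arr = np.asarray(label_img)
    if arr.ndim == 3:           # RGB label image -> "any channel set" per pixel
        arr = arr.any(axis=2)
    return arr.astype(np.uint8)
```

For example, a 4 × 4 RGB label with a 2 × 2 dark-red road polygon collapses to a uint8 mask with four 1-pixels.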
step three, establishing and training a road surface image area extraction network
The road surface image area is extracted in the Anaconda environment by a semantic segmentation network; the whole network is an encoder-decoder structure, designed as follows:
3.1, first, the image to be recognized is scaled to a picture of size 769 × 769 × 3 and used as the input of the semantic segmentation network;
3.2, the first layer is a convolutional layer with 32 filters of size 3 × 3, stride 2, and padding 1; after batch normalization and a ReLU activation function, the convolutional layer output feature map has size 385 × 385 × 32;
3.3, the convolutional layer output feature map is fed into a 3 × 3 max pooling layer with stride 2, giving a pooling layer output feature map of size 193 × 193 × 32;
3.4, the pooling layer output feature map is the input of the bottleneck module, which is implemented as follows: the input feature channels are first copied to increase the feature dimension. One branch passes directly through a 3 × 3 depthwise convolution with stride 2. The other branch is divided equally into two sub-branches by channel splitting: one sub-branch undergoes a 3 × 3 depthwise convolution followed by a 1 × 1 pointwise convolution, while the other sub-branch is passed through directly for feature reuse. The two sub-branches are then joined by channel concatenation, the channel ordering is scrambled by a channel shuffle, and the result passes through the same 3 × 3, stride-2 depthwise convolution before being concatenated with the copied channels. Finally, a 1 × 1 pointwise convolution exchanges information between groups. The whole bottleneck module thus halves the spatial size of the feature map and doubles the number of channels; after one pass through the bottleneck module the output feature map has size 97 × 97 × 64;
3.5, the output of step 3.4 is passed through the bottleneck module again, giving an output feature map of size 49 × 49 × 128 after two passes; that result is passed through the bottleneck module once more, giving an output feature map of size 25 × 25 × 256 after three passes, 32 times smaller than the original image. This whole part serves as the encoder of the semantic segmentation network;
3.6, the decoder adopts a skip structure: the output feature map after three bottleneck passes from step 3.5 is upsampled 2× by bilinear interpolation to a 49 × 49 × 256 feature map and added pixel by pixel to the output feature map after two bottleneck passes from step 3.5; in this process the channels of the two-pass output must be copied so that the result still has 256 channels;
3.7, the result of step 3.6 is again upsampled 2× by bilinear interpolation to a 97 × 97 × 256 feature map and added pixel by pixel to the output feature map after one bottleneck pass from step 3.4;
3.8, the result of step 3.7 passes through a 1 × 1 convolutional layer that converts the number of output channels into the number of semantic categories, and a Dropout layer is added to reduce overfitting; 8× upsampling then yields a feature map the same size as the original image, and an Argmax function assigns each pixel the semantic category of maximum probability, completing the semantic segmentation network;
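The bottleneck module of step 3.4 can be sketched as below in PyTorch (a framework assumption: the patent only specifies the Anaconda environment). Layer ordering, normalization placement, and activation choices are interpretations of the prose description, not taken from the patent figures:

```python
import torch
import torch.nn as nn

def channel_shuffle(x, groups=2):
    """Scramble channel ordering across groups (the 'channel shuffle' step)."""
    n, c, h, w = x.shape
    return x.view(n, groups, c // groups, h, w).transpose(1, 2).reshape(n, c, h, w)

class Bottleneck(nn.Module):
    """Halves spatial size, doubles channels, as described in step 3.4."""
    def __init__(self, c):
        super().__init__()
        half = c // 2
        # copied-channel branch: 3x3 depthwise convolution, stride 2
        self.copy_branch = nn.Sequential(
            nn.Conv2d(c, c, 3, stride=2, padding=1, groups=c, bias=False),
            nn.BatchNorm2d(c))
        # one sub-branch: 3x3 depthwise + 1x1 pointwise convolution
        self.sub1 = nn.Sequential(
            nn.Conv2d(half, half, 3, padding=1, groups=half, bias=False),
            nn.BatchNorm2d(half),
            nn.Conv2d(half, half, 1, bias=False),
            nn.BatchNorm2d(half), nn.ReLU(inplace=True))
        # shared 3x3 depthwise convolution with stride 2 after the shuffle
        self.down = nn.Sequential(
            nn.Conv2d(c, c, 3, stride=2, padding=1, groups=c, bias=False),
            nn.BatchNorm2d(c))
        # final 1x1 pointwise convolution for inter-group information exchange
        self.pw = nn.Sequential(
            nn.Conv2d(2 * c, 2 * c, 1, bias=False),
            nn.BatchNorm2d(2 * c), nn.ReLU(inplace=True))

    def forward(self, x):
        a = self.copy_branch(x)                    # copied channels, downsampled
        s1, s2 = x.chunk(2, dim=1)                 # channel split into sub-branches
        b = torch.cat([self.sub1(s1), s2], dim=1)  # feature reuse on second sub-branch
        b = self.down(channel_shuffle(b))          # shuffle, then stride-2 depthwise
        return self.pw(torch.cat([a, b], dim=1))   # splice with copy, mix groups
```

With a 193 × 193 × 32 input (the pooling output of step 3.3), one pass yields the 97 × 97 × 64 map stated in the text.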
The road surface image data set established in step two is randomly shuffled; 80% of the sample pictures are selected as the training set and 20% as the validation set. During semantic segmentation network training, the tensor of each training picture read in is randomly scaled between 0.5× and 2× in steps of 0.25, randomly cropped to 769 × 769 pixels, and randomly flipped left-right, achieving data augmentation and improving the adaptability of the segmentation network; pixel values are normalized from 0-255 to 0-1;
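The augmentation parameters above (scale factors from 0.5× to 2× in 0.25 steps, 769-pixel crops, 0-1 normalization) can be sketched in plain Python; the crop-origin helper is an illustrative assumption about how the random crop is drawn:

```python
import random

# 0.5, 0.75, 1.0, ..., 2.0 -- the random-scaling grid from the text
SCALES = [0.5 + 0.25 * i for i in range(7)]

def random_crop_origin(height, width, crop=769, rng=random):
    """Top-left corner of a random crop; assumes the image is at least crop x crop."""
    return rng.randrange(height - crop + 1), rng.randrange(width - crop + 1)

def normalize(pixel_value):
    """Map a 0-255 pixel value into the 0-1 range."""
    return pixel_value / 255.0
```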
A Poly learning rate schedule is selected for training the semantic segmentation network; the learning rate decay expression is equation (1), with an initial learning rate of 0.001, training iteration count iters, the maximum training step max_iter set to 20K steps, and power set to 0.9. The Adam optimization algorithm is used, dynamically adjusting the learning rate of each parameter with first- and second-moment estimates of the gradient; the batch size is set to 16 according to the computer hardware, model parameters are saved every 10-30 min, and the validation set is used throughout to evaluate network performance;
lr = lr_base × (1 − iters / max_iter)^power    (1)
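Equation (1) can be written directly as a function, with the defaults matching the values in the text (base rate 0.001, 20K maximum steps, power 0.9):

```python
def poly_lr(step, base_lr=0.001, max_iter=20000, power=0.9):
    """Poly learning-rate schedule: lr = base_lr * (1 - step/max_iter) ** power."""
    return base_lr * (1.0 - step / max_iter) ** power
```

The rate starts at base_lr, decays smoothly, and reaches zero exactly at max_iter.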
After network training is finished, an appropriate semantic segmentation evaluation metric must be selected to evaluate model performance. Before that, the confusion matrix is introduced, as shown in Table 1: each row of the two-class confusion matrix represents a predicted class, each column represents the true class of the data, and each entry is the number of samples predicted as the corresponding class;
TABLE 1 two-class confusion matrix schematic
                        Actual positive        Actual negative
Predicted positive      TP (true positive)     FP (false positive)
Predicted negative      FN (false negative)    TN (true negative)
The evaluation metric for the semantic segmentation network is the mean intersection over union (MIoU): for each class, the ratio of the intersection to the union of the prediction and the ground truth is computed, and the results are summed and averaged, as shown in equation (2):
MIoU = (1 / (k+1)) × Σ_{i=0}^{k} [ p_ii / ( Σ_{j=0}^{k} p_ij + Σ_{j=0}^{k} p_ji − p_ii ) ]    (2)

where p_ij is the number of pixels of class i predicted as class j and k+1 is the number of classes.
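Equation (2) computed from a confusion matrix can be sketched in NumPy; per-class IoU is the diagonal entry over (row sum + column sum − diagonal), averaged across classes:

```python
import numpy as np

def mean_iou(conf):
    """MIoU from a (k+1) x (k+1) confusion matrix, per equation (2).

    Note: a class absent from both prediction and ground truth would give
    a zero union and would need masking; that case is not handled here.
    """
    conf = np.asarray(conf, dtype=float)
    inter = np.diag(conf)                               # p_ii
    union = conf.sum(axis=1) + conf.sum(axis=0) - inter # row + col - diagonal
    return float(np.mean(inter / union))
```

A perfectly diagonal confusion matrix gives MIoU = 1.0.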
When training reaches an MIoU above 60%, training is considered complete; the trained model and its parameters are saved, giving the road surface image area extraction network. Feeding an actually captured original image into this network completes the extraction of the road surface area in the image;
step four, establishing and training a road surface type recognition network
The network of step three completes the extraction of the road surface area from real-time image information, and road surface type identification is performed on the basis of that extraction result;
After the road surface image data set is processed by the semantic segmentation network, an image set containing only road surface areas is obtained and used as the final data set for training and evaluating the road surface type recognition network. The recognition network is built in the Anaconda environment with the following structure:
4.1, first, the image to be classified is scaled to a picture of size 224 × 224 × 3 and used as the input of the convolutional neural network;
4.2, the first layer is a convolutional layer with 32 filters of size 3 × 3, stride 2, and padding 1; after batch normalization and a ReLU activation function, the convolutional layer output feature map has size 112 × 112 × 32;
4.3, the convolutional layer output feature map is fed into a 3 × 3 max pooling layer with stride 2, giving a pooling layer output feature map of size 56 × 56 × 32;
4.4, the pooling layer output feature map is the input of the bottleneck module, which is implemented as follows: the input feature channels are first copied to increase the feature dimension. One branch passes directly through a 3 × 3 depthwise convolution with stride 2. The other branch is divided equally into two sub-branches by channel splitting: one sub-branch undergoes a 3 × 3 depthwise convolution followed by a 1 × 1 pointwise convolution, while the other sub-branch is passed through directly for feature reuse. The two sub-branches are then joined by channel concatenation, the channel ordering is scrambled by a channel shuffle, and the result passes through the same 3 × 3, stride-2 depthwise convolution before being concatenated with the copied channels. Finally, a 1 × 1 pointwise convolution exchanges information between groups. The whole bottleneck module thus halves the spatial size of the feature map and doubles the number of channels; after one pass through the bottleneck module the output feature map has size 28 × 28 × 64;
4.5, the output of step 4.4 is passed through the bottleneck module again, giving an output feature map of size 14 × 14 × 128 after two passes; that result is passed through the bottleneck module once more, giving an output feature map of size 7 × 7 × 256 after three passes;
4.6, a 7 × 7 global average pooling layer converts the output of step 4.5 into a feature map of size 1 × 1 × 256;
4.7, a fully connected layer followed by a Softmax function serves as the network classifier, converting the feature map from step 4.6 into the probability of belonging to each category; an Argmax function determines the classification result from the maximum probability value;
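Steps 4.6-4.7 (global average pooling, fully connected layer, Softmax, Argmax) can be sketched in PyTorch (a framework assumption) for the 7 × 7 × 256 feature map and the five road surface classes:

```python
import torch
import torch.nn as nn

class RoadTypeHead(nn.Module):
    """Classifier head: 7x7 global average pooling -> fully connected -> Softmax."""
    def __init__(self, channels=256, num_classes=5):
        super().__init__()
        self.pool = nn.AvgPool2d(kernel_size=7)   # 7x7 global average pooling
        self.fc = nn.Linear(channels, num_classes)

    def forward(self, feat):                      # feat: (N, 256, 7, 7)
        x = self.pool(feat).flatten(1)            # -> (N, 256)
        probs = torch.softmax(self.fc(x), dim=1)  # per-class probabilities
        return probs, probs.argmax(dim=1)         # Argmax picks the road type
```

The head maps the encoder output to five class probabilities and a predicted class index per image.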
The images containing only road surface areas obtained in step three are then made into the data set for training the road surface type recognition network. Road surface images of different types are stored in separate folders under the names established in step one; the image data in each folder are read in turn and given 5-bit 0/1 label information and road adhesion coefficient information (see Table 2). The images are resized to 224 × 224 pixels by bilinear interpolation, and pixel values are normalized from 0-255 to 0-1. The road surface image data set is shuffled, 20% of the images of each type are randomly drawn as a validation set, and the rest serve as the training set;
TABLE 2 pavement image Category labels
(Table 2, which lists the 5-bit 0/1 label vector and the adhesion coefficient range for each of the five road surface types, appears in the original only as an image.)
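The 5-bit 0/1 labeling and the per-class 80/20 split can be sketched as follows; the class ordering in `CLASSES` is an assumption, since Table 2 itself is available only as an image:

```python
import random

# ordering is an illustrative assumption, not taken from Table 2
CLASSES = ["asphalt", "cement", "loose_snow", "compacted_snow", "ice"]

def one_hot(road_type):
    """5-bit 0/1 label vector for a road surface type."""
    vec = [0] * len(CLASSES)
    vec[CLASSES.index(road_type)] = 1
    return vec

def split_per_class(samples_by_class, val_frac=0.2, seed=0):
    """Randomly draw val_frac of each class as validation, the rest as training."""
    rng = random.Random(seed)
    train, val = [], []
    for cls, samples in samples_by_class.items():
        shuffled = samples[:]
        rng.shuffle(shuffled)
        k = int(len(shuffled) * val_frac)
        val.extend((cls, s) for s in shuffled[:k])
        train.extend((cls, s) for s in shuffled[k:])
    return train, val
```

Drawing the validation set per class, as the text specifies, keeps the class balance of the two sets matched.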
After the training and validation sets for the road surface type recognition network are prepared, training and evaluation of the network model begin: the batch size is set to 64, a cross-entropy loss function is selected, the Adam optimization algorithm is used, and the base learning rate is 0.0001. When training reaches an MIoU above 80%, training is considered complete; the model and training results are saved by iteration count (epoch), giving the trained road surface type recognition network;
step five, obtaining the road surface adhesion coefficient information
The road surface adhesion coefficient information is obtained as follows: while the vehicle is driving, a camera captures an image of the road ahead; the image is passed to the road surface image area extraction network to obtain the road surface area, and the image containing only the road surface area is then passed to the road surface type recognition network for classification. After the road surface type is identified, the adhesion coefficient range of the current road surface is determined from the corresponding vehicle speed (see Table 2), and the midpoint of the upper and lower limits of that range is taken as the current road adhesion coefficient, completing the acquisition of road adhesion coefficient information.
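The final midpoint lookup can be sketched as below. The numeric ranges are placeholder assumptions for illustration only; the actual per-type, per-speed ranges are given in the patent's Table 2, which is reproduced only as an image:

```python
# Placeholder ranges (lower, upper) -- NOT the values from Table 2,
# which is available only as an image in the source document.
ADHESION_RANGE = {
    "asphalt":        (0.70, 0.90),
    "cement":         (0.60, 0.80),
    "loose_snow":     (0.20, 0.40),
    "compacted_snow": (0.15, 0.35),
    "ice":            (0.05, 0.15),
}

def adhesion_coefficient(road_type):
    """Midpoint of the upper and lower limits of the type's adhesion range."""
    low, high = ADHESION_RANGE[road_type]
    return (low + high) / 2.0
```

In the full pipeline, `road_type` would be the output of the recognition network, so the camera image alone yields an adhesion coefficient estimate for the road ahead.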
Compared with the prior art, the invention has the beneficial effects that:
the invention discloses a road adhesion coefficient acquisition method based on images, which can provide road adhesion coefficient information for the development of an intelligent driving auxiliary system and an unmanned driving system; the method realizes the acquisition of the adhesion coefficient by acquiring the front road image, and can realize the advance acquisition of the road adhesion information; the method designs a method for extracting the network and identifying the network serial of the road surface type based on the road surface image area, so that the real-time and rapid acquisition of the front road surface attachment information can be realized.
Drawings
FIG. 1 is a simplified flow chart of a method for obtaining an image-based road surface adhesion coefficient of an urban road in the method;
FIG. 2 is a network structure diagram of the road surface image area extraction in the method;
FIG. 3 is a diagram of the bottleneck module in the present method;
FIG. 4 is a diagram of a road surface type recognition network in the present method;
Detailed Description
The invention provides an image-based urban road adhesion coefficient acquisition method, aimed at solving the problem of acquiring the road adhesion coefficient information required for the development of intelligent vehicle driving assistance and unmanned driving technology.
The invention relates to an image-based urban road pavement adhesion coefficient acquisition method, which comprises the following specific steps:
step one, establishing a road surface image information base
Obtaining the road adhesion coefficient from images presupposes a well-constructed road surface image information base, with sample images properly processed so that the feature information they contain is fully exploited.
First, road surface image data are acquired. Factors that degrade imaging quality must be compensated for during acquisition. The image acquisition equipment is not limited to any single device or type, but must meet the following performance and installation requirements: a video resolution of 1280 × 720 or above, a video frame rate of 30 frames per second or above, a maximum effective shooting distance of more than 70 meters, and wide dynamic range capability to adapt quickly to changes in light intensity. The device should be mounted so that the road surface occupies more than half of the captured image area.
Based on the conditions of urban road surfaces under different weather conditions, and through comparative analysis combined with the urban road surface types found in China, the road surface types to be identified are defined as five categories: asphalt, cement, loose snow, compacted snow, and ice sheet. The video files from the data acquisition process are decomposed into pictures at intervals of 10 frames; the pictures are sorted into the five categories according to the road surface characteristics given in GB/T 920-2002 "Road surface grade and surface layer type code" and surveys of pavement adhesion coefficients in cold regions. Images of the same road surface type are stored together in the same folder, completing the establishment of the road surface image information base.
step two, establishing a pavement image data set
The originally collected images still contain a large number of non-road-surface elements, which seriously degrades the accuracy of adhesion coefficient acquisition. The image-based method therefore needs image samples together with pixel-level labels of the road surface region, so each image in the road surface image information base collected in step one must be annotated with its road surface extent. Labelme, run in the Anaconda software environment, is chosen as the annotation tool and is used to manually label every image in the sample set one by one: during annotation the create polygon button is clicked, points are drawn along the boundary of the road surface region in the image so that the annotation polygon completely covers the road surface region, and the annotation category is named road. When annotation is finished, a json file is generated and converted with the bundled json_to_dataset.py script in Anaconda, producing a json folder containing five files: img.png, label.png, label_viz.png, info.yaml, and label_names. Only the label.png image file needs to be converted, yielding an 8-bit grayscale label image. Applying this annotation process with Labelme in Anaconda to each picture in the road surface image information base in turn yields a set of grayscale label images, and this set of grayscale label images is the road surface image data set.
step three, training of road surface image area extraction network
The road surface image area extraction network is implemented in the Anaconda environment as a semantic segmentation network whose structure is shown in FIG. 2; the whole network is an encoder-decoder structure, designed as follows:
3.1, first, the image to be recognized is scaled to a picture of size 769 × 769 × 3 and used as the input of the semantic segmentation network;
3.2, the first layer is a convolutional layer with 32 filters of size 3 × 3, stride 2, and padding 1; after batch normalization and a ReLU activation function, the convolutional layer output feature map has size 385 × 385 × 32;
3.3, the convolutional layer output feature map is fed into a 3 × 3 max pooling layer with stride 2, giving a pooling layer output feature map of size 193 × 193 × 32;
3.4, taking the output characteristic diagram of the pooling layer as the input of the bottleneck module structure, wherein the bottleneck module structure is shown in fig. 3, and the detailed implementation process of the bottleneck module structure is as follows: firstly, copying input characteristic channel to increase characteristic dimension, one branch directly passing through depth convolution with size of 3X 3 and step length of 2, another branch equally dividing into two sub-branches by channel splitting, one sub-branch is subjected to 3 x 3 depth convolution and 1 x 1 point-by-point convolution, the other sub-branch is directly subjected to a characteristic multiplexing mode, then the two sub-branches are connected through channel splicing, the channel arrangement sequence is disturbed through channel cleaning, after the same depth convolution with the size of 3 multiplied by 3 and the step length of 2, the data are spliced with a copy channel, finally the information exchange between groups is realized through the point-by-point convolution with the size of 1 multiplied by 1, therefore, the size of the output characteristic diagram of the whole bottleneck module structure is reduced by half, the number of channels is doubled, and the size of the output characteristic diagram passing through the bottleneck module structure once is 97 multiplied by 64;
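The channel split and channel shuffle operations inside the bottleneck module can be illustrated independently of any deep learning framework. The following sketch uses hypothetical helper names and represents channels as a plain list of labels; it shows how the split into two sub-branches and a two-group shuffle rearrange the channel order:

```python
def channel_split(channels):
    """Split a channel list into two equal halves (the two sub-branches)."""
    mid = len(channels) // 2
    return channels[:mid], channels[mid:]

def channel_shuffle(channels, groups=2):
    """Interleave channels across groups, as done after channel concatenation
    to perturb the channel ordering and let information cross the groups."""
    n = len(channels) // groups
    grouped = [channels[g * n:(g + 1) * n] for g in range(groups)]
    # read column by column: equivalent to reshape(groups, n) -> transpose -> flatten
    return [grouped[g][i] for i in range(n) for g in range(groups)]

chs = list(range(8))                 # 8 channels labelled 0..7
first, second = channel_split(chs)   # -> [0, 1, 2, 3] and [4, 5, 6, 7]
mixed = channel_shuffle(chs)         # -> [0, 4, 1, 5, 2, 6, 3, 7]
```

After the shuffle, adjacent channels come from different branches, which is what allows the following grouped convolutions to mix information across the groups.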
3.5, the output of process 3.4 is taken as input and passed through the bottleneck module again, giving an output feature map of size 49 × 49 × 128 after two bottleneck modules; that result is taken as input and passed through the bottleneck module once more, giving an output feature map of size 25 × 25 × 256 after three bottleneck modules, roughly 32 times smaller than the original image; this whole part serves as the encoder of the semantic segmentation network;

3.6, the decoder adopts a skip structure: the output feature map of the third bottleneck module from process 3.5 is upsampled by a factor of 2 using bilinear interpolation to obtain a feature map of size 49 × 49 × 256, which is added pixel by pixel to the output feature map of the second bottleneck module from process 3.5; in this process the output feature channels of the second bottleneck module must be copied so that the result still has 256 output channels;

3.7, the result of process 3.6 is upsampled by a factor of 2 with bilinear interpolation again to obtain a feature map of size 97 × 97 × 256, which is added pixel by pixel to the output feature map of the first bottleneck module from process 3.4;

3.8, the result of process 3.7 is passed through a 1 × 1 convolutional layer to convert the number of output channels into the number of semantic categories, and a Dropout layer is set to reduce overfitting; finally, 8× upsampling yields a feature map of the same size as the original image, and an Argmax function assigns each pixel the semantic category with the maximum probability, completing the whole semantic segmentation network;
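The feature map sizes quoted in processes 3.1-3.5 (and in the recognition network of step four) follow from the standard output-size formula for strided convolution and pooling. A minimal check, assuming a padding of 1 in the max pooling layer as well, since that is the assumption that reproduces the stated sizes:

```python
def conv_out(size, kernel=3, stride=2, padding=1):
    """Output size of a convolution/pooling layer: floor((n + 2p - k) / s) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

def encoder_sizes(input_size, n_bottlenecks=3):
    """Spatial sizes after the first conv, the max pooling layer
    and each of the three bottleneck modules (each halves the size)."""
    sizes = [conv_out(input_size)]      # first 3x3, stride-2 convolution
    sizes.append(conv_out(sizes[-1]))   # 3x3, stride-2 max pooling
    for _ in range(n_bottlenecks):
        sizes.append(conv_out(sizes[-1]))
    return sizes

seg = encoder_sizes(769)   # segmentation network: [385, 193, 97, 49, 25]
cls = encoder_sizes(224)   # recognition network:  [112, 56, 28, 14, 7]
```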
The road surface image data set established in step two is randomly shuffled, with 80% of the sample pictures selected as the training set and 20% as the validation set; during semantic segmentation network training, the tensor of each training picture read in is randomly scaled by a factor between 0.5 and 2.0 in steps of 0.25, randomly cropped to 769 × 769 pixels and randomly flipped left-right to achieve data augmentation and improve the adaptability of the segmentation network, and pixel values are normalized from 0-255 to 0-1;

A Poly learning rate schedule is selected when training the semantic segmentation network; the learning rate decay expression is formula (1), with an initial learning rate of 0.001, training iteration step iters, maximum training step max_iter set to 20K steps, and power set to 0.9; the Adam optimization algorithm is used, dynamically adjusting the learning rate of each parameter with first- and second-moment estimates of the gradient; the batch size is set to 16 according to the performance of the computer hardware, the model parameters are saved every 10-30 min, and the validation set is used to evaluate the performance of the network;
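The augmentation pipeline just described amounts to sampling a few random parameters per picture. The helper below is an illustrative sketch only: the function names and the pad-up-to-crop-size behaviour for small scales are assumptions, not part of the specification:

```python
import random

SCALES = [0.5 + 0.25 * i for i in range(7)]   # 0.5, 0.75, ..., 2.0 in steps of 0.25

def sample_augmentation(h, w, crop=769, rng=None):
    """Sample one set of augmentation parameters for a training picture:
    a random scale factor, a random 769x769 crop origin and a flip flag."""
    rng = rng or random.Random()
    s = rng.choice(SCALES)
    sh = max(crop, round(h * s))        # scaled height (padded up to the crop size)
    sw = max(crop, round(w * s))
    top = rng.randint(0, sh - crop)     # random crop position
    left = rng.randint(0, sw - crop)
    flip = rng.random() < 0.5           # random left-right flip
    return s, (top, left), flip

def normalize(v):
    """Map a pixel value from the 0-255 range into 0-1."""
    return v / 255.0
```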
lr = 0.001 × (1 − iters / max_iter)^power    (1)
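A direct implementation of the Poly decay rule of formula (1), using the hyperparameters stated above:

```python
def poly_lr(step, base_lr=0.001, max_iter=20000, power=0.9):
    """Poly learning-rate decay: base_lr * (1 - step / max_iter) ** power."""
    return base_lr * (1.0 - step / max_iter) ** power

# the rate starts at the initial value and decays smoothly to zero
start, mid, end = poly_lr(0), poly_lr(10000), poly_lr(20000)
```

In practice this schedule would be attached to the Adam optimizer and queried once per training step.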
After network training is finished, a suitable semantic segmentation evaluation index must be selected to evaluate the performance of the model; before that, the confusion matrix is introduced, as shown in Table 1: each row of the two-class confusion matrix represents a predicted class, each column represents the true class of the data, and each numerical entry is the number of samples predicted as a given class;
TABLE 1 two-class confusion matrix schematic
                        True: positive         True: negative
Predicted: positive     TP (true positive)     FP (false positive)
Predicted: negative     FN (false negative)    TN (true negative)
The evaluation index of the semantic segmentation network is the mean intersection over union, MIoU, which takes the ratio of the intersection to the union of each class's prediction and ground truth, then sums over the classes and averages, as shown in formula (2):
MIoU = (1 / (k + 1)) × Σ_{i=0}^{k} TP_i / (TP_i + FP_i + FN_i)    (2)

where k + 1 is the number of classes and TP_i, FP_i and FN_i are the true positives, false positives and false negatives of class i taken from the confusion matrix;
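Formula (2) can be computed directly from a square confusion matrix whose rows are predicted classes and whose columns are true classes, as laid out in Table 1. A minimal implementation (the example matrix is illustrative):

```python
def miou(confusion):
    """Mean intersection over union from a square confusion matrix
    (rows: predicted class, columns: true class)."""
    n = len(confusion)
    ious = []
    for i in range(n):
        tp = confusion[i][i]
        fp = sum(confusion[i][j] for j in range(n)) - tp  # predicted i, actually other
        fn = sum(confusion[j][i] for j in range(n)) - tp  # actually i, predicted other
        denom = tp + fp + fn
        ious.append(tp / denom if denom else 0.0)
    return sum(ious) / n

cm = [[40, 10],
      [ 5, 45]]
# per-class IoU: 40 / (40 + 10 + 5) and 45 / (45 + 5 + 10), then averaged
score = miou(cm)
```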
When training reaches an MIoU above 60%, training can be considered finished; the trained model and model parameters are saved, yielding the road surface image area extraction network, and inputting an actually acquired original image into this network completes the extraction of the road surface area in the image;
step four, training the road surface recognition network
The network of step three completes the extraction of the road surface area from the real-time image information, and recognition of the road surface type is carried out on the basis of this extraction result;

After the image road surface data set is processed by the semantic segmentation network, an image set containing only the road surface area is obtained and used as the final data set for training and evaluating the road surface type recognition network; the road surface type recognition network is therefore built in the Anaconda environment, as shown in FIG. 4, with the specific structure designed as follows:

4.1, first, the image to be classified is scaled to a picture of size 224 × 224 × 3 and used as the input of the convolutional neural network;

4.2, the first layer is set as a convolutional layer with 32 filters of size 3 × 3, stride 2 and padding 1; after batch normalization and a ReLU activation function, the output feature map of this convolutional layer has size 112 × 112 × 32;

4.3, the convolutional layer output feature map is fed into a 3 × 3 max pooling layer with stride 2, and the resulting pooling layer output feature map has size 56 × 56 × 32;
4.4, the pooling layer output feature map is used as the input of the bottleneck module, implemented in detail as follows: the input feature channels are first copied to increase the feature dimension; one branch passes directly through a 3 × 3 depthwise convolution with stride 2, while the other branch is divided equally into two sub-branches by a channel split; one sub-branch undergoes a 3 × 3 depthwise convolution followed by a 1 × 1 pointwise convolution, and the other sub-branch is passed through directly for feature reuse; the two sub-branches are then joined by channel concatenation, and a channel shuffle perturbs the channel ordering; after the same 3 × 3, stride-2 depthwise convolution, the result is concatenated with the copied channels, and finally a 1 × 1 pointwise convolution exchanges information between the groups; the whole bottleneck module therefore halves the spatial size of the feature map and doubles the number of channels, so the output feature map after one pass through the bottleneck module has size 28 × 28 × 64;

4.5, the output of process 4.4 is taken as input and passed through the bottleneck module again, giving an output feature map of size 14 × 14 × 128 after two bottleneck modules; that result is taken as input and passed through the bottleneck module once more, giving an output feature map of size 7 × 7 × 256 after three bottleneck modules;

4.6, a 7 × 7 global average pooling layer converts the output of process 4.5 into a feature map of size 1 × 1 × 256;

4.7, a fully connected layer followed by a Softmax function serves as the network classifier, converting the output feature map of process 4.6 into probabilities of belonging to each category, and an Argmax function determines the network classification result from the maximum probability;

Next, the images containing only the road surface area obtained in step three are made into a data set for training the road surface type recognition network: the different types of road surface images are stored separately under the folder names established in step one; the image data in the folders are read in turn, and 5-bit 0/1 label information and road surface adhesion coefficient information are added with reference to Table 2; the images are resized to 224 × 224 pixels by bilinear interpolation, the pixel values are normalized from 0-255 to 0-1, the road surface image data set is shuffled, 20% of the images of each class are randomly extracted as the validation set, and the rest form the training set;
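The classifier head of process 4.7, a Softmax over the fully connected layer's outputs followed by an Argmax, can be sketched in plain Python (the logit values below are illustrative):

```python
import math

def softmax(logits):
    """Convert a logit vector into class probabilities (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def classify(logits):
    """Argmax over the softmax output: index of the predicted road type."""
    probs = softmax(logits)
    return max(range(len(probs)), key=probs.__getitem__), probs

pred, probs = classify([2.0, 0.5, 0.1, -1.0, 0.3])   # 5 road-type logits
```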
TABLE 2 pavement image Category labels
(Table 2, rendered as an image in the original, lists for each of the five road surface types its 5-bit 0/1 class label and the corresponding road surface adhesion coefficient range.)
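The 5-bit 0/1 labels referred to in Table 2 are one-hot vectors over the five road surface types defined in step one. A minimal sketch, where the class ordering is an assumption since the specification fixes it only through Table 2:

```python
# Class ordering assumed for illustration; the authoritative mapping is Table 2.
ROAD_TYPES = ["asphalt", "cement", "loose_snow", "compacted_snow", "ice"]

def one_hot(road_type):
    """5-bit 0/1 label vector for a road-surface class."""
    vec = [0] * len(ROAD_TYPES)
    vec[ROAD_TYPES.index(road_type)] = 1
    return vec
```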
After the training set and validation set of the road surface type recognition network are prepared, training and evaluation of the network model begin: the batch size is set to 64, a cross-entropy loss function is selected, and the Adam optimization algorithm is used with a base learning rate of 0.0001; when training reaches an MIoU above 80%, training can be considered finished; the model and training results are saved per training epoch, yielding the trained road surface type recognition network;
step five, obtaining the road surface adhesion coefficient information
The road surface adhesion coefficient information is acquired as follows: while the vehicle is driving, a camera captures images of the road ahead; each image captured by the camera is passed to the road surface image area extraction network to obtain the road surface area, and the image containing only the road surface area is passed to the road surface type recognition network for classification and recognition; after the road surface type is recognized, the adhesion coefficient range of the current road surface is determined, taking the corresponding vehicle speed into account, with reference to Table 2, and the midpoint of the upper and lower limits of that range is taken as the current road surface adhesion coefficient, completing the acquisition of the road surface adhesion coefficient information.
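The final step — looking up the adhesion coefficient range for the recognized type and taking the midpoint — can be sketched as follows. The numeric ranges below are placeholders for illustration, not the values of Table 2, which may also vary with vehicle speed:

```python
# Placeholder (lower, upper) adhesion-coefficient limits per road type;
# the actual values come from Table 2 of the specification.
ADHESION_RANGES = {
    "asphalt":        (0.6, 0.9),
    "cement":         (0.6, 0.8),
    "loose_snow":     (0.2, 0.4),
    "compacted_snow": (0.15, 0.35),
    "ice":            (0.05, 0.15),
}

def adhesion_coefficient(road_type):
    """Midpoint of the upper and lower range limits, taken as the
    current road surface adhesion coefficient."""
    lo, hi = ADHESION_RANGES[road_type]
    return (lo + hi) / 2.0
```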

Claims (1)

1. An image-based urban road pavement adhesion coefficient acquisition method is characterized by comprising the following specific steps:
step one, establishing a road surface image information base
The precondition for obtaining the road surface adhesion coefficient from images is the establishment of a complete road surface image information base, with the sample images properly processed so that the feature information in the images is fully captured;

First, road surface image data are collected; factors adverse to the imaging effect must be compensated for during road surface image collection; the image acquisition equipment is not limited to one particular device or type, but the requirements on equipment performance and installation position are as follows: video resolution of 1280 × 720 or above, video frame rate of 30 frames per second or above, maximum effective shooting distance of more than 70 meters, and wide dynamic range capability to adapt quickly to changes in light intensity; the installation position of the equipment must ensure that the road surface captured in the acquired image information occupies more than half of the entire image area;

According to the conditions of urban road surfaces under different weather conditions, through comparative analysis and in combination with the urban road surface types found in China, the road surface types to be recognized are defined as 5 types: asphalt, cement, loose snow, compacted snow and ice; the video files from the data collection process are decomposed into pictures at intervals of 10 frames, and the pictures are sorted into the 5 classes according to the road surface characteristics given in GB/T 920-2002 "Road surface grade and surface layer type codes" and surveys of road surface adhesion coefficients in cold regions; road surface images of the same class are stored under the same folder, completing the establishment of the road surface image information base;
step two, establishing a pavement image data set
The original acquired images still contain a large number of non-road-surface elements, which seriously degrades the accuracy of road surface adhesion coefficient acquisition; the image-based road surface adhesion coefficient acquisition method therefore requires image samples together with pixel-level labels of the region corresponding to the road surface, so the images in the road surface image information base acquired in step one must be annotated with the road surface range; Labelme, running in the software environment Anaconda, is selected as the labeling tool, and each image in the sample set is labeled manually one by one: during labeling, the create polygon button is clicked and points are drawn along the boundary of the road surface region in the image so that the labeling polygon completely covers the road surface region, and the labeling category is named road; after labeling is finished, a json file is generated and converted using the built-in json_to_dataset.py script in the Anaconda environment to obtain a json folder containing five files named img.png, label.png, label_viz.png, info.yaml and label_names.txt; only the label.png picture needs to be converted to obtain an 8-bit grayscale label image; this labeling process is applied with Labelme in Anaconda, one by one, to the pictures in the road surface image information base to obtain their grayscale label image set, and this grayscale label image set constitutes the road surface image data set;
step three, establishing and training a road surface image area extraction network
The road surface image area extraction network is implemented in the Anaconda environment as a semantic segmentation network; the whole semantic segmentation network is an encoder-decoder structure, designed as follows:

3.1, first, the image to be recognized is scaled to a picture of size 769 × 769 × 3 and used as the input of the semantic segmentation network;

3.2, the first layer is set as a convolutional layer with 32 filters of size 3 × 3, stride 2 and padding 1; after batch normalization and a ReLU activation function, the output feature map of this convolutional layer has size 385 × 385 × 32;

3.3, the convolutional layer output feature map is fed into a 3 × 3 max pooling layer with stride 2, and the resulting pooling layer output feature map has size 193 × 193 × 32;

3.4, the pooling layer output feature map is used as the input of the bottleneck module, implemented in detail as follows: the input feature channels are first copied to increase the feature dimension; one branch passes directly through a 3 × 3 depthwise convolution with stride 2, while the other branch is divided equally into two sub-branches by a channel split; one sub-branch undergoes a 3 × 3 depthwise convolution followed by a 1 × 1 pointwise convolution, and the other sub-branch is passed through directly for feature reuse; the two sub-branches are then joined by channel concatenation, and a channel shuffle perturbs the channel ordering; after the same 3 × 3, stride-2 depthwise convolution, the result is concatenated with the copied channels, and finally a 1 × 1 pointwise convolution exchanges information between the groups; the whole bottleneck module therefore halves the spatial size of the feature map and doubles the number of channels, so the output feature map after one pass through the bottleneck module has size 97 × 97 × 64;
3.5, the output of process 3.4 is taken as input and passed through the bottleneck module again, giving an output feature map of size 49 × 49 × 128 after two bottleneck modules; that result is taken as input and passed through the bottleneck module once more, giving an output feature map of size 25 × 25 × 256 after three bottleneck modules, roughly 32 times smaller than the original image; this whole part serves as the encoder of the semantic segmentation network;

3.6, the decoder adopts a skip structure: the output feature map of the third bottleneck module from process 3.5 is upsampled by a factor of 2 using bilinear interpolation to obtain a feature map of size 49 × 49 × 256, which is added pixel by pixel to the output feature map of the second bottleneck module from process 3.5; in this process the output feature channels of the second bottleneck module must be copied so that the result still has 256 output channels;

3.7, the result of process 3.6 is upsampled by a factor of 2 with bilinear interpolation again to obtain a feature map of size 97 × 97 × 256, which is added pixel by pixel to the output feature map of the first bottleneck module from process 3.4;

3.8, the result of process 3.7 is passed through a 1 × 1 convolutional layer to convert the number of output channels into the number of semantic categories, and a Dropout layer is set to reduce overfitting; finally, 8× upsampling yields a feature map of the same size as the original image, and an Argmax function assigns each pixel the semantic category with the maximum probability, completing the whole semantic segmentation network;
The road surface image data set established in step two is randomly shuffled, with 80% of the sample pictures selected as the training set and 20% as the validation set; during semantic segmentation network training, the tensor of each training picture read in is randomly scaled by a factor between 0.5 and 2.0 in steps of 0.25, randomly cropped to 769 × 769 pixels and randomly flipped left-right to achieve data augmentation and improve the adaptability of the segmentation network, and pixel values are normalized from 0-255 to 0-1;

A Poly learning rate schedule is selected when training the semantic segmentation network; the learning rate decay expression is formula (1), with an initial learning rate of 0.001, training iteration step iters, maximum training step max_iter set to 20K steps, and power set to 0.9; the Adam optimization algorithm is used, dynamically adjusting the learning rate of each parameter with first- and second-moment estimates of the gradient; the batch size is set to 16 according to the performance of the computer hardware, the model parameters are saved every 10-30 min, and the validation set is used to evaluate the performance of the network;
lr = 0.001 × (1 − iters / max_iter)^power    (1)
After network training is finished, a suitable semantic segmentation evaluation index must be selected to evaluate the performance of the model; before that, the confusion matrix is introduced, as shown in Table 1: each row of the two-class confusion matrix represents a predicted class, each column represents the true class of the data, and each numerical entry is the number of samples predicted as a given class;
TABLE 1 two-class confusion matrix schematic
                        True: positive         True: negative
Predicted: positive     TP (true positive)     FP (false positive)
Predicted: negative     FN (false negative)    TN (true negative)
The evaluation index of the semantic segmentation network is the mean intersection over union, MIoU, which takes the ratio of the intersection to the union of each class's prediction and ground truth, then sums over the classes and averages, as shown in formula (2):
MIoU = (1 / (k + 1)) × Σ_{i=0}^{k} TP_i / (TP_i + FP_i + FN_i)    (2)

where k + 1 is the number of classes and TP_i, FP_i and FN_i are the true positives, false positives and false negatives of class i taken from the confusion matrix;
When training reaches an MIoU above 60%, training can be considered finished; the trained model and model parameters are saved, yielding the road surface image area extraction network, and inputting an actually acquired original image into this network completes the extraction of the road surface area in the image;
step four, establishing and training a road surface type recognition network
The network of step three completes the extraction of the road surface area from the real-time image information, and recognition of the road surface type is carried out on the basis of this extraction result;

After the image road surface data set is processed by the semantic segmentation network, an image set containing only the road surface area is obtained and used as the final data set for training and evaluating the road surface type recognition network; the road surface type recognition network is therefore built in the Anaconda environment, with the specific structure designed as follows:

4.1, first, the image to be classified is scaled to a picture of size 224 × 224 × 3 and used as the input of the convolutional neural network;

4.2, the first layer is set as a convolutional layer with 32 filters of size 3 × 3, stride 2 and padding 1; after batch normalization and a ReLU activation function, the output feature map of this convolutional layer has size 112 × 112 × 32;

4.3, the convolutional layer output feature map is fed into a 3 × 3 max pooling layer with stride 2, and the resulting pooling layer output feature map has size 56 × 56 × 32;
4.4, the pooling layer output feature map is used as the input of the bottleneck module, implemented in detail as follows: the input feature channels are first copied to increase the feature dimension; one branch passes directly through a 3 × 3 depthwise convolution with stride 2, while the other branch is divided equally into two sub-branches by a channel split; one sub-branch undergoes a 3 × 3 depthwise convolution followed by a 1 × 1 pointwise convolution, and the other sub-branch is passed through directly for feature reuse; the two sub-branches are then joined by channel concatenation, and a channel shuffle perturbs the channel ordering; after the same 3 × 3, stride-2 depthwise convolution, the result is concatenated with the copied channels, and finally a 1 × 1 pointwise convolution exchanges information between the groups; the whole bottleneck module therefore halves the spatial size of the feature map and doubles the number of channels, so the output feature map after one pass through the bottleneck module has size 28 × 28 × 64;

4.5, the output of process 4.4 is taken as input and passed through the bottleneck module again, giving an output feature map of size 14 × 14 × 128 after two bottleneck modules; that result is taken as input and passed through the bottleneck module once more, giving an output feature map of size 7 × 7 × 256 after three bottleneck modules;

4.6, a 7 × 7 global average pooling layer converts the output of process 4.5 into a feature map of size 1 × 1 × 256;

4.7, a fully connected layer followed by a Softmax function serves as the network classifier, converting the output feature map of process 4.6 into probabilities of belonging to each category, and an Argmax function determines the network classification result from the maximum probability;

Next, the images containing only the road surface area obtained in step three are made into a data set for training the road surface type recognition network: the different types of road surface images are stored separately under the folder names established in step one; the image data in the folders are read in turn, and 5-bit 0/1 label information and road surface adhesion coefficient information are added with reference to Table 2; the images are resized to 224 × 224 pixels by bilinear interpolation, the pixel values are normalized from 0-255 to 0-1, the road surface image data set is shuffled, 20% of the images of each class are randomly extracted as the validation set, and the rest form the training set;
TABLE 2 pavement image Category labels
(Table 2, rendered as an image in the original, lists for each of the five road surface types its 5-bit 0/1 class label and the corresponding road surface adhesion coefficient range.)
After the training set and validation set of the road surface type recognition network are prepared, training and evaluation of the network model begin: the batch size is set to 64, a cross-entropy loss function is selected, and the Adam optimization algorithm is used with a base learning rate of 0.0001; when training reaches an MIoU above 80%, training can be considered finished; the model and training results are saved per training epoch, yielding the trained road surface type recognition network;
step five, obtaining the road surface adhesion coefficient information
The road surface adhesion coefficient information is acquired as follows: while the vehicle is driving, a camera captures images of the road ahead; each image captured by the camera is passed to the road surface image area extraction network to obtain the road surface area, and the image containing only the road surface area is passed to the road surface type recognition network for classification and recognition; after the road surface type is recognized, the adhesion coefficient range of the current road surface is determined, taking the corresponding vehicle speed into account, with reference to Table 2, and the midpoint of the upper and lower limits of that range is taken as the current road surface adhesion coefficient, completing the acquisition of the road surface adhesion coefficient information.
CN202110683924.6A 2021-06-21 2021-06-21 An Image-Based Method for Obtaining the Adhesion Coefficient of Urban Road Pavement Active CN113379711B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110683924.6A CN113379711B (en) 2021-06-21 2021-06-21 An Image-Based Method for Obtaining the Adhesion Coefficient of Urban Road Pavement


Publications (2)

Publication Number Publication Date
CN113379711A true CN113379711A (en) 2021-09-10
CN113379711B CN113379711B (en) 2022-07-08

Family

ID=77577937


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114170584A (en) * 2021-12-15 2022-03-11 北京中科慧眼科技有限公司 Driving road classification and identification method, system and intelligent terminal based on assisted driving
CN114648750A (en) * 2022-03-29 2022-06-21 国交空间信息技术(北京)有限公司 Image-based pavement material type identification method and device
CN114819001A (en) * 2022-06-30 2022-07-29 交通运输部公路科学研究所 Tunnel pavement slippery state evaluation method based on mobile detection equipment
CN116653889A (en) * 2023-06-20 2023-08-29 中国第一汽车股份有限公司 Vehicle parking brake control method, device, device and medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE502007002821D1 (en) * 2007-08-10 2010-03-25 Sick Ag Recording of equalized images of moving objects with uniform resolution by line sensor
CN202351162U (en) * 2011-11-01 2012-07-25 Chang'an University Road pavement adhesion coefficient detection device
CN107491736A (en) * 2017-07-20 2017-12-19 Chongqing University of Posts and Telecommunications Road adhesion coefficient identification method based on a convolutional neural network
CN109460738A (en) * 2018-11-14 2019-03-12 Jilin University Road surface type evaluation method using a deep convolutional neural network with a loss-free function
CN109455178A (en) * 2018-11-13 2019-03-12 Jilin University Active driving control system and method for road vehicles based on binocular vision
CN110378416A (en) * 2019-07-19 2019-10-25 Beijing Zhongke Yuandongli Technology Co., Ltd. Vision-based road adhesion coefficient estimation method
CN111688706A (en) * 2020-05-26 2020-09-22 Tongji University Interactive road adhesion coefficient estimation method based on vision and dynamics
CN111723849A (en) * 2020-05-26 2020-09-29 Tongji University Method and system for online estimation of the road adhesion coefficient based on a vehicle-mounted camera
CN112706728A (en) * 2020-12-30 2021-04-27 Jilin University Automatic emergency braking control method based on vision-based road adhesion coefficient estimation

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
CHENG WANG: "A federated filter design of electronic stability control for electric-wheel vehicle", 2015 8th International Congress on Image and Signal Processing (CISP) *
LIU Hui: "Road Adhesion Coefficient Estimation for Intelligent Vehicles by Fusing Vision and Dynamics Information", China Masters' Theses Full-text Database, Engineering Science and Technology II *
LIU Bainan et al.: "Design of a Sliding-Mode Variable-Structure Controller for Automotive Anti-lock Braking Systems", Journal of Jilin University (Information Science Edition) *
WANG Ping et al.: "Intelligent Driving Simulation Platform for Highway Vehicles", Journal of System Simulation *
GUAN Xin et al.: "Adaptive Threshold Algorithm Based on Contrast-Region Homogeneity Histogram Analysis of Road Images", Journal of Jilin University (Engineering and Technology Edition) *

Also Published As

Publication number Publication date
CN113379711B (en) 2022-07-08

Similar Documents

Publication Publication Date Title
CN113379711B (en) An Image-Based Method for Obtaining the Adhesion Coefficient of Urban Road Pavement
CN109993082B (en) Convolutional neural network road scene classification and road segmentation method
CN112183203B (en) Real-time traffic sign detection method based on multi-scale pixel feature fusion
CN113506300B Image semantic segmentation method and system for complex road scenes in rainy weather
CN111882620B (en) Road drivable area segmentation method based on multi-scale information
CN112508977A (en) Deep learning-based semantic segmentation method for automatic driving scene
CN112990065B (en) Vehicle classification detection method based on optimized YOLOv5 model
CN114120272B (en) A multi-supervised intelligent lane semantic segmentation method integrating edge detection
CN108830254B (en) A fine-grained vehicle detection and recognition method based on data balance strategy and dense attention network
CN111008639B (en) License plate character recognition method based on attention mechanism
CN113688836A (en) Real-time road image semantic segmentation method and system based on deep learning
CN113205107A Vehicle type recognition method based on an improved EfficientNet
CN114913498A (en) Parallel multi-scale feature aggregation lane line detection method based on key point estimation
CN116824542B (en) Light-weight foggy-day vehicle detection method based on deep learning
CN113505640B (en) A small-scale pedestrian detection method based on multi-scale feature fusion
CN116630702A (en) Pavement adhesion coefficient prediction method based on semantic segmentation network
CN112766056A (en) Method and device for detecting lane line in low-light environment based on deep neural network
CN113255574A (en) Urban street semantic segmentation method and automatic driving method
CN111881914B (en) License plate character segmentation method and system based on self-learning threshold
CN114581664B (en) Road scene segmentation method, device, electronic device and storage medium
CN112785610A (en) Lane line semantic segmentation method fusing low-level features
CN112634289A Fast drivable-area segmentation method based on asymmetric dilated (atrous) convolution
CN112132839B (en) Multi-scale rapid face segmentation method based on deep convolution cascade network
CN115496764A Semantic segmentation method for foggy images based on dense feature fusion
CN111931768A (en) Vehicle identification method and system capable of self-adapting to sample distribution

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant