CN113486748A - Method for predicting friction coefficient of automatic driving road surface, electronic device and medium - Google Patents


Info

Publication number
CN113486748A
CN113486748A (application CN202110718997.4A; granted as CN113486748B)
Authority
CN
China
Prior art keywords
road surface
road
friction coefficient
image
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110718997.4A
Other languages
Chinese (zh)
Other versions
CN113486748B (en)
Inventor
武妍
莫宇剑
刘飞麟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
Original Assignee
Tongji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University
Priority: CN202110718997.4A
Publication of CN113486748A
Application granted
Publication of CN113486748B
Legal status: Active

Classifications

    • G06F18/214 — Pattern recognition: generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F18/24 — Pattern recognition: classification techniques
    • G06F18/253 — Pattern recognition: fusion techniques of extracted features
    • G06N3/045 — Neural networks: combinations of networks


Abstract

The invention relates to a method for predicting the friction coefficient of a road surface for automatic driving, an electronic device, and a medium. Compared with the prior art, the method offers advantages including high prediction accuracy and good real-time performance.

Description

Method for predicting friction coefficient of automatic driving road surface, electronic device and medium
Technical Field
The present invention relates to the field of automatic driving, and in particular, to a method, an electronic device, and a medium for predicting a friction coefficient of an automatic driving road surface.
Background
Existing automatic driving perception systems focus on perceiving traffic participants, such as pedestrians and vehicles, and rarely perceive parameters such as the road surface friction coefficient. The friction coefficient of the road surface directly determines the braking distance of the vehicle. An autonomous vehicle needs to sense the road surface friction coefficient in real time and adjust the parameters of its decision and control layers accordingly to meet safety requirements. The friction coefficient depends not only on factors such as the road surface material and temperature, but also on the current road surface environment: under different conditions, such as wet, icy, or snow-covered surfaces, friction coefficients differ greatly. The autonomous vehicle therefore needs to estimate the friction coefficient of the road surface in real time according to the current road surface environment.
Conventional road surface friction coefficient estimation methods are based primarily on vehicle response and tire dynamics, directly calculating the friction coefficient from tire deformation, noise response, and vehicle slip ratio during braking. However, such methods can only calculate the friction coefficient of the area already traversed, and cannot predict the friction coefficient of the road ahead.
Vision-based road surface friction coefficient estimation has strong predictive capability, can serve as a basic perception module of an advanced autonomous driving system, and has broad application prospects. How to improve the prediction accuracy of the road surface friction coefficient with this technology has therefore become a technical problem to be solved.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a method for predicting the friction coefficient of an automatically driven road surface based on adaptive fusion of multi-level information, with high prediction accuracy and good real-time performance.
The purpose of the invention can be realized by the following technical scheme:
according to a first aspect of the present invention, there is provided a method for predicting the friction coefficient of an automatically driven road surface, the method comprising a road surface image acquisition process, a training and classification process based on the road surface image, and friction coefficient estimation according to the classification result;
the training classification process based on the road surface image comprises the following steps:
step one, extracting road image characteristics f by using a Convolutional Neural Network (CNN) model0
Step two, extracting fine-grained features f by using a cavity space pyramid pooling module ASPP in a branch of drivable road segmentation1
Step three, using a channel attention module CAM to acquire internal relevance c of characteristics among channels in a branch of road surface environment classification1And for adjusting the road image characteristics f0Obtaining the adjusted road image characteristic f2
Fourthly, using a position attention module PAM to acquire space dependency information p among fine-grained features of the segmentation branches1Passed to the classification branch and used to adjust the road image characteristics f0Obtaining the adjusted road image characteristic f3
Step five, adjusting the road image characteristics f2And f3Carrying out self-adaptive weighted fusion to obtain the characteristic f4
Step six, fusing the characteristics f4And features f of road images0Performing fusion to obtain the characteristic f5
And seventhly, classifying the road images by using the full connection layer.
As a preferred technical solution, the process of acquiring the road surface image includes:
fixing an RGB camera on an automatic driving vehicle, and calibrating and calculating internal parameters and external parameters of the camera;
and acquiring a road image by using the calibrated camera.
As a preferred technical solution, the training process based on the road surface image is multi-task training.
As a preferred technical solution, the ASPP module performs resampling using a plurality of sampling rates.
As a preferred technical solution, the adaptive weighted fusion formula in step five is:

f4 = W1·f2 + W2·f3

where W1 and W2 are trainable parameters whose specific values are obtained by training the neural network.
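The adaptive weighted fusion above is a scalar-weighted element-wise sum of two feature maps. A minimal NumPy sketch follows; since the trainable behavior of W1 and W2 lives in the network's optimizer, they are passed here as fixed illustrative values:

```python
import numpy as np

def adaptive_weighted_fusion(f2, f3, w1, w2):
    """Element-wise weighted sum of two feature maps of equal shape.

    In the patent, w1 and w2 are trainable scalars learned jointly with
    the network; here they are fixed values for illustration only.
    """
    assert f2.shape == f3.shape, "feature maps must share a shape"
    return w1 * f2 + w2 * f3

# Toy feature maps standing in for the CAM- and PAM-adjusted features.
f2 = np.ones((4, 4, 8))           # h x w x c
f3 = 2.0 * np.ones((4, 4, 8))
f4 = adaptive_weighted_fusion(f2, f3, w1=0.6, w2=0.4)
# every element is 0.6*1 + 0.4*2 = 1.4
```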
As a preferred technical solution, the classification result categories comprise Dry, Wet, Partly Snow, Melted Snow, Packed Snow, and Slush.
As a preferred technical solution, the estimated friction coefficient value is the median of the range corresponding to the category.
As a preferred technical solution, in the training stage, the road surface image is randomly flipped left and right, scaled, and rotated.
According to a second aspect of the invention, there is provided an electronic device comprising a memory having stored thereon a computer program and a processor implementing the method when executing the program.
According to a third aspect of the invention, a computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, carries out the method.
Compared with the prior art, the invention has the following advantages:
1) the spatial dependency information among the fine-grained features of the segmentation branch guides the adjustment of the road surface image features, giving them fine-grained global context information;
2) the features adjusted by the CAM and PAM modules are adaptively weighted and fused, selectively aggregating context information in the spatial and channel dimensions.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a flowchart of an algorithm of an embodiment of a method for predicting a friction coefficient of an automatically driven road surface based on adaptive fusion of multilevel information;
FIG. 3 is a network model diagram of an embodiment of a method for predicting a friction coefficient of an automatically driven road surface based on adaptive fusion of multilevel information;
fig. 4 is a flowchart of an algorithm of the PAM module in the example.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, shall fall within the scope of protection of the present invention.
Examples
In the vision-based road surface friction coefficient estimation task, effectively improving estimation accuracy requires eliminating redundant background information, extracting road surface features with fine-grained global context information, completing the classification of the road surface environment, and predicting the friction coefficient of the road surface.
The method provided by the invention mainly comprises the following steps: fix an RGB camera on the autonomous vehicle and calibrate and compute its intrinsic and extrinsic parameters; collect road surface images with the calibrated on-board RGB camera; extract features f0 of the road surface image using a CNN (convolutional neural network) model; extract fine-grained features f1 using the ASPP module in the drivable road surface segmentation branch; in the road environment classification branch, use the CAM module to obtain the inter-channel feature correlation c1 and adjust the road surface image features f0 to obtain adjusted features f2; use the PAM module to pass the spatial dependency information among the fine-grained features from the segmentation branch to the classification branch and adjust the road surface image features f0 to obtain adjusted features f3; adaptively weight and fuse the adjusted features f2 and f3 to obtain fused features f4; fuse f4 with the road surface image features f0 to obtain fused features f5; classify the road surface images with the fully connected layer; and look up the corresponding friction coefficient value according to the classification result.
The invention was tested on a self-built road image dataset comprising 4586 training images and 475 test images. The dataset is divided, according to the type of attachment on the road surface, into dry, wet, partly snow-covered, melted snow, packed snow, and slush (snow-water mixture) roads; the number of samples per category is shown in Table 1.
TABLE 1 Number of samples for the different road classes in the self-built dataset

Dataset  Dry   Wet   Partly Snow  Melted Snow  Packed Snow  Slush  Total
Train    2436  1466  108          355          85           136    4586
Test     100   100   50           100          50           75     475
Total    2536  1566  158          455          135          211    5061
In a specific embodiment, ResNet-50 is used as the backbone for extracting road image features. To verify the performance of the proposed method, ablation studies with different settings are reported in Table 2, where AS denotes the drivable road segmentation branch and Weight denotes the adaptive weighted fusion of the PAM and CAM outputs. As Table 2 shows, on the self-built road image dataset the proposed method improves Acc by 3.58% over the classical ResNet-50 neural network model.
TABLE 2 Ablation experiments of the proposed method (the per-row check marks for the AS, PAM, CAM, and Weight columns are not preserved in the source text; the first row is the plain ResNet-50 baseline and the last row the full model)

Backbone   AS  PAM  CAM  Weight  Acc%
ResNet-50                        82.95
ResNet-50                        84.63
ResNet-50                        85.47
ResNet-50                        85.26
ResNet-50                        84.84
ResNet-50                        86.53
To further verify the performance of the proposed method, Table 3 compares its classification performance on the self-built dataset with other CNN-based neural network models; as Table 3 shows, the Acc of the proposed method is superior to that of the other network models on the self-built dataset.
TABLE 3 Comparison of the classification performance of the proposed method with other neural network models
[Table 3 is rendered as an image in the source; its contents are not recoverable.]
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described in detail below with reference to fig. 2 and 3 of the present invention:
Step S201: fix an RGB camera on the autonomous vehicle, and calibrate and compute the intrinsic and extrinsic parameters of the RGB camera using Zhengyou Zhang's calibration method;
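Calibration recovers the camera's intrinsic matrix and extrinsic pose. As an illustration of what those parameters mean, the NumPy sketch below projects a 3-D world point to pixel coordinates with a pinhole model; the K, R, and t values are made-up stand-ins, not results of Zhang's method:

```python
import numpy as np

# Intrinsic matrix K (focal lengths fx, fy and principal point cx, cy)
# and extrinsics [R | t] are what camera calibration estimates.
# The values below are illustrative stand-ins.
K = np.array([[800.0,   0.0, 256.0],
              [  0.0, 800.0, 256.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                      # camera aligned with world axes
t = np.array([0.0, 0.0, 0.0])

def project(point_3d):
    """Project a 3-D world point to pixel coordinates (pinhole model)."""
    cam = R @ point_3d + t         # world frame -> camera frame
    uvw = K @ cam                  # camera frame -> homogeneous pixels
    return uvw[:2] / uvw[2]        # perspective divide

uv = project(np.array([0.0, 0.0, 10.0]))   # a point on the optical axis
# projects onto the principal point (256, 256)
```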
Step S202: collect road surface images with the calibrated vehicle-mounted RGB camera and build a dataset for training the CNN model, where the resolution of the road surface images is 512 × 512;
Step S203: use a ResNet-50 network model pre-trained on ImageNet as the backbone, reduce the number of channels with a 1 × 1 convolution, and extract the features f0 of the road surface image, where the 1 × 1 convolution has 512 channels;
Step S204: in a multi-task learning mode, use the ASPP module in the drivable road surface segmentation branch to resample the road surface image features f0 at multiple sampling rates (12, 24, and 36) to obtain multi-scale fine-grained features; add padding so that the features at different scales have the same output size, and fuse the different outputs by element-wise summation to obtain the fine-grained features f1; then attach an upsampling layer to complete the road surface segmentation task;
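The multi-rate resampling in S204 relies on dilated (atrous) convolution, with padding equal to the dilation rate so that every branch output keeps the input size and the branches can be summed element-wise. A naive NumPy sketch of this idea follows, using toy rates 2/4/6 on a small single-channel map instead of the patent's rates 12/24/36 on 512-channel features:

```python
import numpy as np

def dilated_conv3x3(x, kernel, rate):
    """Naive 3x3 dilated (atrous) convolution on a 2-D map.

    Padding equal to the dilation rate keeps the output the same size,
    mirroring how each ASPP branch is padded so that the multi-rate
    outputs can be fused by element-wise summation.
    """
    h, w = x.shape
    xp = np.pad(x, rate)           # zero-pad by the dilation rate
    out = np.zeros_like(x)
    for i in range(h):
        for j in range(w):
            for ki in range(3):
                for kj in range(3):
                    out[i, j] += kernel[ki, kj] * xp[i + ki * rate, j + kj * rate]
    return out

x = np.random.rand(16, 16)
k = np.full((3, 3), 1.0 / 9.0)     # toy averaging kernel
# Three parallel rates fused by element-wise summation, as in step S204.
f1 = sum(dilated_conv3x3(x, k, r) for r in (2, 4, 6))
```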
Step S205: in the road surface environment classification branch, use the CAM module to explicitly model the dependency between channels: perform dimension transformation and matrix multiplication on pairs of channel features to obtain an association strength matrix between channel pairs, normalize it with a softmax function to obtain an inter-channel attention map, and use this attention map to adjust the features f0, obtaining the adjusted road surface image features f2;
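The channel-attention computation in S205 (reshape, channel-pair matrix product, softmax normalization, re-weighting) can be sketched in NumPy as follows. This is an illustrative reading of the step under the stated shapes, not the patent's exact implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_attention(f0):
    """Channel attention in the spirit of the CAM step (S205).

    Builds a c x c association-strength matrix between channel pairs via
    reshaping and matrix multiplication, normalizes it with softmax, and
    uses the resulting attention map to re-weight the input features.
    """
    h, w, c = f0.shape
    X = f0.reshape(h * w, c).T        # c x (h*w), one row per channel
    energy = X @ X.T                  # c x c channel-pair associations
    attn = softmax(energy, axis=-1)   # inter-channel attention map
    out = attn @ X                    # re-weighted channel responses
    return out.T.reshape(h, w, c)

f0 = np.random.rand(8, 8, 16)
f2 = channel_attention(f0)
```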
Step S206: use the PAM module to generate a cross-task attention map from the correlation strength between the fine-grained features f1, computed by the ASPP module of the segmentation branch, and the road surface image features f0, and use it to adjust the road surface image features f0, obtaining the adjusted features f3;
Step S207: weight the road surface image features f2 and f3, adjusted by the CAM and PAM modules respectively, with weights W1 and W2, and fuse them by element-wise summation to obtain the fused features f4, computed as

f4 = W1·f2 + W2·f3

where W1 and W2 are trainable parameters whose specific values are obtained by training the neural network;
step S208: fusing the features f4And the features f of the road surface image0Fusing in a pixel-by-pixel summation mannerCombining to obtain the fused feature f5
Step S209: after the fused features f5, attach a convolution layer, a pooling layer, and a fully connected layer for classifying the road surface environment;
step S210: and searching a reference value of the corresponding friction coefficient in the table 4 according to the obtained road surface environment type to obtain an estimated value of the friction coefficient.
The collection of road surface images with the calibrated vehicle-mounted RGB camera in step S202, and the construction of a dataset for training the neural network model, are further described as follows:
images of the road surface are collected with the calibrated vehicle-mounted RGB camera under different road conditions, and the collected road surface images are labeled with different reference friction coefficient values according to the attachments on the road surface, such as snow.
For step S206, in which the PAM module generates a cross-task attention map from the correlation strength between the fine-grained features f1 computed by the ASPP module of the segmentation branch and the road surface image features f0, and uses it to adjust f0 to obtain the adjusted features f3, further explanation follows:
the PAM module passes the spatial dependency information among the fine-grained features obtained by the segmentation branch to the classification branch. The flow of the PAM module is shown in fig. 4, and its algorithm steps are as follows:
inputting: f. of0∈Rh*w*c,f1∈Rh*w*cH, w are height and width of the feature map, respectively, c is channel
Number of
And (3) outputting: f. of3∈Rh*w*c
1. Transformation f0Dimension c (h w) and marked as B e Rc*(h*w)(ii) a Transformation f1Dimension (h x w) c,
is marked as A epsilon R(h*w)*c
2.S=AB,S∈R(h*w)(h*w)
3.S′=softmax(S)
4.f3=S′f0
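The four PAM algorithm steps translate directly into NumPy. In this sketch, the final product S′·f0 is computed with f0 reshaped to (h·w) × c and the result reshaped back to h × w × c:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def position_attention(f0, f1):
    """Cross-task position attention following the 4-step PAM algorithm.

    f0: backbone features, f1: fine-grained ASPP features, both h x w x c.
    Returns f3: f0 re-weighted by spatial dependencies mined from f1.
    """
    h, w, c = f0.shape
    B = f0.reshape(h * w, c).T            # step 1: B in R^{c x (h*w)}
    A = f1.reshape(h * w, c)              # step 1: A in R^{(h*w) x c}
    S = A @ B                             # step 2: (h*w) x (h*w) affinities
    S_prime = softmax(S, axis=-1)         # step 3: row-wise softmax
    out = S_prime @ f0.reshape(h * w, c)  # step 4: f3 = S' f0
    return out.reshape(h, w, c)

f0 = np.random.rand(4, 4, 8)
f1 = np.random.rand(4, 4, 8)
f3 = position_attention(f0, f1)
```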
The step S210 of finding the corresponding friction coefficient value according to the classification result will be further explained as follows:
the correspondence between road image categories and friction coefficient values is shown in Table 4; the corresponding road friction coefficient estimate is then obtained from the model's classification result;
TABLE 4 Estimated friction coefficient values corresponding to the different types of road surface images

[Table 4 is rendered as an image in the source; its contents are not recoverable.]
The proposed algorithm is further explained as follows: first, the algorithm is trained in a multi-task learning mode using the dataset collected in step S202, with the loss of the segmentation branch and the loss of the classification branch added together; then the road environment is classified from the road images captured by the vehicle-mounted RGB camera; finally, the road friction coefficient is estimated by table lookup from the classification result. An Adam optimizer is used during training, with an initial learning rate of 10⁻⁴ and a weight decay coefficient of 10⁻⁴.
f0, f1, f2, f3, f4, f5 are all 64 × 64 × 512 (h × w × c), where h is the height, w the width, and c the number of channels.
The model is pre-trained for 20 epochs on the classification task over the entire dataset, and then fine-tuned for 10 epochs on subsets for the classification and segmentation tasks under a multi-task learning mechanism. In addition, for data augmentation, the input image is randomly flipped left and right, scaled, and rotated during the training stage to further increase the diversity of the data.
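The flip/scale/rotate augmentation can be sketched in NumPy as below. It is a simplification: nearest-neighbour rescaling and 90-degree rotations only (arbitrary-angle rotation requires an interpolation library), and the 0.8–1.2 zoom range is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    """Random left-right flip, nearest-neighbour rescale back to the
    original size, and a random 90-degree rotation. A simplified
    stand-in for the flip/scale/rotate augmentation described above.
    """
    h, w = img.shape[:2]
    if rng.random() < 0.5:
        img = img[:, ::-1]                        # horizontal flip
    scale = rng.uniform(0.8, 1.2)                 # illustrative zoom range
    ys = (np.arange(h) / scale).astype(int) % h   # nearest-neighbour resample
    xs = (np.arange(w) / scale).astype(int) % w   # (wraps at the border)
    img = img[np.ix_(ys, xs)]
    img = np.rot90(img, k=int(rng.integers(0, 4)))
    return img

out = augment(np.random.rand(512, 512, 3))        # shape is preserved
```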
The device includes a Central Processing Unit (CPU) that can perform various appropriate actions and processes in accordance with computer program instructions stored in a Read Only Memory (ROM) or loaded from a storage unit into a Random Access Memory (RAM). In the RAM, various programs and data required for the operation of the device can also be stored. The CPU, ROM, and RAM are connected to each other via a bus. An input/output (I/O) interface is also connected to the bus.
A plurality of components in the device are connected to the I/O interface, including: an input unit such as a keyboard, a mouse, etc.; an output unit such as various types of displays, speakers, and the like; storage units such as magnetic disks, optical disks, and the like; and a communication unit such as a network card, modem, wireless communication transceiver, etc. The communication unit allows the device to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The processing unit performs the various methods and processes described above, such as the method of the present invention. For example, in some embodiments, the inventive methods may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as a storage unit. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device via ROM and/or the communication unit. When the computer program is loaded into RAM and executed by a CPU, it may perform one or more of the steps of the method of the invention described above. Alternatively, in other embodiments, the CPU may be configured to perform the inventive method by any other suitable means (e.g. by means of firmware).
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a field programmable gate array (FPGA), an application-specific integrated circuit (ASIC), an application-specific standard product (ASSP), a system on a chip (SOC), a complex programmable logic device (CPLD), and the like.
Program code for implementing the methods of the present invention may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A method for predicting the friction coefficient of an automatically driven road surface, characterized by comprising a road surface image acquisition process, a training and classification process based on the road surface image, and friction coefficient estimation according to the classification result;
the training classification process based on the road surface image comprises the following steps:
step one, extracting road image characteristics f by using a Convolutional Neural Network (CNN) model0
Step two, extracting fine-grained features f by using a cavity space pyramid pooling module ASPP in a branch of drivable road segmentation1
Step three, using a channel attention module CAM to acquire internal relevance c of characteristics among channels in a branch of road surface environment classification1And for adjusting the road image characteristics f0Obtaining the adjusted road image characteristic f2
Fourthly, using a position attention module PAM to acquire space dependency information p among fine-grained features of the segmentation branches1Passed to the classification branch and used to adjust the road image characteristics f0Obtaining the adjusted road image characteristic f3
Step five, adjusting the road image characteristics f2And f3Carrying out self-adaptive weighted fusion to obtain the characteristic f4
Step six, fusing the characteristics f4And features f of road images0Performing fusion to obtain the characteristic f5
And seventhly, classifying the road images by using the full connection layer.
2. The method according to claim 1, wherein the process of acquiring the road surface image includes:
fixing an RGB camera on an automatic driving vehicle, and calibrating and calculating internal parameters and external parameters of the camera;
and acquiring a road image by using the calibrated camera.
3. The method of claim 1, wherein the road surface image training process is a multitask training.
4. The method of claim 1, wherein the ASPP module performs resampling using a plurality of sampling rates.
5. The method for predicting the friction coefficient of an automatically driven road surface according to claim 1, wherein the adaptive weighted fusion formula in step five is:

f4 = W1·f2 + W2·f3

wherein W1 and W2 are trainable parameters whose specific values are obtained by training the neural network.
6. The method of claim 1, wherein the classification result categories comprise Dry, Wet, Partly Snow, Melted Snow, Packed Snow, and Slush.
7. The method of predicting a friction coefficient of an automatically driven road surface according to claim 1, wherein the estimated friction coefficient value is the median of the range corresponding to the category.
8. The method of claim 1, wherein in the training phase, the road surface image is randomly flipped left and right, scaled and rotated.
9. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program, wherein the processor, when executing the program, implements the method of any of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN202110718997.4A 2021-06-28 2021-06-28 Method for predicting friction coefficient of automatic driving road surface, electronic device and medium Active CN113486748B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110718997.4A CN113486748B (en) 2021-06-28 2021-06-28 Method for predicting friction coefficient of automatic driving road surface, electronic device and medium


Publications (2)

Publication Number Publication Date
CN113486748A (en) 2021-10-08
CN113486748B CN113486748B (en) 2022-06-24

Family

ID=77936457

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110718997.4A Active CN113486748B (en) 2021-06-28 2021-06-28 Method for predicting friction coefficient of automatic driving road surface, electronic device and medium

Country Status (1)

Country Link
CN (1) CN113486748B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116653975A (en) * 2023-06-01 2023-08-29 盐城工学院 Vehicle stability control method based on road surface recognition

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020101448A1 (en) * 2018-08-28 2020-05-22 Samsung Electronics Co., Ltd. Method and apparatus for image segmentation
CN112163465A (en) * 2020-09-11 2021-01-01 华南理工大学 Fine-grained image classification method, fine-grained image classification system, computer equipment and storage medium
US20210150281A1 (en) * 2019-11-14 2021-05-20 Nec Laboratories America, Inc. Domain adaptation for semantic segmentation via exploiting weak labels
CN112863247A (en) * 2020-12-30 2021-05-28 潍柴动力股份有限公司 Road identification method, device, equipment and storage medium


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JUN FU ET AL.: "Dual Attention Network for Scene Segmentation", 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) *
XIAOXU LI ET AL.: "Mixed Attention Mechanism for Small-Sample Fine-grained Image Classification", 2019 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC) *
ZHENGYANG ZHOU ET AL.: "Attention Based Stack ResNet for Citywide Traffic Accident Prediction", 2019 20th IEEE International Conference on Mobile Data Management (MDM) *
HE KAI ET AL.: "Fine-Grained Image Classification Algorithm Based on Multi-Scale Feature Fusion and Repeated Attention Mechanism", Journal of Tianjin University (Science and Technology) *

Also Published As

Publication number Publication date
CN113486748B (en) 2022-06-24

Similar Documents

Publication Publication Date Title
CN111401201B (en) Spatial-pyramid-attention-driven multi-scale target detection method for aerial images
CN111160311B (en) Yellow River ice semantic segmentation method based on a multi-attention-mechanism dual-stream fusion network
CN111401516B (en) Searching method for neural network channel parameters and related equipment
CN111079780B (en) Training method for space diagram convolution network, electronic equipment and storage medium
JP6742554B1 (en) Information processing apparatus and electronic apparatus including the same
CN110263628B (en) Obstacle detection method, obstacle detection device, electronic apparatus, and storage medium
CN113283404B (en) Pedestrian attribute identification method and device, electronic equipment and storage medium
CN112733885A (en) Point cloud identification model determining method and point cloud identification method and device
CN110826457B (en) Vehicle detection method and device under complex scene
CN113486748B (en) Method for predicting friction coefficient of automatic driving road surface, electronic device and medium
CN111461145A (en) Method for detecting target based on convolutional neural network
CN116704431A (en) On-line monitoring system and method for water pollution
CN115937571A (en) Device and method for detecting sphericity of glass for vehicle
CN111860823A (en) Neural network training method, neural network training device, neural network image processing method, neural network image processing device, neural network image processing equipment and storage medium
CN113902010A (en) Training method of classification model, image classification method, device, equipment and medium
CN115830596A (en) Remote sensing image semantic segmentation method based on fusion pyramid attention
US11062141B2 (en) Methods and apparatuses for future trajectory forecast
CN115995042A (en) Video SAR moving target detection method and device
CN113743163A (en) Traffic target recognition model training method, traffic target positioning method and device
CN116861262B (en) Perception model training method and device, electronic equipment and storage medium
CN104050349B (en) External air temperature measuring apparatus and method
CN115131621A (en) Image quality evaluation method and device
CN117034090A (en) Model parameter adjustment and model application methods, devices, equipment and media
CN116071591A (en) Class hierarchy-based dynamic efficient network training method, device, computer equipment and storage medium
CN110705695B (en) Method, device, equipment and storage medium for searching model structure

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant