CN112365476A - Fog visibility detection method based on dual-channel deep network - Google Patents

Fog visibility detection method based on dual-channel deep network

Info

Publication number
CN112365476A
Authority
CN
China
Prior art keywords
visibility
network
network model
grade
channel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011268313.7A
Other languages
Chinese (zh)
Other versions
CN112365476B (en)
Inventor
孙玉宝
闫宏艳
李家豪
刘青山
闫麒名
岳志远
耿玉标
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology filed Critical Nanjing University of Information Science and Technology
Priority to CN202011268313.7A priority Critical patent/CN112365476B/en
Publication of CN112365476A publication Critical patent/CN112365476A/en
Application granted granted Critical
Publication of CN112365476B publication Critical patent/CN112365476B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06N 3/084: Backpropagation, e.g. using gradient descent
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10024: Color image
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20084: Artificial neural networks [ANN]
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Biology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a foggy-day visibility detection method based on a dual-channel deep network, which comprises: collecting highway monitoring images, classifying them into a plurality of grades according to visibility distance, and dividing them into a training data set and a test data set; constructing a dual-channel deep network model for foggy-day visibility detection, in which the two channels respectively learn the dark channel prior information and the deep features of a foggy image, and the two types of features are combined and classified through a fully connected layer; designing an objective function for optimizing network model parameter learning and presetting the training hyper-parameters of the network model; sending the training data into the network model and adopting an Adam optimizer to iteratively optimize and update the model parameters according to the objective function; and, once trained, using the network model to perform end-to-end classification of the foggy-day visibility grade of the expressway and to predict the visibility grade of expressway monitoring images. The invention enables automatic detection of the foggy-day visibility grade of a highway and provides technical support for intelligent management by highway administration departments.

Description

Fog visibility detection method based on dual-channel deep network
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a method for detecting visibility in foggy weather based on a dual-channel deep network.
Background
Visibility detection on highways in foggy weather is of great significance for traffic early warning and safe driving. The dark channel prior defogging algorithm is a well-known algorithm in the computer-vision defogging field. The dark channel rests on a basic assumption: in most non-sky local regions, some pixels always have at least one color channel with a very low value. By combining the dark channel prior with the atmospheric scattering model of a foggy image, the corresponding transmission map and atmospheric light can be estimated effectively, and defogging of the image is finally achieved.
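For concreteness, a minimal NumPy/SciPy sketch of the dark channel and of the transmission map it yields is given below; the function names, the 15-pixel patch size and the ω = 0.95 weighting follow the common formulation of the dark channel prior and are illustrative assumptions rather than details taken from this patent.

    import numpy as np
    from scipy.ndimage import minimum_filter


    def dark_channel(image: np.ndarray, patch_size: int = 15) -> np.ndarray:
        """Dark channel of an RGB image of shape (H, W, 3) with values in [0, 1]:
        the per-pixel minimum over the color channels, followed by a local
        minimum filter over a patch_size x patch_size window."""
        min_over_channels = image.min(axis=2)                      # (H, W)
        return minimum_filter(min_over_channels, size=patch_size)  # (H, W)


    def estimate_transmission(image: np.ndarray, atmospheric_light: np.ndarray,
                              omega: float = 0.95, patch_size: int = 15) -> np.ndarray:
        """Coarse transmission map t = 1 - omega * dark_channel(I / A), where A is
        the per-channel atmospheric light of shape (3,)."""
        normalized = image / np.maximum(atmospheric_light, 1e-6)
        return 1.0 - omega * dark_channel(normalized, patch_size)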
In recent years, convolutional neural networks (CNNs) have been widely used in the field of computer vision and have achieved good results. Among them, MobileNet is an effective lightweight classification network. The basic unit of MobileNet is the depthwise separable convolution, which is built on the idea that convolving each input channel with its own kernel yields a factorized convolution that can be decomposed into two smaller operations: a depthwise convolution and a pointwise convolution. Whereas an ordinary convolution mixes the channel and spatial dimensions at the same time, the depthwise separable convolution first convolves over the spatial region and then over the channels, separating the two; this reduces the number of parameters while achieving a good classification effect.
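The depthwise separable building block described above can be sketched in PyTorch as follows; the BatchNorm/ReLU placement and the class name follow the standard MobileNet design and are assumptions rather than details given in this patent.

    import torch.nn as nn


    class DepthwiseSeparableConv(nn.Module):
        """MobileNet-style block: a 3x3 depthwise convolution that filters each input
        channel separately, followed by a 1x1 pointwise convolution that mixes channels."""

        def __init__(self, in_channels: int, out_channels: int, stride: int = 1):
            super().__init__()
            self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                       stride=stride, padding=1,
                                       groups=in_channels, bias=False)
            self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)
            self.bn_dw = nn.BatchNorm2d(in_channels)
            self.bn_pw = nn.BatchNorm2d(out_channels)
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            x = self.relu(self.bn_dw(self.depthwise(x)))
            return self.relu(self.bn_pw(self.pointwise(x)))

Compared with a single ordinary 3x3 convolution from in_channels to out_channels, this factorization uses roughly in_channels·(9 + out_channels) parameters instead of 9·in_channels·out_channels.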
Disclosure of Invention
To solve the technical problem of how to effectively extract information from images captured in foggy weather and detect visibility, the invention provides a foggy-day visibility detection method based on a dual-channel deep network that can accurately detect and classify foggy-day visibility levels.
The technical scheme adopted by the invention is as follows:
a fog visibility detection method based on a dual-channel deep network comprises the following steps:
firstly, collecting highway monitoring images, classifying the highway monitoring images into a plurality of grades according to visibility distance, and dividing the highway monitoring images into a training data set and a testing data set;
secondly, constructing a dual-channel deep network model for foggy-day visibility detection, wherein the two channels respectively learn the dark channel prior information and the deep features of a foggy image, and the two types of features are combined and classified through a fully connected layer;
thirdly, designing an objective function for optimizing network model parameter learning, and presetting the training hyper-parameters of the network model;
fourthly, the training data are sent into a network model, and iterative optimization and updating of model parameters are achieved by adopting an Adam optimizer according to an objective function;
and fifthly, if the network model is converged, the trained network model can realize end-to-end classification of the visibility grade of the expressway in foggy days, and the visibility grade of the expressway monitoring image is predicted, otherwise, the fourth step is returned.
Further, in the first step, let the visibility distance be d: when d < 50 m, the visibility grade is 0; when 50 m < d < 100 m, the visibility grade is 1; when 100 m < d < 200 m, the visibility grade is 2; when 200 m < d < 500 m, the visibility grade is 3; and when d > 500 m, the visibility grade is 4. The original highway monitoring images are divided into 5 classes according to this visibility grade standard and split into a training data set and a test data set at a ratio of 0.8 : 0.2.
Further, in the second step, the expression of the dual-channel deep network model is

Y = f{ C[ N_m(X), N_c(D(X)) ] }

where N_m(X) is the MobileNet network module with the attention mechanism introduced, D(X) is the dark channel prior algorithm, N_c(·) is a convolutional layer, C[·] is the concatenate operation, and f{·} is the fully connected layer;
further, a channel attention and pixel attention module is introduced into the MobileNet network module; the structure of the convolutional network is as follows: conv → pool → conv → pool → conv.
Further, in the third step, the objective function adopts the cross-entropy loss, expressed as

L(θ) = -Σ_i ŷ_i log(y_i)

where ŷ_i is the true value of the ith class, y_i is the predicted value of the ith class, and θ is the set of parameters to be optimized. The training hyper-parameters of the network model include the model learning rate α, the number of iterations L, the training batch size S, the depth and number of layers of the network model, and the type of activation function.
Further, the fourth step includes:
s401, initializing corresponding parameters of each neural network module of the network; selecting S training images { x in a training data set(1),…,x(s)Sending the data to a network model, and obtaining a corresponding output vector y(1),…,y(s)};
Step S402, updating the network parameters omega, omega ← omega + alpha Adam (omega, d) of each neural network module through a back propagation algorithmω) Wherein Adam is one of gradient descent algorithms;
and step S403, sequentially performing the operations of the steps S401 and S402 on all the images of the whole training data set, and performing L iterations in total.
Further, the fifth step comprises:
Step S501: judging whether the network model has converged: during the iterative process of network training, if the objective function value decreases and gradually levels off at a stable value, the network is judged to have converged;
step S502, inputting the processed highway foggy day image data into a converged network model, so that end-to-end classification of highway foggy day visibility levels can be realized, and the visibility levels of highway monitoring images are predicted;
and step S503, if the iterative training is not converged, returning to execute the step four.
The invention has the beneficial effects that:
The invention combines the dark channel prior algorithm with an attention-based classification network. On one hand, the transmission matrix of the image is obtained through the dark channel prior algorithm and its feature vector is then extracted through convolutional layers; on the other hand, the MobileNet network combined with the attention module extracts features from the original image. The two feature vectors are concatenated and sent into the fully connected layer for classification. The classification accuracy is high and the detection process is fast, so better foggy-day visibility level detection is achieved, providing strong technical support for intelligent management by highway administration departments.
Drawings
FIG. 1 is a block diagram of a process of visibility detection in foggy weather according to the present invention;
FIG. 2 is a schematic diagram of a classification network model;
FIG. 3 is a schematic structural diagram of the attention module.
Detailed Description
The following describes the dual-channel deep network-based fog visibility detection method in detail with reference to the accompanying drawings and specific embodiments.
As shown in fig. 1, the fog visibility detection method based on the dual-channel deep network includes the following steps:
the method comprises the steps of collecting highway monitoring images, classifying the highway monitoring images into a plurality of grades according to visibility distances, and dividing the highway monitoring images into a training data set and a testing data set.
In the first step, let the visibility distance be d. When d < 50 m, the visibility grade is 0; when 50 m < d < 100 m, the grade is 1; when 100 m < d < 200 m, the grade is 2; when 200 m < d < 500 m, the grade is 3; and when d > 500 m, the grade is 4. The original highway monitoring images are divided into 5 classes according to this visibility grade standard and split into a training data set and a test data set at a ratio of 0.8 : 0.2.
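A small Python sketch of this grading and splitting step could look as follows; the function names and the random shuffle before the 0.8 : 0.2 split are illustrative assumptions.

    import random


    def visibility_grade(d: float) -> int:
        """Map a visibility distance d (in meters) to the five grades used here."""
        if d < 50:
            return 0
        if d < 100:
            return 1
        if d < 200:
            return 2
        if d < 500:
            return 3
        return 4


    def split_dataset(samples, train_ratio: float = 0.8, seed: int = 0):
        """Shuffle the labeled images and split them 0.8 : 0.2 into training and test sets."""
        rng = random.Random(seed)
        shuffled = list(samples)
        rng.shuffle(shuffled)
        cut = int(len(shuffled) * train_ratio)
        return shuffled[:cut], shuffled[cut:]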
And secondly, a dual-channel deep network model for foggy-day visibility detection is constructed (see FIG. 2); the two channels respectively learn the dark channel prior information and the deep features of a foggy image, and the two types of features are combined and classified through a fully connected layer.
Because the density distribution of fog is not uniform, the characteristics differ across the channels and pixels of an image. The invention therefore introduces channel attention and pixel attention modules, as shown in FIG. 3, which allow different types of information to be handled more flexibly.
The image data is sent into the two paths of the network (an upper path network and a lower path network) for processing and the results are then merged. As shown in FIG. 2, in the upper path network the image data is sent into the MobileNet network module for feature extraction, giving a feature map of dimension 7 × 7 × 1024. The feature map is then passed sequentially through the channel attention and pixel attention modules, and the output remains a feature map of dimension 7 × 7 × 1024.
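One plausible realization of the channel attention and pixel attention modules, in the style of FFA-Net (cited among the non-patent references below), is sketched here in PyTorch; the reduction ratio and the exact layer layout are assumptions, since the patent describes the modules only schematically (FIG. 3).

    import torch.nn as nn


    class ChannelAttention(nn.Module):
        """Channel attention: global average pooling followed by two 1x1 convolutions
        and a sigmoid gate that re-weights each feature channel."""

        def __init__(self, channels: int, reduction: int = 8):
            super().__init__()
            self.gate = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, channels // reduction, kernel_size=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, kernel_size=1),
                nn.Sigmoid(),
            )

        def forward(self, x):
            return x * self.gate(x)


    class PixelAttention(nn.Module):
        """Pixel (spatial) attention: two 1x1 convolutions produce a per-pixel gate."""

        def __init__(self, channels: int, reduction: int = 8):
            super().__init__()
            self.gate = nn.Sequential(
                nn.Conv2d(channels, channels // reduction, kernel_size=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, 1, kernel_size=1),
                nn.Sigmoid(),
            )

        def forward(self, x):
            return x * self.gate(x)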
In the lower path network of FIG. 2, the dark channel prior algorithm is applied to the image data to obtain a transmission matrix map of size 224 × 224 × 3. The transmission matrix map is then sent into a convolutional network to extract features, giving a feature vector of dimension 7 × 7 × 512; the structure of this convolutional network is conv → pool → conv → pool → conv. Finally, the feature vectors obtained by the two branch networks are concatenated and sent into the fully connected layer for classification.
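A minimal PyTorch sketch of such a two-branch classifier is given below. It reuses the ChannelAttention and PixelAttention modules sketched above, substitutes torchvision's mobilenet_v2 backbone (whose 7 × 7 × 1280 output differs from the 7 × 7 × 1024 stated here) as a stand-in for the patent's MobileNet module, and treats the layer widths of the lower branch as illustrative assumptions.

    import torch
    import torch.nn as nn
    from torchvision.models import mobilenet_v2


    class DualChannelNet(nn.Module):
        """Two-branch classifier sketch: an attention-augmented MobileNet branch on the
        RGB image, and a small convolutional branch on the dark-channel transmission map,
        concatenated and classified by a fully connected layer."""

        def __init__(self, num_classes: int = 5):
            super().__init__()
            backbone = mobilenet_v2(weights=None)     # stand-in backbone; 7x7x1280 output for 224x224 input
            self.image_features = backbone.features
            self.channel_att = ChannelAttention(1280)
            self.pixel_att = PixelAttention(1280)
            # Lower branch on the 224x224x3 transmission map:
            # conv -> pool -> conv -> pool -> conv, ending at 7x7x512.
            self.transmission_branch = nn.Sequential(
                nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
                nn.Conv2d(64, 256, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
                nn.Conv2d(256, 512, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
            )
            self.classifier = nn.Linear((1280 + 512) * 7 * 7, num_classes)

        def forward(self, image, transmission_map):
            f_img = self.pixel_att(self.channel_att(self.image_features(image)))
            f_trans = self.transmission_branch(transmission_map)
            fused = torch.cat([f_img, f_trans], dim=1)   # concatenate along the channel axis
            return self.classifier(fused.flatten(1))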
The network model is expressed as

Y = f{ C[ N_m(X), N_c(D(X)) ] }

where N_m(X) is the MobileNet network module with the attention mechanism introduced, D(X) is the dark channel prior algorithm, N_c(·) is a convolutional layer, C[·] is the concatenate operation, and f{·} is the fully connected layer.
And thirdly, designing an objective function for optimizing the parameter learning of the network model, and presetting the training hyper-parameters of the network model.
The objective function adopts the cross-entropy loss, expressed as

L(θ) = -Σ_i ŷ_i log(y_i)

where ŷ_i is the true value of the ith class, y_i is the predicted value of the ith class, and θ is the set of parameters to be optimized. The training hyper-parameters of the network model include the model learning rate α, the number of iterations L, the training batch size S, the depth and number of layers of the network model, and the type of activation function.
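In PyTorch this objective and these hyper-parameters could be set up as in the sketch below, using the values α = 0.0004, L = 50 and S = 16 reported in the experiment section; nn.CrossEntropyLoss applies a softmax to the predicted scores and evaluates the cross-entropy term given above.

    import torch
    import torch.nn as nn

    # Hyper-parameter values taken from the experiment section below.
    ALPHA = 0.0004    # model learning rate alpha
    EPOCHS = 50       # number of iterations L over the training set
    BATCH_SIZE = 16   # training batch size S

    # nn.CrossEntropyLoss applies log-softmax to the logits and computes
    # -sum_i y_hat_i * log(y_i) against the one-hot ground-truth label y_hat.
    criterion = nn.CrossEntropyLoss()

    logits = torch.randn(BATCH_SIZE, 5)           # predicted scores for the 5 visibility grades
    labels = torch.randint(0, 5, (BATCH_SIZE,))   # ground-truth grade indices
    loss = criterion(logits, labels)
    print(loss.item())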
And fourthly, transmitting the training data into a network model, and adopting an Adam optimizer to realize iterative optimization and updating of model parameters according to the objective function. The method comprises the following steps:
s401, initializing corresponding parameters of each neural network module of the network; selecting S training images { x in a training data set(1),…,x(s)Sending the data to a network model, and obtaining a corresponding output vector y(1),…,y(s)};
Step S402, updating the network parameters omega, omega ← omega + alpha Adam (omega, d) of each neural network module through a back propagation algorithmω) Wherein Adam is one of gradient descent algorithms;
and step S403, sequentially performing the operations of the steps S401 and S402 on all the images of the whole training data set, and performing L iterations in total.
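Steps S401 to S403 could be realized with a conventional PyTorch training loop such as the sketch below; the data-loader layout (image, transmission map, label) and the function signature are assumptions rather than details given in this patent.

    import torch


    def train(model, train_loader, epochs: int = 50, lr: float = 4e-4, device: str = "cuda"):
        """One possible realization of steps S401-S403: iterate over mini-batches,
        compute the cross-entropy objective, and update the parameters with Adam."""
        model.to(device)
        criterion = torch.nn.CrossEntropyLoss()
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        for epoch in range(epochs):                          # L passes over the data set
            for images, trans_maps, labels in train_loader:  # mini-batches of size S
                images = images.to(device)
                trans_maps = trans_maps.to(device)
                labels = labels.to(device)
                logits = model(images, trans_maps)
                loss = criterion(logits, labels)
                optimizer.zero_grad()
                loss.backward()                              # back-propagation of the objective
                optimizer.step()                             # Adam parameter update
            print(f"epoch {epoch + 1}: last batch loss {loss.item():.4f}")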
And fifthly, if the network model is converged, the trained network model can realize end-to-end classification of the visibility grade of the expressway in foggy days, and the visibility grade of the expressway monitoring image is predicted, otherwise, the fourth step is returned.
The fifth step comprises the following steps:
Step S501: judging whether the network model has converged: during the iterative process of network training, if the objective function value decreases and gradually levels off at a stable value, the network is judged to have converged.
And S502, inputting the processed highway foggy day image data into a converged network model, so that end-to-end classification of highway foggy day visibility levels can be realized, and the visibility levels of highway monitoring images can be predicted.
And step S503, if the iterative training is not converged, returning to execute the step four.
To verify the effect of the invention and the effectiveness of the proposed dark channel prior algorithm and attention module, simulation and ablation experiments were carried out. The test images are of size 224 × 224, and the model is trained and tested on the foggy-day expressway image training data set with the relevant parameters set to α = 0.0004, L = 50, and S = 16; a quantitative analysis method is used for the experimental evaluation.
In experiments on the test set, the final classification accuracy was 66.08%.
Ablation experiments were also performed on the test set to verify the effectiveness of the dark channel prior algorithm and the attention module: the dark channel prior algorithm and the attention module were removed in turn, keeping only the MobileNet network module, for comparison with the complete network structure. The ablation results are shown in Table 1.
TABLE 1
[Table 1: ablation results comparing the complete network with variants that remove the dark channel prior algorithm or the attention module; reproduced only as an image in the original]
As can be seen from Table 1, compared with a classification network that retains only the MobileNet module, the dark channel prior algorithm and the attention mechanism module effectively improve the classification accuracy.
The above description is only one embodiment of the present invention, but the protection scope of the present invention is not limited thereto; any alternative or modification that can be easily conceived by those skilled in the art within the technical scope disclosed by the present invention should be covered by the protection scope of the present invention.

Claims (7)

1. A foggy-day visibility detection method based on a dual-channel deep network, characterized by comprising the following steps:
firstly, collecting highway monitoring images, classifying the highway monitoring images into a plurality of grades according to visibility distance, and dividing the highway monitoring images into a training data set and a testing data set;
secondly, constructing a dual-channel deep network model for foggy-day visibility detection, wherein the two channels respectively learn the dark channel prior information and the deep features of a foggy image, and the two types of features are combined and classified through a fully connected layer;
thirdly, designing an objective function for optimizing network model parameter learning, and presetting the training hyper-parameters of the network model;
fourthly, the training data are sent into a network model, and iterative optimization and updating of model parameters are achieved by adopting an Adam optimizer according to an objective function;
and fifthly, if the network model is converged, the trained network model can realize end-to-end classification of the visibility grade of the expressway in foggy days, and the visibility grade of the expressway monitoring image is predicted, otherwise, the fourth step is returned.
2. The method for detecting visibility in foggy days based on the dual-channel deep network as claimed in claim 1, wherein in the first step, with the visibility distance denoted d, the visibility grade is 0 when d < 50 m; grade 1 when 50 m < d < 100 m; grade 2 when 100 m < d < 200 m; grade 3 when 200 m < d < 500 m; and grade 4 when d > 500 m; and the original highway monitoring images are divided into 5 classes according to this visibility grade standard and split into a training data set and a test data set at a ratio of 0.8 : 0.2.
3. The method for detecting visibility in foggy days based on the dual-channel deep network as claimed in claim 1, wherein in the second step the expression of the dual-channel deep network model is

Y = f{ C[ N_m(X), N_c(D(X)) ] }

where N_m(X) is the MobileNet network module with the attention mechanism introduced, D(X) is the dark channel prior algorithm, N_c(·) is a convolutional layer, C[·] is the concatenate operation, and f{·} is the fully connected layer.
4. The method for detecting visibility in foggy days based on the dual-channel deep network as claimed in claim 3, wherein channel attention and pixel attention modules are introduced into the MobileNet network module, and the structure of the convolutional network is: conv → pool → conv → pool → conv.
5. The method for detecting visibility in foggy days based on the dual-channel deep network as claimed in claim 1, wherein in the third step the objective function adopts the cross-entropy loss, expressed as

L(θ) = -Σ_i ŷ_i log(y_i)

where ŷ_i is the true value of the ith class, y_i is the predicted value of the ith class, and θ is the set of parameters to be optimized; the training hyper-parameters of the network model include the model learning rate α, the number of iterations L, the training batch size S, the depth and number of layers of the network model, and the type of activation function.
6. The method for detecting the visibility in the foggy days based on the dual-channel deep network as claimed in claim 1, wherein the fourth step comprises:
Step S401: initializing the corresponding parameters of each neural network module of the network; selecting S training images {x^(1), …, x^(S)} from the training data set, sending them into the network model, and obtaining the corresponding output vectors {y^(1), …, y^(S)};
Step S402: updating the network parameters ω of each neural network module through the back-propagation algorithm, ω ← ω + α·Adam(ω, d_ω), where Adam is one of the gradient descent algorithms;
Step S403: performing the operations of steps S401 and S402 in sequence on all images of the whole training data set, for L iterations in total.
7. The method for detecting the visibility in the foggy days based on the dual-channel deep network as claimed in claim 1, wherein the fifth step comprises:
Step S501: judging whether the network model has converged: during the iterative process of network training, if the objective function value decreases and gradually levels off at a stable value, the network is judged to have converged;
step S502, inputting the processed highway foggy day image data into a converged network model, so that end-to-end classification of highway foggy day visibility levels can be realized, and the visibility levels of highway monitoring images are predicted;
and step S503, if the iterative training is not converged, returning to execute the step four.
CN202011268313.7A 2020-11-13 2020-11-13 Fog day visibility detection method based on double-channel depth network Active CN112365476B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011268313.7A CN112365476B (en) 2020-11-13 2020-11-13 Fog day visibility detection method based on double-channel depth network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011268313.7A CN112365476B (en) 2020-11-13 2020-11-13 Fog day visibility detection method based on double-channel depth network

Publications (2)

Publication Number Publication Date
CN112365476A true CN112365476A (en) 2021-02-12
CN112365476B CN112365476B (en) 2023-12-08

Family

ID=74514681

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011268313.7A Active CN112365476B (en) 2020-11-13 2020-11-13 Fog day visibility detection method based on double-channel depth network

Country Status (1)

Country Link
CN (1) CN112365476B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113129273A (en) * 2021-03-30 2021-07-16 武汉伏佳安达电气技术有限公司 Automatic transmission line fog inspection method and system
CN113313644A (en) * 2021-05-26 2021-08-27 西安理工大学 Underwater image enhancement method based on residual double attention network
CN113392818A (en) * 2021-08-17 2021-09-14 江苏省气象服务中心 Expressway severe weather identification method based on multi-scale fusion network
CN113658275A (en) * 2021-08-23 2021-11-16 深圳市商汤科技有限公司 Visibility value detection method, device, equipment and storage medium
CN114627382A (en) * 2022-05-11 2022-06-14 南京信息工程大学 Expressway fog visibility detection method combined with geometric prior of lane lines
CN114663452A (en) * 2022-02-28 2022-06-24 南京工业大学 Airport visibility classification method based on MobileNet-V2 neural network
CN115394073A (en) * 2022-06-13 2022-11-25 上海理工大学 CA-SIR model-based highway congestion propagation method in foggy weather environment
CN115661751A (en) * 2022-11-02 2023-01-31 山东高速集团有限公司创新研究院 Highway low visibility detection method and system based on attention transformation network
CN116664448A (en) * 2023-07-24 2023-08-29 南京邮电大学 Medium-high visibility calculation method and system based on image defogging

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107194924A (en) * 2017-05-23 2017-09-22 重庆大学 Expressway foggy-dog visibility detecting method based on dark channel prior and deep learning
CN107506729A (en) * 2017-08-24 2017-12-22 中国科学技术大学 A kind of visibility detecting method based on deep learning
CN109712083A (en) * 2018-12-06 2019-05-03 南京邮电大学 A kind of single image to the fog method based on convolutional neural networks
CN110263819A (en) * 2019-05-28 2019-09-20 中国农业大学 A kind of object detection method and device for shellfish image
WO2020020445A1 (en) * 2018-07-24 2020-01-30 Toyota Motor Europe A method and a system for processing images to obtain foggy images
CN111339858A (en) * 2020-02-17 2020-06-26 电子科技大学 Oil and gas pipeline marker identification method based on neural network
CN111553856A (en) * 2020-04-24 2020-08-18 西安电子科技大学 Image defogging method based on depth estimation assistance
CN111582074A (en) * 2020-04-23 2020-08-25 安徽海德瑞丰信息科技有限公司 Monitoring video leaf occlusion detection method based on scene depth information perception

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107194924A (en) * 2017-05-23 2017-09-22 重庆大学 Expressway foggy-dog visibility detecting method based on dark channel prior and deep learning
CN107506729A (en) * 2017-08-24 2017-12-22 中国科学技术大学 A kind of visibility detecting method based on deep learning
WO2020020445A1 (en) * 2018-07-24 2020-01-30 Toyota Motor Europe A method and a system for processing images to obtain foggy images
CN109712083A (en) * 2018-12-06 2019-05-03 南京邮电大学 A kind of single image to the fog method based on convolutional neural networks
CN110263819A (en) * 2019-05-28 2019-09-20 中国农业大学 A kind of object detection method and device for shellfish image
CN111339858A (en) * 2020-02-17 2020-06-26 电子科技大学 Oil and gas pipeline marker identification method based on neural network
CN111582074A (en) * 2020-04-23 2020-08-25 安徽海德瑞丰信息科技有限公司 Monitoring video leaf occlusion detection method based on scene depth information perception
CN111553856A (en) * 2020-04-24 2020-08-18 西安电子科技大学 Image defogging method based on depth estimation assistance

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
XU QIN et al.: "FFA-Net: Feature Fusion Attention Network for Single Image Dehazing", arXiv, pages 1-8 *
YAN Hongyan: "Highway Foggy-Day Visibility Detection Based on Deep Convolutional Networks", China Masters' Theses Full-text Database, Engineering Science and Technology I, no. 1, pages 026-200 *
XIANG Yu et al.: "Highway Fog Detection Based on a Dual-Path Neural Network Fusion Model", Journal of Southwest Jiaotong University, vol. 54, no. 1, pages 173-179 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113129273A (en) * 2021-03-30 2021-07-16 武汉伏佳安达电气技术有限公司 Automatic transmission line fog inspection method and system
CN113313644A (en) * 2021-05-26 2021-08-27 西安理工大学 Underwater image enhancement method based on residual double attention network
CN113313644B (en) * 2021-05-26 2024-03-26 西安理工大学 Underwater image enhancement method based on residual double-attention network
CN113392818A (en) * 2021-08-17 2021-09-14 江苏省气象服务中心 Expressway severe weather identification method based on multi-scale fusion network
CN113658275A (en) * 2021-08-23 2021-11-16 深圳市商汤科技有限公司 Visibility value detection method, device, equipment and storage medium
CN114663452A (en) * 2022-02-28 2022-06-24 南京工业大学 Airport visibility classification method based on MobileNet-V2 neural network
CN114627382B (en) * 2022-05-11 2022-07-22 南京信息工程大学 Expressway fog visibility detection method combined with geometric prior of lane lines
CN114627382A (en) * 2022-05-11 2022-06-14 南京信息工程大学 Expressway fog visibility detection method combined with geometric prior of lane lines
CN115394073A (en) * 2022-06-13 2022-11-25 上海理工大学 CA-SIR model-based highway congestion propagation method in foggy weather environment
CN115394073B (en) * 2022-06-13 2023-05-26 上海理工大学 Highway congestion propagation method based on CA-SIR model in foggy environment
CN115661751A (en) * 2022-11-02 2023-01-31 山东高速集团有限公司创新研究院 Highway low visibility detection method and system based on attention transformation network
CN116664448A (en) * 2023-07-24 2023-08-29 南京邮电大学 Medium-high visibility calculation method and system based on image defogging
CN116664448B (en) * 2023-07-24 2023-10-03 南京邮电大学 Medium-high visibility calculation method and system based on image defogging

Also Published As

Publication number Publication date
CN112365476B (en) 2023-12-08

Similar Documents

Publication Publication Date Title
CN112365476B (en) Fog day visibility detection method based on double-channel depth network
CN109685776B (en) Pulmonary nodule detection method and system based on CT image
CN108830188B (en) Vehicle detection method based on deep learning
Ren et al. YOLOv5s-M: A deep learning network model for road pavement damage detection from urban street-view imagery
CN110021425B (en) Comparison detector, construction method thereof and cervical cancer cell detection method
CN110070008A (en) Bridge disease identification method adopting unmanned aerial vehicle image
CN108052911A (en) Multi-modal remote sensing image high-level characteristic integrated classification method based on deep learning
CN112529005B (en) Target detection method based on semantic feature consistency supervision pyramid network
CN112687327A (en) Cancer survival analysis system based on multitask and multi-mode
CN113971764B (en) Remote sensing image small target detection method based on improvement YOLOv3
CN111798447B (en) Deep learning plasticized material defect detection method based on fast RCNN
CN111145145B (en) Image surface defect detection method based on MobileNet
CN113191391A (en) Road disease classification method aiming at three-dimensional ground penetrating radar map
CN114898327B (en) Vehicle detection method based on lightweight deep learning network
CN116416479B (en) Mineral classification method based on deep convolution fusion of multi-scale image features
CN110599459A (en) Underground pipe network risk assessment cloud system based on deep learning
CN111914726B (en) Pedestrian detection method based on multichannel self-adaptive attention mechanism
CN116883650A (en) Image-level weak supervision semantic segmentation method based on attention and local stitching
CN115131561A (en) Potassium salt flotation froth image segmentation method based on multi-scale feature extraction and fusion
CN113936222A (en) Mars terrain segmentation method based on double-branch input neural network
CN115393587A (en) Expressway asphalt pavement disease sensing method based on fusion convolutional neural network
CN114972759A (en) Remote sensing image semantic segmentation method based on hierarchical contour cost function
CN115984543A (en) Target detection algorithm based on infrared and visible light images
Matarneh et al. Evaluation and optimisation of pre-trained CNN models for asphalt pavement crack detection and classification
CN118196740A (en) Automatic driving target detection method and device based on complex scene and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant