CN113469912A - Fog visibility estimation method and system based on deep learning - Google Patents

Fog visibility estimation method and system based on deep learning

Info

Publication number
CN113469912A
Authority
CN
China
Prior art keywords
estimation
map
image
transmittance
neural network
Prior art date
Legal status
Granted
Application number
CN202110737978.6A
Other languages
Chinese (zh)
Other versions
CN113469912B (en)
Inventor
裴欣
胡坚明
游晶
贾邵程
岳云
Current Assignee
Tsinghua University
Original Assignee
Tsinghua University
Priority date: 2021-06-30
Filing date: 2021-06-30
Publication date: 2021-10-01
Application filed by Tsinghua University
Priority to CN202110737978.6A
Publication of CN113469912A
Application granted
Publication of CN113469912B
Status: Active


Classifications

    • G06T 5/73: Image enhancement or restoration; Deblurring; Sharpening
    • G06N 3/045: Neural networks; Architecture; Combinations of networks
    • G06N 3/08: Neural networks; Learning methods
    • G06T 2207/20081: Indexing scheme for image analysis; Training; Learning
    • G06T 2207/20084: Indexing scheme for image analysis; Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a fog visibility estimation method and system based on deep learning, characterized by comprising the following steps: 1) for an input foggy image, estimating a transmittance map with a transmittance estimation neural network; 2) recovering the defogged image using the estimated transmittance map and the fog imaging model; 3) inputting the defogged image into a depth estimation neural network to estimate a depth map; 4) calculating an extinction coefficient map from the depth map and the transmittance map; 5) calculating the visibility map from the extinction coefficient map using Koschmieder's law. By analyzing the intrinsic connection between the fog imaging model and Koschmieder's law and combining the two for visibility estimation, the method offers good interpretability and can be widely applied to visibility estimation in adverse weather such as fog.

Description

Fog visibility estimation method and system based on deep learning
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to a fog visibility estimation method and system based on deep learning.
Background
Visibility is the maximum distance at which a person with normal vision can distinguish an object from its background. It is affected mainly by two factors: the brightness contrast between the target and its background, and atmospheric transparency. Atmospheric transparency is strongly weather-dependent; rain, fog, snow, sand, and haze all reduce it and thereby reduce visibility. In general, therefore, visibility depends mainly on weather conditions.
In visibility measurement, the traditional approach is to measure the extinction coefficient with an instrument and derive visibility from it. The limitations of these traditional approaches are significant: atmospheric transmissometers compute visibility from the transmittance of an air column between two points, which incurs large errors in low-visibility weather such as rain and fog; automatic laser visibility meters compute visibility from the atmospheric extinction coefficient, which is more accurate, but the instruments are expensive, complex to operate, and hard to deploy widely.
With the development of computer science, methods based on digital image processing and data-driven techniques have emerged. However, existing non-deep-learning methods estimate only a single visibility value per image, which is too coarse and insufficiently accurate when fog is unevenly distributed. In computer vision, advances in deep neural network theory and computing power have made deep-learning-based visibility estimation an active research direction, but such methods suffer from poor interpretability.
Disclosure of Invention
In view of the above problems, an object of the present invention is to provide a fog visibility estimation method and system based on deep learning that outputs pixel-level visibility from a single input image. This removes the dependence of traditional methods on measuring instruments and measurement conditions, and the pixel-level output overcomes the overly coarse, image-level estimates of non-deep-learning methods in computer vision. In addition, the method captures the intrinsic connection between the fog imaging model and Koschmieder's law and combines the two for visibility estimation, which effectively addresses the poor interpretability of deep learning methods.
In order to achieve the purpose, the invention adopts the following technical scheme:
the invention provides a method for estimating the visibility in foggy days based on deep learning, which comprises the following steps:
1) inputting the foggy image into a pre-constructed transmittance estimation neural network for transmittance estimation to obtain a transmittance map of the foggy image;
2) recovering the defogged image using the estimated transmittance map and the fog imaging model;
3) inputting the defogged image into a pre-constructed depth estimation neural network for depth estimation to obtain a depth map;
4) calculating an extinction coefficient map from the depth map and the transmittance map;
5) calculating the visibility map from the extinction coefficient map using Koschmieder's law.
Further, in the step 1), the method for inputting the foggy image into the pre-constructed transmittance estimation neural network for transmittance estimation to obtain the transmittance map comprises the following steps:
1.1) constructing and initializing the transmittance estimation neural network;
1.2) loading the pre-trained network parameters of the transmittance estimation neural network and setting the network to test mode;
1.3) preprocessing the input foggy image according to the requirements of the transmittance estimation neural network;
1.4) inputting the preprocessed foggy image into the transmittance estimation neural network to obtain the transmittance map.
Further, in the step 2), the method for recovering the defogged image from the transmittance map and the foggy image comprises the following steps:
2.1) calculating the atmospheric light component A from the input foggy image;
2.2) substituting the foggy image, the transmittance map obtained in step 1), and the atmospheric light component A into the fog imaging model to obtain the defogged image.
Further, the method for calculating the atmospheric light component A from the input foggy image in the step 2.1) comprises the following steps:
2.1.1) taking, for each pixel of the input foggy image, the minimum value over the three RGB channels;
2.1.2) applying minimum-value filtering to the minimum-value map obtained in step 2.1.1) to obtain a dark channel map;
2.1.3) taking the brightest 0.1% of pixels in the dark channel map obtained in step 2.1.2), locating the corresponding pixels in the input foggy image, and taking the average of their values as the atmospheric light component A.
Further, in the step 3), the method for inputting the defogged image into the pre-constructed depth estimation neural network for depth estimation to obtain the depth map comprises the following steps:
3.1) constructing and initializing the depth estimation neural network;
3.2) loading the pre-trained network parameters of the depth estimation neural network and setting the network to test mode;
3.3) preprocessing the defogged image as required;
3.4) inputting the preprocessed defogged image into the depth estimation neural network for depth estimation to obtain the depth map.
Further, in the step 4), the method for obtaining the extinction coefficient map through calculation by using the depth map and the transmittance map includes:
firstly, calculating the extinction coefficient corresponding to each pixel according to the transmittance map and the depth map, wherein the calculation formula is as follows:
t(x) = e^(-β(x)·d(x))
where t(x) is the transmittance of the transmittance map at a pixel; β(x) is the extinction coefficient at that pixel; d(x) is the depth of the depth map at that pixel;
next, the extinction coefficient map is obtained from the extinction coefficient β(x) of each pixel.
Further, in the step 5), a method for calculating a visibility map by using Koschmieder's law through an extinction coefficient map includes:
firstly, calculating the visibility of each pixel point according to an extinction coefficient diagram and a Koschmieder law, wherein the calculation formula is as follows:
V(x) = -ln(ε) / β(x)
where V(x) is the visibility at a pixel; ε is the eye contrast threshold; β(x) is the extinction coefficient of the extinction coefficient map at that pixel;
and secondly, the visibility map is obtained from the visibility V(x) of each pixel.
In a second aspect of the present invention, a fog visibility estimation system based on deep learning is provided, comprising:
a transmittance map determination module, for inputting the foggy image into a pre-constructed transmittance estimation neural network for transmittance estimation to obtain a transmittance map;
a defogged image determination module, for recovering the defogged image using the estimated transmittance map and the fog imaging model;
a depth map determination module, for inputting the defogged image into a pre-constructed depth estimation neural network for depth estimation to obtain a depth map;
an extinction coefficient map determination module, for calculating an extinction coefficient map from the depth map and the transmittance map;
and a visibility map determination module, for calculating the visibility map from the extinction coefficient map using Koschmieder's law.
Further, the transmittance map determination module comprises:
a transmittance estimation neural network construction module, for constructing and initializing the transmittance estimation neural network;
a first network parameter loading module, for loading the pre-trained network parameters of the transmittance estimation neural network and setting the network to test mode;
a first preprocessing module, for preprocessing the input foggy image as required;
and a transmittance estimation module, for inputting the preprocessed foggy image into the transmittance estimation neural network for transmittance estimation to obtain the transmittance map.
Further, the depth map determination module comprises:
a depth estimation neural network construction module, for constructing and initializing the depth estimation neural network;
a second network parameter loading module, for loading the pre-trained network parameters of the depth estimation neural network and setting the network to test mode;
a second preprocessing module, for preprocessing the defogged image as required;
and a depth estimation module, for inputting the preprocessed defogged image into the depth estimation neural network for depth estimation to obtain the depth map.
Due to the adoption of the above technical scheme, the invention has the following advantages:
1. The invention analyzes the intrinsic connection between the fog imaging model and Koschmieder's law and combines the two for visibility estimation, and therefore has good interpretability.
2. The method is applicable not only to fog but also to other weather conditions that share the same imaging model.
3. Unlike traditional computer vision methods, which only produce image-level visibility estimates, the method produces pixel-level estimates; it therefore handles uneven fog better, is more robust, and offers finer granularity.
4. The invention is modular and loosely coupled, so as technology advances, modules in the framework can be replaced with better-performing methods, improving the overall estimation quality.
Therefore, the invention can be widely applied to visibility estimation in adverse weather such as fog.
Drawings
FIG. 1 is a flow chart of a fog visibility estimation method based on deep learning according to the present invention;
FIG. 2 is a flow chart of the present invention for transmittance estimation of a hazy image using a transmittance estimation neural network;
FIG. 3 is a flow chart of solving the atmospheric light component in the present invention;
FIG. 4 is a flow chart of inputting the defogged image into the depth estimation neural network to obtain its depth map in the present invention;
FIG. 5 is a schematic diagram of a network structure of DehazeNet constructed in an embodiment of the present invention;
FIG. 6 is a schematic diagram of the network structure of DORN (Deep Ordinal Regression Network) constructed in the embodiment of the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and examples.
Based on the fog imaging model, Koschmieder's law, and current work in the fields of image defogging and depth estimation, the invention provides a fog visibility estimation method based on deep learning. Koschmieder's law and the fog imaging model, the theoretical foundations of the invention, are briefly introduced below:
the Koschmieder law is the basis for determining visibility. When human eyes visually observe a far ground target facing the terrace sky along the horizontal direction, aerosol particles in the atmosphere generate an absorption scattering effect due to the influence of direct solar light, the sky, a cloud layer and ground diffuse scattering, so that an observer feels like a layer of shielding curtain in the atmosphere, the direct contrast between a target object and the background is reduced, and the distance that the observer can see the target object is reduced. The law of the visual contrast C as a function of the distance d, i.e. Koschmieder's law, is expressed as follows:
C = (I_object - I_background) / I_background = e^(-β·d)    (1)
where C is the visual contrast, i.e., the relative brightness difference between the target and the background; β is the extinction coefficient, a quantity characterizing the transparency of the atmosphere; d is the distance from the observer to the target; I_object and I_background are the luminances of the target and the background, respectively.
If the target is just barely visible, the visual contrast C has reached the eye contrast threshold ε, which is 0.02 or 0.05 for a normal observer. The distance d between the observer and the target then equals the visibility V, and formula (1) becomes:
ε = e^(-β·V)    (2)
namely:
V = -ln(ε) / β    (3)
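To make formula (3) concrete, the following is a minimal numerical sketch in Python (the β values below are illustrative assumptions, in units of 1/m):

```python
import math

def visibility(beta, eps=0.02):
    """Koschmieder's law, formula (3): V = -ln(eps) / beta.

    beta: atmospheric extinction coefficient (1/m)
    eps:  eye contrast threshold (0.02 or 0.05 for a normal observer)
    """
    return -math.log(eps) / beta

# With eps = 0.02, -ln(eps) is about 3.912, so V is about 3.912 / beta.
print(round(visibility(0.05), 1))        # beta = 0.05 1/m -> about 78.2 m
print(round(visibility(0.05, 0.05), 1))  # same beta, eps = 0.05 -> about 59.9 m
```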
further, in computer vision and computer graphics, a fog imaging model described by the following equation is widely used:
I(x) = J(x)·t(x) + A·(1 - t(x))    (4)
t(x) = e^(-β(x)·d(x))    (5)
where I(x) is the value of the foggy (to-be-defogged) image at a pixel; J(x) is the value of the defogged image at that pixel; A is the atmospheric light component, a constant for a given image; t(x) is the transmittance of the image at that pixel; β(x) is the extinction coefficient, whose magnitude reflects the fog density: when the fog is uniform it can be treated as a constant for a given image, and when the fog is non-uniform it is the extinction coefficient of the individual pixel; d(x) is the physical distance between the target at that pixel and the camera.
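For intuition, formulas (4) and (5) can be run forward to synthesize a foggy image from a clean image and a depth map; the sketch below is illustrative only, and the uniform β and scalar A values are assumptions, not values prescribed by the invention:

```python
import numpy as np

def add_fog(J, d, beta=0.05, A=0.9):
    """Forward fog imaging model, formulas (4)-(5).

    J: clean image, H x W x 3, float in [0, 1]
    d: depth map in metres, H x W
    beta: extinction coefficient (uniform fog assumed here)
    A: atmospheric light (scalar, or a length-3 vector)
    """
    t = np.exp(-beta * d)[..., None]  # formula (5): transmittance from depth
    return J * t + A * (1.0 - t)      # formula (4): attenuated scene plus airlight
```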
From the above theoretical analysis, once the extinction coefficient is obtained from the fog imaging model, visibility can be calculated using Koschmieder's law. As shown in FIG. 1, the invention provides a fog visibility estimation method based on deep learning, comprising the following steps:
1) inputting the foggy image into a pre-constructed transmittance estimation neural network for transmittance estimation to obtain a transmittance map of the foggy image;
2) recovering the defogged image using the estimated transmittance map and the fog imaging model;
3) inputting the defogged image into a pre-constructed depth estimation neural network for depth estimation to obtain a depth map;
4) calculating an extinction coefficient map from the depth map and the transmittance map;
5) calculating the visibility map from the extinction coefficient map using Koschmieder's law.
As shown in FIG. 2, the method in step 1) for inputting the foggy image into the pre-constructed transmittance estimation neural network to obtain its transmittance map comprises the following steps (a code sketch follows the list):
1.1) constructing and initializing the transmittance estimation neural network; the network may be DehazeNet, DCPDN (Densely Connected Pyramid Dehazing Network), or the like;
1.2) loading the pre-trained network parameters of the transmittance estimation neural network and setting the network to test mode;
1.3) preprocessing the input foggy image according to the requirements of the transmittance estimation neural network, e.g., cropping, scaling, and normalizing the mean and variance;
1.4) inputting the preprocessed foggy image into the transmittance estimation neural network to obtain the transmittance map of the foggy image.
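The following PyTorch sketch illustrates steps 1.1) to 1.4). The `TransmissionNet` below is a toy stand-in (the real DehazeNet/DCPDN architectures and checkpoints are not reproduced here), and the normalization constants and weight-file path are illustrative assumptions:

```python
import torch
import torch.nn as nn

class TransmissionNet(nn.Module):
    """Toy stand-in for a transmittance estimator such as DehazeNet:
    maps an RGB image to a single-channel transmittance map in (0, 1)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        return self.body(x)

net = TransmissionNet()                             # step 1.1): construct and initialize
# net.load_state_dict(torch.load("dehazenet.pth"))  # step 1.2): pre-trained weights (path is hypothetical)
net.eval()                                          # step 1.2): test mode

hazy = torch.rand(1, 3, 240, 320)                   # placeholder foggy image in [0, 1]
mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)
x = (hazy - mean) / std                             # step 1.3): normalize mean and variance

with torch.no_grad():
    t = net(x)                                      # step 1.4): transmittance map, 1 x 1 x H x W
```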
In the step 2), the method for obtaining the defogged image comprises the following steps (a sketch of the recovery step follows):
2.1) calculating the atmospheric light component A from the foggy image;
2.2) substituting the foggy image, the transmittance map obtained in step 1), and the atmospheric light component A into the fog imaging model (i.e., formula (4)) to obtain the defogged image.
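Solving formula (4) for J(x) gives J(x) = (I(x) - A) / t(x) + A. A minimal NumPy sketch of step 2.2) follows; the lower bound on t(x) is a common numerical-stability safeguard and an assumption here, not part of the patent:

```python
import numpy as np

def dehaze(I, t, A, t_min=0.1):
    """Invert the fog imaging model, formula (4): J = (I - A) / max(t, t_min) + A.

    I: foggy image, H x W x 3, float in [0, 1]
    t: transmittance map, H x W, values in (0, 1]
    A: atmospheric light, scalar or length-3 vector
    """
    t = np.clip(t, t_min, 1.0)[..., None]  # floor t so division does not amplify noise
    J = (I - A) / t + A
    return np.clip(J, 0.0, 1.0)
```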
As shown in FIG. 3, many algorithms exist for obtaining the atmospheric light component A in step 2.1); the most widely used takes the brightest 0.1% of pixels in the dark channel map as the basis for A, and comprises the following steps (a sketch follows the list):
2.1.1) taking, for each pixel of the foggy image, the minimum value over the three RGB channels;
2.1.2) applying minimum-value filtering to the resulting minimum-value map to obtain the dark channel map;
2.1.3) taking the brightest 0.1% of pixels in the dark channel map, locating the corresponding pixels in the input image, and taking the average of their values as the atmospheric light component A.
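A minimal NumPy/SciPy sketch of steps 2.1.1) to 2.1.3); the filter window size is a parameter (the embodiment below uses 5 x 5):

```python
import numpy as np
from scipy.ndimage import minimum_filter

def atmospheric_light(I, patch=5, top=0.001):
    """Estimate A via the dark channel (steps 2.1.1 to 2.1.3).

    I: foggy image, H x W x 3, float in [0, 1]
    patch: minimum-filter window size
    top: fraction of brightest dark-channel pixels to average (0.1%)
    """
    min_rgb = I.min(axis=2)                     # 2.1.1) per-pixel minimum over R, G, B
    dark = minimum_filter(min_rgb, size=patch)  # 2.1.2) minimum filtering -> dark channel
    n = max(1, int(dark.size * top))            # 2.1.3) brightest 0.1% of the dark channel
    idx = np.argpartition(dark.ravel(), -n)[-n:]
    rows, cols = np.unravel_index(idx, dark.shape)
    return I[rows, cols].mean(axis=0)           # average of those pixels' RGB values as A
```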
As shown in FIG. 4, the method in step 3) for constructing the depth estimation neural network, initializing its parameters, and using it to estimate depth comprises the following steps (a sketch follows the list):
3.1) constructing and initializing the depth estimation neural network, e.g., FCRN (Fully Convolutional Residual Networks), DORN (Deep Ordinal Regression Network), or the like;
3.2) loading the pre-trained depth estimation neural network parameters and setting the network to test mode;
3.3) preprocessing the defogged image according to the requirements of the depth estimation network, e.g., cropping, scaling, and normalizing the mean and variance;
3.4) inputting the preprocessed defogged image into the depth estimation neural network to obtain the depth map.
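Steps 3.1) to 3.4) mirror the transmittance pipeline; a correspondingly minimal sketch, with `DepthNet` again a toy stand-in for FCRN or DORN rather than their real architectures:

```python
import torch
import torch.nn as nn

class DepthNet(nn.Module):
    """Toy stand-in for a monocular depth estimator such as FCRN or DORN:
    maps an RGB image to a single-channel, strictly positive depth map."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Softplus())  # Softplus keeps depths positive

    def forward(self, x):
        return self.body(x)

net = DepthNet().eval()               # steps 3.1)-3.2): construct, (load weights), test mode
dehazed = torch.rand(1, 3, 240, 320)  # placeholder defogged image, assumed already preprocessed
with torch.no_grad():
    d = net(dehazed)                  # step 3.4): depth map, 1 x 1 x H x W
```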
In the step 4), the transmittance map is available from step 1) and the depth map from step 3), so t(x) and d(x) in the fog imaging model are known, and the extinction coefficient β(x) of each pixel can be calculated from formula (5), yielding the extinction coefficient map.
In the step 5), the extinction coefficient map is available from step 4), so the extinction coefficient β in Koschmieder's law is known; substituting ε = 0.02 or ε = 0.05 into formula (3) gives the visibility V of each pixel, yielding the visibility map.
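Steps 4) and 5) then reduce to two element-wise formulas: β(x) = -ln t(x) / d(x) from formula (5), and V(x) = -ln(ε) / β(x) from formula (3). A NumPy sketch follows; the clipping floors are numerical-stability assumptions:

```python
import numpy as np

def extinction_map(t, d, t_min=1e-3, d_min=1e-3):
    """Step 4), formula (5): beta(x) = -ln t(x) / d(x), per pixel."""
    t = np.clip(t, t_min, 1.0 - 1e-6)  # keep ln(t) finite and strictly negative
    d = np.maximum(d, d_min)           # avoid division by zero depth
    return -np.log(t) / d

def visibility_map(beta, eps=0.02):
    """Step 5), formula (3): V(x) = -ln(eps) / beta(x), per pixel."""
    return -np.log(eps) / np.maximum(beta, 1e-6)
```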
Based on the above fog visibility estimation method, the invention also provides a fog visibility estimation system based on deep learning, comprising: a transmittance map determination module, for inputting the foggy image into a pre-constructed transmittance estimation neural network for transmittance estimation to obtain its transmittance map; a defogged image determination module, for recovering the defogged image using the estimated transmittance map and the fog imaging model; a depth map determination module, for inputting the defogged image into a pre-constructed depth estimation neural network for depth estimation to obtain a depth map; an extinction coefficient map determination module, for calculating an extinction coefficient map from the depth map and the transmittance map; and a visibility map determination module, for calculating the visibility map from the extinction coefficient map using Koschmieder's law.
Further, the transmittance map determination module comprises: a transmittance estimation neural network construction module, for constructing and initializing the transmittance estimation neural network; a first network parameter loading module, for loading the pre-trained network parameters of the transmittance estimation neural network and setting the network to test mode; a first preprocessing module, for preprocessing the input foggy image as required; and a transmittance estimation module, for inputting the preprocessed foggy image into the transmittance estimation neural network for transmittance estimation to obtain the transmittance map.
Further, the depth map determination module comprises: a depth estimation neural network construction module, for constructing and initializing the depth estimation neural network; a second network parameter loading module, for loading the pre-trained network parameters of the depth estimation neural network and setting the network to test mode; a second preprocessing module, for preprocessing the defogged image as required; and a depth estimation module, for inputting the preprocessed defogged image into the depth estimation neural network for depth estimation to obtain the depth map.
The first embodiment is as follows:
the embodiment is used for estimating the visibility in foggy days on a certain road, and comprises the following steps:
1) For the input foggy image, estimate the transmittance map with the transmittance estimation neural network.
As shown in FIG. 5, DehazeNet is selected as the transmittance estimation neural network. The network is built in PyTorch (version 1.6) and its parameters are initialized; downloaded pre-trained network parameters are loaded; the network is set to test mode; and the input image, preprocessed by normalizing its mean and variance, is fed into DehazeNet, yielding the transmittance map.
2) Recover the defogged image using the estimated transmittance map and the fog imaging model.
Specifically, the method comprises the following steps:
2.1) obtaining the atmospheric light component A:
2.1.1) taking, for each pixel of the input image, the minimum value over the three RGB channels;
2.1.2) applying minimum-value filtering with a 5 x 5 filter to the minimum-value map obtained in step 2.1.1), giving the dark channel map;
2.1.3) taking the brightest 0.1% of pixels in the dark channel map obtained in step 2.1.2), locating the corresponding pixels in the input image, and taking the average of their values as the atmospheric light component A.
2.2) substituting t(x), I(x), and A into formula (4) of the fog imaging model to obtain the defogged image.
3) Input the defogged image into the depth estimation neural network to estimate the depth map.
As shown in FIG. 6, DORN (Deep Ordinal Regression Network) is selected as the depth estimation neural network. The network is built in PyTorch (version 1.6) and its parameters are initialized; downloaded pre-trained network parameters are loaded; the network is set to test mode; and the input image, preprocessed by normalizing its mean and variance, is fed into DORN, yielding the depth map.
4) Calculate the extinction coefficient map from the depth map and the transmittance map.
With the transmittance map from step 1) and the depth map from step 3), t(x) and d(x) in the fog imaging model are known, so the extinction coefficient β(x) of each pixel can be calculated from formula (5), yielding the extinction coefficient map.
5) Calculate the visibility map from the extinction coefficient map via Koschmieder's law.
With the extinction coefficient map from step 4), the extinction coefficient β in Koschmieder's law is known; taking ε = 0.02 and substituting into formula (3) gives the visibility V of each pixel, yielding the visibility map.
A specific embodiment is given above, but the invention is not limited to it. The basic idea of the invention lies in the above scheme, and those skilled in the art can, following the teaching of the invention, design various modified models, formulas, and parameters without creative effort. Variations, modifications, substitutions, and alterations to the embodiments that do not depart from the principle and spirit of the invention remain within its scope.

Claims (10)

1. A fog visibility estimation method based on deep learning, characterized by comprising the following steps:
1) inputting the foggy image into a pre-constructed transmittance estimation neural network for transmittance estimation to obtain a transmittance map of the foggy image;
2) recovering the defogged image using the estimated transmittance map and the fog imaging model;
3) inputting the defogged image into a pre-constructed depth estimation neural network for depth estimation to obtain a depth map;
4) calculating an extinction coefficient map from the depth map and the transmittance map;
5) calculating the visibility map from the extinction coefficient map using Koschmieder's law.
2. The fog visibility estimation method based on deep learning as claimed in claim 1, wherein: in the step 1), the method for obtaining the transmittance map by inputting the foggy image into a pre-constructed transmittance estimation neural network for transmittance estimation comprises the following steps:
1.1) constructing and initializing a transmittance estimation neural network;
1.2) loading the network parameters of the pre-trained transmittance estimation neural network and setting the network parameters into a test mode;
1.3) preprocessing the input foggy image according to the requirements of the transmittance estimation neural network;
1.4) inputting the preprocessed foggy image into the transmittance estimation neural network to obtain the transmittance map.
3. The fog visibility estimation method based on deep learning as claimed in claim 1, wherein: in the step 2), the method for recovering the defogged image from the transmittance map and the foggy image comprises the following steps:
2.1) calculating the atmospheric light component A from the input foggy image;
2.2) substituting the foggy image, the transmittance map obtained in step 1), and the atmospheric light component A into the fog imaging model to obtain the defogged image.
4. The fog visibility estimation method based on deep learning as claimed in claim 3, wherein: in the step 2.1), the method for calculating the atmospheric light component A from the input foggy image comprises the following steps:
2.1.1) taking, for each pixel of the input foggy image, the minimum value over the three RGB channels;
2.1.2) applying minimum-value filtering to the minimum-value map obtained in step 2.1.1) to obtain a dark channel map;
2.1.3) taking the brightest 0.1% of pixels in the dark channel map obtained in step 2.1.2), locating the corresponding pixels in the input foggy image, and taking the average of their values as the atmospheric light component A.
5. The fog visibility estimation method based on deep learning as claimed in claim 1, wherein: in the step 3), the method for inputting the defogged image into the pre-constructed depth estimation neural network for depth estimation to obtain the depth map comprises the following steps:
3.1) constructing and initializing the depth estimation neural network;
3.2) loading the pre-trained network parameters of the depth estimation neural network and setting the network to test mode;
3.3) preprocessing the defogged image as required;
3.4) inputting the preprocessed defogged image into the depth estimation neural network for depth estimation to obtain the depth map.
6. The fog visibility estimation method based on deep learning as claimed in claim 1, wherein: in the step 4), the method for obtaining the extinction coefficient map through calculation by using the depth map and the transmittance map comprises the following steps:
firstly, calculating the extinction coefficient corresponding to each pixel according to the transmittance map and the depth map, wherein the calculation formula is as follows:
t(x) = e^(-β(x)·d(x))
where t(x) is the transmittance of the transmittance map at a pixel; β(x) is the extinction coefficient at that pixel; d(x) is the depth of the depth map at that pixel;
next, the extinction coefficient map is obtained from the extinction coefficient β(x) of each pixel.
7. The fog visibility estimation method based on deep learning as claimed in claim 1, wherein: in the step 5), the method for calculating the visibility map by using the Koschmieder law through the extinction coefficient map comprises the following steps:
firstly, calculating the visibility of each pixel point according to an extinction coefficient diagram and a Koschmieder law, wherein the calculation formula is as follows:
V(x) = -ln(ε) / β(x)
where V(x) is the visibility at a pixel; ε is the eye contrast threshold; β(x) is the extinction coefficient of the extinction coefficient map at that pixel;
and secondly, the visibility map is obtained from the visibility V(x) of each pixel.
8. A fog visibility estimation system based on deep learning adopting the method as claimed in any one of claims 1 to 7, characterized by comprising:
a transmittance map determination module, for inputting the foggy image into a pre-constructed transmittance estimation neural network for transmittance estimation to obtain a transmittance map;
a defogged image determination module, for recovering the defogged image using the estimated transmittance map and the fog imaging model;
a depth map determination module, for inputting the defogged image into a pre-constructed depth estimation neural network for depth estimation to obtain a depth map;
an extinction coefficient map determination module, for calculating an extinction coefficient map from the depth map and the transmittance map;
and a visibility map determination module, for calculating the visibility map from the extinction coefficient map using Koschmieder's law.
9. The fog visibility estimation system based on deep learning as claimed in claim 8, wherein the transmittance map determination module comprises:
a transmittance estimation neural network construction module, for constructing and initializing the transmittance estimation neural network;
a first network parameter loading module, for loading the pre-trained network parameters of the transmittance estimation neural network and setting the network to test mode;
a first preprocessing module, for preprocessing the input foggy image as required;
and a transmittance estimation module, for inputting the preprocessed foggy image into the transmittance estimation neural network for transmittance estimation to obtain the transmittance map.
10. The fog visibility estimation system based on deep learning as claimed in claim 8, wherein the depth map determination module comprises:
a depth estimation neural network construction module, for constructing and initializing the depth estimation neural network;
a second network parameter loading module, for loading the pre-trained network parameters of the depth estimation neural network and setting the network to test mode;
a second preprocessing module, for preprocessing the defogged image as required;
and a depth estimation module, for inputting the preprocessed defogged image into the depth estimation neural network for depth estimation to obtain the depth map.
Application CN202110737978.6A, priority and filing date 2021-06-30: Deep learning-based foggy day visibility estimation method and system. Granted as CN113469912B (Active).

Priority Applications (1)

CN202110737978.6A (priority and filing date 2021-06-30): Deep learning-based foggy day visibility estimation method and system


Publications (2)

CN113469912A, published 2021-10-01
CN113469912B, granted 2024-06-14

Family

ID: 77876590

Family Applications (1)

CN202110737978.6A (Active): Deep learning-based foggy day visibility estimation method and system

Country Status (1)

CN: CN113469912B


Citations (6)

* Cited by examiner

Publication number, priority date, publication date, assignee, title:
US9288458B1 *, 2015-01-31, 2016-03-15, HRL Laboratories, LLC: Fast digital image de-hazing methods for real-time video processing
CN104680494A *, 2015-03-14, 2015-06-03, Xidian University: Optimal fog image recovery method based on artificial fog addition
CN104809707A *, 2015-04-28, 2015-07-29, Southwest University of Science and Technology: Method for estimating visibility of single fog-degraded image
CN107194924A *, 2017-05-23, 2017-09-22, Chongqing University: Expressway foggy-day visibility detection method based on dark channel prior and deep learning
CN107749052A *, 2017-10-24, 2018-03-02, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences: Image defogging method and system based on deep learning neural network
CN112365467A *, 2020-11-11, 2021-02-12, Wuhan Changjiang Communication Zhilian Technology Co., Ltd.: Foggy image visibility estimation method based on single image depth estimation


Non-Patent Citations (1)

Yu Jiasong, Han Xiaqing: Algorithm analysis of daytime visibility based on image recognition technology, Electronic Measurement Technology, 31 October 2020 *

Cited By (2)

Publication number, priority date, publication date, assignee, title:
CN116664448A *, 2023-07-24, 2023-08-29, Nanjing University of Posts and Telecommunications: Medium-high visibility calculation method and system based on image defogging
CN116664448B *, 2023-07-24, 2023-10-03, Nanjing University of Posts and Telecommunications: Medium-high visibility calculation method and system based on image defogging

Also Published As

CN113469912B, published 2024-06-14


Legal Events

Code: Title
PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant