CN111553856A - Image defogging method based on depth estimation assistance - Google Patents

Image defogging method based on depth estimation assistance

Info

Publication number
CN111553856A
Authority
CN
China
Prior art keywords
layer
image
convolution unit
convolution
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010331563.4A
Other languages
Chinese (zh)
Other versions
CN111553856B (en)
Inventor
王柯俨
王迪
陈静怡
许宁
王光第
吴宪云
李云松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN202010331563.4A priority Critical patent/CN111553856B/en
Publication of CN111553856A publication Critical patent/CN111553856A/en
Application granted granted Critical
Publication of CN111553856B publication Critical patent/CN111553856B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T5/73
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The invention discloses an image defogging method based on depth estimation assistance, which mainly solves the prior-art problems of poor fog-distribution estimation and loss of texture details in the recovered image. The scheme is as follows: a depth estimation network and a defogging network are respectively constructed under the PyTorch framework; a set of fog-free images J is obtained, and depth estimation and artificial fog synthesis are performed on J to obtain a depth image set D and a foggy image set I; the depth estimation network and the defogging network are respectively trained with the depth image set and the foggy image set to obtain the trained depth estimation network and defogging network; the image to be defogged Ic is input into the trained depth estimation network, which outputs the estimated depth map Dc; the image to be defogged Ic and the depth map Dc are then input into the trained defogging network, which outputs a clear image. The invention can well recover the details and tones of complex images; its peak signal-to-noise ratio and structural similarity are higher than or close to those of the prior art, and the method can be used for the sharpening of foggy images.

Description

Image defogging method based on depth estimation assistance
Technical Field
The invention belongs to the technical field of digital image processing, and particularly relates to an image defogging method which can be used for the clarification of a single foggy image shot by an imaging system in a foggy environment.
Background
Scattering and absorption by the transmission medium are common causes of image degradation. Under the influence of haze, image acquisition equipment is affected by the scattering and absorption of light by a large number of particles suspended in the air, so the captured image has low visibility and contrast and some objects cannot be distinguished, which severely affects the effectiveness of various image-based information processing systems. Therefore, research on image defogging under haze conditions has important significance and application prospects.
Early defogging algorithms were mostly based on prior knowledge and the atmospheric scattering model, and the key issue was how to estimate the atmospheric light and the transmittance. These methods take the formation principle of the foggy image as their basis: they extract features of the foggy image through various prior assumptions, use those features to estimate the parameters of the atmospheric scattering model, and substitute the parameters into the model formula to realize image defogging. A well-known example is the dark channel prior (DCP) defogging method proposed by He et al., see He K, Sun J, Tang X. Single image haze removal using dark channel prior [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, 33(12): 2341-2353.
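For illustration only, the following minimal NumPy sketch shows how such prior-based methods invert the atmospheric scattering model I = J·t + A·(1 − t) once the atmospheric light A and the transmittance t have been estimated; the lower bound t0 on the transmittance is an assumed value, and the sketch is not the DCP algorithm itself:

    import numpy as np

    def recover_scene_radiance(I, t, A, t0=0.1):
        """Invert the atmospheric scattering model I = J*t + A*(1-t).

        I : (H, W, 3) hazy image in [0, 1]
        t : (H, W) estimated transmittance map
        A : (3,) estimated atmospheric light
        t0: lower bound on t to avoid amplifying noise (assumed value)
        """
        t = np.clip(t, t0, 1.0)[..., None]   # (H, W, 1) for broadcasting
        J = (I - A) / t + A                  # per-pixel inversion of the model
        return np.clip(J, 0.0, 1.0)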
With the rise of deep learning, defogging methods based on deep learning have gradually become a research hotspot in the field of image defogging. Such methods avoid the difficulty of hand-designing feature extraction algorithms by letting a convolutional neural network extract general features of the foggy image layer by layer. Although they achieve good results, data are selected in batches during training, so the foggy images of some scenes show color distortion. Ren et al. then proposed a multi-scale convolutional-neural-network algorithm, the MSCNN network, to estimate the transmittance in the atmospheric scattering model, see Ren W, Liu S, Zhang H, et al. Single Image Dehazing via Multi-Scale Convolutional Neural Networks [C]// European Conference on Computer Vision (ECCV), 2016.
In order to overcome the color shift caused by inaccurate atmospheric light estimation and the attenuation of light during transmission, most recent defogging methods use a convolutional neural network to estimate the defogged image directly, realizing end-to-end processing from the foggy image to the defogged image. Typical methods include:
1) Ren et al. proposed a gated fusion defogging network in 2018, see Wenqi Ren, Lin Ma, Jiawei Zhang, Jinshan Pan, et al. Gated Fusion Network for Single Image Dehazing [C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018: 3253-3261. It first applies preprocessing such as white balance, contrast enhancement and gamma enhancement to the foggy image, then performs weighted fusion of these inputs with three feature maps output by the network, and finally obtains the defogged image;
2) Liu et al. proposed a generic model-agnostic convolutional neural network for image defogging, see Z. Liu, B. Xiao, M. Alrabeiah, K. Wang and J. Chen. Single Image Dehazing with a Generic Model-Agnostic Convolutional Neural Network [J]// IEEE Signal Processing Letters, 2019, 26(6): 833-837.
In summary, the existing end-to-end defogging neural networks have the following disadvantages: first, the gated fusion network has no structure designed to estimate the fog distribution, so defogging is incomplete; second, the model-agnostic convolutional neural network has a simple structure and adopts deconvolution layers, so its nonlinear fitting capacity is limited and it is prone to the checkerboard effect.
Disclosure of Invention
The invention aims to provide an image defogging method based on depth estimation assistance in view of the defects of the prior art, so as to perform defogging entirely with convolutional neural networks, reduce the loss of texture details in the recovered image and improve the defogging effect.
In order to achieve the above object, the depth-estimation-aided image defogging method of the invention comprises the following steps:
1) respectively constructing a depth estimation network and a defogging network under the PyTorch framework:
the depth estimation network comprises eight convolution units, six residual module layers, six upsampling layers and five pooling layers; the defogging network comprises five convolution units, six recalibration module layers, one upsampling layer and five pooling layers;
each convolution unit comprises a convolution layer and a normalization layer, wherein the convolution layer comprises a convolution operation layer and a LeakyReLU activation function layer;
2) obtaining a group of clear fog-free images Jt and a depth image set Dt, calculating a transmittance image set T from the depth image set Dt, and artificially fogging the fog-free images Jt according to the transmittance image set T and an artificially set atmospheric light value A to obtain a foggy image set It; taking the foggy image set It and the depth image set Dt as the training image set of the depth estimation network, and taking the fog-free image set Jt and the foggy image set It as the training image set of the defogging network;
3) training the depth estimation network:
3a) inputting each image of the training image set It into the depth estimation network in turn and outputting the estimated corresponding depth image D*;
3b) substituting the corresponding depth image Dt from the depth image set and the estimated depth image D* into the weighted total loss formula, and calculating the weighted total loss value corresponding to each image in the training set;
3c) using the network-parameter update rule of the Adam algorithm, with minimization of the weighted total loss value as the objective, updating the parameters of the depth estimation network to obtain the trained depth estimation network;
4) training the defogging network:
4a) inputting each image of the training image set It in turn into the trained depth estimation network and outputting the estimated corresponding depth image D*;
4b) inputting each image of the training image set It and its corresponding depth image D* into the defogging network and outputting the corresponding estimated defogged image J*;
4c) substituting the corresponding fog-free image Jt from the fog-free image set and the estimated defogged image J* into the weighted total loss formula, and calculating the weighted total loss value corresponding to each image in the training set;
4d) using the network-parameter update rule of the Adam algorithm, with minimization of the weighted total loss value as the objective, updating the parameters of the defogging network to obtain the trained defogging network;
5) inputting a foggy image Ic that needs defogging into the trained depth estimation network and outputting a depth image Dc;
6) inputting the output depth image Dc and the foggy image Ic into the trained defogging network, and outputting the defogged image Jc.
Compared with the prior art, the invention has the following beneficial effects:
firstly, the invention estimates the depth of the image in advance through the depth estimation network, which solves the prior-art problem of incomplete defogging in the edge regions of the scene;
secondly, because the network parameters are updated with a weighted total loss designed for texture details, the invention better preserves the detail information of the restored image.
Simulation results show that, on the premise of keeping the contrast and color saturation of the recovered image, the method has an obvious defogging effect compared with the existing gated fusion network and model-agnostic convolutional neural network, can better recover the background information of the image and improve the visual effect; its peak signal-to-noise ratio (PSNR) is superior to the prior art, and its structural similarity (SSIM) is close to the prior art.
Drawings
FIG. 1 is a general flow chart of an implementation of the present invention;
FIG. 2 is a diagram of a depth estimation network architecture constructed in accordance with the present invention;
FIG. 3 is a diagram of a defogging network constructed in the present invention;
FIG. 4 is a comparison of the defogging effects of the present invention and existing deep-learning-based defogging algorithms on synthesized foggy images;
FIG. 5 is a comparison of the defogging effects of the present invention and existing deep-learning-based defogging algorithms on real foggy images.
Detailed Description
The following further describes embodiments of the present invention with reference to the accompanying drawings:
referring to fig. 1, the specific implementation steps of the present invention are as follows:
step 1: and constructing a depth estimation network architecture under a Pythrch framework.
As shown in fig. 2, the depth estimation network constructed under the PyTorch framework of the present invention includes a preprocessing part, a post-processing part and an output part, and contains eight convolution units, six residual module layers, six upsampling layers and five pooling layers, where each convolution unit includes a convolution layer and a normalization layer, and the convolution layer includes a convolution operation and a LeakyReLU activation function layer (a PyTorch sketch of such a convolution unit and of one post-processing branch is given after the parameter list below).
The preprocessing part has the following structure in sequence: the first convolution unit → the second convolution unit → the third convolution unit → the first residual module layer → the first pooling layer → the second residual module layer → the second pooling layer → the third residual module layer → the third pooling layer → the fourth residual module layer → the first upsampling layer → the fifth residual module layer → the second upsampling layer → the sixth residual module layer → the third upsampling layer; the first residual module layer is simultaneously connected with the sixth residual module layer; the second residual module layer is simultaneously connected with the fifth residual module layer; the third residual module layer is simultaneously connected with the fourth residual module layer;
the post-processing part comprises three branch structures: the first branch is, in sequence, the fourth pooling layer → the fourth convolution unit → the fourth upsampling layer; the second branch is, in sequence, the fifth pooling layer → the fifth convolution unit → the fifth upsampling layer; the third branch is, in sequence, the sixth pooling layer → the sixth convolution unit → the sixth upsampling layer; the fourth pooling layer, the fifth pooling layer and the sixth pooling layer are all connected with the third upsampling layer;
the structure of the output part is as follows in sequence: seventh convolution unit → eighth convolution unit; the seventh convolution unit is simultaneously connected with the fourth upsampling layer, the fifth upsampling layer and the sixth upsampling layer.
The parameters of each layer are set as follows:
the convolution kernels of the first convolution unit and the second convolution unit are 7 × 7 and 5 × 5 respectively;
the convolution kernel sizes of the third, fourth, fifth, sixth, seventh and eighth convolution units are all 3 × 3; the convolution step of all convolution units is 1;
the scaling factors of the first, second and third pooling layers are all 2;
the scaling factor of the fourth pooling layer is 4, the scaling factor of the fifth pooling layer is 8, and the scaling factor of the sixth pooling layer is 16;
the scaling factors of the first upsampling layer, the second upsampling layer and the third upsampling layer are all 2;
the scaling factor for the fourth upsampling layer is 4, the scaling factor for the fifth upsampling layer is 8, and the scaling factor for the sixth upsampling layer is 16.
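The patent gives no source code; the following PyTorch sketch illustrates one possible implementation of a convolution unit and of one post-processing branch as described above. The normalization type (BatchNorm), the pooling type (average pooling), the upsampling mode (bilinear) and the channel counts are assumptions not specified in the text:

    import torch
    import torch.nn as nn

    class ConvUnit(nn.Module):
        """Convolution unit: convolution + LeakyReLU (the 'convolution layer'),
        followed by a normalization layer (BatchNorm is an assumption)."""
        def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
            super().__init__()
            self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                                  stride=stride, padding=kernel_size // 2)
            self.act = nn.LeakyReLU(inplace=True)
            self.norm = nn.BatchNorm2d(out_ch)

        def forward(self, x):
            return self.norm(self.act(self.conv(x)))

    class PostBranch(nn.Module):
        """One post-processing branch: pooling -> convolution unit -> upsampling.
        Average pooling and bilinear upsampling are assumptions."""
        def __init__(self, channels, scale):
            super().__init__()
            self.pool = nn.AvgPool2d(kernel_size=scale, stride=scale)
            self.conv = ConvUnit(channels, channels, kernel_size=3)
            self.up = nn.Upsample(scale_factor=scale, mode='bilinear',
                                  align_corners=False)

        def forward(self, x):
            return self.up(self.conv(self.pool(x)))

Under these assumptions, the fourth, fifth and sixth branches would be PostBranch(channels, 4), PostBranch(channels, 8) and PostBranch(channels, 16), and their outputs feed the seventh convolution unit.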
Step 2: construct the defogging network architecture under the PyTorch framework.
As shown in fig. 3, the defogging network constructed in this example includes an encoding part and a decoding part, and contains five convolution units, six recalibration module layers, one upsampling layer and five pooling layers; each convolution unit includes a convolution layer and a normalization layer, and the convolution layer includes a convolution operation and a LeakyReLU activation function layer. Each recalibration module layer comprises two parts (a sketch of this module is given after the parameter list below). The first part is, in sequence: the I convolution unit → the II convolution unit → the I fully connected layer → the LeakyReLU activation function layer → the II fully connected layer → the Sigmoid activation function layer; the second part is, in sequence: the III convolution unit → the I upsampling layer → the IV convolution unit. The Sigmoid activation function layer is connected with the III convolution unit, and the II convolution unit is connected with the III convolution unit. The parameters of the recalibration module layer are set as follows: the convolution kernels of the I, II, III and IV convolution units are all 3 × 3; the convolution step of the I convolution unit is 2; the convolution steps of the II, III and IV convolution units are all 1; the scaling factor of the I upsampling layer is 2; the numbers of input and output feature maps of the I fully connected layer are 64 and 32 respectively; the numbers of input and output feature maps of the II fully connected layer are 32 and 64 respectively.
The structure of the encoding part is, in sequence: the 1st convolution unit → the 2nd convolution unit → the 3rd convolution unit → the 4th convolution unit → the 1st recalibration module layer → the 2nd recalibration module layer → the 3rd recalibration module layer;
the structure of the decoding part is, in sequence: the 4th recalibration module layer → the 5th recalibration module layer → the 6th recalibration module layer → the 1st upsampling layer → the 5th convolution unit;
the 4th convolution unit is simultaneously connected with the 2nd, 3rd, 4th, 5th and 6th recalibration module layers respectively.
The parameters of each layer are set as follows:
the convolution kernel size of the 1st convolution unit is 7 × 7, and the convolution kernel size of the 2nd convolution unit is 5 × 5;
the convolution kernels of the 3rd, 4th and 5th convolution units are all 3 × 3;
the convolution steps of the 1st, 2nd, 3rd and 5th convolution units are all 1;
the convolution step of the 4th convolution unit is 2;
the scaling factor of the 1st upsampling layer is 2.
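As a rough illustration, one recalibration module layer matching the layer sequence and parameters above can be sketched in PyTorch as follows; the global average pooling before the fully connected layers and the element-wise channel re-weighting are assumptions, since the patent only lists the layer order and the branch connections:

    import torch
    import torch.nn as nn

    class RecalibrationModule(nn.Module):
        """Sketch of one recalibration module layer (channel counts assumed)."""
        def __init__(self, channels=64):
            super().__init__()
            # Part 1: conv I (stride 2) -> conv II -> FC(64->32) -> LeakyReLU
            #         -> FC(32->64) -> Sigmoid
            self.conv1 = nn.Conv2d(channels, channels, 3, stride=2, padding=1)
            self.conv2 = nn.Conv2d(channels, channels, 3, stride=1, padding=1)
            self.fc1 = nn.Linear(channels, channels // 2)   # 64 -> 32
            self.act = nn.LeakyReLU(inplace=True)
            self.fc2 = nn.Linear(channels // 2, channels)   # 32 -> 64
            self.sigmoid = nn.Sigmoid()
            # Part 2: conv III -> upsampling x2 -> conv IV
            self.conv3 = nn.Conv2d(channels, channels, 3, stride=1, padding=1)
            self.up = nn.Upsample(scale_factor=2, mode='bilinear',
                                  align_corners=False)
            self.conv4 = nn.Conv2d(channels, channels, 3, stride=1, padding=1)

        def forward(self, x):
            f = self.conv2(self.conv1(x))            # downsampled feature maps
            w = f.mean(dim=(2, 3))                   # assumed global pooling
            w = self.sigmoid(self.fc2(self.act(self.fc1(w))))
            f = f * w[:, :, None, None]              # channel recalibration
            return self.conv4(self.up(self.conv3(f)))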
Step 3: make the training image set.
3.1) download 1000 fog-free images of different scenes from the network to form the fog-free image set Jt, and uniformly scale every image in Jt to 256 × 256 using a bilinear interpolation algorithm;
3.2) for the fog-free image set Jt, estimate the depth information Dt corresponding to each image using a depth-of-field estimation CNN model;
3.3) randomly generate an atmospheric scattering coefficient β between 1.0 and 2.5 with a random function, and calculate the transmittance of each image as T = e^(−β·Dt);
3.4) randomly generate an atmospheric light value A between 0.8 and 1.0 with a random function, and calculate 20000 foggy images It = Jt·T + A·(1 − T) as the foggy image set (a sketch of this synthesis procedure is given after step 3.6);
3.5) use the foggy image set It and the depth information Dt as the training image set of the depth estimation network;
3.6) use the fog-free image set Jt and the foggy image set It as the training image set of the defogging network.
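A minimal sketch of the fog-synthesis procedure of steps 3.3) and 3.4) is given below; the depth map is assumed to be already normalized, and the depth-of-field estimation CNN itself is not shown:

    import numpy as np

    def synthesize_foggy_image(J, D, rng=None):
        """Synthesize one foggy image from a clear image and its depth map.

        J : (H, W, 3) fog-free image in [0, 1]
        D : (H, W) depth map (assumed normalized)
        """
        if rng is None:
            rng = np.random.default_rng()
        beta = rng.uniform(1.0, 2.5)          # atmospheric scattering coefficient
        A = rng.uniform(0.8, 1.0)             # atmospheric light value
        T = np.exp(-beta * D)[..., None]      # transmittance T = e^(-beta * D)
        I = J * T + A * (1.0 - T)             # atmospheric scattering model
        return I, T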
Step 4: train the depth estimation network.
4.1) input each image of the training image set It into the depth estimation network in turn, and output the estimated corresponding depth image D*;
4.2) from the corresponding depth image Dt in the depth image set and the estimated corresponding depth image D*, calculate the weighted total loss value of the depth training set by the following formula:
[weighted total loss formula — given as an image in the original publication]
where ||·||2 denotes the two-norm operation on a matrix, |·| denotes the element-wise absolute value operation on a matrix, m is the number of pixels of the input image, and Sobel(·) denotes convolution with the four-direction Sobel operator.
The calculation of the weighted total loss value for the depth training set is not limited to the above formula.
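Since the published text renders the formula as an image, the following PyTorch sketch shows only one plausible form consistent with the description: a two-norm term on the depth error plus an absolute-difference term on four-direction Sobel responses, both averaged over the m pixels; the relative weight lam of the two terms is an assumption:

    import torch
    import torch.nn.functional as F

    # Four-direction Sobel kernels: horizontal, vertical and the two diagonals.
    _SOBEL = torch.tensor([[[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]],
                           [[-1, -2, -1], [0, 0, 0], [1, 2, 1]],
                           [[0, 1, 2], [-1, 0, 1], [-2, -1, 0]],
                           [[-2, -1, 0], [-1, 0, 1], [0, 1, 2]]],
                          dtype=torch.float32).unsqueeze(1)   # (4, 1, 3, 3)

    def sobel(x):
        """Apply the four-direction Sobel operator to each channel of x."""
        b, c, h, w = x.shape
        k = _SOBEL.to(device=x.device, dtype=x.dtype).repeat(c, 1, 1, 1)
        return F.conv2d(x, k, padding=1, groups=c)             # (b, 4c, h, w)

    def weighted_total_loss(pred, target, lam=1.0):
        """Assumed form: mean squared error + lam * mean absolute Sobel difference."""
        m = pred[0].numel()                    # elements per image (pixel count m)
        n = pred.shape[0]                      # batch size
        l2 = torch.sum((target - pred) ** 2) / (m * n)
        grad = torch.sum(torch.abs(sobel(target) - sobel(pred))) / (m * n)
        return l2 + lam * grad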
4.3) update the parameters of the depth estimation network using the network-parameter update rule of the Adam algorithm, with minimization of the weighted total loss value as the objective, and repeat steps 4.1), 4.2) and 4.3) 10000000 times to obtain the trained depth estimation network.
Step 5: train the defogging network.
5.1) estimate the depth images:
input each image of the training image set It in turn into the trained depth estimation network, and output the estimated corresponding depth image D*;
5.2) estimate the defogged images:
input each image of the training image set It and its corresponding depth image D* into the defogging network, and output the corresponding estimated defogged image J*;
5.3) from the corresponding fog-free image Jt in the fog-free image set and the estimated corresponding defogged image J*, calculate the weighted total loss value of the defogging training set by the following formula:
[weighted total loss formula — given as an image in the original publication]
where ||·||2 denotes the two-norm operation on a matrix, |·| denotes the element-wise absolute value operation on a matrix, m is the number of pixels of the input image, and Sobel(·) denotes convolution with the four-direction Sobel operator.
The calculation of the weighted total loss value for the defogging training set is not limited to the above formula.
5.4) update the parameters of the defogging network using the network-parameter update rule of the Adam algorithm, with minimization of the weighted total loss value as the objective, and repeat steps 5.1), 5.2), 5.3) and 5.4) 8000000 times to obtain the trained defogging network.
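A minimal sketch of the Adam-based training loop of steps 5.1)-5.4) is given below; the module interfaces (a depth network taking the foggy image, a defogging network taking the foggy image and the depth map), the learning rate and the data loader are assumptions, and weighted_total_loss refers to the hedged loss sketched in step 4:

    import torch

    def train_dehaze_network(dehaze_net, depth_net, loader, steps=8000000, lr=1e-4):
        """Hypothetical training loop: the trained depth network is frozen and
        the defogging network is updated with Adam to minimize the weighted loss."""
        depth_net.eval()                                   # depth estimator is fixed
        opt = torch.optim.Adam(dehaze_net.parameters(), lr=lr)
        it = iter(loader)
        for step in range(steps):
            try:
                I_t, J_t = next(it)                        # foggy / fog-free pair
            except StopIteration:
                it = iter(loader)
                I_t, J_t = next(it)
            with torch.no_grad():
                D_star = depth_net(I_t)                    # estimated depth image D*
            J_star = dehaze_net(I_t, D_star)               # estimated defogged image J*
            loss = weighted_total_loss(J_star, J_t)
            opt.zero_grad()
            loss.backward()
            opt.step()
        return dehaze_net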
Step 6: perform image defogging.
6.1) input the foggy image Ic that requires defogging into the trained depth estimation network, which outputs the depth image Dc;
6.2) input the foggy image Ic and the depth image Dc together into the trained defogging network, which outputs the clear image Jc.
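Under the same assumptions, defogging a single image as in steps 6.1) and 6.2) reduces to two forward passes:

    import torch

    def dehaze_image(I_c, depth_net, dehaze_net):
        """Two-stage inference: estimate the depth map, then defog.
        I_c is a (1, 3, H, W) foggy image tensor in [0, 1]."""
        with torch.no_grad():
            D_c = depth_net(I_c)               # step 6.1: depth estimation
            J_c = dehaze_net(I_c, D_c)         # step 6.2: defogging
        return J_c.clamp(0.0, 1.0)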
The effects of the present invention are further illustrated by the following simulations:
simulation conditions
1. Test images: four real foggy images downloaded from the network, and the Reside OTS and Make3D datasets;
2. Simulation methods: five methods are compared, namely the existing DCP algorithm, the DehazeNet algorithm, the GMAN algorithm, the DuRB-US algorithm, and the method of the present invention;
secondly, simulation test content:
simulation test 1: the four resultant fog maps were defogged using the five methods described above, the results are shown in fig. 4, where:
figure 4a is a composite four hazy image,
figure 4b is the result of the defogging process using the DCP algorithm on the foggy image of figure 4a,
figure 4c shows the result of defogging the foggy image of figure 4a using the DehazeNet algorithm,
figure 4d is the result of defogging the foggy image of figure 4a using the GMAN algorithm,
figure 4e is the result of the defogging process using the DuRB-US algorithm on the foggy image of figure 4a,
figure 4f is the result of the defogging process applied to the foggy image of figure 4a using the method of the present invention,
FIG. 4g is four fog-free images;
as can be seen from fig. 4, the images restored by using the existing DCP algorithm and the DuRB-US algorithm have a certain color cast, and the images restored by using the existing DehazeNet algorithm and the GMAN algorithm still have residual fog. The image effect recovered by the method is better than other four defogging algorithms and is closer to 4g of a fog-free image.
Simulation test 2: the four real foggy images were defogged using the five methods described above; the results are shown in fig. 5, where:
fig. 5a shows the four real foggy images,
fig. 5b is the result of defogging the foggy images of fig. 5a using the DCP algorithm,
fig. 5c is the result of defogging the foggy images of fig. 5a using the DehazeNet algorithm,
fig. 5d is the result of defogging the foggy images of fig. 5a using the GMAN algorithm,
fig. 5e is the result of defogging the foggy images of fig. 5a using the DuRB-US algorithm,
fig. 5f is the result of defogging the foggy images of fig. 5a using the method of the present invention;
as can be seen from fig. 5, the images restored by using the existing DCP algorithm and the DuRB-US algorithm have severe color cast, and the images restored by using the existing DehazeNet algorithm and the GMAN algorithm still have obvious haze in the edge region of the object. The image effect recovered by the method of the invention is better than other four defogging algorithms.
Simulation test 3: the Reside OTS and Make3D datasets were defogged using the five methods described above, and the structural similarity (SSIM) and peak signal-to-noise ratio (PSNR) indices were compared, as shown in Table 1:
TABLE 1
[Table 1: PSNR and SSIM values of the five methods on the Reside OTS and Make3D datasets — provided as an image in the original publication]
As can be seen from Table 1, the PSNR and SSIM values of the method of the present invention are higher than or equal to those of the DCP, DehazeNet and DuRB-US algorithms; its SSIM index is slightly lower than that of the GMAN algorithm while its PSNR index is much higher, so the overall effect is better than that of the other four algorithms.
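For reference, the PSNR and SSIM of a restored image against its ground truth can be computed, for example, with scikit-image; the data range and the averaging over the test set are evaluation details assumed here, since the patent does not state how the indices were obtained:

    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def evaluate_pair(restored, ground_truth):
        """restored, ground_truth: (H, W, 3) float images in [0, 1]."""
        psnr = peak_signal_noise_ratio(ground_truth, restored, data_range=1.0)
        ssim = structural_similarity(ground_truth, restored,
                                     channel_axis=-1, data_range=1.0)
        return psnr, ssim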
Combining the comparisons of the simulation results of the five algorithms, the defogging effect of the present method on various foggy images is superior to that of the other four algorithms.

Claims (10)

1. An image defogging method based on depth estimation assistance is characterized by comprising the following steps:
1) respectively constructing a depth estimation network and a defogging network under the PyTorch framework:
the depth estimation network comprises eight convolution units, six residual module layers, six upsampling layers and five pooling layers; the defogging network comprises five convolution units, six recalibration module layers, one upsampling layer and five pooling layers;
each convolution unit comprises a convolution layer and a normalization layer, wherein the convolution layer comprises a convolution operation layer and a LeakyReLU activation function layer;
2) obtaining a set of clear fog-free images Jt, estimating the depth-of-field information Dt corresponding to each image using a depth-of-field estimation CNN model, calculating a transmittance image set T from the depth image set Dt, and artificially fogging the fog-free images Jt according to the transmittance image set T and an artificially set atmospheric light value A to obtain a foggy image set It; taking the foggy image set It and the depth image set Dt as the training image set of the depth estimation network, and taking the fog-free image set Jt and the foggy image set It as the training image set of the defogging network;
3) training the depth estimation network:
3a) inputting each image of the training image set It into the depth estimation network in turn and outputting the estimated corresponding depth image D*;
3b) substituting the corresponding depth image Dt from the depth image set and the estimated corresponding depth image D* into the weighted total loss formula designed for texture details, and calculating the weighted total loss value corresponding to each image in the training set;
3c) using the network-parameter update rule of the Adam algorithm, with minimization of the weighted total loss value as the objective, updating the parameters of the depth estimation network to obtain the trained depth estimation network;
4) training the defogging network:
4a) inputting each image of the training image set It in turn into the trained depth estimation network and outputting the estimated corresponding depth image D*;
4b) inputting each image of the training image set It and its corresponding depth image D* into the defogging network and outputting the corresponding estimated defogged image J*;
4c) substituting the corresponding fog-free image Jt from the fog-free image set and the estimated defogged image J* into the weighted total loss formula designed for texture details, and calculating the weighted total loss value corresponding to each image in the training set;
4d) using the network-parameter update rule of the Adam algorithm, with minimization of the weighted total loss value as the objective, updating the parameters of the defogging network to obtain the trained defogging network;
5) inputting a foggy image Ic that needs defogging into the trained depth estimation network and outputting a depth image Dc;
6) inputting the output depth image Dc and the foggy image Ic into the trained defogging network, and outputting the defogged image Jc.
2. The method of claim 1, wherein the depth estimation network constructed in step 1) comprises a preprocessing part, a post-processing part and an output part;
the preprocessing part is, in sequence: the first convolution unit → the second convolution unit → the third convolution unit → the first residual module layer → the first pooling layer → the second residual module layer → the second pooling layer → the third residual module layer → the third pooling layer → the fourth residual module layer → the first upsampling layer → the fifth residual module layer → the second upsampling layer → the sixth residual module layer → the third upsampling layer; the first residual module layer is simultaneously connected with the sixth residual module layer; the second residual module layer is simultaneously connected with the fifth residual module layer, and the third residual module layer is simultaneously connected with the fourth residual module layer;
the post-processing section has three branches:
the first branch is in turn: the fourth pooling layer → the fourth convolution unit → the fourth upsampling layer;
the second branch is sequentially as follows: fifth pooling layer → fifth convolution unit → fifth upsampling layer;
the third branch is sequentially as follows: sixth pooling layer → sixth convolution unit → sixth upsampling layer; the fourth pooling layer, the fifth pooling layer and the sixth pooling layer are all connected with the third upper sampling layer;
the output part is sequentially as follows: seventh convolution unit → eighth convolution unit; the seventh convolution unit is simultaneously connected with the fourth upsampling layer, the fifth upsampling layer and the sixth upsampling layer.
3. The method of claim 2, wherein: the parameters of each layer of the depth estimation network are set as follows:
the convolution kernels of the first convolution unit and the second convolution unit are 7 × 7 and 5 × 5 respectively;
the sizes of convolution kernels of the third convolution unit, the fourth convolution unit, the fifth convolution unit, the sixth convolution unit, the seventh convolution unit and the eighth convolution unit are all 3 x 3; the convolution step length of all convolution units is 1;
the scaling factors of the first, second and third pooling layers are all 2;
the scaling factor of the fourth pooling layer is 4, the scaling factor of the fifth pooling layer is 8, and the scaling factor of the sixth pooling layer is 16;
the scaling factors of the first upsampling layer, the second upsampling layer and the third upsampling layer are all 2;
the scaling factor for the fourth upsampling layer is 4, the scaling factor for the fifth upsampling layer is 8, and the scaling factor for the sixth upsampling layer is 16.
4. The method of claim 1, wherein the defogging network constructed in step 1) includes an encoding part and a decoding part;
the encoding part is, in sequence: the 1st convolution unit → the 2nd convolution unit → the 3rd convolution unit → the 4th convolution unit → the 1st recalibration module layer → the 2nd recalibration module layer → the 3rd recalibration module layer;
the decoding part is, in sequence: the 4th recalibration module layer → the 5th recalibration module layer → the 6th recalibration module layer → the 1st upsampling layer → the 5th convolution unit;
and the 4th convolution unit is simultaneously connected with the 2nd, 3rd, 4th, 5th and 6th recalibration module layers respectively, giving the constructed defogging network.
5. The method of claim 4, wherein: the parameters of each layer of the defogging network are set as follows:
the convolution kernel size of the 1st convolution unit is 7 × 7, and the convolution kernel size of the 2nd convolution unit is 5 × 5;
the convolution kernels of the 3rd, 4th and 5th convolution units are all 3 × 3;
the convolution steps of the 1st, 2nd, 3rd and 5th convolution units are all 1;
the convolution step of the 4th convolution unit is 2;
the scaling factor of the 1st upsampling layer is 2.
6. The method of claim 1, wherein each recalibration module layer in step 1) has a structure of two parts:
the first part is, in sequence: the I convolution unit → the II convolution unit → the I fully connected layer → the LeakyReLU activation function layer → the II fully connected layer → the Sigmoid activation function layer;
the second part is, in sequence: the III convolution unit → the I upsampling layer → the IV convolution unit; the Sigmoid activation function layer is connected with the III convolution unit, and the II convolution unit is connected with the III convolution unit, giving the recalibration module layer.
7. The method of claim 6, wherein the parameters of the recalibration module layer are set as follows:
the convolution kernels of the I, II, III and IV convolution units are all 3 × 3;
the convolution step of the I convolution unit is 2;
the convolution steps of the II, III and IV convolution units are all 1;
the scaling factor of the I upsampling layer is 2;
the numbers of input and output feature maps of the I fully connected layer are 64 and 32 respectively;
the numbers of input and output feature maps of the II fully connected layer are 32 and 64 respectively.
8. The method of claim 1, wherein the transmittance image set T in step 2) is calculated from the depth image set Dt by the following formula:
T = e^(−β·Dt)
where e is the base of the natural logarithm, β is the atmospheric scattering coefficient (the fog is assumed to be uniformly distributed in the image, and β is randomly generated between 1.0 and 2.5 using a random function), and Dt is the corresponding depth image set.
9. The method of claim 1, wherein the weighted total loss formula in 3b) is expressed as follows:
[weighted total loss formula — given as an image in the original publication]
where ||·||2 denotes the two-norm operation on a matrix, |·| denotes the element-wise absolute value operation on a matrix, m is the number of pixels of the input image, Sobel(·) denotes convolution with the four-direction Sobel operator, Dt is the corresponding depth image in the depth image set, and D* is the estimated corresponding depth image.
10. The method of claim 1, wherein the weighted total loss formula in 4c) is expressed as follows:
[weighted total loss formula — given as an image in the original publication]
where ||·||2 denotes the two-norm operation on a matrix, |·| denotes the element-wise absolute value operation on a matrix, m is the number of pixels of the input image, Sobel(·) denotes convolution with the four-direction Sobel operator, Jt is the corresponding fog-free image in the fog-free image set, and J* is the estimated corresponding defogged image.
CN202010331563.4A 2020-04-24 2020-04-24 Image defogging method based on depth estimation assistance Active CN111553856B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010331563.4A CN111553856B (en) 2020-04-24 2020-04-24 Image defogging method based on depth estimation assistance

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010331563.4A CN111553856B (en) 2020-04-24 2020-04-24 Image defogging method based on depth estimation assistance

Publications (2)

Publication Number Publication Date
CN111553856A true CN111553856A (en) 2020-08-18
CN111553856B CN111553856B (en) 2023-03-24

Family

ID=72003942

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010331563.4A Active CN111553856B (en) 2020-04-24 2020-04-24 Image defogging method based on depth estimation assistance

Country Status (1)

Country Link
CN (1) CN111553856B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112365476A (en) * 2020-11-13 2021-02-12 南京信息工程大学 Fog visibility detection method based on dual-channel deep network
CN112734679A (en) * 2021-01-26 2021-04-30 西安理工大学 Fusion defogging method for medical operation video images

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100259651A1 (en) * 2009-04-08 2010-10-14 Raanan Fattal Method, apparatus and computer program product for single image de-hazing
US20180231871A1 (en) * 2016-06-27 2018-08-16 Zhejiang Gongshang University Depth estimation method for monocular image based on multi-scale CNN and continuous CRF
CN108805839A (en) * 2018-06-08 2018-11-13 西安电子科技大学 Combined estimator image defogging method based on convolutional neural networks
CN109146810A (en) * 2018-08-08 2019-01-04 国网浙江省电力有限公司信息通信分公司 A kind of image defogging method based on end-to-end deep learning
CN109272455A (en) * 2018-05-17 2019-01-25 西安电子科技大学 Based on the Weakly supervised image defogging method for generating confrontation network

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100259651A1 (en) * 2009-04-08 2010-10-14 Raanan Fattal Method, apparatus and computer program product for single image de-hazing
US20180231871A1 (en) * 2016-06-27 2018-08-16 Zhejiang Gongshang University Depth estimation method for monocular image based on multi-scale CNN and continuous CRF
CN109272455A (en) * 2018-05-17 2019-01-25 西安电子科技大学 Based on the Weakly supervised image defogging method for generating confrontation network
CN108805839A (en) * 2018-06-08 2018-11-13 西安电子科技大学 Combined estimator image defogging method based on convolutional neural networks
CN109146810A (en) * 2018-08-08 2019-01-04 国网浙江省电力有限公司信息通信分公司 A kind of image defogging method based on end-to-end deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Cong Xiaofeng et al.: "Image dehazing network based on dual learning", Journal of Applied Optics *
Xing Xiaomin et al.: "Two-stage end-to-end image dehazing generative network", Journal of Computer-Aided Design & Computer Graphics *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112365476A (en) * 2020-11-13 2021-02-12 南京信息工程大学 Fog visibility detection method based on dual-channel deep network
CN112365476B (en) * 2020-11-13 2023-12-08 南京信息工程大学 Fog day visibility detection method based on double-channel depth network
CN112734679A (en) * 2021-01-26 2021-04-30 西安理工大学 Fusion defogging method for medical operation video images

Also Published As

Publication number Publication date
CN111553856B (en) 2023-03-24

Similar Documents

Publication Publication Date Title
CN112288658B (en) Underwater image enhancement method based on multi-residual joint learning
CN107123089B (en) Remote sensing image super-resolution reconstruction method and system based on depth convolution network
CN108230264B (en) Single image defogging method based on ResNet neural network
CN111915530B (en) End-to-end-based haze concentration self-adaptive neural network image defogging method
CN110517203B (en) Defogging method based on reference image reconstruction
CN110288550B (en) Single-image defogging method for generating countermeasure network based on priori knowledge guiding condition
CN110349087B (en) RGB-D image high-quality grid generation method based on adaptive convolution
CN112241939B (en) Multi-scale and non-local-based light rain removal method
CN113870124B (en) Weak supervision-based double-network mutual excitation learning shadow removing method
CN111539246B (en) Cross-spectrum face recognition method and device, electronic equipment and storage medium thereof
CN111553856B (en) Image defogging method based on depth estimation assistance
CN113160286A (en) Near-infrared and visible light image fusion method based on convolutional neural network
CN114820408A (en) Infrared and visible light image fusion method based on self-attention and convolutional neural network
CN115511708A (en) Depth map super-resolution method and system based on uncertainty perception feature transmission
CN113379606B (en) Face super-resolution method based on pre-training generation model
CN115222614A (en) Priori-guided multi-degradation-characteristic night light remote sensing image quality improving method
CN110889868A (en) Monocular image depth estimation method combining gradient and texture features
CN113052776A (en) Unsupervised image defogging method based on multi-scale depth image prior
CN116128768B (en) Unsupervised image low-illumination enhancement method with denoising module
CN112150395A (en) Encoder-decoder network image defogging method combining residual block and dense block
CN116051396B (en) Image denoising method based on feature enhancement network and GRU network
CN107301625B (en) Image defogging method based on brightness fusion network
CN114581304A (en) Image super-resolution and defogging fusion method and system based on circulating network
CN113935916A (en) End-to-end underwater image restoration method based on ambient light perception
CN115705493A (en) Image defogging modeling method based on multi-feature attention neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant