CN110807743B - Image defogging method based on convolutional neural network

Info

Publication number
CN110807743B
CN110807743B
Authority
CN
China
Prior art keywords
image
defogging
fog
map
mist
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911016783.1A
Other languages
Chinese (zh)
Other versions
CN110807743A (en)
Inventor
左峥嵘
化彦伶
吴双忱
桑农
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN201911016783.1A priority Critical patent/CN110807743B/en
Publication of CN110807743A publication Critical patent/CN110807743A/en
Application granted granted Critical
Publication of CN110807743B publication Critical patent/CN110807743B/en


Classifications

    • G06T5/73
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Abstract

The invention discloses an image defogging method based on a convolutional neural network, comprising the following steps: S1, inputting the foggy image to be defogged into a pre-trained transmittance estimation model to obtain a transmittance map; S2, obtaining a mist region mask map and a dense fog region mask map of the foggy image according to the obtained transmittance map and a preset transmittance threshold; S3, dividing the foggy image to be defogged into a mist map and a dense fog map according to the mist region mask map and the dense fog region mask map; and S4, simultaneously inputting the obtained mist map and dense fog map into a pre-trained defogging model for defogging to obtain a fog-free map. By estimating the transmittance map of the foggy image, the method divides the image into a mist region and a dense fog region, simultaneously feeds them into the mist-removal network and the dense-fog-removal network of the defogging model respectively, so that regions of different fog concentrations are processed with different parameters, and merges the two results pixel-wise inside the defogging model, obtaining a complete, clear fog-free image with a good defogging effect.

Description

Image defogging method based on convolutional neural network
Technical Field
The invention belongs to the technical field of image processing and deep learning, and particularly relates to an image defogging method based on a convolutional neural network.
Background
Single image defogging is a challenging ill-posed image restoration problem. Images captured in foggy weather tend to suffer from color distortion and shift, a compressed dynamic range, and low contrast; they fail to achieve the required visual effect and add difficulty to target detection and recognition.
Existing image defogging methods fall into two main categories: defogging algorithms based on prior knowledge and defogging algorithms based on learning. Prior-based defogging algorithms require hand-crafted image features, and their performance depends on the accuracy of the prior knowledge, so their generalization ability is poor. Learning-based defogging algorithms can be further divided according to whether the network structure is designed around an atmospheric scattering model. Methods that design the network around the atmospheric scattering model estimate the transmittance map and the atmospheric light value of the foggy image separately, then restore a clear image through the model. Such methods have two limitations: the estimates of the transmittance map, the atmospheric light value and so on are inaccurate, and the linear structure of the atmospheric scattering model cannot adequately simulate the complex fogging mechanism, which degrades the defogging effect and the clarity of the restored image. Methods that do not design the network around the atmospheric scattering model recover the clear image directly from the foggy image, letting the network learn the complex transformation function from foggy to clear. However, lacking the constraint of a physical model, and because a convolutional neural network shares its parameters, regions of different fog concentration in the image are processed with the same parameters, so the defogged image is locally too dark or too bright and the defogging is incomplete.
In summary, providing an image defogging method with a better defogging effect is a problem that urgently needs to be solved.
Disclosure of Invention
Aiming at the above defects or improvement requirements of the prior art, the invention provides an image defogging method based on a convolutional neural network, with the aim of solving the problem in the prior art that regions of different fog concentrations in a foggy image are defogged with the same parameters, resulting in a poor defogging effect.
In order to achieve the above object, the invention provides an image defogging method based on a convolutional neural network, comprising the following steps:
s1, inputting the foggy image to be defogged into a pre-trained transmissivity estimation model to obtain a transmissivity image;
s2, obtaining a mist region mask image and a dense mist region mask image of the fog image according to the obtained transmissivity image and a preset transmissivity threshold;
s3, dividing the foggy image to be defogged into a mist image and a dense fog image according to the mist region mask image and the dense fog region mask image;
and S4, simultaneously inputting the obtained mist map and dense fog map into a pre-trained defogging model for defogging to obtain a fog-free map.
By adopting the method, regions of different fog concentrations in the foggy image are defogged simultaneously by different networks, which avoids the color distortion and excessively high or low brightness caused by defogging the image as a whole, recovers more detailed information, and achieves a better defogging effect.
Further preferably, the loss function L_trans of the transmittance estimation model is:

L_trans = (1/(w·h·c)) · ||T(I) − t||² + (σ/(w′·h′·c′)) · ||V(T(I)) − V(t)||²

where T(I) is the output of the transmittance estimation model, I is the foggy image, t is the target transmittance map, w, h and c are the length, width and number of channels of the foggy image, V(·) is the feature map output by the Pool-4 layer of the VGG-19 network, w′, h′ and c′ are the length, width and number of channels of the feature map, and σ is a proportionality coefficient. This loss function better recovers the edge information of the image.
Further preferably, foggy images are used as the samples of a first training set and the transmittance maps corresponding to the foggy images as the sample labels of the first training set, and the foggy images are input into the convolutional neural network for training to obtain the pre-trained transmittance estimation model.
Further preferably, the mist region mask map ML and the dense fog region mask map MD are determined by comparing each pixel value in the transmittance map with a preset transmittance threshold, expressed as:

ML(x,y) = 1 if T(x,y) ≥ w, and ML(x,y) = 0 otherwise
MD(x,y) = 1 if T(x,y) < w, and MD(x,y) = 0 otherwise

where ML(x,y) is the mist region mask value corresponding to the pixel in row x, column y of the foggy image, MD(x,y) is the dense fog region mask value corresponding to the pixel in row x, column y of the foggy image, T(x,y) is the pixel value in row x, column y of the transmittance map, and w is the preset transmittance threshold.
Further preferably, each pixel of the foggy image is multiplied by the corresponding pixel of the mist region mask map to obtain the mist map, and each pixel of the foggy image is multiplied by the corresponding pixel of the dense fog region mask map to obtain the dense fog map.
Preferably, the defogging model comprises two parallel convolutional neural networks, denoted the mist-removal network and the dense-fog-removal network; the mist map and the dense fog map are simultaneously input into the mist-removal network and the dense-fog-removal network respectively, and the outputs of the two networks are superposed within the defogging model to obtain the output of the defogging model, namely the fog-free map.
Further preferably, the loss function L_dehaze of the defogging model is:

L_dehaze = (1/(w·h·c)) · ||D_S(I_S) + D_B(I_B) − C||² + (σ/(w′·h′·c′)) · ||V(D_S(I_S) + D_B(I_B)) − V(C)||²

where D_S(I_S) is the output of the mist-removal network in the defogging model, D_B(I_B) is the output of the dense-fog-removal network, I_S is the mist map, I_B is the dense fog map, C is the target fog-free image, w, h and c are the length, width and number of channels of the mist map and the dense fog map, V(·) is the feature map output by the Pool-4 layer of the VGG-19 network, w′, h′ and c′ are the length, width and number of channels of the feature map, and σ is a proportionality coefficient.
Preferably, the foggy image is segmented to obtain a mist map and a dense fog map; the mist maps are used as the samples of a second training set with the clear images corresponding to the foggy images as the sample labels of the second training set, and the dense fog maps are used as the samples of a third training set with the clear images corresponding to the foggy images as the sample labels of the third training set; the second training set and the third training set are then simultaneously input into the mist-removal network and the dense-fog-removal network of the defogging model respectively for training, obtaining the pre-trained defogging model.
In general, compared with the prior art, the above technical solution contemplated by the present invention can achieve the following beneficial effects:
1. the invention provides an image defogging method based on a convolutional neural network which, by estimating the transmittance map of the foggy image, divides the image into a mist region and a dense fog region, simultaneously inputs them into the mist-removal network and the dense-fog-removal network of the defogging model respectively, processes regions of different fog concentrations with different parameters, and merges the resulting outputs pixel-wise within the defogging model, obtaining a complete and clear fog-free image.
2. The image defogging method based on the convolutional neural network provided by the invention overcomes the limited nonlinear fitting capacity of defogging algorithms constrained by the atmospheric scattering model: a convolutional neural network learns the complex transformation function from the foggy image to the clear image, while different networks defog regions of different fog concentrations, which avoids the color distortion and unsuitable brightness caused by defogging the image as a whole and recovers more detailed information.
Drawings
FIG. 1 is a flow chart of the image defogging method based on a convolutional neural network provided by the invention;
FIG. 2 shows a foggy image and the corresponding transmittance map provided by an embodiment of the invention, where (a) is the foggy image and (b) is the obtained transmittance map;
FIG. 3 shows transmittance maps estimated with different loss functions, where (a) is the foggy image, (b) is the true transmittance map corresponding to the foggy image, (c) is the transmittance map estimated with the L1 loss function, and (d) is the transmittance map estimated with the transmittance estimation loss function proposed by the invention;
FIG. 4 shows the mist region and dense fog region masks provided by an embodiment of the invention, where (a) is the mist region mask map and (b) is the dense fog region mask map;
FIG. 5 shows the mist map and the dense fog map obtained by dividing the foggy image in an embodiment of the invention, where (a) is the mist map and (b) is the dense fog map;
FIG. 6 is a schematic diagram of a convolutional neural network in the defogging model provided by an embodiment of the invention;
FIG. 7 shows defogged images obtained with different defogging methods, where (a) is the foggy image, (b) is the result of processing it with the DCP algorithm, (c) is the result of the AOD-net algorithm, (d) is the result of the Dehaze-GAN algorithm, and (e) is the result of the method provided by the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The image defogging method based on a convolutional neural network provided by the invention, whose flowchart is shown in fig. 1, comprises the following steps:
s1, inputting the fogging map H to be defogged into the pre-trained transmittance estimation model to obtain a transmittance map T, as shown in fig. 2, where (a) is the fogging map H and (b) is the obtained transmittance map T;
specifically, foggy images are used as the samples of the first training set and the transmittance maps corresponding to the foggy images as the sample labels of the first training set, and the foggy images are input into a convolutional neural network for training to obtain the pre-trained transmittance estimation model. Specifically, the foggy image H is the network input and the corresponding transmittance map T is the training target; the transmittance estimation loss function is minimized and the weights of the current network are updated; these steps are repeated until the preset number of iterations is reached, which is 60 in this embodiment.
Wherein the loss function of the transmittance estimation modelLtransComprises the following steps:
Figure BDA0002245945590000051
wherein, t (I) is the output of the transmittance estimation model function, I represents a foggy graph, t is a target transmittance graph, w, h, and c are the length, width, and number of channels of the foggy graph, respectively, V (·) is a feature graph of the output of Pool-4 layer in VGG-19 network, w ', h ', and c ' are the length, width, and number of channels of the feature graph, respectively, σ is a proportionality coefficient, and σ is 0.0001 in this embodiment. Specifically, comparing the estimated transmittance loss function proposed by the present invention with the L1 loss function, as shown in fig. 3, where (a) is a foggy graph, (b) is a real transmittance graph corresponding to the foggy graph, (c) is a transmittance estimated graph obtained by using the L1 loss function, and (d) is a transmittance estimated graph obtained by using the estimated transmittance loss function proposed by the present invention, it can be seen from the graph that the estimated transmittance loss function proposed by the present invention can better recover the edge information of the image.
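For illustration only, the following is a minimal PyTorch sketch of this loss as reconstructed above: a pixel-wise MSE term plus a σ-weighted MSE over VGG-19 Pool-4 features. It is not part of the patent disclosure; the exact norm, the single-channel handling, and all names are assumptions.

```python
import torch.nn.functional as F
from torchvision import models

# Layers up to and including Pool-4 (index 27 of vgg19().features).
_vgg_pool4 = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features[:28].eval()
for p in _vgg_pool4.parameters():
    p.requires_grad_(False)

def perceptual_mse_loss(pred, target, sigma=1e-4):
    """Pixel-wise MSE plus sigma-weighted MSE over VGG-19 Pool-4 features."""
    if pred.shape[1] == 1:              # transmittance maps are single-channel;
        pred = pred.repeat(1, 3, 1, 1)  # VGG-19 expects 3 channels (an assumption)
        target = target.repeat(1, 3, 1, 1)
    pixel_term = F.mse_loss(pred, target)                         # 1/(whc) * ||T(I) - t||^2
    feat_term = F.mse_loss(_vgg_pool4(pred), _vgg_pool4(target))  # Pool-4 perceptual term
    return pixel_term + sigma * feat_term
```

With such a helper, training S1 would amount to minimizing perceptual_mse_loss(t_net(hazy), t_gt) over the first training set for the preset number of iterations.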
S2, obtaining the mist region mask map ML and the dense fog region mask map MD of the foggy image according to the obtained transmittance map T and a preset transmittance threshold w;
specifically, in this embodiment, the mist region mask map ML and the dense fog region mask map MD are determined by comparing each pixel value in the transmittance map T with the preset transmittance threshold w:

ML(x,y) = 1 if T(x,y) ≥ w, and ML(x,y) = 0 otherwise
MD(x,y) = 1 if T(x,y) < w, and MD(x,y) = 0 otherwise

where ML(x,y) is the mist region mask value corresponding to the pixel in row x, column y of the foggy image, MD(x,y) is the dense fog region mask value corresponding to the pixel in row x, column y of the foggy image, and T(x,y) is the pixel value in row x, column y of the transmittance map. The resulting mist region mask map and dense fog region mask map are shown in fig. 4, where (a) is the mist region mask map and (b) is the dense fog region mask map.
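A minimal sketch of this thresholding step follows, assuming the ≥/< convention reconstructed above (the original formulas do not survive, so which side the boundary pixel falls on is an assumption):

```python
import numpy as np

def fog_masks(T, w=0.6):
    """Return (ML, MD): binary mist / dense-fog masks from a transmittance map T."""
    ML = (T >= w).astype(np.float32)  # mist: transmittance at or above the threshold
    MD = 1.0 - ML                     # dense fog: the complementary region
    return ML, MD
```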
Specifically, in this embodiment, the transmittance threshold w is determined through a large number of experiments: several transmittance thresholds w in the range 0 to 1 are selected, the corresponding networks are trained, the trained networks are tested on the public data set SOTS, and the w whose network attains the largest Score value is chosen as the final threshold. Defogging is first performed with w = 0.5, and w is then adjusted in steps of 0.1 to obtain different defogging results. Table 1 compares the Score values for different transmittance thresholds w, where Score = PSNR/20 + SSIM, PSNR is the peak signal-to-noise ratio and SSIM is the structural similarity measure; the larger the PSNR and SSIM, the larger the Score value and the better the defogging effect. The transmittance threshold with the highest Score value, w = 0.6, is selected experimentally as the preset transmittance threshold of this embodiment; a sketch of this search is given after Table 1.
TABLE 1
w PSNR SSIM Score
0.2 22.512 0.914 2.0396
0.3 22.838 0.913 2.0549
0.4 23.674 0.915 2.0987
0.5 22.864 0.9096 2.0528
0.6 23.718 0.928 2.1139
0.7 22.688 0.913 2.047
0.8 23.067 0.914 2.067
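The threshold search described above might be sketched as follows; evaluate_psnr_ssim is a hypothetical placeholder for training and testing a network at a given threshold, not a routine from the patent:

```python
def score(psnr, ssim):
    """Score = PSNR/20 + SSIM, as defined for Table 1."""
    return psnr / 20.0 + ssim

def select_threshold(candidates, evaluate_psnr_ssim):
    """Return the threshold w whose trained network attains the largest Score."""
    best_w, best_score = None, float("-inf")
    for w in candidates:                    # e.g. [0.2, 0.3, ..., 0.8] as in Table 1
        psnr, ssim = evaluate_psnr_ssim(w)  # train + test a network at this threshold
        s = score(psnr, ssim)
        if s > best_score:
            best_w, best_score = w, s
    return best_w
```

As a check against the table, score(23.718, 0.928) returns 2.1139, matching the w = 0.6 row of Table 1.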
S3, dividing the foggy image to be defogged into a mist image and a dense fog image according to the mist region mask and the dense fog region mask;
specifically, each pixel of the foggy image is multiplied by the corresponding pixel of the mist region mask map to obtain the mist map L, and each pixel of the foggy image is multiplied by the corresponding pixel of the dense fog region mask map to obtain the dense fog map D, expressed as:

L(x,y) = H(x,y) × ML(x,y)
D(x,y) = H(x,y) × MD(x,y)

where L(x,y) is the pixel value in row x, column y of the mist map, D(x,y) is the pixel value in row x, column y of the dense fog map, H(x,y) is the pixel value in row x, column y of the foggy image, ML(x,y) is the mist region mask value corresponding to the pixel in row x, column y of the foggy image, and MD(x,y) is the dense fog region mask value corresponding to that pixel. The resulting mist map L and dense fog map D are shown in fig. 5 as (a) and (b), respectively.
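A short sketch of this pixel-wise split, assuming an H×W×3 image and H×W masks (names illustrative):

```python
import numpy as np

def split_fog_image(H, ML, MD):
    """H: HxWx3 foggy image; ML, MD: HxW masks from S2."""
    L = H * ML[..., None]  # L(x,y) = H(x,y) * ML(x,y), broadcast over channels
    D = H * MD[..., None]  # D(x,y) = H(x,y) * MD(x,y)
    return L, D
```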
And S4, simultaneously inputting the obtained mist map and dense fog map into the pre-trained defogging model for defogging to obtain the fog-free map.
Specifically, the defogging model comprises two parallel convolutional neural networks, denoted the mist-removal network and the dense-fog-removal network respectively. The two networks have the same structure; taking one of them as an example, as shown in fig. 6, it comprises an input convolution unit, a down-sampling unit, a feature extraction unit, an up-sampling unit and an output convolution unit.
The input convolution unit consists of a convolution layer CI with a 3 × 3 kernel and 18 output channels.
The down-sampling unit comprises 5 dense connection blocks CB, each with 36 output channels, and each dense connection block is followed by a down-sampling block CD. Each dense connection block CB consists of 3 identical submodules connected end to end, each submodule being a bn-relu-conv-dropout series structure. The down-sampling block consists of bn-conv-dropout-pool. All convolution kernels in the down-sampling unit are 3 × 3.
The feature extraction unit comprises 1 dense connection block CB with 54 output channels, consisting of 6 identical submodules, each a bn-relu-conv-dropout series structure. All convolution kernels in the feature extraction unit are 3 × 3.
The up-sampling unit comprises 5 up-sampling blocks CU, each connected to a dense connection block CB through a feature fusion module. Each up-sampling block CU is a deconvolution layer with 72 output channels. Each up-sampling block and the corresponding dense connection block in the down-sampling unit feed into the same feature fusion module, where their outputs are added element-wise. Each dense connection block here likewise consists of 3 identical submodules connected end to end, each a bn-relu-conv-dropout series structure. All convolution kernels in the up-sampling unit are 3 × 3.
The output convolution unit consists of a convolution layer with a 3 × 3 kernel and 3 output channels.
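As a rough illustration only, the building blocks described above might look as follows in PyTorch; the dropout rate, padding, pooling type and the exact wiring inside the "dense" blocks are assumptions, since the text only names the layer order:

```python
import torch.nn as nn

def _submodule(in_ch, out_ch, p=0.1):
    # one bn-relu-conv-dropout submodule; 3x3 kernels throughout, per the text
    return nn.Sequential(
        nn.BatchNorm2d(in_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.Dropout2d(p),               # dropout rate is an assumption
    )

class DenseBlock(nn.Module):
    """Dense connection block CB: n identical submodules connected end to end."""
    def __init__(self, in_ch, out_ch, n_sub=3):
        super().__init__()
        mods = [_submodule(in_ch, out_ch)]
        mods += [_submodule(out_ch, out_ch) for _ in range(n_sub - 1)]
        self.body = nn.Sequential(*mods)

    def forward(self, x):
        return self.body(x)

class DownBlock(nn.Module):
    """Down-sampling block: bn-conv-dropout-pool."""
    def __init__(self, ch, p=0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(ch),
            nn.Conv2d(ch, ch, kernel_size=3, padding=1),
            nn.Dropout2d(p),
            nn.MaxPool2d(2),           # pooling type is an assumption
        )

    def forward(self, x):
        return self.body(x)

class UpBlock(nn.Module):
    """Up-sampling block CU: a deconvolution whose output is fused with the
    matching down-path features by element-wise addition."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.deconv = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)

    def forward(self, x, skip):
        return self.deconv(x) + skip   # feature fusion by addition, as described
```

The input convolution unit would then be, e.g., nn.Conv2d(3, 18, 3, padding=1), and the output unit a 3-channel 3 × 3 convolution.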
The loss function L_dehaze of the defogging model is:

L_dehaze = (1/(w·h·c)) · ||D_S(I_S) + D_B(I_B) − C||² + (σ/(w′·h′·c′)) · ||V(D_S(I_S) + D_B(I_B)) − V(C)||²

where D_S(I_S) is the output of the mist-removal network in the defogging model, D_B(I_B) is the output of the dense-fog-removal network, I_S is the mist map, I_B is the dense fog map, C is the target fog-free image, w, h and c are the length, width and number of channels of the mist map and the dense fog map, V(·) is the feature map output by the Pool-4 layer of the VGG-19 network, w′, h′ and c′ are the length, width and number of channels of the feature map, and σ is a proportionality coefficient, taken as 0.0001 in this embodiment.
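A compact sketch of the superposition inside this loss; the fragment assumes the hypothetical perceptual_mse_loss helper from the transmittance-loss sketch above is in scope, along with mist_net and dense_net model handles:

```python
def dehaze_loss(mist_net, dense_net, mist_map, dense_map, clear_image):
    """L_dehaze: superpose the two branch outputs and compare with the clear image."""
    fused = mist_net(mist_map) + dense_net(dense_map)  # D_S(I_S) + D_B(I_B)
    return perceptual_mse_loss(fused, clear_image)     # same pixel + Pool-4 criterion
```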
Specifically, the mist maps and dense fog maps obtained by segmenting the foggy images form the training sets: the mist maps serve as the samples of the second training set with the clear images corresponding to the foggy images as its sample labels, and the dense fog maps serve as the samples of the third training set with the clear images corresponding to the foggy images as its sample labels. The second and third training sets are then simultaneously input into the mist-removal network and the dense-fog-removal network of the defogging model respectively for training, yielding the pre-trained defogging model. Because the parameters of the mist-removal network and the dense-fog-removal network are trained simultaneously but separately, images of different fog concentrations are processed with different parameters, giving a good defogging effect.
The obtained mist map and dense fog map are then simultaneously input into the mist-removal network and the dense-fog-removal network respectively, and the outputs of the two networks are superposed within the defogging model to obtain the output of the defogging model, namely the fog-free map.
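Putting steps S1 through S4 together, inference might be sketched as below; t_net, mist_net and dense_net stand for the pre-trained models and are hypothetical handles:

```python
import torch

def defog(hazy, t_net, mist_net, dense_net, w=0.6):
    """hazy: 1x3xHxW tensor; t_net/mist_net/dense_net: pre-trained models (assumed)."""
    with torch.no_grad():
        T = t_net(hazy)                           # S1: transmittance map, 1x1xHxW
        ML = (T >= w).float()                     # S2: mist region mask
        MD = 1.0 - ML                             #     dense-fog region mask
        mist, dense = hazy * ML, hazy * MD        # S3: pixel-wise split
        return mist_net(mist) + dense_net(dense)  # S4: superposed fog-free output
```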
The SOTS data set in the public data set RESIDE is used as the test set; it comprises 500 outdoor foggy images and their clear counterparts and does not overlap with the training database. The DCP, CAP, AOD-net, PFF-net and Dehaze-GAN algorithms and the defogging method provided by the invention are each verified on the test set; the results are shown in Table 2.
TABLE 2
Model PSNR SSIM
DCP 19.155 0.899
CAP 18.481 0.773
AOD-net 20.292 0.875
PFF-net 21.252 0.839
Dehaze-GAN 23.595 0.909
The method proposed by the invention 23.674 0.915
As can be seen from Table 2, the method provided by the invention achieves higher PSNR and SSIM values than the other algorithms and a better defogging effect. To further illustrate this, fig. 7 shows (a) the foggy image, (b) the result of the DCP algorithm, (c) the result of the AOD-net algorithm, (d) the result of the Dehaze-GAN algorithm and (e) the result of the method provided by the invention. The results of the DCP and Dehaze-GAN algorithms retain considerable haze, while the results of the AOD-net and Dehaze-GAN algorithms show overly dark local regions and loss of information. The result of the method provided by the invention recovers the true color information better and gives a better defogging effect.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (7)

1. An image defogging method based on a convolutional neural network is characterized by comprising the following steps:
s1, inputting the fogging image to be processed into a pre-trained transmissivity estimation model to obtain a transmissivity image;
s2, obtaining a mist region mask image and a dense mist region mask image of the fog image according to the obtained transmissivity image and a preset transmissivity threshold;
s3, dividing the foggy image to be defogged into a mist image and a dense fog image according to the mist region mask image and the dense fog region mask image;
s4, inputting the acquired mist map and the acquired dense mist map into a pre-trained defogging model for defogging at the same time to acquire a non-mist map;
the defogging model comprises two parallel convolutional neural networks which are respectively recorded as a defogging network and a defogging network, the defogging network and the defogging network are respectively input into the defogging network and the defogging network at the same time, and the output of the defogging network are superposed in the defogging model to obtain the output of the defogging model, namely a defogging image.
2. The convolutional neural network-based image defogging method according to claim 1, wherein the loss function L_trans of the transmittance estimation model is:

L_trans = (1/(w·h·c)) · ||T(I) − t||² + (σ/(w′·h′·c′)) · ||V(T(I)) − V(t)||²

where T(I) is the output of the transmittance estimation model, I is the foggy image, t is the target transmittance map, w, h and c are the length, width and number of channels of the foggy image, V(·) is the feature map output by the Pool-4 layer of the VGG-19 network, w′, h′ and c′ are the length, width and number of channels of the feature map, and σ is a proportionality coefficient.
3. The image defogging method based on the convolutional neural network according to claim 1, wherein foggy images are used as the samples of a first training set and the transmittance maps corresponding to the foggy images as the sample labels of the first training set, and the foggy images are input into the convolutional neural network for training to obtain the pre-trained transmittance estimation model.
4. The convolutional neural network-based image defogging method according to claim 1, wherein the mist region mask map ML and the dense fog region mask map MD are determined by comparing each pixel value in the transmittance map with a preset transmittance threshold, expressed as:

ML(x,y) = 1 if T(x,y) ≥ w, and ML(x,y) = 0 otherwise
MD(x,y) = 1 if T(x,y) < w, and MD(x,y) = 0 otherwise

where ML(x,y) is the mist region mask value corresponding to the pixel in row x, column y of the foggy image, MD(x,y) is the dense fog region mask value corresponding to the pixel in row x, column y of the foggy image, T(x,y) is the pixel value in row x, column y of the transmittance map, and w is the preset transmittance threshold.
5. The convolutional neural network-based image defogging method according to claim 1, wherein each pixel of the foggy image is multiplied by the corresponding pixel of the mist region mask map to obtain the mist map, and each pixel of the foggy image is multiplied by the corresponding pixel of the dense fog region mask map to obtain the dense fog map.
6. The convolutional neural network-based image defogging method according to any one of claims 1 to 5, wherein the loss function L_dehaze of the defogging model is:

L_dehaze = (1/(w·h·c)) · ||D_S(I_S) + D_B(I_B) − C||² + (σ/(w′·h′·c′)) · ||V(D_S(I_S) + D_B(I_B)) − V(C)||²

where D_S(I_S) is the output of the mist-removal network in the defogging model, D_B(I_B) is the output of the dense-fog-removal network, I_S is the mist map, I_B is the dense fog map, C is the target fog-free image, w, h and c are the length, width and number of channels of the mist map and the dense fog map, V(·) is the feature map output by the Pool-4 layer of the VGG-19 network, w′, h′ and c′ are the length, width and number of channels of the feature map, and σ is a proportionality coefficient.
7. The convolutional neural network-based image defogging method according to any one of claims 1 to 5, wherein the foggy image is segmented to obtain a mist map and a dense fog map; the mist maps are used as the samples of a second training set with the clear images corresponding to the foggy images as the sample labels of the second training set, and the dense fog maps are used as the samples of a third training set with the clear images corresponding to the foggy images as the sample labels of the third training set; the second training set and the third training set are simultaneously input into the mist-removal network and the dense-fog-removal network of the defogging model respectively and trained to obtain the pre-trained defogging model.
CN201911016783.1A 2019-10-24 2019-10-24 Image defogging method based on convolutional neural network Active CN110807743B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911016783.1A CN110807743B (en) 2019-10-24 2019-10-24 Image defogging method based on convolutional neural network


Publications (2)

Publication Number Publication Date
CN110807743A CN110807743A (en) 2020-02-18
CN110807743B (en) 2022-02-15

Family

ID=69489061

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911016783.1A Active CN110807743B (en) 2019-10-24 2019-10-24 Image defogging method based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN110807743B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113052778A (en) * 2021-04-16 2021-06-29 哈尔滨理工大学 Image defogging method based on HSV color space separation
CN116468974B (en) * 2023-06-14 2023-10-13 华南理工大学 Smoke detection method, device and storage medium based on image generation


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102968760A (en) * 2011-08-30 2013-03-13 富士通株式会社 Image dehazing method and image dehazing system
EP2568438A3 (en) * 2011-09-08 2016-09-14 Fujitsu Limited Image defogging method and system
CN106127693A (en) * 2015-05-08 2016-11-16 韩华泰科株式会社 Demister system and defogging method
CN107301624A (en) * 2017-06-05 2017-10-27 天津大学 The convolutional neural networks defogging algorithm pre-processed based on region division and thick fog
CN109523480A (en) * 2018-11-12 2019-03-26 上海海事大学 A kind of defogging method, device, computer storage medium and the terminal of sea fog image
CN109934781A (en) * 2019-02-27 2019-06-25 合刃科技(深圳)有限公司 Image processing method, device, terminal device and computer readable storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Deep fully convolutional regression networks for single image haze removal";Xi Zhao 等;《2017 IEEE Visual Communications and Image Processing (VCIP)》;20180301;全文 *
"图像去雾技术研究与实现";崔运前;《中国优秀博硕士学位论文全文数据库(硕士)-信息科技辑》;20170715;第2017年卷(第7期);I138-680 *

Also Published As

Publication number Publication date
CN110807743A (en) 2020-02-18

Similar Documents

Publication Publication Date Title
CN106910175B (en) Single image defogging algorithm based on deep learning
CN109712083B (en) Single image defogging method based on convolutional neural network
CN110503613B (en) Single image-oriented rain removing method based on cascade cavity convolution neural network
EP2568438A2 (en) Image defogging method and system
CN108269244B (en) Image defogging system based on deep learning and prior constraint
CN109410144B (en) End-to-end image defogging processing method based on deep learning
CN110443761B (en) Single image rain removing method based on multi-scale aggregation characteristics
CN110555465A (en) Weather image identification method based on CNN and multi-feature fusion
Tang et al. Single image dehazing via lightweight multi-scale networks
CN109509156B (en) Image defogging processing method based on generation countermeasure model
Gao et al. Single image dehazing via self-constructing image fusion
CN111179196B (en) Multi-resolution depth network image highlight removing method based on divide-and-conquer
CN110807743B (en) Image defogging method based on convolutional neural network
CN107292830A (en) Low-light (level) image enhaucament and evaluation method
CN110827218A (en) Airborne image defogging method based on image HSV transmissivity weighted correction
CN103914820A (en) Image haze removal method and system based on image layer enhancement
CN112634171B (en) Image defogging method and storage medium based on Bayesian convolutional neural network
CN113284070A (en) Non-uniform fog image defogging algorithm based on attention transfer mechanism
Yu et al. Image and video dehazing using view-based cluster segmentation
CN110335221A (en) A kind of more exposure image fusion methods based on unsupervised learning
CN109685735B (en) Single picture defogging method based on fog layer smoothing prior
CN112419163B (en) Single image weak supervision defogging method based on priori knowledge and deep learning
CN113487509B (en) Remote sensing image fog removal method based on pixel clustering and transmissivity fusion
CN111598793A (en) Method and system for defogging image of power transmission line and storage medium
Wang et al. Haze removal algorithm based on single-images with chromatic properties

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant