CN110363727B - Image defogging method based on multi-scale dark channel prior cascade deep neural network - Google Patents
- Publication number: CN110363727B (application CN201910673412.4A)
- Authority: CN (China)
- Legal status: Active
Classifications
- G06T5/73 — Deblurring; Sharpening
- G06T5/90 — Dynamic range modification of images or parts thereof
- G06T2207/10004 — Still image; Photographic image
- G06T2207/20024 — Filtering details
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
Abstract
The invention discloses an image defogging method based on a multi-scale dark-channel-prior cascaded deep neural network, comprising the following steps: first, building a training set of synthetic hazy images; second, defogging a single randomly selected hazy image; third, computing the loss objective function of the original single hazy image; fourth, updating the weight parameter sets; fifth, taking a new random hazy image and repeating steps two to four until the loss objective function of the original single hazy image falls below a threshold, at which point the final cascaded defogging model is fixed; sixth, defogging a single real hazy image. The method uses convolutional neural networks to estimate the dark channel and the global illumination parameters at several image scales, fuses the dark channels and the intermediate defogged images stage by stage, and obtains the final defogged image by supervised learning.
Description
Technical Field
The invention belongs to the technical field of image processing, and specifically relates to an image defogging method based on a multi-scale dark-channel-prior cascaded deep neural network.
Background
Because of atmospheric scattering, images captured in bad weather such as fog and haze suffer from quality degradation: colors turn grayish-white, contrast drops, and object features become hard to distinguish. This not only worsens the visual quality and legibility of the image but can also cause the image content to be misinterpreted. Image defogging refers to reducing or eliminating, by specific methods and means, the adverse effects of suspended atmospheric particles on an image. Single-image defogging refers to recovering a clear image from one hazy image alone, with no additional inputs.
Existing single-image defogging methods fall into three main categories: image-enhancement-based methods, physical-model-based methods, and deep-learning-based methods.
The essence of image-enhancement-based methods is to enhance the degraded image and thus improve its quality, for example by histogram equalization, logarithmic transformation, power-law (gamma) transformation, sharpening or wavelet transformation; these methods raise the contrast of the image or highlight its features. Alongside such contrast-enhancement methods, another common image-enhancement approach is the Retinex method, based on color constancy and retinal-cortex theory. It decomposes the image into the product of an intrinsic (reflectance) image and an illumination image, thereby removing the influence that haze-occluded illumination has on image formation. Compared with traditional contrast-enhancement methods, the Retinex method yields defogged images with better local contrast and less color distortion. However, Retinex decomposition is itself an ill-posed problem that can only be solved approximately, which limits the defogging effect to some extent.
Physical-model-based methods use the atmospheric scattering model I = J·T + (1 − T)·A, where I is the hazy image and J the haze-free image, to estimate the scene's medium transmission T and the global atmospheric illumination A separately, and from these recover a clear, haze-free image. However, given only a single hazy image, estimating T and A is also an ill-posed problem that admits only approximate solutions. Methods that restore a haze-free image from a hazy one via the atmospheric scattering model fall broadly into three classes: class 1 comprises methods based on depth information; class 2 comprises defogging algorithms based on the polarization characteristics of atmospheric light; class 3 comprises methods based on prior knowledge. The first two usually require manual assistance to obtain good results, while the third is the most common today, for example methods based on the dark-channel statistical prior or on color statistical priors. Because these priors are knowledge distilled from statistics, they cannot fit every scene: a dark-channel-prior method, for instance, produces biased transmission estimates in bright regions such as the sky, leaving the whole defogged image too dark. Such methods also tend to require many parameters to be set by hand for each scene.
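To make the dark-channel statistical prior concrete, the following NumPy sketch computes a dark channel as a per-pixel channel minimum followed by a local minimum filter. The function name, window size and brute-force loop are our own illustrative choices, not details from the patent:

```python
import numpy as np

def dark_channel(image, patch=15):
    """Dark channel of an RGB image in [0, 1]: per-pixel minimum over
    the color channels, then a minimum filter over a patch x patch window."""
    h, w, _ = image.shape
    min_rgb = image.min(axis=2)
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    dark = np.empty_like(min_rgb)
    for y in range(h):
        for x in range(w):
            dark[y, x] = padded[y:y + patch, x:x + patch].min()
    return dark

# On a haze-free outdoor image the dark channel is close to zero almost
# everywhere; haze lifts it toward the airlight value, which is what the
# prior exploits to estimate transmission.
```

By construction the dark channel is never larger than the per-pixel channel minimum, and it is near zero for haze-free natural images.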
Deep-learning-based methods use artificially synthesized hazy-image data sets and convolutional neural networks to achieve defogging, in two variants. (1) A deep neural network represents the atmospheric scattering model and automatically learns to estimate the corresponding T and A. Unlike prior-knowledge-based estimation of transmission and atmospheric illumination, these methods learn mainly from data and so overcome the biases of some priors, but they usually need the scene depth to synthesize T for supervised learning. (2) Defogging is treated directly as an image transformation or image synthesis, with no assumption about or estimation of T and A. Image-synthesis methods typically preprocess the hazy image with contrast enhancement, white balancing and similar operations, then learn a weighting function with a neural network to fuse the preprocessed images into a defogged result; however, they tend to depend strongly on the preprocessed images, and the per-frame processing time is long. Image-transformation methods use a neural network to learn the nonlinear mapping from hazy to haze-free images directly; however, lacking contrasting pairs from real scenes, they depend very strongly on the data.
Disclosure of Invention
The technical problem solved by the invention is to overcome the above defects of the prior art by providing an image defogging method based on a multi-scale dark-channel-prior cascaded deep neural network: convolutional neural networks estimate the dark channel and global illumination parameters on images of different scales, the dark channels and intermediate defogged images are fused stage by stage, and the final defogged image is obtained by supervised learning.
In order to solve the above technical problem, the invention adopts the following technical scheme: an image defogging method based on a multi-scale dark channel prior cascaded deep neural network, characterized by comprising the following steps:
Step 1: build a training set of hazy images: according to the atmospheric scattering model, synthesize a set of hazy training images from an image data set with known depth;
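The synthesis in step 1 can be sketched as follows, assuming the usual exponential relation T = exp(−β·d) between transmission and depth; the airlight A and scattering coefficient β values are illustrative, not taken from the patent:

```python
import numpy as np

def synthesize_hazy(J, depth, A=0.8, beta=1.0):
    """Render a hazy image from a clear image J (H x W x 3, values in [0, 1])
    and a per-pixel depth map, using the atmospheric scattering model
    I = J * T + (1 - T) * A with transmission T = exp(-beta * depth)."""
    T = np.exp(-beta * depth)[..., None]      # H x W x 1, broadcast over RGB
    return J * T + (1.0 - T) * A

# depth = 0 gives T = 1, so the "hazy" image equals the clear image;
# large depth drives every pixel toward the global airlight A.
```

Applying this to an RGB-D data set with many depth maps per scene is what lets a small set of clear images expand into a large hazy training set.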
Step 2: defog a single random hazy image, comprising the following steps:
Step 201: randomly draw a hazy image from the training set of step 1 and normalize its size, obtaining an original single hazy image I^h of size 2^m × 2^n, where m and n are positive integers not less than 8;
Step 202: down-sample the original single hazy image I^h to obtain the first-scale original hazy image I_1^h, the second-scale original hazy image I_2^h, the third-scale original hazy image I_3^h and the fourth-scale original hazy image I_4^h, with resolutions 2^(m−4) × 2^(n−4), 2^(m−3) × 2^(n−3), 2^(m−2) × 2^(n−2) and 2^(m−1) × 2^(n−1) respectively;
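The four down-sampled scales of step 202 can be sketched with simple 2× average pooling (a stand-in for whatever down-sampling the patent's implementation uses); each halving step halves both dimensions of the 2^m × 2^n input:

```python
import numpy as np

def scale_pyramid(img):
    """Build the four down-sampled scales used by the cascade: for an
    input of size 2^m x 2^n, the returned scales have sizes
    2^(m-4) x 2^(n-4), 2^(m-3) x 2^(n-3), 2^(m-2) x 2^(n-2), 2^(m-1) x 2^(n-1)."""
    def half(a):
        # 2x2 average pooling over a 2-D array
        return 0.25 * (a[0::2, 0::2] + a[1::2, 0::2] + a[0::2, 1::2] + a[1::2, 1::2])
    s4 = half(img)            # 2^(m-1) x 2^(n-1)
    s3 = half(s4)             # 2^(m-2) x 2^(n-2)
    s2 = half(s3)             # 2^(m-3) x 2^(n-3)
    s1 = half(s2)             # 2^(m-4) x 2^(n-4)
    return s1, s2, s3, s4
```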
Step 203: use the first deep convolutional network f_{w1}, whose weight parameter set is w_1, to estimate from the first-scale original hazy image I_1^h the first global atmospheric illumination A_1, the first transmittance image T_1, and its up-sampled image T_1^u = Deconv(T_1); that is, the input of f_{w1} is I_1^h and its outputs are A_1, T_1 and T_1^u. The network is composed of Conv(·) convolution modules, Maxpool(·) max-pooling modules, Gfl(·) guided-filtering modules and Deconv(·) deconvolution modules;
Then obtain the first-scale defogged image D_1 from the atmospheric scattering model: D_1 = (I_1^h − (1 − T_1)·A_1) / T_1;
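Recovering a defogged image from estimated A and T, as in step 203, is the straightforward inversion of the atmospheric scattering model; the lower clamp on T is a common safeguard of our own choosing, not a detail from the patent:

```python
import numpy as np

def defog(I, A, T, t_min=0.1):
    """Invert the atmospheric scattering model I = J*T + (1 - T)*A to
    recover the scene radiance J = (I - (1 - T)*A) / T. Clamping T from
    below avoids division blow-up where the transmission is near zero."""
    Tc = np.clip(T, t_min, 1.0)[..., None]   # H x W x 1, broadcast over RGB
    return (I - (1.0 - Tc) * A) / Tc
```

With exact A and T (and T above the clamp), this inversion recovers the clear image from the synthetic hazy one exactly.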
Step 204: use the second deep convolutional network f_{w2}, whose weight parameter set is w_2, to estimate from the second-scale original hazy image I_2^h the second global atmospheric illumination A_2, the second transmittance image T_2, and its up-sampled image T_2^u = Deconv(T_2); that is, the input of f_{w2} is I_2^h and its outputs are A_2, T_2 and T_2^u;
Then obtain the second-scale intermediate defogged image D_2 from the atmospheric scattering model, fusing in the up-sampled results of the previous scale, where Concat(·) is a channel-wise superposition (concatenation) function;
Step 205: use the third deep convolutional network f_{w3}, whose weight parameter set is w_3, to estimate from the third-scale original hazy image I_3^h the third global atmospheric illumination A_3, the third transmittance image T_3, and its up-sampled image T_3^u = Deconv(T_3);
Then obtain the third-scale intermediate defogged image D_3 from the atmospheric scattering model, again fusing in the up-sampled results of the previous scale;
Step 206: use the fourth deep convolutional network f_{w4}, whose weight parameter set is w_4, to estimate from the fourth-scale original hazy image I_4^h the fourth global atmospheric illumination A_4, the fourth transmittance image T_4, and its up-sampled image T_4^u = Deconv(T_4);
Then obtain the fourth-scale intermediate defogged image D_4 from the atmospheric scattering model, again fusing in the up-sampled results of the previous scale;
Step 207: use the fifth deep convolutional network f_{w5}, whose weight parameter set is w_5, to estimate from the original single hazy image I^h the fifth global atmospheric illumination A_5 and the fifth transmittance image T_5;
Then obtain the original intermediate defogged image D_5 from the atmospheric scattering model.
Step 3: compute the loss objective function L of the original single hazy image I^h over the five scales, where i is the scale index taking values 1 to 5, G_i is the ground-truth reference image corresponding to D_i, N_i is the number of pixels in D_i, and L_i is the adversarial loss corresponding to D_i; L is the weighted average over the scales of the per-pixel reconstruction error between D_i and G_i together with the adversarial terms L_i;
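The multi-scale loss of step 3 can be sketched as below. Since the patent's explicit formula is not reproduced in the text, equal scale weights and a mean-squared reconstruction term are our assumptions, with G_i, N_i and the adversarial terms L_i playing the roles named above:

```python
import numpy as np

def multiscale_loss(D, G, adv=None):
    """Average over scales of the per-pixel squared reconstruction error
    between each scale's defogged image D[i] and its ground truth G[i],
    plus that scale's adversarial term. Equal weighting and the L2 penalty
    are illustrative assumptions, not the patent's exact formula."""
    if adv is None:
        adv = [0.0] * len(D)          # no adversarial terms by default
    total = 0.0
    for d, g, a in zip(D, G, adv):
        n = d.size                    # N_i: number of pixels at scale i
        total += np.sum((d - g) ** 2) / n + a
    return total / len(D)
```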
Step 4: update the weight parameter sets: feed the loss objective function L of the original single hazy image I^h into an Adam optimizer, which performs training optimization on the cascaded defogging model f_w and updates each weight parameter set;
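Step 4's Adam optimizer performs updates of the following form; this scalar sketch uses the standard Adam update rule with common default hyper-parameters, which the patent does not specify:

```python
import numpy as np

class Adam:
    """Single-parameter Adam update: bias-corrected first and second
    moment estimates of the gradient drive the step size."""
    def __init__(self, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps
        self.m = self.v = 0.0
        self.t = 0

    def step(self, w, grad):
        self.t += 1
        self.m = self.b1 * self.m + (1 - self.b1) * grad
        self.v = self.b2 * self.v + (1 - self.b2) * grad ** 2
        m_hat = self.m / (1 - self.b1 ** self.t)   # bias correction
        v_hat = self.v / (1 - self.b2 ** self.t)
        return w - self.lr * m_hat / (np.sqrt(v_hat) + self.eps)

# Minimizing L(w) = w**2: repeated steps with grad = 2*w drive w toward 0.
```

In the patent's setting the same update is applied element-wise to every weight in w_1 … w_5, with the gradient of the multi-scale loss in place of the toy gradient here.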
Step 5: take a new single random hazy image and repeat steps 2 to 4 until the loss objective function L of the original single hazy image satisfies L < Δ, where Δ is the loss objective function threshold; at this point the trained values w = {w_1, w_2, w_3, w_4, w_5} of the weight parameter sets of the cascaded defogging model f_w are obtained, and the final cascaded defogging model f_w is fixed;
Step 6: defog a single real hazy image: apply the trained cascaded defogging model f_w to the single real hazy image to obtain its defogged image.
The above image defogging method based on the multi-scale dark channel prior cascaded deep neural network is characterized in that: m and n both take values in the range 8 to 12.
The above image defogging method based on the multi-scale dark channel prior cascaded deep neural network is characterized in that: when first used, the weight parameter sets w_1, w_2, w_3 and w_4 of the first, second, third and fourth deep convolutional networks f_{w1}, f_{w2}, f_{w3} and f_{w4} are randomly initialized.
The above image defogging method based on the multi-scale dark channel prior cascaded deep neural network is characterized in that: the image data set with known depth includes the NYU image data set.
The above image defogging method based on the multi-scale dark channel prior cascaded deep neural network is characterized in that: the loss objective function threshold Δ satisfies 0 < Δ < 0.004.
Compared with the prior art, the invention has the following advantages:
1. The method starts dark-channel prior estimation at low resolution to obtain a preliminary defogging result, then continually fuses features across scales (medium transmittance and defogged images) to finally obtain a high-resolution defogged image. Global illumination and medium transmittance are estimated in the dark-channel-prior fashion, and defogging uses convolutional fusion of spatial multi-scale defogging results together with multi-scale loss-function optimization training; the method has good real-time performance and high accuracy and is convenient to popularize and use.
2. The invention achieves an end-to-end high-resolution defogging result with relatively few weight parameters, and is reliable, stable and effective in use.
3. The method has simple steps: it mimics the dark-channel prior estimation and defogging process, uses a fully convolutional neural network for multi-scale global illumination estimation and multi-level feature fusion, and automatically learns the parameters that the dark-channel estimation process would otherwise require to be set by hand.
4. The method uses convolutional neural networks to estimate the dark channel and the global illumination parameters on images of different scales, then fuses the dark channels and the defogged images stage by stage, and finally obtains the defogged image through supervised learning.
5. Defogging of each single random hazy image is performed at multiple image scales by the cascaded deep neural network; when computing the loss objective function of the original single hazy image, loss functions are computed at several image scales and averaged with weights; an optimizer performs gradient-descent optimization to update the weight parameter sets, giving good real-time performance and high accuracy.
In conclusion, the method uses convolutional neural networks to estimate the dark channel and global illumination parameters on images of different scales, fuses the dark channels and defogged images stage by stage, and obtains the final defogged image through supervised learning. It effectively exploits the feature-modeling capability of deep neural networks to fuse parameters across scales, can produce a high-resolution defogged image with few model parameters, adapts well to outdoor scenes, and offers a small model size, good real-time performance and high accuracy, making it convenient to popularize and use.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
FIG. 1 is a block diagram of a process flow of the method of the present invention.
Detailed Description
As shown in fig. 1, the image defogging method based on the multi-scale dark channel prior cascade deep neural network of the invention comprises the following steps:
Step 1: build a training set of hazy images: according to the atmospheric scattering model, synthesize a set of hazy training images from an image data set with known depth, which effectively enlarges the amount of image data in the training set;
In this embodiment, the image data set with known depth includes the NYU image data set; training on a public standard data set makes the method highly adaptable, with high image-processing accuracy and a good defogging effect.
Step 2: defog a single random hazy image, comprising the following steps:
Step 201: randomly draw a hazy image from the training set of step 1 and normalize its size, obtaining an original single hazy image I^h of size 2^m × 2^n, where m and n are positive integers not less than 8;
Step 202: down-sample the original single hazy image I^h to obtain the first-scale original hazy image I_1^h, the second-scale original hazy image I_2^h, the third-scale original hazy image I_3^h and the fourth-scale original hazy image I_4^h, with resolutions 2^(m−4) × 2^(n−4), 2^(m−3) × 2^(n−3), 2^(m−2) × 2^(n−2) and 2^(m−1) × 2^(n−1) respectively;
Step 203: use the first deep convolutional network f_{w1}, whose weight parameter set is w_1, to estimate from the first-scale original hazy image I_1^h the first global atmospheric illumination A_1, the first transmittance image T_1, and its up-sampled image T_1^u = Deconv(T_1); that is, the input of f_{w1} is I_1^h and its outputs are A_1, T_1 and T_1^u. The network is composed of Conv(·) convolution modules, Maxpool(·) max-pooling modules, Gfl(·) guided-filtering modules and Deconv(·) deconvolution modules;
Then obtain the first-scale defogged image D_1 from the atmospheric scattering model: D_1 = (I_1^h − (1 − T_1)·A_1) / T_1;
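Step 203 names a guided-filtering module Gfl(·). A minimal NumPy sketch of the classical guided filter that such a module typically wraps is shown below; the radius r and regularizer eps are illustrative choices, not values from the patent:

```python
import numpy as np

def box(a, r):
    """Mean filter over a (2r+1) x (2r+1) window, with edge padding."""
    h, w = a.shape
    p = np.pad(a, r, mode="edge")
    out = np.zeros_like(a, dtype=float)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + h, dx:dx + w]
    return out / (2 * r + 1) ** 2

def guided_filter(I, p, r=2, eps=1e-4):
    """Edge-preserving smoothing of p guided by I (both 2-D float arrays):
    fit a local linear model p ~ a*I + b in each window, then average the
    coefficients. Used in dehazing to refine a coarse transmittance map so
    its edges follow the image's edges."""
    mI, mp = box(I, r), box(p, r)
    cov = box(I * p, r) - mI * mp
    var = box(I * I, r) - mI * mI
    a = cov / (var + eps)
    b = mp - a * mI
    return box(a, r) * I + box(b, r)
```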
Step 204: use the second deep convolutional network f_{w2}, whose weight parameter set is w_2, to estimate from the second-scale original hazy image I_2^h the second global atmospheric illumination A_2, the second transmittance image T_2, and its up-sampled image T_2^u = Deconv(T_2); that is, the input of f_{w2} is I_2^h and its outputs are A_2, T_2 and T_2^u;
Then obtain the second-scale intermediate defogged image D_2 from the atmospheric scattering model, fusing in the up-sampled results of the previous scale, where Concat(·) is a channel-wise superposition (concatenation) function;
Step 205: use the third deep convolutional network f_{w3}, whose weight parameter set is w_3, to estimate from the third-scale original hazy image I_3^h the third global atmospheric illumination A_3, the third transmittance image T_3, and its up-sampled image T_3^u = Deconv(T_3);
Then obtain the third-scale intermediate defogged image D_3 from the atmospheric scattering model, again fusing in the up-sampled results of the previous scale;
Step 206: use the fourth deep convolutional network f_{w4}, whose weight parameter set is w_4, to estimate from the fourth-scale original hazy image I_4^h the fourth global atmospheric illumination A_4, the fourth transmittance image T_4, and its up-sampled image T_4^u = Deconv(T_4);
Then obtain the fourth-scale intermediate defogged image D_4 from the atmospheric scattering model, again fusing in the up-sampled results of the previous scale;
Step 207: use the fifth deep convolutional network f_{w5}, whose weight parameter set is w_5, to estimate from the original single hazy image I^h the fifth global atmospheric illumination A_5 and the fifth transmittance image T_5;
Then obtain the original intermediate defogged image D_5 from the atmospheric scattering model.
In this embodiment, when the first deep convolutional network f_{w1}, second deep convolutional network f_{w2}, third deep convolutional network f_{w3} and fourth deep convolutional network f_{w4} of step 2 are first used, their weight parameter sets w_1, w_2, w_3 and w_4 are randomly initialized.
In this embodiment, dark-channel prior estimation starts at low resolution to obtain a preliminary defogging result, and multi-scale features (medium transmittance and defogged images) are continually fused to finally obtain a high-resolution defogged image. Global illumination and medium transmittance are estimated in the dark-channel-prior fashion, and defogging uses convolutional fusion of spatial multi-scale defogging results with multi-scale loss-function optimization training, giving good real-time performance and high accuracy. By mimicking the dark-channel prior estimation and defogging process, a fully convolutional neural network performs multi-scale global illumination estimation and multi-level feature fusion, automatically learning the parameters that the dark-channel estimation process would otherwise require to be set by hand. Convolutional neural networks estimate the dark channel and global illumination parameters on images of different scales, the dark channels and defogged images are fused stage by stage, and the defogged image is finally obtained through supervised learning.
Step 3: compute the loss objective function L of the original single hazy image I^h over the five scales, where i is the scale index taking values 1 to 5, G_i is the ground-truth reference image corresponding to D_i, N_i is the number of pixels in D_i, and L_i is the adversarial loss corresponding to D_i; L is the weighted average over the scales of the per-pixel reconstruction error between D_i and G_i together with the adversarial terms L_i;
Step 4: update the weight parameter sets: feed the loss objective function L of the original single hazy image I^h into an Adam optimizer, which performs training optimization on the cascaded defogging model f_w and updates each weight parameter set;
It should be noted that an end-to-end high-resolution defogging result can be achieved with relatively few weight parameters, reliably and stably.
Step 5: take a new single random hazy image and repeat steps 2 to 4 until the loss objective function L of the original single hazy image satisfies L < Δ, where Δ is the loss objective function threshold; at this point the trained values w = {w_1, w_2, w_3, w_4, w_5} of the weight parameter sets of the cascaded defogging model f_w are obtained, and the final cascaded defogging model f_w is fixed;
In this embodiment, the loss objective function threshold Δ satisfies 0 < Δ < 0.004.
Step 6: defog a single real hazy image: apply the trained cascaded defogging model f_w to the single real hazy image to obtain its defogged image.
In this embodiment, m and n both take values in the range 8 to 12.
In use, each single random hazy image is defogged at multiple image scales by the cascaded deep neural network; when computing the loss objective function of the original single hazy image, loss functions are computed at several image scales and averaged with weights; an optimizer performs gradient-descent optimization to update the weight parameter sets, giving good real-time performance and high accuracy. Training continues until the loss objective function of the original single hazy image falls below the threshold, which fixes the final cascaded defogging model; this final model is then used to defog single real hazy images.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention, and all simple modifications, changes and equivalent structural changes made to the above embodiment according to the technical spirit of the present invention still fall within the protection scope of the technical solution of the present invention.
Claims (5)
1. The image defogging method based on the multi-scale dark channel prior cascade deep neural network is characterized by comprising the following steps of:
step one, establishing a training set of foggy images: synthesizing a foggy image training set from an image dataset with known depth according to the atmospheric scattering model;
step two, defogging of a single random foggy image, which comprises the following steps:
step 201, randomly extracting a foggy image from the foggy image training set in step one, and normalizing the size of the single random foggy image to obtain an original single foggy image of size 2^m × 2^n, wherein m and n are positive integers not less than 8;
step 202, down-sampling the original single foggy image to obtain a first-scale original foggy image, a second-scale original foggy image, a third-scale original foggy image and a fourth-scale original foggy image, wherein the first-scale original foggy image has a resolution of 2^(m-4) × 2^(n-4), the second-scale original foggy image has a resolution of 2^(m-3) × 2^(n-3), the third-scale original foggy image has a resolution of 2^(m-2) × 2^(n-2), and the fourth-scale original foggy image has a resolution of 2^(m-1) × 2^(n-1);
Step 203, using the first deep convolutional network to estimate, from the first-scale original foggy image, the first global atmospheric illumination A1, the first transmittance image T1 and the up-sampled image of the first transmittance image T1; that is, the input of the first deep convolutional network is the first-scale original foggy image and the output is the first global atmospheric illumination A1, the first transmittance image T1 and the up-sampled image of T1, wherein w1 is the weight parameter set of the first deep convolutional network, Conv(·) is a convolution module, Maxpool(·) is a max-pooling module, Gfl(·) is a guided-filtering module, and Deconv(·) is a deconvolution module;
a first-scale defogged image D1 is then obtained by using the atmospheric scattering model;
step 204, using the second deep convolutional network to estimate, from the second-scale original foggy image, the second global atmospheric illumination A2, the second transmittance image T2 and the up-sampled image of the second transmittance image T2; that is, the input of the second deep convolutional network is the second-scale original foggy image and the output is the second global atmospheric illumination A2, the second transmittance image T2 and the up-sampled image of T2, wherein w2 is the weight parameter set of the second deep convolutional network;
a second-scale temporary defogged image is then obtained by using the atmospheric scattering model, wherein Concat(·) is a superposition function;
Step 205, using the third deep convolutional network to estimate, from the third-scale original foggy image, the third global atmospheric illumination A3, the third transmittance image T3 and the up-sampled image of the third transmittance image T3; that is, the input of the third deep convolutional network is the third-scale original foggy image and the output is the third global atmospheric illumination A3, the third transmittance image T3 and the up-sampled image of T3, wherein w3 is the weight parameter set of the third deep convolutional network;
a third-scale temporary defogged image is then obtained by using the atmospheric scattering model;
Step 206, using the fourth deep convolutional network to estimate, from the fourth-scale original foggy image, the fourth global atmospheric illumination A4, the fourth transmittance image T4 and the up-sampled image of the fourth transmittance image T4; that is, the input of the fourth deep convolutional network is the fourth-scale original foggy image and the output is the fourth global atmospheric illumination A4, the fourth transmittance image T4 and the up-sampled image of T4, wherein w4 is the weight parameter set of the fourth deep convolutional network;
a fourth-scale temporary defogged image is then obtained by using the atmospheric scattering model;
Step 207, using the fifth deep convolutional network to estimate, from the original single foggy image, the fifth global atmospheric illumination A5 and the fifth transmittance image T5; that is, the input of the fifth deep convolutional network is the original single foggy image and the output is the fifth global atmospheric illumination A5 and the fifth transmittance image T5, wherein w5 is the weight parameter set of the fifth deep convolutional network;
an original temporary defogged image is then obtained by using the atmospheric scattering model;
Step three, computing the loss objective function L of the original single foggy image according to the loss formula, wherein i is the scale number, i ranges from 1 to 5, Gi is the reference ground-truth image corresponding to image Di, Ni is the number of pixels in image Di, and Li is the adversarial loss corresponding to image Di;
Step four, updating the weight parameter sets: the loss objective function L of the original single foggy image is sent into an Adam optimizer to perform training optimization on the cascade defogging model, whereby each weight parameter set is updated in the process;
step five, taking a new single random foggy image and repeating step two to step four until the loss objective function L of the original single foggy image satisfies L < Δ; at this time, the training results w = {w1, w2, w3, w4, w5} of the weight parameter sets of the cascade defogging model fw are obtained, and the final cascade defogging model fw is determined, wherein Δ is the loss objective function threshold.
2. The image defogging method based on the multi-scale dark channel prior cascade deep neural network as claimed in claim 1, wherein: the value ranges of m and n are both 8 to 12.
3. The image defogging method based on the multi-scale dark channel prior cascade deep neural network as claimed in claim 1, wherein: in step two, the initial weight parameter set w1 of the first deep convolutional network, the initial weight parameter set w2 of the second deep convolutional network, the initial weight parameter set w3 of the third deep convolutional network and the initial weight parameter set w4 of the fourth deep convolutional network are random initialization values.
4. The image defogging method based on the multi-scale dark channel prior cascade deep neural network as claimed in claim 1, wherein: the image dataset of known depth comprises a NYU image dataset.
5. The image defogging method based on the multi-scale dark channel prior cascade deep neural network as claimed in claim 1, wherein: the value range of the loss objective function threshold Δ is 0 < Δ < 0.004.
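The size normalization and down-sampling of steps 201 and 202 in claim 1 can be sketched as follows; 2×2 average pooling is an assumed down-sampling method, since the claim does not fix one:

```python
import numpy as np

# Sketch of steps 201-202: take a foggy image normalized to 2^m x 2^n and
# build the four down-sampled scales with resolutions 2^(m-4) x 2^(n-4)
# up to 2^(m-1) x 2^(n-1). Repeated 2x2 average pooling is an assumption;
# the claim does not specify the down-sampling method.
def pool2x(img):
    """2x2 average pooling of an HxWxC image (H and W even)."""
    h, w, c = img.shape
    return img.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def build_pyramid(img):
    """Return the four scales, coarsest (2^(m-4)) first."""
    scales = []
    cur = img
    for _ in range(4):               # 2^(m-1), 2^(m-2), 2^(m-3), 2^(m-4)
        cur = pool2x(cur)
        scales.append(cur)
    return scales[::-1]

m = n = 8                            # m, n in [8, 12] per claim 2
img = np.zeros((2**m, 2**n, 3))      # normalized original single foggy image
pyr = build_pyramid(img)
print([p.shape[:2] for p in pyr])    # [(16, 16), (32, 32), (64, 64), (128, 128)]
```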
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910673412.4A CN110363727B (en) | 2019-07-24 | 2019-07-24 | Image defogging method based on multi-scale dark channel prior cascade deep neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110363727A CN110363727A (en) | 2019-10-22 |
CN110363727B true CN110363727B (en) | 2020-06-12 |
Family
ID=68220887
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111161160B (en) * | 2019-12-04 | 2023-07-18 | 新奇点企业管理集团有限公司 | Foggy weather obstacle detection method and device, electronic equipment and storage medium |
CN111833272B (en) * | 2020-07-17 | 2021-07-16 | 南京理工大学 | Image defogging method and system based on progressive feature fusion |
CN111861939B (en) * | 2020-07-30 | 2022-04-29 | 四川大学 | Single image defogging method based on unsupervised learning |
CN112767275B (en) * | 2021-01-25 | 2021-10-22 | 中国人民解放军火箭军工程大学 | Single image defogging method based on artificial sparse annotation information guidance |
CN115272122B (en) * | 2022-07-31 | 2023-03-21 | 中国人民解放军火箭军工程大学 | Priori-guided single-stage distillation image defogging method |
CN115456913A (en) * | 2022-11-07 | 2022-12-09 | 四川大学 | Method and device for defogging night fog map |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106780356A (en) * | 2016-11-15 | 2017-05-31 | 天津大学 | Image defogging method based on convolutional neural networks and prior information |
US9965835B2 (en) * | 2014-11-28 | 2018-05-08 | Axis Ab | Defogging images and video |
CN109712083A (en) * | 2018-12-06 | 2019-05-03 | 南京邮电大学 | A kind of single image to the fog method based on convolutional neural networks |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102584522B1 (en) * | 2016-12-27 | 2023-10-05 | 한화비전 주식회사 | Image processing device and image enhancing method |
CN108230264B (en) * | 2017-12-11 | 2020-05-15 | 华南农业大学 | Single image defogging method based on ResNet neural network |
Non-Patent Citations (1)
Title |
---|
《基于卷积神经网络的单幅图像去雾算法的研究与应用》 (Research and Application of a Single-Image Dehazing Algorithm Based on Convolutional Neural Networks); Zuo Qing (左庆); www.cnki.net; 2019-05-01; full text * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110363727B (en) | Image defogging method based on multi-scale dark channel prior cascade deep neural network | |
CN110288550B (en) | Single-image defogging method for generating countermeasure network based on priori knowledge guiding condition | |
CN106910175B (en) | Single image defogging algorithm based on deep learning | |
Wang et al. | Fast image dehazing method based on linear transformation | |
CN110555465B (en) | Weather image identification method based on CNN and multi-feature fusion | |
CN111738942A (en) | Generation countermeasure network image defogging method fusing feature pyramid | |
CN109584188B (en) | Image defogging method based on convolutional neural network | |
CN111161360B (en) | Image defogging method of end-to-end network based on Retinex theory | |
CN110349093B (en) | Single image defogging model construction and defogging method based on multi-stage hourglass structure | |
CN109816605A (en) | A kind of MSRCR image defogging method based on multichannel convolutive | |
CN111667433A (en) | Unmanned aerial vehicle image defogging method based on simple linear iterative clustering optimization | |
CN114219732A (en) | Image defogging method and system based on sky region segmentation and transmissivity refinement | |
CN112419163A (en) | Single image weak supervision defogging method based on priori knowledge and deep learning | |
CN113160286A (en) | Near-infrared and visible light image fusion method based on convolutional neural network | |
CN110189262B (en) | Image defogging method based on neural network and histogram matching | |
CN110349113B (en) | Adaptive image defogging method based on dark primary color priori improvement | |
CN112785517B (en) | Image defogging method and device based on high-resolution representation | |
CN112950521B (en) | Image defogging method and generator network | |
CN117726545A (en) | Image defogging method using non-local foggy line and multiple exposure fusion | |
CN107301625B (en) | Image defogging method based on brightness fusion network | |
CN116664448B (en) | Medium-high visibility calculation method and system based on image defogging | |
CN113628143A (en) | Weighted fusion image defogging method and device based on multi-scale convolution | |
CN113487509A (en) | Remote sensing image fog removing method based on pixel clustering and transmissivity fusion | |
CN112907461A (en) | Defogging and enhancing method for infrared degraded image in foggy day | |
CN116385293A (en) | Foggy-day self-adaptive target detection method based on convolutional neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||