CN111161159B - Image defogging method and device based on combination of priori knowledge and deep learning - Google Patents
- Publication number: CN111161159B
- Application number: CN201911226437.6A
- Authority: CN (China)
- Prior art keywords: image, layer, base layer, convolution, layer image
- Legal status: Active (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Classifications
- G06T7/90: Image analysis; determination of colour characteristics
- G06N3/045: Neural networks; architecture, e.g. interconnection topology; combinations of networks
- G06T5/00: Image enhancement or restoration
- G06T5/70: Denoising; smoothing
Abstract
The invention relates to an image defogging method and device based on the combination of priori knowledge and deep learning. The method comprises the following steps: decomposing the input original foggy image Z with a weighted guided filter to obtain a detail layer image Z_e and a base layer image Z_b; processing the base layer image Z_b with a quadtree search method to obtain a global atmospheric light component image A_c, where c ∈ {r, g, b}; constructing a deep convolutional neural network and processing the base layer image Z_b with it to obtain a transmittance image t; and, based on an atmospheric scattering model, restoring the original foggy image Z from the global atmospheric light component image A_c, the transmittance image t, and the detail layer image Z_e to obtain a defogged image J.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to an image defogging method and device based on combination of priori knowledge and deep learning.
Background
In foggy weather, particles floating in the air, such as fog or dust, cause images to fade and blur; contrast and softness are reduced, image quality is severely degraded, and applications such as video monitoring and analysis, target identification, urban traffic, aerial photography, and military defense are limited. Clear processing of foggy images is therefore directly relevant to civil life and of great practical significance to production and daily life.
Existing defogging algorithms are mainly divided into three categories: non-model-based defogging algorithms, model-based defogging algorithms, and deep-learning-based defogging algorithms. Non-model-based defogging algorithms improve the image mainly by directly stretching its contrast. Common methods include histogram equalization, homomorphic filtering, algorithms based on the Retinex model, and algorithms based on improvements of the Retinex model. These methods, working from the optical imaging principle, make the contrast among the image colors more balanced and the image effect softer, but the resulting image is not effectively enhanced in contrast: dark or bright areas of the original image are weakened and the key details of the image are blurred.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides an image defogging method and device based on combination of priori knowledge and deep learning.
In a first aspect, the present invention provides an image defogging method based on the combination of priori knowledge and deep learning, including:
decomposing the input original foggy image Z with a weighted guided filter to obtain a detail layer image Z_e and a base layer image Z_b;
processing the base layer image Z_b with a quadtree search method to obtain a global atmospheric light component image A_c, where c ∈ {r, g, b};
constructing a deep convolutional neural network and processing the base layer image Z_b with it to obtain a transmittance image t;
based on an atmospheric scattering model, restoring the original foggy image Z from the global atmospheric light component image A_c, the transmittance image t, and the detail layer image Z_e to obtain a defogged image J.
Preferably, decomposing the original foggy image Z with the weighted guided filter to obtain the base layer image Z_b specifically comprises the following steps:
acquiring the pixel gray value Z(x, y) at any point (x, y) in the original foggy image Z;
obtaining the weighted filter coefficients a_p(x, y) and b_p(x, y) at the point (x, y) in the original foggy image Z by the following formulas:
a_p(x, y) = σ²_{Z,ρ}(x, y) / (σ²_{Z,ρ}(x, y) + λ / Γ_Y(x, y)),
b_p(x, y) = (1 − a_p(x, y)) · μ_{Z,ρ}(x, y);
wherein μ_{Z,ρ}(x, y) is the mean of the pixel gray values in the window centered at point (x, y) with radius ρ in the original foggy image Z; σ²_{Z,ρ}(x, y) is the variance of the pixel gray values in the window centered at point (x, y) with radius ρ in the original foggy image Z; Γ_Y(x, y) is the luminance component of the original foggy image Z at point (x, y), with Y = max{Z_r, Z_g, Z_b}, where Z_r, Z_g, Z_b are the R, G, B values at point (x, y) of the foggy image Z; λ is a constant greater than 1;
obtaining the base layer image Z_b by the following formula:
Z_b(x, y) = a_p(x, y) Z(x, y) + b_p(x, y);
wherein Z_b(x, y) is the pixel gray value of the base layer image Z_b at point (x, y).
Preferably, processing the base layer image Z_b with the quadtree search method to obtain the global atmospheric light component image A_c specifically comprises the following steps:
step A, equally dividing the base layer image Z_b into four rectangular areas Z_{b-i} (i ∈ {1, 2, 3, 4}), the length and width of each Z_{b-i} being 1/2 of the length and width of Z_b;
step B, defining the score of each rectangular area as the average of the pixel gray values in the area minus the standard deviation of the pixel gray values in the area;
step C, selecting the rectangular area with the highest score and further dividing it into four rectangular areas;
step D, repeating steps B and C, iteratively updating the rectangular area with the highest score n times to obtain the final subdivided area Z_{b-end};
obtaining the atmospheric light component image A_c by the following formula:
|A_c (c ∈ {r, g, b})| = min |(Z_{b-end-r}(x, y), Z_{b-end-g}(x, y), Z_{b-end-b}(x, y)) − (255, 255, 255)|,
wherein (Z_{b-end-r}(x, y), Z_{b-end-g}(x, y), Z_{b-end-b}(x, y)) is the color vector at point (x, y) in the final subdivided area Z_{b-end}, and (255, 255, 255) is the pure white vector.
Preferably, constructing the deep convolutional neural network and processing the base layer image Z_b with it to obtain the transmittance image t specifically comprises the following steps:
down-sampling the base layer image Z_b with the first convolutional layers to obtain a low-resolution base layer image Z_b′ and then extracting low-level features, wherein the formula for down-sampling the base layer image Z_b with the first convolutional layers is:
F_i^c[x′, y′] = σ( b_i^c + Σ_{c′} Σ_{x,y} w_i^{c,c′}[x, y] · F_{i−1}^{c′}[s·x′ + x, s·y′ + y] );
wherein F_i^c is the low-level feature of the i-th of the first convolutional layers at channel index c of the low-resolution base layer image Z_b′; x is the abscissa and y the ordinate of the base layer image Z_b; x′ is the abscissa and y′ the ordinate of the low-resolution base layer image Z_b′; w_i^{c,c′} is the convolution weight array of the i-th of the first convolutional layers under channel indices c and c′; b_i^c is the bias vector of the i-th of the first convolutional layers under channel index c; σ(·) = max(·, 0) is the ReLU activation function, and zero padding is used as the boundary condition for all of the first convolutional layers; s is the stride of the convolution kernels of the first convolutional layers;
inputting the obtained low-level features into a second convolutional layer with n_L = 2 layers to extract local features L, wherein the convolution kernels in the second convolutional layer have size 3 × 3 and stride 1;
passing the obtained low-level features through a third convolutional layer with n_G1 = 2 layers, convolution kernels of size 3 × 3 and stride 2, and then through fully connected layers with n_G2 = 2 layers, to obtain global features G;
adding the local features L and the global features G and feeding the sum into the activation function to obtain the mixed feature map F_L = σ(L + G) corresponding to the low-level features;
convolving the mixed feature map F_L = σ(L + G) to obtain the preliminary atmospheric refractive index feature map corresponding to the low-level features, up-sampling it, and outputting the transmittance image t.
Preferably, based on the atmospheric scattering model, the original foggy image Z is restored from the global atmospheric light component image A_c, the transmittance image t, and the detail layer image Z_e; the formula for obtaining the defogged image J is:
J(x, y) = (Z_b(x, y) − A_c) / max(t(x, y), 1/η) + A_c + Z_e(x, y);
wherein J(x, y) is the pixel gray value at point (x, y) in the defogged image J, Z_e(x, y) is the pixel gray value at point (x, y) in the detail layer image Z_e, Z_b(x, y) is the pixel gray value at point (x, y) in the base layer image Z_b, t(x, y) is the transmittance at point (x, y) of the transmittance image t, and η is a constant greater than zero.
Preferably, after convolving the mixed feature map F_L = σ(L + G) to obtain the preliminary atmospheric refractive index feature map corresponding to the low-level features, up-sampling it, and outputting the transmittance image t, the method further comprises the following steps:
constructing the loss function:
L = L_r + w_c L_c;
wherein L_r is the reconstruction loss function, L_c is the color loss function, and w_c is the weight assigned to the color loss function L_c; L_r is expressed as
L_r = (1/N) Σ_{x,y} Σ_{c} (J_c(x, y) − Z_c(x, y))²,
and L_c is expressed as
L_c = (1/N) Σ_{x,y} ∠(J(x, y), Z(x, y)),
where N is the number of pixels;
J is the defogged image, Z is the foggy image, c ∈ {r, g, b} is the channel index, and ∠(J(x, y), Z(x, y)) denotes the angle between the three-dimensional color vectors of the foggy image and the defogged image at pixel point (x, y);
performing parameter adjustment on the deep convolutional neural network with the loss function.
In a second aspect, the invention provides an image defogging device based on the combination of priori knowledge and deep learning, the device comprising a memory and a processor;
the memory being configured to store a computer program;
the processor being configured, when executing the computer program, to implement the above image defogging method based on the combination of priori knowledge and deep learning.
The image defogging method and device based on the combination of priori knowledge and deep learning have the following advantages: the weighted guided filter decomposes the foggy image into a detail layer image and a base layer image; the quadtree search method and the deep neural network then process the base layer image to obtain the global atmospheric light component image and the transmittance image; finally, the global atmospheric light component image, the transmittance image, and the detail layer image are used to restore the foggy image and obtain the defogged image. Because the global atmospheric light component image and the transmittance image are estimated from the base layer image, amplification of image noise is avoided and the foggy image can be defogged more effectively.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic flowchart of an image defogging method based on combination of priori knowledge and deep learning according to an embodiment of the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
Referring to fig. 1, the single-image defogging method based on the combination of priori knowledge and deep learning described in the present invention includes the following steps:
decomposing the input original foggy image Z with a weighted guided filter to obtain a detail layer image Z_e and a base layer image Z_b;
processing the base layer image Z_b with a quadtree search method to obtain a global atmospheric light component image A_c;
constructing a deep convolutional neural network and processing the base layer image Z_b with it to obtain a transmittance image t;
based on an atmospheric scattering model, restoring the original foggy image Z from the global atmospheric light component image A_c, the transmittance image t, and the detail layer image Z_e to obtain a defogged image J.
Specifically, decomposing the input original foggy image Z with the weighted guided filter to obtain the detail layer image Z_e and the base layer image Z_b comprises the following steps:
acquiring the pixel gray value Z(x, y) at any point (x, y) in the original foggy image Z;
obtaining the weighted filter coefficients a_p(x, y) and b_p(x, y) at the point (x, y) in the original foggy image Z by the following formulas:
a_p(x, y) = σ²_{Z,ρ}(x, y) / (σ²_{Z,ρ}(x, y) + λ / Γ_Y(x, y)),
b_p(x, y) = (1 − a_p(x, y)) · μ_{Z,ρ}(x, y);
wherein μ_{Z,ρ}(x, y) is the mean of the pixel gray values in the window centered at point (x, y) with radius ρ in the original foggy image Z;
σ²_{Z,ρ}(x, y) is the variance of the pixel gray values in the window centered at point (x, y) with radius ρ in the original foggy image Z;
Γ_Y(x, y) is the luminance component of the original foggy image Z at point (x, y);
λ is a constant greater than 1;
obtaining the base layer image Z_b by the following formula:
Z_b(x, y) = a_p(x, y) Z(x, y) + b_p(x, y);
wherein Z_b(x, y) is the pixel gray value of the base layer image Z_b at point (x, y);
Z(x, y) is the pixel gray value at any point (x, y) in the original foggy image Z;
a_p(x, y) and b_p(x, y) are the weighted filter coefficients at point (x, y) in the original foggy image Z.
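For illustration only (not part of the claimed method), this decomposition step can be sketched in Python as follows. The box-filter window statistics, the use of the local mean of the luminance Y as a stand-in for Γ_Y(x, y), the value λ = 2, and the residual definition Z_e = Z − Z_b are assumptions of the sketch, not values fixed by the patent:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wgif_decompose(Z, rho=15, lam=2.0, eps=1e-6):
    """Split a foggy RGB image Z (H x W x 3, float in [0, 1]) into base and detail layers."""
    size = 2 * rho + 1                                    # square window of radius rho
    Y = Z.max(axis=2)                                     # luminance Y = max(Z_r, Z_g, Z_b)
    gamma = uniform_filter(Y, size=size) + eps            # stand-in for Gamma_Y(x, y) (assumption)
    Z_b = np.empty_like(Z)
    for c in range(3):
        mu = uniform_filter(Z[..., c], size=size)                    # mu_{Z,rho}(x, y)
        var = uniform_filter(Z[..., c] ** 2, size=size) - mu ** 2    # sigma^2_{Z,rho}(x, y)
        var = np.maximum(var, 0.0)                        # guard against negative round-off
        a = var / (var + lam / gamma)                     # a_p(x, y)
        b = (1.0 - a) * mu                                # b_p(x, y)
        Z_b[..., c] = a * Z[..., c] + b                   # Z_b = a_p * Z + b_p
    Z_e = Z - Z_b                                         # detail layer as the residual
    return Z_b, Z_e
```

Edge-aware filtering of this kind keeps strong edges in the base layer while noise and fine texture fall into the detail layer, which is what allows the later steps to estimate A_c and t on a noise-suppressed image.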
Specifically, processing the base layer image Z_b with the quadtree search method to obtain the global atmospheric light component image A_c comprises the following steps:
step A, equally dividing the base layer image Z_b into four rectangular areas Z_{b-i} (i ∈ {1, 2, 3, 4}), the length and width of each Z_{b-i} being 1/2 of the length and width of Z_b;
step B, defining the score of each rectangular area as the average of the pixel gray values in the area minus the standard deviation of the pixel gray values in the area;
step C, selecting the rectangular area with the highest score and further dividing it into four rectangular areas;
step D, repeating steps B and C, iteratively updating the rectangular area with the highest score n times to obtain the final subdivided area Z_{b-end};
obtaining the atmospheric light component image A_c by the following formula:
|A_c (c ∈ {r, g, b})| = min |(Z_{b-end-r}(x, y), Z_{b-end-g}(x, y), Z_{b-end-b}(x, y)) − (255, 255, 255)|,
wherein (Z_{b-end-r}(x, y), Z_{b-end-g}(x, y), Z_{b-end-b}(x, y)) is the color vector of any pixel in the final subdivided area Z_{b-end}, and (255, 255, 255) is the pure white vector.
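A minimal Python sketch of this quadtree search follows, under two assumptions of the sketch: the "gray value" of a color pixel is taken as the mean of its three channels, and the recursion depth n is fixed in advance:

```python
import numpy as np

def atmospheric_light(Z_b, n=5, white=255.0):
    """Quadtree search over the base layer Z_b (H x W x 3 float array)."""
    region = Z_b
    for _ in range(n):                                   # n rounds of subdivision
        h, w = region.shape[:2]
        if min(h, w) < 2:                                # region too small to split further
            break
        quads = [region[:h // 2, :w // 2], region[:h // 2, w // 2:],
                 region[h // 2:, :w // 2], region[h // 2:, w // 2:]]
        # score of a quadrant = mean of its gray values minus their standard deviation
        grays = [q.mean(axis=2) for q in quads]
        scores = [g.mean() - g.std() for g in grays]
        region = quads[int(np.argmax(scores))]           # keep the highest-scoring quadrant
    # in the final region Z_b-end, take the color vector closest to pure white
    flat = region.reshape(-1, 3)
    idx = int(np.argmin(np.linalg.norm(flat - white, axis=1)))
    return flat[idx]                                     # A_c for c in {r, g, b}
```

The score favors regions that are bright (high mean) and flat (low standard deviation), which steers the search toward the sky rather than toward bright objects.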
Specifically, constructing the deep convolutional neural network and processing the base layer image Z_b with it to obtain the transmittance image t comprises the following steps:
down-sampling the base layer image Z_b with the first convolutional layers to obtain a low-resolution base layer image Z_b′ and then extracting low-level features, wherein the formula for down-sampling the base layer image Z_b with the first convolutional layers is:
F_i^c[x′, y′] = σ( b_i^c + Σ_{c′} Σ_{x,y} w_i^{c,c′}[x, y] · F_{i−1}^{c′}[s·x′ + x, s·y′ + y] );
wherein F_i^c is the low-level feature of the i-th convolutional layer at channel index c of the low-resolution base layer image Z_b′;
x is the abscissa of the base layer image Z_b;
y is the ordinate of the base layer image Z_b;
x′ is the abscissa of the low-resolution base layer image Z_b′;
y′ is the ordinate of the low-resolution base layer image Z_b′;
w_i^{c,c′} is the convolution weight array of the i-th convolutional layer under channel indices c and c′;
b_i^c is the bias vector of the i-th convolutional layer under channel index c;
σ(·) = max(·, 0) is the ReLU activation function, and zero padding is used as the boundary condition for all convolutional layers;
s is the stride of the convolution kernel;
inputting the obtained low-level features into a second convolutional layer with n_L = 2 layers to extract local features L, wherein the convolution kernels in the second convolutional layer have size 3 × 3 and stride 1;
passing the obtained low-level features through a third convolutional layer with n_G1 = 2 layers, convolution kernels of size 3 × 3 and stride 2, and then through fully connected layers with n_G2 = 2 layers, to obtain global features G;
adding the local features L and the global features G and feeding the sum into the activation function to obtain the mixed feature map F_L = σ(L + G) corresponding to the low-level features;
convolving the mixed feature map F_L = σ(L + G) to obtain the preliminary atmospheric refractive index feature map corresponding to the low-level features, up-sampling it, and outputting the transmittance image t(x, y).
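The text above only loosely constrains the network; the following PyTorch sketch mirrors the described local/global two-path structure. The channel width ch, the added global average pooling (so the fully connected layers are independent of input size), and the sigmoid on the output (to keep t(x, y) in [0, 1]) are all assumptions of the sketch:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransmittanceNet(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        # first convolutional layers: stride-2 down-sampling + low-level feature extraction
        self.down = nn.Sequential(
            nn.Conv2d(3, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        # local path: n_L = 2 convolutional layers, 3x3 kernels, stride 1
        self.local = nn.Sequential(
            nn.Conv2d(ch, ch, 3, stride=1, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, stride=1, padding=1))
        # global path: n_G1 = 2 convs (3x3, stride 2), then n_G2 = 2 fully connected layers
        self.glob_conv = nn.Sequential(
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1))     # pooling added so the FC input size is fixed (assumption)
        self.glob_fc = nn.Sequential(
            nn.Linear(ch, ch), nn.ReLU(inplace=True), nn.Linear(ch, ch))
        self.head = nn.Conv2d(ch, 1, 3, padding=1)   # preliminary transmittance feature map

    def forward(self, z_b):                          # z_b: N x 3 x H x W base layer batch
        feats = self.down(z_b)                       # low-level features F_i^c
        local = self.local(feats)                    # local features L
        glob = self.glob_fc(self.glob_conv(feats).flatten(1))
        mixed = torch.relu(local + glob[:, :, None, None])   # F_L = sigma(L + G)
        t = torch.sigmoid(self.head(mixed))          # keep t(x, y) in [0, 1] (assumption)
        # up-sample the low-resolution map back to the input resolution
        return F.interpolate(t, size=z_b.shape[-2:], mode='bilinear', align_corners=False)
```

Broadcasting the global feature vector over the spatial grid before the ReLU is what lets scene-level context (overall haze density) modulate the per-pixel local estimate.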
Specifically, based on the atmospheric scattering model, the original foggy image Z is restored from the global atmospheric light component image A_c, the transmittance image t, and the detail layer image Z_e; the formula for obtaining the defogged image J is:
J(x, y) = (Z_b(x, y) − A_c) / max(t(x, y), 1/η) + A_c + Z_e(x, y);
wherein J(x, y) is the pixel gray value at point (x, y) in the defogged image J,
Z_e(x, y) is the pixel gray value at point (x, y) in the detail layer image Z_e,
Z_b(x, y) is the pixel gray value at point (x, y) in the base layer image Z_b,
t(x, y) is the transmittance at point (x, y) of the output transmittance image.
In this step, the foggy image is restored according to the atmospheric scattering model Z(x, y) = t(x, y) J(x, y) + A_c (1 − t(x, y)), where J(x, y) is the defogged image and (x, y) are the spatial coordinates of a pixel point; the restored image can be expressed as
J(x, y) = (Z(x, y) − A_c) / max(t(x, y), t_0) + A_c,
wherein t_0 is a parameter that ensures the processing effect in dense-fog areas. Writing Z(x, y) = J(x, y) + n(x, y), where J is the noiseless image and n is the noise, the noise term is divided by the transmittance during recovery and is therefore amplified. Considering that the noise is mainly contained in the detail layer image Z_e, the present invention instead restores the image as
J(x, y) = (Z_b(x, y) − A_c) / max(t(x, y), 1/η) + A_c + Z_e(x, y);
the atmospheric light component A_c is obtained by the quadtree search described above and the transmittance t by the deep convolutional neural network, and clamping the transmittance at 1/η reduces the effect of noise on the restored image.
η is a constant (η = 6 in this embodiment). Experiments show that t(x, y) ∈ [0, 1]; when t(x, y) < 1/η, the point (x, y) is a sky-region pixel, and clamping the transmittance at 1/η prevents the noise of the sky region from being amplified.
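A short sketch of this recovery step, reusing the earlier sketch functions, might read as follows (the broadcasting conventions and the [0, 1] value range are assumptions of the sketch):

```python
import numpy as np

def recover(Z_b, Z_e, A, t, eta=6.0):
    # Z_b, Z_e: H x W x 3; A: length-3 atmospheric light vector; t: H x W transmittance map
    t_clamped = np.maximum(t, 1.0 / eta)[..., None]   # max(t(x, y), 1/eta): protect sky regions
    return (Z_b - A) / t_clamped + A + Z_e            # J = (Z_b - A)/max(t, 1/eta) + A + Z_e

# End-to-end usage with the earlier sketches (illustrative only):
# Z_b, Z_e = wgif_decompose(Z)               # Z: foggy image in [0, 1]
# A = atmospheric_light(Z_b, white=1.0)      # quadtree estimate of A_c
# t = ...                                    # transmittance map from TransmittanceNet
# J = recover(Z_b, Z_e, A, t)
```

Because only the base layer is divided by the transmittance while the detail layer is added back unscaled, the noise concentrated in Z_e is passed through rather than amplified by 1/t.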
specifically, the pair of mixed feature maps F L Carrying out convolution processing on the signals so as to obtain the characteristics of the lower layerAnd then, performing up-sampling on the atmospheric refractive index characteristic diagram, and outputting a transmittance image t, wherein the method further comprises the following steps:
the loss function is constructed as follows:
L=L r +w c L c
wherein L is r To reconstruct the loss function, L c As a function of color loss, w c To be assigned to the color loss function L c Weight of, L r Is shown asL c Is expressed as->
J is a defogged image, Z is a fogging image, c belongs to (R, G, B) as a channel index, and angle (J (x, y), Z (x, y)) represents an included angle of three-dimensional color vectors of the fogging image and the defogged image at a pixel point (x, y); although the similarity of the original image and the defogged image can be measured by the reconstruction error, the consistent angles of the color vectors of the original image and the defogged image cannot be ensured, so that the consistent angles of the color vectors of the same pixel point of the images before and after defogging are ensured by adding the color error function.
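As a sketch of this loss (the weight w_c = 0.1 is illustrative, and the sketch follows the text in pairing the defogged image J with the foggy input Z; a training setup could equally substitute a ground-truth clear image for Z):

```python
import torch
import torch.nn.functional as F

def dehaze_loss(J, Z, w_c=0.1):
    # J, Z: N x 3 x H x W batches of defogged and foggy images
    L_r = torch.mean((J - Z) ** 2)                    # reconstruction loss L_r
    cos = F.cosine_similarity(J, Z, dim=1)            # per-pixel cosine of the color angle
    L_c = torch.mean(torch.acos(cos.clamp(-1 + 1e-6, 1 - 1e-6)))   # color loss L_c
    return L_r + w_c * L_c                            # L = L_r + w_c * L_c
```

The clamp before acos avoids NaN gradients when the two color vectors are exactly parallel.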
The image defogging method and device based on the combination of priori knowledge and deep learning have the following advantages: the weighted guided filter decomposes the foggy image into a detail layer image and a base layer image; the quadtree search method and the deep neural network then process the base layer image to obtain the global atmospheric light component image and the transmittance image; finally, the global atmospheric light component image, the transmittance image, and the detail layer image are used to restore the foggy image and obtain the defogged image. Because the global atmospheric light component image and the transmittance image are estimated from the base layer image, amplification of image noise is avoided and the foggy image can be defogged more effectively. Moreover, the loss function computed on the defogged image provides feedback for parameter adjustment, and adding the color loss to the loss function improves the robustness of the algorithm.
In another embodiment of the invention, an image defogging device based on the combination of priori knowledge and deep learning comprises a memory and a processor. The memory is used for storing the computer program, and the processor is configured to implement the above image defogging method based on the combination of priori knowledge and deep learning when executing the computer program.
The reader should understand that in the description of this specification, reference to the description of the terms "one embodiment," "some embodiments," "an example," "a specific example" or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Those skilled in the art will appreciate that various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined. Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.
Claims (6)
1. An image defogging method based on combination of priori knowledge and deep learning, characterized by comprising the following steps:
decomposing the input original foggy image Z with a weighted guided filter to obtain a detail layer image Z_e and a base layer image Z_b;
processing the base layer image Z_b with a quadtree search method to obtain a global atmospheric light component image A_c, where c ∈ {r, g, b};
constructing a deep convolutional neural network and processing the base layer image Z_b with it to obtain a transmittance image t;
based on an atmospheric scattering model, restoring the original foggy image Z from the global atmospheric light component image A_c, the transmittance image t, and the detail layer image Z_e to obtain a defogged image J;
wherein, based on the atmospheric scattering model, the formula for restoring the original foggy image Z from the global atmospheric light component image A_c, the transmittance image t, and the detail layer image Z_e to obtain the defogged image J is:
J(x, y) = (Z_b(x, y) − A_c) / max(t(x, y), 1/η) + A_c + Z_e(x, y);
wherein J(x, y) is the pixel gray value at point (x, y) in the defogged image J, Z_e(x, y) is the pixel gray value at point (x, y) in the detail layer image Z_e, Z_b(x, y) is the pixel gray value at point (x, y) in the base layer image Z_b, t(x, y) is the transmittance at point (x, y) of the transmittance image t, and η is a constant greater than zero.
2. The image defogging method based on combination of priori knowledge and deep learning according to claim 1, characterized in that decomposing the input original foggy image Z with the weighted guided filter to obtain the base layer image Z_b specifically comprises the following steps:
acquiring the pixel gray value Z(x, y) at any point (x, y) in the original foggy image Z;
obtaining the weighted filter coefficients a_p(x, y) and b_p(x, y) at the point (x, y) in the original foggy image Z by the following formulas:
a_p(x, y) = σ²_{Z,ρ}(x, y) / (σ²_{Z,ρ}(x, y) + λ / Γ_Y(x, y)),
b_p(x, y) = (1 − a_p(x, y)) · μ_{Z,ρ}(x, y);
wherein μ_{Z,ρ}(x, y) is the mean of the pixel gray values in the window centered at point (x, y) with radius ρ in the original foggy image Z; σ²_{Z,ρ}(x, y) is the variance of the pixel gray values in the window centered at point (x, y) with radius ρ in the original foggy image Z; Γ_Y(x, y) is the luminance component of the original foggy image Z at point (x, y), with Y = max{Z_r, Z_g, Z_b}, where Z_r, Z_g, Z_b are the R, G, B values at point (x, y) of the foggy image Z; λ is a constant greater than 1;
obtaining the base layer image Z_b by the following formula:
Z_b(x, y) = a_p(x, y) Z(x, y) + b_p(x, y);
wherein Z_b(x, y) is the pixel gray value of the base layer image Z_b at point (x, y).
3. The image defogging method based on combination of priori knowledge and deep learning according to claim 1, characterized in that processing the base layer image Z_b with the quadtree search method to obtain the global atmospheric light component image A_c specifically comprises the following steps:
step A, equally dividing the base layer image Z_b into four rectangular areas Z_{b-i} (i ∈ {1, 2, 3, 4}), the length and width of each Z_{b-i} being 1/2 of the length and width of Z_b;
step B, defining the score of each rectangular area as the average of the pixel gray values in the area minus the standard deviation of the pixel gray values in the area;
step C, selecting the rectangular area with the highest score and further dividing it into four rectangular areas;
step D, repeating steps B and C, iteratively updating the rectangular area with the highest score n times to obtain the final subdivided area Z_{b-end};
obtaining the atmospheric light component image A_c by the following formula:
|A_c (c ∈ {r, g, b})| = min |(Z_{b-end-r}(x, y), Z_{b-end-g}(x, y), Z_{b-end-b}(x, y)) − (255, 255, 255)|,
wherein (Z_{b-end-r}(x, y), Z_{b-end-g}(x, y), Z_{b-end-b}(x, y)) is the color vector at point (x, y) in the final subdivided area Z_{b-end}, and (255, 255, 255) is the pure white vector.
4. The image defogging method based on combination of priori knowledge and deep learning according to claim 1, characterized in that constructing the deep convolutional neural network and processing the base layer image Z_b with it to obtain the transmittance image t specifically comprises the following steps:
down-sampling the base layer image Z_b with the first convolutional layers to obtain a low-resolution base layer image Z_b′ and then extracting low-level features, wherein the formula for down-sampling the base layer image Z_b with the first convolutional layers is:
F_i^c[x′, y′] = σ( b_i^c + Σ_{c′} Σ_{x,y} w_i^{c,c′}[x, y] · F_{i−1}^{c′}[s·x′ + x, s·y′ + y] );
wherein F_i^c is the low-level feature of the i-th of the first convolutional layers at channel index c of the low-resolution base layer image Z_b′; x is the abscissa and y the ordinate of the base layer image Z_b; x′ is the abscissa and y′ the ordinate of the low-resolution base layer image Z_b′; w_i^{c,c′} is the convolution weight array of the i-th of the first convolutional layers under channel indices c and c′; b_i^c is the bias vector of the i-th of the first convolutional layers under channel index c; σ(·) = max(·, 0) is the ReLU activation function, and zero padding is used as the boundary condition for all of the first convolutional layers; s is the stride of the convolution kernels of the first convolutional layers;
inputting the obtained low-level features into a second convolutional layer with n_L = 2 layers to extract local features L, wherein the convolution kernels in the second convolutional layer have size 3 × 3 and stride 1;
passing the obtained low-level features through a third convolutional layer with n_G1 = 2 layers, convolution kernels of size 3 × 3 and stride 2, and then through fully connected layers with n_G2 = 2 layers, to obtain global features G;
adding the local features L and the global features G and feeding the sum into the activation function to obtain the mixed feature map F_L = σ(L + G) corresponding to the low-level features.
5. The image defogging method based on combination of priori knowledge and deep learning according to claim 4, characterized in that after convolving the mixed feature map F_L = σ(L + G) to obtain the preliminary atmospheric refractive index feature map corresponding to the low-level features, up-sampling it, and outputting the transmittance image t, the method further comprises the following steps:
constructing the loss function:
L = L_r + w_c L_c;
wherein L_r is the reconstruction loss function, L_c is the color loss function, and w_c is the weight assigned to the color loss function L_c; L_r is expressed as
L_r = (1/N) Σ_{x,y} Σ_{c} (J_c(x, y) − Z_c(x, y))²,
and L_c is expressed as
L_c = (1/N) Σ_{x,y} ∠(J(x, y), Z(x, y)),
where N is the number of pixels;
J is the defogged image, Z is the foggy image, c ∈ {r, g, b} is the channel index, and ∠(J(x, y), Z(x, y)) denotes the angle between the three-dimensional color vectors of the foggy image and the defogged image at pixel point (x, y);
performing parameter adjustment on the deep convolutional neural network with the loss function.
6. An image defogging device based on combination of priori knowledge and deep learning, characterized by comprising a memory and a processor;
the memory being configured to store a computer program;
the processor being configured, when executing the computer program, to implement the image defogging method based on combination of priori knowledge and deep learning according to any one of claims 1 to 5.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201911226437.6A | 2019-12-04 | 2019-12-04 | Image defogging method and device based on combination of priori knowledge and deep learning |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN111161159A | 2020-05-15 |
| CN111161159B | 2023-04-18 |

Family: ID=70556359

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201911226437.6A | Image defogging method and device based on combination of priori knowledge and deep learning | 2019-12-04 | 2019-12-04 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN111161159B (en) |
Families Citing this family (3)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111861939B * | 2020-07-30 | 2022-04-29 | Sichuan University | Single image defogging method based on unsupervised learning |
| CN111932365B * | 2020-08-11 | 2021-09-10 | Shanghai Huarui Bank Co., Ltd. | Financial credit investigation system and method based on block chain |
| CN114331874A * | 2021-12-07 | 2022-04-12 | Xi'an University of Posts and Telecommunications | Unmanned aerial vehicle aerial image defogging method and device based on residual detail enhancement |
Patent Citations (1)

| Publication number | Priority date | Publication date | Title |
|---|---|---|---|
| CN107749052A * | 2017-10-24 | 2018-03-02 | Image defogging method and system based on deep learning neural network |

Family Cites Families (3)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2014168587A1 | 2013-04-12 | 2014-10-16 | Agency For Science, Technology And Research | Method and system for processing an input image |
| US20180122051A1 | 2015-03-30 | 2018-05-03 | Agency For Science, Technology And Research | Method and device for image haze removal |
| KR102461144B1 | 2015-10-16 | 2022-10-31 | Samsung Electronics Co., Ltd. | Image haze removing device |

Non-Patent Citations (3)

| Title |
|---|
| Yu Chunyan, Lin Huixiang, Xu Xiaodan, Ye Xinyan. Parameter estimation of the foggy-weather degradation model and CUDA design. Journal of Computer-Aided Design & Computer Graphics, 2018(02). * |
| Xie Wei, Zhou Yuqin, You Min. Improved guided filtering fusing gradient information. Journal of Image and Graphics, 2016(09). * |
| Chen Yong, Guo Hongguang, Ai Yapeng. Multi-scale deep learning for single-image dehazing based on dual-domain decomposition. Acta Optica Sinica, 2019(02). * |
Also Published As

| Publication number | Publication date |
|---|---|
| CN111161159A | 2020-05-15 |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |