CN111161159B - Image defogging method and device based on combination of priori knowledge and deep learning - Google Patents

Image defogging method and device based on the combination of prior knowledge and deep learning

Publication number: CN111161159B (granted 2023-04-18); application CN201911226437.6A (filed 2019-12-04); application publication CN111161159A (2020-05-15)
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 郑超兵, 伍世虔, 徐望明, 方顺, 陈思摇
Assignee (original and current): Wuhan University of Science and Technology (WHUST)
Legal status: Expired - Fee Related (terminated for non-payment of the annual fee)

Classifications

    • G06T 7/90 — Image analysis; determination of colour characteristics
    • G06N 3/045 — Neural networks; architectures, e.g. interconnection topology; combinations of networks
    • G06T 5/00 — Image enhancement or restoration
    • G06T 5/70 — Denoising; smoothing


Abstract

The invention relates to an image defogging method and device based on the combination of prior knowledge and deep learning. The method comprises the following steps: decomposing an input hazy image Z with a weighted guided filter to obtain a detail-layer image Z_e and a base-layer image Z_b; processing the base-layer image Z_b with a quadtree search to obtain the global atmospheric light component image A_c, where c ∈ {r, g, b}; constructing a deep convolutional neural network that processes the base-layer image Z_b to obtain a transmittance image t; and, based on the atmospheric scattering model, restoring the hazy image Z from the global atmospheric light component image A_c, the transmittance image t and the detail-layer image Z_e to obtain the defogged image J.

Description

Image defogging method and device based on the combination of prior knowledge and deep learning

Technical Field

The present invention relates to the field of image processing technology, and in particular to an image defogging method and device based on the combination of prior knowledge and deep learning.

Background Art

In foggy weather, sunlight scattered by floating fog and dust particles causes images to fade and blur and lowers their contrast and softness, which seriously degrades image quality and limits applications such as video surveillance and analysis, object recognition, urban traffic, aerial photography, and military defense. Restoring clear images from hazy ones therefore bears directly on people's livelihood and has great practical significance for production and daily life.

Current dehazing algorithms fall into three main categories: non-model-based, model-based, and deep-learning-based. Non-model-based algorithms enhance the image mainly by directly stretching its contrast. Common methods include histogram equalization, homomorphic filtering, algorithms based on the Retinex model, and algorithms based on improved Retinex models. These methods dehaze according to the principles of optical imaging, balancing the contrast between image colours and softening the overall appearance, but the contrast of the result is not effectively enhanced: they weaken the dark or bright regions of the original image and blur its salient content.

Summary of the Invention

In view of the deficiencies of the prior art, the present invention provides an image defogging method and device based on the combination of prior knowledge and deep learning.

In a first aspect, the present invention provides an image defogging method based on the combination of prior knowledge and deep learning, the method comprising:

decomposing an input hazy image Z with a weighted guided filter to obtain a detail-layer image Z_e and a base-layer image Z_b;

processing the base-layer image Z_b with a quadtree search to obtain the global atmospheric light component image A_c, where c ∈ {r, g, b};

constructing a deep convolutional neural network that processes the base-layer image Z_b to obtain a transmittance image t;

based on the atmospheric scattering model, restoring the hazy image Z from the global atmospheric light component image A_c, the transmittance image t and the detail-layer image Z_e, to obtain the defogged image J.

Preferably, decomposing the input hazy image Z with the weighted guided filter to obtain the base-layer image Z_b specifically comprises the following steps:

obtaining the pixel grey value Z(x, y) at any point (x, y) of the hazy image Z;

obtaining the weighted filter coefficients a_p(x, y) and b_p(x, y) at point (x, y) of the hazy image Z by the following formula:

a_p(x, y) = σ²_{Z,ρ}(x, y) / (σ²_{Z,ρ}(x, y) + λ/Γ_Y(x, y)),
b_p(x, y) = (1 − a_p(x, y)) · μ_{Z,ρ}(x, y);

where μ_{Z,ρ}(x, y) is the mean, and σ²_{Z,ρ}(x, y) the variance, of the pixel grey values in a window of radius ρ centred at point (x, y) of the hazy image Z; Γ_Y(x, y) is the luminance component of the hazy image Z at point (x, y), where Y = max{Z_r, Z_g, Z_b} and Z_r, Z_g, Z_b are the R, G, B values of Z at point (x, y); and λ is a constant greater than 1;

The base-layer image Z_b is then obtained from:

Z_b(x, y) = a_p(x, y) Z(x, y) + b_p(x, y);

where Z_b(x, y) is the pixel grey value of the base-layer image Z_b at point (x, y).
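As a concrete illustration of this decomposition, the following sketch implements it as a self-guided weighted guided filter. It is a sketch under stated assumptions, not the patent's reference code: the values of rho, lam and eps are illustrative, the coefficient form follows the symbol definitions above together with standard weighted-guided-filter practice, and Γ_Y is taken to be the luminance Y = max{Z_r, Z_g, Z_b} as the text defines it.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wgif_decompose(Z, rho=8, lam=0.01, eps=1e-6):
    """Split a hazy image Z (H x W x 3, floats in [0, 1]) into base + detail layers."""
    w = 2 * rho + 1                                   # window of radius rho
    Y = Z.max(axis=2)                                 # luminance Y = max{Zr, Zg, Zb}
    gamma = np.maximum(Y, eps)                        # Gamma_Y, kept away from zero

    Z_b = np.empty_like(Z)
    for c in range(3):                                # filter each channel, self-guided
        mu = uniform_filter(Z[..., c], size=w)        # window mean   mu_{Z,rho}
        var = uniform_filter(Z[..., c] ** 2, size=w) - mu ** 2   # window variance sigma^2_{Z,rho}
        a = var / (var + lam / gamma)                 # a_p(x, y)
        b = (1.0 - a) * mu                            # b_p(x, y)
        a = uniform_filter(a, size=w)                 # smooth the coefficients over the
        b = uniform_filter(b, size=w)                 # window, as in guided filtering
        Z_b[..., c] = a * Z[..., c] + b               # base layer Z_b
    Z_e = Z - Z_b                                     # detail layer Z_e = Z - Z_b
    return Z_b, Z_e
```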

Preferably, processing the base-layer image Z_b with the quadtree search to obtain the global atmospheric light component image A_c specifically comprises the following steps:

Step A: divide the base-layer image Z_b evenly into four rectangular regions Z_b-i (i ∈ {1, 2, 3, 4}), each with half the length and half the width of Z_b;

Step B: define the score of each rectangular region as the mean of its pixel grey values minus the standard deviation of its pixel grey values;

Step C: select the rectangular region with the highest score and divide it evenly into four rectangular regions again;

Step D: repeat steps B and C, iterating on the highest-scoring region n times, to obtain the final subdivided region Z_b-end;

The atmospheric light component image A_c is then obtained from:

|A_c (c ∈ {r, g, b})| = min |(Z_b-end-r(x, y), Z_b-end-g(x, y), Z_b-end-b(x, y)) − (255, 255, 255)|,

where (Z_b-end-r(x, y), Z_b-end-g(x, y), Z_b-end-b(x, y)) is the colour vector at point (x, y) of the final subdivided region Z_b-end and (255, 255, 255) is the pure-white vector; that is, A_c is taken from the pixel of Z_b-end whose colour vector is closest to pure white.
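The search can be sketched as follows (illustrative only: the iteration count n = 5 is an arbitrary choice, and the image is assumed to hold floats in [0, 1], so the pure-white vector is (1, 1, 1) rather than the 8-bit (255, 255, 255) of the formula above):

```python
import numpy as np

def estimate_airlight(Z_b, n=5):
    """Quadtree search for the global atmospheric light on the base layer Z_b."""
    region = Z_b
    for _ in range(n):
        h, w = region.shape[:2]
        if min(h, w) < 2:                              # too small to split further
            break
        quads = [region[:h // 2, :w // 2], region[:h // 2, w // 2:],
                 region[h // 2:, :w // 2], region[h // 2:, w // 2:]]
        scores = [q.mean() - q.std() for q in quads]   # bright but smooth regions win
        region = quads[int(np.argmax(scores))]
    flat = region.reshape(-1, 3)
    white = np.ones(3)                                 # pure-white vector for [0, 1] images
    A = flat[np.argmin(np.linalg.norm(flat - white, axis=1))]
    return A                                           # (A_r, A_g, A_b)
```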

Preferably, constructing the deep convolutional neural network that processes the base-layer image Z_b to obtain the transmittance image t specifically comprises the following steps:

The base-layer image Z_b is downsampled by a first convolutional stage to obtain a low-resolution base-layer image Z_b', from which low-level features are extracted; the downsampling performed by the i-th layer of the first convolutional stage is:

F_c^i[x', y'] = σ( b_c^i + Σ_{c'} Σ_{x,y} W_{c,c'}^i[x, y] · F_{c'}^{i−1}[s·x' + x, s·y' + y] );

where F_c^i is the low-level feature of the low-resolution base-layer image Z_b' at channel index c in the i-th layer of the first convolutional stage; x and y are the horizontal and vertical coordinates of the base-layer image Z_b, and x' and y' those of the low-resolution base-layer image Z_b'; W_{c,c'}^i is the array of convolution weights of the i-th layer under channel indices c and c'; b_c^i is the bias vector of the i-th layer at channel index c; σ(·) = max(·, 0) is the ReLU activation function, with zero padding used as the boundary condition of every layer of the first convolutional stage; and s is the stride of the convolution kernels of the first convolutional stage.

The low-level features thus obtained are fed into a second convolutional stage of n_L = 2 layers to extract local features L; the convolution kernels of the second stage are 3×3 with stride 1.

The low-level features produced by the second convolutional stage are fed into a third convolutional stage of n_G1 = 2 layers with 3×3 kernels and stride 2, followed by a fully connected stage of n_G2 = 2 layers, yielding global features G.

The local features L and the global features G are summed and passed through the activation function, giving the mixed feature map F_L = σ(L + G) corresponding to the low-level features.

The mixed feature map F_L = σ(L + G) is convolved to obtain a preliminary atmospheric refractive-index feature map corresponding to the low-level features, which is then upsampled to output the transmittance image t.
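A hedged PyTorch sketch of this network is given below. The local path (n_L = 2 layers, 3×3, stride 1), the global path (n_G1 = 2 strided convolutions plus n_G2 = 2 fully connected layers) and the fusion F_L = σ(L + G) follow the text; the channel width ch, the depth and stride of the first downsampling stage, and the final sigmoid that keeps t in (0, 1) are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransmissionNet(nn.Module):
    """Sketch of the transmittance-estimating network described above."""

    def __init__(self, ch=32):
        super().__init__()
        # first stage: strided 3x3 convolutions downsample Z_b (stride s = 2)
        self.down = nn.Sequential(
            nn.Conv2d(3, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # second stage (local path): n_L = 2 layers, 3x3 kernels, stride 1
        self.local = nn.Sequential(
            nn.Conv2d(ch, ch, 3, stride=1, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, stride=1, padding=1),
        )
        # third stage (global path): n_G1 = 2 layers (3x3, stride 2),
        # then n_G2 = 2 fully connected layers
        self.glob_conv = nn.Sequential(
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),              # collapse to one vector per image
        )
        self.glob_fc = nn.Sequential(
            nn.Linear(ch, ch), nn.ReLU(inplace=True), nn.Linear(ch, ch),
        )
        self.head = nn.Conv2d(ch, 1, 3, padding=1)   # preliminary feature map

    def forward(self, z_b):                      # z_b: B x 3 x H x W base layer
        h, w = z_b.shape[-2:]
        feats = self.down(z_b)                   # low-resolution low-level features
        local = self.local(feats)                # local features L
        glob = self.glob_fc(self.glob_conv(feats).flatten(1))
        fused = F.relu(local + glob[:, :, None, None])   # F_L = sigma(L + G)
        t = torch.sigmoid(self.head(fused))      # keep transmittance in (0, 1)
        # upsample the low-resolution map back to the input size
        return F.interpolate(t, size=(h, w), mode="bilinear", align_corners=False)
```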

Preferably, based on the atmospheric scattering model, the hazy image Z is restored from the global atmospheric light component image A_c, the transmittance image t and the detail-layer image Z_e, the defogged image J being given by:

J(x, y) = (Z_b(x, y) − A_c) / t̃(x, y) + A_c + Z_e(x, y);

where J(x, y) is the pixel grey value of the defogged image J at point (x, y); Z_e(x, y) is the pixel grey value of the detail-layer image Z_e at point (x, y); Z_b(x, y) is the pixel grey value of the base-layer image Z_b at point (x, y); t(x, y) is the transmittance of the transmittance image t at point (x, y); and t̃(x, y) = max(t(x, y), 1/η), where η is a constant greater than zero.
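The recovery step then reduces to a few array operations; the sketch below assumes float images in [0, 1], a length-3 airlight vector A and an H × W transmittance map t:

```python
import numpy as np

def recover(Z_b, Z_e, A, t, eta=6.0):
    """Restore J from base layer, detail layer, airlight and transmittance."""
    t_tilde = np.maximum(t, 1.0 / eta)[..., None]  # clamp, broadcast over channels
    J = (Z_b - A) / t_tilde + A + Z_e              # scattering-model inversion + detail
    return np.clip(J, 0.0, 1.0)                    # keep values displayable
```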

Preferably, after the mixed feature map F_L = σ(L + G) is convolved to obtain the preliminary atmospheric refractive-index feature map corresponding to the low-level features and the feature map is upsampled to output the transmittance image t, the method further comprises the following steps:

The loss function is constructed as follows:

L = L_r + w_c L_c

where L_r is the reconstruction loss function, L_c is the colour loss function, and w_c is the weight assigned to the colour loss function L_c; with N the number of pixels, L_r is expressed as

L_r = (1/N) Σ_{c∈{R,G,B}} Σ_{(x,y)} (J_c(x, y) − Z_c(x, y))²,

and L_c is expressed as

L_c = (1/N) Σ_{(x,y)} ∠(J(x, y), Z(x, y));

J is the defogged image, Z is the hazy image, c ∈ {R, G, B} is the channel index, and ∠(J(x, y), Z(x, y)) denotes the angle between the three-dimensional colour vectors of the hazy and defogged images at pixel (x, y);

The loss function is used to tune the parameters of the deep convolutional neural network.
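A sketch of this loss for B × 3 × H × W tensors follows. Two points are assumptions rather than statements of the patent: L_r is taken as a mean-squared reconstruction error (the text only names it a reconstruction loss), and w_c = 0.1 is an illustrative weight; Z_ref stands for the image the text denotes Z.

```python
import torch

def dehaze_loss(J, Z_ref, w_c=0.1, eps=1e-7):
    """L = L_r + w_c * L_c: pixel reconstruction plus colour-angle loss."""
    l_r = torch.mean((J - Z_ref) ** 2)                    # reconstruction term L_r
    dot = (J * Z_ref).sum(dim=1)                          # per-pixel colour dot product
    norm = J.norm(dim=1) * Z_ref.norm(dim=1) + eps        # product of vector lengths
    cos = torch.clamp(dot / norm, -1.0 + eps, 1.0 - eps)  # numerical safety for acos
    l_c = torch.mean(torch.acos(cos))                     # mean colour angle L_c
    return l_r + w_c * l_c
```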

In a second aspect, the present invention provides an image defogging device based on the combination of prior knowledge and deep learning, the device comprising a memory and a processor;

the memory being configured to store a computer program;

the processor being configured, when executing the computer program, to implement the image defogging method based on the combination of prior knowledge and deep learning described above.

The beneficial effects of the image defogging method and device based on the combination of prior knowledge and deep learning provided by the present invention are as follows: the hazy image is decomposed by a weighted guided filter into a detail-layer image and a base-layer image; the base-layer image is then processed by the quadtree search and by a deep neural network to obtain the global atmospheric light component image and the transmittance image; finally, the hazy image is restored from the global atmospheric light component image, the transmittance image and the detail-layer image to obtain the defogged image. Because both the global atmospheric light component image and the transmittance image are estimated or computed from the base layer, amplification of image noise is avoided and hazy images can be dehazed effectively.

Brief Description of the Drawings

To illustrate the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings required in the description of the embodiments or of the prior art are briefly introduced below. The drawings described below are evidently only some embodiments of the present invention; a person of ordinary skill in the art can derive other drawings from them without creative effort.

FIG. 1 is a flow chart of an image defogging method based on the combination of prior knowledge and deep learning according to an embodiment of the present invention.

Detailed Description

The principles and features of the present invention are described below with reference to the drawings; the examples given serve only to explain the present invention and are not intended to limit its scope.

Referring to FIG. 1, the single-image defogging method based on the combination of prior knowledge and deep learning described by the present invention comprises the following steps:

decomposing an input hazy image Z with a weighted guided filter to obtain a detail-layer image Z_e and a base-layer image Z_b;

processing the base-layer image Z_b with a quadtree search to obtain the global atmospheric light component image A_c;

constructing a deep convolutional neural network that processes the base-layer image Z_b to obtain a transmittance image t;

based on the atmospheric scattering model, restoring the hazy image Z from the global atmospheric light component image A_c, the transmittance image t and the detail-layer image Z_e, to obtain the defogged image J.

Specifically, decomposing the input hazy image Z with the weighted guided filter to obtain the detail-layer image Z_e and the base-layer image Z_b comprises the following steps:

obtaining the pixel grey value Z(x, y) at any point (x, y) of the hazy image Z;

obtaining the weighted filter coefficients a_p(x, y) and b_p(x, y) at point (x, y) of the hazy image Z by the following formula:

a_p(x, y) = σ²_{Z,ρ}(x, y) / (σ²_{Z,ρ}(x, y) + λ/Γ_Y(x, y)),
b_p(x, y) = (1 − a_p(x, y)) · μ_{Z,ρ}(x, y);

where μ_{Z,ρ}(x, y) is the mean, and σ²_{Z,ρ}(x, y) the variance, of the pixel grey values in a window of radius ρ centred at point (x, y) of the hazy image Z; Γ_Y(x, y) is the luminance component of the hazy image Z at point (x, y); and λ is a constant greater than 1.

The base-layer image Z_b is then obtained from:

Z_b(x, y) = a_p(x, y) Z(x, y) + b_p(x, y);

where Z_b(x, y) is the pixel grey value of the base-layer image Z_b at point (x, y); Z(x, y) is the pixel grey value at any point (x, y) of the hazy image Z; and a_p(x, y) and b_p(x, y) are the weighted filter coefficients at point (x, y) of the hazy image Z. The detail-layer image Z_e is the residual Z − Z_b.

Specifically, processing the base-layer image Z_b with the quadtree search to obtain the global atmospheric light component image A_c comprises the following steps:

Step A: divide the base-layer image Z_b evenly into four rectangular regions Z_b-i (i ∈ {1, 2, 3, 4}), each with half the length and half the width of Z_b;

Step B: define the score of each rectangular region as the mean of its pixel grey values minus the standard deviation of its pixel grey values;

Step C: select the rectangular region with the highest score and divide it evenly into four rectangular regions again;

Repeat steps B and C, iterating on the highest-scoring region n times, to obtain the final subdivided region Z_b-end;

The atmospheric light component image A_c is then obtained from:

|A_c (c ∈ {r, g, b})| = min |(Z_b-end-r(x, y), Z_b-end-g(x, y), Z_b-end-b(x, y)) − (255, 255, 255)|,

where (Z_b-end-r(x, y), Z_b-end-g(x, y), Z_b-end-b(x, y)) is the colour vector of a pixel of the final subdivided region Z_b-end and (255, 255, 255) is the pure-white vector.

Specifically, constructing the deep convolutional neural network that processes the base-layer image Z_b to obtain the transmittance image t comprises the following steps:

The base-layer image Z_b is downsampled by a first convolutional stage to obtain a low-resolution base-layer image Z_b', from which low-level features are extracted; the downsampling performed by the i-th convolutional layer is:

F_c^i[x', y'] = σ( b_c^i + Σ_{c'} Σ_{x,y} W_{c,c'}^i[x, y] · F_{c'}^{i−1}[s·x' + x, s·y' + y] );

where F_c^i is the low-level feature of the low-resolution base-layer image Z_b' at channel index c in the i-th convolutional layer; x and y are the horizontal and vertical coordinates of the base-layer image Z_b; x' and y' are those of the low-resolution base-layer image Z_b'; W_{c,c'}^i is the array of convolution weights of the i-th convolutional layer under channel indices c and c'; b_c^i is the bias vector of the i-th convolutional layer at channel index c; σ(·) = max(·, 0) is the ReLU activation function, with zero padding used as the boundary condition of all convolutional layers; and s is the stride of the convolution kernels.

The low-level features thus obtained are fed into a second convolutional stage of n_L = 2 layers to extract local features L; the convolution kernels of the second stage are 3×3 with stride 1.

The low-level features produced by the second convolutional stage are fed into a third convolutional stage of n_G1 = 2 layers with 3×3 kernels and stride 2, followed by a fully connected stage of n_G2 = 2 layers, yielding global features G.

The local features L and the global features G are summed and passed through the activation function, giving the mixed feature map F_L = σ(L + G) corresponding to the low-level features.

The mixed feature map F_L = σ(L + G) is convolved to obtain a preliminary atmospheric refractive-index feature map corresponding to the low-level features, which is then upsampled to output the transmittance image t(x, y).

Specifically, based on the atmospheric scattering model, the hazy image Z is restored from the global atmospheric light component image A_c, the transmittance image t and the detail-layer image Z_e; the defogged image J is given by:

J(x, y) = (Z_b(x, y) − A_c) / t̃(x, y) + A_c + Z_e(x, y);

where J(x, y) is the pixel grey value of the defogged image J at point (x, y); Z_e(x, y) is the pixel grey value of the detail-layer image Z_e at point (x, y); Z_b(x, y) is the pixel grey value of the base-layer image Z_b at point (x, y); t(x, y) is the output transmittance image; and t̃(x, y) = max(t(x, y), 1/η), where η is a constant greater than zero.

In this step the hazy image is restored according to the atmospheric scattering model Z(x, y) = t(x, y) J(x, y) + A_c (1 − t(x, y)), where J(x, y) is the defogged image and (x, y) are the spatial coordinates of a pixel. The restored image can be expressed as:

J(x, y) = (Z(x, y) − A_c) / max(t(x, y), t_0) + A_c;

where t_0 is a parameter that safeguards the result in dense-fog regions. Writing Z(x, y) = J(x, y) + n(x, y), with J the noise-free image and n the noise, the restored image acquires an additional term n(x, y) / max(t(x, y), t_0); that is, the noise is amplified by a factor of up to 1/t. Considering that the noise is mainly contained in the detail-layer image Z_e, and taking its influence on the image into account, the image restored by the present invention is expressed as:

J(x, y) = (Z_b(x, y) − A_c) / t̃(x, y) + A_c + Z_e(x, y).

The atmospheric light component A is obtained in step 2.1 and the transmittance t in step 2.2; t̃(x, y) serves to reduce the influence of noise on the restored image and is expressed as:

t̃(x, y) = max(t(x, y), 1/η).

η is a constant; in this embodiment η = 6. Experiments show that t(x, y) ∈ [0, 1], and when t(x, y) < 1/η the point (x, y) is a pixel of the sky region, where t̃(x, y) = 1/η; this prevents the noise in sky regions from being amplified.
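A quick numeric check of this clamp with η = 6 (values illustrative):

```python
import numpy as np

# With eta = 6, the clamp acts wherever t < 1/6 ~= 0.167, i.e. sky regions:
t = np.array([0.05, 0.10, 0.30, 0.90])
print(1.0 / t)                          # raw noise amplification: [20.  10.  3.33  1.11]
print(1.0 / np.maximum(t, 1.0 / 6.0))   # after clamping:          [ 6.   6.  3.33  1.11]
```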

Specifically, after the mixed feature map F_L = σ(L + G) is convolved to obtain the preliminary atmospheric refractive-index feature map corresponding to the low-level features and the feature map is upsampled to output the transmittance image t, the method further comprises the following steps:

The loss function is constructed as follows:

L = L_r + w_c L_c

where L_r is the reconstruction loss function, L_c is the colour loss function, and w_c is the weight assigned to the colour loss function L_c; with N the number of pixels, L_r is expressed as

L_r = (1/N) Σ_{c∈{R,G,B}} Σ_{(x,y)} (J_c(x, y) − Z_c(x, y))²,

and L_c is expressed as

L_c = (1/N) Σ_{(x,y)} ∠(J(x, y), Z(x, y));

J is the defogged image, Z is the hazy image, c ∈ {R, G, B} is the channel index, and ∠(J(x, y), Z(x, y)) denotes the angle between the three-dimensional colour vectors of the hazy and defogged images at pixel (x, y). Although the reconstruction error can measure the similarity between the original image and the defogged image, it cannot ensure that the angles of their colour vectors agree; the colour error term is therefore added to ensure that the colour vector of each pixel keeps the same direction before and after defogging.

The beneficial effects of the image defogging method and device based on the combination of prior knowledge and deep learning provided by the present invention are as follows: the hazy image is decomposed by a weighted guided filter into a detail-layer image and a base-layer image; the base-layer image is then processed by the quadtree search and by a deep neural network to obtain the global atmospheric light component image and the transmittance image; finally, the hazy image is restored from the global atmospheric light component image, the transmittance image and the detail-layer image to obtain the defogged image. Because both the global atmospheric light component image and the transmittance image are estimated or computed from the base layer, amplification of image noise is avoided and hazy images can be dehazed effectively. The defogged image is used to compute the loss function for feedback parameter tuning, and a colour loss is added to the loss function, which improves the robustness of the algorithm.

In another embodiment of the present invention, an image defogging device based on the combination of prior knowledge and deep learning comprises a memory and a processor. The memory is configured to store a computer program. The processor is configured, when executing the computer program, to implement the image defogging method based on the combination of prior knowledge and deep learning described above.
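Tying the sketches from the preceding sections together, a minimal end-to-end routine could look as follows (again an illustration, assuming the wgif_decompose, estimate_airlight, TransmissionNet and recover sketches above and a float image in [0, 1]):

```python
import numpy as np
import torch

def dehaze(Z, net, eta=6.0):
    """End-to-end sketch: Z is an H x W x 3 float image in [0, 1]."""
    Z_b, Z_e = wgif_decompose(Z)                     # prior: base/detail split
    A = estimate_airlight(Z_b)                       # prior: quadtree airlight
    with torch.no_grad():                            # deep learning: transmittance
        x = torch.from_numpy(Z_b.transpose(2, 0, 1)).float()[None]
        t = net(x)[0, 0].numpy()
    return recover(Z_b, Z_e, A, t, eta)              # scattering-model recovery
```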

The reader should understand that, in the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example", "some examples" and the like means that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic use of these terms does not necessarily refer to the same embodiment or example, and the specific features, structures, materials or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. Those skilled in the art may combine the different embodiments or examples described in this specification and their features. Although embodiments of the present invention have been shown and described above, it should be understood that they are exemplary and are not to be construed as limiting the present invention; a person of ordinary skill in the art may change, modify, replace and vary the above embodiments within the scope of the present invention.

Claims (6)

1. An image defogging method based on the combination of prior knowledge and deep learning, characterized in that the method comprises:
decomposing an input hazy image Z with a weighted guided filter to obtain a detail-layer image Z_e and a base-layer image Z_b;
processing the base-layer image Z_b with a quadtree search to obtain a global atmospheric light component image A_c, where c ∈ {r, g, b};
constructing a deep convolutional neural network that processes the base-layer image Z_b to obtain a transmittance image t;
based on the atmospheric scattering model, restoring the hazy image Z from the global atmospheric light component image A_c, the transmittance image t and the detail-layer image Z_e to obtain a defogged image J;
wherein, based on the atmospheric scattering model, the defogged image J is given by:

J(x, y) = (Z_b(x, y) − A_c) / t̃(x, y) + A_c + Z_e(x, y);

where J(x, y) is the pixel grey value of the defogged image J at point (x, y); Z_e(x, y) is the pixel grey value of the detail-layer image Z_e at point (x, y); Z_b(x, y) is the pixel grey value of the base-layer image Z_b at point (x, y); t(x, y) is the transmittance of the transmittance image t at point (x, y); and t̃(x, y) = max(t(x, y), 1/η), where η is a constant greater than zero.
2. The image defogging method based on the combination of prior knowledge and deep learning according to claim 1, characterized in that decomposing the input hazy image Z with the weighted guided filter to obtain the base-layer image Z_b comprises the following steps:
obtaining the pixel grey value Z(x, y) at any point (x, y) of the hazy image Z;
obtaining the weighted filter coefficients a_p(x, y) and b_p(x, y) at point (x, y) of the hazy image Z from:

a_p(x, y) = σ²_{Z,ρ}(x, y) / (σ²_{Z,ρ}(x, y) + λ/Γ_Y(x, y)),
b_p(x, y) = (1 − a_p(x, y)) · μ_{Z,ρ}(x, y);

where μ_{Z,ρ}(x, y) is the mean, and σ²_{Z,ρ}(x, y) the variance, of the pixel grey values in a window of radius ρ centred at point (x, y) of the hazy image Z; Γ_Y(x, y) is the luminance component of the hazy image Z at point (x, y), where Y = max{Z_r, Z_g, Z_b} and Z_r, Z_g, Z_b are the R, G, B values of Z at point (x, y); and λ is a constant greater than 1;
obtaining the base-layer image Z_b from:

Z_b(x, y) = a_p(x, y) Z(x, y) + b_p(x, y);

where Z_b(x, y) is the pixel grey value of the base-layer image Z_b at point (x, y).
3. The image defogging method based on the combination of prior knowledge and deep learning according to claim 1, characterized in that processing the base-layer image Z_b with the quadtree search to obtain the global atmospheric light component image A_c comprises the following steps:
Step A: divide the base-layer image Z_b evenly into four rectangular regions Z_b-i (i ∈ {1, 2, 3, 4}), each with half the length and half the width of Z_b;
Step B: define the score of each rectangular region as the mean of its pixel grey values minus the standard deviation of its pixel grey values;
Step C: select the rectangular region with the highest score and divide it evenly into four rectangular regions again;
Step D: repeat steps B and C, iterating on the highest-scoring region n times, to obtain the final subdivided region Z_b-end;
obtain the atmospheric light component image A_c from:

|A_c (c ∈ {r, g, b})| = min |(Z_b-end-r(x, y), Z_b-end-g(x, y), Z_b-end-b(x, y)) − (255, 255, 255)|,

where (Z_b-end-r(x, y), Z_b-end-g(x, y), Z_b-end-b(x, y)) is the colour vector at point (x, y) of the final subdivided region Z_b-end and (255, 255, 255) is the pure-white vector.

4. The image defogging method based on the combination of prior knowledge and deep learning according to claim 1, characterized in that constructing the deep convolutional neural network that processes the base-layer image Z_b to obtain the transmittance image t comprises the following steps:
downsampling the base-layer image Z_b with a first convolutional stage to obtain a low-resolution base-layer image Z_b' and extracting low-level features, the downsampling performed by the i-th layer of the first convolutional stage being:

F_c^i[x', y'] = σ( b_c^i + Σ_{c'} Σ_{x,y} W_{c,c'}^i[x, y] · F_{c'}^{i−1}[s·x' + x, s·y' + y] );

where F_c^i is the low-level feature of the low-resolution base-layer image Z_b' at channel index c in the i-th layer of the first convolutional stage; x and y are the horizontal and vertical coordinates of the base-layer image Z_b, and x' and y' those of the low-resolution base-layer image Z_b'; W_{c,c'}^i is the array of convolution weights of the i-th layer under channel indices c and c'; b_c^i is the bias vector of the i-th layer at channel index c; σ(·) = max(·, 0) is the ReLU activation function, with zero padding used as the boundary condition of every layer of the first convolutional stage; and s is the stride of the convolution kernels of the first convolutional stage;
feeding the resulting low-level features into a second convolutional stage of n_L = 2 layers to extract local features L, the convolution kernels of the second stage being 3×3 with stride 1;
feeding the low-level features produced by the second convolutional stage into a third convolutional stage of n_G1 = 2 layers with 3×3 kernels and stride 2, followed by a fully connected stage of n_G2 = 2 layers, to obtain global features G;
summing the local features L and the global features G and passing the sum through the activation function, giving the mixed feature map F_L = σ(L + G) corresponding to the low-level features;
convolving the mixed feature map F_L = σ(L + G) to obtain a preliminary atmospheric refractive-index feature map corresponding to the low-level features, and upsampling it to output the transmittance image t.
5. The image defogging method based on the combination of prior knowledge and deep learning according to claim 4, characterized in that, after the mixed feature map F_L = σ(L + G) is convolved to obtain the preliminary atmospheric refractive-index feature map corresponding to the low-level features and the feature map is upsampled to output the transmittance image t, the method further comprises the following steps:
constructing a loss function as:

L = L_r + w_c L_c,

where L_r is the reconstruction loss function, L_c is the colour loss function, and w_c is the weight assigned to the colour loss function L_c; with N the number of pixels, L_r is expressed as

L_r = (1/N) Σ_{c∈{R,G,B}} Σ_{(x,y)} (J_c(x, y) − Z_c(x, y))²,

and L_c is expressed as

L_c = (1/N) Σ_{(x,y)} ∠(J(x, y), Z(x, y)),

J being the defogged image, Z the hazy image, c ∈ {R, G, B} the channel index, and ∠(J(x, y), Z(x, y)) the angle between the three-dimensional colour vectors of the hazy and defogged images at pixel (x, y);
tuning the parameters of the deep convolutional neural network with the loss function.
6. An image defogging device based on the combination of prior knowledge and deep learning, characterized in that the device comprises a memory and a processor;
the memory being configured to store a computer program;
the processor being configured, when executing the computer program, to implement the image defogging method based on the combination of prior knowledge and deep learning according to any one of claims 1 to 5.
Application CN201911226437.6A (priority date 2019-12-04; filing date 2019-12-04) — Image defogging method and device based on combination of priori knowledge and deep learning — Expired - Fee Related — granted as CN111161159B

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911226437.6A 2019-12-04 2019-12-04 Image defogging method and device based on combination of priori knowledge and deep learning

Publications (2)

Publication Number Publication Date
CN111161159A CN111161159A (en) 2020-05-15
CN111161159B true CN111161159B (en) 2023-04-18

Family

ID=70556359

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911226437.6A Expired - Fee Related CN111161159B (en) 2019-12-04 2019-12-04 Image defogging method and device based on combination of priori knowledge and deep learning

Country Status (1)

Country Link
CN (1) CN111161159B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111861939B (en) * 2020-07-30 2022-04-29 四川大学 Single image defogging method based on unsupervised learning
CN111932365B (en) * 2020-08-11 2021-09-10 上海华瑞银行股份有限公司 Financial credit investigation system and method based on block chain
CN114331874B (en) * 2021-12-07 2024-12-06 西安邮电大学 Method and device for defogging UAV aerial images based on residual detail enhancement


Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US9754356B2 (en) * 2013-04-12 2017-09-05 Agency For Science, Technology And Research Method and system for processing an input image based on a guidance image and weights determined thereform
US20180122051A1 (en) * 2015-03-30 2018-05-03 Agency For Science, Technology And Research Method and device for image haze removal
KR102461144B1 (en) * 2015-10-16 2022-10-31 삼성전자주식회사 Image haze removing device

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
CN107749052A (en) * 2017-10-24 2018-03-02 中国科学院长春光学精密机械与物理研究所 Image defogging method and system based on deep learning neutral net

Non-Patent Citations (3)

Title
余春艳; 林晖翔; 徐小丹; 叶鑫焱. 雾天退化模型参数估计与CUDA设计. 计算机辅助设计与图形学学报, 2018(02). *
谢伟; 周玉钦; 游敏. 融合梯度信息的改进引导滤波. 中国图象图形学报, 2016(09). *
陈永; 郭红光; 艾亚鹏. 基于双域分解的多尺度深度学习单幅图像去雾. 光学学报, 2019(02). *

Also Published As

Publication number Publication date
CN111161159A (en) 2020-05-15


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20230418