CN117853371B - Multi-branch frequency domain enhanced real image defogging method, system and terminal - Google Patents
- Publication number
- CN117853371B (application number CN202410252937.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- frequency domain
- residual
- enhancement
- spectrum
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
- G06V10/451—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biodiversity & Conservation Biology (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Image Processing (AREA)
Abstract
The present invention provides a multi-branch frequency-domain-enhanced real image defogging method, system and terminal. The method comprises: inputting a sample enhancement set into an image defogging model to perform feature extraction with a multi-branch network to obtain sample image features, and performing semantic segmentation, residual attention processing and feature fusion on the sample image features to obtain fusion features; performing a mixed skip connection on the residual output images to obtain a residual image, performing image restoration and frequency-domain enhancement on the residual image and the fusion features to obtain a residual enhanced image and a fusion enhanced image, and fusing the residual enhanced image and the fusion enhanced image to obtain a defogged image; determining a model loss according to the defogged image and updating the parameters of the image defogging model according to the model loss; and inputting an image to be defogged into the image defogging model for defogging processing to obtain a defogged output image. The embodiments of the present invention can effectively use frequency-domain information to refine the defogging effect in the defogged image, thereby improving the accuracy of image defogging.
Description
Technical Field
The present invention relates to the field of image processing technology, and in particular to a multi-branch frequency-domain-enhanced real image defogging method, system and terminal.
Background Art
Image dehazing is a low-level vision task that usually serves as a preprocessing step to improve the performance of high-level vision tasks such as crowd counting, object detection or image segmentation. Generally speaking, high-level vision tasks require clear, haze-free images. The image dehazing problem has therefore attracted increasing attention from both academia and industry.

In the existing image dehazing process, the mapping between haze-free and hazy images is generally learned based only on the image feature information of the sample data set, and haze-free images are generated from that mapping. However, relying only on image feature information leads to low-quality generated haze-free images and reduces the accuracy of image dehazing.
Summary of the Invention
The purpose of the embodiments of the present invention is to provide a multi-branch frequency-domain-enhanced real image defogging method, system and terminal, aiming to solve the problem of the low accuracy of existing image defogging.

An embodiment of the present invention is implemented as a multi-branch frequency-domain-enhanced real image defogging method, the method comprising:

acquiring a sample data set, and performing data enhancement on the sample data set to obtain a sample enhancement set;

inputting the sample enhancement set into an image defogging model to perform feature extraction with a multi-branch network to obtain sample image features, and performing semantic segmentation on the sample image features of each branch network to obtain semantic segmentation features;

performing residual attention processing on the semantic segmentation features of each branch network to obtain mask feature maps and residual output images, and fusing the mask feature maps of the branch networks to obtain fusion features;

performing a mixed skip connection on the residual output images of the branch networks to obtain a residual image, and performing image restoration on the residual image and the fusion features respectively to obtain a residual restored image and a fusion restored image;

performing frequency-domain enhancement on the residual restored image and the fusion restored image to obtain a residual enhanced image and a fusion enhanced image, and fusing the residual enhanced image and the fusion enhanced image to obtain a defogged image;

determining a model loss according to the defogged image and the real image of the sample data set, and updating the parameters of the image defogging model according to the model loss until the image defogging model converges;

inputting an image to be defogged into the converged image defogging model for defogging processing to obtain a defogged output image.
Another object of an embodiment of the present invention is to provide a multi-branch frequency-domain-enhanced real image defogging system, the system comprising:

a data enhancement module, configured to obtain a sample data set and perform data enhancement on the sample data set to obtain a sample enhancement set;

a feature extraction module, configured to input the sample enhancement set into an image defogging model to perform feature extraction with a multi-branch network to obtain sample image features, and to perform semantic segmentation on the sample image features of each branch network to obtain semantic segmentation features;

to perform residual attention processing on the semantic segmentation features of each branch network to obtain mask feature maps and residual output images, and to fuse the mask feature maps of the branch networks to obtain fusion features;

and to perform a mixed skip connection on the residual output images of the branch networks to obtain a residual image, and to perform image restoration on the residual image and the fusion features respectively to obtain a residual restored image and a fusion restored image;

a frequency-domain enhancement module, configured to perform frequency-domain enhancement on the residual restored image and the fusion restored image to obtain a residual enhanced image and a fusion enhanced image, and to fuse the residual enhanced image and the fusion enhanced image to obtain a defogged image;

a parameter updating module, configured to determine a model loss according to the defogged image and the real image of the sample data set, and to update the parameters of the image defogging model according to the model loss until the image defogging model converges;

a defogging processing module, configured to input an image to be defogged into the converged image defogging model for defogging processing to obtain a defogged output image.
Another object of an embodiment of the present invention is to provide a terminal device, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the above method when executing the computer program.

Another object of an embodiment of the present invention is to provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above method.
In the embodiments of the present invention, performing data enhancement on the sample data set effectively improves the diversity of the sample data; inputting the sample enhancement set into the image defogging model for multi-branch feature extraction extracts the sample image features in a multi-channel manner; and performing frequency-domain enhancement on the residual restored image and the fusion restored image effectively uses frequency-domain information to refine the defogging effect in both images, thereby improving the quality of the defogged image and the accuracy of image defogging.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a flowchart of the multi-branch frequency-domain-enhanced real image defogging method provided by the first embodiment of the present invention;

FIG. 2 is a schematic structural diagram of the multi-branch frequency-domain-enhanced real image defogging system provided by the second embodiment of the present invention;

FIG. 3 is a schematic structural diagram of the terminal device provided by the third embodiment of the present invention.
Detailed Description of the Embodiments
In order to make the purpose, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only used to explain the present invention and are not intended to limit it.

In order to illustrate the technical solutions of the present invention, specific embodiments are described below.
Embodiment 1
Please refer to FIG. 1, which is a flowchart of the multi-branch frequency-domain-enhanced real image defogging method provided by the first embodiment of the present invention. The method can be applied to any system and includes the following steps:

Step S10, acquiring a sample data set, and performing data enhancement on the sample data set to obtain a sample enhancement set.

Performing data enhancement on the sample data set effectively increases the amount of data, so that the trained model can adapt flexibly to real-world images and achieves better model performance.
In this step, the data enhancement of the sample data set uses the atmospheric scattering model:

$$I_i(x) = J_i(x)\,t(x) + A\,\big(1 - t(x)\big), \qquad t(x) = e^{-\beta d(x)}$$

where $I_i$ is the sample enhancement set, $J_i$ is the $i$-th real haze-free image in the sample data set, $A$ is the global atmospheric light, $t$ is the medium transmittance map, $d$ is the scene depth, and $\beta$ is the haze density. In this step, the global atmospheric light and the haze density can be numerically perturbed based on a depth estimator; preferably, the sampling range of the global atmospheric light is widened from (0.7, 1.0) to (0.5, 1.8), and that of the haze density from (0.6, 1.8) to (0.8, 2.8). Amplifying the global atmospheric light and the haze density effectively covers more challenging real-world conditions, such as severe haze, in which dense fog causes heavy information loss and the supervision signal available in the model input is significantly reduced; the amplification therefore helps to enhance the generalization ability of the model.
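As a concrete illustration of this synthesis step, the sketch below applies the atmospheric scattering model with the widened perturbation ranges described above. It is a minimal example under stated assumptions rather than the patented implementation; the depth estimator is taken as given, and the function and variable names (`synthesize_hazy`, `clear`, `depth`) are illustrative.

```python
import numpy as np

def synthesize_hazy(clear: np.ndarray, depth: np.ndarray, rng=None) -> np.ndarray:
    """Apply I = J*t + A*(1 - t), t = exp(-beta*d), with perturbed A and beta.

    clear: haze-free image J, float in [0, 1], shape (H, W, 3)
    depth: scene depth map d normalized to [0, 1], shape (H, W)
    """
    if rng is None:
        rng = np.random.default_rng()
    A = rng.uniform(0.5, 1.8)        # widened global atmospheric light range
    beta = rng.uniform(0.8, 2.8)     # widened haze density range
    t = np.exp(-beta * depth)[..., None]   # medium transmittance map
    hazy = clear * t + A * (1.0 - t)       # atmospheric scattering model
    return np.clip(hazy, 0.0, 1.0)
```

Drawing a fresh $A$ and $\beta$ for every image in the sample data set yields the sample enhancement set.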
The haze effect in the frequency domain is usually regarded as a spatially static signal composed of low-frequency components. This embodiment therefore exchanges the low-frequency spectra of two images, which changes the haze pattern and transfers the image style without affecting high-level semantic perception: the haze patterns are diversified while the understanding of the scene remains unchanged. In addition, the exclusive dependence of the haze image on scene depth is broken up, producing more non-uniform haze patterns and further improving the diversity of the data set.
The low-frequency exchange between two hazy images is

$$I_{a \to b} = \mathcal{F}^{-1}\Big( \big(M \odot \mathcal{A}(\mathcal{F}(I_b)) + (1 - M) \odot \mathcal{A}(\mathcal{F}(I_a))\big) \circ \mathcal{P}(\mathcal{F}(I_a)) \Big)$$

where $I_{a \to b}$ is the augmented image generated from the hazy image $I_a$ and the hazy image $I_b$, $I_a$ and $I_b$ are hazy images synthesized from the real haze-free images $J_a$ and $J_b$, $J_a$ and $J_b$ are the $a$-th and $b$-th real haze-free images in the sample data set, $\mathcal{A}(\cdot)$ is the amplitude of the Fourier transform, $\mathcal{P}(\cdot)$ is the phase component of the Fourier transform, $M$ is the image mask, and $\circ$ denotes the composition of functions (recombining the exchanged amplitude with the original phase before the inverse transform).

The image mask is

$$M_\rho(h, w) = \mathbb{1}_{(h, w) \in [-\rho H : \rho H,\; -\rho W : \rho W]}$$

where $H$ and $W$ are the height and width of the image, $h$ and $w$ are the frequency coordinates centered on the spectrum origin, $\rho$ is a ratio used to control the transfer range of the low-frequency components, and $\mathbb{1}$ denotes the indicator function.

In this way the range of the low-frequency components (i.e., the haze pattern) of $I_a$ is replaced by that of $I_b$; only the amplitude is perturbed, so the semantic information in $I_a$, carried mainly by the phase, is preserved. To mitigate the negative effect of the hard thresholding introduced by $M_\rho$, a Gaussian filter can be applied to smooth the image.
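A compact sketch of this frequency-domain style exchange follows, assuming the reading of the formula above in which the low-frequency amplitude of one hazy image is swapped in while the phase is kept. The function name, the default ratio `rho`, and the centered-mask construction are assumptions for illustration.

```python
import numpy as np

def swap_low_freq(img_a: np.ndarray, img_b: np.ndarray, rho: float = 0.1) -> np.ndarray:
    """Replace the low-frequency amplitude of img_a with that of img_b, keeping img_a's phase."""
    fa = np.fft.fftshift(np.fft.fft2(img_a, axes=(0, 1)), axes=(0, 1))
    fb = np.fft.fftshift(np.fft.fft2(img_b, axes=(0, 1)), axes=(0, 1))
    amp_a, pha_a = np.abs(fa), np.angle(fa)
    amp_b = np.abs(fb)

    h, w = img_a.shape[:2]
    ch, cw = h // 2, w // 2
    rh, rw = int(rho * h), int(rho * w)
    mask = np.zeros((h, w), dtype=bool)
    mask[ch - rh:ch + rh, cw - rw:cw + rw] = True    # indicator mask over the low frequencies
    if img_a.ndim == 3:
        mask = mask[..., None]

    amp_mix = np.where(mask, amp_b, amp_a)           # exchange low-frequency amplitude
    f_mix = amp_mix * np.exp(1j * pha_a)             # recombine with the original phase
    out = np.fft.ifft2(np.fft.ifftshift(f_mix, axes=(0, 1)), axes=(0, 1)).real
    return np.clip(out, 0.0, 1.0)
```

A Gaussian blur of the binary mask (or of the output image) can be used to soften the hard threshold, as noted above.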
In this embodiment, performing data enhancement on the sample data set effectively narrows the distribution gap between real data and synthetic data, thereby improving the defogging performance of the model in real scenes.
Step S20, inputting the sample enhancement set into the image defogging model to perform feature extraction with the multi-branch network to obtain sample image features, and performing semantic segmentation on the sample image features of each branch network to obtain semantic segmentation features.

The image defogging model contains multiple branch networks. Each branch network extracts sample image features (shallow features) through two 3×3 convolutions and a ReLU layer. Every branch then uses a 3-level U-Net architecture, with 3 blocks at each scale, to perform semantic segmentation on the sample image features; different attention blocks can be used to replace the plain convolutions in each branch.
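For illustration, the shallow stem of one branch described here (two 3×3 convolutions and a ReLU) could be written as the following PyTorch sketch; the class name, the channel width, and the fact that its output then feeds a 3-level U-Net body are assumptions drawn from the description rather than the patented code.

```python
import torch
import torch.nn as nn

class BranchStem(nn.Module):
    """Shallow feature extractor of one branch: two 3x3 convolutions with a ReLU in between."""
    def __init__(self, in_ch: int = 3, feat_ch: int = 32):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Shallow sample image features; each branch then applies its 3-level U-Net for segmentation.
        return self.stem(x)
```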
Step S30, performing residual attention processing on the semantic segmentation features of each branch network to obtain mask feature maps and residual output images, and fusing the mask feature maps of the branch networks to obtain fusion features.

Each branch network is equipped with a residual attention module (RAM). The RAM produces two outputs, a mask feature map and a residual output image: the residual output image is the degraded output produced by a 3×3 convolution in the branch network, and the mask feature map is generated from the mask obtained by passing the residual output image through a sigmoid function.
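A hedged PyTorch reading of this residual attention module is sketched below: a 3×3 convolution produces the residual output image, its sigmoid gives a mask, and the mask gates the branch features to form the mask feature map. The text does not spell out how the mask is combined with the features, so the element-wise gating used here is an assumption.

```python
import torch
import torch.nn as nn

class ResidualAttentionModule(nn.Module):
    """RAM sketch: returns (mask feature map, residual output image)."""
    def __init__(self, feat_ch: int = 32, img_ch: int = 3):
        super().__init__()
        self.to_residual = nn.Conv2d(feat_ch, img_ch, kernel_size=3, padding=1)

    def forward(self, feat: torch.Tensor):
        residual_img = self.to_residual(feat)       # residual output image via 3x3 convolution
        mask = torch.sigmoid(residual_img)          # mask obtained from the residual output image
        # Assumed combination: broadcast the per-pixel mask over the feature channels.
        mask_feat = feat * mask.mean(dim=1, keepdim=True)
        return mask_feat, residual_img
```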
Step S40, performing a mixed skip connection on the residual output images of the branch networks to obtain a residual image, and performing image restoration on the residual image and the fusion features respectively to obtain a residual restored image and a fusion restored image.

The residual output images of the branch networks are combined by a mixed skip connection (MSC) to obtain the residual image; the residual image and the fusion features are then each restored to an image by a 3×3 convolution, yielding the residual restored image and the fusion restored image.
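The patent names the mixed skip connection (MSC) without detailing its internals, so the sketch below is only one plausible reading: the branch residual output images are concatenated and fused by a convolution, and separate 3×3 convolutions then perform the image restoration of the residual image and of the fusion features. Treat the whole structure as an assumption.

```python
import torch
import torch.nn as nn

class MixedSkipConnection(nn.Module):
    """Assumed MSC: concatenate the branch residual output images and fuse them with one conv."""
    def __init__(self, num_branches: int = 3, img_ch: int = 3):
        super().__init__()
        self.fuse = nn.Conv2d(num_branches * img_ch, img_ch, kernel_size=3, padding=1)

    def forward(self, residual_imgs):
        return self.fuse(torch.cat(residual_imgs, dim=1))    # residual image

# Image restoration heads: a 3x3 convolution on the residual image and on the fusion features.
restore_residual = nn.Conv2d(3, 3, kernel_size=3, padding=1)
restore_fusion = nn.Conv2d(32, 3, kernel_size=3, padding=1)
```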
Step S50, performing frequency-domain enhancement on the residual restored image and the fusion restored image to obtain a residual enhanced image and a fusion enhanced image, and fusing the residual enhanced image and the fusion enhanced image to obtain a defogged image.

Optionally, performing frequency-domain enhancement on the residual restored image and the fusion restored image includes:

acquiring the spatial-domain features of the residual restored image and the fusion restored image, and determining frequency-domain features from the spatial-domain features;

determining an amplitude spectrum and a phase spectrum from the frequency-domain features, and repairing the amplitude spectrum to obtain an amplitude repair spectrum;

determining a residual amplitude from the amplitude repair spectrum and the amplitude spectrum, and determining an attention map from the residual amplitude;

performing a phase change on the phase spectrum according to the attention map to obtain a phase change spectrum, and determining a frequency-domain enhanced real part and a frequency-domain enhanced imaginary part from the phase change spectrum and the amplitude repair spectrum;

determining frequency-domain enhancement features from the frequency-domain enhanced real part and the frequency-domain enhanced imaginary part, and determining spatial-domain enhancement features from the frequency-domain enhancement features;

performing feature enhancement on the residual restored image and the fusion restored image according to the spatial-domain enhancement features to obtain the residual enhanced image and the fusion enhanced image.
The frequency-domain features are determined from the spatial-domain features by the two-dimensional discrete Fourier transform:

$$F(u, v) = \sum_{h=0}^{H-1} \sum_{w=0}^{W-1} x(h, w)\, e^{-j 2\pi \left( \frac{u h}{H} + \frac{v w}{W} \right)}$$

where $x$ is the spatial-domain feature of the residual restored image and the fusion restored image, $(u, v)$ are the coordinates of the residual restored image and the fusion restored image in the frequency domain, $u$ is the horizontal frequency component of the image, $v$ is the vertical frequency component of the image, $H$ and $W$ are the height and width of the image, $h$ and $w$ are the vertical and horizontal coordinates of the image, and $F(u, v)$ is the frequency-domain feature of the residual restored image and the fusion restored image.
The amplitude spectrum and the phase spectrum are determined from the frequency-domain features. Writing

$$F(u, v) = R(u, v) + j\, I(u, v)$$

where $R(u, v)$ is the real part of the frequency-domain feature and $I(u, v)$ is the imaginary part, the amplitude spectrum and the phase spectrum are

$$\mathcal{A}(u, v) = \sqrt{R^2(u, v) + I^2(u, v)}, \qquad \mathcal{P}(u, v) = \arctan\!\left( \frac{I(u, v)}{R(u, v)} \right)$$

where $\mathcal{A}(u, v)$ is the amplitude spectrum and $\mathcal{P}(u, v)$ is the phase spectrum.
The amplitude spectrum is repaired to obtain the amplitude repair spectrum, and the residual amplitude is determined from the amplitude repair spectrum and the amplitude spectrum:

$$\hat{\mathcal{A}}(u, v) = k_{1 \times 1} \circledast \mathcal{A}(u, v), \qquad \Delta\mathcal{A}(u, v) = \hat{\mathcal{A}}(u, v) - \mathcal{A}(u, v)$$

Because of sampling, signal corruption and other factors, the amplitude may be severely distorted, so a 1×1 convolution is used to repair it. Here $\hat{\mathcal{A}}$ is the amplitude repair spectrum, $\Delta\mathcal{A}$ is the residual amplitude between the amplitude repair spectrum and the amplitude spectrum, $\circledast$ is the convolution operator, and $k_{1 \times 1}$ is a filter with a 1×1-pixel kernel.
The attention map is determined from the residual amplitude, and the phase spectrum is phase-changed according to the attention map:

$$M = \mathrm{GAP}(\Delta\mathcal{A}), \qquad \mathcal{P}'(u, v) = M \odot \mathcal{P}(u, v)$$

where $M$ is the attention map, $\mathrm{GAP}$ is global average pooling, $\mathcal{P}'$ is the phase change spectrum, and $\odot$ is the element-wise product.
The frequency-domain enhanced real part and imaginary part are determined from the phase change spectrum and the amplitude repair spectrum:

$$R'(u, v) = \hat{\mathcal{A}}(u, v)\cos\big(\mathcal{P}'(u, v)\big), \qquad I'(u, v) = \hat{\mathcal{A}}(u, v)\sin\big(\mathcal{P}'(u, v)\big)$$

where $R'(u, v)$ is the frequency-domain enhanced real part and $I'(u, v)$ is the frequency-domain enhanced imaginary part.
The frequency-domain enhancement features are determined from the frequency-domain enhanced real part and imaginary part:

$$F'(u, v) = R'(u, v) + j\, I'(u, v)$$

where $F'(u, v)$ is the frequency-domain enhancement feature of the residual enhanced image and the fusion enhanced image. In this step, the inverse Fourier transform is applied to transform the frequency-domain enhancement features into spatial-domain enhancement features.
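The frequency-domain enhancement described by the formulas above can be prototyped as a single PyTorch module, shown below. It is a sketch under the assumptions already noted (a 1×1 convolution repairs the amplitude, global average pooling of the residual amplitude scales the phase); how its output is fused back into the restored images is not specified here, and the module and tensor names are illustrative.

```python
import torch
import torch.nn as nn

class FrequencyEnhancement(nn.Module):
    """Amplitude repair and phase attention in the frequency domain, followed by the inverse FFT."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.amp_repair = nn.Conv2d(channels, channels, kernel_size=1)  # 1x1 amplitude repair

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        freq = torch.fft.fft2(x, norm="ortho")            # frequency-domain features F(u, v)
        amp, pha = torch.abs(freq), torch.angle(freq)     # amplitude and phase spectra
        amp_hat = self.amp_repair(amp)                    # amplitude repair spectrum
        delta = amp_hat - amp                             # residual amplitude
        attn = delta.mean(dim=(-2, -1), keepdim=True)     # attention map via global average pooling
        pha_new = attn * pha                              # phase change spectrum
        real = amp_hat * torch.cos(pha_new)               # frequency-domain enhanced real part
        imag = amp_hat * torch.sin(pha_new)               # frequency-domain enhanced imaginary part
        enhanced = torch.complex(real, imag)              # frequency-domain enhancement features
        return torch.fft.ifft2(enhanced, norm="ortho").real  # spatial-domain enhancement features
```

In the method, such a block would be applied to the residual restored image and to the fusion restored image to obtain the residual enhanced image and the fusion enhanced image.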
Step S60, determining a model loss according to the defogged image and the real image of the sample data set, and updating the parameters of the image defogging model according to the model loss until the image defogging model converges.

Optionally, the model loss is determined from the defogged image and the real image of the sample data set as follows. Here $X$ is the defogged image, $Y$ is the real image, $\mathcal{L}$ is the model loss, $\mathcal{L}_1$ is a loss composed of the peak signal-to-noise ratio and the structural similarity index, $\mathcal{L}_2$ is an edge loss computed with the Laplacian operator $\Delta$, and the terms are combined using preset constants (denoted here $\lambda_1$, $\lambda_2$ and $\lambda_3$). The peak signal-to-noise ratio and the structural similarity are

$$\mathrm{PSNR} = 10 \log_{10}\!\left( \frac{\mathrm{MAX}_Y^{2}}{\mathrm{MSE}} \right), \qquad \mathrm{SSIM}(X, Y) = \big[l(X, Y)\big]^{\alpha}\,\big[c(X, Y)\big]^{\beta}\,\big[s(X, Y)\big]^{\gamma}$$

where $\mathrm{MAX}_Y$ is the maximum pixel value the image can take, $\mathrm{MSE}$ is the mean square error between the defogged image and the corresponding real image, $l$, $c$ and $s$ are the luminance, contrast and structure comparisons between the real image and the defogged image, and $\alpha$, $\beta$ and $\gamma$ are hyperparameters. Preferably, the preset constants can be set to 0.005, 0.05 and $10^{-3}$, and the remaining hyperparameters are generally set to 1. SSIM measures image similarity in terms of luminance, contrast and structure.
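One way such a composite loss could be assembled from the quantities defined above is sketched below: a PSNR term, an SSIM term, and a Laplacian-based edge term. The exact weighting, the Charbonnier-style epsilon, and the sign conventions are assumptions; `ssim_fn` stands for any differentiable SSIM implementation returning a value in [0, 1].

```python
import torch
import torch.nn.functional as F

# 3x3 Laplacian kernel, applied per channel to extract edges.
_LAPLACIAN = torch.tensor([[0., 1., 0.],
                           [1., -4., 1.],
                           [0., 1., 0.]]).view(1, 1, 3, 3)

def laplacian(img: torch.Tensor) -> torch.Tensor:
    k = _LAPLACIAN.to(img.device, img.dtype).repeat(img.shape[1], 1, 1, 1)
    return F.conv2d(img, k, padding=1, groups=img.shape[1])

def dehaze_loss(x, y, ssim_fn, w_psnr=0.005, w_edge=0.05, eps=1e-3):
    """Composite loss: PSNR/SSIM term plus a Laplacian edge term (weights are assumptions)."""
    mse = F.mse_loss(x, y)
    psnr = 10.0 * torch.log10(1.0 / (mse + 1e-12))         # MAX_Y = 1 for images in [0, 1]
    l_ps = -w_psnr * psnr + (1.0 - ssim_fn(x, y))          # encourage high PSNR and SSIM
    l_edge = torch.sqrt(F.mse_loss(laplacian(x), laplacian(y)) + eps ** 2)
    return l_ps + w_edge * l_edge
```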
Step S70, inputting the image to be defogged into the converged image defogging model for defogging processing to obtain a defogged output image.

In this embodiment, data enhancement of the sample data set effectively improves the diversity of the sample data; feeding the sample enhancement set into the image defogging model for multi-branch feature extraction extracts the sample image features in a multi-channel manner; and frequency-domain enhancement of the residual restored image and the fusion restored image effectively uses frequency-domain information to refine the defogging effect in both images, improving the quality of the defogged image and the accuracy of image defogging.
Embodiment 2
Please refer to FIG. 2, which is a schematic structural diagram of the multi-branch frequency-domain-enhanced real image defogging system 100 provided by the second embodiment of the present invention, the system comprising:

A data enhancement module 10, configured to obtain a sample data set and perform data enhancement on the sample data set to obtain a sample enhancement set.
Optionally, the data enhancement of the sample data set uses the atmospheric scattering model

$$I_i(x) = J_i(x)\,t(x) + A\,\big(1 - t(x)\big), \qquad t(x) = e^{-\beta d(x)}$$

where $I_i$ is the sample enhancement set, $J_i$ is the $i$-th real haze-free image in the sample data set, $A$ is the global atmospheric light, $t$ is the medium transmittance map, $d$ is the scene depth, and $\beta$ is the haze density, together with the frequency-domain exchange

$$I_{a \to b} = \mathcal{F}^{-1}\Big( \big(M \odot \mathcal{A}(\mathcal{F}(I_b)) + (1 - M) \odot \mathcal{A}(\mathcal{F}(I_a))\big) \circ \mathcal{P}(\mathcal{F}(I_a)) \Big)$$

where $I_{a \to b}$ is the augmented image generated from $I_a$ and $I_b$, $J_a$ and $J_b$ are the $a$-th and $b$-th real haze-free images in the sample data set, $I_a$ and $I_b$ are hazy images synthesized from $J_a$ and $J_b$, $\mathcal{A}(\cdot)$ is the amplitude of the Fourier transform, $\mathcal{P}(\cdot)$ is the phase component of the Fourier transform, $M$ is the image mask, and $\circ$ denotes the composition of functions.
A feature extraction module 11, configured to input the sample enhancement set into the image defogging model to perform feature extraction with the multi-branch network to obtain sample image features, and to perform semantic segmentation on the sample image features of each branch network to obtain semantic segmentation features;

to perform residual attention processing on the semantic segmentation features of each branch network to obtain mask feature maps and residual output images, and to fuse the mask feature maps of the branch networks to obtain fusion features;

and to perform a mixed skip connection on the residual output images of the branch networks to obtain a residual image, and to perform image restoration on the residual image and the fusion features respectively to obtain a residual restored image and a fusion restored image.

A frequency-domain enhancement module 12, configured to perform frequency-domain enhancement on the residual restored image and the fusion restored image to obtain a residual enhanced image and a fusion enhanced image, and to fuse the residual enhanced image and the fusion enhanced image to obtain a defogged image.
Optionally, the frequency-domain enhancement module 12 is further configured to: acquire the spatial-domain features of the residual restored image and the fusion restored image, and determine frequency-domain features from the spatial-domain features;

determine an amplitude spectrum and a phase spectrum from the frequency-domain features, and repair the amplitude spectrum to obtain an amplitude repair spectrum;

determine a residual amplitude from the amplitude repair spectrum and the amplitude spectrum, and determine an attention map from the residual amplitude;

perform a phase change on the phase spectrum according to the attention map to obtain a phase change spectrum, and determine a frequency-domain enhanced real part and a frequency-domain enhanced imaginary part from the phase change spectrum and the amplitude repair spectrum;

determine frequency-domain enhancement features from the frequency-domain enhanced real part and the frequency-domain enhanced imaginary part, and determine spatial-domain enhancement features from the frequency-domain enhancement features;

and perform feature enhancement on the residual restored image and the fusion restored image according to the spatial-domain enhancement features to obtain the residual enhanced image and the fusion enhanced image.
Further, the frequency-domain features are determined from the spatial-domain features by

$$F(u, v) = \sum_{h=0}^{H-1} \sum_{w=0}^{W-1} x(h, w)\, e^{-j 2\pi \left( \frac{u h}{H} + \frac{v w}{W} \right)}$$

where $x$ is the spatial-domain feature of the residual restored image and the fusion restored image, $(u, v)$ are their coordinates in the frequency domain, $u$ and $v$ are the horizontal and vertical frequency components, $H$ and $W$ are the height and width of the image, $h$ and $w$ are the vertical and horizontal image coordinates, and $F(u, v)$ is the frequency-domain feature.

The amplitude spectrum and the phase spectrum are determined from the frequency-domain features $F(u, v) = R(u, v) + j\,I(u, v)$, where $R(u, v)$ is the real part of the frequency-domain feature and $I(u, v)$ is the imaginary part, as

$$\mathcal{A}(u, v) = \sqrt{R^2(u, v) + I^2(u, v)}, \qquad \mathcal{P}(u, v) = \arctan\!\left( \frac{I(u, v)}{R(u, v)} \right)$$

where $\mathcal{A}(u, v)$ is the amplitude spectrum and $\mathcal{P}(u, v)$ is the phase spectrum.

The amplitude spectrum is repaired to obtain the amplitude repair spectrum, and the residual amplitude is determined from the amplitude repair spectrum and the amplitude spectrum:

$$\hat{\mathcal{A}}(u, v) = k_{1 \times 1} \circledast \mathcal{A}(u, v), \qquad \Delta\mathcal{A}(u, v) = \hat{\mathcal{A}}(u, v) - \mathcal{A}(u, v)$$

where $\hat{\mathcal{A}}$ is the amplitude repair spectrum, $\Delta\mathcal{A}$ is the residual amplitude between the amplitude repair spectrum and the amplitude spectrum, $\circledast$ is the convolution operator, and $k_{1 \times 1}$ is the filter.

The attention map is determined from the residual amplitude, and the phase spectrum is phase-changed according to the attention map:

$$M = \mathrm{GAP}(\Delta\mathcal{A}), \qquad \mathcal{P}'(u, v) = M \odot \mathcal{P}(u, v)$$

where $M$ is the attention map, $\mathrm{GAP}$ is global average pooling, $\mathcal{P}'$ is the phase change spectrum, and $\odot$ is the element-wise product.

The frequency-domain enhanced real part and imaginary part are determined from the phase change spectrum and the amplitude repair spectrum:

$$R'(u, v) = \hat{\mathcal{A}}(u, v)\cos\big(\mathcal{P}'(u, v)\big), \qquad I'(u, v) = \hat{\mathcal{A}}(u, v)\sin\big(\mathcal{P}'(u, v)\big)$$

where $R'(u, v)$ is the frequency-domain enhanced real part and $I'(u, v)$ is the frequency-domain enhanced imaginary part.

The frequency-domain enhancement features are determined from the frequency-domain enhanced real part and imaginary part as

$$F'(u, v) = R'(u, v) + j\, I'(u, v)$$

where $F'(u, v)$ is the frequency-domain enhancement feature of the residual enhanced image and the fusion enhanced image.
A parameter updating module 13, configured to determine a model loss according to the defogged image and the real image of the sample data set, and to update the parameters of the image defogging model according to the model loss until the image defogging model converges.

Optionally, the model loss is determined from the defogged image $X$ and the real image $Y$ as a combination, with preset constants, of a loss $\mathcal{L}_1$ composed of the peak signal-to-noise ratio and the structural similarity index and an edge loss $\mathcal{L}_2$ computed with the Laplacian operator $\Delta$, where

$$\mathrm{PSNR} = 10 \log_{10}\!\left( \frac{\mathrm{MAX}_Y^{2}}{\mathrm{MSE}} \right), \qquad \mathrm{SSIM}(X, Y) = \big[l(X, Y)\big]^{\alpha}\,\big[c(X, Y)\big]^{\beta}\,\big[s(X, Y)\big]^{\gamma}$$

$\mathrm{MAX}_Y$ is the maximum pixel value the image can take, $\mathrm{MSE}$ is the mean square error between the defogged image and the corresponding real image, $l$, $c$ and $s$ are the luminance, contrast and structure comparisons between the real image and the defogged image, and $\alpha$, $\beta$ and $\gamma$ are hyperparameters.
A defogging processing module 14, configured to input the image to be defogged into the converged image defogging model for defogging processing to obtain a defogged output image.

In this embodiment, data enhancement of the sample data set effectively improves the diversity of the sample data; feeding the sample enhancement set into the image defogging model for multi-branch feature extraction extracts the sample image features in a multi-channel manner; and frequency-domain enhancement of the residual restored image and the fusion restored image effectively uses frequency-domain information to refine the defogging effect in both images, improving the quality of the defogged image and the accuracy of image defogging.
Embodiment 3
FIG. 3 is a block diagram of a terminal device 2 provided in the third embodiment of the present application. As shown in FIG. 3, the terminal device 2 of this embodiment includes: a processor 20, a memory 21, and a computer program 22 stored in the memory 21 and executable on the processor 20, such as a program implementing the multi-branch frequency-domain-enhanced real image defogging method. When the processor 20 executes the computer program 22, the steps of the above embodiments of the multi-branch frequency-domain-enhanced real image defogging method are implemented.

Exemplarily, the computer program 22 may be divided into one or more modules, which are stored in the memory 21 and executed by the processor 20 to complete the present application. The one or more modules may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program 22 in the terminal device 2. The terminal device may include, but is not limited to, the processor 20 and the memory 21.

The processor 20 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor or any conventional processor.

The memory 21 may be an internal storage unit of the terminal device 2, such as a hard disk or memory of the terminal device 2. The memory 21 may also be an external storage device of the terminal device 2, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card equipped on the terminal device 2. Further, the memory 21 may include both an internal storage unit and an external storage device of the terminal device 2. The memory 21 is used to store the computer program and other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is to be output.
In addition, the functional modules in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.

If the integrated module is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium, which may be non-volatile or volatile. Based on this understanding, all or part of the processes in the methods of the above embodiments may also be implemented by instructing the relevant hardware through a computer program. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, implements the steps of the above method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable storage medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable storage medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, computer-readable storage media do not include electrical carrier signals and telecommunication signals.

The above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, a person skilled in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of the technical features may be replaced by equivalents; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all fall within the protection scope of the present application.
Claims (6)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410252937.1A CN117853371B (en) | 2024-03-06 | 2024-03-06 | Multi-branch frequency domain enhanced real image defogging method, system and terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410252937.1A CN117853371B (en) | 2024-03-06 | 2024-03-06 | Multi-branch frequency domain enhanced real image defogging method, system and terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117853371A CN117853371A (en) | 2024-04-09 |
CN117853371B true CN117853371B (en) | 2024-05-31 |
Family
ID=90532825
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410252937.1A Active CN117853371B (en) | 2024-03-06 | 2024-03-06 | Multi-branch frequency domain enhanced real image defogging method, system and terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117853371B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN119810000A (en) * | 2025-03-17 | 2025-04-11 | 清华大学 | A multi-domain image intelligent enhancement method based on memory and decision fusion |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111383192A (en) * | 2020-02-18 | 2020-07-07 | 清华大学 | SAR-fused visible light remote sensing image defogging method |
CN111915531A (en) * | 2020-08-06 | 2020-11-10 | 温州大学 | A multi-level feature fusion and attention-guided neural network image dehazing method |
CN114283078A (en) * | 2021-12-09 | 2022-04-05 | 北京理工大学 | An adaptive fusion image dehazing method based on two-way convolutional neural network |
CN114764752A (en) * | 2021-01-15 | 2022-07-19 | 西北大学 | Night image defogging algorithm based on deep learning |
CN117111000A (en) * | 2023-03-24 | 2023-11-24 | 西安电子科技大学 | A SAR comb spectrum interference suppression method based on dual-channel attention residual network |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11663705B2 (en) * | 2021-09-17 | 2023-05-30 | Nanjing University Of Posts And Telecommunications | Image haze removal method and apparatus, and device |
-
2024
- 2024-03-06 CN CN202410252937.1A patent/CN117853371B/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111383192A (en) * | 2020-02-18 | 2020-07-07 | 清华大学 | SAR-fused visible light remote sensing image defogging method |
CN111915531A (en) * | 2020-08-06 | 2020-11-10 | 温州大学 | A multi-level feature fusion and attention-guided neural network image dehazing method |
CN114764752A (en) * | 2021-01-15 | 2022-07-19 | 西北大学 | Night image defogging algorithm based on deep learning |
CN114283078A (en) * | 2021-12-09 | 2022-04-05 | 北京理工大学 | An adaptive fusion image dehazing method based on two-way convolutional neural network |
CN117111000A (en) * | 2023-03-24 | 2023-11-24 | 西安电子科技大学 | A SAR comb spectrum interference suppression method based on dual-channel attention residual network |
Non-Patent Citations (3)
Title |
---|
Hu Yu et al. "Frequency and Spatial Dual Guidance for Image Dehazing." Computer Vision - ECCV 2022, 2022. *
Sun Hang et al. "Dual-branch remote sensing image dehazing network with hierarchical feature interaction and enhanced receptive field." National Remote Sensing Bulletin (遥感学报), 2023. *
Xu Yan et al. "Image dehazing algorithm based on a convolutional neural network with multi-feature fusion." Laser & Optoelectronics Progress (激光与光电子学进展), 2018. *
Also Published As
Publication number | Publication date |
---|---|
CN117853371A (en) | 2024-04-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Zhang et al. | Single image defogging based on multi-channel convolutional MSRCR | |
Zhang et al. | Underwater image enhancement via minimal color loss and locally adaptive contrast enhancement | |
CN105894484B (en) | A HDR reconstruction algorithm based on histogram normalization and superpixel segmentation | |
CN106910175B (en) | A single image dehazing algorithm based on deep learning | |
Li et al. | Color correction based on cfa and enhancement based on retinex with dense pixels for underwater images | |
US9053540B2 (en) | Stereo matching by census transform and support weight cost aggregation | |
CN117853371B (en) | Multi-branch frequency domain enhanced real image defogging method, system and terminal | |
CN114418869B (en) | Document image geometric correction method, system, device and medium | |
CN106204494A (en) | A kind of image defogging method comprising large area sky areas and system | |
Wang et al. | Brightness perceiving for recursive low-light image enhancement | |
CN108764235A (en) | Neural network model, object detection method, equipment and medium | |
CN113674187A (en) | Image reconstruction method, system, terminal device and storage medium | |
CN114283288B (en) | A method, system, device and storage medium for nighttime vehicle image enhancement | |
Zhou et al. | Multiscale Fusion Method for the Enhancement of Low‐Light Underwater Images | |
Wang et al. | Fusion-based low-light image enhancement | |
CN111402166A (en) | Image denoising method and device, service terminal and computer readable storage medium | |
CN114897732B (en) | Image defogging method and device based on association of physical model and feature density | |
CN118096587A (en) | Document image deblurring method, system and equipment based on deep learning | |
Xiao et al. | Video denoising algorithm based on improved dual‐domain filtering and 3D block matching | |
Wang | Frequency-Based Unsupervised Low-Light Image Enhancement Framework | |
CN111383187A (en) | Image processing method and device and intelligent terminal | |
Sun et al. | Fractal pyramid low-light image enhancement network with illumination information | |
Xiao et al. | Single-image dehazing algorithm based on convolutional neural networks | |
Fleitmann et al. | Noise Reduction Methods for Charge Stability Diagrams of Double Quantum Dots | |
CN118096750B (en) | Super-resolution image quality evaluation method and system based on high-low frequency feature enhancement |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |