CN113689346A - Compact deep learning defogging method based on contrast learning - Google Patents

Compact deep learning defogging method based on contrast learning

Info

Publication number
CN113689346A
CN113689346A
Authority
CN
China
Prior art keywords
model
defogging
deep
picture
fog
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110940699.XA
Other languages
Chinese (zh)
Inventor
谢源 (Xie Yuan)
吴海燕 (Wu Haiyan)
林绍辉 (Lin Shaohui)
张志忠 (Zhang Zhizhong)
马利庄 (Ma Lizhuang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
East China Normal University
Original Assignee
East China Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by East China Normal University filed Critical East China Normal University
Priority to CN202110940699.XA priority Critical patent/CN113689346A/en
Publication of CN113689346A publication Critical patent/CN113689346A/en
Pending legal-status Critical Current

Classifications

    • G06T5/73
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Abstract

The invention discloses a compact deep learning defogging method based on contrast learning, comprising the following steps: first, a predicted restored image is obtained from a compact deep learning model; the input fog image, the corresponding clear image and the predicted restored image are then each fed into a pre-trained VGGNet model to obtain intermediate features; finally, these features are used to compute a reconstruction loss and the proposed contrast regularization term for training. The method effectively alleviates artifacts, color distortion and similar problems in image restoration, is plug-and-play, and can be used flexibly with various models and various image restoration tasks.

Description

Compact deep learning defogging method based on contrast learning
Technical Field
The invention relates to defogging methods, and in particular to a compact deep learning defogging method based on contrast learning.
Background
In recent years, the rapid development of deep learning has greatly advanced computer vision technology and brought great convenience to daily life, for example in automatic driving. Image processing, as an important part of computer vision, often determines the performance of more complex vision systems. In the real world, severe weather (fog, rain, etc.) often severely degrades the quality of acquired images and thereby increases the difficulty of subsequent tasks. To ensure the accuracy and stability of a vision system, it is therefore necessary to recover images damaged by severe weather.
Among the different kinds of degradation, the texture and color of a picture are damaged particularly severely in hazy weather. Single-image defogging is a difficult, ill-posed problem. On the one hand, the atmospheric light and the transmission map are difficult to estimate accurately, and errors in either estimate accumulate and lead to poor restoration. On the other hand, existing hand-crafted priors are limited to specific assumed scenes, while real-world fog distributions tend to be complex, so traditional prior-based methods often fail in some scenes.
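For context, both prior-based and learning-based defogging methods build on the standard atmospheric scattering model, a well-known formulation that the present description relies on implicitly rather than restating:

$I(x) = J(x)\, t(x) + A\, (1 - t(x))$

where I(x) is the observed fog image, J(x) is the scene radiance (the clear image), A is the global atmospheric light and t(x) is the transmission map. Recovering J therefore requires estimating both A and t(x), which is exactly where the accumulated estimation errors mentioned above arise.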
Existing single-image defogging methods fall mainly into two categories: prior-based methods and learning-based methods. Prior-based methods usually require manually assuming a certain prior from a large number of image observations, such as the dark channel prior [He K, Sun J, Tang X. Single image haze removal using dark channel prior. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010, 33(12): 2341-2353]. These hand-crafted priors are often restricted to particular assumed scenes (for example, the dark channel prior holds only for non-sky regions). In the real world, however, fog distributions are very complex, designing a general and robust prior is difficult, and many carefully designed priors fail in real scenes. In recent years, thanks to the rapid development of deep learning, more and more learning-based defogging methods have appeared, such as the stacked combined attention module designed in [Qin X, Wang Z, Bai Y, Xie X, Jia H. FFA-Net: Feature Fusion Attention Network for Single Image Dehazing. Proceedings of the AAAI Conference on Artificial Intelligence, 2020, 34(07): 11908-11915], which achieves high performance on the SOTS dataset, and the method of [Hong M, Xie Y, Li C, et al. Distilling Image Dehazing With Heterogeneous Task Imitation. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2020], which enhances defogging performance by distilling the defogging model with a teacher model pre-trained on clear images. Such methods show superiority on the 3 commonly used standard datasets SOTS [Li B, Ren W, Fu D, et al. Benchmarking Single-Image Dehazing and Beyond. IEEE Transactions on Image Processing, 2018, 28(1): 492-505], Dense-Haze and NH-HAZE.
Disclosure of Invention
The invention aims to provide a compact deep learning defogging method based on contrast learning, addressing two problems of existing defogging methods: the lower bound of the solution space is not fully considered, and the number of model parameters is large. The method effectively improves defogging performance and image restoration quality while greatly reducing the number of model parameters.
The specific technical scheme for realizing the purpose of the invention is as follows:
a compact deep learning defogging method based on contrast learning, the method comprising:
step 1: data set preparation and preprocessing
1.1) collecting paired fog images and clear images as image pairs, wherein a fog image and its clear image have the same content, but partial areas of the fog image are occluded by fog, which blurs the picture and damages its texture and color; the two images of a pair are PNG files with the same resolution; after the image pairs are collected, they are divided into a training set and a test set at a ratio of 8:2;
1.2) performing data augmentation on the training set pictures, the augmentation operations being random flipping and random cropping;
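As an illustration of step 1.2, the following is a minimal PyTorch/torchvision sketch of paired augmentation that applies the same random crop and random flips to a fog/clear image pair so that the two pictures stay pixel-aligned; the crop size, the flip probabilities and the assumption that the inputs are PIL images no smaller than the crop size are illustrative choices, not requirements of the patent.

import random
import torchvision.transforms.functional as TF

def augment_pair(fog_img, clear_img, size=256):
    # Random crop to size x size; assumes both inputs are PIL images of equal
    # size and at least size x size pixels.
    top = random.randint(0, fog_img.height - size)
    left = random.randint(0, fog_img.width - size)
    fog_img = TF.crop(fog_img, top, left, size, size)
    clear_img = TF.crop(clear_img, top, left, size, size)
    # Random horizontal and vertical flips, using the same decision for both
    # images so that they remain aligned.
    if random.random() < 0.5:
        fog_img, clear_img = TF.hflip(fog_img), TF.hflip(clear_img)
    if random.random() < 0.5:
        fog_img, clear_img = TF.vflip(fog_img), TF.vflip(clear_img)
    return TF.to_tensor(fog_img), TF.to_tensor(clear_img)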
step 2: deep defogging model feature extraction and restoration
2.1) the deep defogging model consists of a down-sampling module, a feature extraction module and an up-sampling module; the down-sampling module consists of 3 convolution layers and reduces the original input picture by a factor of 4 through its convolution parameters, namely the kernel size and the stride; correspondingly, the up-sampling module also consists of 3 convolution layers and up-samples the features by a factor of 4; after the 4x down-sampling and 4x up-sampling, the size of the output picture is restored to be consistent with that of the original input picture;
2.2) the feature extraction module is used for extracting deep features and is placed after the down-sampling module, so that most of the network's computation is concentrated in a low-resolution feature space; it comprises 2 kinds of submodules, namely an attention module and a deformable convolution module;
2.3) the training set data processed in step 1 is used as the input of the deep defogging model, and a 3-channel restored picture is finally obtained after passing through the down-sampling module, the feature extraction module and the up-sampling module in sequence;
step 3: calculating a loss function value for picture reconstruction
3.1) calculating the reconstruction loss between the predicted restored picture obtained in step 2 and the clear picture; formula (1) defines the reconstruction loss function of the deep defogging model, which is minimized during training and measures the error between the output of the deep defogging model and the ground-truth clear image;
$L_1(\phi(I,w), J) = \|J - \phi(I,w)\|_1$ (1)
where I denotes the input fog image, J denotes the clear image corresponding to the input fog image, φ(·) denotes the deep defogging model, w denotes the parameters of the deep defogging model, and φ(I, w) denotes the output of the deep defogging model;
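As an illustration of formula (1), the following is a minimal sketch of the reconstruction loss in PyTorch; the function name and the use of the built-in L1 loss are illustrative choices rather than details fixed by the patent.

import torch
import torch.nn.functional as F

def reconstruction_loss(restored: torch.Tensor, clear: torch.Tensor) -> torch.Tensor:
    # Formula (1): mean absolute (L1) error between the model output phi(I, w)
    # and the corresponding clear image J.
    return F.l1_loss(restored, clear)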
step 4: calculating the contrast regularization value
4.1) the input fog picture, the clear picture used for supervision and the predicted restored picture obtained in step 2 are each fed into a pre-trained VGGNet19 model, and the high-dimensional features of the 1st, 3rd, 5th, 9th and 13th layers are extracted;
4.2) calculating the contrast regularization value; formula (2) defines the contrast regularization, which makes the result restored by the deep defogging model close to the clear image and far from the fog image;
$CR = \sum_i \omega_i \cdot \frac{\|G_i(J) - G_i(\phi(I,w))\|_1}{\|G_i(I) - G_i(\phi(I,w))\|_1}$ (2)
where G_i(·) denotes extraction of the feature of the i-th selected layer of the pre-trained VGGNet19 model; G_i(φ(I,w)), G_i(J) and G_i(I) denote the high-dimensional representations obtained by passing the output of the defogging model (the anchor), the clear image (the positive sample) and the fog image (the negative sample), respectively, through the pre-trained VGGNet19 model; and ω_i denotes the weight of the i-th selected layer feature of the VGGNet19 model;
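The sketch below shows one way the contrast regularization of formula (2) could be computed with a frozen, pre-trained VGGNet19 from torchvision. The mapping of the 1st/3rd/5th/9th/13th layers to torchvision layer indices, the layer weights ω_i and the small constant added to the denominator are assumptions made for illustration; ImageNet normalization of the inputs is omitted for brevity.

import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

class ContrastRegularization(nn.Module):
    # Sketch of formula (2): pull the restored image (anchor) towards the clear
    # image (positive sample) and away from the fog image (negative sample) in
    # the feature space of a frozen, pre-trained VGGNet19.
    def __init__(self, layer_ids=(1, 3, 5, 9, 13),
                 weights=(1 / 32, 1 / 16, 1 / 8, 1 / 4, 1.0)):
        super().__init__()
        vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.vgg = vgg
        self.layer_ids = tuple(layer_ids)  # assumed indices into vgg19.features
        self.weights = weights

    def _features(self, x):
        feats = []
        for idx, layer in enumerate(self.vgg):
            x = layer(x)
            if idx in self.layer_ids:
                feats.append(x)
            if idx == max(self.layer_ids):
                break
        return feats

    def forward(self, restored, clear, foggy):
        anchors = self._features(restored)
        positives = self._features(clear)
        negatives = self._features(foggy)
        loss = 0.0
        for w, a, p, n in zip(self.weights, anchors, positives, negatives):
            # The numerator pulls the anchor towards the positive sample; the
            # denominator pushes it away from the negative sample.
            loss = loss + w * F.l1_loss(a, p) / (F.l1_loss(a, n) + 1e-7)
        return loss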
step 5: training the model
5.1) the deep defogging model is trained on the training set according to the reconstruction loss function, the contrast regularization value and the training parameters, the training parameters including a learning rate of 0.0002;
5.2) training stops once the number of iterations of the deep defogging model reaches the set threshold;
5.3) the deep defogging model is tested on the held-out test set and the test accuracy is determined from the test results; the test accuracy comprises two indexes, peak signal-to-noise ratio (PSNR) and structural similarity (SSIM); the value range of SSIM is [-1, 1], and larger PSNR and SSIM values indicate a better restoration effect.
The attention module described in step 2 is a combination of two attention layers: channel-level attention and pixel-level attention. Channel-level attention computes a weight for each channel of the feature, and pixel-level attention computes a weight for each pixel of the feature; both multiply the computed weights with the original feature to obtain a new feature. The two layers are combined in cascade.
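Bringing together step 2 and the attention module just described, the following is a rough PyTorch sketch of how the model could be assembled: cascaded channel-level and pixel-level attention blocks, deformable convolutions (via torchvision's DeformConv2d) for the dynamic fusion part, 3 strided convolutions for 4x down-sampling and a symmetric up-sampling head producing a 3-channel output. The channel width, reduction ratio, activation functions and the use of transposed convolutions for up-sampling are assumptions made for illustration, not details fixed by the patent.

import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class ChannelAttention(nn.Module):
    # One weight per channel, computed from globally pooled features.
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.body = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.body(x)  # multiply the weights onto the original feature

class PixelAttention(nn.Module):
    # One weight per spatial position of the feature map.
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 1, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.body(x)

class AttentionBlock(nn.Module):
    # Channel-level attention followed by pixel-level attention (cascade).
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.pa = PixelAttention(channels)

    def forward(self, x):
        return self.pa(self.ca(x))

class DeformableBlock(nn.Module):
    # A deformable 3x3 convolution whose sampling offsets are predicted from
    # the input feature by an ordinary convolution.
    def __init__(self, channels):
        super().__init__()
        self.offset = nn.Conv2d(channels, 2 * 3 * 3, 3, padding=1)
        self.deform = DeformConv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        return self.deform(x, self.offset(x))

class CompactDehazeNet(nn.Module):
    # 3 convolutions for 4x down-sampling, 6 attention blocks plus 2 deformable
    # convolutions for feature extraction, and a symmetric 4x up-sampling head.
    def __init__(self, width=64):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(3, width, 3, stride=1, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        self.features = nn.Sequential(
            *[AttentionBlock(width) for _ in range(6)],
            DeformableBlock(width), DeformableBlock(width))
        self.up = nn.Sequential(
            nn.ConvTranspose2d(width, width, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(width, width, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 3, 3, padding=1))

    def forward(self, fog):
        return self.up(self.features(self.down(fog)))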
Step 3 and step 4 are performed simultaneously, and their results are combined by weighted summation, where the weight of the reconstruction loss function is 1 and the weight of the contrast regularization is 0.1.
The invention has the following outstanding advantages:
the invention provides a new loss function aiming at the defogging problem, which is based on comparison learning, not only considers the upper bound of an understanding space, but also restricts the lower bound of the understanding space. Experiments show that the function can effectively avoid artifacts and color distortion, so that the restored clear image is more real and natural. In addition, the invention provides a deep defogging model with parameters and performance, and by designing and using an adaptive feature mixing operation and dynamic convolution module, the superior performance is obtained and simultaneously fewer parameters are used compared with other methods.
The invention has the following beneficial effects:
the invention provides a new comparison regular pattern based on comparison learning, which can restrict the lower bound of the solution space of the defogging problem, so that the restored image is closer to the corresponding clear image. In addition, experiments show that compared with other existing methods, the method has superiority in processing artifacts and color distortion.
The invention gives full consideration to both performance and the number of model parameters, and extracts more information with fewer parameters through the proposed adaptive feature mixing and dynamic fusion module. High performance is obtained while the number of model parameters is reduced, making the method simpler and more efficient in practical applications.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a flow chart of the present invention including a deep defogging model structure;
FIG. 3 is a diagram of a deformable convolution module of the present invention;
FIG. 4 shows some restoration results of the present invention;
FIG. 5 shows the effect of the contrast regularization proposed by the present invention.
Detailed Description
For the purpose of facilitating understanding, the present invention will be described in detail below with reference to the accompanying drawings.
The specific process of the invention is shown in fig. 1, and in the training phase, the process is mainly divided into three parts, namely data set preprocessing, model defogging and loss function calculation.
A1: the data sets used by the invention are public data sets RESIDE, Dense-Haze and NH-Haze, including a composite data set (RESIDE) and real data sets (Dense-Haze and NH-Haze [), indoor scenes and outdoor scenes. The RESIDE dataset contains 5 subsets (ITS, OTS, SOTS, RTTS and HSTS), and in the present invention the ITS subset is used, 13990 pair maps (fog map and clarity map) containing the indoor scene are used as training set, and 500 indoor pair maps of the SOTS subset are used as test set. The Dense-Haze data set comprises 50 picture pairs shot in a real fog scene, and is characterized in that fog is distributed densely, the scene is an extreme fog scene, and the texture of the picture is seriously damaged. In the present invention, 45 sheets were used as training sets and 5 sheets were used as test sets. The NH-HAZE dataset consists of 50 fog map pairs (5 of which are not disclosed for clarity) in real fog scenes, characterized by an uneven fog distribution. In the present invention, 40 sheets were used as training sets and 5 sheets were used as test sets;
a2: during the training process, all the training set pictures are preprocessed to increase the generalization capability of the model, mainly including scaling the pictures to 256 × 256 resolution, horizontal and vertical flipping, and the like.
B1: The down-sampling module is composed of 3 convolution layers, and 4x down-sampling is achieved by setting the convolution parameters (kernel size, stride, etc.);
B2: The feature extraction module consists of 6 attention layers and 2 deformable convolutions. The attention module combines a channel-level attention module and a pixel-level attention module in cascade, and the computed weights are multiplied with the original feature to obtain a new feature. The deformable convolution can learn more flexible convolution kernels, as shown in fig. 3. The invention combines 2 deformable convolutions in series into a dynamic fusion module (see fig. 2);
B3: The up-sampling module is symmetric to B1; its output is the feature up-sampled by a factor of 4, and the final output has 3 channels.
C1: In order to ensure that the output of the defogging model is as close as possible to the corresponding clear image, the invention adopts a reconstruction loss function (formula (1)) to constrain the model,
where φ(I, w) represents the output of the defogging network and J represents the clear image corresponding to the input fog image;
C2: Since a defogging model constrained only by the reconstruction loss tends to produce artifacts and color distortion, the invention proposes a novel contrast regularization based on contrast learning. Contrast learning pulls a given sample (the anchor) close to a positive sample and pushes it away from a negative sample. In the invention, the restoration result of the defogging model is the anchor, the input fog image is the negative sample and the corresponding clear image is the positive sample; the three form a triple that is fed into the pre-trained VGGNet19 model, so that the anchor, the positive sample and the negative sample all lie in the same high-dimensional feature space. On this basis, formula (2) defines the proposed contrast regularization, whose aim is to bring the model's restored result closer to the clear image and further from the fog image.
The specific structure of the deep defogging model is shown in fig. 2.
During training, the present invention uses the total loss function defined by formula (3):
$Loss = \alpha_1 \cdot L_1(\phi(I,w), J) + \alpha_2 \cdot CR(G_i(\phi(I,w)), G_i(J), G_i(I))$ (3)
where α_1 and α_2 are the loss weights, set to 1 and 0.1 respectively in the present invention, and CR is the contrast regularization of formula (2). Training uses one NVIDIA TITAN RTX GPU with Adam as the optimizer (β_1 = 0.9, β_2 = 0.999); the learning rate is initialized to 0.0002 and adjusted with a cosine annealing strategy.
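Putting the pieces together, the following is a hypothetical training loop built from the sketches above; train_loader (an iterator over aligned fog/clear batches) and the number of epochs are assumptions, while the loss weights (1 and 0.1), the Adam betas (0.9, 0.999), the 0.0002 initial learning rate and the cosine annealing schedule follow the description.

import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import CosineAnnealingLR

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = CompactDehazeNet().to(device)                # from the architecture sketch
contrast_reg = ContrastRegularization().to(device)   # from the formula (2) sketch

optimizer = Adam(model.parameters(), lr=2e-4, betas=(0.9, 0.999))
num_epochs = 100                                     # illustrative; not specified here
scheduler = CosineAnnealingLR(optimizer, T_max=num_epochs)
alpha1, alpha2 = 1.0, 0.1                            # loss weights from formula (3)

for epoch in range(num_epochs):
    for fog, clear in train_loader:                  # assumed paired data loader
        fog, clear = fog.to(device), clear.to(device)
        restored = model(fog)
        loss = alpha1 * reconstruction_loss(restored, clear) \
             + alpha2 * contrast_reg(restored, clear, fog)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()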
D1: Calculate the PSNR and SSIM values of the restored images output by the defogging model.
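A small sketch of the D1 evaluation step using scikit-image's metric implementations; treating the restored and clear images as H×W×3 arrays with values in [0, 1] is an assumption about the data format.

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(restored: np.ndarray, clear: np.ndarray):
    # Compute PSNR and SSIM between a restored image and its clear reference.
    psnr = peak_signal_noise_ratio(clear, restored, data_range=1.0)
    ssim = structural_similarity(clear, restored, data_range=1.0, channel_axis=-1)
    return psnr, ssim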
Concrete experimental results
Image restoration quality is generally evaluated with the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM); the higher the two values, the closer the two pictures are. The performance of the invention is measured by computing the PSNR and SSIM between the pictures restored by the proposed method and the corresponding clear pictures. The invention was tested on 3 public data sets, SOTS, Dense-Haze and NH-HAZE, and compared with 6 existing defogging methods. The test results are shown in Table 1.
Table 1. Comparison of the performance of the present invention with existing methods on the 3 data sets (the table is reproduced as an image in the original publication).
The invention obtains the best performance on SOTS and NH-HAZE and the best PSNR on Dense-Haze; its SSIM on Dense-Haze is slightly lower than that of MSBDN, but MSBDN has about 12 times as many parameters as the proposed model.
The proposed method is also superior in visual quality. As shown in fig. 4, where (a) is the fog image, (b) is the restored image and (c) is the clear image, the invention obtains good results in both indoor and outdoor scenes. Furthermore, the contrast regularization proposed by the invention noticeably avoids artifacts and color distortion. As shown in fig. 5, fig. 5(a) is the fog image, fig. 5(b) is the result obtained with only the reconstruction loss, which can introduce black artifacts and color distortion, fig. 5(c) is the result with the proposed contrast regularization, which clearly avoids the problems of fig. 5(b) and is closer to the clear image, and fig. 5(d) is the clear image.
In addition, the contrast regularization proposed by the present invention generalizes well: as shown in Table 2, adding it to existing deep learning methods improves their performance.
Table 2. Generalization of the proposed contrast regularization (gains shown in parentheses)
Method PSNR SSIM
GridDehazeNet 32.99(↑0.83) 0.9863(↑0.0027)
FFA-Net 36.74(↑0.35) 0.9906(↑0.0020)
KDDN 35.18(↑0.46) 0.9854(↑0.0009)
MSBDN 34.45(↑0.66) 0.9861(↑0.0021)

Claims (3)

1. A compact deep learning defogging method based on contrast learning, which is characterized by comprising the following steps:
step 1: data set preparation and preprocessing
1.1) collecting paired fog images and clear images as image pairs, wherein a fog image and its clear image have the same content, but partial areas of the fog image are occluded by fog, which blurs the picture and damages its texture and color; the two images of a pair are PNG files with the same resolution; after the image pairs are collected, they are divided into a training set and a test set at a ratio of 8:2;
1.2) performing data augmentation on the training set pictures, the augmentation operations being random flipping and random cropping;
step 2: deep defogging model feature extraction and restoration
2.1) the deep defogging model consists of a down-sampling module, a feature extraction module and an up-sampling module; the down-sampling module consists of 3 convolution layers and reduces the original input picture by a factor of 4 through its convolution parameters, namely the kernel size and the stride; correspondingly, the up-sampling module also consists of 3 convolution layers and up-samples the features by a factor of 4; after the 4x down-sampling and 4x up-sampling, the size of the output picture is restored to be consistent with that of the original input picture;
2.2) the feature extraction module is used for extracting deep features and is placed after the down-sampling module, so that most of the network's computation is concentrated in a low-resolution feature space; it comprises 2 kinds of submodules, namely an attention module and a deformable convolution module;
2.3) the training set data processed in step 1 is used as the input of the deep defogging model, and a 3-channel restored picture is finally obtained after passing through the down-sampling module, the feature extraction module and the up-sampling module in sequence;
step 3: calculating a loss function value for picture reconstruction
3.1) calculating the reconstruction loss between the predicted restored picture obtained in step 2 and the clear picture; formula (1) defines the reconstruction loss function of the deep defogging model, which is minimized during training and measures the error between the output of the deep defogging model and the ground-truth clear image;
$L_1(\phi(I,w), J) = \|J - \phi(I,w)\|_1$ (1)
where I denotes the input fog image, J denotes the clear image corresponding to the input fog image, φ(·) denotes the deep defogging model, w denotes the parameters of the deep defogging model, and φ(I, w) denotes the output of the deep defogging model;
step 4: calculating the contrast regularization value
4.1) the input fog picture, the clear picture used for supervision and the predicted restored picture obtained in step 2 are each fed into a pre-trained VGGNet19 model, and the high-dimensional features of the 1st, 3rd, 5th, 9th and 13th layers are extracted;
4.2) calculating the contrast regularization value; formula (2) defines the contrast regularization, which makes the result restored by the deep defogging model close to the clear image and far from the fog image;
$CR = \sum_i \omega_i \cdot \frac{\|G_i(J) - G_i(\phi(I,w))\|_1}{\|G_i(I) - G_i(\phi(I,w))\|_1}$ (2)
where G_i(·) denotes extraction of the feature of the i-th selected layer of the pre-trained VGGNet19 model; G_i(φ(I,w)), G_i(J) and G_i(I) denote the high-dimensional representations obtained by passing the output of the defogging model (the anchor), the clear image (the positive sample) and the fog image (the negative sample), respectively, through the pre-trained VGGNet19 model; and ω_i denotes the weight of the i-th selected layer feature of the VGGNet19 model;
step 5: training the model
5.1) the deep defogging model is trained on the training set according to the reconstruction loss function, the contrast regularization value and the training parameters, the training parameters including a learning rate of 0.0002;
5.2) training stops once the number of iterations of the deep defogging model reaches the set threshold;
5.3) the deep defogging model is tested on the held-out test set and the test accuracy is determined from the test results; the test accuracy comprises two indexes, peak signal-to-noise ratio (PSNR) and structural similarity (SSIM); the value range of SSIM is [-1, 1], and larger PSNR and SSIM values indicate a better restoration effect.
2. The compact deep learning defogging method based on contrast learning according to claim 1, wherein the attention module in step 2 is a combination of two attention layers: channel-level attention and pixel-level attention; channel-level attention computes a weight for each channel of the feature, and pixel-level attention computes a weight for each pixel of the feature; both multiply the computed weights with the original feature to obtain a new feature; and the two layers are combined in cascade.
3. The compact deep learning defogging method based on contrast learning according to claim 1, wherein step 3 and step 4 are performed simultaneously and their results are combined by weighted summation, the weight of the reconstruction loss function being 1 and the weight of the contrast regularization being 0.1.
CN202110940699.XA 2021-08-17 2021-08-17 Compact deep learning defogging method based on contrast learning Pending CN113689346A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110940699.XA CN113689346A (en) 2021-08-17 2021-08-17 Compact deep learning defogging method based on contrast learning


Publications (1)

Publication Number Publication Date
CN113689346A true CN113689346A (en) 2021-11-23

Family

ID=78580134

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110940699.XA Pending CN113689346A (en) 2021-08-17 2021-08-17 Compact deep learning defogging method based on contrast learning

Country Status (1)

Country Link
CN (1) CN113689346A (en)


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WU HAIYAN et al.: "Contrastive Learning for Compact Single Image Dehazing", arXiv, pages 1-10 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114820388A (en) * 2022-06-22 2022-07-29 合肥工业大学 Image defogging method based on codec structure
CN114820388B (en) * 2022-06-22 2022-09-06 合肥工业大学 Image defogging method based on codec structure

Similar Documents

Publication Publication Date Title
Dudhane et al. RYF-Net: Deep fusion network for single image haze removal
CN106952228B (en) Super-resolution reconstruction method of single image based on image non-local self-similarity
Liu et al. Cross-SRN: Structure-preserving super-resolution network with cross convolution
CN110517203B (en) Defogging method based on reference image reconstruction
CN110378849B (en) Image defogging and rain removing method based on depth residual error network
CN110503613B (en) Single image-oriented rain removing method based on cascade cavity convolution neural network
CN111489303A (en) Maritime affairs image enhancement method under low-illumination environment
CN113392711B (en) Smoke semantic segmentation method and system based on high-level semantics and noise suppression
CN111882489A (en) Super-resolution graph recovery method for simultaneously enhancing underwater images
CN107590779A (en) A kind of image denoising deblurring method based on image block cluster dictionary training
CN112070688A (en) Single image defogging method for generating countermeasure network based on context guidance
CN116309070A (en) Super-resolution reconstruction method and device for hyperspectral remote sensing image and computer equipment
CN115526779A (en) Infrared image super-resolution reconstruction method based on dynamic attention mechanism
Wang et al. Single image haze removal via attention-based transmission estimation and classification fusion network
CN115049921A (en) Method for detecting salient target of optical remote sensing image based on Transformer boundary sensing
CN113689346A (en) Compact deep learning defogging method based on contrast learning
Zhao et al. A multi-scale U-shaped attention network-based GAN method for single image dehazing
CN116128768B (en) Unsupervised image low-illumination enhancement method with denoising module
CN112598604A (en) Blind face restoration method and system
CN113066025A (en) Image defogging method based on incremental learning and feature and attention transfer
CN117036182A (en) Defogging method and system for single image
CN117408924A (en) Low-light image enhancement method based on multiple semantic feature fusion network
Hua et al. Iterative residual network for image dehazing
Wang et al. Uneven image dehazing by heterogeneous twin network
CN116363001A (en) Underwater image enhancement method combining RGB and HSV color spaces

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination