CN113744146A - Image defogging method based on contrast learning and knowledge distillation - Google Patents


Info

Publication number
CN113744146A
CN113744146A
Authority
CN
China
Prior art keywords
image
network
fog
layer
free
Prior art date
Legal status
Pending
Application number
CN202110969454.XA
Other languages
Chinese (zh)
Inventor
孙建德
李燕
李静
Current Assignee
Shandong Normal University
Original Assignee
Shandong Normal University
Priority date
Filing date
Publication date
Application filed by Shandong Normal University
Priority to CN202110969454.XA
Publication of CN113744146A
Legal status: Pending

Classifications

    • G — PHYSICS
        • G06 — COMPUTING; CALCULATING OR COUNTING
            • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N 3/00 — Computing arrangements based on biological models
                    • G06N 3/02 — Neural networks
                        • G06N 3/04 — Architecture, e.g. interconnection topology
                            • G06N 3/045 — Combinations of networks
                        • G06N 3/08 — Learning methods
            • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 5/00 — Image enhancement or restoration
                    • G06T 5/70 — Denoising; Smoothing
                • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
                    • G06T 2207/20 — Special algorithmic details
                        • G06T 2207/20081 — Training; Learning
                        • G06T 2207/20084 — Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image defogging method based on contrast learning and knowledge distillation, which mainly comprises the following steps: S1, acquire image data, namely paired foggy and fog-free images of the same scene, as training data; S2, construct an image defogging model using a knowledge distillation network, the model being divided into a teacher network branch and a student network branch; S3, input the fog-free image into the teacher network of the model, which first maps it to a low-dimensional feature space through an encoder and then reconstructs the fog-free image from the low-dimensional features through a decoder; S4, take the trained decoder of the teacher network as the student network, input the foggy image into the student network of the model for image defogging, and output a fog-free image; S5, construct a contrast loss using a contrastive learning strategy, so that the defogged image output by the network is pulled close to the natural fog-free image and pushed away from the foggy image.

Description

Image defogging method based on contrast learning and knowledge distillation
Technical Field
The invention relates to an image defogging method based on contrast learning and knowledge distillation, and belongs to the technical field of image processing.
Background
Haze is a common atmospheric phenomenon, and floating particles such as dust and smoke in the atmosphere can absorb and scatter light, so that the quality of a shot image is reduced. Foggy images typically lose contrast, color fidelity, and edge information, thereby reducing the visibility of the scene. Such image degradation can greatly reduce the accuracy and robustness of subsequent high-level computer vision tasks. Therefore, image defogging is of great significance to the development and performance improvement of computer vision tasks.
In recent years, research on image defogging algorithms has made great progress. Mainstream image defogging methods fall into two categories: methods based on prior knowledge and methods based on deep learning. Prior-based defogging methods rely on artificially designed priors or assumptions about the statistical properties of images that distinguish foggy from fog-free images, such as the dark channel prior and the color attenuation prior. However, prior-based methods share a common limitation: each prior fits only certain specific scenes and cannot generalize to all scenes. Deep-learning-based methods overcome this drawback of the traditional prior-based defogging methods: features are not defined by hand but are learned by the network from training data, yielding better defogging quality and generality.
Most existing deep-learning-based methods use clear fog-free images as positive samples to guide network training and neglect the effective use of negative samples. Given an anchor sample, contrastive learning pulls it closer to positive samples and pushes it away from negative samples. Therefore, through a contrastive learning strategy, positive and negative samples can supervise network training simultaneously, further improving the defogging effect.
A knowledge distillation network is divided into a teacher network and a student network. The teacher network is trained on positive samples, acquiring important knowledge from them and transferring it to the student network; the student network can thus exploit positive samples as additional information, further strengthening their utilization and improving the defogging capability of the network.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides an image defogging method based on contrast learning and knowledge distillation. In addition, network training is supervised with both positive and negative samples through a contrastive learning strategy, further improving the image defogging effect.
The purpose of the invention can be realized by the following technical scheme:
an image defogging method based on contrast learning and knowledge distillation, comprising the following steps:
s1, acquiring image data, namely acquiring a fog image and a fog-free image which are paired in the same scene as training data;
s2, constructing an image defogging model using a knowledge distillation network, wherein the model is divided into a teacher network T and a student network S; the teacher network has an encoder-decoder structure, and the decoder of the trained teacher network is used as the student network;
s3, in the training stage, a teacher network is trained by using the fog-free images, and important knowledge of the fog-free images is obtained through training;
s4, in the training stage, the student network is trained by using the foggy images, the trained decoder in the teacher network is used as an initialization network of the student network, and the student network is further trained by using the foggy images;
s5, establishing contrast loss by using a contrast learning strategy, and enabling the defogged images output by the network to be closer to natural fog-free images and far away from fog images through the contrast loss in the teacher network and student network training stages;
and S6, in the testing stage, inputting the foggy image into a student network for image defogging, and outputting a fogless image.
Further, step S1 specifically includes:
acquiring a fog-free image of a scene, and generating a fog image corresponding to the fog-free image according to an atmospheric scattering model, wherein the atmospheric scattering model has the specific formula as follows:
I(x)=J(x)t(x)+A(x)(1-t(x)),
wherein x represents a pixel, I(x) represents the foggy image, J(x) represents the fog-free image, t(x) represents the transmittance map, and A(x) represents the atmospheric light map.
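As an illustration, the atmospheric scattering model above can be applied directly to synthesize a foggy training image from a fog-free one. The following NumPy sketch is not part of the patent; the function name and the toy values are illustrative assumptions.

```python
import numpy as np

def synthesize_haze(J, t, A):
    """Apply the atmospheric scattering model I(x) = J(x)t(x) + A(x)(1 - t(x)).

    J: fog-free image, t: transmittance map, A: atmospheric light map,
    all arrays broadcastable to a common shape with values in [0, 1].
    """
    return J * t + A * (1.0 - t)

# Toy example: a uniform clear image, constant transmittance, white airlight.
J = np.full((4, 4, 3), 0.2)   # fog-free image J(x)
t = np.full((4, 4, 1), 0.5)   # transmittance map t(x)
A = np.ones((4, 4, 1))        # atmospheric light map A(x)
I = synthesize_haze(J, t, A)  # foggy image: 0.2 * 0.5 + 1.0 * 0.5 = 0.6
```

Paired (I, J) images produced this way form the training data of step S1.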
Further, the image defogging model in step S2 specifically includes:
the encoder in the teacher network T is a convolutional layer followed by serially connected combination blocks, each combination block consisting of a dense block and a transition block connected in series;
the dense block is formed by connecting a plurality of dense layers in series; each dense layer consists of a batch normalization layer, a ReLU activation layer, a convolutional layer, a batch normalization layer, a ReLU activation layer and a convolutional layer connected in series. Residual connections exist among the dense layers: the residual is the difference between the output and the input of a dense layer, and a residual connection takes the sum of the residual and the input of the previous dense layer as the input of the next dense layer. Through this combination of dense connectivity and residual connections, feature information can be captured to the maximum extent while avoiding information redundancy;
the transition block consists of a batch normalization layer, a ReLU activation layer, a convolution layer and a pooling layer which are connected in series;
the decoder in the teacher network T is a series of combination blocks, each consisting of a dense block and a transition block connected in series; the dense block has the same structure as the dense block in the encoder, while the transition block consists of a batch normalization layer, a ReLU activation layer and a deconvolution layer connected in series. The whole network ends with a convolutional layer and a ReLU activation layer connected in series;
the student network S consists of a decoder that is structurally identical to the decoder in the teacher network T.
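To make the teacher/student relationship concrete, here is a minimal PyTorch sketch of the structural idea only: a teacher with an encoder and a decoder, and a student that is a decoder initialized from the teacher's trained decoder. The tiny layer sizes and module names are illustrative assumptions; the actual dense-block architecture is the one described above.

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    # Simplified stand-in for the dense-block decoder described in the text.
    def __init__(self, ch=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.body(x)

class Teacher(nn.Module):
    # Encoder-decoder autoencoder trained to reconstruct fog-free images.
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.ReLU())
        self.decoder = Decoder()

    def forward(self, x):
        return self.decoder(self.encoder(x))

teacher = Teacher()
student = Decoder()  # the student network is itself a decoder
# Knowledge transfer: initialize the student from the teacher's decoder weights.
student.load_state_dict(teacher.decoder.state_dict())

hazy = torch.rand(1, 3, 8, 8)
dehazed = student(hazy)  # student maps a foggy image to a fog-free image
```

After this initialization the student is further trained on foggy images, as in step S4.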
Further, step S3 specifically includes:
the teacher network T first maps the fog-free image J(x) to a low-dimensional feature space and then reconstructs a fog-free image J_t(x) from the low-dimensional features. The student network S is an image defogging network that takes a foggy image I(x) as input and outputs a fog-free image J_s(x). After training with fog-free images, the decoder of the teacher network has the capability of reconstructing fog-free images.
Further, step S5 specifically includes:
the contrast loss constructed using the contrastive learning strategy is as follows:

L_contrast = ||Ĵ(x) - J(x)||_1 / ||Ĵ(x) - I(x)||_1,

wherein x represents a pixel, J(x) represents the fog-free image serving as the positive sample, I(x) represents the foggy image serving as the negative sample, and Ĵ(x) represents the fog-free image reconstructed by the network; when Ĵ(x) takes J_t(x) it represents the fog-free image reconstructed by the teacher network, and when Ĵ(x) takes J_s(x) it represents the fog-free image reconstructed by the student network; ||·||_1 represents the L1 distance. Both the teacher network and student network training stages are supervised with the contrast loss, which, by minimizing the difference between the reconstructed image and the positive sample and maximizing the difference between the reconstructed image and the negative sample, brings the reconstructed image closer to the positive sample and pushes it away from the negative sample.
Compared with the prior art, the invention has the following advantages:
1. the invention takes the knowledge distillation network as its model framework, makes full use of the important information in the positive samples to train the teacher network, and transfers the strong representation capability of the teacher network to the student network serving as the defogging network, thereby greatly improving the defogging capability of the defogging network.
2. the invention guides the training of the defogging network with a contrastive learning strategy, using positive and negative samples simultaneously as supervision information, so that the defogged image is pulled closer to the natural fog-free image serving as the positive sample and pushed away from the foggy image serving as the negative sample, further improving the defogging effect.
Drawings
FIG. 1 is a flow chart of image defogging according to an embodiment of the present invention;
FIG. 2 is an overall network architecture diagram of an embodiment of the present invention;
FIG. 3 is a block diagram of a teacher network in accordance with an embodiment of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The specific embodiments described herein merely illustrate the invention and do not limit its scope.
Referring to fig. 1,2 and 3, the invention discloses an image defogging method based on contrast learning and knowledge distillation, which comprises the following steps:
s1, acquiring image data, namely acquiring a fog image and a fog-free image which are paired in the same scene as training data;
s2, constructing an image defogging model using a knowledge distillation network, wherein the model is divided into a teacher network T and a student network S; the teacher network has an encoder-decoder structure, and the decoder of the trained teacher network is used as the student network;
s3, in the training stage, a teacher network is trained by using the fog-free images, and important knowledge of the fog-free images is obtained through training;
s4, in the training stage, the student network is trained by using the foggy images, the trained decoder in the teacher network is used as an initialization network of the student network, and the student network is further trained by using the foggy images;
s5, establishing contrast loss by using a contrast learning strategy, and enabling the defogged images output by the network to be closer to natural fog-free images and far away from fog images through the contrast loss in the teacher network and student network training stages;
and S6, in the testing stage, inputting the foggy image into a student network for image defogging, and outputting a fogless image.
Acquiring image data: in this embodiment, a fog-free image of a scene is acquired, and a foggy image corresponding to the fog-free image is generated according to the atmospheric scattering model, whose specific formula is as follows:
I(x)=J(x)t(x)+A(x)(1-t(x)),
wherein x represents a pixel, I(x) represents the foggy image, J(x) represents the fog-free image, t(x) represents the transmittance map, and A(x) represents the atmospheric light map.
In the present embodiment, the image pairs of fog-free and foggy images obtained by the above process serve as the data set for training the image defogging model. The data set is split into a training set and a test set at a ratio of 5:2.
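The 5:2 split described above might be implemented as follows; the function name, the shuffle seed, and the toy file names are illustrative assumptions, not part of the patent.

```python
import random

def split_dataset(pairs, train_parts=5, test_parts=2, seed=0):
    """Shuffle (foggy, fog-free) image pairs and split them 5:2 into
    training and test sets."""
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)  # fixed seed for a reproducible split
    n_train = len(pairs) * train_parts // (train_parts + test_parts)
    return pairs[:n_train], pairs[n_train:]

# Hypothetical file names for 14 paired images -> 10 train / 4 test.
pairs = [(f"hazy_{i}.png", f"clear_{i}.png") for i in range(14)]
train_set, test_set = split_dataset(pairs)
```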
In this embodiment, the teacher network T consists of an encoder and a decoder. The encoder is a convolutional layer followed by three combination blocks connected in series, each combination block consisting of a dense block and a transition block connected in series. The dense block is formed by connecting a plurality of dense layers in series; each dense layer consists of a batch normalization layer, a ReLU activation layer, a convolutional layer with a 1 × 1 kernel, stride 1 and no padding, a batch normalization layer, a ReLU activation layer, and a convolutional layer with a 3 × 3 kernel, stride 1 and padding 1, all connected in series. A residual connection exists between the i-th (i = 1, 2, …, 5) dense layer and the (i−n)-th (n = 1, 2, …, i−1) dense layer; the residual is the difference between the output and the input of a dense layer, and a residual connection takes the sum of the residual and the input of the previous dense layer as the input of the next dense layer. This dense residual design captures feature information to the maximum extent while avoiding information redundancy. The transition block consists of a batch normalization layer, a ReLU activation layer, a convolutional layer and a pooling layer connected in series. The decoder is three combination blocks connected in series, each consisting of a dense block and a transition block in series; the dense block has the same structure as in the encoder, while the transition block consists of a batch normalization layer, a ReLU activation layer and a deconvolution layer with a 1 × 1 kernel, stride 1 and no padding, all connected in series. The whole network ends with a convolutional layer with a 3 × 3 kernel, stride 1 and padding 1, followed by a ReLU activation layer. The student network S consists of a decoder with the same structure as the decoder in the teacher network, and its initialization parameters are the trained parameters of the teacher network's decoder.
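The dense layer described above (BN → ReLU → 1 × 1 conv → BN → ReLU → 3 × 3 conv, with residual connections) can be sketched in PyTorch roughly as follows. For simplicity this sketch adds each layer's residual only to its own input rather than to all preceding layers, so it approximates rather than reproduces the full dense residual connectivity of the text; the channel count is an illustrative assumption.

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """BN -> ReLU -> 1x1 conv (no padding) -> BN -> ReLU -> 3x3 conv (padding 1)."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, kernel_size=1, stride=1, padding=0),
            nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, kernel_size=3, stride=1, padding=1),
        )

    def forward(self, x):
        # Residual connection: the layer's residual is summed with its input.
        return x + self.body(x)

class DenseBlock(nn.Module):
    """Five dense layers in series (simplified residual wiring)."""
    def __init__(self, ch, n_layers=5):
        super().__init__()
        self.layers = nn.ModuleList(DenseLayer(ch) for _ in range(n_layers))

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

x = torch.rand(1, 8, 16, 16)
y = DenseBlock(8)(x)  # spatial size and channels are preserved
```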
In this embodiment, the network model is built under the PyTorch framework and trained on an NVIDIA RTX 2080 Ti graphics card. The teacher network is trained for 40 epochs and the student network for 100 epochs; both networks use the Adam optimizer with an initial learning rate of 0.0001.
In this embodiment, the teacher network and the student network use the same loss function, composed of an L1 loss and a contrast loss, with the specific formula:

L_total = λ1 L_1 + λ2 L_contrast,

wherein L_1 represents the L1 loss, L_contrast represents the contrast loss, and λ1 and λ2 are balance parameters set to 1.0 and 0.7, respectively.
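The combined objective is a straightforward weighted sum; a sketch with the balance parameters given in the text (λ1 = 1.0, λ2 = 0.7), where the function name is an illustrative assumption:

```python
def total_loss(l1_loss, contrast_loss, lam1=1.0, lam2=0.7):
    """L_total = lambda1 * L1 + lambda2 * L_contrast."""
    return lam1 * l1_loss + lam2 * contrast_loss

# Example with hypothetical per-batch loss values.
loss = total_loss(0.5, 0.2)  # 1.0 * 0.5 + 0.7 * 0.2 = 0.64
```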
The L1 loss formula is as follows:

L_1 = ||Ĵ(x) - J(x)||_1,

wherein x represents a pixel, J(x) represents the fog-free image serving as the positive sample, and Ĵ(x) represents the fog-free image reconstructed by the network; when Ĵ(x) takes J_t(x) it represents the fog-free image reconstructed by the teacher network, and when Ĵ(x) takes J_s(x) it represents the fog-free image reconstructed by the student network; ||·||_1 represents the L1 distance.
The contrast loss formula is as follows:

L_contrast = ||Ĵ(x) - J(x)||_1 / ||Ĵ(x) - I(x)||_1,

wherein x represents a pixel, J(x) represents the fog-free image serving as the positive sample, I(x) represents the foggy image serving as the negative sample, and Ĵ(x) represents the fog-free image reconstructed by the network; when Ĵ(x) takes J_t(x) it represents the fog-free image reconstructed by the teacher network, and when Ĵ(x) takes J_s(x) it represents the fog-free image reconstructed by the student network; ||·||_1 represents the L1 distance.
The above description only illustrates preferred embodiments of the present invention and should not be taken as limiting its scope; it will be apparent to those skilled in the art that various modifications and variations can be made without departing from the spirit and scope of the invention.

Claims (5)

1. An image defogging method based on contrast learning and knowledge distillation, comprising the following steps:
s1, acquiring image data, namely acquiring a fog image and a fog-free image which are paired in the same scene as training data;
s2, constructing an image defogging model using a knowledge distillation network, wherein the model is divided into a teacher network T and a student network S; the teacher network has an encoder-decoder structure, and the decoder of the trained teacher network is used as the student network;
s3, in the training stage, a teacher network is trained by using the fog-free images, and important knowledge of the fog-free images is obtained through training;
s4, in the training stage, the student network is trained by using the foggy images, the trained decoder in the teacher network is used as an initialization network of the student network, and the student network is further trained by using the foggy images;
s5, establishing contrast loss by using a contrast learning strategy, and enabling the defogged images output by the network to be closer to natural fog-free images and far away from fog images through the contrast loss in the teacher network and student network training stages;
and S6, in the testing stage, inputting the foggy image into a student network for image defogging, and outputting a fogless image.
2. The image defogging method based on contrast learning and knowledge distillation according to claim 1, wherein step S1 specifically comprises:
acquiring a fog-free image of a scene, and generating a fog image corresponding to the fog-free image according to an atmospheric scattering model, wherein the atmospheric scattering model has the specific formula as follows:
I(x)=J(x)t(x)+A(x)(1-t(x)),
wherein x represents a pixel, I(x) represents the foggy image, J(x) represents the fog-free image, t(x) represents the transmittance map, and A(x) represents the atmospheric light map.
3. The image defogging method based on contrast learning and knowledge distillation according to claim 1, wherein the image defogging model in step S2 specifically comprises:
an encoder in the teacher network T is a convolutional layer firstly, and then is connected with combination blocks in series, wherein each combination block consists of dense blocks and transition blocks which are connected in series;
the dense block is formed by connecting a plurality of dense layers in series; each dense layer consists of a batch normalization layer, a ReLU activation layer, a convolutional layer, a batch normalization layer, a ReLU activation layer and a convolutional layer connected in series. Residual connections exist among the dense layers: the residual is the difference between the output and the input of a dense layer, and a residual connection takes the sum of the residual and the input of the previous dense layer as the input of the next dense layer. Through this combination of dense connectivity and residual connections, feature information can be captured to the maximum extent while avoiding information redundancy;
the transition block consists of a batch normalization layer, a ReLU activation layer, a convolution layer and a pooling layer which are connected in series;
the decoder in the teacher network T is a series of combination blocks, each consisting of a dense block and a transition block connected in series; the dense block has the same structure as the dense block in the encoder, while the transition block consists of a batch normalization layer, a ReLU activation layer and a deconvolution layer connected in series. The whole network ends with a convolutional layer and a ReLU activation layer connected in series;
the student network S consists of a decoder which is structurally identical to the decoder described in the teacher network T.
4. The image defogging method based on contrast learning and knowledge distillation according to claim 1, wherein step S3 specifically comprises:
the teacher network T first maps the fog-free image J(x) to a low-dimensional feature space and then reconstructs a fog-free image J_t(x) from the low-dimensional features. The student network S is an image defogging network that takes a foggy image I(x) as input and outputs a fog-free image J_s(x). After training with fog-free images, the decoder of the teacher network has the capability of reconstructing fog-free images.
5. The image defogging method based on contrast learning and knowledge distillation according to claim 1, wherein step S5 specifically comprises:
the contrast loss constructed using the contrastive learning strategy is as follows:

L_contrast = ||Ĵ(x) - J(x)||_1 / ||Ĵ(x) - I(x)||_1,

wherein x represents a pixel, J(x) represents the fog-free image serving as the positive sample, I(x) represents the foggy image serving as the negative sample, and Ĵ(x) represents the fog-free image reconstructed by the network; when Ĵ(x) takes J_t(x) it represents the fog-free image reconstructed by the teacher network, and when Ĵ(x) takes J_s(x) it represents the fog-free image reconstructed by the student network; ||·||_1 represents the L1 distance. Both the teacher network and student network training stages are supervised with the contrast loss, which, by minimizing the difference between the reconstructed image and the positive sample and maximizing the difference between the reconstructed image and the negative sample, brings the reconstructed image closer to the positive sample and pushes it away from the negative sample.
CN202110969454.XA 2021-08-23 2021-08-23 Image defogging method based on contrast learning and knowledge distillation Pending CN113744146A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110969454.XA CN113744146A (en) 2021-08-23 2021-08-23 Image defogging method based on contrast learning and knowledge distillation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110969454.XA CN113744146A (en) 2021-08-23 2021-08-23 Image defogging method based on contrast learning and knowledge distillation

Publications (1)

Publication Number Publication Date
CN113744146A 2021-12-03

Family

ID=78732391

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110969454.XA Pending CN113744146A (en) 2021-08-23 2021-08-23 Image defogging method based on contrast learning and knowledge distillation

Country Status (1)

Country Link
CN (1) CN113744146A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114565539A (en) * 2022-03-17 2022-05-31 中国人民解放军火箭军工程大学 Image defogging method based on online knowledge distillation
CN115601536A (en) * 2022-12-02 2023-01-13 荣耀终端有限公司(Cn) Image processing method and electronic equipment
CN116862784A (en) * 2023-06-09 2023-10-10 中国人民解放军火箭军工程大学 Single image defogging method based on multi-teacher knowledge distillation
CN118015431A (en) * 2024-04-03 2024-05-10 阿里巴巴(中国)有限公司 Image processing method, apparatus, storage medium, and program product
CN116862784B (en) * 2023-06-09 2024-06-04 中国人民解放军火箭军工程大学 Single image defogging method based on multi-teacher knowledge distillation

Citations (1)

Publication number Priority date Publication date Assignee Title
CN111681178A (en) * 2020-05-22 2020-09-18 厦门大学 Knowledge distillation-based image defogging method

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
CN111681178A (en) * 2020-05-22 2020-09-18 厦门大学 Knowledge distillation-based image defogging method

Non-Patent Citations (1)

Title
HAIYAN WU et al., "Contrastive Learning for Compact Single Image Dehazing", arXiv, 19 April 2021, pages 1-10.


Similar Documents

Publication Publication Date Title
CN113744146A (en) Image defogging method based on contrast learning and knowledge distillation
WO2021164429A1 (en) Image processing method, image processing apparatus, and device
CN111784602B (en) Method for generating countermeasure network for image restoration
CN110097519B (en) Dual-monitoring image defogging method, system, medium and device based on deep learning
CN111161360B (en) Image defogging method of end-to-end network based on Retinex theory
WO2021238420A1 (en) Image defogging method, terminal, and computer storage medium
CN110363727B (en) Image defogging method based on multi-scale dark channel prior cascade deep neural network
CN114820388B (en) Image defogging method based on codec structure
CN113066025B (en) Image defogging method based on incremental learning and feature and attention transfer
CN113762264A (en) Multi-encoder fused multispectral image semantic segmentation method
CN113160286A (en) Near-infrared and visible light image fusion method based on convolutional neural network
CN117237279A (en) Blind quality evaluation method and system for non-uniform distortion panoramic image
CN112785517B (en) Image defogging method and device based on high-resolution representation
CN111861939A (en) Single image defogging method based on unsupervised learning
CN111553856A (en) Image defogging method based on depth estimation assistance
CN114565539A (en) Image defogging method based on online knowledge distillation
CN111784699A (en) Method and device for carrying out target segmentation on three-dimensional point cloud data and terminal equipment
CN116703750A (en) Image defogging method and system based on edge attention and multi-order differential loss
CN110189262B (en) Image defogging method based on neural network and histogram matching
CN117036182A (en) Defogging method and system for single image
CN115496764A (en) Dense feature fusion-based foggy image semantic segmentation method
CN114549343A (en) Defogging method based on dual-branch residual error feature fusion
CN114612347A (en) Multi-module cascade underwater image enhancement method
CN113240589A (en) Image defogging method and system based on multi-scale feature fusion
CN112581396A (en) Reflection elimination method based on generation countermeasure network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination