CN111539886A - Defogging method based on multi-scale feature fusion - Google Patents

Defogging method based on multi-scale feature fusion

Info

Publication number
CN111539886A
CN111539886A (application CN202010318381.3A; granted as CN111539886B)
Authority
CN
China
Prior art keywords
features
scale
dff
module
decoder
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010318381.3A
Other languages
Chinese (zh)
Other versions
CN111539886B (en)
Inventor
王飞
王杰
董航
项蕾
张昕昳
郭宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University filed Critical Xian Jiaotong University
Priority to CN202010318381.3A priority Critical patent/CN111539886B/en
Publication of CN111539886A publication Critical patent/CN111539886A/en
Application granted granted Critical
Publication of CN111539886B publication Critical patent/CN111539886B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a defogging method based on multi-scale feature fusion, comprising the following steps: a multi-scale feature fusion defogging network is designed based on the U-shaped network architecture and the re-projection technique, and comprises an encoder G_Enc with multi-scale feature fusion (DFF) modules, a decoder G_Dec, and a feature restoration module G_Res composed of residual blocks. In the encoder G_Enc and the decoder G_Dec, the image features fused by the DFF module at each scale are directly connected, as inputs, to the DFF modules at all subsequent scales so as to fuse features of different scales; the fused features are then used for image defogging.

Description

Defogging method based on multi-scale feature fusion
Technical Field
The invention belongs to the field of computer vision and image processing, and relates to a defogging method based on multi-scale feature fusion.
Background
Image defogging is a classical image restoration problem: given a hazy image, the task is to estimate the underlying clear, haze-free image. The problem has received wide attention in the computer vision field because many high-level vision tasks (detection, recognition, etc.) require a defogged, clear scene as input. In computer vision and computer graphics, the atmospheric scattering model is widely used to describe the formation of hazy images; the hazing process is typically modeled as:
I(x)=T(x)J(x)+(1-T(x))A (1)
where I(x) represents the observed hazy image, J(x) the clear image, T(x) the medium transmission function, and A the global atmospheric light. The goal of image defogging is to recover T(x), J(x) and A from I(x).
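As a concrete illustration of Eq. (1), the hazing process can be simulated directly. The sketch below renders a hazy image from a clear one; the function name and the exponential transmission T(x) = exp(-beta*d(x)) with a depth map d are illustrative conventions, not details taken from the patent:

```python
import numpy as np

def synthesize_haze(J, depth, A=0.9, beta=1.0):
    """Render a hazy image I from a clear image J (H x W x 3, values in [0, 1])
    following I(x) = T(x) * J(x) + (1 - T(x)) * A."""
    T = np.exp(-beta * depth)   # transmission falls off with scene depth
    T = T[..., None]            # broadcast over the colour channels
    return T * J + (1.0 - T) * A

J = np.full((4, 4, 3), 0.2)     # dark clear scene
depth = np.ones((4, 4)) * 2.0   # constant depth -> uniform haze
I = synthesize_haze(J, depth)
# With T = exp(-2) ~ 0.135: every pixel becomes ~0.135*0.2 + 0.865*0.9 ~ 0.805
```

Recovering J(x) from I(x) amounts to inverting this forward model, which is ill-posed because T(x) and A are unknown.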
To overcome the fragile image priors and low running efficiency of traditional algorithms, deep-learning methods have been widely applied to computer vision tasks; in image defogging, neural networks built on the U-shaped network (U-Net) architecture have achieved good results. This architecture, however, has several limitations: the encoder G_Enc loses spatial information while down-sampling image features, and features of different scales in non-adjacent network layers lack sufficient interaction.
Although extracting and exploiting features at different scales within the U-Net framework improves overall network performance, the relationships among features of different scales are not effectively fused. Feature fusion has proven an effective means of improving network performance in many deep-learning architectures, and most networks fuse features through dense connections, feature concatenation, or weighted element-wise summation. However, most fusion modules combine features of the same scale from preceding layers, so they cannot address the fusion of different-scale features across non-adjacent layers in a U-Net. Some methods employ strided convolutional layers in an attempt to fuse features of different scales, but they merely concatenate features that have been up-/down-sampled to a common scale, and therefore fail to extract the useful information shared across scales. To further improve the U-Net architecture, a better solution to the fusion of different-scale features between non-adjacent network layers is needed.
Disclosure of Invention
The invention aims to overcome the above defects of the prior art by providing a defogging method based on multi-scale feature fusion that effectively improves the performance of the U-shaped network architecture and achieves image defogging.
To this end, the defogging method based on multi-scale feature fusion comprises the following steps:
A multi-scale feature fusion defogging network is designed based on the U-shaped network architecture and the re-projection technique. It comprises an encoder G_Enc equipped with multi-scale feature fusion (DFF) modules, a decoder G_Dec, and a feature restoration module G_Res composed of residual blocks. In both the encoder G_Enc and the decoder G_Dec, the image features fused by the DFF module at each scale are directly fed, as inputs, to the DFF modules at all subsequent scales, so that features of different scales are fused; the fused features are then used for image defogging.
The DFF module of the n-th layer of the encoder G_Enc, denoted F_DFF-E^n, fuses features of different scales in the encoder using the re-projection technique; its operation can be described as:
î_n = F_DFF-E^n(i_n, î_1, ..., î_(n-1)) (2)
where i_n denotes the feature of the n-th layer of the encoder G_Enc, î_n denotes the fused feature produced by the DFF module, L denotes the total number of layers (scales) of the encoder G_Enc, and î_1, ..., î_(n-1) denote the features of the first n-1 encoder layers after fusion by their DFF modules.
The DFF module of the n-th layer of the decoder G_Dec, denoted F_DFF-D^n, fuses features of different scales in the decoder using the re-projection technique; its operation can be described as:
ĵ_n = F_DFF-D^n(j_n, ĵ_(n+1), ..., ĵ_L) (3)
where j_n denotes the feature of the n-th layer of the decoder G_Dec, ĵ_n denotes the fused feature produced by the DFF module, L denotes the total number of layers (scales) of the decoder G_Dec, and ĵ_(n+1), ..., ĵ_L denote the L-n previously fused features produced by the DFF modules of the decoder.
The DFF module fuses features of different scales one by one using the re-projection technique. It first down-samples (or up-samples) the feature to be fused at the current scale to the scale of a previously fused feature, then up-samples (or down-samples) the difference between the two back to the original scale, and adds the result to the feature being fused. The DFF module F_DFF-E^n of the n-th encoder layer and the DFF module F_DFF-D^(L-n) of the (L-n)-th decoder layer have similar network structures, but with the positions of the down-sampling and up-sampling operations exchanged.
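The single fusion step just described (resample, take the difference, project it back, add) can be sketched with simple resampling operators. The operators and function names below are illustrative stand-ins for the strided (de)convolutions actually used by the DFF module, and the sign convention for the error follows the error-feedback reading of the step:

```python
import numpy as np

def down2(x):
    """2x average-pool downsampling (stand-in for a strided convolution)."""
    return 0.25 * (x[::2, ::2] + x[1::2, ::2] + x[::2, 1::2] + x[1::2, 1::2])

def up2(x):
    """2x nearest-neighbour upsampling (stand-in for a deconvolution)."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def fuse_pair(x, y):
    """One error-feedback fusion of a fine-scale feature x (H x W)
    with a previously fused coarse-scale feature y (H/2 x W/2)."""
    e = y - down2(x)   # error between the two, at the coarse scale
    return x + up2(e)  # project the correction back and add it

x = np.ones((4, 4))
y = np.zeros((2, 2))   # coarse feature disagrees with x everywhere
z = fuse_pair(x, y)
# down2(x) is 1 everywhere, so e = -1 and z = 0 everywhere:
# the fused feature is pulled toward the coarse observation.
```

With learned resampling operators in place of `down2`/`up2`, the module can decide how much of the coarse-scale evidence to inject rather than overwriting the fine feature wholesale.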
The invention has the following beneficial effects:
when the defogging network with the multi-scale feature fusion designed based on the framework of the U-shaped network and the reprojection technology is specifically operated, the DFF module can effectively correct the encoder GEncUnder sampling image features and decoder GDecInformation lost in the process of up-sampling image features is fused with different scale features between non-adjacent network layers to improve the performance of the multi-scale network. Compared with other sampling and cascading fusion methods, the DFF module can better extract high-frequency information in the image features from high-resolution features of previous network layers due to a feedback mechanism, and spatial information lost by the image features can be corrected by gradually fusing the differences into potential features of down-sampling. In addition, the DFF module can utilize all the previously obtained image high-frequency characteristics and utilize an error correction feedback mechanism to perfectly fuse the image characteristics extracted by the current network layer so as to obtain a better image defogging effect.
Drawings
FIG. 1 is a block diagram of the present invention;
FIG. 2a is a schematic diagram of the DFF module in the encoder G_Enc;
FIG. 2b is a schematic diagram of the DFF module in the decoder G_Dec;
FIG. 3a is an image before defogging;
FIG. 3b is an image after defogging by the U-NET method;
FIG. 3c is an image after defogging according to the present invention.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings:
referring to fig. 1, the defogging method based on multi-scale feature fusion of the invention comprises the following steps:
Multi-scale networks designed on the U-shaped network (U-Net) architecture have inherent limitations: for example, the encoder G_Enc loses spatial information while down-sampling image features, and features of different scales in non-adjacent network layers lack sufficient interaction. The re-projection technique from the image super-resolution task is an effective remedy. It aims to minimize the error between an estimated high-resolution result Ĥ and multiple observed low-resolution inputs so as to reconstruct the generated high-resolution content; for the case of a single low-resolution input, an iterative re-projection algorithm was developed, which can be described as:
Ĥ^(t+1) = Ĥ^t + h(L_ob - f(Ĥ^t)) (4)
where Ĥ^t denotes the high-resolution output estimated at the t-th iteration, L_ob denotes the low-resolution image obtained by the down-sampling operation f, and h denotes the re-projection operation.
Building on the re-projection technique of the image super-resolution task, the invention proposes the DFF module, which effectively recovers the spatial information lost by the encoder G_Enc of the U-shaped architecture during down-sampling, better fuses different-scale features between non-adjacent network layers, and improves the performance of the multi-scale network. The DFF module further fuses the features of the current network layer through an error-feedback mechanism and is used in both the encoder G_Enc and the decoder G_Dec. In both, the image features fused by the DFF module at each scale are directly fed, as inputs, to the DFF modules at all subsequent scales to realize fusion across scales. The multi-scale feature fusion defogging network thus comprises the encoder G_Enc with multi-scale feature fusion (DFF) modules, the decoder G_Dec, and the feature restoration module G_Res composed of residual blocks.
FIG. 2a and FIG. 2b show the structures of the DFF modules in the encoder G_Enc and the decoder G_Dec of the U-shaped network, respectively. The DFF module of the n-th layer of the encoder G_Enc, denoted F_DFF-E^n, fuses features of different scales in the encoder using the re-projection technique; its operation can be described as:
î_n = F_DFF-E^n(i_n, î_1, ..., î_(n-1)) (2)
where i_n denotes the feature of the n-th layer of the encoder G_Enc, î_n denotes the fused feature produced by the DFF module, L denotes the total number of layers (scales) of the encoder G_Enc, and î_1, ..., î_(n-1) denote the features of the first n-1 encoder layers after fusion by their DFF modules.
The DFF module of the n-th layer of the decoder G_Dec, denoted F_DFF-D^n, fuses features of different scales in the decoder using the re-projection technique; its operation can be described as:
ĵ_n = F_DFF-D^n(j_n, ĵ_(n+1), ..., ĵ_L) (3)
where j_n denotes the feature of the n-th layer of the decoder G_Dec, ĵ_n denotes the fused feature produced by the DFF module, L denotes the total number of layers (scales) of the decoder G_Dec, and ĵ_(n+1), ..., ĵ_L denote the L-n previously fused features produced by the DFF modules of the decoder.
The DFF module first down-samples (or up-samples) the feature to be fused at the current scale to the scale of a previously fused feature, then up-samples (or down-samples) the difference between the two back to the original scale, and adds the result to the feature being fused. In the decoder G_Dec, the current feature j_n is progressively fused with the previously fused features ĵ_(L-t), t ∈ {0, 1, ..., L-n-1}, one at a time. The multi-scale fusion update in the decoder G_Dec can be defined as follows:
(1) Compute the difference e^t of the t-th iteration between the previously fused feature ĵ_(L-t) and the feature j_n^t being fused at this scale, down-sampled to the same dimensions:
e^t = ĵ_(L-t) - p_t(j_n^t) (5)
where p_t denotes the re-projection (down-sampling) operation that brings the feature j_n^t being fused at this scale to the same dimensions as the previously fused feature ĵ_(L-t).
(2) Update the feature being fused at this scale through a re-projection operation:
j_n^(t+1) = j_n^t + h_t(e^t) (6)
where h_t denotes the re-projection (up-sampling) operation that brings the difference e^t of the t-th iteration to the same dimensions as the feature j_n^t being fused at this scale.
(3) After all previously fused features at each scale have been absorbed, the fused feature of the current network layer is obtained as ĵ_n = j_n^(L-n).
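Steps (1)-(3) above can be sketched end-to-end for a toy decoder with L = 3 scales at layer n = 1, so that t ranges over {0, 1} and two coarser fused features are absorbed. The layer indexing, feature values, and resampling operators here are illustrative assumptions, not the patent's learned layers:

```python
import numpy as np

def down(x, times):
    """Repeated 2x average pooling, playing the role of p_t."""
    for _ in range(times):
        x = 0.25 * (x[::2, ::2] + x[1::2, ::2] + x[::2, 1::2] + x[1::2, 1::2])
    return x

def up(x, times):
    """Repeated 2x nearest-neighbour upsampling, playing the role of h_t."""
    for _ in range(times):
        x = np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)
    return x

L, n = 3, 1
j = np.zeros((8, 8))                  # current decoder feature j_n, full scale
fused = {3: np.full((2, 2), 4.0),     # previously fused feature ĵ_3 (coarsest)
         2: np.full((4, 4), 4.0)}     # previously fused feature ĵ_2

for t in range(L - n):                # t = 0 .. L-n-1
    y = fused[L - t]                  # ĵ_(L-t): the coarse "observation"
    scale = L - t - n                 # number of 2x steps between the scales
    e = y - down(j, scale)            # step (1): error at the coarse scale
    j = j + up(e, scale)              # step (2): re-project and add
j_hat = j                             # step (3): fused feature ĵ_n
```

Each pass corrects j_n against one coarser fused feature, so the final ĵ_n is consistent with every previously fused scale, which is the "dense" part of the fusion.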
In the multi-scale feature fusion defogging network, down-sampling and up-sampling of image features are implemented with convolutional and deconvolutional layers, respectively, having 3×3 kernels and stride 2. A group of residual blocks is used in the optimization unit and in each encoder layer; each residual block consists of three residual sub-modules, and each sub-module contains two convolutional layers with 3×3 kernels plus a path directly connecting input and output. The feature restoration module G_Res consists of 18 residual blocks. In addition, a DFF module is introduced at every network level of the decoder G_Dec and the encoder G_Enc; the DFF fusion modules of the decoder G_Dec and the encoder G_Enc are implemented with deconvolutional and convolutional layers having 4×4 kernels and stride 2.
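The stated kernel/stride choices can be checked with the standard output-size formulas for convolution and transposed convolution. The padding value of 1 is an assumption (the patent does not state it), but under that assumption it shows why the DFF module's 4×4 stride-2 pair gives an exact halving and doubling of spatial size:

```python
def conv_out(n, k, s, p):
    """Output size of a convolution: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * p - k) // s + 1

def deconv_out(n, k, s, p):
    """Output size of a transposed convolution: (n - 1) * s - 2p + k."""
    return (n - 1) * s - 2 * p + k

assert conv_out(256, k=3, s=2, p=1) == 128    # encoder 3x3 stride-2 halves
assert conv_out(256, k=4, s=2, p=1) == 128    # DFF 4x4 stride-2 also halves
assert deconv_out(128, k=4, s=2, p=1) == 256  # DFF deconv: exact 2x back
assert deconv_out(128, k=3, s=2, p=1) == 255  # a 3x3 deconv falls 1 short
```

The last line illustrates a common reason for preferring 4×4 kernels in stride-2 up-sampling layers: the round trip returns to exactly the original resolution without output padding.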
The multi-scale feature fusion defogging network is trained on a defogging training set with paired clear images, containing 4000 indoor image pairs (a hazy image and the corresponding clear image) and 9000 outdoor image pairs; the loss function used by the network is the mean squared error (MSE).
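The training objective is plain per-pixel mean squared error between the network output and the ground-truth clear image; a minimal sketch:

```python
import numpy as np

def mse_loss(pred, target):
    """Mean squared error, averaged over all pixels and channels."""
    return np.mean((pred - target) ** 2)

pred = np.array([[0.2, 0.4], [0.6, 0.8]])
target = np.array([[0.0, 0.4], [0.6, 1.0]])
# errors are +/-0.2 on two of four pixels: loss = (0.04 + 0.04) / 4 = 0.02
loss = mse_loss(pred, target)
```

In a real training loop this scalar would be backpropagated through the whole encoder-DFF-decoder stack.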
As can be seen from FIG. 3a, FIG. 3b and FIG. 3c, the image defogging effect of the invention is better than that of the existing U-NET network architecture.

Claims (5)

1. A defogging method based on multi-scale feature fusion is characterized by comprising the following steps:
the defogging network with the fusion of the multi-scale features is designed based on the framework of the U-shaped network and the reprojection technology, and comprises an encoder G with a multi-scale feature fusion moduleEncAnd a decoder GDecAnd a feature restoration module G consisting of residual blocksResWherein the encoder GEncAnd a decoder GDecAnd the image features fused by the DFF module at each scale are connected to the DFF modules at all subsequent scales as input so as to realize the fusion of the features at different scales, and the fused features are utilized to carry out image defogging operation.
2. The defogging method based on multi-scale feature fusion according to claim 1, wherein the DFF module F_DFF-E^n of the n-th layer of the encoder G_Enc fuses features of different scales in the encoder using the re-projection technique, namely:
î_n = F_DFF-E^n(i_n, î_1, ..., î_(n-1))
where i_n denotes the feature of the n-th layer of the encoder G_Enc, î_n denotes the fused feature produced by the DFF module, L denotes the total number of layers of the encoder G_Enc, and î_1, ..., î_(n-1) denote the features of the first n-1 encoder layers after fusion by their DFF modules.
3. The defogging method based on multi-scale feature fusion according to claim 1, wherein the DFF module F_DFF-D^n of the n-th layer of the decoder G_Dec fuses features of different scales in the decoder using the re-projection technique, namely:
ĵ_n = F_DFF-D^n(j_n, ĵ_(n+1), ..., ĵ_L)
where j_n denotes the feature of the n-th layer of the decoder G_Dec, ĵ_n denotes the fused feature produced by the DFF module, L denotes the total number of layers of the decoder G_Dec, and ĵ_(n+1), ..., ĵ_L denote the L-n previously fused features produced by the DFF modules of the decoder.
4. The defogging method based on multi-scale feature fusion according to claim 1, wherein the DFF module fuses features of different scales one by one using the re-projection technique: it first down-samples or up-samples the feature to be fused at the current scale to the scale of a previously fused feature, then up-samples or down-samples the difference between the two back to the original scale, and adds the result to the feature being fused.
5. The defogging method based on multi-scale feature fusion according to claim 1, wherein the DFF module F_DFF-E^n of the n-th layer of the encoder G_Enc and the DFF module F_DFF-D^(L-n) of the (L-n)-th layer of the decoder G_Dec have similar network structures, with the positions of the down-sampling and up-sampling operations exchanged.
CN202010318381.3A 2020-04-21 2020-04-21 Defogging method based on multi-scale feature fusion Active CN111539886B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010318381.3A CN111539886B (en) 2020-04-21 2020-04-21 Defogging method based on multi-scale feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010318381.3A CN111539886B (en) 2020-04-21 2020-04-21 Defogging method based on multi-scale feature fusion

Publications (2)

Publication Number Publication Date
CN111539886A true CN111539886A (en) 2020-08-14
CN111539886B CN111539886B (en) 2023-01-03

Family

ID=71980000

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010318381.3A Active CN111539886B (en) 2020-04-21 2020-04-21 Defogging method based on multi-scale feature fusion

Country Status (1)

Country Link
CN (1) CN111539886B (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200082219A1 (en) * 2018-09-07 2020-03-12 Toyota Research Institute, Inc. Fusing predictions for end-to-end panoptic segmentation
CN110782399A (en) * 2019-08-22 2020-02-11 天津大学 Image deblurring method based on multitask CNN
CN110570371A (en) * 2019-08-28 2019-12-13 天津大学 image defogging method based on multi-scale residual error learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
TONG CUI et al., "Multi-scale Densely Connected Dehazing Network", International Conference on Intelligent Robotics and Applications *
CHEN Yong et al., "Single Image Dehazing Method Based on Multi-Scale Convolutional Neural Network", Acta Optica Sinica *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112150395A (en) * 2020-10-15 2020-12-29 山东工商学院 Encoder-decoder network image defogging method combining residual block and dense block
CN113034445A (en) * 2021-03-08 2021-06-25 桂林电子科技大学 Multi-scale connection image defogging algorithm based on UNet3+
CN113034445B (en) * 2021-03-08 2022-11-11 桂林电子科技大学 Multi-scale connection image defogging algorithm based on UNet3+
CN113034413A (en) * 2021-03-22 2021-06-25 西安邮电大学 Low-illumination image enhancement method based on multi-scale fusion residual error codec
CN113034413B (en) * 2021-03-22 2024-03-05 西安邮电大学 Low-illumination image enhancement method based on multi-scale fusion residual error coder-decoder
CN112967272A (en) * 2021-03-25 2021-06-15 郑州大学 Welding defect detection method and device based on improved U-net and terminal equipment
CN112967272B (en) * 2021-03-25 2023-08-22 郑州大学 Welding defect detection method and device based on improved U-net and terminal equipment
CN113240589A (en) * 2021-04-01 2021-08-10 重庆兆光科技股份有限公司 Image defogging method and system based on multi-scale feature fusion
WO2023046136A1 (en) * 2021-09-27 2023-03-30 北京字跳网络技术有限公司 Feature fusion method, image defogging method and device
CN116416248A (en) * 2023-06-08 2023-07-11 杭州华得森生物技术有限公司 Intelligent analysis system and method based on fluorescence microscope

Also Published As

Publication number Publication date
CN111539886B (en) 2023-01-03

Similar Documents

Publication Publication Date Title
CN111539886B (en) Defogging method based on multi-scale feature fusion
CN109345449B (en) Image super-resolution and non-uniform blur removing method based on fusion network
CN110120011B (en) Video super-resolution method based on convolutional neural network and mixed resolution
CN108596841B (en) Method for realizing image super-resolution and deblurring in parallel
CN112508083B (en) Image rain and fog removing method based on unsupervised attention mechanism
CN110349087B (en) RGB-D image high-quality grid generation method based on adaptive convolution
CN104657962B (en) The Image Super-resolution Reconstruction method returned based on cascading linear
CN109034198B (en) Scene segmentation method and system based on feature map recovery
CN109410144A (en) A kind of end-to-end image defogging processing method based on deep learning
CN116777764A (en) Diffusion model-based cloud and mist removing method and system for optical remote sensing image
CN112241939A (en) Light-weight rain removing method based on multi-scale and non-local
CN111539885B (en) Image enhancement defogging method based on multi-scale network
CN112200752A (en) Multi-frame image deblurring system and method based on ER network
CN116721033A (en) Single image defogging method based on random mask convolution and attention mechanism
CN111986121A (en) Based on Framellet l0Norm-constrained fuzzy image non-blind restoration method
CN115578638A (en) Method for constructing multi-level feature interactive defogging network based on U-Net
CN116433516A (en) Low-illumination image denoising and enhancing method based on attention mechanism
CN115187775A (en) Semantic segmentation method and device for remote sensing image
CN113256528B (en) Low-illumination video enhancement method based on multi-scale cascade depth residual error network
CN113450267B (en) Transfer learning method capable of rapidly acquiring multiple natural degradation image restoration models
CN115496764A (en) Dense feature fusion-based foggy image semantic segmentation method
CN115660984A (en) Image high-definition restoration method and device and storage medium
Wang et al. A CBAM‐GAN‐based method for super‐resolution reconstruction of remote sensing image
CN111046740B (en) Classification method for human action video based on full tensor cyclic neural network
CN113240589A (en) Image defogging method and system based on multi-scale feature fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant