CN113554567B - Robust ghost-removing system and method based on wavelet transformation

Robust ghost-removing system and method based on wavelet transformation

Info

Publication number
CN113554567B
CN113554567B (application CN202110865456.4A)
Authority
CN
China
Prior art keywords
ghost
mapping
input
feature
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110865456.4A
Other languages
Chinese (zh)
Other versions
CN113554567A (en)
Inventor
潘潇恺
郑博仑
颜成钢
孙垚棋
张继勇
李宗鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN202110865456.4A
Publication of CN113554567A
Application granted
Publication of CN113554567B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/80 Geometric correction
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/048 Activation functions
    • G06N 3/08 Learning methods
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20048 Transform domain processing
    • G06T 2207/20064 Wavelet transform [DWT]
    • G06T 2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Complex Calculations (AREA)

Abstract

The invention discloses a robust ghost-removing system and method based on wavelet transformation. Multi-scale feature maps are extracted from the low dynamic range (LDR) image inputs, and corresponding spatial attention maps are generated to guide the non-reference images in identifying misaligned portions, thereby mitigating ghost effects in the merge stage. With the help of the spatial attention maps, the fusion module fuses the feature maps of the LDR images and finally reconstructs the high dynamic range output from the different scales. The invention provides a novel cross-transform-domain learning architecture built on two transforms, the discrete wavelet transform and the discrete cosine transform: the discrete wavelet transform first decomposes the features into different frequency components, and a learnable band-pass filter based on the discrete cosine transform is then introduced to generate local features consistent with the decomposed components.

Description

Robust ghost-removing system and method based on wavelet transformation
Technical Field
The invention belongs to the technical field of image processing and relates to a ghost-removing method based on wavelet transformation, in particular to a ghost-removing method using a learnable band-pass filter in the discrete cosine transform domain.
Background
Reconstructing a High Dynamic Range (HDR) scene from a sensor with limited dynamic range is a fundamental problem in signal processing and computer vision. To overcome the limitations of the sensor, a common approach is to capture multiple Low Dynamic Range (LDR) images at different exposure times and then merge them into an HDR image. Estimating the camera response function and tone-mapping the LDR image is another way of obtaining an HDR image. While these algorithms certainly produce visually satisfactory results, the scene information in underexposed or overexposed areas is never recovered. In recent years, multi-exposure image fusion has been well studied and widely used in commercial applications. These methods typically designate one LDR image as the reference and compensate for it with information from the other LDR images. However, they always require that the camera and scene be completely static; otherwise they visibly suffer from ghost effects, which greatly limits their application in dynamic scenes. Misalignment is the major challenge in merging multiple exposure images. Recent research has focused mainly on fusion-based and motion-compensation-based methods. Fusion-based methods reject misaligned pixel blocks from the non-reference images and reconstruct the HDR image using the remaining non-reference pixel blocks and the intensity-rendered reference pixel blocks. Since rejection is determined only by local pixel blocks and the consistency of neighboring pixel blocks is ignored, errors always occur around the overexposed and underexposed areas of the reference image. Motion-compensation-based methods perform pixel-level or patch-level alignment between the reference image and the other exposed images before merging. However, alignment is always affected by brightness gaps and unpredictable noise caused by the different exposure times. Furthermore, the substantial increase in image resolution also makes pixel-level alignment computationally burdensome.
With the great success of deep neural networks, several approaches based on them have been proposed to address HDR imaging in dynamic scenes. Kalantari et al. [1] propose a shallow convolutional neural network that adaptively estimates the alignment and merging coefficients of differently exposed images. However, shallow CNNs cannot handle complex motions and inevitably suffer from estimation errors in the optical flow and merging coefficients. Yan et al. [2] propose an attention mechanism that guides the CNN to learn the features of a dynamic scene that matter for the ghost effect. While attention modules based on convolutional neural networks can suppress misaligned content more precisely than traditional rejection strategies, their limited receptive fields restrict performance. Lee et al. [3] introduced a network to pre-register the input images, reducing the errors that occur in motion compensation. This pre-registration requires an additional training dataset, and its efficiency becomes quite low as more input images are added.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a robust ghost-removing system and method based on wavelet transformation.
In this work, we focus on the consistency of local structural representations in the attention and fusion stages and propose a new cross-transform-domain learning architecture. The method mainly involves two transforms, the Discrete Wavelet Transform (DWT) and the Discrete Cosine Transform (DCT): the discrete wavelet transform first decomposes the features into different frequency components, and a learnable band-pass filter based on the discrete cosine transform is then introduced to generate local features consistent with the decomposed components. In addition, multi-scale fusion attention, multi-scale reconstruction, and a structure loss function are proposed to construct the network.
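As a concrete illustration of this idea, the following is a minimal PyTorch sketch of a learnable band-pass filter operating in the DCT domain. The class name LearnableBandPassFilter, the orthonormal DCT-II matrix construction, and the per-frequency learnable gain initialized as all-pass are illustrative assumptions, not the exact implementation of the invention.

import math
import torch
import torch.nn as nn

def dct_matrix(n: int) -> torch.Tensor:
    # Orthonormal DCT-II basis matrix of size n x n.
    k = torch.arange(n, dtype=torch.float32)
    basis = torch.cos(math.pi * (k[None, :] + 0.5) * k[:, None] / n)
    basis[0] /= math.sqrt(2.0)
    return basis * math.sqrt(2.0 / n)

class LearnableBandPassFilter(nn.Module):
    # Transform a feature map to the DCT domain, apply a learnable
    # per-frequency gain (the band-pass weighting), and transform back.
    def __init__(self, size: int):
        super().__init__()
        self.register_buffer("dct", dct_matrix(size))
        self.mask = nn.Parameter(torch.ones(size, size))  # all-pass init

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map with H == W == size
        freq = self.dct @ x @ self.dct.t()     # 2D DCT
        freq = freq * self.mask                # learnable band-pass gains
        return self.dct.t() @ freq @ self.dct  # inverse 2D DCT

lbf = LearnableBandPassFilter(32)
y = lbf(torch.randn(2, 8, 32, 32))             # output keeps the input shape

In the ghost elimination module described below, four such filters are applied side by side and their outputs are concatenated along the channel dimension.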
The realization steps are as follows:
A robust ghost-removing system based on wavelet transformation includes an attention module and a fusion module.
The attention module comprises a decoding sub-network and an attention extraction module: the input picture is processed by the decoding sub-network to obtain output feature maps at four scales, and the attention extraction module then performs attention extraction on these feature maps to obtain the final output feature maps.
The fusion module comprises a ghost elimination module and a transposed convolution: ghost correction is first performed on the final output feature maps from the attention module by the ghost elimination module, and the feature maps are then up-sampled to the original resolution through the transposed convolution to obtain the high dynamic range image.
Attention extraction module:
(1) Let f1^nr and f3^nr denote the non-reference feature maps and f2^r the reference feature map. Taking the 3 feature maps as input, each non-reference map is concatenated with the reference map along the channel dimension and passed through a 3x3 convolution layer and a Sigmoid activation function to obtain the attention maps A1 and A3.
(2) A1 and A3 are respectively point-wise multiplied with f1^nr and f3^nr and combined in the proportions 1-alpha and alpha, where alpha is a hyperparameter determined before running, to obtain the feature maps f̂1^nr and f̂3^nr.
(3) f̂1^nr and f̂3^nr are concatenated with the reference input feature map f2^r along the channel dimension to obtain the final output feature map (see the sketch below).
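A minimal PyTorch sketch of steps (1)-(3) follows. The channel width, the shared attention branch for both non-reference inputs, and the exact way alpha blends the attended features with the originals are our assumptions based on the description above.

import torch
import torch.nn as nn

class AttentionExtraction(nn.Module):
    def __init__(self, channels: int, alpha: float = 0.5):
        super().__init__()
        self.alpha = alpha  # hyperparameter determined before running
        self.att = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, f1_nr, f2_r, f3_nr):
        # (1) concatenate each non-reference map with the reference map
        # along the channel axis; a 3x3 conv + Sigmoid yields A1 and A3
        a1 = self.att(torch.cat([f1_nr, f2_r], dim=1))
        a3 = self.att(torch.cat([f3_nr, f2_r], dim=1))
        # (2) point-wise multiply and blend in proportions alpha / 1-alpha
        f1_hat = (1 - self.alpha) * f1_nr + self.alpha * a1 * f1_nr
        f3_hat = (1 - self.alpha) * f3_nr + self.alpha * a3 * f3_nr
        # (3) concatenate with the reference feature map along channels
        return torch.cat([f1_hat, f2_r, f3_hat], dim=1)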
Ghost elimination modules (DPBs):
(1) Wavelet transformation is carried out on the input feature map to obtain the feature map F_shallow.
(2) The feature map F_shallow is passed through a 3x3 convolution layer and a LeakyReLU activation function, and then input to the dense block. The dense block consists of n dilated convolutions and a LeakyReLU activation function.
(3) The feature map output by the dense block is input to 4 learnable band-pass filters (LBF); the outputs of the 4 filters are concatenated along the channel dimension and then added to F_shallow to obtain the output feature map F_deep (a sketch of the whole module is given below).
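A minimal PyTorch sketch of the module is shown below. The single-level Haar split, the number of dense-block layers, the dilation schedule, and the 1x1 convolutions standing in for the four learnable band-pass filters (see the LBF sketch above) are illustrative assumptions.

import torch
import torch.nn as nn

def haar_dwt(x: torch.Tensor) -> torch.Tensor:
    # Single-level 2D Haar DWT (one common convention); H and W must be
    # even. Returns LL, LH, HL, HH stacked along the channel dimension.
    a = x[:, :, 0::2, 0::2]
    b = x[:, :, 0::2, 1::2]
    c = x[:, :, 1::2, 0::2]
    d = x[:, :, 1::2, 1::2]
    ll, lh = (a + b + c + d) / 2, (a - b + c - d) / 2
    hl, hh = (a + b - c - d) / 2, (a - b - c + d) / 2
    return torch.cat([ll, lh, hl, hh], dim=1)

class DPB(nn.Module):
    def __init__(self, in_channels: int, n_dense: int = 3):
        super().__init__()
        w = 4 * in_channels  # channel count after the Haar split
        self.head = nn.Sequential(
            nn.Conv2d(w, w, 3, padding=1), nn.LeakyReLU(0.2))
        # dense block: n dilated convolutions, densely connected
        self.dense = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(w * (i + 1), w, 3, padding=2 ** i, dilation=2 ** i),
                nn.LeakyReLU(0.2))
            for i in range(n_dense))
        # four learnable band-pass filters (1x1 conv placeholders here)
        self.lbfs = nn.ModuleList(
            nn.Conv2d(w, w // 4, 1) for _ in range(4))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f_shallow = haar_dwt(x)                  # step (1)
        feats = [self.head(f_shallow)]           # step (2)
        for layer in self.dense:
            feats.append(layer(torch.cat(feats, dim=1)))
        # step (3): four LBFs, concatenated, plus the shallow skip
        out = torch.cat([lbf(feats[-1]) for lbf in self.lbfs], dim=1)
        return out + f_shallow                   # F_deep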
Decoding sub-network:
the input picture is subjected to 3 times of downsampling to obtain 4 feature maps with different scales, and then the feature maps are input into a 3x3 convolution layer and a LeakyReLU activation function to obtain four-scale output feature maps.
A robust ghost-removing method based on wavelet transformation comprises the following steps:
Step (1): the three low dynamic range images with different exposures are input into the decoding sub-network to obtain 4 groups of feature maps at different scales.
Step (2): the 4 groups of feature maps are respectively input into the attention extraction module to obtain 4 final output feature maps.
Step (3): the 4 feature maps are input into the ghost elimination modules (DPBs), then pass through 0, 1, 2 and 3 groups of 4x4 deconvolution layers with LeakyReLU activation functions respectively, then each passes through a 3x3 convolution layer, and the results are added to obtain a feature map; finally, the final output high dynamic range image is obtained through a Sigmoid activation function (a sketch of this reconstruction follows).
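The multi-scale reconstruction of step (3) can be sketched as follows; the channel widths and the stride-2 transposed-convolution geometry (4x4 kernel, padding 1) are assumptions chosen so that each branch returns exactly to the original scale.

import torch
import torch.nn as nn

class MultiScaleReconstruction(nn.Module):
    def __init__(self, channels: int = 64, out_channels: int = 3):
        super().__init__()
        self.branches = nn.ModuleList()
        for s in range(4):  # scales 1, 1/2, 1/4, 1/8
            layers = []
            for _ in range(s):  # s deconvolutions restore scale 1/2**s
                layers += [nn.ConvTranspose2d(channels, channels, 4,
                                              stride=2, padding=1),
                           nn.LeakyReLU(0.2)]
            layers.append(nn.Conv2d(channels, out_channels, 3, padding=1))
            self.branches.append(nn.Sequential(*layers))

    def forward(self, feats):
        # feats: the 4 DPB outputs at scales 1, 1/2, 1/4 and 1/8
        summed = sum(branch(f) for branch, f in zip(self.branches, feats))
        return torch.sigmoid(summed)  # final high dynamic range image

recon = MultiScaleReconstruction()
feats = [torch.randn(1, 64, 128 // 2 ** s, 128 // 2 ** s) for s in range(4)]
hdr = recon(feats)  # (1, 3, 128, 128)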
The invention has the following beneficial effects:
the advantages are as follows: a multi-scale feature map is extracted from the low dynamic range image input and a corresponding spatial attention map is generated to guide the reference-free image to identify misaligned portions, thereby mitigating the ghost effect of the merge stage. With the help of the spatial attention map, the fusion module fuses the feature map of the LDR image and finally reconstructs the high dynamic range output from different scales.
The advantages are as follows: the attention module first extracts feature maps through different scales using the decoding subnetwork. Because the decoding subnetwork operates as a basic feature extractor, the weights of the encoding network are fully shared for the three inputs. The attention extraction module then extracts the spatial attention map step by step from the highest scale to the original scale.
The method has the following advantages: the fusion module takes as input the multi-scale feature map output by the attention module. A ghost cancellation module is first introduced for correcting ghost artifacts present in the input, which is built by stacking the remaining residual blocks in different scales. And then the corrected feature map is up-sampled to the original proportion through transposed convolution, and finally a high dynamic range image without double images is output.
Drawings
Fig. 1: robust ghost-removing system structure schematic diagram based on wavelet transformation;
fig. 2: ghost elimination modules (DPBs) are shown in the schematic;
fig. 3: decoding a sub-network structure schematic diagram;
fig. 4: comparison result graphs of the embodiment of the invention and other related works.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The terms used in the invention are first defined and described below:
feature mapping: feature mapping is used to make nonlinear regression complex attributes. The original input value matrix is expanded into a multi-expansion form through circulation. Doing so can obtain a more complex, rational objective function, as opposed to linear regression.
LeakyReLU activation function: a variant of the classical ReLU activation function whose output has a small non-zero slope for negative inputs. Because the derivative is never exactly zero, dead neurons occur less often and gradient-based learning can proceed, solving the problem that neurons stop learning once the ReLU input falls into the negative interval.
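In code, the function is simply (the 0.01 slope is a common default):

import numpy as np

def leaky_relu(x: np.ndarray, slope: float = 0.01) -> np.ndarray:
    # identity for positive inputs, a small non-zero slope otherwise
    return np.where(x > 0, x, slope * x)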
L1 loss: L1 loss is also referred to as the mean absolute error (MAE); it is the average of the absolute differences between the model prediction f(x) and the true value y.
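A direct implementation:

import numpy as np

def l1_loss(pred: np.ndarray, target: np.ndarray) -> float:
    # mean absolute error between predictions and ground truth
    return float(np.mean(np.abs(pred - target)))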
Residual block: dense blocks are composed of convolution layers of 1*1 and the LeakyReLU activation function, and dense local features can be fully obtained. Hierarchical information of all convolution layers is fully utilized, the convolution features with the instruction are extracted through densely connected convolution layers, more effective features are learned from the previous local features and the current local features, and the network training is stable and wider.
Discrete cosine transform: the discrete cosine transform, particularly its second type (DCT-II), is often used in signal and image processing for lossy compression of signals and images, including still and moving images. This is due to its strong "energy concentration" property: for most natural signals (including sound and images), the energy is concentrated in the low-frequency part after the transform. Moreover, when the signal has statistical properties close to a Markov process, the decorrelation performance of the discrete cosine transform approaches that of the KL transform.
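The energy-concentration property is easy to verify numerically; the smooth test image below is an illustrative choice:

import numpy as np
from scipy.fft import dctn

img = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))  # smooth ramp
coeffs = dctn(img, norm='ortho')
low = np.sum(coeffs[:8, :8] ** 2)   # energy in the 8x8 low-frequency corner
total = np.sum(coeffs ** 2)
print(f'low-frequency energy share: {low / total:.4f}')  # close to 1.0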
Wavelet transformation: orthogonal wavelet decomposition has a time-frequency localization capability. When a signal is decomposed, the useful signal produces wavelet components of large amplitude, whereas noise spreads roughly uniformly over the high-frequency part, in clear contrast. After wavelet decomposition, most of the wavelet coefficients with large amplitudes therefore correspond to the useful signal, while coefficients with small amplitudes are generally noise; that is, the wavelet transform coefficients of the useful signal can be considered larger than those of the noise. Threshold denoising finds a suitable threshold, keeps the wavelet coefficients larger than the threshold, shrinks or discards the coefficients smaller than the threshold, and reconstructs the useful signal from the processed wavelet coefficients.
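A minimal sketch of wavelet threshold denoising using PyWavelets; the db4 wavelet, the noise estimate from the finest sub-band, and the universal threshold are conventional illustrative choices:

import numpy as np
import pywt

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 8 * np.pi, 1024))
noisy = clean + 0.3 * rng.standard_normal(1024)

coeffs = pywt.wavedec(noisy, 'db4', level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise level estimate
thr = sigma * np.sqrt(2 * np.log(len(noisy)))    # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft')
                        for c in coeffs[1:]]     # keep large, shrink small
denoised = pywt.waverec(coeffs, 'db4')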
PSNR: peak signal-to-noise ratio, an index for measuring image quality; it is an important metric in fields such as image compression and super-resolution reconstruction.
SSIM: structural similarity. Its basic principle is that natural images are highly structured, i.e. there is a strong correlation between adjacent pixels, and this correlation carries the structural information of objects in the scene.
HDR-VDP-2: first, it is applicable over the full range of luminance, i.e. it can evaluate HDR quality; second, it predicts visibility and quality separately, two criteria that serve different purposes and are not interchangeable; finally, HDR-VDP-2 has undergone rigorous testing and calibration to ensure high accuracy.
A robust ghost-removing system based on wavelet transformation includes an attention module and a fusion module.
The attention module comprises a decoding sub-network and an attention extraction module: the input picture is processed by the decoding sub-network to obtain output feature maps at four scales, and the attention extraction module then performs attention extraction on these feature maps to obtain the final output feature maps. This module highlights features complementary to the reference image so as to exclude regions with motion or severe saturation.
The fusion module comprises a ghost elimination module, which corrects ghosts, and a transposed convolution, which up-samples the feature maps to the original resolution to obtain the high dynamic range image. The module combines shallow features with deep features and learns the residual features, eliminating ghosts more effectively.
Attention extraction module:
(1) Let f1^nr and f3^nr denote the non-reference feature maps and f2^r the reference feature map. Taking the 3 feature maps as input, each non-reference map is concatenated with the reference map along the channel dimension and passed through a 3x3 convolution layer and a Sigmoid activation function to obtain the attention maps A1 and A3.
(2) A1 and A3 are respectively point-wise multiplied with f1^nr and f3^nr and combined in the proportions 1-alpha and alpha, where alpha is a hyperparameter determined before running, to obtain the feature maps f̂1^nr and f̂3^nr.
(3) f̂1^nr and f̂3^nr are concatenated with the reference input feature map f2^r along the channel dimension to obtain the final output feature map.
Ghost elimination modules (DPBs):
(1) Wavelet transformation is performed on the input feature map to obtain the feature map F_shallow.
(2) The feature map F_shallow is passed through a 3x3 convolution layer and a LeakyReLU activation function, and then input to the dense block. The dense block consists of n dilated convolutions and a LeakyReLU activation function; the purpose of the dilated convolutions is to expand the receptive field.
(3) The feature map output by the dense block is input to 4 learnable band-pass filters (LBF); the outputs of the 4 filters are concatenated along the channel dimension and then added to F_shallow to obtain the output feature map F_deep.
Fig. 2: ghost elimination modules (DPBs) are shown in the schematic;
decoding sub-network:
the input picture is subjected to 3 times of downsampling to obtain 4 feature maps with different scales, and then the feature maps are input into a 3x3 convolution layer and a LeakyReLU activation function to obtain four-scale output feature maps.
FIG. 3 is a schematic diagram of a decoding sub-network structure;
As shown in Fig. 1, the robust ghost-removing method based on wavelet transformation comprises the following steps:
Step (1): the three low dynamic range images with different exposures are input into the decoding sub-network to obtain 4 groups of feature maps at different scales.
Step (2): the 4 groups of feature maps are respectively input into the attention extraction module to obtain 4 final output feature maps.
Step (3): the 4 feature maps are input into the ghost elimination modules (DPBs), then pass through 0, 1, 2 and 3 groups of 4x4 deconvolution layers with LeakyReLU activation functions respectively, then each passes through a 3x3 convolution layer, and the results are added to obtain a feature map; finally, the final output high dynamic range image is obtained through a Sigmoid activation function.
Fig. 4: comparison result graphs of the embodiment of the invention and other related works.

Claims (4)

1. A robust ghost-removing system based on wavelet transformation, which is characterized by comprising an attention module and a fusion module;
the attention module comprises a decoding sub-network and an attention extraction module, the input picture is processed through the decoding sub-network to obtain four-scale output feature maps, and then the attention extraction module is used for extracting the attention of the output feature maps to obtain final output feature maps;
the fusion module comprises a ghost elimination module and a transposed convolution, wherein ghost correction is firstly carried out on the final output feature map output by the attention module through the ghost elimination module, and then the feature map is up-sampled to the original proportion through the transposed convolution, so that a high dynamic range image is obtained;
the ghost elimination module is specifically as follows:
(1) Wavelet transformation is carried out on the input feature map to obtain the feature map F_shallow;
(2) The feature map F_shallow is input to the 3x3 convolution layer and the LeakyReLU activation function, and then input into the dense block; the dense block consists of n dilated convolutions and a LeakyReLU activation function;
(3) The feature map output by the dense block is respectively input into 4 learnable band-pass filters (LBF); the outputs of the 4 learnable band-pass filters are concatenated along the channel dimension and then added to the feature map F_shallow to obtain the output feature map F_deep.
2. The robust ghost-removing system based on wavelet transformation according to claim 1, wherein the attention extraction module is specifically as follows:
(1) f1^nr and f3^nr are the non-reference feature maps, and f2^r is the reference feature map; the 3 feature maps are taken as input, each non-reference map is concatenated with the reference map along the channel dimension, and a 3x3 convolution layer and a Sigmoid activation function yield the attention maps A1 and A3;
(2) A1 and A3 are respectively point-wise multiplied with f1^nr and f3^nr and combined in the proportions 1-alpha and alpha, where alpha is a hyperparameter determined before running, to obtain the feature maps f̂1^nr and f̂3^nr;
(3) f̂1^nr and f̂3^nr are concatenated with the reference input feature map f2^r along the channel dimension to obtain the final output feature map.
3. The robust ghost-removing system based on wavelet transformation according to claim 2, wherein the decoding sub-network is specifically as follows:
the input picture is subjected to 3 times of downsampling to obtain 4 feature maps with different scales, and then the feature maps are input into a 3x3 convolution layer and a LeakyReLU activation function to obtain four-scale output feature maps.
4. A robust ghost-removing method based on wavelet transformation, characterized by the following steps:
step (1) inputting three low dynamic range images with different exposures into a decoding sub-network to obtain 4 groups of feature mapping with different scales;
step (2) inputting the 4 groups of feature maps into the attention extraction module respectively to obtain 4 final output feature maps;
step (3) inputting the 4 feature maps into the ghost elimination module, then passing them respectively through 0, 1, 2 and 3 groups of 4x4 deconvolution layers and LeakyReLU activation functions, then passing each through a 3x3 convolution layer and adding the results to obtain a feature map; finally, the final output high dynamic range image is obtained through a Sigmoid activation function.
CN202110865456.4A 2021-07-29 2021-07-29 Robust ghost-removing system and method based on wavelet transformation Active CN113554567B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110865456.4A CN113554567B (en) 2021-07-29 2021-07-29 Robust ghost-removing system and method based on wavelet transformation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110865456.4A CN113554567B (en) 2021-07-29 2021-07-29 Robust ghost-removing system and method based on wavelet transformation

Publications (2)

Publication Number Publication Date
CN113554567A CN113554567A (en) 2021-10-26
CN113554567B true CN113554567B (en) 2024-04-02

Family

ID=78133361

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110865456.4A Active CN113554567B (en) 2021-07-29 2021-07-29 Robust ghost-removing system and method based on wavelet transformation

Country Status (1)

Country Link
CN (1) CN113554567B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115439384A (en) * 2022-09-05 2022-12-06 中国科学院长春光学精密机械与物理研究所 Ghost-free multi-exposure image fusion method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111275640A (en) * 2020-01-17 2020-06-12 天津大学 Image enhancement method for fusing two-dimensional discrete wavelet transform and generating countermeasure network
CN113160178A (en) * 2021-04-23 2021-07-23 杭州电子科技大学 High dynamic range ghost image removing imaging system and method based on attention module

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9123141B2 (en) * 2013-07-30 2015-09-01 Konica Minolta Laboratory U.S.A., Inc. Ghost artifact detection and removal in HDR image processing using multi-level median threshold bitmaps

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111275640A (en) * 2020-01-17 2020-06-12 天津大学 Image enhancement method for fusing two-dimensional discrete wavelet transform and generating countermeasure network
CN113160178A (en) * 2021-04-23 2021-07-23 杭州电子科技大学 High dynamic range ghost image removing imaging system and method based on attention module

Also Published As

Publication number Publication date
CN113554567A (en) 2021-10-26

Similar Documents

Publication Publication Date Title
Dong et al. Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization
CN111539879A (en) Video blind denoising method and device based on deep learning
CN111583123A (en) Wavelet transform-based image enhancement algorithm for fusing high-frequency and low-frequency information
CN111462019A (en) Image deblurring method and system based on deep neural network parameter estimation
Min et al. Blind deblurring via a novel recursive deep CNN improved by wavelet transform
CN110246088B (en) Image brightness noise reduction method based on wavelet transformation and image noise reduction system thereof
CN112465727A (en) Low-illumination image enhancement method without normal illumination reference based on HSV color space and Retinex theory
CN113658057A (en) Swin transform low-light-level image enhancement method
CN111260591A (en) Image self-adaptive denoising method based on attention mechanism
CN111105357B (en) Method and device for removing distortion of distorted image and electronic equipment
CN112270646B (en) Super-resolution enhancement method based on residual dense jump network
CN114219722A (en) Low-illumination image enhancement method by utilizing time-frequency domain hierarchical processing
CN113554567B (en) Robust ghost-removing system and method based on wavelet transformation
Zhao et al. A simple and robust deep convolutional approach to blind image denoising
CN115345791A (en) Infrared image deblurring algorithm based on attention mechanism residual error network model
CN116596792A (en) Inland river foggy scene recovery method, system and equipment for intelligent ship
CN117422653A (en) Low-light image enhancement method based on weight sharing and iterative data optimization
CN117408924A (en) Low-light image enhancement method based on multiple semantic feature fusion network
CN116823662A (en) Image denoising and deblurring method fused with original features
CN116152103A (en) Neural network light field image deblurring method based on multi-head cross attention mechanism
CN110648291B (en) Unmanned aerial vehicle motion blurred image restoration method based on deep learning
Yi et al. Attention-model guided image enhancement for robotic vision applications
CN111553860A (en) Deep learning non-neighborhood averaging processing method and system for water color remote sensing image
CN115456903B (en) Deep learning-based full-color night vision enhancement method and system
CN114998138B (en) High dynamic range image artifact removal method based on attention mechanism

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant