CN113610719A - Attention and dense connection residual block convolution kernel neural network image denoising method - Google Patents

Attention and dense connection residual block convolution kernel neural network image denoising method Download PDF

Info

Publication number
CN113610719A
Authority
CN
China
Prior art keywords
image
layer
denoising
attention
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110811621.8A
Other languages
Chinese (zh)
Inventor
宋亚林
李小艳
孙琪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan University
Original Assignee
Henan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan University filed Critical Henan University
Priority to CN202110811621.8A priority Critical patent/CN113610719A/en
Publication of CN113610719A publication Critical patent/CN113610719A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image denoising method based on a convolutional neural network that combines an attention mechanism with densely connected residual blocks. The method comprises: constructing a training data set and preprocessing it; constructing a network denoising model from a convolutional neural network combining an attention mechanism and densely connected residual blocks; setting the hyper-parameters and the loss function of the network denoising model and optimizing the loss function; selecting images with different noise levels from the training data set and training the network denoising model to obtain a trained network model; and denoising images with the trained network model and evaluating the result with the peak signal-to-noise ratio index. The method has the beneficial effects of improving denoising performance and imaging quality.

Description

Attention and dense connection residual block convolution kernel neural network image denoising method
Technical Field
The invention belongs to the field of computer vision and image processing, and particularly relates to an attention and dense connection residual block convolutional neural network image denoising method.
Background
Images are often corrupted by a variety of adverse factors during acquisition, storage, recording and transmission; they become degraded and distorted to some extent, so that the acquired images contain noise and their quality is reduced. Therefore, in order to obtain high-quality digital images and recover the original image information from a noisy observation, it is necessary to denoise the image, removing the useless components of the signal while preserving the original information as completely as possible for subsequent applications.
Image denoising is a classical problem in the field of image processing and an important step in computer vision preprocessing. Traditional image denoising methods include block-matching and 3-D filtering (BM3D) and the weighted nuclear norm minimization (WNNM) denoising algorithm. These methods can remove noise from an image, but they require manual parameter selection during testing, involve complex optimization problems, and consume considerable time and cost.
Disclosure of Invention
In order to solve the technical problem, the invention provides an attention and dense connection residual block convolutional neural network image denoising method.
The specific scheme is as follows:
An attention and dense connection residual block convolutional neural network image denoising method comprises the following steps:
step S1: constructing a training data set, and carrying out preprocessing operation on the training data set;
step S2: constructing a network denoising model by using a convolution neural network combining an attention mechanism and a dense connection residual block;
step S3: setting a hyper-parameter and a loss function of the network denoising model, and optimizing the loss function;
step S4: selecting images with different noise levels in a training data set, and training the network denoising model to obtain a trained network model;
step S5: and denoising the image according to the trained network model, and evaluating a noise image by using a peak signal-to-noise ratio index.
Further, in step S1, the preprocessing operation performed on the training data set comprises the following steps:
s1.1): selecting training samples from the training data set as an original training set, wherein images in the training samples are all noiseless images with the same size;
s1.2) respectively zooming the images in the training data set by 0.7 time, 0.8 time, 0.9 time and 1 time, and selecting a sliding window to segment each zoomed image;
s1.3): carrying out augmentation operation on the segmented image, wherein the augmentation operation method comprises the steps of carrying out 90-degree rotation, 180-degree rotation, 270-degree rotation and up-down turning on the image;
s1.4): each image in the training set was added to a gaussian white noise rating of 15, 25 or 50, respectively.
Further, the network denoising model comprises an input layer, a hidden layer, a convolutional layer and an output layer; in the network denoising model, the layers are fully interconnected, meaning that the feature map of the previous layer serves as the input of the current layer and the feature map of the current layer serves as the input of the next layer;
the input layer comprises 64 convolution kernels of 3 x 3;
the hidden layer comprises 13 hidden layers, and the first hidden layer comprises a Conv convolutional layer, a ReLU rectified linear unit and a BN batch normalization layer;
the second hidden layer is a Triplet Attention mechanism;
the third hidden layer is an RRDB block;
the fourth to thirteenth hidden layers each comprise a Conv convolutional layer, a ReLU rectified linear unit and a BN batch normalization layer;
the convolutional layer is a Conv convolutional layer;
the output layer comprises 64 convolution kernels of 3 x 3;
and both the input layer and the output layer adopt a residual learning strategy.
The Triplet Attention mechanism comprises three parallel branches, two of which capture the cross-dimensional interaction between the channel dimension C and the spatial dimensions H and W respectively, where C denotes the number of convolution kernels in each convolutional layer, H denotes the image height and W denotes the image width; the third parallel branch constructs spatial attention, and the outputs of the three parallel branches are aggregated by averaging.
The RRDB block comprises three Dense Blocks connected in sequence; each Dense Block comprises four fusion modules and one convolutional layer, where the four fusion modules and the convolutional layer are densely connected to one another in a pairwise manner, and each fusion module is a fusion of a Conv convolutional layer and a Leaky ReLU function.
The hyper-parameters of the network structure include the batch size, the initial learning rate and the number of iterations,
the loss function is:
Figure 100002_DEST_PATH_IMAGE002
wherein the content of the first and second substances,
Figure 100002_DEST_PATH_IMAGE003
the total number of images in the original training data set,
Figure 100002_DEST_PATH_IMAGE004
for the first in the original training data set
Figure 578091DEST_PATH_IMAGE004
The number of images is one of the number of images,
Figure 100002_DEST_PATH_IMAGE005
representing the original images in the original training data set,
Figure 100002_DEST_PATH_IMAGE006
representing a noisy image in the original training data set,
Figure 100002_DEST_PATH_IMAGE007
representing the first in the original training data set
Figure 370597DEST_PATH_IMAGE004
The actual noise of the individual images is,
Figure 100002_DEST_PATH_IMAGE008
for the training parameter values of the network denoising model,
Figure 100002_DEST_PATH_IMAGE009
in order to estimate the noise residual,
Figure 100002_DEST_PATH_IMAGE010
the loss in loss is expressed in terms of loss,
Figure 100002_DEST_PATH_IMAGE011
and expressing norm definition, namely the square sum and the reopening of all elements in the matrix, and optimizing the loss function by using an Adam optimization algorithm in a training process.
In step S4, the method for training the network denoising model includes:
S4.1): adding random Gaussian noise with a level of 15, 25 or 50 to each image in the original training data set;
S4.2): inputting the noisy training images into the network denoising model for training, obtaining a trained network denoising model, and storing the trained network denoising model.
The denoising method further comprises a test set, wherein the test set comprises a medical CT image or a remote sensing image to be denoised, and the test set is input into a trained network model for image denoising.
The peak signal-to-noise ratio is calculated as

$$\mathrm{PSNR}=10\cdot\log_{10}\frac{(2^{n}-1)^{2}}{\mathrm{MSE}}$$

where n is the number of bits per pixel and MSE is the mean square error.

The calculation formula of the MSE is:

$$\mathrm{MSE}=\frac{1}{M\times N}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(X(i,j)-Y(i,j)\right)^{2}$$

where MSE represents the mean square error between X and Y, X and Y respectively represent the image to be evaluated and the original image, (i, j) denotes the pixel coordinate position, M is the height of the image and N is the width of the image.
The invention discloses an attention and dense connection residual block convolutional neural network image denoising method. The method takes into account the importance of different feature channels as well as both global and local information, thereby ensuring image denoising quality. Dense residual connections establish cross-layer links between the front and rear layers of the network and connect the residual block layers, so that each layer receives the features of all preceding layers as input. Finally, the combination of residual learning and batch normalization effectively separates noise from image content and outputs a residual image of the same size as the original image.
Drawings
FIG. 1 is a flow chart of the present invention.
FIG. 2 is a network denoising model.
Fig. 3 is a schematic diagram of the attention mechanism.
Fig. 4 is a schematic diagram of a structure of a densely connected residual block.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings. It is obvious that the described embodiments are only a part of the implementations of the present invention, and not all implementations, and all other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without any inventive work are within the scope of the present invention.
As shown in fig. 1, an attention and dense connection residual block convolutional neural network image denoising method comprises the following steps:
step S1: constructing a training data set, and carrying out preprocessing operation on the training data set;
step S2: constructing a network denoising model by using a convolution neural network combining an attention mechanism and a dense connection residual block;
step S3: setting a hyper-parameter and a loss function of the network denoising model, and optimizing the loss function;
step S4: selecting images with different noise levels in a training data set, and training the network denoising model to obtain a trained network model;
step S5: and denoising the image according to the trained network model, and evaluating a noise image by using a peak signal-to-noise ratio index.
In step S1, the preprocessing operation performed on the training data set includes the following steps:
s1.1): selecting training samples from the training data set as an original training set, wherein images in the training samples are all noiseless images with the same size; in this embodiment, the capacity of the training sample is preferably 400 images, and the size of the images in the training sample is preferably 180 pixels wide and 180 pixels high.
S1.2) respectively zooming the images in the training data set by 0.7 time, 0.8 time, 0.9 time and 1 time, and selecting a sliding window to segment each zoomed image;
s1.3): carrying out augmentation operation on the segmented image, wherein the augmentation operation method comprises the steps of carrying out 90-degree rotation, 180-degree rotation, 270-degree rotation and up-down turning on the image;
s1.4): each image in the training set was added to a gaussian white noise rating of 15, 25 or 50, respectively.
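The preprocessing of step S1 can be illustrated with the following Python sketch, assuming grayscale images loaded as NumPy arrays scaled to [0, 1]; the patch size, stride and the use of OpenCV for resizing are illustrative assumptions rather than values fixed by the embodiment.

```python
import numpy as np
import cv2  # assumed available for image resizing

def make_training_patches(images, patch=40, stride=40):
    """Step S1 sketch: multi-scale patch extraction and augmentation.
    `images` is a list of clean grayscale arrays in [0, 1]."""
    patches = []
    for img in images:
        for s in (0.7, 0.8, 0.9, 1.0):                        # S1.2): four scales
            h, w = int(img.shape[0] * s), int(img.shape[1] * s)
            scaled = cv2.resize(img, (w, h), interpolation=cv2.INTER_AREA)
            for i in range(0, h - patch + 1, stride):          # sliding-window segmentation
                for j in range(0, w - patch + 1, stride):
                    p = scaled[i:i + patch, j:j + patch]
                    patches.append(p)                           # original patch
                    for k in range(1, 4):                       # S1.3): 90/180/270 degree rotations
                        patches.append(np.rot90(p, k))
                    patches.append(np.flipud(p))                # S1.3): vertical flip
    return np.stack(patches)

def add_gaussian_noise(patches, level):
    """S1.4): add white Gaussian noise of level 15, 25 or 50 (std on the 0-255 scale)."""
    return patches + np.random.normal(0.0, level / 255.0, patches.shape)
```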
As shown in fig. 2, the network denoising model includes an input layer 1, a hidden layer 2, a convolutional layer 3, and an output layer 4; in the network denoising model, the layers are fully interconnected, meaning that the feature map of the previous layer serves as the input of the current layer and the feature map of the current layer serves as the input of the next layer;
the input layer 1 comprises 64 convolution kernels of 3 x 3;
the hidden layer 2 comprises 13 hidden layers, and the first hidden layer comprises a Conv convolutional layer, a ReLU rectified linear unit and a BN batch normalization layer;
the second hidden layer is a Triplet Attention mechanism;
the third hidden layer is an RRDB block;
the fourth to thirteenth hidden layers each comprise a Conv convolutional layer, a ReLU rectified linear unit and a BN batch normalization layer;
the convolutional layer 3 is a Conv convolutional layer;
the output layer 4 comprises 64 convolution kernels of 3 x 3;
and both the input layer and the output layer adopt a residual learning strategy.
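The layer sequence above can be summarized in the following PyTorch sketch; it relies on the TripletAttention and RRDB modules sketched after the next two paragraphs, and the grayscale input and the final projection back to the image channel count are assumptions of the sketch rather than details stated in the text.

```python
import torch
import torch.nn as nn

class DenoisingNet(nn.Module):
    """Sketch of the network of fig. 2; 64-channel 3 x 3 convolutions and the layer
    order follow the text, everything else is an illustrative assumption."""
    def __init__(self, channels=1, features=64):
        super().__init__()
        self.input_layer = nn.Sequential(                       # input layer 1: 64 kernels of 3 x 3
            nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True))
        self.hidden1 = nn.Sequential(                            # 1st hidden layer: Conv + ReLU + BN
            nn.Conv2d(features, features, 3, padding=1),
            nn.ReLU(inplace=True), nn.BatchNorm2d(features))
        self.hidden2 = TripletAttention()                         # 2nd hidden layer (sketched below)
        self.hidden3 = RRDB(features)                             # 3rd hidden layer (sketched below)
        self.hidden4_13 = nn.Sequential(*[                        # 4th to 13th hidden layers
            nn.Sequential(nn.Conv2d(features, features, 3, padding=1),
                          nn.ReLU(inplace=True), nn.BatchNorm2d(features))
            for _ in range(10)])
        self.conv = nn.Conv2d(features, features, 3, padding=1)          # convolutional layer 3
        self.output_layer = nn.Conv2d(features, channels, 3, padding=1)  # output layer 4

    def forward(self, noisy):
        x = self.input_layer(noisy)
        x = self.hidden1(x)
        x = self.hidden2(x)
        x = self.hidden3(x)
        x = self.hidden4_13(x)
        x = self.conv(x)
        residual = self.output_layer(x)                           # estimated noise residual
        return noisy - residual                                   # residual learning strategy
```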
As shown in fig. 3, the Triplet Attention mechanism includes three parallel branches, two of which capture the cross-dimensional interaction between the channel dimension C and the spatial dimension H or W respectively, where C denotes the number of convolution kernels in each convolutional layer, H denotes the image height and W denotes the image width; the third parallel branch constructs spatial attention, namely the calculation of the channel attention weights. The outputs of the three parallel branches are aggregated by averaging. Triplet Attention operates through rotation and then establishes the dependencies among dimensions with residual transformations, encoding inter-channel and spatial information with negligible computational overhead.
In this embodiment, the two branches on the right side of fig. 3 capture the cross-dimensional interaction between the channel dimension C and the spatial dimension W or the spatial dimension H respectively, while the branch on the left side of fig. 3 calculates the channel attention weights; finally, the outputs of the three parallel branches are summed and averaged.
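A minimal PyTorch sketch of such a Triplet Attention module is given below; the 7 x 7 convolution in each branch is an assumption borrowed from the common Triplet Attention design and is not specified in the text.

```python
import torch
import torch.nn as nn

class ZPool(nn.Module):
    """Concatenate max- and mean-pooling along the channel dimension."""
    def forward(self, x):
        return torch.cat([x.max(dim=1, keepdim=True)[0],
                          x.mean(dim=1, keepdim=True)], dim=1)

class AttentionGate(nn.Module):
    """One branch: Z-pool, a 7x7 convolution and a sigmoid weight map."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.pool = ZPool()
        self.conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False),
            nn.BatchNorm2d(1))
    def forward(self, x):
        return x * torch.sigmoid(self.conv(self.pool(x)))

class TripletAttention(nn.Module):
    """Sketch of fig. 3: two rotated branches capture the C-W and C-H interactions,
    the identity branch builds spatial attention, and the outputs are averaged."""
    def __init__(self):
        super().__init__()
        self.cw = AttentionGate()   # channel-width interaction
        self.ch = AttentionGate()   # channel-height interaction
        self.hw = AttentionGate()   # spatial attention (no rotation)
    def forward(self, x):                                              # x: (N, C, H, W)
        x_cw = self.cw(x.permute(0, 2, 1, 3)).permute(0, 2, 1, 3)      # swap C and H
        x_ch = self.ch(x.permute(0, 3, 2, 1)).permute(0, 3, 2, 1)      # swap C and W
        x_hw = self.hw(x)
        return (x_cw + x_ch + x_hw) / 3.0                              # average aggregation
```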
As shown in fig. 4, the RRDB block includes three Dense Blocks connected in sequence; each Dense Block includes four fusion modules and one convolutional layer, where the four fusion modules and the convolutional layer are densely connected to one another in a pairwise manner, and each fusion module is a fusion of a Conv convolutional layer and a Leaky ReLU function.
Preferably, the residual value of each Dense Block is scaled, that is, multiplied by a value between 0 and 1, to ensure the stability of the structure; in this embodiment, the scaling value is preferably 0.2.
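The following PyTorch sketch illustrates one possible Dense Block and RRDB structure with the stated 0.2 residual scaling; the Leaky ReLU slope and the growth width g are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """One Dense Block of fig. 4: four Conv + Leaky ReLU fusion modules plus a final
    convolution, densely (pairwise) connected, with 0.2 residual scaling."""
    def __init__(self, features=64, g=32):
        super().__init__()
        self.m1 = nn.Sequential(nn.Conv2d(features,         g, 3, padding=1), nn.LeakyReLU(0.2, inplace=True))
        self.m2 = nn.Sequential(nn.Conv2d(features + g,     g, 3, padding=1), nn.LeakyReLU(0.2, inplace=True))
        self.m3 = nn.Sequential(nn.Conv2d(features + 2 * g, g, 3, padding=1), nn.LeakyReLU(0.2, inplace=True))
        self.m4 = nn.Sequential(nn.Conv2d(features + 3 * g, g, 3, padding=1), nn.LeakyReLU(0.2, inplace=True))
        self.conv = nn.Conv2d(features + 4 * g, features, 3, padding=1)
    def forward(self, x):
        c1 = self.m1(x)
        c2 = self.m2(torch.cat([x, c1], 1))
        c3 = self.m3(torch.cat([x, c1, c2], 1))
        c4 = self.m4(torch.cat([x, c1, c2, c3], 1))
        out = self.conv(torch.cat([x, c1, c2, c3, c4], 1))
        return x + 0.2 * out                     # residual scaling by 0.2 for stability

class RRDB(nn.Module):
    """Residual-in-residual dense block: three Dense Blocks in sequence, again scaled by 0.2."""
    def __init__(self, features=64):
        super().__init__()
        self.blocks = nn.Sequential(DenseBlock(features), DenseBlock(features), DenseBlock(features))
    def forward(self, x):
        return x + 0.2 * self.blocks(x)
```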
The hyper-parameters of the network structure include the batch size, the initial learning rate and the number of iterations. In this embodiment, the batch size is preferably set to 64, the initial learning rate to 0.001, and the number of iterations to 180.
The loss function is:

$$\mathrm{loss}(\Theta)=\frac{1}{2N}\sum_{i=1}^{N}\left\|R(y_i;\Theta)-(y_i-x_i)\right\|^{2}$$

where N is the total number of images in the original training data set, i denotes the i-th image in the original training data set, x_i represents the original image in the original training data set, y_i represents the noisy image in the original training data set, (y_i - x_i) represents the actual noise of the i-th image, Θ denotes the training parameter values of the network denoising model, R(y_i; Θ) is the estimated noise residual, loss denotes the loss, and $\|\cdot\|$ denotes the norm, i.e. the square root of the sum of the squares of all elements of the matrix; the loss function is optimized with the Adam optimization algorithm during training.
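Assuming a model that returns the denoised image, the loss above can be computed as in the following sketch; apart from the formula itself, all details are illustrative.

```python
import torch

def residual_loss(model, noisy, clean):
    """The estimated residual R(y; Θ) is recovered as (noisy - model output) and
    compared with the actual noise (y - x); the batch plays the role of N."""
    estimated = noisy - model(noisy)          # R(y; Θ)
    actual = noisy - clean                    # y - x, the actual noise
    n = noisy.shape[0]                        # number of images in the batch
    return (estimated - actual).pow(2).sum() / (2 * n)
```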
In step S4, the method for training the network denoising model includes:
S4.1): adding random Gaussian noise with a level of 15, 25 or 50 to each image in the original training data set;
S4.2): inputting the noisy training images into the network denoising model for training, obtaining a trained network denoising model, and storing the trained network denoising model.
The denoising method further comprises a test set, wherein the test set comprises a medical CT image or a remote sensing image to be denoised, and the test set is input into a trained network model for image denoising.
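Applying the trained model to such a test image might look like the following sketch, where the [0, 1] input range and the single-channel assumption are illustrative.

```python
import torch

def denoise_image(model, noisy, device="cuda"):
    """Apply the trained model to one noisy test image (e.g. CT or remote sensing);
    `noisy` is assumed to be a 2-D array scaled to [0, 1]."""
    model.eval()
    with torch.no_grad():
        x = torch.as_tensor(noisy, dtype=torch.float32, device=device)[None, None]
        return model(x).clamp(0.0, 1.0).squeeze().cpu().numpy()
```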
The peak signal-to-noise ratio is calculated as

$$\mathrm{PSNR}=10\cdot\log_{10}\frac{(2^{n}-1)^{2}}{\mathrm{MSE}}$$

where n is the number of bits per pixel and MSE is the mean square error.

The calculation formula of the MSE is:

$$\mathrm{MSE}=\frac{1}{M\times N}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(X(i,j)-Y(i,j)\right)^{2}$$

where MSE represents the mean square error between X and Y, X and Y respectively represent the image to be evaluated and the original image, (i, j) denotes the pixel coordinate position, M is the height of the image and N is the width of the image.
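The PSNR and MSE formulas above translate directly into the following NumPy sketch, assuming both images share the same 0..(2^n - 1) intensity scale.

```python
import numpy as np

def psnr(evaluated, original, bits=8):
    """PSNR between the image to be evaluated and the original M x N image."""
    mse = np.mean((evaluated.astype(np.float64) - original.astype(np.float64)) ** 2)
    return 10.0 * np.log10((2 ** bits - 1) ** 2 / mse)
```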
The invention discloses an attention and dense connection residual block convolutional neural network image denoising method. The method takes into account the importance of different feature channels as well as both global and local information, thereby ensuring image denoising quality. Dense residual connections establish cross-layer links between the front and rear layers of the network and connect the residual block layers pairwise, so that each layer receives the features of all preceding layers as input. Finally, the combination of residual learning and batch normalization effectively separates noise from image content and outputs a residual image of the same size as the original image. The method significantly improves noise reduction performance and imaging quality. In addition, for noisy medical CT images, the image denoising method of this embodiment removes noise while better preserving image detail, which directly affects the feasibility of lesion analysis and the accuracy of pathological diagnosis; it helps doctors correctly interpret the image information, examine the lesion area in detail, and decide on the next treatment. Meanwhile, the invention is also suitable for denoising remote sensing images: the method balances noise removal with the preservation of image detail and can achieve a satisfactory denoising effect.
The technical means disclosed in the invention scheme are not limited to the technical means disclosed in the above embodiments, but also include the technical scheme formed by any combination of the above technical features. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and such improvements and modifications are also considered to be within the scope of the present invention.

Claims (8)

1. An attention and dense connection residual block convolutional neural network image denoising method, characterized by comprising the following steps:
step S1: constructing a training data set, and carrying out preprocessing operation on the training data set;
step S2: constructing a network denoising model by using a convolution neural network combining an attention mechanism and a dense connection residual block;
step S3: setting a hyper-parameter and a loss function of the network denoising model, and optimizing the loss function;
step S4: selecting images with different noise levels in a training data set, and training the network denoising model to obtain a trained network model;
step S5: and denoising the image according to the trained network model, and evaluating a noise image by using a peak signal-to-noise ratio index.
2. The attention and dense connected residual block convolutional neural network denoising method of claim 1, wherein:
in step S1, the preprocessing operation performed on the training data set includes the following steps:
s1.1): selecting training samples from the training data set as an original training set, wherein images in the training samples are all noiseless images with the same size;
s1.2) respectively zooming the images in the training data set by 0.7 time, 0.8 time, 0.9 time and 1 time, and selecting a sliding window to segment each zoomed image;
s1.3): carrying out augmentation operation on the segmented image, wherein the augmentation operation method comprises the steps of carrying out 90-degree rotation, 180-degree rotation, 270-degree rotation and up-down turning on the image;
s1.4): each image in the training set was added to a gaussian white noise rating of 15, 25 or 50, respectively.
3. The attention and dense connected residual block convolutional neural network denoising method of claim 1, wherein:
the network denoising model comprises an input layer, a hidden layer, a convolutional layer and an output layer; in the network denoising model, the layers are fully interconnected, meaning that the feature map of the previous layer serves as the input of the current layer and the feature map of the current layer serves as the input of the next layer;
the input layer comprises 64 convolution kernels of 3 x 3;
the hidden layer comprises 13 hidden layers, and the first hidden layer comprises a Conv convolutional layer, a ReLU rectified linear unit and a BN batch normalization layer;
the second hidden layer is a Triplet Attention mechanism;
the third hidden layer is an RRDB block;
the fourth to thirteenth hidden layers each comprise a Conv convolutional layer, a ReLU rectified linear unit and a BN batch normalization layer;
the convolutional layer is a Conv convolutional layer;
the output layer comprises 64 convolution kernels of 3 x 3;
and both the input layer and the output layer adopt a residual learning strategy.
4. The attention and dense connected residual block convolutional neural network denoising method of claim 3, wherein: the Triplet Attention mechanism comprises three parallel branches, two of which capture the cross-dimensional interaction between the channel dimension C and the spatial dimensions H and W respectively, where C denotes the number of convolution kernels in each convolutional layer, H denotes the image height and W denotes the image width; the third parallel branch constructs spatial attention, and the outputs of the three parallel branches are aggregated by averaging.
5. The attention and dense connected residual block convolutional neural network denoising method of claim 3, wherein: the RRDB block comprises three Dense Blocks connected in sequence; each Dense Block comprises four fusion modules and one convolutional layer, where the four fusion modules and the convolutional layer are densely connected to one another in a pairwise manner, and each fusion module is a fusion of a Conv convolutional layer and a Leaky ReLU function.
6. The attention and dense connected residual block convolutional neural network denoising method of claim 1, wherein: the hyper-parameters of the network structure include the batch size, the initial learning rate and the number of iterations,
the loss function is:

$$\mathrm{loss}(\Theta)=\frac{1}{2N}\sum_{i=1}^{N}\left\|R(y_i;\Theta)-(y_i-x_i)\right\|^{2}$$

wherein N is the total number of images in the original training data set, i denotes the i-th image in the original training data set, x_i represents the original image in the original training data set, y_i represents the noisy image in the original training data set, (y_i - x_i) represents the actual noise of the i-th image, Θ denotes the training parameter values of the network denoising model, R(y_i; Θ) is the estimated noise residual, loss denotes the loss, and $\|\cdot\|$ denotes the norm, i.e. the square root of the sum of the squares of all elements of the matrix; the loss function is optimized with the Adam optimization algorithm during training.
7. The attention and dense connected residual block convolutional neural network denoising method of claim 1, wherein: in step S4, the method for training the network denoising model includes:
S4.1): adding random Gaussian noise with a level of 15, 25 or 50 to each image in the original training data set;
S4.2): inputting the noisy training images into the network denoising model for training, obtaining a trained network denoising model, and storing the trained network denoising model.
The attention and dense connected residual block convolutional neural network denoising method of claim 1, wherein: the denoising method further comprises a test set, wherein the test set comprises a medical CT image or a remote sensing image to be denoised, and the test set is input into a trained network model for image denoising.
8. The attention and dense connected residual block convolutional neural network denoising method of claim 1, wherein: the peak signal-to-noise ratio is calculated as,
$$\mathrm{PSNR}=10\cdot\log_{10}\frac{(2^{n}-1)^{2}}{\mathrm{MSE}}$$

where n is the number of bits per pixel and MSE is the mean square error;

the calculation formula of the MSE is:

$$\mathrm{MSE}=\frac{1}{M\times N}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(X(i,j)-Y(i,j)\right)^{2}$$

where MSE represents the mean square error between X and Y, X and Y respectively represent the image to be evaluated and the original image, (i, j) denotes the pixel coordinate position, M is the height of the image and N is the width of the image.
CN202110811621.8A 2021-07-19 2021-07-19 Attention and dense connection residual block convolution kernel neural network image denoising method Pending CN113610719A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110811621.8A CN113610719A (en) 2021-07-19 2021-07-19 Attention and dense connection residual block convolution kernel neural network image denoising method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110811621.8A CN113610719A (en) 2021-07-19 2021-07-19 Attention and dense connection residual block convolution kernel neural network image denoising method

Publications (1)

Publication Number Publication Date
CN113610719A true CN113610719A (en) 2021-11-05

Family

ID=78304798

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110811621.8A Pending CN113610719A (en) 2021-07-19 2021-07-19 Attention and dense connection residual block convolution kernel neural network image denoising method

Country Status (1)

Country Link
CN (1) CN113610719A (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114140353A (en) * 2021-11-25 2022-03-04 苏州大学 Swin-Transformer image denoising method and system based on channel attention
CN114240797A (en) * 2021-12-22 2022-03-25 海南大学 OCT image denoising method, device, equipment and medium
CN114972130A (en) * 2022-08-02 2022-08-30 深圳精智达技术股份有限公司 Training method, device and training equipment for denoising neural network
CN114972130B (en) * 2022-08-02 2022-11-18 深圳精智达技术股份有限公司 Training method, device and training equipment for denoising neural network
CN115147315A (en) * 2022-09-05 2022-10-04 杭州涿溪脑与智能研究所 Neural network fluorescence microscopic image denoising method based on transformer module
CN115761242B (en) * 2022-11-15 2023-09-19 山东财经大学 Denoising method and terminal based on convolutional neural network and fuzzy image characteristics
CN115761242A (en) * 2022-11-15 2023-03-07 山东财经大学 Denoising method and terminal based on convolutional neural network and fuzzy image characteristics
CN116167940A (en) * 2023-02-24 2023-05-26 西安石油大学 Seismic image denoising method based on convolutional neural network
CN116797818A (en) * 2023-04-19 2023-09-22 武汉科技大学 Feature enhancement loss method and system for target detection and image classification
CN116797818B (en) * 2023-04-19 2024-04-19 武汉科技大学 Feature enhancement loss method and system for target detection and image classification
CN116506261B (en) * 2023-06-27 2023-09-08 南昌大学 Visible light communication sensing method and system
CN116506261A (en) * 2023-06-27 2023-07-28 南昌大学 Visible light communication sensing method and system
CN116523800A (en) * 2023-07-03 2023-08-01 南京邮电大学 Image noise reduction model and method based on residual dense network and attention mechanism
CN116523800B (en) * 2023-07-03 2023-09-22 南京邮电大学 Image noise reduction model and method based on residual dense network and attention mechanism

Similar Documents

Publication Publication Date Title
CN113610719A (en) Attention and dense connection residual block convolution kernel neural network image denoising method
CN108550115B (en) Image super-resolution reconstruction method
CN112233038B (en) True image denoising method based on multi-scale fusion and edge enhancement
CN111275637B (en) Attention model-based non-uniform motion blurred image self-adaptive restoration method
WO2022047625A1 (en) Image processing method and system, and computer storage medium
CN111754446A (en) Image fusion method, system and storage medium based on generation countermeasure network
CN111861894B (en) Image motion blur removing method based on generation type countermeasure network
CN113012172A (en) AS-UNet-based medical image segmentation method and system
CN114463218B (en) Video deblurring method based on event data driving
CN114841856A (en) Image super-pixel reconstruction method of dense connection network based on depth residual channel space attention
CN113744136A (en) Image super-resolution reconstruction method and system based on channel constraint multi-feature fusion
CN114581330A (en) Terahertz image denoising method based on multi-scale mixed attention
CN111626927A (en) Binocular image super-resolution method, system and device adopting parallax constraint
CN112102259A (en) Image segmentation algorithm based on boundary guide depth learning
CN109871790B (en) Video decoloring method based on hybrid neural network model
CN115861094A (en) Lightweight GAN underwater image enhancement model fused with attention mechanism
Zhang et al. Dense haze removal based on dynamic collaborative inference learning for remote sensing images
Saleem et al. A non-reference evaluation of underwater image enhancement methods using a new underwater image dataset
CN114445299A (en) Double-residual denoising method based on attention allocation mechanism
CN112819705B (en) Real image denoising method based on mesh structure and long-distance correlation
CN116128768B (en) Unsupervised image low-illumination enhancement method with denoising module
CN112200752A (en) Multi-frame image deblurring system and method based on ER network
Yang et al. RSAMSR: A deep neural network based on residual self-encoding and attention mechanism for image super-resolution
CN116563554A (en) Low-dose CT image denoising method based on hybrid characterization learning
CN115272072A (en) Underwater image super-resolution method based on multi-feature image fusion

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination