CN113610719A - Attention and dense connection residual block convolution kernel neural network image denoising method - Google Patents
- Publication number
- CN113610719A (application CN202110811621.8A)
- Authority
- CN
- China
- Prior art keywords
- image
- layer
- denoising
- attention
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Abstract
The invention discloses an attention and dense connection residual block convolutional neural network image denoising method, which comprises: constructing a training data set and preprocessing it; constructing a network denoising model with a convolutional neural network that combines an attention mechanism and densely connected residual blocks; setting the hyper-parameters and the loss function of the network denoising model, and optimizing the loss function; selecting images with different noise levels from the training data set and training the network denoising model to obtain a trained network model; and denoising images with the trained network model and evaluating the result with the peak signal-to-noise ratio index. The method has the beneficial effects of improving denoising performance and imaging quality.
Description
Technical Field
The invention belongs to the field of computer vision and image processing, and particularly relates to an attention and dense connection residual block convolutional neural network image denoising method.
Background
Images are often disturbed by many adverse factors during acquisition, storage, recording and transmission; they become degraded and distorted to a certain extent and their quality is reduced, so the acquired images contain noise that affects image quality. Therefore, in order to obtain high-quality digital images and recover the original image information from a noisy image, it is necessary to denoise the image, removing useless information from the signal while preserving the integrity of the original information as far as possible, so as to facilitate subsequent applications.
Image denoising is a classical problem in the field of image processing and an important preprocessing step in computer vision. Common traditional denoising methods include block-matching and 3D filtering (BM3D), which is based on non-local block matching, and the weighted nuclear norm minimization (WNNM) denoising algorithm. These methods can remove noise from an image, but they require manual parameter selection during testing and involve complex optimization problems, which consumes considerable time and cost.
Disclosure of Invention
In order to solve the technical problem, the invention provides an attention and dense connection residual block convolutional neural network image denoising method.
The specific scheme is as follows:
An attention and dense connection residual block convolutional neural network image denoising method comprises the following steps:
step S1: constructing a training data set, and carrying out preprocessing operation on the training data set;
step S2: constructing a network denoising model by using a convolution neural network combining an attention mechanism and a dense connection residual block;
step S3: setting a hyper-parameter and a loss function of the network denoising model, and optimizing the loss function;
step S4: selecting images with different noise levels in a training data set, and training the network denoising model to obtain a trained network model;
step S5: and denoising the image according to the trained network model, and evaluating a noise image by using a peak signal-to-noise ratio index.
Further, in step S1, the preprocessing operation performed on the training data set comprises the following steps:
s1.1): selecting training samples from the training data set as an original training set, wherein images in the training samples are all noiseless images with the same size;
s1.2): scaling the images in the training data set by factors of 0.7, 0.8, 0.9 and 1.0 respectively, and segmenting each scaled image with a sliding window;
s1.3): performing augmentation on the segmented images, the augmentation comprising rotating the images by 90°, 180° and 270° and flipping them vertically;
s1.4): adding white Gaussian noise with a noise level of 15, 25 or 50 to each image in the training set respectively.
Further, the network denoising model comprises an input layer, a hidden layer, a convolutional layer and an output layer. In the network denoising model the layers are connected to one another in sequence: the feature map of the previous layer serves as the input of the current layer, and the feature map of the current layer serves as the input of the next layer;
the input layer comprises 64 convolution kernels of 3 x 3;
the hidden layer comprises 13 hidden layers, and the first hidden layer comprises a Conv convolutional layer, a ReLU rectified linear unit and a BN batch normalization layer;
the second hidden layer is a Triplet Attention mechanism;
the third hidden layer is an RRDB block;
the fourth to thirteenth hidden layers each comprise a Conv convolutional layer, a ReLU rectified linear unit and a BN batch normalization layer;
the convolutional layer is a Conv convolutional layer;
the output layer comprises 64 convolution kernels of 3 x 3;
and the input layer and the output layer both adopt a residual learning strategy.
The Triplet Attention mechanism comprises three parallel branches, two of which respectively capture cross-dimension interaction between the channel dimension C and the spatial dimension H or W, where C denotes the number of convolution kernels in each convolutional layer, H denotes the image height and W denotes the image width; the third parallel branch constructs Spatial Attention, and the outputs of the three parallel branches are combined by average aggregation.
The RRDB block comprises three Dense Blocks connected in sequence; each Dense Block comprises four fusion modules and one convolutional layer which are densely connected to one another, and each fusion module consists of a Conv convolutional layer and a Leaky ReLU activation function.
The hyper-parameters of the network structure include the batch size, the initial learning rate and the number of iterations,
wherein the loss function is defined as

ℓ(θ) = (1/(2N)) Σ_{i=1}^{N} ‖R(y_i; θ) − (y_i − x_i)‖_F²

where N is the total number of images in the original training data set, i indexes the i-th image, x_i represents an original image in the original training data set, y_i represents the corresponding noisy image, v_i = y_i − x_i represents the actual noise of the i-th image, θ represents the training parameter values of the network denoising model, R(y_i; θ) is the estimated noise residual, ℓ denotes the loss, and ‖·‖_F denotes the Frobenius norm, i.e. the square root of the sum of squares of all matrix elements. The loss function is optimized by the Adam optimization algorithm in the training process.
In step S4, the method for training the network denoising model includes:
s4.1): adding Gaussian random noise with a noise level of 15, 25 or 50 to each image in the original training data set;
s4.2): inputting the noisy training images into the network denoising model for training to obtain a trained network denoising model, and storing the trained network denoising model.
The denoising method further comprises a test set; the test set comprises medical CT images or remote sensing images to be denoised, and the test set is input into the trained network model for image denoising.
The peak signal-to-noise ratio is calculated as

PSNR = 10 · log10((2^n − 1)² / MSE),

where n is the number of bits per pixel and MSE is the mean square error. The MSE is calculated as

MSE = (1/(M·N)) Σ_{i=1}^{M} Σ_{j=1}^{N} (X(i, j) − Y(i, j))²,

where MSE represents the mean square error between X and Y; X and Y respectively represent the image to be evaluated and the original image; (i, j) represents the pixel coordinate position; M is the height of the image and N is the width of the image.
The invention discloses an attention and dense connection residual block convolutional neural network image denoising method. The method takes the importance of different feature channels into account and considers both global and local information, which safeguards the image denoising quality. Dense residual connections establish cross-layer links between the front and rear layers of the network and connect the residual block layers, so that each layer in the network receives the features of all preceding layers as input. Finally, the combination of residual learning and batch normalization achieves effective separation of noise from image content, and the network outputs a residual image of the same size as the original image.
Drawings
FIG. 1 is a flow chart of the present invention.
FIG. 2 is a network denoising model.
Fig. 3 is a schematic diagram of the attention mechanism.
Fig. 4 is a schematic diagram of a structure of a densely connected residual block.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings. The described embodiments are obviously only a part of the embodiments of the present invention, not all of them; all other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without inventive work fall within the scope of the present invention.
As shown in fig. 1, an attention and dense connection residual block convolutional neural network image denoising method includes the following steps:
step S1: constructing a training data set, and carrying out preprocessing operation on the training data set;
step S2: constructing a network denoising model by using a convolution neural network combining an attention mechanism and a dense connection residual block;
step S3: setting a hyper-parameter and a loss function of the network denoising model, and optimizing the loss function;
step S4: selecting images with different noise levels in a training data set, and training the network denoising model to obtain a trained network model;
step S5: and denoising the image according to the trained network model, and evaluating a noise image by using a peak signal-to-noise ratio index.
In step S1, the preprocessing operation performed on the training data set includes the following steps:
s1.1): selecting training samples from the training data set as an original training set, wherein images in the training samples are all noiseless images with the same size; in this embodiment, the capacity of the training sample is preferably 400 images, and the size of the images in the training sample is preferably 180 pixels wide and 180 pixels high.
s1.2): scaling the images in the training data set by factors of 0.7, 0.8, 0.9 and 1.0 respectively, and segmenting each scaled image with a sliding window;
s1.3): performing augmentation on the segmented images, the augmentation comprising rotating the images by 90°, 180° and 270° and flipping them vertically;
s1.4): adding white Gaussian noise with a noise level of 15, 25 or 50 to each image in the training set respectively.
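The preprocessing pipeline of steps S1.2-S1.4 can be sketched as follows. This is an illustrative numpy-only sketch: the patent does not specify the patch size, stride or interpolation method, so `patch=40`, `stride=10` and nearest-neighbour scaling are assumptions.

```python
import numpy as np

def make_patches(img, patch=40, stride=10, scales=(0.7, 0.8, 0.9, 1.0)):
    """Step S1.2: scale the image by each factor, then cut sliding-window
    patches.  Nearest-neighbour scaling keeps the sketch dependency-free."""
    patches = []
    h, w = img.shape
    for s in scales:
        sh, sw = int(h * s), int(w * s)
        rows = (np.arange(sh) / s).astype(int)
        cols = (np.arange(sw) / s).astype(int)
        scaled = img[np.ix_(rows, cols)]
        for i in range(0, sh - patch + 1, stride):
            for j in range(0, sw - patch + 1, stride):
                patches.append(scaled[i:i + patch, j:j + patch])
    return patches

def augment(patch):
    """Step S1.3: rotations by 90, 180 and 270 degrees plus a vertical flip."""
    return [patch,
            np.rot90(patch, 1), np.rot90(patch, 2), np.rot90(patch, 3),
            np.flipud(patch)]

def add_awgn(patch, sigma, seed=0):
    """Step S1.4: add white Gaussian noise of level sigma (15, 25 or 50)."""
    rng = np.random.default_rng(seed)
    return patch + rng.normal(0.0, sigma, patch.shape)

# A synthetic 180x180 sample image, matching the preferred training size.
img = np.arange(180 * 180, dtype=np.float64).reshape(180, 180)
patches = make_patches(img)
augmented = augment(patches[0])
noisy = add_awgn(patches[0], 25)
```

In a real pipeline the scaling would use proper interpolation (e.g. bicubic) and the noisy/clean patch pairs would be stored for training.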
As shown in fig. 2, the network denoising model includes an input layer 1, a hidden layer 2, a convolutional layer 3 and an output layer 4. In the network denoising model the layers are connected to one another in sequence: the feature map of the previous layer serves as the input of the current layer, and the feature map of the current layer serves as the input of the next layer;
the input layer 1 comprises 64 convolution kernels of 3 x 3;
the hidden layer 2 comprises 13 hidden layers, and the first hidden layer comprises a Conv convolutional layer, a ReLU rectified linear unit and a BN batch normalization layer;
the second hidden layer is a Triplet Attention mechanism;
the third hidden layer is an RRDB block;
the fourth to thirteenth hidden layers each comprise a Conv convolutional layer, a ReLU rectified linear unit and a BN batch normalization layer;
the convolutional layer 3 is a Conv convolutional layer;
the output layer 4 comprises 64 convolution kernels of 3 x 3;
and the input layer and the output layer both adopt a residual learning strategy.
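The residual learning strategy can be illustrated as follows: instead of predicting the clean image directly, the network predicts the noise map, and the clean estimate is obtained by subtraction. The `toy_residual` stand-in below is purely illustrative (a box-filter heuristic), not the patent's network.

```python
import numpy as np

def denoise_with_residual_learning(noisy, predict_residual):
    """Residual learning: the network predicts the noise map R(y) rather
    than the clean image; the denoised output is y - R(y).
    `predict_residual` stands in for the trained network."""
    return noisy - predict_residual(noisy)

def toy_residual(y):
    """Illustrative stand-in 'network': estimates the noise as the
    deviation of each pixel from a 3x3 box-filtered version of the input."""
    pad = np.pad(y, 1, mode='edge')
    h, w = y.shape
    smooth = sum(pad[i:i + h, j:j + w]
                 for i in range(3) for j in range(3)) / 9.0
    return y - smooth

rng = np.random.default_rng(0)
clean = np.full((16, 16), 100.0)
noisy = clean + rng.normal(0.0, 5.0, clean.shape)
denoised = denoise_with_residual_learning(noisy, toy_residual)
```

The design rationale is that the noise map is easier to learn than the clean image itself, since it is closer to zero-mean and structure-free.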
As shown in fig. 3, the Triplet Attention mechanism comprises three parallel branches. Two of the branches respectively capture cross-dimension interaction between the channel dimension C and the spatial dimension H or W, where C denotes the number of convolution kernels in each convolutional layer, H denotes the image height and W denotes the image width; the third parallel branch constructs Spatial Attention, i.e. computes attention weights over the spatial dimensions. The outputs of the three parallel branches are combined by average aggregation. Triplet Attention operates through rotation and then establishes inter-dimensional dependencies by a residual transformation, encoding inter-channel and spatial information with negligible computational overhead.
In this embodiment, the two branches on the right side in fig. 3 respectively capture the cross-dimension interaction between the channel dimension C and the spatial dimension W or H, the branch on the left side in fig. 3 computes the spatial attention weights by pooling across the channel dimension, and finally the outputs of the three parallel branches are added and averaged.
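A minimal numpy sketch of the three-branch computation may look like the following. It keeps the Z-pool (max and mean pooling) and the average aggregation of the branches, but replaces the k x k convolution of each branch with a plain mean of the two pooled statistics, so it should be read as a structural illustration only, not the exact Triplet Attention layer.

```python
import numpy as np

def _sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def _zpool(x, axis):
    """Z-pool: stack max- and mean-pooling along one dimension."""
    return np.stack([x.max(axis=axis), x.mean(axis=axis)], axis=0)

def _branch(x, axis):
    """One attention branch: pool across `axis`, turn the two pooled
    statistics into a sigmoid attention map, and rescale the input.
    (The k x k convolution of the original design is replaced here by a
    plain mean of the two statistics to avoid framework code.)"""
    stats = _zpool(x, axis)                 # shape (2, d1, d2)
    attn = _sigmoid(stats.mean(axis=0))     # attention map over (d1, d2)
    return x * np.expand_dims(attn, axis)

def triplet_attention(x):
    """x has shape (C, H, W); average the outputs of the three branches:
    pooling over C gives spatial attention, pooling over H or W gives
    the two channel-space cross-dimension interaction branches."""
    spatial = _branch(x, 0)   # attention over (H, W)
    cw = _branch(x, 1)        # C-W interaction
    ch = _branch(x, 2)        # C-H interaction
    return (spatial + cw + ch) / 3.0

x = np.random.default_rng(0).standard_normal((64, 8, 8))
y = triplet_attention(x)
```

The output has the same shape as the input, so the module can be dropped between any two convolutional layers.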
As shown in fig. 4, the RRDB block comprises three Dense Blocks connected in sequence; each Dense Block comprises four fusion modules and one convolutional layer which are densely connected to one another, and each fusion module consists of a Conv convolutional layer and a Leaky ReLU activation function.
Preferably, scaling is applied to the residual value of each Dense Block, i.e. the residual is multiplied by a value between 0 and 1 to ensure the stability of the structure; in this embodiment the scaling value is preferably 0.2.
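The dense connectivity and the 0.2 residual scaling can be sketched as below. Convolutions are modelled as per-pixel channel mixes (an einsum with random weights) and the channel count and growth rate are assumed values, so this only illustrates the wiring of the block, not the patent's exact layer shapes.

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0.0, x, slope * x)

def dense_block(x, weights, scale=0.2):
    """One Dense Block: four Conv + Leaky ReLU fusion modules, each fed
    the concatenation of all previous feature maps, a closing convolution,
    and a residual connection scaled by 0.2.  Convolutions are modelled
    as 1x1 channel mixes to keep the sketch small."""
    feats = [x]                                     # list of (c, H, W) maps
    for w in weights[:4]:                           # four fusion modules
        inp = np.concatenate(feats, axis=0)
        feats.append(leaky_relu(np.einsum('oc,chw->ohw', w, inp)))
    inp = np.concatenate(feats, axis=0)
    out = np.einsum('oc,chw->ohw', weights[4], inp)  # closing conv layer
    return x + scale * out                           # residual scaling by 0.2

def rrdb(x, block_weights, scale=0.2):
    """RRDB: three Dense Blocks connected in sequence."""
    for w in block_weights:
        x = dense_block(x, w, scale)
    return x

rng = np.random.default_rng(0)
C, G = 8, 4                                          # channels / growth (assumed)
def dense_block_weights():
    ws = [0.1 * rng.standard_normal((G, C + k * G)) for k in range(4)]
    ws.append(0.1 * rng.standard_normal((C, C + 4 * G)))
    return ws

x = rng.standard_normal((C, 6, 6))
y = rrdb(x, [dense_block_weights() for _ in range(3)])
```

Because each fusion module sees the concatenation of all earlier feature maps, every layer inside the block receives the features of all preceding layers as input, which is the dense-connection property the description relies on.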
The hyper-parameters of the network include the batch size, the initial learning rate and the number of iterations. In this embodiment, the batch size is preferably set to 64, the initial learning rate to 0.001 and the number of iterations to 180.
The loss function is defined as

ℓ(θ) = (1/(2N)) Σ_{i=1}^{N} ‖R(y_i; θ) − (y_i − x_i)‖_F²

where N is the total number of images in the original training data set, i indexes the i-th image, x_i represents an original image in the original training data set, y_i represents the corresponding noisy image, v_i = y_i − x_i represents the actual noise of the i-th image, θ represents the training parameter values of the network denoising model, R(y_i; θ) is the estimated noise residual, ℓ denotes the loss, and ‖·‖_F denotes the Frobenius norm, i.e. the square root of the sum of squares of all matrix elements. The loss function is optimized by the Adam optimization algorithm in the training process.
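The loss computation penalizes the difference between the estimated noise residual and the actual noise y − x. A sketch of that computation (the network output is replaced by a placeholder array):

```python
import numpy as np

def residual_loss(noisy, clean, predicted_residual):
    """Average over the N training images of half the squared Frobenius
    norm of R(y_i; theta) - (y_i - x_i).  `predicted_residual` stands in
    for the network output R(y; theta)."""
    diff = predicted_residual - (noisy - clean)
    return 0.5 * np.mean(np.sum(diff ** 2, axis=(1, 2)))

rng = np.random.default_rng(0)
clean = rng.random((4, 8, 8))                 # N = 4 clean images x_i
noise = rng.normal(0.0, 0.1, clean.shape)     # actual noise v_i
noisy = clean + noise                         # noisy images y_i
loss_perfect = residual_loss(noisy, clean, noise)            # exact residual
loss_zero = residual_loss(noisy, clean, np.zeros_like(noise))  # no denoising
```

A perfect residual prediction drives the loss to (numerically) zero, while predicting no noise at all leaves the full noise energy as the loss; in practice an optimizer such as Adam moves the network parameters between these two extremes.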
In step S4, the method for training the network denoising model includes:
s4.1): adding Gaussian random noise with a noise level of 15, 25 or 50 to each image in the original training data set;
s4.2): inputting the noisy training images into the network denoising model for training to obtain a trained network denoising model, and storing the trained network denoising model.
The denoising method further comprises a test set; the test set comprises medical CT images or remote sensing images to be denoised, and the test set is input into the trained network model for image denoising.
The peak signal-to-noise ratio is calculated as

PSNR = 10 · log10((2^n − 1)² / MSE),

where n is the number of bits per pixel and MSE is the mean square error. The MSE is calculated as

MSE = (1/(M·N)) Σ_{i=1}^{M} Σ_{j=1}^{N} (X(i, j) − Y(i, j))²,

where MSE represents the mean square error between X and Y; X and Y respectively represent the image to be evaluated and the original image; (i, j) represents the pixel coordinate position; M is the height of the image and N is the width of the image.
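The PSNR and MSE computation can be written directly from these formulas:

```python
import numpy as np

def mse(ref, test):
    """Mean square error between the original image and the image to be
    evaluated, averaged over all M x N pixels."""
    return np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)

def psnr(ref, test, n_bits=8):
    """PSNR = 10 * log10((2^n - 1)^2 / MSE) with n bits per pixel."""
    return 10.0 * np.log10((2 ** n_bits - 1) ** 2 / mse(ref, test))

ref = np.full((4, 4), 100.0)
degraded = ref + 5.0            # constant error of 5 grey levels
value = psnr(ref, degraded)     # MSE = 25, so PSNR = 10 * log10(255**2 / 25)
```

For 8-bit images the peak value 2^n − 1 is 255; higher PSNR indicates a denoised result closer to the original image.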
The invention discloses an attention and dense connection residual block convolutional neural network image denoising method. The method takes the importance of different feature channels into account, considers both global and local information, and thereby safeguards the image denoising quality. Dense residual connections establish cross-layer links between the front and rear layers of the network and connect the residual block layers pairwise, so that each layer in the network receives the features of all preceding layers as input. The combination of residual learning and batch normalization finally achieves effective separation of noise from image content, and the network outputs a residual image of the same size as the original image; the method yields a marked improvement in denoising performance and imaging quality. In addition, for noisy medical CT images, the denoising of this embodiment preserves image detail well, which directly bears on the feasibility of lesion analysis and the accuracy of pathological diagnosis; it helps the doctor interpret the image information correctly and examine the lesion area in detail, thereby supporting the choice of the next treatment. The invention is also suitable for denoising remote sensing images, where the method strikes a balance between removing noise and preserving image detail and achieves a satisfactory denoising effect.
The technical means disclosed in the present invention are not limited to those disclosed in the above embodiments, but also include technical solutions formed by any combination of the above technical features. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and such improvements and modifications are also considered to be within the scope of the present invention.
Claims (8)
1. An attention and dense connection residual block convolutional neural network image denoising method, characterized by comprising the following steps:
step S1: constructing a training data set, and carrying out preprocessing operation on the training data set;
step S2: constructing a network denoising model by using a convolution neural network combining an attention mechanism and a dense connection residual block;
step S3: setting a hyper-parameter and a loss function of the network denoising model, and optimizing the loss function;
step S4: selecting images with different noise levels in a training data set, and training the network denoising model to obtain a trained network model;
step S5: and denoising the image according to the trained network model, and evaluating a noise image by using a peak signal-to-noise ratio index.
2. The attention and dense connected residual block convolutional neural network denoising method of claim 1, wherein:
in step S1, the preprocessing operation performed on the training data set includes the following steps:
s1.1): selecting training samples from the training data set as an original training set, wherein images in the training samples are all noiseless images with the same size;
s1.2): scaling the images in the training data set by factors of 0.7, 0.8, 0.9 and 1.0 respectively, and segmenting each scaled image with a sliding window;
s1.3): performing augmentation on the segmented images, the augmentation comprising rotating the images by 90°, 180° and 270° and flipping them vertically;
s1.4): adding white Gaussian noise with a noise level of 15, 25 or 50 to each image in the training set respectively.
3. The attention and dense connected residual block convolutional neural network denoising method of claim 1, wherein:
the network denoising model comprises an input layer, a hidden layer, a convolutional layer and an output layer; in the network denoising model the layers are connected to one another in sequence: the feature map of the previous layer serves as the input of the current layer, and the feature map of the current layer serves as the input of the next layer;
the input layer comprises 64 convolution kernels of 3 x 3;
the hidden layer comprises 13 hidden layers, and the first hidden layer comprises a Conv convolutional layer, a ReLU rectified linear unit and a BN batch normalization layer;
the second hidden layer is a Triplet Attention mechanism;
the third hidden layer is an RRDB block;
the fourth to thirteenth hidden layers each comprise a Conv convolutional layer, a ReLU rectified linear unit and a BN batch normalization layer;
the convolutional layer is a Conv convolutional layer;
the output layer comprises 64 convolution kernels of 3 x 3;
and the input layer and the output layer both adopt a residual learning strategy.
4. The attention and dense connected residual block convolutional neural network denoising method of claim 3, wherein: the Triplet Attention mechanism comprises three parallel branches, two of which respectively capture cross-dimension interaction between the channel dimension C and the spatial dimension H or W, where C denotes the number of convolution kernels in each convolutional layer, H denotes the image height and W denotes the image width; the third parallel branch constructs Spatial Attention, and the outputs of the three parallel branches are combined by average aggregation.
5. The attention and dense connected residual block convolutional neural network denoising method of claim 3, wherein: the RRDB block comprises three Dense Blocks connected in sequence; each Dense Block comprises four fusion modules and one convolutional layer which are densely connected to one another, and each fusion module consists of a Conv convolutional layer and a Leaky ReLU activation function.
6. The attention and dense connected residual block convolutional neural network denoising method of claim 1, wherein: the hyper-parameters of the network structure include the batch size, the initial learning rate and the number of iterations,
wherein the loss function is defined as ℓ(θ) = (1/(2N)) Σ_{i=1}^{N} ‖R(y_i; θ) − (y_i − x_i)‖_F², where N is the total number of images in the original training data set, i indexes the i-th image, x_i represents an original image in the original training data set, y_i represents the corresponding noisy image, v_i = y_i − x_i represents the actual noise of the i-th image, θ represents the training parameter values of the network denoising model, R(y_i; θ) is the estimated noise residual, ℓ denotes the loss, and ‖·‖_F denotes the Frobenius norm, i.e. the square root of the sum of squares of all matrix elements; the loss function is optimized by the Adam optimization algorithm in the training process.
7. The attention and dense connected residual block convolutional neural network denoising method of claim 1, wherein: in step S4, the method for training the network denoising model includes:
s4.1): adding Gaussian random noise with a noise level of 15, 25 or 50 to each image in the original training data set;
s4.2): inputting the noisy training images into the network denoising model for training to obtain a trained network denoising model, and storing the trained network denoising model.
The attention and dense connected residual block convolutional neural network denoising method of claim 1, wherein: the denoising method further comprises a test set, the test set comprises medical CT images or remote sensing images to be denoised, and the test set is input into the trained network model for image denoising.
8. The attention and dense connected residual block convolutional neural network denoising method of claim 1, wherein: the peak signal-to-noise ratio is calculated as PSNR = 10 · log10((2^n − 1)² / MSE), where n is the number of bits per pixel and MSE is the mean square error; the MSE is calculated as MSE = (1/(M·N)) Σ_{i=1}^{M} Σ_{j=1}^{N} (X(i, j) − Y(i, j))², where X and Y respectively represent the image to be evaluated and the original image, (i, j) represents the pixel coordinate position, M is the height of the image and N is the width of the image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110811621.8A CN113610719A (en) | 2021-07-19 | 2021-07-19 | Attention and dense connection residual block convolution kernel neural network image denoising method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110811621.8A CN113610719A (en) | 2021-07-19 | 2021-07-19 | Attention and dense connection residual block convolution kernel neural network image denoising method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113610719A true CN113610719A (en) | 2021-11-05 |
Family
ID=78304798
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110811621.8A Pending CN113610719A (en) | 2021-07-19 | 2021-07-19 | Attention and dense connection residual block convolution kernel neural network image denoising method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113610719A (en) |
- 2021-07-19 CN CN202110811621.8A patent/CN113610719A/en active Pending
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114140353A (en) * | 2021-11-25 | 2022-03-04 | 苏州大学 | Swin-Transformer image denoising method and system based on channel attention |
CN114240797A (en) * | 2021-12-22 | 2022-03-25 | 海南大学 | OCT image denoising method, device, equipment and medium |
CN114972130A (en) * | 2022-08-02 | 2022-08-30 | 深圳精智达技术股份有限公司 | Training method, device and training equipment for denoising neural network |
CN114972130B (en) * | 2022-08-02 | 2022-11-18 | 深圳精智达技术股份有限公司 | Training method, device and training equipment for denoising neural network |
CN115147315A (en) * | 2022-09-05 | 2022-10-04 | 杭州涿溪脑与智能研究所 | Neural network fluorescence microscopic image denoising method based on transformer module |
CN115761242B (en) * | 2022-11-15 | 2023-09-19 | 山东财经大学 | Denoising method and terminal based on convolutional neural network and fuzzy image characteristics |
CN115761242A (en) * | 2022-11-15 | 2023-03-07 | 山东财经大学 | Denoising method and terminal based on convolutional neural network and fuzzy image characteristics |
CN116167940A (en) * | 2023-02-24 | 2023-05-26 | 西安石油大学 | Seismic image denoising method based on convolutional neural network |
CN116797818A (en) * | 2023-04-19 | 2023-09-22 | 武汉科技大学 | Feature enhancement loss method and system for target detection and image classification |
CN116797818B (en) * | 2023-04-19 | 2024-04-19 | 武汉科技大学 | Feature enhancement loss method and system for target detection and image classification |
CN116506261B (en) * | 2023-06-27 | 2023-09-08 | 南昌大学 | Visible light communication sensing method and system |
CN116506261A (en) * | 2023-06-27 | 2023-07-28 | 南昌大学 | Visible light communication sensing method and system |
CN116523800A (en) * | 2023-07-03 | 2023-08-01 | 南京邮电大学 | Image noise reduction model and method based on residual dense network and attention mechanism |
CN116523800B (en) * | 2023-07-03 | 2023-09-22 | 南京邮电大学 | Image noise reduction model and method based on residual dense network and attention mechanism |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113610719A (en) | Attention and dense connection residual block convolution kernel neural network image denoising method | |
CN108550115B (en) | Image super-resolution reconstruction method | |
CN112233038B (en) | True image denoising method based on multi-scale fusion and edge enhancement | |
CN111275637B (en) | Attention model-based non-uniform motion blurred image self-adaptive restoration method | |
WO2022047625A1 (en) | Image processing method and system, and computer storage medium | |
CN111754446A (en) | Image fusion method, system and storage medium based on generation countermeasure network | |
CN111861894B (en) | Image motion blur removing method based on generation type countermeasure network | |
CN113012172A (en) | AS-UNet-based medical image segmentation method and system | |
CN114463218B (en) | Video deblurring method based on event data driving | |
CN114841856A (en) | Image super-pixel reconstruction method of dense connection network based on depth residual channel space attention | |
CN113744136A (en) | Image super-resolution reconstruction method and system based on channel constraint multi-feature fusion | |
CN114581330A (en) | Terahertz image denoising method based on multi-scale mixed attention | |
CN111626927A (en) | Binocular image super-resolution method, system and device adopting parallax constraint | |
CN112102259A (en) | Image segmentation algorithm based on boundary guide depth learning | |
CN109871790B (en) | Video decoloring method based on hybrid neural network model | |
CN115861094A (en) | Lightweight GAN underwater image enhancement model fused with attention mechanism | |
Zhang et al. | Dense haze removal based on dynamic collaborative inference learning for remote sensing images | |
Saleem et al. | A non-reference evaluation of underwater image enhancement methods using a new underwater image dataset | |
CN114445299A (en) | Double-residual denoising method based on attention allocation mechanism | |
CN112819705B (en) | Real image denoising method based on mesh structure and long-distance correlation | |
CN116128768B (en) | Unsupervised image low-illumination enhancement method with denoising module | |
CN112200752A (en) | Multi-frame image deblurring system and method based on ER network | |
Yang et al. | RSAMSR: A deep neural network based on residual self-encoding and attention mechanism for image super-resolution | |
CN116563554A (en) | Low-dose CT image denoising method based on hybrid characterization learning | |
CN115272072A (en) | Underwater image super-resolution method based on multi-feature image fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||