CN110020684A - An image denoising method based on a residual convolutional autoencoder network - Google Patents
An image denoising method based on a residual convolutional autoencoder network
- Publication number
- CN110020684A (application CN201910276255.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- network
- residual error
- denoising
- convolution
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 20
- 238000012549 training Methods 0.000 claims abstract description 29
- 238000010606 normalization Methods 0.000 claims abstract description 5
- 230000004913 activation Effects 0.000 claims description 15
- 238000012360 testing method Methods 0.000 claims description 13
- 230000000694 effects Effects 0.000 claims description 7
- 230000000007 visual effect Effects 0.000 claims description 7
- 230000007423 decrease Effects 0.000 claims description 4
- 210000002569 neuron Anatomy 0.000 claims description 4
- 238000013507 mapping Methods 0.000 claims description 3
- 238000004321 preservation Methods 0.000 claims description 3
- 238000012545 processing Methods 0.000 claims description 3
- 230000009467 reduction Effects 0.000 claims description 3
- 238000005520 cutting process Methods 0.000 claims description 2
- 238000013528 artificial neural network Methods 0.000 abstract description 6
- 238000013135 deep learning Methods 0.000 abstract description 2
- 230000006870 function Effects 0.000 description 8
- 238000013527 convolutional neural network Methods 0.000 description 3
- 238000001914 filtration Methods 0.000 description 3
- 230000008569 process Effects 0.000 description 2
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000008901 benefit Effects 0.000 description 1
- 238000010276 construction Methods 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 238000002474 experimental method Methods 0.000 description 1
- 238000003475 lamination Methods 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 230000000717 retained effect Effects 0.000 description 1
- 238000003860 storage Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20172—Image enhancement details
- G06T2207/20192—Edge enhancement; Edge preservation
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Software Systems (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Image Processing (AREA)
Abstract
The invention discloses an image denoising method based on a residual convolutional autoencoder network, aimed at overcoming problems such as the limited feature-extraction capability of traditional shallow linear structures and the weak generalization ability of existing deep-learning image denoising models. Taking a residual convolutional autoencoder block, composed of a residual block, batch normalization layers, and an autoencoder, as the basic denoising network structure, a multi-functional denoising residual convolutional autoencoder neural network is proposed. The disclosed image denoising method not only has blind denoising capability while maintaining high denoising quality and precision, but can also remove noise of types different from those in the training set.
Description
Technical field
The present invention relates to the field of deep learning, and in particular to an image denoising method based on a residual convolutional autoencoder network.
Background technique
Traditional image denoising models can be divided into four major classes: spatial-domain, transform-domain, sparse-representation, and natural-statistics methods. Representative methods have the following characteristics: median filtering (spatial domain) ignores the value of each pixel itself, so denoised images show noticeable blurring; BLS-GSM (transform domain) can retain part of the original image information while denoising; NLSC (sparse representation) has a long denoising computation time; BM3D (natural statistics) can only filter out certain specific noise types.
To overcome the limitations of traditional denoising models, many nonlinear, deep image denoising models based on deep learning have been proposed. Among them, autoencoder (Auto-encoder, AE) networks, convolutional neural networks (Convolutional Neural Network, CNN), and generative adversarial networks (Generative Adversarial Networks, GAN) are widely applied to image denoising. DCAENN, based on an autoencoder network, can effectively remove the noise of chest X-ray images, but can only remove known noise and lacks generalization ability. SSDAs, based on stacked sparse denoising autoencoder networks, combine sparse coding with a deep neural network pre-trained by denoising autoencoders to denoise images; however, SSDAs rely on supervised training and can only remove noise that appears in the training set. DnCNN, based on a residual convolutional neural network, uses residual learning and batch normalization to accelerate the training process; the DnCNN model can handle Gaussian noise of unknown level, but cannot remove noise of unknown type. CGAs, based on conditional generative adversarial networks, combine the trained network with a sharpness-detection network to refine training; CGAs reduce the training difficulty of generative adversarial networks, but tend to lose image detail while denoising.
Summary of the invention
The invention discloses an image denoising method based on a residual convolutional autoencoder network, comprising the following steps: (1) make a training set and a test set; (2) taking a residual convolutional autoencoder block, composed of a residual block, BN layers, and an autoencoder, as the basic denoising network structure, propose a multi-functional denoising residual convolutional autoencoder neural network; (3) train the network and save each of its parameters for the first time; (4) if the first saved network does not meet the denoising requirements, continue training; if it meets the requirements, stop iterating and save the final network model; (5) use the saved final network model to remove image noise and output the denoised image. The disclosed image denoising method not only has blind denoising capability while maintaining high denoising quality and precision, but can also remove noise of types different from those in the training set.
The technical solution provided by the present invention is an image denoising method based on a residual convolutional autoencoder network, characterized by the following steps:
Step 1: use preprocessed original images and the corresponding noisy images as training and test sets. The specific steps are as follows:
(1) Preprocess the three-channel original images of m*m pixels into single-channel grayscale images, and crop the images;
(2) Add the corresponding noise to the preprocessed, cropped grayscale images;
(3) Take each original grayscale image and its corresponding noisy image as one group of data, use the original grayscale image as the label, and make the training and test sets;
Step 2: build the residual convolutional autoencoder block. Its main path consists of n+2 convolutional layers, and its identity-mapping part consists of a convolutional autoencoder structure. The output of the residual convolutional autoencoder block is:
x_(n+2) = f(x) + x_cae
where x_cae is the latent feature extracted from the input x by the convolutional autoencoder, f(x) is the output of the input x after the n+2 convolutional layers, and n is a positive integer greater than 1. In the main path, the first layer has a 1*1 convolution kernel with a Swish activation function; layers 2 through n+1 have identical structure, with an added batch normalization layer, a 3*3 convolution kernel, and a ReLU activation function; the (n+2)-th layer has a 1*1 convolution kernel with a Swish activation function;
The ReLU activation function is:
f(x) = max(0, x)
The Swish activation function is:
f(x) = x · sigmoid(βx) = x / (1 + e^(−βx))
where β is the scaling parameter of x, β > 0;
Step 3: the network structure is mainly composed of the residual convolutional autoencoder blocks proposed in step 2, with (n+2)*a+8 layers in total, where a is a positive integer greater than 2. The first layer is a convolutional layer used for dimensionality reduction, the middle layers are composed of residual convolutional autoencoder blocks and residual convolution blocks, and the last layer is a fully connected layer;
Step 4: feed the training set preprocessed in step 1 into the network model built in step 3 through a queue. Train with error backpropagation, measuring the distance between the true value and the predicted value with a mean-squared-error loss function; at each iteration over the data set, adjust the weights between neurons by gradient descent to reduce the cost function and thereby optimize the network. Judge the denoising effect of the network by the quantitative peak signal-to-noise ratio and qualitative visual experience, and save the parameters of the network model for the first time;
The mean-squared-error loss function is:
MSE = (1/N) * Σ_(i=1)^N (y_i − z_i)^2
where y_i is the label data read in through the queue and z_i is the denoised output data; the smaller the mean squared error, the closer the denoised data is to the label data and the higher the network accuracy;
The peak signal-to-noise ratio formula is:
PSNR = 10 * log_10(MAX^2 / M_MSE)
where MAX is the maximum pixel value (255 for 8-bit images) and M_MSE is the mean squared error between the original image and the processed image; a larger PSNR value indicates less distortion;
Step 5: feed the test set preprocessed in step 1 into the network model trained and optimized in step 4, and judge the denoising effect of the network by the quantitative peak signal-to-noise ratio and qualitative visual experience. If the denoising requirements are not met, return to step 4 to continue training, or retrain the network after tuning its parameters; if the requirements are met, stop iterating and save the final network model;
Step 6: use the saved final network model to remove image noise and output the denoised image.
The beneficial effects of the present invention are:
(1) The multi-functional denoising residual convolutional autoencoder neural network can filter out not only noise at the same level as the training data, but also noise at other levels;
(2) Traditional methods generally remove Gaussian noise with linear filtering and salt-and-pepper noise with median filtering. However, the multi-functional denoising residual convolutional autoencoder neural network, trained with Gaussian noise alone, can remove not only noise of the corresponding type but also other types of noise;
(3) Image details can be retained, giving higher denoising quality and precision.
Detailed description of the invention
Fig. 1 is a flow chart of the invention;
Fig. 2 is a structural diagram of the residual convolutional autoencoder block of the invention;
Fig. 3 is an overall structural diagram of the multi-functional residual convolutional autoencoder network of the invention;
Fig. 4 is a denoising example of a specific embodiment of the invention.
The symbols in the figures are described as follows:
x: the input;
f(x): the output of the main path;
x_cae: the latent feature extracted from the input x by the convolutional autoencoder;
x_(n+2): x_(n+2) = f(x) + x_cae;
Con: convolutional layer;
Bn: batch normalization layer;
Encoder: encoding layer;
Maxpool: max pooling layer;
Unpool: unpooling layer;
Decoder: decoding layer;
RCAE Block: residual convolutional autoencoder block;
C Block: residual convolution block.
Specific embodiment:
To efficiently remove the noise in an image, the residual convolutional autoencoder block shown in Fig. 2 is used to build the multi-functional denoising residual convolutional autoencoder network model shown in Fig. 3. Referring to Fig. 1, the flow chart of the invention comprises the following steps.
Step 1: make the training set and test set:
Use preprocessed original images and the corresponding noisy images as the training set and test set. The specific steps are as follows:
(1) Using the seismic data set from the Tomlinson geophysical services company's rock-mass identification challenge, preprocess the 101*101-pixel three-channel seismic images into 100*100-pixel single-channel grayscale images;
(2) Add the corresponding noise to the preprocessed grayscale images: additive Gaussian noise for the training set; for the test set, multiplicative speckle noise, additive localvar noise, and salt-and-pepper noise are added separately as the experiments require;
(3) Take each original grayscale image and its corresponding noisy image as one group of data, use the original grayscale image as the label, make the training and test sets, and store them in TFRecords format, with 20000 images as the training set and 2000 images as the test set;
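As a rough illustration of step 1, the pairing of clean grayscale labels with noisy inputs can be sketched in NumPy. The noise parameters (sigma, amount) and the channel-averaging grayscale conversion are assumptions for illustration; the TFRecords serialization itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def to_grayscale(rgb):
    """Collapse a 3-channel image to a single channel by averaging (one common choice)."""
    return rgb.mean(axis=-1)

def add_gaussian_noise(img, sigma=0.1):
    """Additive Gaussian noise, as used for the training set."""
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def add_salt_pepper_noise(img, amount=0.05):
    """Salt-and-pepper noise, one of the test-set noise types."""
    out = img.copy()
    mask = rng.random(img.shape)
    out[mask < amount / 2] = 0.0       # pepper
    out[mask > 1 - amount / 2] = 1.0   # salt
    return out

# One (noisy input, clean label) pair, mirroring step 1 (3):
clean = to_grayscale(rng.random((100, 100, 3)))  # 100*100 single-channel label
noisy = add_gaussian_noise(clean)                # noisy training input
assert clean.shape == noisy.shape == (100, 100)
```

Each such pair would then be serialized as one TFRecords example, with the clean image as the label.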
Step 2: build the residual convolutional autoencoder block:
The main path of the residual convolutional autoencoder block consists of n+2 convolutional layers, and the identity-mapping part consists of a convolutional autoencoder structure. The output of the residual convolutional autoencoder block is:
x_(n+2) = f(x) + x_cae
In this example n is 1, so the output of the block is:
x_3 = f(x) + x_cae
where x_cae is the latent feature extracted from the input x by the convolutional autoencoder and f(x) is the output of the input x after the 3 convolutional layers. In the main path, the first layer has a 1*1 convolution kernel with a Swish activation function; the second layer adds a batch normalization layer and has a 3*3 convolution kernel with a ReLU activation function; the third layer has a 1*1 convolution kernel with a Swish activation function. The identity mapping is a 4-layer convolutional autoencoder network;
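The additive skip structure of the block can be sketched as follows. The max-pool/unpool pair stands in for the encoder/decoder of the convolutional autoencoder (Maxpool and Unpool in Fig. 2), and the scaling in `main_path` is a placeholder for the 3 trained convolutional layers; both are illustrative assumptions, not the trained operators.

```python
import numpy as np

def max_pool_2x2(x):
    """Encoder side: 2x2 max pooling."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def unpool_2x2(x):
    """Decoder side: nearest-neighbour unpooling back to the input size."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def conv_autoencoder(x):
    """Stand-in for the convolutional autoencoder: encode (pool), then decode (unpool)."""
    return unpool_2x2(max_pool_2x2(x))

def main_path(x):
    """Stand-in for the 3 convolutional layers f(x); a trained network learns this map."""
    return 0.1 * x

def rcae_block(x):
    """Residual convolutional autoencoder block output: x_3 = f(x) + x_cae."""
    return main_path(x) + conv_autoencoder(x)

x = np.arange(16.0).reshape(4, 4)
x3 = rcae_block(x)
assert x3.shape == x.shape  # the block preserves the spatial size
```

The key point the sketch shows is that both paths produce outputs of the input's size, so they can be summed element-wise.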
The ReLU activation function is:
f(x) = max(0, x)
The Swish activation function is:
f(x) = x · sigmoid(βx) = x / (1 + e^(−βx))
where β is the scaling parameter of x; in this example β is 1;
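The two activation functions above can be written directly in NumPy (β = 1 as in this example):

```python
import numpy as np

def relu(x):
    """ReLU: f(x) = max(0, x)."""
    return np.maximum(0.0, x)

def swish(x, beta=1.0):
    """Swish: f(x) = x * sigmoid(beta * x) = x / (1 + exp(-beta * x))."""
    return x / (1.0 + np.exp(-beta * x))

assert relu(np.array([-2.0, 3.0])).tolist() == [0.0, 3.0]
assert swish(np.array([0.0]))[0] == 0.0  # Swish passes through the origin
```

Unlike ReLU, Swish is smooth and lets small negative values through, which is why it is used on the 1*1 bottleneck layers here.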
Step 3: the network structure is mainly composed of the residual convolutional autoencoder blocks proposed in step 2, with (n+2)*a+8 layers in total, where a is a positive integer greater than 2. In this example n is 1 and a is 3, so the network has 17 layers in total: the first layer is a convolutional layer used for dimensionality reduction; layers 2-4 form one residual convolutional autoencoder block; layers 5-7 form one residual convolution block; the next 6 layers form 2 residual convolutional autoencoder blocks; layers 14-16 form one residual convolution block; and the last layer is a fully connected layer with 10000 features;
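The layer count can be checked against the (n+2)*a+8 formula: the a residual convolutional autoencoder blocks contribute (n+2)*a layers, and the remaining 8 layers are the dimensionality-reduction convolution, the two 3-layer residual convolution blocks, and the fully connected layer. A small sketch of this bookkeeping (the layout names are descriptive labels, not identifiers from the patent):

```python
def total_layers(n, a):
    """Total depth of the network: (n + 2) * a + 8 layers."""
    return (n + 2) * a + 8

# Layer layout for this example (n = 1, a = 3), as (component, layer count):
layout = [
    ("conv_dim_reduction", 1),   # layer 1
    ("rcae_block", 3),           # layers 2-4
    ("residual_conv_block", 3),  # layers 5-7
    ("rcae_block", 3),           # layers 8-10
    ("rcae_block", 3),           # layers 11-13
    ("residual_conv_block", 3),  # layers 14-16
    ("fully_connected", 1),      # layer 17, 10000 features
]
assert sum(c for _, c in layout) == total_layers(1, 3) == 17
```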
Step 4: feed the training set preprocessed in step 1 into the network model built in step 3 through a queue. Train with error backpropagation, measuring the distance between the true value and the predicted value with a mean-squared-error loss function; at each iteration over the data set, adjust the weights between neurons by gradient descent to reduce the cost function and thereby optimize the network. Judge the denoising effect of the network by the quantitative peak signal-to-noise ratio and qualitative visual experience, and save the parameters of the network model for the first time;
The mean-squared-error loss function is:
MSE = (1/N) * Σ_(i=1)^N (y_i − z_i)^2
where y_i is the label data read in through the queue and z_i is the denoised output data; the smaller the mean squared error, the closer the denoised data is to the label data and the higher the network accuracy;
The peak signal-to-noise ratio formula is:
PSNR = 10 * log_10(MAX^2 / M_MSE)
where MAX is the maximum pixel value (255 for 8-bit images) and M_MSE is the mean squared error between the original image and the processed image; a larger PSNR value indicates less distortion;
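The two evaluation quantities of step 4 are straightforward to compute; the sketch below assumes 8-bit images (peak value 255), which is a common convention rather than a value fixed by the patent:

```python
import numpy as np

def mse(y, z):
    """Mean squared error between label image y and denoised output z."""
    return float(np.mean((y - z) ** 2))

def psnr(y, z, peak=255.0):
    """PSNR = 10 * log10(peak^2 / MSE); a larger value means less distortion."""
    return 10.0 * np.log10(peak ** 2 / mse(y, z))

label = np.full((8, 8), 100.0)
output = label + 1.0  # every pixel off by exactly 1 -> MSE = 1
assert mse(label, output) == 1.0
assert abs(psnr(label, output) - 48.13) < 0.01  # 10 * log10(255^2) ~= 48.13 dB
```

Training would monitor the MSE as the loss and report PSNR on the test set, as described in steps 4 and 5.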
Step 5: feed the test set preprocessed in step 1 into the network model trained and optimized in step 4, and judge the denoising effect of the network by the quantitative peak signal-to-noise ratio and qualitative visual experience. If the denoising requirements are not met, return to step 4 to continue training, or retrain the network after tuning its parameters; if the requirements are met, stop iterating and save the final network model;
Step 6: denoise with the network model:
Use the saved final network model to remove image noise and output the denoised image.
Fig. 4 shows an example of the invention removing noise of types different from the training set, where (a) is the original, noise-free image; (b) shows, in order, images containing Gaussian noise, localvar noise, multiplicative noise, and salt-and-pepper noise; (c) shows, in order, the images after the invention removes the Gaussian, localvar, multiplicative, and salt-and-pepper noise.
Claims (1)
1. An image denoising method based on a residual convolutional autoencoder network, characterized by the following steps:
Step 1: use preprocessed original images and the corresponding noisy images as training and test sets. The specific steps are as follows:
(1) Preprocess the three-channel original images of m*m pixels into single-channel grayscale images, and crop the images;
(2) Add the corresponding noise to the preprocessed, cropped grayscale images;
(3) Take each original grayscale image and its corresponding noisy image as one group of data, use the original grayscale image as the label, and make the training and test sets;
Step 2: build the residual convolutional autoencoder block. Its main path consists of n+2 convolutional layers, and its identity-mapping part consists of a convolutional autoencoder structure. The output of the residual convolutional autoencoder block is:
x_(n+2) = f(x) + x_cae
where x_cae is the latent feature extracted from the input x by the convolutional autoencoder, f(x) is the output of the input x after the n+2 convolutional layers, and n is a positive integer greater than 1. In the main path, the first layer has a 1*1 convolution kernel with a Swish activation function; layers 2 through n+1 have identical structure, with an added batch normalization layer, a 3*3 convolution kernel, and a ReLU activation function; the (n+2)-th layer has a 1*1 convolution kernel with a Swish activation function;
The ReLU activation function is:
f(x) = max(0, x)
The Swish activation function is:
f(x) = x · sigmoid(βx) = x / (1 + e^(−βx))
where β is the scaling parameter of x, β > 0;
Step 3: the network structure is mainly composed of the residual convolutional autoencoder blocks proposed in step 2, with (n+2)*a+8 layers in total, where a is a positive integer greater than 2. The first layer is a convolutional layer used for dimensionality reduction, the middle layers are composed of residual convolutional autoencoder blocks and residual convolution blocks, and the last layer is a fully connected layer;
Step 4: feed the training set preprocessed in step 1 into the network model built in step 3 through a queue. Train with error backpropagation, measuring the distance between the true value and the predicted value with a mean-squared-error loss function; at each iteration over the data set, adjust the weights between neurons by gradient descent to reduce the cost function and thereby optimize the network. Judge the denoising effect of the network by the quantitative peak signal-to-noise ratio and qualitative visual experience, and save the parameters of the network model for the first time;
The mean-squared-error loss function is:
MSE = (1/N) * Σ_(i=1)^N (y_i − z_i)^2
where y_i is the label data read in through the queue and z_i is the denoised output data; the smaller the mean squared error, the closer the denoised data is to the label data and the higher the network accuracy;
The peak signal-to-noise ratio formula is:
PSNR = 10 * log_10(MAX^2 / M_MSE)
where MAX is the maximum pixel value (255 for 8-bit images) and M_MSE is the mean squared error between the original image and the processed image; a larger PSNR value indicates less distortion;
Step 5: feed the test set preprocessed in step 1 into the network model trained and optimized in step 4, and judge the denoising effect of the network by the quantitative peak signal-to-noise ratio and qualitative visual experience. If the denoising requirements are not met, return to step 4 to continue training, or retrain the network after tuning its parameters; if the requirements are met, stop iterating and save the final network model;
Step 6: use the saved final network model to remove image noise and output the denoised image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910276255.3A CN110020684B (en) | 2019-04-08 | 2019-04-08 | Image denoising method based on residual convolution self-coding network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910276255.3A CN110020684B (en) | 2019-04-08 | 2019-04-08 | Image denoising method based on residual convolution self-coding network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110020684A true CN110020684A (en) | 2019-07-16 |
CN110020684B CN110020684B (en) | 2021-01-29 |
Family
ID=67190690
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910276255.3A Expired - Fee Related CN110020684B (en) | 2019-04-08 | 2019-04-08 | Image denoising method based on residual convolution self-coding network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110020684B (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103996056A (en) * | 2014-04-08 | 2014-08-20 | 浙江工业大学 | Tattoo image classification method based on deep learning |
US20180144243A1 (en) * | 2016-11-23 | 2018-05-24 | General Electric Company | Hardware system design improvement using deep learning algorithms |
CN108376387A (en) * | 2018-01-04 | 2018-08-07 | 复旦大学 | Image deblurring method based on polymerization expansion convolutional network |
CN109118435A (en) * | 2018-06-15 | 2019-01-01 | 广东工业大学 | A kind of depth residual error convolutional neural networks image de-noising method based on PReLU |
CN109359519A (en) * | 2018-09-04 | 2019-02-19 | 杭州电子科技大学 | A kind of video anomaly detection method based on deep learning |
CN109584325A (en) * | 2018-10-30 | 2019-04-05 | 河北科技大学 | A kind of two-way coloration method for the animation image unanimously fighting network based on the U-shaped period |
Non-Patent Citations (1)
Title |
---|
TANG Xianlun: "Image recognition method based on conditional deep convolutional generative adversarial networks", Acta Automatica Sinica *
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110007347B (en) * | 2019-04-09 | 2020-06-30 | 西南石油大学 | Deep learning seismic data denoising method |
CN110007347A (en) * | 2019-04-09 | 2019-07-12 | 西南石油大学 | A kind of deep learning seismic data denoising method |
CN111028163A (en) * | 2019-11-28 | 2020-04-17 | 湖北工业大学 | Convolution neural network-based combined image denoising and weak light enhancement method |
CN111028163B (en) * | 2019-11-28 | 2024-02-27 | 湖北工业大学 | Combined image denoising and dim light enhancement method based on convolutional neural network |
CN111340729A (en) * | 2019-12-31 | 2020-06-26 | 深圳大学 | Training method for depth residual error network for removing Moire pattern of two-dimensional code |
CN111340729B (en) * | 2019-12-31 | 2023-04-07 | 深圳大学 | Training method for depth residual error network for removing Moire pattern of two-dimensional code |
CN113139553A (en) * | 2020-01-16 | 2021-07-20 | 中国科学院国家空间科学中心 | U-net-based method and system for extracting aurora ovum form of ultraviolet aurora image |
CN111260591B (en) * | 2020-03-12 | 2022-04-26 | 武汉大学 | Image self-adaptive denoising method based on attention mechanism |
CN111260591A (en) * | 2020-03-12 | 2020-06-09 | 武汉大学 | Image self-adaptive denoising method based on attention mechanism |
CN111461224A (en) * | 2020-04-01 | 2020-07-28 | 西安交通大学 | Phase data unwrapping method based on residual self-coding neural network |
CN111461224B (en) * | 2020-04-01 | 2022-08-16 | 西安交通大学 | Phase data unwrapping method based on residual self-coding neural network |
CN111580161A (en) * | 2020-05-21 | 2020-08-25 | 长江大学 | Earthquake random noise suppression method based on multi-scale convolution self-coding neural network |
CN111681293B (en) * | 2020-06-09 | 2022-08-23 | 西南交通大学 | SAR image compression method based on convolutional neural network |
CN111681293A (en) * | 2020-06-09 | 2020-09-18 | 西南交通大学 | SAR image compression method based on convolutional neural network |
CN113822437A (en) * | 2020-06-18 | 2021-12-21 | 辉达公司 | Deep layered variational automatic encoder |
CN113822437B (en) * | 2020-06-18 | 2024-05-24 | 辉达公司 | Automatic variable-dividing encoder for depth layering |
CN111915513A (en) * | 2020-07-10 | 2020-11-10 | 河海大学 | Image denoising method based on improved adaptive neural network |
CN114513684A (en) * | 2020-11-16 | 2022-05-17 | 飞狐信息技术(天津)有限公司 | Method for constructing video image quality enhancement model, method and device for enhancing video image quality |
CN114513684B (en) * | 2020-11-16 | 2024-05-28 | 飞狐信息技术(天津)有限公司 | Method for constructing video image quality enhancement model, video image quality enhancement method and device |
CN112561918A (en) * | 2020-12-31 | 2021-03-26 | 中移(杭州)信息技术有限公司 | Convolutional neural network training method and focus segmentation method |
CN113094993B (en) * | 2021-04-12 | 2022-03-29 | 电子科技大学 | Modulation signal denoising method based on self-coding neural network |
CN113094993A (en) * | 2021-04-12 | 2021-07-09 | 电子科技大学 | Modulation signal denoising method based on self-coding neural network |
CN114494047A (en) * | 2022-01-11 | 2022-05-13 | 辽宁师范大学 | Biological image denoising method based on dual-enhancement residual error network |
CN114494047B (en) * | 2022-01-11 | 2024-04-02 | 辽宁师范大学 | Biological image denoising method based on dual-enhancement residual error network |
CN115794357A (en) * | 2023-01-16 | 2023-03-14 | 山西清众科技股份有限公司 | Device and method for automatically building multi-task network |
Also Published As
Publication number | Publication date |
---|---|
CN110020684B (en) | 2021-01-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110020684A (en) | A kind of image de-noising method based on residual error convolution autoencoder network | |
CN110007347A (en) | A kind of deep learning seismic data denoising method | |
CN110045419A (en) | A kind of perceptron residual error autoencoder network seismic data denoising method | |
CN110060690B (en) | Many-to-many speaker conversion method based on STARGAN and ResNet | |
CN109919204B (en) | Noise image-oriented deep learning clustering method | |
CN111640444B (en) | CNN-based adaptive audio steganography method and secret information extraction method | |
CN110221346A (en) | A kind of data noise drawing method based on the full convolutional neural networks of residual block | |
CN109272499A (en) | Non-reference picture quality appraisement method based on convolution autoencoder network | |
CN110751044A (en) | Urban noise identification method based on deep network migration characteristics and augmented self-coding | |
CN112215054B (en) | Depth generation countermeasure method for denoising underwater sound signal | |
CN111341294B (en) | Method for converting text into voice with specified style | |
CN111833277A (en) | Marine image defogging method with non-paired multi-scale hybrid coding and decoding structure | |
CN112541865A (en) | Underwater image enhancement method based on generation countermeasure network | |
CN113379601A (en) | Real world image super-resolution method and system based on degradation variational self-encoder | |
CN114509731B (en) | Radar main lobe anti-interference method based on double-stage depth network | |
CN116091288A (en) | Diffusion model-based image steganography method | |
CN110533575A (en) | A kind of depth residual error steganalysis method based on isomery core | |
CN114200520B (en) | Seismic data denoising method | |
CN116112685A (en) | Image steganography method based on diffusion probability model | |
Wang et al. | High visual quality image steganography based on encoder-decoder model | |
CN113949880B (en) | Extremely-low-bit-rate man-machine collaborative image coding training method and coding and decoding method | |
CN113936680B (en) | Single-channel voice enhancement method based on multi-scale information perception convolutional neural network | |
CN109948517A (en) | A kind of high-resolution remote sensing image semantic segmentation method based on intensive full convolutional network | |
CN113743188B (en) | Feature fusion-based internet video low-custom behavior detection method | |
CN110958417B (en) | Method for removing compression noise of video call video based on voice clue |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20210129 |