CN109785249A - An efficient image denoising method based on a persistent-memory dense network - Google Patents

An efficient image denoising method based on a persistent-memory dense network

Info

Publication number
CN109785249A
CN109785249A (application CN201811576192.5A)
Authority
CN
China
Prior art keywords
network
noise
rdu
block
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811576192.5A
Other languages
Chinese (zh)
Inventor
Liu Hui (刘辉)
Liang Zuzhong (梁祖仲)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kunming University of Science and Technology
Original Assignee
Kunming University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kunming University of Science and Technology filed Critical Kunming University of Science and Technology
Priority to CN201811576192.5A priority Critical patent/CN109785249A/en
Publication of CN109785249A publication Critical patent/CN109785249A/en
Pending legal-status Critical Current


Landscapes

  • Image Analysis (AREA)

Abstract

The present invention relates to an efficient image denoising method based on a persistent-memory dense network and belongs to the field of image processing. The method first selects training, validation, and test sets and preprocesses them to obtain degraded blocks with noise and clean blocks without noise. The degraded blocks with noise are processed by a shallow feature extraction module, a persistent-memory dense network module, an inner-layer nested network module, and a residual estimation module to obtain estimated denoised blocks. The mean squared error between the estimated denoised blocks output by the network and the known clean blocks without noise is computed as the loss metric. Starting from the network's output end, weight gradients are computed with the Adam optimizer and the network parameters are updated. Training ends once the network converges, yielding a denoising model. Feeding an image containing Gaussian noise into the trained denoising model produces the noise-free image. The present invention removes the additive white Gaussian noise present in natural images with high accuracy without destroying the texture and edge information of the original image.

Description

An efficient image denoising method based on a persistent-memory dense network
Technical field
The present invention relates to an efficient image denoising method based on a persistent-memory dense network and belongs to the technical field of image processing.
Background art
Image denoising is a classical fundamental problem in computer vision that has received wide attention in both academic research and industrial applications. The goal of denoising is to recover a clean image x from a noisy observation y = x + n, where the noise n is usually assumed to be additive white Gaussian noise. From a Bayesian viewpoint, given the likelihood, the image prior plays a decisive role in denoising. Over the past decades, denoising methods have exploited most known image priors, including non-local self-similarity models, sparse models, gradient models, and hidden Markov field models; among these, methods based on self-similarity, such as BM3D, achieve excellent denoising results.
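The degradation model above, y = x + n with n drawn from additive white Gaussian noise, can be sketched in a few lines. This is only an illustration of the noise model; the flat test patch, the pixel range of [0, 255], and the value σ = 25 are assumptions for the example, not values fixed by the text.

```python
import numpy as np

def add_gaussian_noise(x, sigma, seed=0):
    """Return y = x + n, where n is additive white Gaussian noise
    with standard deviation sigma (pixel range assumed [0, 255])."""
    rng = np.random.default_rng(seed)
    n = rng.normal(0.0, sigma, size=x.shape)
    return x + n

x = np.full((8, 8), 128.0)           # a flat "clean" patch
y = add_gaussian_noise(x, sigma=25)  # noisy observation y = x + n
print(y.shape)                       # (8, 8)
```

Denoising then amounts to estimating x back from y, which the methods surveyed below attempt with increasingly deep networks.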
However, denoising models based on such prior-driven methods have an obvious drawback: the test phase requires a long optimization procedure. With limited theoretical breakthroughs in the underlying techniques, image denoising performance has reached a bottleneck, and further substantial improvements in restoration quality have become increasingly difficult. In addition, since the denoising model is a non-convex problem, some parameters involved in the optimization must be set manually, which further limits improvements in denoising performance. In recent years, as deep learning has made rapid progress in engineering fields such as image classification and recognition, the research community has begun to apply deep convolutional networks to image denoising, introducing several high-performing algorithms along the way.
In 2008, Viren Jain et al. proposed denoising natural images with convolutional networks (Natural Image Denoising with Convolutional Networks, CN1 for short). The method generates training samples with a specific noise model and, after training, outperforms wavelet-based and Markov-random-field-based methods; its blind-denoising performance is on a par with other models under non-blind settings. The authors argue that convolutional networks and Markov random field (MRF) methods are related in their mathematical formulation, but MRF methods consume large computing resources for parameter estimation and inference of the probabilistic model, whereas the convolutional approach avoids the statistical analysis of density estimation by casting denoising as a regression problem. In 2012, Xie Junyuan et al. proposed the stacked sparse denoising auto-encoder (SSDA), which combines sparse coding and deep networks for image denoising, using pre-trained denoising auto-encoders for initialization. Similar to the training of CN1, SSDA adds hidden layers one at a time and initializes each new hidden layer from the weights of the preceding layers. After each new hidden layer is added, a KL-divergence term is introduced to compute the loss of the current model; by using small logarithmic weights, the mean activation of the hidden units is driven toward zero to achieve sparsity. Both methods train their networks layer by layer, and the initial weights are not generated automatically but require manual intervention inside every layer, which to some extent prevents the original model from reaching its optimum and makes the whole training process time-consuming. Meanwhile, because the models are built by stacking fully connected layers, the total number of parameters becomes extremely large even at shallow network depths.
In 2016, Mao Xiaojiao et al. proposed RED30, a deep convolutional encoder-decoder network with symmetric skip connections, for image denoising. The network uses multiple convolutional and deconvolutional layers as its basic units and directly learns an end-to-end mapping from noisy to clean images. Because of the symmetric connections, the image signal is passed directly back to the shallower layers close to the input during back-propagation, which greatly accelerates training of the whole network and effectively alleviates the vanishing-gradient problem. Since the convolutions extract information directly from the original image and the deconvolutional layers reconstruct from the extracted features, the direct information flow avoids losing the original image signal to the greatest extent. Starting from this network, neural networks for image denoising gradually developed toward greater depth. In 2017, Zhang Kai et al. proposed DnCNN, which uses residual learning and batch normalization to accelerate network training; the residual learning strategy lets the network learn only the noise, and the final output is obtained as the difference between the input and the learned noise. Both algorithms show excellent denoising performance, but within their limited layers they do not fully exploit the feature information of the image, relying only on plain stacks of convolutional layers or residual blocks. Beyond a certain depth, such chain-structured stacking of basic blocks easily causes gradients to vanish, while insufficient depth makes it hard to learn a complex mapping. For this reason, the present invention introduces a dense network structure to break the mutual constraint between these two factors.
Summary of the invention
The present invention provides an efficient image denoising method based on a persistent-memory dense network, which achieves efficient image denoising while retaining as much detail of the original image as possible.
The technical solution of the invention is an efficient image denoising method based on a persistent-memory dense network (Hierarchical Nesting Dense Block for Single-image Super-resolution, DBSR for short), comprising the following steps:
Step1: select the training, validation, and test sets, and preprocess the training and validation sets to obtain degraded blocks with noise and clean blocks without noise;
Step2: process the degraded blocks with noise through the shallow feature extraction module, the persistent-memory dense network module, the inner-layer nested network module, and the residual estimation module to obtain estimated denoised blocks;
First, the shallow feature extraction module performs a preliminary extraction of noise features from the input image; the extracted information is then passed to the persistent-memory dense network module, which extracts fine noise features through stacked residual dense units. After the outputs of the multiple residual dense units are concatenated along the channel dimension, the resulting information is further passed to the two-path inner-layer nested network module for feature refinement. The information obtained at this point is the noise extracted by the network; subtracting the output of the inner-layer nested network module from the output of the shallow feature extraction module yields the estimated denoised block;
Step3: compute the mean squared error between the estimated denoised blocks output by the network and the known clean blocks without noise, as the loss metric;
Step4: starting from the network's output end, compute the weight gradients with the Adam optimizer and update the network parameters;
Step5: repeat Step2 to Step4; training ends when the network converges, yielding the denoising model;
Step6: feed an image containing Gaussian noise into the trained denoising model to obtain the noise-free image.
Further, in step Step1, the 900 2K-resolution PNG images used in the NTIRE image restoration and enhancement challenge are selected as the training and validation sets of the model, with the first 800 images used for training and the remaining 100 for validation. The standard image denoising datasets Set12 and BSD68 are selected to test the network model. Within the training and validation sets, additive white Gaussian noise is added to each 2K original image; image blocks of size 96 × 96 are then cropped from the same positions of the original image and the noisy image with a stride of 27, yielding the clean blocks y_nf without noise and the degraded blocks x with noise.
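The preprocessing of Step1 — adding noise and cropping aligned 96 × 96 blocks with stride 27 — can be sketched as below. The 256 × 256 test image and the noise level are illustrative stand-ins for the 2K NTIRE images.

```python
import numpy as np

def extract_patches(img, patch=96, stride=27):
    """Crop aligned blocks of size patch x patch with the given stride."""
    H, W = img.shape[:2]
    patches = []
    for i in range(0, H - patch + 1, stride):
        for j in range(0, W - patch + 1, stride):
            patches.append(img[i:i + patch, j:j + patch])
    return np.stack(patches)

clean = np.random.rand(256, 256)                      # stand-in original image
noisy = clean + np.random.normal(0, 25 / 255, clean.shape)
y_nf = extract_patches(clean)   # clean blocks without noise
x = extract_patches(noisy)      # degraded blocks, cropped at the same positions
print(x.shape)                  # (36, 96, 96)
```

Cropping both images with identical offsets is what keeps each degraded block x paired with its clean block y_nf for the loss computation of Step3.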
Further, in step Step2, the first layer of the network uses a shallow feature extraction module consisting of a single convolution block to extract noise features preliminarily. Then, taking the residual dense unit as the basic unit, the persistent-memory dense network module is constructed by nesting basic units layer by layer so as to extract multi-granularity noise feature information. In the second half of the network, the inner-layer nested network integrates the channels of the noise feature maps through two parallel sub-networks, refining the noise features with convolution kernels of different numbers and sizes to obtain the high-frequency noise information of the image. Finally, at the end of the network, the residual estimation module combines the image content containing the high-frequency noise information with the preliminarily extracted noise features and removes the noise by taking their difference, yielding the denoised image.
Further, the specific steps of Step2 are:
Step2.1: the shallow feature extraction module preliminarily extracts noise features. Taking the degraded block x with noise as input, the module increases the number of channels of the input block and performs a preliminary extraction of its noise information;
With the degraded block x as input, the shallow feature extraction module applies a convolution Conv with weights and bias and a parametric rectified linear unit PReLU, and outputs T_{-1}. The process is expressed mathematically as T_{-1} = PReLU_a(C_1), where C_1 = W_1 * x + b_1 is the output of the biased convolution, W_1 is the weight of the convolution, b_1 is the bias, and a is the initial value of the PReLU parameter;
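Step2.1 can be sketched as a single Conv + PReLU layer in PyTorch. The kernel size (3 × 3) and the 64-channel output width are assumptions — the text does not fix them for this module; only the Conv/bias/PReLU structure is taken from the description.

```python
import torch
import torch.nn as nn

class ShallowFeatureExtract(nn.Module):
    """Sketch of T_{-1} = PReLU_a(W1 * x + b1); the 3x3 kernel and
    the 64-channel width are assumed, not fixed by the text."""
    def __init__(self, in_ch=1, feat_ch=64):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, feat_ch, kernel_size=3, padding=1)
        self.act = nn.PReLU(init=0.25)   # 'a' is the PReLU parameter

    def forward(self, x):
        return self.act(self.conv(x))

x = torch.randn(1, 1, 96, 96)            # one degraded block
t_minus1 = ShallowFeatureExtract()(x)
print(t_minus1.shape)                    # torch.Size([1, 64, 96, 96])
```

The 64-channel output T_{-1} is what the PMDB consumes in Step2.2 and what the residual estimation module subtracts from in Step2.4.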
Step2.2: the persistent-memory dense network module refines the noise feature information:
The persistent-memory dense network module (persistent memory dense block, PMDB) takes the residual dense unit (Residual Dense Unit, RDU) as its basic unit and nests further RDUs inside each RDU in order to extract multi-granularity noise feature information;
Specifically, the PMDB takes the output T_{-1} of the shallow feature extraction module as input and obtains the output T_0 after one convolution block; T_0 is then passed through multiple recursively connected RDU units, giving the output T_PMDB. Since the RDU unit is the main component of the whole PMDB module, the RDU unit is described first;
For ease of understanding, the information flow in an RDU without nesting is described first. Let RDU_k denote the k-th RDU stacked in the PMDB, with 1 ≤ k ≤ D, where D is the total number of RDU units in the PMDB; its input is RDU_{k-1} and its output is RDU_{k+1}. Inside RDU_k, a convolution block compresses the dimensionality of the input, giving the noise feature RDU_{k,b1}, where the symbol b1 identifies this convolution block. This feature is then fed into a dense network N_dense, producing RDU_{k,nd}, where the symbol nd identifies the dense network. The dense network uses 8 convolution blocks, each outputting 64 feature maps, so when the 8 blocks are finally concatenated along the channel dimension, 64 × 8 = 512 feature maps are obtained. Because such a high feature dimension easily overloads memory, a 1 × 1 convolution at the end of the dense network reduces the number of channels from 512 back to 64. The dense network N_dense is expressed mathematically as:
RDU_{k,nd} = H_l([d_1, …, d_i, …, d_8])
where H_l is the channel-reduction operation at the end of N_dense, d_i denotes the output of the i-th convolution block RDU_{k,d_i} inside N_dense, and the symbol [·] denotes channel-wise concatenation. Finally, a global residual connection extracts the noise feature, expressed as RDU_{k+1} = RDU_{k-1} − RDU_{k,nd} = f_RDU(RDU_{k-1}), where f_RDU denotes the mapping function of the RDU unit;
In a PMDB with nested RDU units, the number of convolution blocks, the channel-wise concatenation, and the feature-map dimension reduction used inside the nested RDUs are all identical to the non-nested case; only the input of each inner convolution block RDU_{k,d_i} of the dense network N_dense differs. In the dense network of a non-nested RDU, the convolution block whose output is d_i takes d_{i-1} as input; in the dense network of a nested RDU, the input of each convolution block comes from another RDU unit, i.e. the convolution block whose output is d_i takes f_RDU(d_{i-1}) as input;
At the end of the persistent-memory dense network module PMDB, the outputs of the preceding layers are concatenated along the channel dimension through recursive connections, giving the module output T_PMDB. The concatenation is expressed as T_PMDB = [RDU_1, …, RDU_k, …, RDU_D], where RDU_k denotes the k-th RDU unit;
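The non-nested RDU described above can be sketched as follows: 8 convolution blocks of 64 maps each, channel-wise concatenation to 512, a 1 × 1 reduction back to 64, and the global residual RDU_{k+1} = RDU_{k-1} − RDU_{k,nd}. The compression block's kernel size is assumed, and the nested variant (where each d_i passes through another RDU) is omitted for brevity.

```python
import torch
import torch.nn as nn

class DenseNet8(nn.Module):
    """N_dense: 8 conv blocks, each emitting 64 feature maps; their outputs
    are concatenated (64*8 = 512 channels) and reduced back to 64 with a
    1x1 convolution, as described in the text."""
    def __init__(self, ch=64):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.PReLU())
            for _ in range(8))
        self.reduce = nn.Conv2d(ch * 8, ch, 1)   # H_l: 512 -> 64

    def forward(self, x):
        outs, d = [], x
        for blk in self.blocks:
            d = blk(d)            # d_{i-1} feeds d_i (non-nested case)
            outs.append(d)
        return self.reduce(torch.cat(outs, dim=1))

class RDU(nn.Module):
    """One residual dense unit: compress, run N_dense, then apply the
    global residual RDU_{k+1} = RDU_{k-1} - RDU_{k,nd}."""
    def __init__(self, ch=64):
        super().__init__()
        self.compress = nn.Conv2d(ch, ch, 3, padding=1)  # assumed kernel
        self.dense = DenseNet8(ch)

    def forward(self, x):
        return x - self.dense(self.compress(x))

x = torch.randn(1, 64, 48, 48)
out = RDU()(x)
print(out.shape)                  # torch.Size([1, 64, 48, 48])
```

Stacking D such units and concatenating their outputs, [RDU_1, …, RDU_D], would give T_PMDB with 64 × D channels.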
Step2.3: the inner-layer nested network module refines the noise information. Taking the output T_PMDB of the persistent-memory dense network module as input, the module further refines the noise information through two parallel sub-networks. One network path reduces the original feature channels of the image to 64 with filters of size 1 × 1; the other path first reduces the dimension in the same way, and then reduces the 64 channels to 32 with 3 × 3 filters. The nested network module introduces a more complex nonlinear mapping without adding many parameters to the network. At the end of the module, the feature maps of the two paths are concatenated along the channel dimension; the resulting output T_nin is the noise information that was added to the original clean image;
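The two-path module of Step2.3 can be sketched as below. The input width (here 256 channels, i.e. D = 4 stacked RDUs of 64 maps each) is an assumption; the text only fixes the 1 × 1 reduction to 64, the 3 × 3 reduction to 32, and the final concatenation.

```python
import torch
import torch.nn as nn

class InnerNested(nn.Module):
    """Two parallel paths over T_PMDB: one 1x1 reduction to 64 maps, and
    one 1x1 reduction followed by a 3x3 conv down to 32 maps; the two
    outputs are concatenated along the channel axis to give T_nin."""
    def __init__(self, in_ch=256):
        super().__init__()
        self.path_a = nn.Conv2d(in_ch, 64, 1)
        self.path_b = nn.Sequential(
            nn.Conv2d(in_ch, 64, 1),
            nn.Conv2d(64, 32, 3, padding=1))

    def forward(self, t_pmdb):
        return torch.cat([self.path_a(t_pmdb), self.path_b(t_pmdb)], dim=1)

t = torch.randn(1, 256, 48, 48)   # e.g. D = 4 RDUs of 64 channels each
out = InnerNested()(t)
print(out.shape)                  # torch.Size([1, 96, 48, 48])
```

Because both paths start with 1 × 1 reductions, the extra nonlinearity comes at a small parameter cost, which is the point the text makes about this module.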
Step2.4: the residual estimation module subtracts the output T_nin of the inner-layer nested network module from the feature T_{-1} extracted by the shallow network, and fine-tunes the difference with a convolution block to obtain the estimated denoised block y_est.
Further, step Step3 comprises:
The loss is computed between the estimated denoised block y_est output by the network and the known clean block y_nf without noise. Specifically, the distance between the two is measured with the mean squared error, expressed mathematically as L(Θ) = (1/2N) Σ_{i=1}^{N} ‖y_est^(i) − y_nf^(i)‖², where N is the total number of noisy degraded blocks x input in one iteration, y_est^(i) denotes the estimated denoised block obtained after network training for the i-th noisy degraded block, y_nf^(i) is the clean block without noise corresponding to the i-th input noisy degraded block, and Θ collectively denotes all the parameters in Step2.
Further, the details of steps Step4 and Step5 are as follows:
The network weights are updated with the Adam optimizer while repeating steps Step2 to Step3. Iteration stops when the change in the network loss between adjacent iterations is less than 1e-3, or when the number of iterations after convergence reaches 200; when iteration finishes, the trained network model is obtained.
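The Step4/Step5 loop — Adam updates with the stated stopping conditions — can be sketched as follows. The stand-in one-layer model, the batch generator, and the learning rate of 1e-4 are assumptions for illustration; only the optimizer choice, the 1e-3 loss-change tolerance, and the 200-iteration cap come from the text.

```python
import torch

def train(model, loader_fn, max_iters=200, tol=1e-3, lr=1e-4):
    """Sketch of Step4/Step5: Adam updates, stopping when the loss change
    between adjacent iterations falls below tol or after max_iters
    iterations. loader_fn yields (noisy, clean) batches; lr is assumed."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    prev = None
    for _ in range(max_iters):
        x, y_nf = loader_fn()
        loss = ((model(x) - y_nf) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        if prev is not None and abs(prev - loss.item()) < tol:
            break                      # converged per the 1e-3 criterion
        prev = loss.item()
    return model

model = torch.nn.Conv2d(1, 1, 3, padding=1)   # stand-in denoiser
batch = lambda: (torch.randn(2, 1, 16, 16), torch.zeros(2, 1, 16, 16))
train(model, batch, max_iters=5)
```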
The beneficial effects of the present invention are:
1. The feature extraction process uses the persistent-memory dense network module, making the extracted noise points more accurate;
2. Recursive block connections are used in the persistent-memory dense network module, so blocks affected by noise to different degrees can be processed jointly;
3. An inner-layer nested network module is added in the later stage of noise feature extraction, which helps introduce a more complex mapping inside the network and ensures that the structural information of the original image is not overly damaged while the noise points are extracted;
4. Adding a residual estimation module at the end of the network greatly reduces the difficulty of training;
5. With different training sets, the network can also be used for image deblocking and deblurring and, after an up-sampling layer is added, for image super-resolution reconstruction;
6. The present invention removes the additive white Gaussian noise present in natural images with high accuracy without destroying the texture and edge information of the original image.
Description of the drawings
Fig. 1 is the flow chart of step Step2 of the present invention;
Fig. 2 shows the convergence curve of model training and the PSNR values of the various algorithms for white Gaussian noise with standard deviation σ = 45;
Fig. 3 shows the denoising results and enlarged details of the various algorithms on image "img004" of the standard test set Set12 with σ = 45: (3a) original image; (3b) noisy image; (3c) EPLL; (3d) BM3D; (3e) NCSR; (3f) DnCNN; (3g) MemNet; (3h) the proposed DBSR algorithm;
Fig. 4 shows the denoising results and enlarged details of the various algorithms on image "img011" of the standard test set Set12 with σ = 15: (4a) original image; (4b) noisy image; (4c) EPLL; (4d) BM3D; (4e) NCSR; (4f) DnCNN; (4g) MemNet; (4h) the proposed DBSR algorithm.
Specific embodiment
Embodiment 1: as shown in Figs. 1-4, an efficient image denoising method based on a persistent-memory dense network comprises steps Step1 to Step6, together with the particulars of Step1, Step2 (Step2.1 to Step2.4), and Step3, exactly as described above.
The training parameters and training objective of the invention are as follows:
The training parameters involved in Step2 are: the single convolution block of the shallow feature extraction module; the convolution kernel sizes and kernel numbers contained in the persistent memory dense network module PMDB and the inner nested network module; the initial value of the parametric rectified linear unit; and, for the Adam optimizer of Step4, the learning rate and the number of iterations of the iterative process.
The training objective of the invention is that the loss value computed in Step3 has converged to a low value when model training is complete.
Further, the details of steps Step4 and Step5 are as follows:
The network weights are updated with the Adam optimizer, and steps Step2-Step3 are repeated; the stopping condition of the iteration is that the change in the network loss value between adjacent iterations is less than 1e-3, or that the network, having converged, reaches 200 iterations; after the iterations are completed, the trained network model is obtained.
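The Adam update and the stopping rule above can be sketched as follows (standard Adam; the learning rate, the toy scalar loss, and all other numeric values except the 1e-3 threshold and the 200-iteration cap are illustrative assumptions):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-4, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update with bias-corrected first and second moments."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta = np.array([1.0])
m, v = np.zeros_like(theta), np.zeros_like(theta)
prev_loss, it = np.inf, 0
while it < 200:                          # cap: at most 200 iterations
    it += 1
    loss = float(theta[0] ** 2)          # toy loss standing in for the MSE of Step3
    grad = 2 * theta
    theta, m, v = adam_step(theta, grad, m, v, it)
    if abs(prev_loss - loss) < 1e-3:     # stop when the loss change drops below 1e-3
        break
    prev_loss = loss
print(it, theta.shape)
```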
To illustrate the effect of the invention, Fig. 2 shows the convergence curve of model training for Gaussian white noise with standard deviation σ = 45, together with the corresponding PSNR values of various algorithms.
As can be seen from Fig. 2, the denoising algorithm proposed by the invention has a clear advantage in peak signal-to-noise ratio (PSNR): its improvement over the recently proposed MemNet is 0.11 dB, and its improvement over the classical denoising algorithm BM3D is more pronounced, at 1.54 dB.
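The PSNR metric used in this comparison can be computed as follows (the standard definition, assuming 8-bit images with peak value 255; the constant-error toy images are illustrative):

```python
import numpy as np

def psnr(clean, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((clean.astype(np.float64) - restored.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

clean = np.zeros((4, 4))
restored = np.full((4, 4), 16.0)        # constant error of 16 gray levels
print(round(psnr(clean, restored), 2))  # 10*log10(255^2/256) ~ 24.05
```

A 0.11 dB gain on this logarithmic scale corresponds to a small but consistent reduction in mean squared error.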
Fig. 3 qualitatively compares several classical algorithms in terms of the fidelity of original image details while removing noise. The comparison shows that considerable noise remains in the images restored by classical algorithms such as BM3D and NCSR; the deep-learning-based algorithms DnCNN and MemNet remove the noise effectively but distort the detail information. The invention is clearly superior to the aforementioned algorithms in both noise removal and fidelity to the original information.
Fig. 4 qualitatively compares the noise-removal ability of several classical algorithms under higher fidelity to the original image details. The comparison shows that considerable noise information remains for classical algorithms such as BM3D and NCSR and for the deep-learning-based DnCNN, whereas the noise in the images processed by MemNet and the invention is effectively suppressed; the invention, however, outperforms MemNet in information fidelity.
The embodiments of the present invention have been described in detail above with reference to the accompanying drawings, but the present invention is not limited to the above embodiments; within the scope of knowledge possessed by those of ordinary skill in the art, various changes may also be made without departing from the concept of the present invention.

Claims (6)

1. An efficient image denoising method based on a persistent memory dense network, characterized by comprising the following steps:
Step1, selecting a training set, a validation set and a test set, and preprocessing the training set and the validation set to obtain degraded patches with noise and clean patches without noise;
Step2, processing the noisy degraded patches through a shallow feature extraction module, a persistent memory dense network module, an inner nested network module and a residual estimation module to obtain estimated denoised patches;
first, preliminary noise feature extraction is performed on the input image by the shallow feature extraction module, and the extracted information is then passed to the persistent memory dense network module, which performs fine noise feature extraction through a stack of residual dense units; after the outputs of multiple residual dense units are concatenated along the channel dimension, the resulting information is further passed to the two-path inner nested network module for feature refinement; the information obtained at this point is the noise extracted by the network, and the estimated denoised patch is obtained by subtracting the output value of the inner nested network module from the output of the shallow feature extraction module;
Step3, computing the mean squared error between the estimated denoised patch output by the network and the known noise-free clean patch, so as to measure the loss value;
Step4, starting from the end of the network, computing the weight gradients with the Adam optimizer and updating the network parameters;
Step5, repeating steps Step2-Step4; training ends after the network converges, yielding the denoising model;
Step6, inputting an image containing Gaussian noise into the trained denoising model to obtain the noise-removed image.
2. The efficient image denoising method based on a persistent memory dense network according to claim 1, characterized in that: in step Step1, the 900 2K high-definition PNG images used in the image restoration and enhancement challenge NTIRE are selected as the training and validation sets of the model, of which the first 800 images are used for model training and the remaining 100 for validation; the standard image denoising datasets Set12 and BSD68 are selected to test the network model; in the training and validation sets, Gaussian white noise is added to each 2K high-definition original image, and patches of size 96 × 96 are then cropped at the same positions of the original image and the noisy image with a stride of 27, yielding the noise-free clean patch y_nf and the noisy degraded patch x.
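The patch-pair preparation of this claim can be sketched as follows (a NumPy sketch; the helper name and the toy image size are assumptions, while the Gaussian noise, the 96 × 96 patch size and the stride of 27 follow the claim):

```python
import numpy as np

def make_patch_pairs(img, sigma, patch=96, stride=27, rng=None):
    """Add Gaussian white noise to an image and crop aligned patch pairs
    (clean y_nf, noisy x) of size patch x patch with the given stride."""
    rng = rng or np.random.default_rng(0)
    noisy = img + rng.normal(0.0, sigma, img.shape)
    pairs = []
    h, w = img.shape
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            pairs.append((img[i:i + patch, j:j + patch],
                          noisy[i:i + patch, j:j + patch]))
    return pairs

img = np.zeros((150, 150))                 # toy stand-in for a 2K image
pairs = make_patch_pairs(img, sigma=45)
print(len(pairs), pairs[0][0].shape)       # 3 crop positions per axis -> 9 pairs
```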
3. The efficient image denoising method based on a persistent memory dense network according to claim 1, characterized in that: in step Step2, the first layer of the network preliminarily extracts noise features using a shallow feature extraction module consisting of a single convolution block; then, with the residual dense unit as the basic unit, the persistent memory dense network module is constructed by nesting the basic units layer by layer, so as to extract multi-granularity noise feature information; in the latter half of the network, the inner nested network uses two parallel sub-networks to perform channel integration on the noise feature maps, refining the noise information features with convolution kernels of different numbers and sizes to obtain the high-frequency noise information of the image; finally, at the end of the network, the image content containing the high-frequency noise information is integrated with the preliminarily extracted noise features, and the noise-removed image is obtained by taking the difference in the residual estimation module.
4. The efficient image denoising method based on a persistent memory dense network according to claim 1, characterized in that step Step2 specifically comprises:
Step2.1, the shallow feature extraction module preliminarily extracts noise features: taking the noisy degraded patch x as the input value, the shallow feature extraction module increases the number of channels of the input patch and performs a preliminary extraction of its noise information;
the shallow feature extraction module takes the noisy degraded patch x as input and, after a convolution operation Conv, a weight bias and a parametric rectified linear unit PReLU, outputs T_-1; the process is expressed mathematically as T_-1 = PReLU(a, C_1), where C_1 = W_1 * x + b_1 is the output of the biased convolution, W_1 is the weight parameter of the convolution operation, b_1 is the bias, and a is the initial value of the parametric rectified linear unit PReLU;
Step2.2, the persistent memory dense network module refines the noise feature information:
the persistent memory dense network module (persistent memory dense block, PMDB) takes the residual dense unit (Residual Dense Unit, RDU) as its basic unit, and further nests RDUs inside each RDU so as to extract multi-granularity noise feature information;
specifically, the PMDB takes the output value T_-1 of the shallow feature extraction module as input and obtains the output value T_0 after one convolution block; T_0 is then passed through multiple recursively connected RDU units, producing the output value T_PMDB; since the PMDB module is mainly composed of RDU units, the RDU unit is described first;
For ease of understanding, the information flow of a non-nested RDU unit is described first. Let RDU_k denote the k-th RDU stacked in the PMDB, with k = 1, …, D, where D is the total number of RDU units in the PMDB; the input value of RDU_k is then RDU_{k-1} and its output value is RDU_{k+1}. Inside RDU_k, one convolution block first compresses the dimension of the input value, producing the noise feature RDU_{k,b1}, where the subscript b1 identifies this convolution block; this feature is then fed into a persistent memory dense network N_dense, whose output is RDU_{k,nd}, where the subscript nd identifies the dense network. The dense network uses 8 convolution blocks, each outputting 64 feature maps, so that finally combining the 8 convolution blocks channel-wise yields 64 × 8 = 512 feature maps; since such a high feature-map dimension easily causes memory overload, a 1 × 1 convolution kernel at the end of the dense network reduces the number of channels from 512 to 64. The persistent memory dense network N_dense is expressed mathematically as:
RDU_{k,nd} = H_l([d_1, …, d_i, …, d_8])
where H_l is the channel-reduction step at the end of N_dense, d_i denotes the output value RDU_{k,di} of the i-th convolution block inside N_dense, and the symbol [·] denotes channel-wise concatenation (Concatenation); finally, the noise feature is extracted by a global residual connection, expressed mathematically as RDU_{k+1} = RDU_{k-1} − RDU_{k,nd} = f_RDU(RDU_{k-1}), where f_RDU denotes the mapping function corresponding to this RDU unit;
in a PMDB with nested RDU units, the number of convolution blocks, the channel-connection mode and the feature-map dimension reduction used inside a nested RDU unit are all identical to those of a non-nested RDU unit; only the input values of the convolution blocks inside the persistent memory dense network N_dense differ. In the dense network inside a non-nested RDU unit, the convolution block whose output is d_i takes d_{i-1} as its input, whereas in the dense network inside a nested RDU unit, the input of each convolution block comes from another RDU unit, i.e. the convolution block whose output is d_i takes f_RDU(d_{i-1}) as its input;
at the end of the persistent memory dense network module PMDB, the outputs of the preceding layers are concatenated along the channel dimension in a recursive manner, yielding the module output T_PMDB; the concatenation is expressed as T_PMDB = [RDU_1, …, RDU_k, …, RDU_D], where RDU_k denotes the k-th RDU unit;
Step2.3, the inner nested network module refines the noise information: taking the output value T_PMDB of the persistent memory dense network module as input, the inner nested network module further refines the noise information with two parallel sub-networks; one network path reduces the original feature channels of the image to 64 with a filter of size 1 × 1, and the other path first reduces the dimension in the same way and then reduces the 64 channels to 32 with a 3 × 3 filter; the nested network module introduces more complex nonlinear mappings without adding many parameters to the network; at the end of the module, the feature maps of the two paths are concatenated along the channel dimension, and the resulting output T_nin is the noise information that was added to the original clean image;
Step2.4, residual estimation module: this module subtracts the output T_nin of the inner nested network module from the feature T_-1 extracted by the shallow network and fine-tunes the difference with a convolution block, obtaining the estimated denoised patch y_est.
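The N_dense structure of claim 4 — 8 convolution blocks whose 64-channel outputs are concatenated into 512 feature maps and reduced back to 64 by H_l, followed by the global residual connection — can be sketched as follows (a NumPy sketch in which random per-pixel channel maps stand in for real learned convolutions; array sizes are assumptions):

```python
import numpy as np

def dense_block(x, rng, n_blocks=8, growth=64):
    """Sketch of N_dense: 8 convolution blocks, each emitting 64 feature maps,
    concatenated channel-wise (64*8 = 512 maps) and reduced back to 64 channels
    by a 1x1 convolution H_l. Each "convolution" here is a random per-pixel
    channel map, for illustration only."""
    outputs = []
    inp = x
    for _ in range(n_blocks):
        w = rng.standard_normal((growth, inp.shape[0])) * 0.01
        inp = np.tensordot(w, inp, axes=([1], [0]))    # stand-in for d_i
        outputs.append(inp)
    cat = np.concatenate(outputs, axis=0)              # [d_1, ..., d_8]: 512 maps
    w_hl = rng.standard_normal((64, cat.shape[0])) * 0.01
    return np.tensordot(w_hl, cat, axes=([1], [0]))    # H_l: 512 -> 64 channels

rng = np.random.default_rng(2)
rdu_in = rng.standard_normal((64, 8, 8))
rdu_nd = dense_block(rdu_in, rng)
rdu_out = rdu_in - rdu_nd   # global residual connection RDU_{k+1} = RDU_{k-1} - RDU_{k,nd}
print(rdu_out.shape)        # (64, 8, 8)
```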
5. The efficient image denoising method based on a persistent memory dense network according to claim 4, characterized in that step Step3 comprises:
computing the loss value from the estimated denoised patch y_est output by the network and the known noise-free clean patch y_nf; specifically, the distance between the two is measured by the mean squared error, expressed mathematically as L(Θ) = (1 / 2N) · Σ_{i=1}^{N} ‖y_est^(i) − y_nf^(i)‖², where N is the total number of noisy degraded patches x fed in one iteration, y_est^(i) is the estimated denoised patch obtained after network training for the i-th noisy degraded patch, y_nf^(i) is the noise-free clean patch corresponding to the i-th input noisy degraded patch, and Θ denotes all the parameters in Step2.
6. The efficient image denoising method based on a persistent memory dense network according to claim 4, characterized in that the details of steps Step4 and Step5 are as follows:
the network weights are updated with the Adam optimizer, and steps Step2-Step3 are repeated; the stopping condition of the iteration is that the change in the network loss value between adjacent iterations is less than 1e-3, or that the network, having converged, reaches 200 iterations; after the iterations are completed, the trained network model is obtained.
CN201811576192.5A 2018-12-22 2018-12-22 A kind of Efficient image denoising method based on duration memory intensive network Pending CN109785249A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811576192.5A CN109785249A (en) 2018-12-22 2018-12-22 A kind of Efficient image denoising method based on duration memory intensive network

Publications (1)

Publication Number Publication Date
CN109785249A true CN109785249A (en) 2019-05-21

Family

ID=66497556

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811576192.5A Pending CN109785249A (en) 2018-12-22 2018-12-22 A kind of Efficient image denoising method based on duration memory intensive network

Country Status (1)

Country Link
CN (1) CN109785249A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102156960A (en) * 2010-12-16 2011-08-17 新奥特(北京)视频技术有限公司 Picture noise adding method
US20150086126A1 (en) * 2012-04-27 2015-03-26 Nec Corporation Image processing method, image processing system, image processing device, and image processing device
US20150339806A1 (en) * 2014-05-26 2015-11-26 Fujitsu Limited Image denoising method and image denoising apparatus
CN106709875A (en) * 2016-12-30 2017-05-24 北京工业大学 Compressed low-resolution image restoration method based on combined deep network
CN108304755A (en) * 2017-03-08 2018-07-20 腾讯科技(深圳)有限公司 The training method and device of neural network model for image procossing
CN108492265A (en) * 2018-03-16 2018-09-04 西安电子科技大学 CFA image demosaicing based on GAN combines denoising method
CN108629736A (en) * 2017-03-15 2018-10-09 三星电子株式会社 System and method for designing super-resolution depth convolutional neural networks
CN108961186A (en) * 2018-06-29 2018-12-07 赵岩 A kind of old film reparation recasting method based on deep learning
CN109767386A (en) * 2018-12-22 2019-05-17 昆明理工大学 A kind of rapid image super resolution ratio reconstruction method based on deep learning

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
BAOSHUN SHI等: "SBM3D: Sparse Regularization Model Induced by BM3D for Weighted Diffraction Imaging", 《IEEE ACCESS》 *
GIORGIO PATRINI等: "Making Deep Neural Networks Robust to Label Noise: A Loss Correction Approach", 《2017 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR)》 *
刘辉等: "基于均匀空间色差度量的彩色图像椒盐噪声滤波算法", 《传感器与微系统》 *
高净植等: "改进深度残差卷积神经网络的LDCT图像估计", 《计算机工程与应用》 *

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110232361B (en) * 2019-06-18 2021-04-02 中国科学院合肥物质科学研究院 Human behavior intention identification method and system based on three-dimensional residual dense network
CN110232361A (en) * 2019-06-18 2019-09-13 中国科学院合肥物质科学研究院 Human body behavior intension recognizing method and system based on the dense network of three-dimensional residual error
CN110349103A (en) * 2019-07-01 2019-10-18 昆明理工大学 It is a kind of based on deep neural network and jump connection without clean label image denoising method
CN110490816A (en) * 2019-07-15 2019-11-22 哈尔滨工程大学 A kind of underwater Heterogeneous Information data noise reduction
CN110490816B (en) * 2019-07-15 2022-11-15 哈尔滨工程大学 Underwater heterogeneous information data noise reduction method
CN110738231A (en) * 2019-07-25 2020-01-31 太原理工大学 Method for classifying mammary gland X-ray images by improving S-DNet neural network model
CN110738231B (en) * 2019-07-25 2022-12-27 太原理工大学 Method for classifying mammary gland X-ray images by improving S-DNet neural network model
CN110569738A (en) * 2019-08-15 2019-12-13 杨春立 natural scene text detection method, equipment and medium based on dense connection network
CN110569738B (en) * 2019-08-15 2023-06-06 杨春立 Natural scene text detection method, equipment and medium based on densely connected network
US11985358B2 (en) 2019-10-16 2024-05-14 Tencent Technology (Shenzhen) Company Limited Artifact removal method and apparatus based on machine learning, and method and apparatus for training artifact removal model based on machine learning
EP3985972A4 (en) * 2019-10-16 2022-11-16 Tencent Technology (Shenzhen) Company Limited Machine learning-based artifact removal method and apparatus, and machine learning-based artifact removal model training method and apparatus
CN110838095A (en) * 2019-11-06 2020-02-25 广西师范大学 Single image rain removing method and system based on cyclic dense neural network
CN110838095B (en) * 2019-11-06 2022-06-07 广西师范大学 Single image rain removing method and system based on cyclic dense neural network
CN111105375A (en) * 2019-12-17 2020-05-05 北京金山云网络技术有限公司 Image generation method, model training method and device thereof, and electronic equipment
CN111105375B (en) * 2019-12-17 2023-08-22 北京金山云网络技术有限公司 Image generation method, model training method and device thereof, and electronic equipment
CN111275643B (en) * 2020-01-20 2022-09-02 西南科技大学 Real noise blind denoising network system and method based on channel and space attention
CN111275643A (en) * 2020-01-20 2020-06-12 西南科技大学 True noise blind denoising network model and method based on channel and space attention
CN111681298A (en) * 2020-06-08 2020-09-18 南开大学 Compressed sensing image reconstruction method based on multi-feature residual error network
CN112150384A (en) * 2020-09-29 2020-12-29 中科方寸知微(南京)科技有限公司 Method and system based on fusion of residual error network and dynamic convolution network model
CN112150384B (en) * 2020-09-29 2024-03-29 中科方寸知微(南京)科技有限公司 Method and system based on fusion of residual network and dynamic convolution network model
CN112419219A (en) * 2020-11-25 2021-02-26 广州虎牙科技有限公司 Image enhancement model training method, image enhancement method and related device
CN113284059A (en) * 2021-04-29 2021-08-20 Oppo广东移动通信有限公司 Model training method, image enhancement method, device, electronic device and medium
CN116051408B (en) * 2023-01-06 2023-10-27 郑州轻工业大学 Image depth denoising method based on residual error self-coding
CN116051408A (en) * 2023-01-06 2023-05-02 郑州轻工业大学 Image depth denoising method based on residual error self-coding

Similar Documents

Publication Publication Date Title
CN109785249A (en) A kind of Efficient image denoising method based on duration memory intensive network
CN106204467B (en) Image denoising method based on cascade residual error neural network
CN103927531B (en) It is a kind of based on local binary and the face identification method of particle group optimizing BP neural network
CN104361328B (en) A kind of facial image normalization method based on adaptive multiple row depth model
CN110796625B (en) Image compressed sensing reconstruction method based on group sparse representation and weighted total variation
De-Maeztu et al. Near real-time stereo matching using geodesic diffusion
CN109064423B (en) Intelligent image repairing method for generating antagonistic loss based on asymmetric circulation
CN107748895A (en) UAV Landing landforms image classification method based on DCT CNN models
CN110648292A (en) High-noise image denoising method based on deep convolutional network
CN106204482A (en) Based on the mixed noise minimizing technology that weighting is sparse
CN111723701A (en) Underwater target identification method
CN114663685B (en) Pedestrian re-recognition model training method, device and equipment
CN112200733B (en) Grid denoising method based on graph convolution network
Su et al. Multi‐scale cross‐path concatenation residual network for Poisson denoising
CN114202017A (en) SAR optical image mapping model lightweight method based on condition generation countermeasure network
Yap et al. A recursive soft-decision approach to blind image deconvolution
CN105913451B (en) A kind of natural image superpixel segmentation method based on graph model
CN111291810A (en) Information processing model generation method based on target attribute decoupling and related equipment
CN114998107A (en) Image blind super-resolution network model, method, equipment and storage medium
Xu et al. Dual-branch deep image prior for image denoising
CN103037168A (en) Stable Surfacelet domain multi-focus image fusion method based on compound type pulse coupled neural network (PCNN)
CN116405100B (en) Distortion signal restoration method based on priori knowledge
Nejati et al. Low-rank regularized collaborative filtering for image denoising
Zou et al. EDCNN: a novel network for image denoising
CN111047537A (en) System for recovering details in image denoising

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
AD01 Patent right deemed abandoned

Effective date of abandoning: 20230707