CN108876737B - Image denoising method combining residual learning and structural similarity - Google Patents
- Publication number
- CN108876737B CN108876737B CN201810583825.9A CN201810583825A CN108876737B CN 108876737 B CN108876737 B CN 108876737B CN 201810583825 A CN201810583825 A CN 201810583825A CN 108876737 B CN108876737 B CN 108876737B
- Authority
- CN
- China
- Prior art keywords
- data set
- image
- training data
- noise
- definition
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T5/70
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention provides an image denoising method combining residual learning and structural similarity. A plurality of high-definition images in the BSD database are selected to construct a training data set and a test data set; center cutting is performed on the high-definition images in the training data set to obtain a cut training data set, the cut training data set is preprocessed to obtain a preprocessed training data set, and Gaussian noise with certain intensity is added to the preprocessed training data set and the test data set respectively to obtain a noise-containing training data set and a noise-containing test data set; a deep convolutional neural network is designed and trained by minimizing a joint loss function of the L2 norm and SSIM, and a clear image data set is obtained by calculation between the noise-containing test data set and the noise residual images produced by the deep convolutional neural network. The invention has the advantage that the denoising effect better conforms to human visual perception.
Description
Technical Field
The invention belongs to the field of image processing and computer vision, and particularly relates to an image denoising method combining residual learning and structural similarity.
Background
Image denoising has long been a research hotspot in the field of image processing. In practical applications, images can be acquired in more and more ways, but during acquisition and transmission they are affected by equipment and external factors that introduce various kinds of noise, making post-processing difficult; the noise also seriously hinders human understanding of the image content. It is therefore important to establish an image denoising method that conforms to human visual perception.
The purpose of image denoising is to recover a denoised clear image from a noisy image to be processed. With the development of denoising algorithms, fairly clear results have been obtained for various types of noise. Representative denoising algorithms include local methods, non-local methods and sparse representation, and these methods perform well on specific types of noise. However, the noise in real noisy images is complex and difficult to describe with a specific model, so traditional methods denoise such images poorly. In recent years, end-to-end image denoising neural networks built on the feature learning and nonlinear feature mapping capabilities of convolutional neural networks have further improved denoising performance.
The performance of image denoising algorithms is evaluated subjectively and objectively: subjective evaluation means that human eyes directly perceive the denoising effect, while objective evaluation relies on existing indices such as mean square error (MSE), peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). At present, the loss function of deep-learning-based denoising networks is generally based on the L2 norm. Although the L2 norm helps to raise the PSNR index, PSNR is inconsistent with human visual perception, so the edges and local textures of denoised images are usually smoothed and detail features are lost. Adding SSIM, which agrees better with human visual perception, to the loss function therefore improves the consistency of the denoised image with human visual perception.
A search of the prior art found that Chinese patent application publication No. CN106204468A (published 2016.12.07) discloses an image denoising method based on a ReLU convolutional neural network, which adopts a ReLU-based convolutional neural network model with several convolutional layers and ReLU activation layers and establishes a mapping from the noisy image to the denoised clear image by minimizing MSE as the loss function. The model is simple and achieves a certain denoising effect, but has the following problems: (1) as network depth increases, a model that directly learns the denoised clear image does not converge easily; (2) the loss function considers only MSE; although large noise can be suppressed, small noise is tolerated, and the denoised result is often inconsistent with subjective human perception.
Disclosure of Invention
In order to solve the above technical problem, the invention discloses an image denoising method combining residual learning and structural similarity, which uses the idea of residual learning to design a network model whose input is a noise-containing image and whose output is a noise residual image, and makes the denoising effect conform better to human visual perception by optimizing a loss function combining the L2 norm and SSIM.
In order to achieve the purpose, the technical scheme of the invention comprises the following steps:
step 1: selecting a plurality of high-definition images in a BSD database to respectively construct a training data set and a test data set;
step 2: performing center cutting on the high-definition images in the training data set to obtain a cut training data set, preprocessing the cut training data set to obtain a preprocessed training data set, and adding Gaussian noise with certain intensity to the preprocessed training data set and the test data set respectively to obtain a noise-containing training data set and a noise-containing test data set;
step 3: designing a deep convolutional neural network, training it by minimizing a joint loss function of the L2 norm and SSIM, and obtaining a clear image data set by calculation between the noise-containing test data set and the noise residual images obtained from the deep convolutional neural network.
Preferably, the number of the high-definition images in the step 1 is K;
the high-definition images in the step 1 are:
P_1, P_2, ..., P_K
wherein the image P_k, k ∈ [1, K], has resolution M × N, and the pixel values of image P_k are P_k(i, j), i ∈ [1, M], j ∈ [1, N];
the number of the high-definition images in the training data set in the step 1 is K_a, and the remaining K_b = K − K_a high-definition images are used as the test data set;
the high-definition images in the training data set are P_a, a ∈ [1, K_a], with pixel values P_a(i_a, j_a), i_a ∈ [1, M], j_a ∈ [1, N];
the high-definition images in the test data set are P_b, b ∈ [1, K_b], with pixel values P_b(i_b, j_b), i_b ∈ [1, M], j_b ∈ [1, N];
Preferably, the high-definition images in the training data set in the step 2 are the K_a high-definition images in the training data set of the step 1;
the cutting in the step 2 retains the central K_Z × K_Z part of each image; the pixel values of the high-definition image P′_a, a ∈ [1, K_a], in the cut training data set are P′_a(i, j), i ∈ [1, K_Z], j ∈ [1, K_Z];
the preprocessing of the cut training data set comprises the following steps:
sliding a K_Y × K_Y window over each high-definition image P′_a, a ∈ [1, K_a], in the cut training data set with step size α, obtaining K_Z/α columns and K_Z/α rows of image blocks, (K_Z/α)² blocks in total;
then horizontally translating, vertically flipping, scaling to 0.8 times the original size, and rotating each image block clockwise by 90 degrees, obtaining (K_Z/α)² · 4 high-definition image blocks per image;
the preprocessed training data set is P_c, c ∈ [1, (K_Z/α)² · 4 · K_a], wherein each high-definition image contributes (K_Z/α)² image blocks of size K_Y × K_Y;
in the step 2, the Gaussian noise with certain intensity is added to the preprocessed training data set as follows:
Gaussian noise with certain intensity is added to the (K_Z/α)² · 4 · K_a high-definition image blocks of the preprocessed training data set, obtaining the noise-containing training data set:
X_c = P_c + σ_p · randn(size(P_c)), c ∈ [1, (K_Z/α)² · 4 · K_a]
wherein X_c is the corresponding noisy image, P_c is the high-definition image, σ_p is the noise standard deviation, and randn(size(P_c)) generates a random matrix of the same size as P_c;
in the step 2, the Gaussian noise with certain intensity is added to the test data set as follows:
Gaussian noise with certain intensity is added to the test data set of the step 1, obtaining the noise-containing test data set:
X_b = P_b + σ_p · randn(size(P_b)), b ∈ [1, K_b]
wherein X_b is the corresponding noisy image, P_b is the high-definition image, σ_p is the noise standard deviation, and randn(size(P_b)) generates a random matrix of the same size as P_b;
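The noise-addition formula X = P + σ_p · randn(size(P)) can be sketched directly in numpy. The noise level σ = 25 (on a 0–255 intensity scale) is an illustrative choice; the patent only specifies "Gaussian noise with certain intensity".

```python
import numpy as np

def add_gaussian_noise(image, sigma=25.0, rng=None):
    """Implement X = P + sigma_p * randn(size(P)): add zero-mean Gaussian
    noise with standard deviation sigma to the clean image."""
    rng = np.random.default_rng(0) if rng is None else rng
    return image + sigma * rng.standard_normal(image.shape)
```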
Preferably, the first layer and the second layer of the end-to-end deep convolutional neural network in the step 3 are 3 × 3 convolutional layers, each consisting of a convolution, a batch normalization layer and a ReLU activation layer;
the third layer to the tenth layer of the end-to-end deep convolutional neural network each consist of an Inception module formed by parallel branches of 1 × 1 convolution, 3 × 3 convolution and maximum pooling;
the last layer of the end-to-end deep convolutional neural network is a 3 × 3 convolution that outputs the noise residual image, establishing a mapping v = F(X, W) from the noise-containing image X to the noise residual image v, wherein F(X, W) is the nonlinear mapping function of the whole network;
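The claim that the feature-map size stays consistent through the whole network can be checked with the standard convolution output-size formula. The layer count below (two 3 × 3 conv layers, eight Inception stages, one final 3 × 3 conv) is taken from the description of FIG. 4; the "same" padding values are assumptions consistent with a size-preserving network.

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Standard convolution output-size formula: floor((W - F + 2P)/S) + 1."""
    return (size - kernel + 2 * pad) // stride + 1

def forward_sizes(size=64):
    """Trace the spatial size through the sketched network: with P = 1 for
    every 3x3 kernel, each of the 2 + 8 + 1 stages preserves the 64x64
    training-patch size."""
    sizes = [size]
    for _ in range(2 + 8 + 1):
        sizes.append(conv_out(sizes[-1], 3, stride=1, pad=1))
    return sizes
```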
designing a joint loss function of the L2 norm and SSIM, and training the deep convolutional neural network by minimizing the joint loss function:
L(W) = ||Y − Ŷ||²₂ + α · (1 − SSIM(Y, Ŷ))
wherein L(W) is the loss function, Y and Ŷ are respectively the real noise-free image and the denoised image, W is the set of parameters to be learned by the neural network, and α controls the influence degree of the SSIM loss on the whole loss function;
further, Ŷ = X − F(X, W), wherein X is the input noisy image from the noise-containing training data set of the step 2, c ∈ [1, (K_Z/α)² · 4 · K_a], v = F(X, W) is the noise residual image output by the network, and SSIM(x, y) is the structural similarity index, which measures image quality in terms of brightness, contrast and structure, with value range [0, 1], a larger value indicating higher similarity;
the specific formula for SSIM is as follows:
wherein u isx,uyThe mean values of the images x, y,variance, σ, of the images x, y, respectivelyxyIs the covariance of the images x, y, C1, C2 are constants;
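The SSIM formula and the joint loss can be sketched in numpy. Standard SSIM averages the index over local windows; the whole-image variant below, the common constants C1 = (0.01·L)², C2 = (0.03·L)² with L = 255, and the weight α = 0.5 are simplifying assumptions, not the patent's exact choices.

```python
import numpy as np

def ssim_global(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Global-statistics SSIM following the formula above:
    [(2 ux uy + C1)(2 cov + C2)] / [(ux^2 + uy^2 + C1)(vx + vy + C2)]."""
    ux, uy = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # sigma_x^2, sigma_y^2
    cov = ((x - ux) * (y - uy)).mean()   # sigma_xy
    return ((2 * ux * uy + c1) * (2 * cov + c2)) / (
           (ux ** 2 + uy ** 2 + c1) * (vx + vy + c2))

def joint_loss(y_true, y_denoised, alpha=0.5):
    """Joint loss L = ||Y - Y_hat||^2 + alpha * (1 - SSIM(Y, Y_hat))."""
    l2 = np.mean((y_true - y_denoised) ** 2)
    return l2 + alpha * (1.0 - ssim_global(y_true, y_denoised))
```

For identical images the SSIM term is 1 and the loss vanishes; adding noise lowers SSIM and raises the loss, which is what the minimization exploits.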
optimizing the joint loss function by the Adam method to obtain the network parameters W;
inputting the K_b noisy images X_b in the noise-containing test data set of the step 2 into the deep convolutional neural network, and outputting the noise residual image v;
subtracting the noise residual image v from the noisy image X_b in the noise-containing test data set to obtain the denoised clear image:
P̂_b = X_b − v, b ∈ [1, K_b]
wherein K_b is the number of noisy images in the noise-containing test data set.
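The residual-learning inference step — denoised image = noisy input minus predicted noise residual — can be sketched as follows. The `residual_net` parameter stands in for the trained mapping F(X, W); the oracle predictor used in the usage example, which returns the exact noise, is purely illustrative.

```python
import numpy as np

def denoise_with_residual(noisy, residual_net):
    """Residual-learning inference: P_hat = X - v, where v = residual_net(X)
    is the network's predicted noise residual image."""
    return noisy - residual_net(noisy)

# Usage with a hypothetical oracle predictor that returns the exact noise:
rng = np.random.default_rng(2)
clean = np.full((16, 16), 128.0)
noise = 25 * rng.standard_normal((16, 16))
restored = denoise_with_residual(clean + noise, lambda x: noise)
```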
Compared with the prior art, the invention has the following advantages:
the invention uses residual learning to construct an end-to-end convolutional neural network whose input is a noisy image and whose output is a noise residual image; small-scale convolutions capture local detail features of the image while reducing network parameters, and batch normalization layers accelerate model convergence; the feature-map size is kept consistent throughout the convolution process, preserving image edge information;
the invention designs a loss function combining the L2 norm and SSIM for network parameter learning; since PSNR does not account for human visual characteristics, while SSIM measures image quality in terms of brightness, contrast and structure and agrees better with human vision, optimizing a loss function that includes SSIM improves the consistency of the denoised image with subjective human perception.
Drawings
FIG. 1: the overall flow diagram of the invention;
FIG. 2: the detailed structure diagram of the network sub-module Inception;
FIG. 3: the overall structure diagram of the network model;
FIG. 4: the image size and the number of channels of each layer of the network are set.
Detailed Description
In order to facilitate the understanding and implementation of the present invention for those of ordinary skill in the art, the present invention is further described in detail with reference to the accompanying drawings and examples, it is to be understood that the embodiments described herein are merely illustrative and explanatory of the present invention and are not restrictive thereof.
FIG. 1 is a general flow diagram of the present invention;
fig. 2 is a detailed structural diagram of the network sub-module Inception; the tensor output by the upper layer is B × C × W × H, wherein C denotes the number of channels, B the batch size, and W, H the width and height of the feature map; the middle part consists of convolutional layers with filters of different sizes, wherein F denotes the convolution kernel size, S the convolution stride, and P the boundary padding; finally, the output concatenates the branch features through a filter-concatenation operation, producing a 256-channel feature map; the Inception module consists of several parallel branches such as 1 × 1 convolution, 3 × 3 convolution and a maximum pooling layer, widening the network; the 1 × 1 convolutions effectively fuse the features of the channels of each feature map while reducing the channel dimension and the number of parameters; a cascade of two 3 × 3 convolutions is equivalent to a 5 × 5 convolution, enlarging the receptive field while reducing the number of parameters; finally, the branches are concatenated along the channel dimension, fusing feature maps over different receptive fields; in addition, different padding amounts ensure consistent spatial scale when the channels are finally concatenated;
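The channel concatenation and the "different padding amounts keep the scale consistent" point can be sketched with simple size arithmetic. The patent states that the module concatenates parallel branches into 256 output channels, but the per-branch channel split is not given, so the even 64-channel split and the branch list below are assumptions.

```python
def branch_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a conv/pool branch: floor((W - F + 2P)/S) + 1."""
    return (size - kernel + 2 * pad) // stride + 1

# Hypothetical branch layout; each branch pads so its 64x64 input size is
# preserved, allowing channel-wise concatenation at the output.
branches = [
    ("1x1 conv",          dict(kernel=1, stride=1, pad=0), 64),
    ("3x3 conv",          dict(kernel=3, stride=1, pad=1), 64),
    ("two 3x3 convs",     dict(kernel=3, stride=1, pad=1), 64),  # size-preserving
    ("3x3 maxpool + 1x1", dict(kernel=3, stride=1, pad=1), 64),
]
spatial = [branch_out(64, **cfg) for _, cfg, _ in branches]
total_channels = sum(c for _, _, c in branches)
```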
FIG. 3 is a diagram of the overall network model; through residual learning of the noise, the network establishes a mapping from the input noisy image to the output noise residual image; in the network design, the Inception modules use convolution kernels of different sizes to extract detail features of the image at different receptive fields;
FIG. 4 shows the input tensor and output channel number of the feature map at each layer of the network structure; during training, the network takes a 64 × 64 image as input, which becomes 64 channels after two 3 × 3 convolutions, then passes through the Inception module 8 times, and finally a 3 × 3 convolution produces the noise residual image v. The batch size is set to 128 and training runs for 50 epochs; the learning rate is initialized to 0.0001 and decays to 0.1 times its value every 10 epochs; the Adam optimization method with momentum 0.9 is used throughout training.
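The learning-rate schedule just described can be sketched as a step-decay function; the epoch-based interpretation of "every 10 times" is an assumption.

```python
def learning_rate(epoch, base=1e-4, decay=0.1, step=10):
    """Step-decay schedule: the learning rate starts at 0.0001 and is
    multiplied by 0.1 every 10 epochs over the 50 epochs of training."""
    return base * decay ** (epoch // step)
```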
Embodiments of the present invention will be described below with reference to fig. 1 to 4. The implementation of the invention comprises the following steps:
step 1: selecting a plurality of high-definition images in the BSD database to respectively construct a training data set and a test data set;
the number of the high-definition images in the step 1 is K = 500;
the high-definition images in the step 1 are:
P_1, P_2, ..., P_K
wherein the image P_k, k ∈ [1, K], has resolution M × N, and the pixel values of image P_k are P_k(i, j), i ∈ [1, M], j ∈ [1, N];
the number of the high-definition images in the training data set in the step 1 is K_a = 400, and the remaining K_b = K − K_a = 100 high-definition images are used as the test data set;
the high-definition images in the training data set are P_a, a ∈ [1, K_a], with pixel values P_a(i_a, j_a), i_a ∈ [1, M], j_a ∈ [1, N];
the high-definition images in the test data set are P_b, b ∈ [1, K_b], with pixel values P_b(i_b, j_b), i_b ∈ [1, M], j_b ∈ [1, N];
step 2: performing center cutting on the high-definition images in the training data set to obtain a cut training data set, preprocessing the cut training data set to obtain a preprocessed training data set, and adding Gaussian noise with certain intensity to the preprocessed training data set and the test data set respectively to obtain a noise-containing training data set and a noise-containing test data set;
the high-definition images in the training data set in the step 2 are the K_a = 400 high-definition images in the training data set of the step 1;
the cutting in the step 2 retains the central K_Z × K_Z = 256 × 256 part of each image; the pixel values of the high-definition image P′_a, a ∈ [1, K_a], in the cut training data set are P′_a(i, j), i ∈ [1, K_Z], j ∈ [1, K_Z];
the preprocessing of the cut training data set comprises the following steps:
sliding a K_Y × K_Y = 64 × 64 window over each high-definition image P′_a, a ∈ [1, K_a], in the cut training data set with step size α = 16, obtaining K_Z/α = 16 columns and K_Z/α = 16 rows of image blocks, (K_Z/α)² = 256 blocks in total;
then horizontally translating, vertically flipping, scaling to 0.8 times the original size, and rotating each image block clockwise by 90 degrees, obtaining (K_Z/α)² · 4 = 1024 high-definition image blocks per image;
the preprocessed training data set is P_c, c ∈ [1, (K_Z/α)² · 4 · K_a], wherein each high-definition image contributes (K_Z/α)² = 256 image blocks of size K_Y × K_Y = 64 × 64;
in the step 2, the Gaussian noise with certain intensity is added to the preprocessed training data set as follows:
Gaussian noise with certain intensity is added to the (K_Z/α)² · 4 · K_a high-definition image blocks of the preprocessed training data set, obtaining the noise-containing training data set:
X_c = P_c + σ_p · randn(size(P_c)), c ∈ [1, (K_Z/α)² · 4 · K_a]
wherein X_c is the corresponding noisy image, P_c is the high-definition image, σ_p is the noise standard deviation, and randn(size(P_c)) generates a random matrix of the same size as P_c;
in the step 2, the Gaussian noise with certain intensity is added to the test data set as follows:
Gaussian noise with certain intensity is added to the test data set of the step 1, obtaining the noise-containing test data set:
X_b = P_b + σ_p · randn(size(P_b)), b ∈ [1, K_b]
wherein X_b is the corresponding noisy image, P_b is the high-definition image, σ_p is the noise standard deviation, and randn(size(P_b)) generates a random matrix of the same size as P_b;
step 3: designing a deep convolutional neural network, training it by minimizing the designed joint loss function of the L2 norm and SSIM, and obtaining a clear image data set by calculation between the noise-containing test data set and the noise residual images obtained from the deep convolutional neural network;
in the step 3, the first layer and the second layer of the end-to-end deep convolutional neural network are 3 × 3 convolutional layers, each consisting of a convolution, a batch normalization layer and a ReLU activation layer;
the third layer to the tenth layer of the end-to-end deep convolutional neural network each consist of an Inception module formed by parallel branches of 1 × 1 convolution, 3 × 3 convolution and maximum pooling;
the last layer of the end-to-end deep convolutional neural network is a 3 × 3 convolution that outputs the noise residual image, establishing a mapping v = F(X, W) from the noise-containing image X to the noise residual image v, wherein F(X, W) is the nonlinear mapping function of the whole network;
designing a joint loss function of the L2 norm and SSIM, and training the deep convolutional neural network by minimizing the joint loss function:
L(W) = ||Y − Ŷ||²₂ + α · (1 − SSIM(Y, Ŷ))
wherein L(W) is the loss function, Y and Ŷ are respectively the real noise-free image and the denoised image, W is the set of parameters to be learned by the neural network, and α controls the influence degree of the SSIM loss on the whole loss function;
further, Ŷ = X − F(X, W), wherein X is the input noisy image from the noise-containing training data set of the step 2, with K_Z/α = 16, v = F(X, W) is the noise residual image output by the network, and SSIM(x, y) is the structural similarity index, which measures image quality in terms of brightness, contrast and structure, with value range [0, 1], a larger value indicating higher similarity;
the specific formula of SSIM is as follows:
SSIM(x, y) = [(2·u_x·u_y + C1)·(2·σ_xy + C2)] / [(u_x² + u_y² + C1)·(σ_x² + σ_y² + C2)]
wherein u_x, u_y are respectively the mean values of the images x, y, σ_x², σ_y² are respectively the variances of the images x, y, σ_xy is the covariance of the images x, y, and C1, C2 are constants;
optimizing the joint loss function by the Adam method to obtain the network parameters W;
inputting the K_b = 100 noisy images X_b in the noise-containing test data set of the step 2 into the deep convolutional neural network, and outputting the noise residual image v;
subtracting the noise residual image v from the noisy image X_b in the noise-containing test data set to obtain the denoised clear image:
P̂_b = X_b − v, b ∈ [1, K_b]
wherein K_b = 100 is the number of noisy images in the noise-containing test data set.
It should be understood that parts of the specification not set forth in detail are well within the prior art.
It should be understood that the above description of the preferred embodiments is given for clarity and not for any purpose of limitation, and that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (3)
1. An image denoising method combining residual learning and structural similarity is characterized by comprising the following steps:
step 1: selecting a plurality of high-definition images in a BSD database to respectively construct a training data set and a test data set;
step 2: performing center cutting on the high-definition images in the training data set to obtain a cut training data set, preprocessing the cut training data set to obtain a preprocessed training data set, and adding Gaussian noise with certain intensity to the preprocessed training data set and the test data set respectively to obtain a noise-containing training data set and a noise-containing test data set;
step 3: designing a deep convolutional neural network, training it by minimizing the designed joint loss function of the L2 norm and SSIM, and obtaining a clear image data set by calculation between the noise-containing test data set and the noise residual images obtained from the deep convolutional neural network; in the step 3, the first layer and the second layer of the deep convolutional neural network are 3 × 3 convolutional layers, each consisting of a convolution, a batch normalization layer and a ReLU activation layer;
the third layer to the tenth layer of the end-to-end deep convolutional neural network each consist of an Inception module formed by parallel branches of 1 × 1 convolution, 3 × 3 convolution and maximum pooling;
the last layer of the end-to-end deep convolutional neural network is a 3 × 3 convolution that outputs the noise residual image, establishing a mapping v = F(X, W) from the noise-containing image X to the noise residual image v, wherein F(X, W) is the nonlinear mapping function of the whole network;
designing a joint loss function of the L2 norm and SSIM, and training the deep convolutional neural network by minimizing the joint loss function:
L(W) = ||Y − Ŷ||²₂ + α · (1 − SSIM(Y, Ŷ))
wherein L(W) is the loss function, Y and Ŷ are respectively the real noise-free image and the denoised image, W is the set of parameters to be learned by the neural network, and α controls the influence degree of the SSIM loss on the whole loss function;
further, Ŷ = X − F(X, W), wherein X is the input noisy image from the noise-containing training data set of the step 2, v = F(X, W) is the noise residual image output by the network, and SSIM(x, y) is the structural similarity index, which measures image quality in terms of brightness, contrast and structure, with value range [0, 1], a larger value indicating higher similarity; K_Z is the size of the central part of the image retained by the cutting in the step 2, and α is the step size;
the specific formula of SSIM is as follows:
SSIM(x, y) = [(2·u_x·u_y + C1)·(2·σ_xy + C2)] / [(u_x² + u_y² + C1)·(σ_x² + σ_y² + C2)]
wherein u_x, u_y are respectively the mean values of the images x, y, σ_x², σ_y² are respectively the variances of the images x, y, σ_xy is the covariance of the images x, y, and C1, C2 are constants;
optimizing the joint loss function by the Adam method to obtain the network parameters W;
inputting the K_b noisy images X_b in the noise-containing test data set of the step 2 into the deep convolutional neural network, and outputting the noise residual image v;
subtracting the noise residual image v from the noisy image X_b in the noise-containing test data set to obtain the denoised clear image:
P̂_b = X_b − v, b ∈ [1, K_b]
wherein K_b is the number of noisy images in the noise-containing test data set.
2. The image denoising method combining residual learning and structural similarity according to claim 1, wherein the number of the high-definition images in the step 1 is K;
the high-definition images in the step 1 are:
P_1, P_2, ..., P_K
wherein the image P_k, k ∈ [1, K], has resolution M × N, and the pixel values of image P_k are P_k(i, j), i ∈ [1, M], j ∈ [1, N];
the number of the high-definition images in the training data set in the step 1 is K_a, and the remaining K_b = K − K_a high-definition images are used as the test data set;
the high-definition images in the training data set are P_a, a ∈ [1, K_a], with pixel values P_a(i_a, j_a), i_a ∈ [1, M], j_a ∈ [1, N];
the high-definition images in the test data set are P_b, b ∈ [1, K_b], with pixel values P_b(i_b, j_b), i_b ∈ [1, M], j_b ∈ [1, N].
3. The method of claim 2, wherein the high-definition image in the training data set in step 2 is K in step 1aTraining data is concentrated into high-definition images;
cutting to reserve the central part K of the image in the step 2Z,KZ×KZSize, high definition image P 'in post-clip training data set'a,a∈[1,Ka]The pixel values in (1) are:
the preprocessing of the training data set after cutting comprises the following steps:
concentrating high-definition image P 'in clipped training data'a,a∈[1,Ka]According to step size α to KY×KYImage sliding and partitioning of the image block to obtain a horizontal KzColumn,/α, vertical KzImage blocks of/alpha line, in total (K)z/α)2A block;
then, each image is horizontally translated and vertically turned over, and is reduced to 0.8 time of the original image, and is clockwise rotated by 90 degrees to obtain (K)z/α)24 high-definition image blocks;
pre-processed training data set Pc,c∈[1,(Kz/α)2*4]Each high definition image consists of (K)z/α)2A KY×KYImage block composition;
the adding of Gaussian noise of a certain intensity to the preprocessed training data set in step 2 is as follows:

adding Gaussian noise of a certain intensity to the (K_z/α)² × 4 × K_a preprocessed high-definition image blocks to obtain the noise-containing training data set:

P̃_c = P_c + σ_p · randn(size(P_c)), c ∈ [1, (K_z/α)² × 4]

wherein P̃_c is the corresponding noisy image, P_c is the high-definition image, σ_p is the noise standard deviation, and randn(size(P_c)) generates a random matrix of the same size as P_c;
the adding of Gaussian noise of a certain intensity to the test data set in step 2 is as follows:

adding Gaussian noise of a certain intensity to the test data set of step 1 to obtain the noise-containing test data set:

ỹ_b = P_b + σ_p · randn(size(P_b)), b ∈ [1, K_b].
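The noise-synthesis formula in the claims (noisy image = clean image + σ_p · randn of the same size) can be sketched with NumPy. The σ_p value and image content below are illustrative assumptions, not values from the patent.

```python
import numpy as np

def add_gaussian_noise(P, sigma_p, rng=None):
    """Synthesise a noisy image as P + sigma_p * randn(size(P)),
    where sigma_p is the noise standard deviation."""
    rng = np.random.default_rng(0) if rng is None else rng
    return P + sigma_p * rng.standard_normal(P.shape)

P = np.full((16, 16), 0.5)                       # flat grey test image
y = add_gaussian_noise(P, sigma_p=25 / 255.0)    # sigma = 25 on a 0-255 scale
residual = y - P  # this noise residual is what the network learns to predict
```

Because the noise is additive, the exact residual is available for every synthesised pair (P, y), which is what makes residual learning straightforward to supervise.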
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810583825.9A CN108876737B (en) | 2018-06-06 | 2018-06-06 | Image denoising method combining residual learning and structural similarity |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108876737A CN108876737A (en) | 2018-11-23 |
CN108876737B true CN108876737B (en) | 2021-08-03 |
Family
ID=64338539
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810583825.9A Active CN108876737B (en) | 2018-06-06 | 2018-06-06 | Image denoising method combining residual learning and structural similarity |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108876737B (en) |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109801232A (en) * | 2018-12-27 | 2019-05-24 | 北京交通大学 | A single-image dehazing method based on deep learning |
CN109685743B (en) * | 2018-12-30 | 2023-01-17 | 陕西师范大学 | Image mixed noise elimination method based on noise learning neural network model |
WO2020150223A1 (en) * | 2019-01-15 | 2020-07-23 | Schlumberger Technology Corporation | Residual signal detection for noise attenuation |
CN109829903B (en) * | 2019-01-28 | 2020-02-11 | 合肥工业大学 | Chip surface defect detection method based on a convolutional denoising autoencoder |
CN110119704A (en) * | 2019-05-08 | 2019-08-13 | 武汉大学 | A method for removing text bleed-through based on a deep residual network |
CN110213462B (en) * | 2019-06-13 | 2022-01-04 | Oppo广东移动通信有限公司 | Image processing method, image processing device, electronic apparatus, image processing circuit, and storage medium |
CN110728643A (en) * | 2019-10-18 | 2020-01-24 | 上海海事大学 | Low-illumination noisy-image optimization method based on a convolutional neural network |
CN110852966B (en) * | 2019-11-04 | 2022-04-22 | 西北工业大学 | Image noise estimation method based on deep convolutional neural network |
CN111028163B (en) * | 2019-11-28 | 2024-02-27 | 湖北工业大学 | Combined image denoising and dim light enhancement method based on convolutional neural network |
EP4070268A4 (en) * | 2020-01-23 | 2023-01-25 | Baidu.com Times Technology (Beijing) Co., Ltd. | Deep residual network for color filter array image denoising |
CN111667424B (en) * | 2020-05-28 | 2022-04-01 | 武汉大学 | Unsupervised real image denoising method |
CN111738267B (en) * | 2020-05-29 | 2023-04-18 | 南京邮电大学 | Visual perception method and visual perception device based on linear multi-step residual learning |
CN112634159B (en) * | 2020-12-23 | 2022-07-26 | 中国海洋大学 | Hyperspectral image denoising method based on blind noise estimation |
CN112819707B (en) * | 2021-01-15 | 2022-05-03 | 电子科技大学 | End-to-end anti-blocking effect low-illumination image enhancement method |
CN113628146B (en) * | 2021-08-30 | 2023-05-30 | 中国人民解放军国防科技大学 | Image denoising method based on depth convolution network |
CN116681618A (en) * | 2023-06-13 | 2023-09-01 | 强联智创(北京)科技有限公司 | Image denoising method, electronic device and storage medium |
CN116703772B (en) * | 2023-06-15 | 2024-03-15 | 山东财经大学 | Image denoising method, system and terminal based on adaptive interpolation algorithm |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106204467A (en) * | 2016-06-27 | 2016-12-07 | 深圳市未来媒体技术研究院 | An image denoising method based on a cascaded residual neural network |
CN107169927A (en) * | 2017-05-08 | 2017-09-15 | 京东方科技集团股份有限公司 | An image processing system, method and display device |
CN107507141A (en) * | 2017-08-07 | 2017-12-22 | 清华大学深圳研究生院 | An image restoration method based on an adaptive residual neural network |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10043243B2 (en) * | 2016-01-22 | 2018-08-07 | Siemens Healthcare Gmbh | Deep unfolding algorithm for efficient image denoising under varying noise conditions |
- 2018-06-06 CN CN201810583825.9A patent/CN108876737B/en active Active
Non-Patent Citations (2)
Title |
---|
Loss Functions for Image Restoration With Neural Networks; H. Zhao et al.; IEEE Transactions on Computational Imaging; 20161223; vol. 3, no. 1; pp. 47-57 *
Image denoising method with residual information fusion based on dictionary learning; Dong Mingkun et al.; Microprocessors (《微处理机》); 20150325; pp. 58-62 *
Also Published As
Publication number | Publication date |
---|---|
CN108876737A (en) | 2018-11-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108876737B (en) | Image denoising method combining residual learning and structural similarity | |
CN108876735B (en) | Real image blind denoising method based on a deep residual network | |
CN109859147B (en) | Real image denoising method based on generative adversarial network noise modeling | |
CN114140353B (en) | Swin-Transformer image denoising method and system based on channel attention | |
CN110599409B (en) | Convolutional neural network image denoising method based on multi-scale parallel convolution groups | |
CN108986050B (en) | Image and video enhancement method based on multi-branch convolutional neural network | |
Deng et al. | Wavelet domain style transfer for an effective perception-distortion tradeoff in single image super-resolution | |
CN110706166B (en) | Image super-resolution reconstruction method and device for sharpening label data | |
CN107464217B (en) | Image processing method and device | |
CN106709877B (en) | An image deblurring method based on a multi-parameter regularized optimization model | |
CN109523513A (en) | A stereo image quality evaluation method based on sparse reconstruction of color fusion images | |
CN114066747A (en) | Low-illumination image enhancement method based on illumination and reflection complementarity | |
Niu et al. | Siamese-network-based learning to rank for no-reference 2D and 3D image quality assessment | |
CN109345609A (en) | A method for mural image denoising and line drawing generation based on convolutional neural networks | |
Cheng et al. | Enhancement of weakly illuminated images by deep fusion networks | |
CN112288652A (en) | PSO optimization-based guide filtering-Retinex low-illumination image enhancement method | |
CN109887023B (en) | Binocular fusion stereo image quality evaluation method based on weighted gradient amplitude | |
CN116797468A (en) | Low-light image enhancement method based on self-calibration depth curve estimation of soft-edge reconstruction | |
Xu et al. | Multiplicative decomposition based image contrast enhancement method using PCNN factoring model | |
CN112767311A (en) | Non-reference image quality evaluation method based on convolutional neural network | |
Li et al. | An enhanced image denoising method using method noise | |
Guo et al. | Warm start of multi-channel weighted nuclear norm minimization for color image denoising | |
CN113112425B (en) | Four-direction relative total variation image denoising method | |
CN102968771A (en) | Noise-containing image enhancing method based on Contourlet domain and multi-state HMT (Hidden Markov Tree) model | |
Xue et al. | MMPDNet: Multi-Stage & Multi-Attention Progressive Image Denoising |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||