CN112200748A - Image blind denoising method based on capsule generation countermeasure network noise modeling - Google Patents

Image blind denoising method based on capsule generation countermeasure network noise modeling

Info

Publication number
CN112200748A
Authority
CN
China
Prior art keywords
noise
image
countermeasure network
block
size
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011126575.XA
Other languages
Chinese (zh)
Inventor
史明光
汤亚晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei University of Technology
Original Assignee
Hefei University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei University of Technology filed Critical Hefei University of Technology
Priority to CN202011126575.XA priority Critical patent/CN112200748A/en
Publication of CN112200748A publication Critical patent/CN112200748A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image blind denoising method based on noise modeling with a capsule generative adversarial network, which comprises the following steps: 1. extracting smooth noise blocks from a given noisy image; 2. modeling the noise with a capsule-based generative adversarial network; 3. training a deep CNN to obtain a denoising model and thereby realize blind denoising of the image. The invention remedies the poor denoising performance of the prior art when the noise information is unknown or the sensor is uncertain, and thereby improves the denoising effect.

Description

Image blind denoising method based on capsule generation countermeasure network noise modeling
Technical Field
The invention belongs to the field of computer vision, and particularly relates to an image blind denoising method based on noise modeling with a capsule generative adversarial network.
Background
Image denoising is a classic topic in low-level vision and an important preprocessing step in many vision tasks. Following the degradation model y = x + v, the goal of image denoising is to recover a noise-free image x from a noisy observation y by removing the noise v. Existing denoising methods fall broadly into three categories: denoising based on image priors, blind denoising based on noise modeling, and denoising based on discriminative learning.
The image priors adopted by prior-based denoising methods are mainly defined from human knowledge, which limits the achievable denoising performance; furthermore, most such methods only exploit the internal information of the input image and do not make full use of external information from other images when modeling the priors.
Blind denoising methods based on noise modeling likewise use only the internal information of a single input image and define the noise model explicitly, which may limit the capability of the noise model and in turn degrade the denoising performance.
Although denoising methods based on discriminative learning achieve high denoising quality, they cannot work in the absence of paired training data.
Disclosure of Invention
The invention aims to overcome the above defects of the prior art by providing an image blind denoising method based on noise modeling with a capsule generative adversarial network, so that effective denoising can still be achieved when the noise information in an image is unavailable or the sensor is uncertain, thereby improving the denoising effect.
To achieve this aim, the invention adopts the following technical scheme:
The image blind denoising method of the invention, based on noise modeling with a capsule generative adversarial network, is characterized by comprising the following steps:
Step 1, extract smooth noise blocks from a given noisy image:
Step 1.1, define loop variables i and j, and initialize i = 1;
Step 1.2, with stride s_g, extract the i-th image block p_i of size c × c from the noisy image;
Step 1.3, initialize j = 1;
Step 1.4, with stride s_l, extract from the i-th image block p_i the j-th local image block q_i^j of size h × h;
Step 1.5, judge whether the i-th image block p_i and the j-th local image block q_i^j satisfy formula (1) and formula (2) simultaneously; if so, the i-th image block p_i is a smooth noise block and is added to the smooth noise block set S, after which step 1.6 is executed; otherwise, step 1.6 is executed directly;
|Mean(q_i^j) - Mean(p_i)| ≤ μ · Mean(p_i)    (1)
|Var(q_i^j) - Var(p_i)| ≤ γ · Var(p_i)    (2)
In formula (1) and formula (2), Mean(·) denotes the mean, Var(·) denotes the variance, and μ and γ are constant coefficients whose values lie in (0, 1);
Step 1.6, assign j + 1 to j and return to step 1.4 until j = j_max; where j_max denotes the maximum number of local image blocks of size h × h that can be extracted from the i-th image block p_i, i.e. j_max = (⌊(c - h)/s_l⌋ + 1)^2;
Step 1.7, assign i + 1 to i and return to step 1.3 until i = i_max, thereby obtaining the final smooth noise block set S = {s_1, s_2, …, s_i, …, s_t}; where i_max denotes the maximum number of image blocks of size c × c that can be extracted from the noisy picture, i.e. i_max = (⌊(w - c)/s_g⌋ + 1) × (⌊(l - c)/s_g⌋ + 1); w denotes the width of the noisy picture and l denotes its height; t denotes the total number of smooth noise blocks;
Step 1.8, obtain the i-th approximate noise block v_i by formula (3), thereby obtaining the approximate noise block set V = {v_1, v_2, …, v_i, …, v_t}:
v_i = s_i - Mean(s_i)    (3)
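As a non-limiting illustration of step 1, the following NumPy sketch extracts smooth noise blocks and the corresponding approximate noise blocks from a grayscale noisy image. The function name and default parameter values are illustrative only, and the inequality form of criteria (1) and (2), i.e. that each local block's mean and variance must stay within a μ / γ fraction of the parent block's, follows the reconstruction given above.

```python
# Illustrative NumPy sketch of step 1 (smooth noise block extraction).
# Names and default values are not taken from the patent text.
import numpy as np

def extract_smooth_noise_blocks(img, c=64, s_g=32, h=16, s_l=16, mu=0.2, gamma=0.25):
    """Scan the noisy image with stride s_g; keep each c-by-c block whose h-by-h
    sub-blocks (stride s_l) all have mean and variance close to the block's own."""
    smooth_blocks = []
    H, W = img.shape
    for top in range(0, H - c + 1, s_g):
        for left in range(0, W - c + 1, s_g):
            p = img[top:top + c, left:left + c]
            p_mean, p_var = p.mean(), p.var()
            is_smooth = True
            for y in range(0, c - h + 1, s_l):
                for x in range(0, c - h + 1, s_l):
                    q = p[y:y + h, x:x + h]
                    # criteria (1) and (2): local statistics must stay within a
                    # mu / gamma fraction of the whole block's statistics
                    if (abs(q.mean() - p_mean) > mu * p_mean or
                            abs(q.var() - p_var) > gamma * p_var):
                        is_smooth = False
                        break
                if not is_smooth:
                    break
            if is_smooth:
                smooth_blocks.append(p)
    # formula (3): subtract each smooth block's mean to get approximate noise blocks
    noise_blocks = [s - s.mean() for s in smooth_blocks]
    return smooth_blocks, noise_blocks
```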
Step 2, generating noise modeling of the countermeasure network based on the capsule:
step 2.1, reconstructing the arbiter generating the countermeasure network into a capsule neural network and using the reconstructed arbiter as the arbiter in the capsule generation countermeasure network:
using c in the convolutional layer of the discriminator1Convolution kernels of size N × N with step size set to s1Using c in Primarycaps layers2Convolution kernels of size N × N with step size set to s2Setting the number of the capsules of the Digitcaps layer as K;
step 2.2, the generator for generating the countermeasure network imitates the structure of the generator in the deep convolution countermeasure network DCGAN and serves as the generator in the capsule generation countermeasure network:
using a micro-step convolution kernel of size M × M in the deconvolution layer of the generator; the last layer of output layer of the generator uses Tanh function as activation function, and the other layers of the generator use ReLU function as activation function;
the discriminator and the generator in the capsule generation countermeasure network form the capsule generation countermeasure network;
2.3, selecting a loss function of the WGAN as a target function of the capsule in the process of generating the antagonistic network training;
step 2.4, setting the iteration number ratio of a discriminator and a generator in the capsule generation countermeasure network as 1: 2; training the capsule generation countermeasure network with the approximate noise set V, thereby generating a noise sample V';
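The architecture and training procedure of step 2 can be sketched roughly in PyTorch as follows. This is an illustrative sketch, not the exact network of the patent: the kernel counts and sizes follow the values used in the embodiment described later (c_1 = 256 kernels of size 9 × 9 in the first convolutional layer, a PrimaryCaps layer with c_2 = 32 kernels and stride 2, K = 2 DigitCaps, 5 × 5 fractionally-strided generator kernels), while the latent size, the capsule dimensions, the simplified routing, the optimiser, the learning rate and the weight-clipping constant are assumptions.

```python
# Rough PyTorch sketch of step 2, assuming 64 x 64 grayscale noise patches.
import torch
import torch.nn as nn
import torch.nn.functional as F

def squash(x, dim=-1, eps=1e-8):
    # capsule non-linearity: preserves direction, bounds the length in [0, 1)
    n2 = (x ** 2).sum(dim=dim, keepdim=True)
    return (n2 / (1.0 + n2)) * x / torch.sqrt(n2 + eps)

class Generator(nn.Module):
    # DCGAN-style generator (step 2.2): fractionally-strided 5x5 convolutions,
    # ReLU in hidden layers, Tanh on the output layer.
    def __init__(self, z_dim=100):
        super().__init__()
        def up(i, o):
            return [nn.ConvTranspose2d(i, o, 5, 2, 2, output_padding=1),
                    nn.BatchNorm2d(o), nn.ReLU(True)]
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
            *up(256, 128), *up(128, 64), *up(64, 32),
            nn.ConvTranspose2d(32, 1, 5, 2, 2, output_padding=1), nn.Tanh())
    def forward(self, z):                       # z: (B, z_dim, 1, 1)
        return self.net(z)                      # -> (B, 1, 64, 64) noise patch

class CapsuleDiscriminator(nn.Module):
    # Capsule critic (step 2.1): Conv -> PrimaryCaps -> DigitCaps. Dynamic routing
    # is replaced here by a single learned projection to keep the sketch short.
    def __init__(self, K=2):
        super().__init__()
        self.conv = nn.Conv2d(1, 256, kernel_size=9, stride=1)          # c_1 = 256, N = 9
        self.primary = nn.Conv2d(256, 32 * 8, kernel_size=9, stride=2)  # c_2 = 32 capsules, dim 8
        self.digit = nn.Linear(8, K * 16)                                # K output capsules, dim 16
        self.K = K
    def forward(self, x):
        u = self.primary(F.relu(self.conv(x)))
        b = u.size(0)
        u = squash(u.view(b, 32, 8, -1).permute(0, 1, 3, 2).reshape(b, -1, 8))
        v = squash(self.digit(u).view(b, -1, self.K, 16).mean(dim=1))
        return v.norm(dim=-1)[:, 0]             # capsule length used as the critic score

def train_noise_model(noise_blocks, iters=5000, batch=64, z_dim=100, clip=0.01, lr=5e-5):
    # WGAN objective (step 2.3) with weight clipping and a 1:2 D:G iteration
    # ratio (step 2.4); noise_blocks holds the approximate noise set V as (t, 1, 64, 64).
    G, D = Generator(z_dim), CapsuleDiscriminator()
    opt_d = torch.optim.RMSprop(D.parameters(), lr=lr)
    opt_g = torch.optim.RMSprop(G.parameters(), lr=lr)
    for _ in range(iters):
        real = noise_blocks[torch.randint(0, noise_blocks.size(0), (batch,))]
        fake = G(torch.randn(batch, z_dim, 1, 1)).detach()
        loss_d = D(fake).mean() - D(real).mean()            # WGAN critic loss
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        for p in D.parameters():
            p.data.clamp_(-clip, clip)                      # WGAN weight clipping
        for _ in range(2):                                  # two generator updates per critic update
            loss_g = -D(G(torch.randn(batch, z_dim, 1, 1))).mean()
            opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    with torch.no_grad():                                   # sample the learned distribution: V'
        return G(torch.randn(1024, z_dim, 1, 1))
```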
Step 3, train a deep CNN to obtain a denoising model:
Step 3.1, divide an acquired noise-free image into E small blocks of size c × c to form the block set X = {x_1, x_2, …, x_e, …, x_E}, where x_e denotes the e-th block and e = 1, 2, …, E;
By formula (4), the k-th noise block v′_k in the noise samples V′ is randomly added to the e-th block x_e of the block set X to obtain the f-th noisy picture y_f, thereby obtaining the noisy picture set Y = {y_1, y_2, …, y_f, …, y_F}, where f = 1, 2, …, F:
y_f = x_e + v′_k    (4)
The block set X and the noisy picture set Y form the paired training data set {X, Y};
Step 3.2, make the structure of the deep CNN similar to that of DnCNN:
The convolution kernels of the deep CNN have size Q × Q, the depth of the deep CNN is M, and every layer of the deep CNN uses zero padding so that the input and output of each layer have the same size;
Step 3.3, select the loss function of DnCNN as the objective function for training;
Step 3.4, train the deep CNN with the training data set {X, Y}, thereby obtaining a denoising model that realizes blind denoising of the image.
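Step 3 can be sketched as follows, assuming the clean c × c tiles X and the generated noise samples V′ are already available as tensors. The residual network follows the DnCNN recipe of step 3.2 (3 × 3 kernels, zero padding, depth M); the optimiser, batch size, epoch count and the use of plain mean-squared error as a stand-in for the DnCNN objective are assumptions.

```python
# Illustrative PyTorch sketch of step 3 (paired data and DnCNN-style training).
import torch
import torch.nn as nn

class DnCNNLike(nn.Module):
    def __init__(self, depth=20, channels=64, image_channels=1):
        super().__init__()
        layers = [nn.Conv2d(image_channels, channels, 3, padding=1), nn.ReLU(True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.BatchNorm2d(channels), nn.ReLU(True)]
        layers += [nn.Conv2d(channels, image_channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)
    def forward(self, y):
        return y - self.body(y)          # the network predicts the noise and subtracts it

def train_denoiser(clean_tiles, noise_samples, epochs=50, batch=16, lr=1e-3):
    # clean_tiles: (E, 1, c, c) block set X; noise_samples: (F, 1, c, c) set V'.
    model = DnCNNLike()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    mse = nn.MSELoss()                   # stand-in for the DnCNN objective (step 3.3)
    for _ in range(epochs):
        perm = torch.randperm(clean_tiles.size(0))
        for i in range(0, perm.numel(), batch):
            x = clean_tiles[perm[i:i + batch]]
            k = torch.randint(0, noise_samples.size(0), (x.size(0),))
            y = x + noise_samples[k]     # formula (4): noisy observation y_f = x_e + v'_k
            loss = mse(model(y), x)
            opt.zero_grad(); loss.backward(); opt.step()
    return model
```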
Compared with the prior art, the invention has the following beneficial effects:
1. The method trains a capsule-based generative adversarial network to estimate the noise distribution of the input noisy images and to generate noise samples; noise patches sampled from the generated samples are used to construct a paired training data set, which in turn is used to train a deep convolutional neural network (CNN) for denoising. Through this overall process, the invention offers advantages over existing techniques when the noise information in the image is unavailable or the sensor is uncertain.
2. The invention rebuilds the discriminator of the generative adversarial network as a capsule neural network. The capsule discriminator better reflects the way the human brain recognises objects, which a conventional GAN discriminator does not exploit, and it reduces the information that a conventional discriminator discards during downsampling.
3. The capsule generative adversarial network is used to generate additional noise data, which enriches the training set, so the method performs better than directly training a deep CNN to obtain a denoising model as in the prior art.
Drawings
FIG. 1 is a schematic diagram of the capsule-based generative adversarial network model constructed by the invention;
FIG. 2 is a diagram of the deep CNN architecture used in the present invention;
FIG. 3 is a schematic diagram of the discriminator network architecture of the capsule generative adversarial network used in the present invention;
FIG. 4 is a schematic diagram of the capsule generative adversarial network architecture used in the present invention.
Detailed Description
In this embodiment, referring to FIG. 1, the image blind denoising method based on noise modeling with a capsule generative adversarial network proceeds as follows:
Step 1, extract smooth noise blocks from a given noisy image:
Step 1.1, define loop variables i and j, and initialize i = 1; in this embodiment the noisy image data set is BSD68.
Step 1.2, with stride s_g, extract the i-th image block p_i of size c × c from the noisy image; in this embodiment s_g = 32 and c = 64;
Step 1.3, initialize j = 1;
Step 1.4, with stride s_l, extract from the i-th image block p_i the j-th local image block q_i^j of size h × h; in this embodiment s_l = 16 and h = 16;
Step 1.5, judge whether the i-th image block p_i and the j-th local image block q_i^j satisfy formula (1) and formula (2) simultaneously; in this embodiment μ = 0.2 and γ = 0.25; if satisfied, the i-th image block p_i is a smooth noise block and is added to the smooth noise block set S, after which step 1.6 is executed; otherwise, step 1.6 is executed directly;
|Mean(q_i^j) - Mean(p_i)| ≤ μ · Mean(p_i)    (1)
|Var(q_i^j) - Var(p_i)| ≤ γ · Var(p_i)    (2)
In formula (1) and formula (2), Mean(·) denotes the mean of its argument, Var(·) denotes the variance of its argument, and μ and γ are constant coefficients whose values lie in (0, 1);
Step 1.6, assign j + 1 to j and return to step 1.4 until j = j_max; where j_max denotes the maximum number of local image blocks of size h × h that can be extracted from the i-th image block p_i, i.e. j_max = (⌊(c - h)/s_l⌋ + 1)^2;
Step 1.7, assign i + 1 to i and return to step 1.3 until i = i_max, thereby obtaining the final smooth noise block set S = {s_1, s_2, …, s_i, …, s_t}; where i_max denotes the maximum number of image blocks of size c × c that can be extracted from the noisy picture, i.e. i_max = (⌊(w - c)/s_g⌋ + 1) × (⌊(l - c)/s_g⌋ + 1); w denotes the width of the noisy picture and l denotes its height; t denotes the total number of smooth noise blocks obtained.
Step 1.8, obtain the i-th approximate noise block v_i by formula (3), thereby obtaining the approximate noise block set V = {v_1, v_2, …, v_i, …, v_t}:
v_i = s_i - Mean(s_i)    (3)
Step 2, generating noise modeling of the countermeasure network based on the capsule:
step 2.1, reconstructing the arbiter generating the countermeasure network into a capsule neural network and using the reconstructed arbiter as the arbiter in the capsule generation countermeasure network:
using c in the convolutional layer of the discriminator1Convolution kernels of size N × N with step size set to s1Using c in Primarycaps layers2Convolution kernels of size N × N with step size set to s2The number of capsules in the Digitcaps layer is K, in this example c1=256,N=9,c2=32,s22, k is 2, and the overall structure of the discriminator is shown in fig. 3;
step 2.2, the generator for generating the countermeasure network imitates the structure of the generator in the deep convolution countermeasure network DCGAN and serves as the generator in the capsule generation countermeasure network:
using a micro-step convolution kernel of size M × M in the deconvolution layer of the generator, M being set to 5 in this example; the last layer of output layer of the generator uses Tanh function as activation function, and the other layers of the generator use ReLU function as activation function, and the overall architecture of the generator is as shown in FIG. 4;
the discriminator and the generator in the capsule generation countermeasure network form the capsule generation countermeasure network;
2.3, selecting a loss function of the WGAN as a target function of the capsule in the process of generating the antagonistic network training;
step 2.4, setting the iteration number ratio of a discriminator and a generator in the capsule generation countermeasure network as 1: 2; training the capsule generation countermeasure network with the approximate noise set V, thereby generating a noise sample V';
Step 3, train a deep CNN to obtain a denoising model:
Step 3.1, divide an acquired noise-free image into E small blocks of size c × c to form the block set X = {x_1, x_2, …, x_e, …, x_E}, where x_e denotes the e-th block and e = 1, 2, …, E; in this embodiment the noise-free training set is clear1;
By formula (4), the k-th noise block v′_k in the noise samples V′ is randomly added to the e-th block x_e of the block set X to obtain the f-th noisy picture y_f, thereby obtaining the noisy picture set Y = {y_1, y_2, …, y_f, …, y_F}, where f = 1, 2, …, F:
y_f = x_e + v′_k    (4)
The block set X and the noisy picture set Y form the paired training data set {X, Y};
Step 3.2, make the structure of the deep CNN similar to that of DnCNN:
The convolution kernels of the deep CNN have size Q × Q, the depth of the deep CNN is M, and every layer of the deep CNN uses zero padding so that the input and output of each layer have the same size; in this embodiment Q = 3 and M = 20, and the specific architecture of the deep CNN is shown in FIG. 2.
Step 3.3, select the loss function of DnCNN as the objective function for training;
Step 3.4, train the deep CNN with the training data set {X, Y}, thereby obtaining a denoising model that realizes blind denoising of the image. The overall architecture formed by all of the above steps is shown in FIG. 1.
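For convenience, the concrete settings named in this embodiment are gathered below in a single illustrative configuration; the dictionary and its key names are hypothetical and merely restate the values given above.

```python
# Illustrative summary of the embodiment's settings (key names are hypothetical).
EMBODIMENT_CONFIG = {
    # step 1: smooth noise block extraction (noisy data set: BSD68)
    "s_g": 32, "c": 64, "s_l": 16, "h": 16, "mu": 0.2, "gamma": 0.25,
    # step 2: capsule GAN (WGAN loss, discriminator:generator iteration ratio 1:2)
    "c_1": 256, "N": 9, "c_2": 32, "s_2": 2, "K": 2, "generator_kernel_M": 5,
    # step 3: deep CNN denoiser (noise-free set: clear1, DnCNN-style loss)
    "Q": 3, "cnn_depth_M": 20, "padding": "zero",
}
```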
Example:
To verify the effectiveness of the method of the invention, its denoising performance is evaluated on mixed noise in this example. The mixed noise consists of 10% uniform noise (distributed over the interval [-s, s]), 20% Gaussian noise with variance 1 and 70% Gaussian noise with variance 0.01, and the peak signal-to-noise ratio (PSNR) is used as the evaluation index, as shown in Table 1.
TABLE 1
[Table 1: PSNR comparison on mixed noise between the proposed method and BM3D, WNNM, EPLL, multiscale and DnCNN; the table is available only as an image in the original publication.]
The experimental results in Table 1 show that, for mixed-noise denoising, the peak signal-to-noise ratio of the method of the invention is higher than that of the existing non-blind methods BM3D, WNNM and EPLL and of the blind denoising methods multiscale and DnCNN, which demonstrates the superiority of the method.
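For reference, the mixed test noise and the PSNR metric used in this example can be sketched as follows. The per-pixel interpretation of the mixture, the uniform bound s, the image value range and the function names are assumptions not fixed by the text.

```python
# Illustrative sketch of the mixed test noise and the PSNR evaluation index.
import numpy as np

def mixed_noise(shape, s=0.1, rng=None):
    """Per-pixel mixture: 10% uniform on [-s, s], 20% Gaussian with variance 1,
    70% Gaussian with variance 0.01 (standard deviation 0.1)."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.random(shape)
    n = np.empty(shape)
    m_uni, m_g1 = u < 0.1, (u >= 0.1) & (u < 0.3)
    m_g2 = ~(m_uni | m_g1)
    n[m_uni] = rng.uniform(-s, s, m_uni.sum())
    n[m_g1] = rng.normal(0.0, 1.0, m_g1.sum())
    n[m_g2] = rng.normal(0.0, 0.1, m_g2.sum())
    return n

def psnr(clean, denoised, peak=1.0):
    """Peak signal-to-noise ratio in dB, assuming images scaled to [0, peak]."""
    mse = np.mean((clean - denoised) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```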

Claims (1)

1. An image blind denoising method based on noise modeling with a capsule generative adversarial network, characterized by comprising the following steps:
Step 1, extract smooth noise blocks from a given noisy image:
Step 1.1, define loop variables i and j, and initialize i = 1;
Step 1.2, with stride s_g, extract the i-th image block p_i of size c × c from the noisy image;
Step 1.3, initialize j = 1;
Step 1.4, with stride s_l, extract from the i-th image block p_i the j-th local image block q_i^j of size h × h;
Step 1.5, judge whether the i-th image block p_i and the j-th local image block q_i^j satisfy formula (1) and formula (2) simultaneously; if so, the i-th image block p_i is a smooth noise block and is added to the smooth noise block set S, after which step 1.6 is executed; otherwise, step 1.6 is executed directly;
|Mean(q_i^j) - Mean(p_i)| ≤ μ · Mean(p_i)    (1)
|Var(q_i^j) - Var(p_i)| ≤ γ · Var(p_i)    (2)
In formula (1) and formula (2), Mean(·) denotes the mean, Var(·) denotes the variance, and μ and γ are constant coefficients whose values lie in (0, 1);
Step 1.6, assign j + 1 to j and return to step 1.4 until j = j_max; where j_max denotes the maximum number of local image blocks of size h × h that can be extracted from the i-th image block p_i, i.e. j_max = (⌊(c - h)/s_l⌋ + 1)^2;
Step 1.7, assign i + 1 to i and return to step 1.3 until i = i_max, thereby obtaining the final smooth noise block set S = {s_1, s_2, …, s_i, …, s_t}; where i_max denotes the maximum number of image blocks of size c × c that can be extracted from the noisy picture, i.e. i_max = (⌊(w - c)/s_g⌋ + 1) × (⌊(l - c)/s_g⌋ + 1); w denotes the width of the noisy picture and l denotes its height; t denotes the total number of smooth noise blocks;
Step 1.8, obtain the i-th approximate noise block v_i by formula (3), thereby obtaining the approximate noise block set V = {v_1, v_2, …, v_i, …, v_t}:
v_i = s_i - Mean(s_i)    (3)
Step 2, noise modeling based on the capsule generative adversarial network:
Step 2.1, rebuild the discriminator of the generative adversarial network as a capsule neural network and use it as the discriminator of the capsule generative adversarial network:
The convolutional layer of the discriminator uses c_1 convolution kernels of size N × N with stride s_1; the PrimaryCaps layer uses c_2 convolution kernels of size N × N with stride s_2; the number of capsules in the DigitCaps layer is set to K;
Step 2.2, build the generator by imitating the generator structure of the deep convolutional generative adversarial network (DCGAN) and use it as the generator of the capsule generative adversarial network:
The deconvolution layers of the generator use fractionally-strided convolution kernels of size M × M; the output layer of the generator uses the Tanh function as its activation function, and the other layers use the ReLU function;
The discriminator and the generator together form the capsule generative adversarial network;
Step 2.3, select the loss function of the WGAN as the objective function for training the capsule generative adversarial network;
Step 2.4, set the iteration ratio of the discriminator to the generator to 1:2; train the capsule generative adversarial network on the approximate noise block set V, thereby generating noise samples V';
Step 3, train a deep CNN to obtain a denoising model:
Step 3.1, divide an acquired noise-free image into E small blocks of size c × c to form the block set X = {x_1, x_2, …, x_e, …, x_E}, where x_e denotes the e-th block and e = 1, 2, …, E;
By formula (4), the k-th noise block v′_k in the noise samples V′ is randomly added to the e-th block x_e of the block set X to obtain the f-th noisy picture y_f, thereby obtaining the noisy picture set Y = {y_1, y_2, …, y_f, …, y_F}, where f = 1, 2, …, F:
y_f = x_e + v′_k    (4)
The block set X and the noisy picture set Y form the paired training data set {X, Y};
Step 3.2, make the structure of the deep CNN similar to that of DnCNN:
The convolution kernels of the deep CNN have size Q × Q, the depth of the deep CNN is M, and every layer of the deep CNN uses zero padding so that the input and output of each layer have the same size;
Step 3.3, select the loss function of DnCNN as the objective function for training;
Step 3.4, train the deep CNN with the training data set {X, Y}, thereby obtaining a denoising model that realizes blind denoising of the image.
CN202011126575.XA 2020-10-20 2020-10-20 Image blind denoising method based on capsule generation countermeasure network noise modeling Pending CN112200748A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011126575.XA CN112200748A (en) 2020-10-20 2020-10-20 Image blind denoising method based on capsule generation countermeasure network noise modeling

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011126575.XA CN112200748A (en) 2020-10-20 2020-10-20 Image blind denoising method based on capsule generation countermeasure network noise modeling

Publications (1)

Publication Number Publication Date
CN112200748A true CN112200748A (en) 2021-01-08

Family

ID=74009625

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011126575.XA Pending CN112200748A (en) 2020-10-20 2020-10-20 Image blind denoising method based on capsule generation countermeasure network noise modeling

Country Status (1)

Country Link
CN (1) CN112200748A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112991198A (en) * 2021-02-08 2021-06-18 西安理工大学 Blind denoising method based on noise modeling
WO2024002063A1 (en) * 2022-06-30 2024-01-04 中移动信息技术有限公司 Image denoising method and apparatus, and computer readable storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106529165A (en) * 2016-10-28 2017-03-22 合肥工业大学 Method for identifying cancer molecular subtype based on spectral clustering algorithm of sparse similar matrix
CN108198154A (en) * 2018-03-19 2018-06-22 中山大学 Image de-noising method, device, equipment and storage medium
CN109584337A (en) * 2018-11-09 2019-04-05 暨南大学 A kind of image generating method generating confrontation network based on condition capsule
CN109859147A (en) * 2019-03-01 2019-06-07 武汉大学 A kind of true picture denoising method based on generation confrontation network noise modeling
CN110473154A (en) * 2019-07-31 2019-11-19 西安理工大学 A kind of image de-noising method based on generation confrontation network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Ling Cheng (凌铖): "Research on medical image data augmentation technology based on generative adversarial networks", China Master's Theses Full-text Database *

Similar Documents

Publication Publication Date Title
Chen et al. Image blind denoising with generative adversarial network based noise modeling
CN114140353B (en) Swin-Transformer image denoising method and system based on channel attention
CN110223254A (en) A kind of image de-noising method generating network based on confrontation
CN109035163B (en) Self-adaptive image denoising method based on deep learning
CN110473142B (en) Single image super-resolution reconstruction method based on deep learning
CN108765319A (en) A kind of image de-noising method based on generation confrontation network
CN110840445B (en) Automatic noise reduction method for dynamic electrocardiosignals
CN111161178A (en) Single low-light image enhancement method based on generation type countermeasure network
CN111028163A (en) Convolution neural network-based combined image denoising and weak light enhancement method
CN111260591B (en) Image self-adaptive denoising method based on attention mechanism
CN103049892A (en) Non-local image denoising method based on similar block matrix rank minimization
CN112200748A (en) Image blind denoising method based on capsule generation countermeasure network noise modeling
CN112270654A (en) Image denoising method based on multi-channel GAN
CN111861894A (en) Image motion blur removing method based on generating type countermeasure network
CN110852955A (en) Image enhancement method based on image intensity threshold and adaptive cutting
CN109493295A (en) A kind of non local Haar transform image de-noising method
CN104657951A (en) Multiplicative noise removal method for image
CN114972332A (en) Bamboo laminated wood crack detection method based on image super-resolution reconstruction network
CN105872315B (en) A kind of video denoising method for mixed noise
CN116645283A (en) Low-dose CT image denoising method based on self-supervision perceptual loss multi-scale convolutional neural network
CN116523794A (en) Low-light image enhancement method based on convolutional neural network
Zou et al. EDCNN: a novel network for image denoising
CN113222879B (en) Generation countermeasure network for fusion of infrared and visible light images
CN113554104B (en) Image classification method based on deep learning model
CN116977455A (en) Face sketch image generation system and method based on deep two-way learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210108