CN111340716B - Image deblurring method for improving double-discrimination countermeasure network model - Google Patents


Info

Publication number
CN111340716B
CN111340716B (application CN201911145549.9A)
Authority
CN
China
Prior art keywords: image, edge, discriminator, blurred, generator
Prior art date
Legal status: Active (assumed; not a legal conclusion)
Application number
CN201911145549.9A
Other languages
Chinese (zh)
Other versions
CN111340716A (en)
Inventor
邹倩颖
陈东祥
Current Assignee
Chengdu College of University of Electronic Science and Technology of China
Original Assignee
Chengdu College of University of Electronic Science and Technology of China
Priority date
Filing date
Publication date
Application filed by Chengdu College of University of Electronic Science and Technology of China
Priority to CN201911145549.9A
Publication of CN111340716A
Application granted
Publication of CN111340716B

Classifications

    • G — Physics
    • G06 — Computing; Calculating or Counting
    • G06T 5/73 — Image enhancement or restoration: Deblurring; Sharpening
    • G06N 3/045 — Neural networks: Combinations of networks
    • G06T 5/20 — Image enhancement or restoration using local operators
    • G06T 5/70 — Denoising; Smoothing
    • G06T 7/13 — Image analysis; Segmentation; Edge detection


Abstract

The invention discloses an image deblurring method based on an improved dual-discriminator adversarial network model, comprising the following steps. Step 1: preprocess the blurred image and extract its image edges using a fused extraction method. Step 2: using the extracted edge map as auxiliary information, input the blurred image together with the edge map into a generator to produce a deblurred image. Step 3: input the deblurred image and its edge image, and the sharp image and its edge image, into the discriminator network and train the discriminators. Step 4: repeat steps 2 and 3 until the discriminators cannot tell whether the deblurred image was produced by the generator; the generator then performs image deblurring. The method achieves a marked image-deblurring effect.

Description

Image deblurring method based on an improved dual-discriminator adversarial network model
Technical Field
The invention relates to the field of image processing, and in particular to an image deblurring method based on an improved dual-discriminator adversarial network model.
Background
In practice the camera and the photographed object are rarely perfectly still relative to each other, which causes image blur. Removing that blur to recover a sharp image is an important topic in image recognition and detection research. Motion blur can be modeled as the convolution of a sharp image with a blur kernel, plus noise. Image-deblurring research falls into two categories, blind and non-blind deblurring; this invention concerns blind deblurring.
Blind deblurring methods in turn fall into two categories: estimating the blur kernel, and end-to-end image deblurring. Kernel-estimation approaches either solve for several local blur kernels and fuse them by similarity into a global kernel, or estimate the kernel with edge information added as a constraint. However, kernel-estimation methods are easily affected by noise, and inaccurate estimates of motion-blur kernels produce ringing and poor deblurring results. Improving deblurring performance while suppressing artifacts and ringing at reduced complexity has therefore become a central difficulty for current end-to-end deblurring methods.
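The blur model described above (sharp image convolved with a blur kernel, plus noise) can be sketched numerically. The kernel length, angle and noise variance below are illustrative placeholders, not values taken from the patent:

```python
import numpy as np

def linear_motion_kernel(length=9, angle_deg=0.0):
    """Build a normalized linear motion-blur kernel (illustrative)."""
    k = np.zeros((length, length))
    c = length // 2
    theta = np.deg2rad(angle_deg)
    for t in np.linspace(-c, c, 4 * length):
        r = int(round(c + t * np.sin(theta)))
        col = int(round(c + t * np.cos(theta)))
        if 0 <= r < length and 0 <= col < length:
            k[r, col] = 1.0
    return k / k.sum()

def convolve2d(img, kernel):
    """Direct 'same' 2-D filtering with edge padding (correlation form,
    which matches convolution for the symmetric kernels used here)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def blur(sharp, length=9, angle_deg=0.0, noise_var=0.0007, seed=0):
    """Z = K * X + n: convolve with a motion kernel, add Gaussian noise."""
    rng = np.random.default_rng(seed)
    z = convolve2d(sharp, linear_motion_kernel(length, angle_deg))
    z += rng.normal(0.0, np.sqrt(noise_var), sharp.shape)
    return np.clip(z, 0.0, 1.0)
```

Blind deblurring then amounts to inverting this degradation without knowing the kernel.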
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides an image deblurring method based on an improved dual-discriminator adversarial network model, comprising the following steps:
Step 1: preprocess the blurred image, and extract its image edges using a fused extraction method;
Step 2: using the extracted edge map as auxiliary information, input the blurred image together with the edge map into a generator to produce a deblurred image;
Step 3: input the deblurred image and its edge image, and the sharp image and its edge image, into the discriminator network and train the discriminators;
Step 4: repeat steps 2 and 3 until the discriminators cannot tell whether the deblurred image was produced by the generator; the generator then performs image deblurring.
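Steps 1 through 4 can be summarized as a training-loop skeleton. Every callable below is a stand-in for a component described in the text, not a real implementation:

```python
def train_deblur(pairs, max_epochs, extract_edges, generator,
                 update_discriminators, discriminators_fooled):
    """Orchestrate steps 1-4. `pairs` yields (blurred, sharp) image pairs;
    the remaining arguments are the pluggable components of the method."""
    for _ in range(max_epochs):
        for blurred, sharp in pairs:
            edge_map = extract_edges(blurred)          # step 1: fused edge extraction
            deblurred = generator(blurred, edge_map)   # step 2: edge-guided generation
            update_discriminators(                     # step 3: train D1 on images,
                deblurred, extract_edges(deblurred),   #         D2 on edge images
                sharp, extract_edges(sharp))
        if discriminators_fooled():                    # step 4: stop once D1/D2 cannot
            break                                      #         tell real from generated
    return generator
```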
Further, extracting the image edges of the blurred image with the fused extraction method in step 1 proceeds as follows: the image extractor first denoises the blurred image with guided filtering, then obtains a salient edge map by weighted fusion of the edges extracted by the Sobel, Roberts, Prewitt and Canny operators, the algorithm's objective function being
α = w₁E_S + w₂E_R + w₃E_P + w₄E_C + w₅
where E_S, E_R, E_P, E_C denote the edge responses of the Sobel, Roberts, Prewitt and Canny operators respectively, w₁–w₄ are fusion weights, and w₅ is a correction term.
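The weighted fusion of operator responses can be sketched as follows. The fusion weights are illustrative, and the Canny term is replaced here by a simple thresholded Sobel magnitude, since full Canny (non-maximum suppression and hysteresis) is omitted for brevity:

```python
import numpy as np

# Classic 2-D edge-operator kernel pairs (horizontal, vertical).
SOBEL   = (np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float),
           np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], float))
PREWITT = (np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], float),
           np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]], float))
ROBERTS = (np.array([[1, 0], [0, -1]], float),
           np.array([[0, 1], [-1, 0]], float))

def conv_same(img, k):
    """'Same'-size filtering with edge padding (handles even kernels too)."""
    kh, kw = k.shape
    padded = np.pad(img, ((kh // 2, kh - kh // 2 - 1),
                          (kw // 2, kw - kw // 2 - 1)), mode="edge")
    out = np.zeros_like(img, float)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def grad_mag(img, pair):
    gx, gy = (conv_same(img, k) for k in pair)
    return np.hypot(gx, gy)

def fused_edges(img, w=(0.3, 0.2, 0.2, 0.25, 0.0)):
    """alpha = w1*E_S + w2*E_R + w3*E_P + w4*E_C + w5 (weights illustrative)."""
    e_s = grad_mag(img, SOBEL)
    e_r = grad_mag(img, ROBERTS)
    e_p = grad_mag(img, PREWITT)
    # Stand-in for Canny: thresholded Sobel magnitude (hysteresis omitted).
    e_c = (e_s > e_s.mean() + e_s.std()).astype(float)
    return w[0] * e_s + w[1] * e_r + w[2] * e_p + w[3] * e_c + w[4]
```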
Further, the generator in step 2 takes the blurred image and its edge image as input and outputs the deblurred image. The generator comprises, connected in sequence, two strided convolution blocks with stride ½, nine residual blocks and two transposed convolution blocks; each residual block contains a convolution layer, an instance-normalization layer and a ReLU activation, with Dropout regularization of probability 0.5 added after the first convolution layer.
Furthermore, the discriminators in step 3 comprise a real-image discriminator and an edge discriminator with identical network structures, where the training data of the real-image discriminator are the sharp images corresponding to the blurred images, and the training samples of the edge discriminator are the edge-gradient images extracted from the sharp images by the edge extractor.
Furthermore, each of the real-image discriminator and the edge discriminator has the same network structure as PatchGAN and comprises 5 modules connected in sequence. Each module except the last contains a convolution block, an instance-normalization layer and a LeakyReLU activation; the first four modules are followed by a one-dimensional fully connected layer and a sigmoid to obtain the output of a single discriminator network. The outputs of the two discriminators are then concatenated and fed to a one-dimensional fully connected layer and a sigmoid to obtain the final output.
Further, the joint training of the generator and the discriminators proceeds as follows: the blurred image and the edge image produced by the edge extractor are input to the generator together to obtain a restored deblurred image; the deblurred image and the sharp image, together with their respective edge images, are sent to the real-image discriminator and the edge-image discriminator for training; when the discriminators cannot distinguish whether an image was produced by the generator, the generator's output has reached the deblurred result.
Further, the original objective function suffers from vanishing gradients when computing the loss function and during iterative training, and is improved into the following two-critic WGAN-GP form:
min_G max_{D1,D2}  E_{x∼P_x(x)}[D1(x)] − E_{z∼P_z(z)}[D1(G(z, Y_Z))]
  + E_{Y_X∼P_{Y_X}}[D2(Y_X)] − E_{z∼P_z(z)}[D2(Y_B)]
  − λ₁ E_{x̂∼P_{x̂}}[(‖∇_{x̂}D1(x̂)‖₂ − 1)²] − λ₂ E_{ŷ∼P_{ŷ}}[(‖∇_{ŷ}D2(ŷ)‖₂ − 1)²]
where x ∼ P_x(x) denotes the data distribution of sharp images, Y_X ∼ P_{Y_X} the distribution of edge images of sharp images, and z ∼ P_z(z) the data distribution of blurred images; x̂ = εX + (1 − ε)B, with ε ∼ U[0, 1], is a random sample on the line connecting the real image X and the generated image B, and ŷ = εY_X + (1 − ε)Y_B is the analogous sample connecting the real edge image Y_X and the generated edge image Y_B; E denotes mathematical expectation; D1 is the real-image discriminator network and D2 the edge discriminator network; X is a sharp image, Y_X its edge image, Z a blurred image, and Y_Z the edge image extracted from the blurred image.
The beneficial effects of the invention are as follows. The blurred image is preprocessed to extract its edges, and the extracted edge map acts as auxiliary information in the generator for deblurring. The discriminator is also improved: a dual-discriminator structure uses two discriminators to judge, separately, the deblurred image and the edge image extracted from it, and deblurring is considered achieved only when both discriminators pass. Because the fused extraction algorithm recovers salient edges, it reduces noise interference, suppresses ringing and lowers model complexity, so deblurring performance is well maintained with the dual discriminator. The results show that the improved model achieves a marked image-deblurring effect at comparatively low model complexity.
Drawings
FIG. 1 is a schematic diagram of the image deblurring method based on the improved dual-discriminator adversarial network model;
FIG. 2 is a framework diagram of the edge-guided dual-discriminator generative adversarial network;
FIG. 3 is a diagram of the generator neural network model;
FIG. 4 is a comparison of deblurring results against other methods;
FIG. 5 is a comparison of deblurring results in the ablation experiment.
Detailed Description
The technical solutions of the present invention are further described in detail below with reference to the accompanying drawings, but the scope of the present invention is not limited to the following.
As shown in FIG. 1, this work takes the conditional generative adversarial network as the reference model and introduces an edge-guided dual-discriminator generative adversarial network, shown in FIG. 2. During training, the motion-blurred image Z and the edge image Y_Z obtained by the edge extractor are input to the generator together to obtain a restored deblurred image B. Because the discriminator network adopts a dual-discriminator structure, the deblurred image B and the sharp image X, together with their edge images Y_B and Y_X, are sent to the real-image discriminator and the edge-image discriminator respectively for training.
Throughout training, the generator and the discriminators play a mutual game: the generator tries to produce images that fool the discriminators, while the discriminators try to distinguish generated images from sharp ones. When the discriminators can no longer tell whether an image came from the generator, the generated images have reached the deblurring goal.
Feeding the condition information and the generated picture into a single discriminator network, which returns one combined verdict, does not constrain the generator's deblurring quality strictly enough. Observing that a truly sharp image has a sharp edge map as well as sharp content, this work transforms the discriminator into a dual-discriminator structure: the restored image and the edge image extracted from it are judged independently, and the image is considered deblurred only after both discriminators pass. The discriminators and the generator are trained simultaneously until they reach equilibrium and the discriminators cannot distinguish sharp images from generated ones.
Because the original objective function causes vanishing gradients and slow training when computing the loss function and iterating, and because this work modifies the original CGAN discriminator into two discriminator networks and adopts the WGAN-GP method as the discriminant function, the original objective is optimized into
min_G max_{D1,D2}  E_{x∼P_x(x)}[D1(x)] − E_{z∼P_z(z)}[D1(G(z, Y_Z))]
  + E_{Y_X∼P_{Y_X}}[D2(Y_X)] − E_{z∼P_z(z)}[D2(Y_B)]
  − λ₁ E_{x̂∼P_{x̂}}[(‖∇_{x̂}D1(x̂)‖₂ − 1)²] − λ₂ E_{ŷ∼P_{ŷ}}[(‖∇_{ŷ}D2(ŷ)‖₂ − 1)²]   (1)
where x ∼ P_x(x) denotes the data distribution of sharp images, Y_X ∼ P_{Y_X} the distribution of edge images of sharp images, and z ∼ P_z(z) the data distribution of blurred images; x̂ = εX + (1 − ε)B, with ε ∼ U[0, 1], is a random sample on the line connecting the real image X and the generated image B, and ŷ = εY_X + (1 − ε)Y_B is the analogous sample connecting the real edge image Y_X and the generated edge image Y_B; E denotes mathematical expectation; D1 is the real-image discriminator network and D2 the edge discriminator network; X is a sharp image, Y_X its edge image, Z a blurred image, and Y_Z the edge image extracted from the blurred image.
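The gradient-penalty terms above can be illustrated with a toy linear critic whose gradient is known in closed form. In real training, ∇D would be obtained via automatic differentiation; everything below is a hypothetical sketch:

```python
import numpy as np

def gradient_penalty_linear(w, x_real, x_fake, n_samples=64, seed=0):
    """WGAN-GP penalty E[(||grad D(x_hat)||_2 - 1)^2] for a toy linear
    critic D(x) = w.x, whose gradient is w everywhere, evaluated at
    random interpolates x_hat = eps*x_real + (1-eps)*x_fake."""
    rng = np.random.default_rng(seed)
    eps = rng.uniform(0.0, 1.0, size=(n_samples, 1))
    # Interpolates between real and generated samples (for a linear critic
    # the gradient is constant, so x_hat is shown only for completeness).
    x_hat = eps * x_real + (1.0 - eps) * x_fake
    grad_norm = np.linalg.norm(w)  # identical at every x_hat
    return np.mean((grad_norm - 1.0) ** 2), x_hat
```

The penalty vanishes exactly when the critic is 1-Lipschitz along the interpolation line, which is the constraint WGAN-GP enforces softly.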
Since the extracted edge image serves as the conditional auxiliary information of the generative adversarial network and guides the generator toward a sharp result, the quality of edge extraction directly affects the model. An edge extractor is therefore designed. Because image noise strongly disturbs edge extraction, the extractor first denoises the image with guided filtering. A single edge-extraction operator also has well-known defects: the Laplacian and Canny operators are easily disturbed by image noise; the Sobel operator suppresses noise effectively but tends to produce false edges; and the Prewitt operator has low edge-extraction accuracy. To extract salient edges more reliably, the fused edge-extraction algorithm from the literature is adopted, which obtains the salient edge map by weighted fusion of the edges extracted by the Sobel, Roberts, Prewitt and Canny operators, with objective function
α = w₁E_S + w₂E_R + w₃E_P + w₄E_C + w₅   (3)
where E_S, E_R, E_P, E_C denote the edge responses of the Sobel, Roberts, Prewitt and Canny operators respectively and w₅ is a correction term.
To reduce the influence of image noise on edge extraction, the image is denoised with guided filtering before the edges are extracted.
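The denoising step can be sketched as a minimal self-guided filter in the spirit of He et al.'s guided-filter formulation, assuming the guide image equals the input; the radius and regularization value below are illustrative:

```python
import numpy as np

def _box_mean(img, r):
    """Mean filter over a (2r+1)x(2r+1) window with edge padding."""
    size = 2 * r + 1
    padded = np.pad(img, r, mode="edge")
    out = np.zeros_like(img, float)
    for i in range(size):
        for j in range(size):
            out += padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out / size ** 2

def guided_filter(p, r=2, eps=1e-2):
    """Self-guided filter (guide = input): smooths flat regions while
    largely preserving strong edges."""
    mean_p = _box_mean(p, r)
    var_p = _box_mean(p * p, r) - mean_p ** 2
    a = var_p / (var_p + eps)      # ~1 near strong edges, ~0 in flat areas
    b = mean_p * (1.0 - a)
    return _box_mean(a, r) * p + _box_mean(b, r)
```

Flat noisy regions get averaged (a ≈ 0), while high-variance edge neighborhoods pass through nearly unchanged (a ≈ 1), which is why the filter suits edge-preserving pre-denoising.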
Given the wide use of encoder-decoder architectures in computer vision, a similar structure is adopted as the generator backbone; the generator takes the blurred image Z and its edge image Y_Z as input and outputs the deblurred image B.
As shown in FIG. 3, the generator consists of 13 blocks: two strided convolution blocks with stride ½, nine residual blocks and two transposed convolution blocks. Each residual block contains a convolution layer, an instance-normalization layer and a ReLU activation, with Dropout regularization of probability 0.5 added after the first convolution layer. In addition, a global skip connection lets the generator learn a residual correction to the blurred image, which makes training faster and generalization stronger.
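One residual block (convolution → instance normalization → ReLU → convolution → instance normalization, plus the skip connection) can be sketched in numpy; weights are random placeholders and the training-time Dropout is omitted:

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Instance normalization: normalize each channel of each sample
    over its spatial dimensions. x has shape (N, C, H, W)."""
    mu = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def relu(x):
    return np.maximum(x, 0.0)

def conv3x3_same(x, weight):
    """Naive 3x3 'same' convolution; weight has shape (C_out, C_in, 3, 3)."""
    n, c_in, h, w = x.shape
    c_out = weight.shape[0]
    xp = np.pad(x, ((0, 0), (0, 0), (1, 1), (1, 1)))
    out = np.zeros((n, c_out, h, w))
    for o in range(c_out):
        for i in range(c_in):
            for di in range(3):
                for dj in range(3):
                    out[:, o] += weight[o, i, di, dj] * xp[:, i, di:di + h, dj:dj + w]
    return out

def res_block(x, w1, w2):
    """conv -> InstanceNorm -> ReLU -> conv -> InstanceNorm, plus skip."""
    y = relu(instance_norm(conv3x3_same(x, w1)))
    y = instance_norm(conv3x3_same(y, w2))
    return x + y
```

The additive skip means each block only has to learn a correction on top of its input, the same principle the global skip applies to the whole generator.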
The dual discriminator consists of a real-image discriminator and an edge discriminator with identical network structures but different training samples: the real-image discriminator is trained on the sharp images corresponding to the blurred images, while the edge discriminator is trained on the edge-gradient images extracted from the sharp images by the edge extractor. In particular, the sharp image is first preprocessed with guided filtering inside the edge extractor, and the extracted gradient edge image is then edge-enhanced, which strengthens the discriminators' discriminative ability and improves their robustness.
A single discriminator has the same network structure as PatchGAN and comprises five blocks in sequence. Each block except the last contains a convolution block, an instance-normalization layer and a LeakyReLU activation; the first four blocks are followed by a one-dimensional fully connected layer and a sigmoid to produce the output of the single discriminator network. The outputs of the two discriminator networks are then concatenated and fed to a one-dimensional fully connected layer and a sigmoid to produce the final output.
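The fusion of the two discriminator outputs can be sketched as follows. The weights are untrained placeholders and the shapes are illustrative; in the real model each patch-score map comes from the PatchGAN-style convolutional stack:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def head(patch_scores, w, b):
    """One-dimensional fully connected layer + sigmoid over the flattened
    PatchGAN score map, yielding a single discriminator's scalar output."""
    return sigmoid(patch_scores.ravel() @ w + b)

def dual_discriminator(d1_patches, d2_patches, w1, b1, w2, b2, w_fuse, b_fuse):
    """Reduce each discriminator's patch map to a scalar, then fuse the two
    scalars with a final FC + sigmoid into one real/fake score."""
    s1 = head(d1_patches, w1, b1)   # real-image discriminator output
    s2 = head(d2_patches, w2, b2)   # edge discriminator output
    return sigmoid(np.array([s1, s2]) @ w_fuse + b_fuse)
```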
Since the WGAN-GP method is adopted as the discriminant function and two discriminator networks are used, the total adversarial loss is the weighted sum of the two discriminator losses computed separately:
L_GAN = k·L_D1 + w·L_D2   (4)
L_D1 = E[D1(B)] − E[D1(X)] + λ·E_{x̂}[(‖∇_{x̂}D1(x̂)‖₂ − 1)²]   (5)
L_D2 = E[D2(Y_B)] − E[D2(Y_X)] + λ·E_{ŷ}[(‖∇_{ŷ}D2(ŷ)‖₂ − 1)²]   (6)
where L_D1 and L_D2 are the adversarial losses of the real-image discriminator network and the edge discriminator network respectively, k and w are weight parameters, and B and Y_B denote the restored sharp image and its edge image.
in order to make the generated image B and the sharp image X more similar in content and structure, research has been introduced into L 2 loss represents the difference between the generated image B and the sharp image X as a function of content perceptual loss:
Figure GDA0003820997470000054
wherein j represents the j-th layer, C j H j W j Representing the size of the level j feature map,
Figure GDA0003820997470000061
a feature map representing the clear image of layer j,
Figure GDA0003820997470000062
a feature map representing the generated image of layer j.
The total loss function is a weighted combination of the adversarial loss and the content-perceptual loss, as shown in formula (8):
L = β·L_GAN + α·L_X   (8)
where L_GAN is the adversarial loss, L_X the content-perceptual loss, and β, α are weighting parameters.
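Formulas (7) and (8) combine as sketched below. The weight values are illustrative, since the patent does not state them, and the feature maps stand in for outputs of a fixed pretrained network:

```python
import numpy as np

def perceptual_loss(phi_x, phi_b):
    """L_X = (1 / (C_j H_j W_j)) * ||phi_j(X) - phi_j(B)||_2^2 for one
    feature layer; phi_* are (C, H, W) feature maps."""
    c, h, w = phi_x.shape
    return np.sum((phi_x - phi_b) ** 2) / (c * h * w)

def total_loss(l_d1, l_d2, phi_x, phi_b, k=0.5, w=0.5, beta=1.0, alpha=100.0):
    """L = beta * (k*L_D1 + w*L_D2) + alpha * L_X.
    All weight values here are placeholders, not the patent's settings."""
    l_gan = k * l_d1 + w * l_d2
    return beta * l_gan + alpha * perceptual_loss(phi_x, phi_b)
```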
Specifically, two experiments are designed: one comparing the algorithm with leading end-to-end deblurring models, and one ablation on the algorithm's own structure, to verify deblurring efficiency and the effectiveness of the network structure respectively. Three indices evaluate deblurring performance: peak signal-to-noise ratio (PSNR), structural similarity (SSIM) and mean deblurring time. PSNR and SSIM are defined as follows:
PSNR = 10·log₁₀(MAX_I² / MSE),  MSE = (1/(m·n)) Σᵢ Σⱼ [I(i, j) − K(i, j)]²   (9)
SSIM(x, y) = ((2μ_x·μ_y + c₁)(2σ_xy + c₂)) / ((μ_x² + μ_y² + c₁)(σ_x² + σ_y² + c₂))   (10)
where I is the sharp image, K the deblurred image, m and n the image width and height, and MAX_I the maximum pixel value of the image; x and y denote the deblurred and sharp image samples respectively, μ_x and μ_y their pixel means, σ_x and σ_y the standard deviations of their pixel values, σ_xy the covariance of the pixel values of x and y, and c₁, c₂ small stabilizing constants.
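Both metrics can be computed directly from formulas (9) and (10). The SSIM below is the single-window (global) form; library implementations typically average it over local sliding windows:

```python
import numpy as np

def psnr(i, k, max_val=1.0):
    """Peak signal-to-noise ratio; i is the sharp image, k the deblurred one."""
    mse = np.mean((i.astype(float) - k.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(x, y, max_val=1.0):
    """Global (single-window) SSIM with the usual stabilizing constants."""
    c1 = (0.01 * max_val) ** 2
    c2 = (0.03 * max_val) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```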
All experiments are implemented in Python on the deep-learning framework PyTorch, with the following hardware configuration: an Intel i9-9900K CPU and an NVIDIA GTX 1080 Ti GPU with 11 GB of memory. Training follows the strategy of WGAN-GP [16]: the Adam optimization algorithm with β₁ = 0.9 and β₂ = 0.999, cross-training the generator and discriminators with five gradient-descent steps on the discriminators for every one on the generator. The learning rate is initially 10⁻⁴ and decays to 10⁻⁶ over 600 epochs. As in various CGANs, all models are trained with batch size 1. Unless stated otherwise, all experiments are trained on the same dataset with the same PC configuration.
Following the method used in DeblurGAN, a dataset was built from the pest images published for the AI Challenger 2018 pest-identification competition: 2000 sharp images were blurred with 12 different blur kernels, and Gaussian noise with variance 0.0007 or 0.0003 was added at random to mimic real motion blur, yielding a new dataset of 2000 image pairs.
To verify the advancement and validity of the proposed algorithm, it is compared with the DeblurGAN method proposed by Orest Kupyn et al. and with the edge-heuristic GAN for non-uniform blind deblurring proposed by Shuai Zheng et al.
The evaluation indices are the two defined above, PSNR and SSIM, together with the average time each algorithm takes to deblur. Each algorithm is run on 30 images, the experiment is repeated independently 20 times under identical conditions, and the averages are taken as the final result. The visual comparison of deblurring results is shown in FIG. 4, the evaluation indices in Table 1, and the deblurring-time comparison in Table 2.
Table 1: mean evaluation-index values on the test set
[table given as an image in the source; values not reproduced]
The results in Table 1 show that the PSNR and SSIM of the proposed algorithm are on average 6.9% and 7.1% higher than those of Shuai Zheng's method, and 8.5% and 13.4% higher than those of Orest Kupyn's method, indicating that its deblurring performance is significantly better than the other algorithms.
Table 2: average deblurring time on the test set
[table given as an image in the source; values not reproduced]
The results in Table 2 show that the proposed algorithm takes 4.3% more time than Shuai Zheng's method and 2.7% less than Orest Kupyn's, indicating that its deblurring time does not increase significantly.
Compared with conventional models, this model improves deblurring performance by adding edge information as auxiliary input, using a dual discriminator, and adding image preprocessing to the edge-extraction step. Since Shuai Zheng et al. already verified that feeding edge information into the generator as an auxiliary condition improves deblurring, that point is not re-proven here. Instead, three baseline models are designed to validate the dual-discriminator structure and the preprocessing applied before edge extraction, with PSNR and SSIM as evaluation indices. Baseline I is the algorithm proposed here; Baseline II keeps everything else unchanged but replaces the discriminator network with the single discriminator of a conventional CGAN; Baseline III removes the image preprocessing performed before edge extraction.
Table 3: mean evaluation-index values on the test set
[table given as an image in the source; values not reproduced]
The results in Table 3 show that the PSNR and SSIM of the proposed algorithm are on average 7% and 14% higher than those of Baseline II, and 3.4% and 5.9% higher than those of Baseline III.
The experiments therefore show that, compared with Baseline II, the dual-discriminator structure gives a better overall deblurring effect, and compared with Baseline III, the version with data preprocessing renders local detail more faithfully. Using a dual-discriminator structure together with noise-reducing preprocessing before edge extraction improves the model's image-deblurring capability.
The foregoing describes preferred embodiments of the invention. It should be understood that the invention is not limited to the precise forms disclosed, and that various other combinations, modifications and environments falling within the scope of the inventive concept, whether described above or apparent to those skilled in the relevant art, may be resorted to. Modifications and variations may be effected by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (3)

1. An image deblurring method for improving a double-discrimination network-confrontation model is characterized by comprising the following steps:
the method comprises the following steps: preprocessing the blurred image, and extracting the image edge of the blurred image by adopting a fusion extraction method;
step two: inputting the blurred image and the extracted image edge into a generator together by using the extracted image edge as auxiliary information to generate a deblurred image;
step three: inputting the deblurred image and the edge image thereof, the clear image and the edge image thereof into a discriminator network respectively, and training a discriminator;
step four: repeating the second step and the third step until the discriminator can not judge whether the deblurred image is generated by the generator, and finishing image deblurring by the generator;
the method for extracting the image edge of the blurred image by adopting the fusion extraction method in the first step comprises the following processes of denoising the blurred image by using guided filtering on the blurred image through an image extractor, and obtaining a significant edge by weighting and fusing edges extracted by Sobel, roberts, prewitt and Canny 4 operators by adopting the fusion edge extraction method of edge extraction operators, wherein an algorithm objective function is as follows:
Figure FDA0003820997460000011
α=w 1 E s +w 2 E R +w 3 E P +w 4 E c +w 5
wherein E is s ,E R ,E P ,E c Represent the Sobel, roberts, prewitt, and Canny 4 operators, w, respectively 5 Is a correction term;
the judgers in the third step comprise a real image judger and an edge judger, and the network structures of the two judgers are the same, wherein, the training data of the real image judger is a clear image corresponding to the fuzzy image, and the training sample of the edge judger is an edge gradient image extracted by the clear image through an edge extractor;
the network structure of the real image discriminator or the edge discriminator is the same as that of PatchGAN, the real image discriminator or the edge discriminator comprises 5 modules which are connected in sequence, except the last module, each of the other modules comprises a volume block, an instant normalization layer and a LeakyReLU activation layer, and the first four modules are connected with a one-dimensional full connection layer and a sigmoid finally to obtain the output of a single discrimination network; connecting the outputs of the two discriminators as inputs with a one-dimensional full connection layer and sigmoid to obtain final output;
the joint training process of the generator and the discriminators comprises the following steps: inputting the blurred image, together with the edge image obtained by the edge extractor, into the generator to obtain a restored deblurred image; feeding the deblurred image and the sharp image, and the edge image of the deblurred image and the edge image of the sharp image, into the real-image discriminator and the edge discriminator respectively for training; when the discriminators can no longer distinguish whether an image was produced by the generator, the generator's output achieves the image deblurring result.
2. The image deblurring method of the improved double-discriminator adversarial network model according to claim 1, wherein the generator in step two takes the blurred image and the edge image of the blurred image as input and outputs the deblurred image; structurally, the generator comprises two convolution blocks with stride 1/2, 9 residual blocks, and two deconvolution blocks connected in sequence, wherein each of the 9 residual blocks comprises a convolution layer, an instance normalization layer, and a ReLU activation layer, with Dropout regularization at probability 0.5 added after the first convolution layer.
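The generator of claim 2 can be sketched in PyTorch as follows. Several details are assumptions in the spirit of DeblurGAN rather than claim text: the "stride 1/2" convolutions are interpreted as stride-2 downsampling convolutions, the residual block is given a second conv + norm sub-layer, and the channel widths and the 4-channel input (blurred RGB plus one edge map) are illustrative.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """One of the 9 residual blocks: conv + instance norm + ReLU, with
    Dropout(p=0.5) after the first convolution; the second conv+norm
    sub-layer mirrors DeblurGAN's ResBlock and is an assumption."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.InstanceNorm2d(ch),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.InstanceNorm2d(ch),
        )

    def forward(self, x):
        return x + self.body(x)  # residual connection

class Generator(nn.Module):
    """Two downsampling conv blocks, 9 ResBlocks, two deconvolution blocks."""
    def __init__(self, in_ch=4, out_ch=3, base=64):  # 4 = blurred RGB + edge map
        super().__init__()
        def down(cin, cout):
            return [nn.Conv2d(cin, cout, 3, stride=2, padding=1),
                    nn.InstanceNorm2d(cout), nn.ReLU()]
        def up(cin, cout):
            return [nn.ConvTranspose2d(cin, cout, 3, stride=2,
                                       padding=1, output_padding=1),
                    nn.InstanceNorm2d(cout), nn.ReLU()]
        layers = down(in_ch, base) + down(base, base * 2)
        layers += [ResBlock(base * 2) for _ in range(9)]
        layers += up(base * 2, base) + up(base, base)
        layers += [nn.Conv2d(base, out_ch, 3, padding=1), nn.Tanh()]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)
```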
3. The image deblurring method of the improved double-discriminator adversarial network model as claimed in claim 1, wherein the original objective function leads to gradient vanishing during loss computation and iterative training, and the objective function is therefore improved as follows:

[formula image FDA0003820997460000021 -- improved objective function, not reproduced in this text]

wherein x ~ P_x(x) denotes the data distribution of sharp images; y_X ~ P_{y_X}(y_X) denotes the data distribution of edge images of sharp images; z ~ P_z(z) denotes the data distribution of blurred images;

x̂ ~ P_x̂(x̂) denotes a random sample taken on the line connecting samples of the real image X and the generated image B, where x̂ is defined by [formula image FDA0003820997460000024 -- not reproduced];

ŷ ~ P_ŷ(ŷ) denotes a random sample taken on the line connecting samples of the real edge image Y_X and the generated edge image Y_B, where ŷ is defined by [formula image FDA0003820997460000026 -- not reproduced];

E denotes the mathematical expectation; D_1 denotes the real-image discriminator network; D_2 denotes the edge discriminator network; X is a real (sharp) image; Y_X is the edge image of the sharp image; Z is a blurred image; and Y_Z is the edge image extracted from the blurred image.
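The interpolated samples x̂ and ŷ described in claim 3 are the construction used by WGAN-GP-style objectives: a random point on the line between a real and a generated sample, at which the discriminator's gradient norm is penalized toward 1 to curb vanishing gradients. The sketch below shows that mechanism; the penalty weight lam=10 and the exact penalty form are the standard WGAN-GP defaults and may differ from the formulas in the claim's images.

```python
import torch

def gradient_penalty(D, real, fake, lam=10.0):
    """WGAN-GP-style gradient penalty for one discriminator branch.
    x_hat = eps*real + (1-eps)*fake, eps ~ U[0,1] per sample;
    the penalty pushes ||grad_x_hat D(x_hat)||_2 toward 1."""
    eps = torch.rand(real.size(0), 1, 1, 1)  # one epsilon per sample
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    d_out = D(x_hat)
    grads = torch.autograd.grad(outputs=d_out.sum(), inputs=x_hat,
                                create_graph=True)[0]
    norms = grads.flatten(1).norm(2, dim=1)
    return lam * ((norms - 1) ** 2).mean()
```

The same function would be applied twice, once to D_1 with (X, B) and once to D_2 with (Y_X, Y_B), and both penalty terms added to the adversarial loss.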
CN201911145549.9A 2019-11-20 2019-11-20 Image deblurring method for improving double-discrimination countermeasure network model Active CN111340716B (en)


Publications (2)

Publication Number Publication Date
CN111340716A CN111340716A (en) 2020-06-26
CN111340716B true CN111340716B (en) 2022-12-27

Family

ID=71185357


Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111951192A (en) * 2020-08-18 2020-11-17 义乌清越光电科技有限公司 Shot image processing method and shooting equipment
CN112102205B (en) * 2020-10-15 2024-02-09 平安科技(深圳)有限公司 Image deblurring method and device, electronic equipment and storage medium
CN112257787B (en) * 2020-10-23 2023-01-17 天津大学 Image semi-supervised classification method based on generation type dual-condition confrontation network structure
CN112488940A (en) * 2020-11-30 2021-03-12 哈尔滨市科佳通用机电股份有限公司 Method for enhancing image edge of railway locomotive component
CN112541877B (en) * 2020-12-24 2024-03-19 广东宜教通教育有限公司 Defuzzification method, system, equipment and medium for generating countermeasure network based on condition
CN114697709B (en) * 2020-12-25 2023-06-06 华为技术有限公司 Video transmission method and device
CN112734678B (en) * 2021-01-22 2022-11-08 西华大学 Image motion blur removing method based on depth residual shrinkage network and generation countermeasure network
CN113538263A (en) * 2021-06-28 2021-10-22 江苏威尔曼科技有限公司 Motion blur removing method, medium, and device based on improved DeblurgAN model

Citations (2)

Publication number Priority date Publication date Assignee Title
CN110223259A * 2019-06-14 2019-09-10 North China Electric Power University (Baoding) Road traffic blurred-image enhancement method based on a generative adversarial network
CN110378844A * 2019-06-14 2019-10-25 Hangzhou Dianzi University Multi-scale blind image motion-deblurring method based on a cyclic generative adversarial network

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
US20050033139A1 (en) * 2002-07-09 2005-02-10 Deus Technologies, Llc Adaptive segmentation of anatomic regions in medical images with fuzzy clustering
CN110073301A (en) * 2017-08-02 2019-07-30 强力物联网投资组合2016有限公司 The detection method and system under data collection environment in industrial Internet of Things with large data sets
CN108416752B * 2018-03-12 2021-09-07 Sun Yat-sen University Image motion-deblurring method based on a generative adversarial network
CN108550118B * 2018-03-22 2022-02-22 Shenzhen University Motion-blurred image deblurring method, apparatus, device, and storage medium
CN108734677B * 2018-05-21 2022-02-08 Nanjing University Blind deblurring method and system based on deep learning
CN109166102A * 2018-07-24 2019-01-08 Ocean University of China Image-to-image translation method based on a critical-region candidate adversarial network
CN109523476B * 2018-11-02 2022-04-05 Wuhan Fiberhome Digital Technology Co., Ltd. License-plate motion-deblurring method for video detection


Non-Patent Citations (2)

Title
License plate image motion deblurring based on deep learning; Mao Yong et al.; Journal of Hangzhou Dianzi University (Natural Science Edition); 2018-09-15 (No. 05); full text *
Research progress and prospects of motion-blurred image restoration; Liu Guixiong et al.; Laser Journal; 2019-04-25 (No. 04); full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant