CN112435187A - Single-image blind motion blur removing method for generating countermeasure network based on aggregation residual - Google Patents
- Publication number
- CN112435187A (Publication) — Application CN202011321461.0A
- Authority
- CN
- China
- Prior art keywords
- image
- motion blur
- network
- generator
- reconstructed
- Prior art date
- Legal status: Pending (the status listed is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G06T5/73 — Deblurring; Sharpening (G06T5/00 Image enhancement or restoration)
- G06N3/045 — Combinations of networks (G06N3/04 Neural network architecture)
- G06N3/08 — Learning methods (G06N3/02 Neural networks)
- G06T7/246 — Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/41 — Analysis of texture based on statistical description of texture
- G06T2207/10004 — Still image; Photographic image
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
Abstract
The invention discloses a single-image blind motion deblurring method based on an aggregated-residual generative adversarial network (GAN), comprising the following steps: (1) acquire a data set, pair each blurred image with its corresponding sharp image as a sample, and construct a sample set; (2) construct a generative adversarial network for motion deblurring, comprising three parts: the first is a generator that produces a reconstructed image; the second is a discriminator that compares the sharp image with the reconstructed image, driving the generator toward reconstructions indistinguishable from sharp images; the third is a VGG19 network used to construct a perceptual-loss regularization term; (3) train the adversarial network with the sample set and, when training is finished, extract the generator with fixed parameters as the motion deblurring model; (4) at application time, input the blurred image into the motion deblurring model and obtain the sharp output image, achieving single-image blind motion deblurring.
Description
Technical Field
The invention belongs to the technical field of convolutional neural networks and blind image deblurring, and in particular relates to a single-image blind deblurring method based on an aggregated-residual generative adversarial network.
Background
With the rapid development of science and technology, photographic devices such as smartphones and digital cameras have become widely available, making image acquisition and transmission simple and convenient. During photography, camera shake from the photographer's hand, or motion of the subject during the exposure, produces motion blur in the image, which in turn degrades downstream applications such as image semantic segmentation and object detection. Therefore, removing motion blur from images, as one of the image restoration techniques, is of significant application and research value.
Motion blur is ubiquitous in photography, and eliminating pixel-level non-uniform motion blur is especially difficult on lightweight mobile devices such as phones and vehicle-mounted cameras. Conventional image restoration methods mainly include inverse filtering, Wiener filtering, and the Lucy-Richardson algorithm. They all iteratively update an intermediate image and a blur kernel using an image degradation model and a blur kernel estimate, but many real images do not fit the assumptions of any particular model. With the steady development of deep learning, deblurring with convolutional neural networks has been widely pursued by researchers; end-to-end blind deblurring methods sidestep the blur kernel estimation problem of traditional methods and have achieved major breakthroughs in blind image deblurring.
In recent years, CNN-based image deblurring methods have fallen mainly into two categories: methods that estimate a blur kernel with the network, and end-to-end methods that output a deblurred image directly. Among kernel-estimation methods, Sun et al. reconstruct the blurred image from a blur kernel estimated by a neural network, and Gong et al. estimate the motion flow of an image with a fully convolutional network (FCN) and reconstruct accordingly. Among end-to-end methods, Nah et al. propose a multi-scale convolutional neural network that outputs the reconstructed image directly. Recently, the generative adversarial network (GAN) has also been applied to the deblurring field.
Disclosure of Invention
To address image motion blur in real scenes, the invention provides a single-image blind motion deblurring method based on an aggregated-residual generative adversarial network.
The technical scheme adopted by the invention is as follows:
A single-image blind motion deblurring method based on an aggregated-residual generative adversarial network, characterized by comprising the following steps:
(1) acquire a data set, pair each blurred image with its corresponding sharp image as a sample, and construct a sample set;
(2) construct a generative adversarial network for motion deblurring, comprising three parts: the first is a generator that produces a reconstructed image; the second is a discriminator that compares the sharp image with the reconstructed image, driving the generator toward reconstructions indistinguishable from sharp images; the third is a VGG19 network used to construct a perceptual-loss regularization term;
(3) train the adversarial network with the sample set and, when training is finished, extract the generator with fixed parameters as the motion deblurring model;
(4) at application time, input the blurred image into the motion deblurring model and obtain the sharp output image, achieving single-image blind motion deblurring.
Preferably, before the blurred and sharp images are fed into the adversarial network, the images are cropped to the network's input size.
Preferably, the generator comprises three parts. The first is a feature extraction module, consisting of a dilated convolution layer, an instance normalization layer, and an activation layer, which captures shallow texture features of the blurred image and outputs them. The second consists of several ResNeXt residual blocks that extract deep texture features of the image, each block being composed of convolutional layers and activation functions. The third is an upsampling module, composed of a convolutional layer, a transposed convolution layer, and an activation function, which restores the feature maps to the full image resolution.
Preferably, the discriminator, composed of convolutional layers, activation layers, and batch normalization layers, is trained adversarially against the reconstructed images produced by the generator until it can no longer distinguish reconstructed images from sharp images.
Preferably, the reconstructed image produced by the generator and the sharp image are both fed into a pre-trained VGG19 network; the features extracted from each are compared, and this comparison serves as a regularization term in the adversarial network's loss function.
Preferably, the activation function in the generator and the discriminator is the PReLU function:
PReLU(x) = max(0, x) + a·min(0, x)
i.e., PReLU(x) = x for x > 0 and PReLU(x) = ax for x ≤ 0, where a is a learnable slope applied to negative inputs.
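The piecewise behaviour can be sketched in a few lines of Python (a minimal scalar version; in the network, a is a learnable per-channel parameter, and 0.25 is only an illustrative default):

```python
def prelu(x, a=0.25):
    """PReLU: identity for positive inputs, slope `a` for negative inputs.

    `a` is learned during training in the actual network; 0.25 here is
    just an illustrative initialization.
    """
    return x if x > 0 else a * x
```

Note that setting a = 0 recovers the ordinary ReLU, while a small positive a keeps a gradient flowing for negative activations.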
Preferably, during training of the adversarial network, the peak signal-to-noise ratio and structural similarity between the reconstructed image and the corresponding sharp label image are computed and the loss is optimized. The loss function L comprises an adversarial loss L_GAN and a content loss L_content:
L = L_GAN + λ·L_content
where λ is a weight; I_B denotes the input blurred image; G(I_B) the reconstructed image output by the generator G; D(G(I_B)) the discriminator's score for the reconstructed image; N the number of samples; φ the feature map output by the seventh convolutional layer of VGG-19; W and H the width and height of that feature map; x and y pixel positions in the feature map; and I_S the sharp image.
Compared with the prior art, the invention has at least the following beneficial effects:
The proposed single-image blind motion deblurring method builds on the generative adversarial network. In the generator it replaces ordinary convolution layers with dilated convolutions and adopts ResNeXt residual modules as the backbone, reducing hyperparameters and improving model accuracy, which in turn improves both the subjective visual quality and the objective evaluation metrics of the deblurred image. To ensure the content accuracy of the deblurred image during adversarial training, the method combines the adversarial loss with a content loss.
Drawings
To illustrate the embodiments of the present invention and the prior art more clearly, the drawings used in their description are briefly introduced below. The drawings show only some embodiments of the invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a diagram of a training process in a single-image blind de-motion blur method for generating a countermeasure network based on aggregated residuals according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a generator network according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a feature extraction module according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an upsampling module according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a ResNeXt residual error module according to an embodiment of the present invention;
fig. 6 is a before-and-after deblurring comparison on the GoPro data set according to the embodiment of the present invention, where a-d are sharp images, e-h are blurred images, and i-l are deblurred images.
Detailed Description
To make the objectives, technical solutions, and advantages of the invention clearer, the invention is described in further detail below with reference to the drawings and examples. It should be understood that the detailed description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the invention.
This embodiment provides a single-image blind motion deblurring method based on an aggregated-residual generative adversarial network, comprising the following steps:
step 1, acquiring a GoPRo paired data set, connecting the GoPRo paired data set into an image pair of a fuzzy image and a clear image, and randomly cutting the image pair into image blocks with the size of 256 multiplied by 256 to serve as input for generating a countermeasure network.
Step 2: construct a generative adversarial network for motion deblurring. The network consists of three parts: a generator that produces the reconstructed image; a discriminator that compares the sharp picture with the generated reconstruction (the false image, denoted Fake) and scores whether it matches real-picture quality; and a VGG19 network used to construct the perceptual-loss regularization term.
As shown in fig. 1 to 5, the generator includes three parts, which are:
the first is a feature extraction module, which includes a (7 × 7) convolution layer, instance normalization, and a prilu active layer, and then extracts a feature map from the picture after passing through two (3 × 3) void convolution layers, instance normalization, and a prilu active layer, where the disparity rate of the (3 × 3) void convolution layer is 2, the field of experience after convolution is 5, which is equivalent to the (5 × 5) convolution effect, and more feature map information is extracted by using the void convolution, and finally the picture is extracted into feature maps of 256 channels. The feature extraction module is shown in fig. 3. Hole convolution may increase the receptive field, but may not change the size of the image output feature map.
The second is the module that extracts deep texture features for deblurring. It consists of 9 sequentially connected ResNeXt residual modules, each made of two ResNeXt blocks in sequence. The ResNeXt block replaces the original three-layer convolution block of ResNet with 32 parallel paths of identical topology, reducing hyperparameters and improving model accuracy. Each of the 32 parallel three-layer paths comprises a 1×1 convolution with 256 input channels and 4 output channels, a 3×3 convolution with 4 input and 4 output channels, and a 1×1 convolution with 4 input and 256 output channels. Connecting two ResNeXt blocks in sequence captures more image information, better fits the complex correlations between channels, and limits parameter count and computation. Finally, the output of each ResNeXt residual module is added to the output of the previous module (a residual connection). The ResNeXt residual block is shown in fig. 5.
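The channel sizes above let us tally the weight count of one 32-path block (a back-of-the-envelope sketch; biases are omitted, as is common when normalization follows, and the plain ResNet-style 256-64-256 bottleneck used for comparison is an assumption for illustration):

```python
def conv_params(k, c_in, c_out):
    """Weight count of a k×k convolution, bias omitted."""
    return k * k * c_in * c_out

def resnext_block_params(cardinality=32, width=4, channels=256):
    """Parameters of one ResNeXt block: `cardinality` parallel
    1×1 -> 3×3 -> 1×1 bottleneck paths of width `width`."""
    path = (conv_params(1, channels, width)
            + conv_params(3, width, width)
            + conv_params(1, width, channels))
    return cardinality * path
```

Running this gives 70,144 weights for the 32-path block versus 69,632 for a plain 256-64-256 bottleneck, i.e., under these assumed channel sizes the budgets are comparable; ResNeXt's gain comes chiefly from the added cardinality.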
The third is the upsampling module, which upsamples the output of the ResNeXt residual modules. The residual feature maps are first upsampled down to 64 channels by two transposed convolutions with 3×3 kernels, then restored to a three-channel image by a 7×7 convolution layer; the convolutions are dilated with dilation rate 2. Finally, a global skip connection adds the result to the input picture, removing the residual information and yielding the deblurred picture. The upsampling module is shown in fig. 4.
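The spatial doubling of the transposed convolutions can be sketched with the usual output-size formula (stride, padding, and output padding are assumptions — the text specifies only the 3×3 kernel — chosen so each layer doubles the spatial size, e.g. 64 → 128 → 256):

```python
def transposed_conv_out(n, k, stride=2, pad=1, out_pad=1):
    """Spatial output size of a transposed convolution
    (PyTorch convention): (n - 1)*stride - 2*pad + k + out_pad."""
    return (n - 1) * stride - 2 * pad + k + out_pad
```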
The discriminator is a Markovian (PatchGAN) discriminator built from 5 stacked convolution blocks: the first block consists of a 4×4 convolution layer and a LeakyReLU activation; the last block consists of a 4×4 convolution layer only; each remaining block consists of a convolution layer, a batch normalization layer, and a LeakyReLU activation. The sharp picture and the reconstructed picture are fed into the discriminator, which obtains a receptive field through the convolution layers and produces an overall score to distinguish whether the input is a sharp picture or a reconstruction.
The inputs to the VGG19 network are the generated false picture (Fake) and the corresponding sharp real picture from the data set; it is used to construct the perceptual loss function.
Step 3: with the blurred images from Step 1 as input, train the constructed generative adversarial network. During training the generator continually produces Fake image blocks, which compete against the sharp image blocks in the discriminator, until the network converges and the discriminator can no longer tell them apart; the result is the trained single-image motion deblurring GAN model.
During training, the generator deblurs the blurred image and the peak signal-to-noise ratio and structural similarity of the result are recorded; then either a deblurred picture or a sharp picture is fed to the discriminator, which must decide which it is, and the loss is updated. The generator is frozen while the discriminator is being trained, and the two stages are trained alternately, as shown in fig. 1.
The training hyperparameters are set as follows: Adam is used as the optimization algorithm for both the generator and the discriminator, with the learning rate set to 1×10⁻⁴ and reduced to 0 after 150 epochs; batch_size = 8; the Adam exponential decay rates are β₁ = 0.5 and β₂ = 0.999.
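The schedule can be sketched as follows (the 150-epoch linear decay length is an assumption; the text says only that the rate is reduced to 0 after 150 epochs):

```python
def learning_rate(epoch, base_lr=1e-4, constant_epochs=150, decay_epochs=150):
    """Constant base LR for the first `constant_epochs`, then a linear
    decay to 0 over the following `decay_epochs` (decay length assumed)."""
    if epoch < constant_epochs:
        return base_lr
    t = (epoch - constant_epochs) / decay_epochs
    return max(0.0, base_lr * (1.0 - t))
```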
The training loss consists of two parts: an adversarial loss and a content loss. The Wasserstein-1 distance of Wasserstein GAN is chosen as the adversarial loss, denoted L_GAN, to improve training stability; the content loss, denoted L_content, is a perceptual loss that evaluates the difference between the generated image and the original sharp image:
L = L_GAN + λ·L_content
where L_GAN is the adversarial loss, L_content is the content loss, and λ = 100. Here I_B denotes the input blurred picture; G(I_B) the reconstructed picture output by the generator network G; D(G(I_B)) the discriminator's score for the reconstructed picture; N the number of samples; φ the feature map output by the seventh convolutional layer of VGG-19; W and H the width and height of that feature map; and I_S the standard sharp image.
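Under the definitions above, the two loss terms and their weighted sum can be sketched with NumPy (feature maps are treated here as plain W×H arrays; a real implementation would operate on VGG activations of the generator's output):

```python
import numpy as np

def wgan_g_loss(d_scores):
    """WGAN adversarial loss for the generator: negative mean critic
    score over the N reconstructed samples."""
    return -np.mean(d_scores)

def perceptual_loss(feat_sharp, feat_fake):
    """Content loss: squared difference between VGG feature maps of the
    sharp and reconstructed images, averaged over the W×H map."""
    h, w = feat_sharp.shape[-2], feat_sharp.shape[-1]
    return np.sum((feat_sharp - feat_fake) ** 2) / (w * h)

def total_loss(d_scores, feat_sharp, feat_fake, lam=100.0):
    """L = L_GAN + λ·L_content with λ = 100 as in the embodiment."""
    return wgan_g_loss(d_scores) + lam * perceptual_loss(feat_sharp, feat_fake)
```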
When training is finished, the image quality produced by the trained generator is tested with metrics such as PSNR (peak signal-to-noise ratio) and SSIM (structural similarity), and the training parameters with the best evaluation are selected; loading these parameters into the generator directly yields the motion deblurring model.
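PSNR, one of the two selection metrics, is straightforward to compute (a standard definition; SSIM is omitted here for brevity):

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```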
A validation-set image is then selected to test the deblurring performance of the trained model: the deblurring quality is determined from the peak signal-to-noise ratio and structural similarity values, combined with observation of the subjective visual quality.
Step 4: at application time, input a real-world picture containing motion blur into the motion deblurring model, which outputs a sharp, deblurred image.
In this embodiment, blurred images were deblurred with the motion deblurring model; the results are shown in fig. 6, where a-d are sharp images, e-h are blurred images, and i-l are deblurred images. As fig. 6 shows, the deblurring quality of the model provided by this embodiment is very good.
The above embodiments illustrate the technical solutions and advantages of the invention. It should be understood that they are only preferred embodiments and are not intended to limit the invention; any modifications, additions, or equivalents made within the scope of the principles of the invention shall fall within the protection scope of the invention.
Claims (7)
1. A single-image blind motion deblurring method based on an aggregated-residual generative adversarial network, characterized by comprising the following steps:
(1) acquire a data set, pair each blurred image with its corresponding sharp image as a sample, and construct a sample set;
(2) construct a generative adversarial network for motion deblurring, comprising three parts: the first is a generator that produces a reconstructed image; the second is a discriminator that compares the sharp image with the reconstructed image, driving the generator toward reconstructions indistinguishable from sharp images; the third is a VGG19 network used to construct a perceptual-loss regularization term;
(3) train the adversarial network with the sample set and, when training is finished, extract the generator with fixed parameters as the motion deblurring model;
(4) at application time, input the blurred image into the motion deblurring model and obtain the sharp output image, achieving single-image blind motion deblurring.
2. The single-image blind motion deblurring method based on an aggregated-residual generative adversarial network according to claim 1, characterized in that, before the blurred and sharp images are fed into the adversarial network, the images are cropped to the network's input size.
3. The single-image blind motion deblurring method based on an aggregated-residual generative adversarial network according to claim 1, characterized in that the generator comprises three parts: the first is a feature extraction module, consisting of a dilated convolution layer, an instance normalization layer, and an activation layer, which captures shallow texture features of the blurred image and outputs them; the second consists of several ResNeXt residual blocks that extract deep texture features of the image, each block being composed of convolutional layers and activation functions; the third is an upsampling module, composed of a convolutional layer, a transposed convolution layer, and an activation function, which restores the feature maps to the full image resolution.
4. The single-image blind motion deblurring method based on an aggregated-residual generative adversarial network according to claim 1, characterized in that the discriminator, composed of convolutional layers, activation layers, and batch normalization layers, is trained adversarially against the reconstructed images produced by the generator until it can no longer distinguish reconstructed images from sharp images.
5. The single-image blind motion deblurring method based on an aggregated-residual generative adversarial network according to claim 1, characterized in that the reconstructed image produced by the generator and the sharp image are both fed into a pre-trained VGG19 network; the features extracted from each are compared, and this comparison serves as a regularization term in the adversarial network's loss function.
6. The single-image blind motion deblurring method based on an aggregated-residual generative adversarial network according to claim 1, characterized in that the activation function in the generator and the discriminator is the PReLU function.
7. The single-image blind motion deblurring method based on an aggregated-residual generative adversarial network according to claim 1, characterized in that, during training of the adversarial network, the peak signal-to-noise ratio and structural similarity between the reconstructed image and the corresponding sharp label image are computed and the loss is optimized, the loss function L comprising an adversarial loss L_GAN and a content loss L_content:
L = L_GAN + λ·L_content
where λ is a weight; I_B denotes the input blurred image; G(I_B) the reconstructed image output by the generator G; D(G(I_B)) the discriminator's score for the reconstructed image; N the number of samples; φ the feature map output by the seventh convolutional layer of VGG-19; W and H the width and height of the feature map; x and y pixel positions in the feature map; and I_S the sharp image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011321461.0A CN112435187A (en) | 2020-11-23 | 2020-11-23 | Single-image blind motion blur removing method for generating countermeasure network based on aggregation residual |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011321461.0A CN112435187A (en) | 2020-11-23 | 2020-11-23 | Single-image blind motion blur removing method for generating countermeasure network based on aggregation residual |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112435187A true CN112435187A (en) | 2021-03-02 |
Family
ID=74693663
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011321461.0A Pending CN112435187A (en) | 2020-11-23 | 2020-11-23 | Single-image blind motion blur removing method for generating countermeasure network based on aggregation residual |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112435187A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110378844A (en) * | 2019-06-14 | 2019-10-25 | Hangzhou Dianzi University | Multi-scale single-image blind motion deblurring method based on a cyclic generative adversarial network
CN111199522A (en) * | 2019-12-24 | 2020-05-26 | Chongqing University of Posts and Telecommunications | Single-image blind motion deblurring method based on a multi-scale-residual generative adversarial network
CN111612703A (en) * | 2020-04-22 | 2020-09-01 | Hangzhou Dianzi University | Image blind deblurring method based on a generative adversarial network
Worldwide applications: 2020-11-23 — CN CN202011321461.0A (active, Pending)
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112837245A (en) * | 2021-03-16 | 2021-05-25 | 西北工业大学 | Dynamic scene deblurring method based on multi-mode fusion |
CN112837245B (en) * | 2021-03-16 | 2024-05-31 | 西北工业大学 | Dynamic scene deblurring method based on multi-mode fusion |
CN113129237A (en) * | 2021-04-26 | 2021-07-16 | 广西师范大学 | Depth image deblurring method based on multi-scale fusion coding network |
CN113205464A (en) * | 2021-04-30 | 2021-08-03 | 作业帮教育科技(北京)有限公司 | Image deblurring model generation method, image deblurring method and electronic equipment |
CN113129240B (en) * | 2021-05-19 | 2023-07-25 | 广西师范大学 | Motion blur removing method for industrial packaging characters |
CN113129240A (en) * | 2021-05-19 | 2021-07-16 | 广西师范大学 | Motion blur removing method for industrial packaging characters |
CN113393396A (en) * | 2021-06-18 | 2021-09-14 | 江苏园上园智能科技有限公司 | Image deblurring model, deblurred three-dimensional reconstruction method and system |
CN113538266A (en) * | 2021-07-07 | 2021-10-22 | 南京国电南自电网自动化有限公司 | WGAN-based fuzzy aerial image processing method |
CN113643215B (en) * | 2021-10-12 | 2022-01-28 | 北京万里红科技有限公司 | Method for generating image deblurring model and iris image deblurring method |
CN113643215A (en) * | 2021-10-12 | 2021-11-12 | 北京万里红科技有限公司 | Method for generating image deblurring model and iris image deblurring method |
CN114359082A (en) * | 2021-12-24 | 2022-04-15 | 复旦大学 | Gastroscope image deblurring algorithm based on self-built data pair |
CN114359082B (en) * | 2021-12-24 | 2023-01-06 | 复旦大学 | Gastroscope image deblurring algorithm based on self-built data pair |
CN114596219A (en) * | 2022-01-27 | 2022-06-07 | 太原理工大学 | Image motion blur removing method based on condition generation countermeasure network |
CN114596219B (en) * | 2022-01-27 | 2024-04-26 | 太原理工大学 | Image motion blur removing method based on condition generation countermeasure network |
CN114820342A (en) * | 2022-03-17 | 2022-07-29 | 西北工业大学 | Video deblurring method based on dynamic neural network |
CN114820342B (en) * | 2022-03-17 | 2024-02-27 | 西北工业大学 | Video deblurring method based on dynamic neural network |
CN117196985A (en) * | 2023-09-12 | 2023-12-08 | 军事科学院军事医学研究院军事兽医研究所 | Visual rain and fog removing method based on deep reinforcement learning |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112435187A (en) | Single-image blind motion blur removing method for generating countermeasure network based on aggregation residual | |
CN111199522B (en) | Single-image blind motion deblurring method based on a multi-scale-residual generative adversarial network | |
Claus et al. | ViDeNN: Deep blind video denoising | |
CN109360156B (en) | Single image rain removing method based on image block generation countermeasure network | |
CN112507997B (en) | Face super-resolution system based on multi-scale convolution and receptive field feature fusion | |
CN110599401A (en) | Remote sensing image super-resolution reconstruction method, processing device and readable storage medium | |
CN111861894B (en) | Image motion blur removing method based on generation type countermeasure network | |
CN110766632A (en) | Image denoising method based on channel attention mechanism and characteristic pyramid | |
CN112164011A (en) | Motion image deblurring method based on self-adaptive residual error and recursive cross attention | |
CN108830829B (en) | Non-reference quality evaluation algorithm combining multiple edge detection operators | |
CN111369548A (en) | No-reference video quality evaluation method and device based on generation countermeasure network | |
CN115345791A (en) | Infrared image deblurring algorithm based on attention mechanism residual error network model | |
CN116309178A (en) | Visible light image denoising method based on self-adaptive attention mechanism network | |
Chen et al. | Image denoising via deep network based on edge enhancement | |
CN113724134A (en) | Aerial image blind super-resolution reconstruction method based on residual distillation network | |
CN117173232A (en) | Depth image acquisition method, device and equipment | |
CN116977200A (en) | Processing method and device of video denoising model, computer equipment and storage medium | |
Bull et al. | Perceptual improvements for super-resolution of satellite imagery | |
CN113362338B (en) | Rail segmentation method, device, computer equipment and rail segmentation processing system | |
CN115511733A (en) | Image degradation modeling method, neural network training method and device | |
CN114820316A (en) | Video image super-resolution recovery system based on deep learning | |
CN115273089A (en) | Optical character restoration method based on condition generation countermeasure network | |
CN114841873A (en) | Image motion blur blind removal method based on generation countermeasure network | |
CN114118199A (en) | Image classification method and system for fault diagnosis of intelligent pump cavity endoscope | |
CN115311149A (en) | Image denoising method, model, computer-readable storage medium and terminal device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||