CN111681188A - Image deblurring method based on combination of image pixel prior and image gradient prior - Google Patents

Image deblurring method based on combination of image pixel prior and image gradient prior Download PDF

Info

Publication number
CN111681188A
CN111681188A
Authority
CN
China
Prior art keywords
image
loss function
pixel
label
gradient
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010543543.3A
Other languages
Chinese (zh)
Other versions
CN111681188B (en)
Inventor
祁清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qinghai Nationalities University
Original Assignee
Qinghai Nationalities University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qinghai Nationalities University filed Critical Qinghai Nationalities University
Priority to CN202010543543.3A priority Critical patent/CN111681188B/en
Publication of CN111681188A publication Critical patent/CN111681188A/en
Application granted granted Critical
Publication of CN111681188B publication Critical patent/CN111681188B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G06T5/73
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Abstract

The invention discloses an image deblurring method based on the combination of an image pixel prior and an image gradient prior, which comprises the following steps: preparing data; constructing a generative adversarial network model whose generator comprises two subnets, denoted DNet and GNet, where DNet recovers the content of the image in the image pixel domain and GNet recovers the gradient of the image in the image gradient domain; and setting target loss functions. The target loss function constraining the DNet subnet includes an image content target loss function L_content and an image pixel-level reconstruction target loss function L_pixel; the target loss function of the GNet subnet consists of L_GradientNet, whose role is to narrow the difference between the gradient intensity map of the label image and the generated gradient intensity map. The target loss function of the discriminator is L_adv; L_adv judges the authenticity of the generated image against the label image and drives the generator's output toward the label image.

Description

Image deblurring method based on combination of image pixel prior and image gradient prior
Technical Field
The invention belongs to the technical field of image processing and computer vision, and particularly relates to an image deblurring method based on combination of image pixel prior and image gradient prior.
Background
As the carrier for recording and transmitting information about the external objective world, images have always been a primary source and means by which humans acquire and distinguish such information. However, image blur caused by camera shake or object motion frequently occurs when photographs are taken. A blurred image loses its sharp edges and rich texture information, making it difficult to obtain clear content and fine detail from it. Therefore, how to sharpen motion-blurred images so that they can better serve high-level image processing tasks (image detection, image recognition) and related fields has become a research hotspot.
For the image deblurring problem, prior-art methods can be summarized in two categories: image deblurring methods based on traditional image priors and image deblurring methods based on deep learning. Traditional methods rely on manually extracting priors or statistical information from the image, modeling an optimization equation on that basis, and obtaining the restored image by iteratively solving the optimization equation. Since traditional methods extract priors from only a limited set of images, they obtain good deblurring results only on specific blurred images and generalize poorly to others. In addition, iteratively solving the optimization function consumes a great deal of time, so these methods cannot satisfy real-time requirements well. Deep-learning-based image deblurring recovers the latent label image by extracting features from large datasets and, through continual iteration during network model training, selecting weights better suited to image restoration. Although such methods have achieved some success, the recovered images are still not fully satisfactory. For example, some deep-learning-based methods suffer from excessive network parameters and oversized network models, which places higher hardware demands on network training; other methods are suitable only for synthetically blurred images and show weak generalization and robustness on real blurred images.
Disclosure of Invention
The invention aims to overcome the defects of existing image deblurring techniques by studying an image deblurring method that jointly exploits an image gradient prior and a generative adversarial model, effectively improving deblurring performance and recovering the details and structure of the image.
The purpose of the invention is realized by the following technical scheme:
an image deblurring method based on combination of image pixel prior and image gradient prior comprises the following steps:
(1) preparing experimental data, specifically including blurred images and label images;
(2) setting the network structure framework of the generator and the discriminator; the generator consists of two independent subnets, DNet and GNet, each of which adopts a U-shaped network structure comprising an encoder and a decoder; the blurred image is input into the generator, where the encoder downsamples and extracts detail features from the blurred image for encoding, and the decoder upsamples to obtain the generated image (the image learned by the generator during network training); DNet restores the content of the blurred image at the level of the image pixel domain, and GNet restores the gradient of the blurred image at the level of the image gradient domain;
the discriminator adopts PatchGAN as its network structure, specifically comprising a flat convolutional layer, three downsampling convolutional layers, and a convolutional layer activated by a Sigmoid function; the generated image and the label image are input into the discriminator, where the downsampling convolutional layers downsample and encode the local features of the generated image and the label image used to represent the classification response; the Sigmoid-activated convolutional layer produces the final classification response; an Instance Normalization layer and a Leaky ReLU activation function follow each convolutional layer in the discriminator, and the convolution kernel size of each convolutional layer is 4 × 4;
(3) setting the target loss functions of the generative adversarial network model; the target loss function of the generator consists of the target loss functions of the DNet and GNet subnets; the target loss function of the DNet subnet comprises an image content target loss function L_content and an image pixel-level reconstruction target loss function L_pixel, where L_content ensures that the generated image and the label image retain the same semantic information and L_pixel reduces the differences between the generated image and the corresponding label image pixels; the target loss function of the GNet subnet consists of L_GradientNet, whose role is to reduce the difference between the gradient intensity map of the label image and the generated gradient intensity map; the target loss function of the discriminator is L_adv, which judges the authenticity of the generated image against the label image and drives the image learned by the generator toward the label image;
(4) inputting the blurred image and the label image into the generator for image deblurring learning, while the discriminator judges whether the generated image or the label image is real; the discriminator feeds the discriminated real/fake probability back to the generator, driving the generated image toward the label image; the generator updates its network weight parameters accordingly and enters the next training iteration; during training of the generative adversarial network, the generator and the discriminator compete with each other until the network training converges.
Further, in step (3), the target loss function of the generative adversarial network model can be expressed as:
L(G, D) = β·L_content + β·L_pixel + β·L_GradientNet + α·L_adv
where β and α are the weight coefficients of the constraint terms L_content, L_pixel, L_GradientNet, and L_adv, respectively, defined as β = 10, α = 1.
Further, the DNet subnet maps the three-channel blurred image to a feature map of dimension 64 with one flat convolutional layer; then three downsampling layers downsample and encode the blurred image, with three dense blocks added after each downsampling layer, reducing the resolution of the blurred image from 256 × 256 to 64 × 64; correspondingly, the decoder comprises three upsampling layers that upsample and decode to obtain the generated image, with three dense blocks added before each upsampling layer, increasing the resolution of the generated image from 64 × 64 to 256 × 256; finally, the generated image is reconstructed by a Tanh layer and a convolutional layer with a 3 × 3 kernel; the GNet subnet has the same network structure as the DNet subnet.
Further, the generative adversarial network model runs on a desktop workstation configured with one Intel(R) Core(TM) i7 CPU @ 3.60 GHz (16 GB RAM) and one NVIDIA GeForce GTX 1080Ti GPU; the batch size (number of images per training batch) is 2, and the learning rates of G and D are 0.0001; the slope of the Leaky ReLU activation function is 0.2; the network uses the Adam optimizer with momentum parameters β1 = 0.5 and β2 = 0.999.
Compared with the prior art, the technical scheme of the invention has the following beneficial effects:
(1) the invention provides a two-branch network based on the combination of an image pixel prior and an image gradient prior. Compared with traditional methods that manually extract image gradient priors from the image, the method of the invention deeply fuses the image pixel-domain prior and the gradient-domain prior in a data-driven manner through the framework of a generative adversarial network. The gradient-domain features of the image are continually optimized as training proceeds. In addition, during network training the feature maps of the image domain and the gradient domain guide each other, which benefits the recovery of image structure and texture information;
(2) the invention provides a target loss function beneficial to the optimization training of the image gradient branch network. In contrast to image deblurring methods that rely only on an image semantic target loss function and an image pixel target loss function for network optimization training, the method uses a gradient intensity target loss function to constrain the process of learning from the gradient intensity of the blurred image to the gradient intensity of the label image;
(3) compared with existing image deblurring algorithms, the images generated by the method have more salient structure and better visual quality, so the deblurring performance is markedly improved;
drawings
FIG. 1 is a schematic flow diagram of the process of the present invention;
FIG. 2 is a network layout and parameter diagram of a generator;
FIG. 3 is a network layout and parameter diagram of a discriminator;
fig. 4 is a network structure diagram of a Dense Block.
Detailed Description
The invention is described in further detail below with reference to the figures and specific examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The present invention is described in detail below with reference to an algorithm flow diagram.
As shown in fig. 1, the present embodiment provides an image deblurring method based on a combination of image pixel priors and image gradient priors, which includes the following steps:
step 1: network structure for building generator and discriminator
As shown in fig. 2 and fig. 3, the generator learns the image-sharpening process, and the discriminator judges the image produced by the generator and feeds the result back;
step 1.1: building generator G network structure
The invention adopts a U-shaped network as the network structure of the generator, as shown in fig. 2; the generator is composed of two subnets, denoted DNet and GNet, which adopt the same network structure; DNet includes an encoder, for downsampling and extracting detail features from the blurred image for encoding, and a decoder. The encoder maps the three-channel blurred image to a feature map of dimension 64 through a flat convolutional layer; then three downsampling layers downsample the blurred image, with three dense blocks added after each downsampling layer, reducing the resolution of the blurred image from 256 × 256 to 64 × 64; correspondingly, the decoder comprises three upsampling layers that upsample to obtain the generated image, with three dense blocks added before each upsampling layer, increasing the resolution of the generated image from 64 × 64 to 256 × 256; the GNet subnet has the same network structure as the DNet subnet. Note that the gradient intensity map of the three-channel blurred image is input into the GNet subnet; the GNet and DNet subnets perform channel concatenation and feature-map dimension conversion at the upsampling and downsampling layers, and the gradient-domain and pixel-domain features are then fused through the dense blocks; finally, the generated image is reconstructed by a Tanh-activated convolutional layer with a 7 × 7 kernel; in addition, skip connections are established between upsampling and downsampling layers of the same scale, associating low-dimensional macroscopic image features with high-dimensional abstract features, which is very important for learning the structure and details of the image;
step 1.2: construction of discriminator D network structure
In order to distinguish the real label image from the generated image, the invention adopts PatchGAN as the network structure of the discriminator, as shown in fig. 3. Specifically, a flat convolutional layer, three downsampling convolutional layers, and a Sigmoid-activated convolutional layer map the feature dimension to 64; the generated image and the label image are input into the discriminator, where the downsampling convolutional layers downsample and encode the local features of the generated image and the label image used to represent the classification response, reducing their resolution from 256 × 256 to 32 × 32; the Sigmoid-activated convolutional layer produces the final classification response; an Instance Normalization layer and a Leaky ReLU activation function follow each convolutional layer in the discriminator, and the convolution kernel size of each convolutional layer is 4 × 4;
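As a sanity check on the resolutions stated above, the following sketch assumes each downsampling convolution uses stride 2 (an assumption consistent with, but not stated in, the text; the flat convolution preserves resolution), which reproduces the 256 × 256 → 32 × 32 reduction:

```python
def patchgan_output_size(input_size=256, n_downsampling=3, stride=2):
    """Spatial size of the PatchGAN response map: the flat convolution
    keeps the resolution, and each stride-2 downsampling convolution
    halves it (256 -> 128 -> 64 -> 32)."""
    size = input_size
    for _ in range(n_downsampling):
        size //= stride
    return size
```

With the defaults this returns 32, matching the 32 × 32 classification response described in the text.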
the generator expects the generated image to fool the discriminator so that the discriminator cannot tell whether it is real or fake; the discriminator discriminates between the generated image and the label image and feeds the discrimination result back to the generator, driving the generator to convert the blurred image into a clear image close to the label image; the generator updates its network parameters according to the discriminator's feedback and enters the next training iteration; the generator and the discriminator continually compete in this manner and finally reach a dynamic equilibrium, so that the generator network reconstructs a generated image with salient structure;
step 2: constructing the target loss function of the generative adversarial network model
Since the generator consists of two subnets, DNet and GNet, the target loss function of the generator consists of two parts, for DNet and GNet; the target loss function of the DNet subnet comprises an image content target loss function L_content and an image pixel-level reconstruction target loss function L_pixel, where L_content ensures that the generated image and the label image retain the same semantic information and L_pixel reduces the differences between the generated image and the corresponding label image pixels; the target loss function of the GNet subnet consists of L_GradientNet, whose role is to narrow the difference between the gradient intensity map of the label image and the generated gradient intensity map; the target loss function of the discriminator is L_adv, which judges the authenticity of the generated image against the label image and drives the generated image toward the label image; the target loss function of the generative adversarial network model can be expressed in weighted form as:
L(G, D) = β·L_content + β·L_pixel + β·L_GradientNet + α·L_adv
where β and α are the weight coefficients of the constraint terms L_content, L_pixel, L_GradientNet, and L_adv, respectively, defined as β = 10, α = 1; the values of the weight coefficients were determined by experiment and experience;
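As a minimal sketch, the weighted combination above can be written in plain Python (the function name and scalar-loss interface are illustrative, not from the patent; in training these would be tensor-valued losses):

```python
def total_generator_loss(l_content, l_pixel, l_gradientnet, l_adv,
                         beta=10.0, alpha=1.0):
    """Weighted objective L(G, D) = beta*L_content + beta*L_pixel
    + beta*L_GradientNet + alpha*L_adv, with beta = 10 and alpha = 1
    as stated in the text."""
    return beta * (l_content + l_pixel + l_gradientnet) + alpha * l_adv
```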
step 2.1: image content target loss function L_content
The image deblurring method needs to ensure that the generated image and the label image retain the same perceptual information. In this implementation, a VGG19 model pre-trained on the ImageNet dataset is used to extract high-order features of the generated image and the label image, and the two-norm L2 is used to reduce the gap between them. The image content target loss function is expressed as follows:
L_content = (1/(H·W)) · ‖φ_{i,j}(s_i) − φ_{i,j}(G(b_i))‖₂²
where H and W denote the height and width of the input image respectively, φ_{i,j} denotes the features extracted from the pre-trained VGG19 model after activation of the jth convolutional layer before the ith pooling layer, G denotes the generator network and all of its parameters, s_i denotes the label image, and G(b_i) denotes the generated image; φ_{i,j}(s_i) denotes the semantic content of the label image, and φ_{i,j}(G(b_i)) denotes the semantic content of the generated image. In this implementation, the 'Conv3-2' layer of the pre-trained VGG19 model is used to extract the perceptual features of the image;
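A toy stand-in for this loss, operating on small 2-D lists in place of real VGG19 'Conv3-2' feature tensors (the function name and list-based interface are illustrative, not the patent's implementation):

```python
def content_loss(feat_label, feat_gen):
    """Mean squared (L2) difference between two equal-sized 2-D
    feature maps, mirroring the normalization by H*W in L_content."""
    h, w = len(feat_label), len(feat_label[0])
    total = sum((feat_label[y][x] - feat_gen[y][x]) ** 2
                for y in range(h) for x in range(w))
    return total / (h * w)
```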
step 2.2: image pixel-level reconstruction target loss function L_pixel
L_pixel reduces the average pixel difference between the generated image and the label image. Earlier methods optimize it with the two-norm L2, but the results are overly smooth and fail to preserve sharp visual detail. Nevertheless, such a target loss function is still widely used to accelerate convergence and improve image deblurring performance. This implementation therefore uses the one-norm L1 to constrain the image pixel-level reconstruction process. The L_pixel target constraint function is expressed as follows:
L_pixel = (1/(H·W)) · ‖s_i − G(b_i)‖₁
where H and W denote the height and width of the input image respectively, s_i denotes the label image, and G(b_i) denotes the generated image;
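A corresponding toy sketch of the L1 pixel loss on 2-D lists (illustrative only; real training would use tensor operations):

```python
def pixel_loss(label, generated):
    """Mean absolute (L1) pixel difference between two equal-sized
    2-D images, mirroring the normalization by H*W in L_pixel."""
    h, w = len(label), len(label[0])
    total = sum(abs(label[y][x] - generated[y][x])
                for y in range(h) for x in range(w))
    return total / (h * w)
```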
step 2.3: target loss function L_GradientNet for constraining the image gradient intensity
In order to better exploit the gradient-domain prior of the image in a data-driven manner, the invention provides the GNet subnet; through network optimization training, the L_GradientNet target constraint function drives the gradient intensity map of the blurred image toward the gradient intensity map of the label image. This implementation uses the one-norm L1 to reduce the difference between the gradient intensity map of the label image and the generated gradient intensity map. The L_GradientNet target constraint function is expressed as follows:
L_GradientNet = (1/(H·W)) · ‖Gra(s_i) − G(Gra(b_i))‖₁
where H and W denote the height and width of the input image respectively, Gra(s_i) denotes the gradient intensity map of the label image, and G(Gra(b_i)) denotes the generated gradient intensity map;
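A minimal sketch of computing a gradient intensity map, assuming simple forward differences (the patent does not specify which gradient operator Gra(·) uses, so this operator is an assumption):

```python
import math

def gradient_intensity(img):
    """Gradient intensity map sqrt(gx^2 + gy^2) per pixel, using
    forward differences; out-of-range neighbours contribute zero."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = img[y][x + 1] - img[y][x] if x + 1 < w else 0.0
            gy = img[y + 1][x] - img[y][x] if y + 1 < h else 0.0
            out[y][x] = math.hypot(gx, gy)
    return out
```

A vertical edge (column of 0s next to a column of 1s) yields intensity 1.0 along the edge and 0.0 elsewhere.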
step 2.4: constructing the target loss function L_adv of the discriminator
In the method of the invention, the discriminator D is trained to distinguish, as far as possible, the generated image G(b_i) from the label image s_i, and to feed the discrimination result back to the generator in the form of a probability value, so that the generator drives the generated image toward the label image. In addition, the invention adopts an optimization framework for the discriminator based on the Earth Mover (Wasserstein) distance with a gradient penalty term (WGAN-GP). The target loss function of the network discriminator is expressed as follows:

L_adv = E_{s_i}[D(s_i)] − E_{b_i}[D(G(b_i))] − λ·E_{x̂}[(‖∇_{x̂} D(x̂)‖₂ − 1)²]

where the term E_{s_i}[D(s_i)] is the expected value of the discriminator judging the label image s_i as real, the term E_{b_i}[D(G(b_i))] is the expected value of the discriminator judging the generated image G(b_i) as fake, (‖∇_{x̂} D(x̂)‖₂ − 1)² is the gradient penalty term, λ is its coefficient, and x̂ denotes samples drawn uniformly at random along the line connecting the label image s_i and the generated data G(b_i);
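The interpolated sample x̂ used by the gradient penalty can be sketched as follows (flattened 1-D toy data; the helper name is illustrative):

```python
import random

def interpolate_sample(label, fake, eps=None):
    """WGAN-GP interpolate x_hat = eps*s_i + (1 - eps)*G(b_i) with
    eps ~ U(0, 1); the gradient penalty is evaluated at x_hat.
    Images are flattened to 1-D lists here for simplicity."""
    if eps is None:
        eps = random.random()
    return [s * eps + f * (1.0 - eps) for s, f in zip(label, fake)]
```

With eps = 1 the sample equals the label image, and with eps = 0 it equals the generated image; intermediate eps values sample the connecting line, as described above.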
step 3: training and testing the generative adversarial network model
In this embodiment, the training set of 2013 label-image/blurred-image pairs in the GOPRO dataset is selected as the training set of the invention. During network training, the blurred image b_i and the label image s_i are randomly cropped into image blocks of size 256 × 256. The optimization of the network is constrained by the target loss functions L_content, L_pixel, L_GradientNet, and L_adv. The discriminator must discriminate between the generated image G(b_i) and the label image s_i. The generator and the discriminator compete with each other throughout training until the network training converges; at test time, only a blurred image is input into the generator that has been trained to convergence, and a generated image with salient structure is obtained;
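The paired random crop described above can be sketched as follows (the helper name and 2-D-list interface are illustrative; real pipelines crop tensors, but the key point is that both images share the same crop window):

```python
import random

def random_crop_pair(blur, label, size=256):
    """Crop the same randomly-placed size x size window from a
    blurred image and its label image (2-D lists of equal shape),
    so the pair stays pixel-aligned."""
    h, w = len(blur), len(blur[0])
    top = random.randint(0, h - size)
    left = random.randint(0, w - size)
    crop = lambda im: [row[left:left + size] for row in im[top:top + size]]
    return crop(blur), crop(label)
```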
in this embodiment, the generation countermeasure network model is mounted on a desktop workstation configured as one piece of Intel (R) core (TM) i7CPU (16GBRAM)3.60GHz CPU and 1 piece of NVIDIA GeForce GTX 1080Ti GPU for operation. Where the batch size (number of batch training images) is 2, and the G and D learning rates are 0.0001. The slope of the activation function leak ReLU is 0.2. The network uses Adam optimizers with momentum parameters β 1 ═ 0.5 and β 2 ═ 0.999, respectively.
The present invention is not limited to the above-described embodiments. The foregoing description of the specific embodiments is intended to describe and illustrate the technical solutions of the present invention, and the above specific embodiments are merely illustrative and not restrictive. Those skilled in the art can make many changes and modifications to the invention without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (4)

1. The image deblurring method based on the combination of image pixel prior and image gradient prior is characterized by comprising the following steps of:
(1) preparing experimental data, specifically including blurred images and label images;
(2) setting the network structure framework of the generator and the discriminator; the generator consists of two independent subnets, DNet and GNet, each of which adopts a U-shaped network structure comprising an encoder and a decoder; the blurred image is input into the generator, where the encoder downsamples and extracts detail features from the blurred image for encoding, and the decoder upsamples to obtain the generated image; DNet restores the content of the blurred image at the level of the image pixel domain, and GNet restores the gradient of the blurred image at the level of the image gradient domain;
the discriminator adopts PatchGAN as its network structure, specifically comprising a flat convolutional layer, three downsampling convolutional layers, and a convolutional layer activated by a Sigmoid function; the generated image and the label image are input into the discriminator, where the downsampling convolutional layers downsample and encode the local features of the generated image and the label image used to represent the classification response; the Sigmoid-activated convolutional layer produces the final classification response; an Instance Normalization layer and a Leaky ReLU activation function follow each convolutional layer in the discriminator, and the convolution kernel size of each convolutional layer is 4 × 4;
(3) setting the target loss functions of the generative adversarial network model; the target loss function of the generator consists of the target loss functions of the DNet and GNet subnets; the target loss function of the DNet subnet comprises an image content target loss function L_content and an image pixel-level reconstruction target loss function L_pixel, where L_content ensures that the generated image and the label image retain the same semantic information and L_pixel reduces the differences between the generated image and the corresponding label image pixels; the target loss function of the GNet subnet consists of L_GradientNet, whose role is to narrow the difference between the gradient intensity map of the label image and the generated gradient intensity map; the target loss function of the discriminator is L_adv, which judges the authenticity of the generated image against the label image and drives the image learned by the generator toward the label image;
(4) inputting the blurred image and the label image into the generator for image deblurring learning, while the discriminator judges whether the generated image or the label image is real; the discriminator feeds the discriminated real/fake probability back to the generator, driving the generated image toward the label image; the generator updates its network weight parameters accordingly and enters the next training iteration; during training of the generative adversarial network, the generator and the discriminator compete with each other until the network training converges.
2. The method for deblurring an image based on a combination of image pixel priors and image gradient priors as claimed in claim 1, wherein in the step (3), the objective loss function for generating the confrontation network model can be expressed in a weighted manner as:
L(G, D) = β·L_content + β·L_pixel + β·L_GradientNet + α·L_adv
wherein β and α are the weight coefficients of the constraint terms L_content, L_pixel, L_GradientNet, and L_adv, respectively, defined as β = 10, α = 1.
3. The image deblurring method based on a combination of image pixel priors and image gradient priors according to claim 1, wherein the DNet subnet maps the three-channel blurred image to a feature map of dimension 64 with one flat convolutional layer; then three downsampling layers downsample and encode the blurred image, with three dense blocks added after each downsampling layer, reducing the resolution of the blurred image from 256 × 256 to 64 × 64; correspondingly, the decoder comprises three upsampling layers that upsample and decode to obtain the generated image, with three dense blocks added before each upsampling layer, increasing the resolution of the generated image from 64 × 64 to 256 × 256; finally, the generated image is reconstructed by a Tanh layer and a convolutional layer with a 3 × 3 kernel; the GNet subnet has the same network structure as the DNet subnet.
4. The method of claim 1, wherein the generative adversarial network model runs on a desktop workstation configured with one Intel(R) Core(TM) i7 CPU @ 3.60 GHz (16 GB RAM) and one NVIDIA GeForce GTX 1080Ti GPU; the batch size (number of images per training batch) is 2, and the learning rates of G and D are 0.0001; the slope of the Leaky ReLU activation function is 0.2; the network uses the Adam optimizer with momentum parameters β1 = 0.5 and β2 = 0.999.
CN202010543543.3A 2020-06-15 2020-06-15 Image deblurring method based on combination of image pixel prior and image gradient prior Active CN111681188B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010543543.3A CN111681188B (en) 2020-06-15 2020-06-15 Image deblurring method based on combination of image pixel prior and image gradient prior

Publications (2)

Publication Number Publication Date
CN111681188A true CN111681188A (en) 2020-09-18
CN111681188B CN111681188B (en) 2022-06-17

Family

ID=72435812

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010543543.3A Active CN111681188B (en) 2020-06-15 2020-06-15 Image deblurring method based on combination of image pixel prior and image gradient prior

Country Status (1)

Country Link
CN (1) CN111681188B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9342869B2 (en) * 2014-04-29 2016-05-17 Adobe Systems Incorporated Discriminative indexing for patch-based image enhancement
CN109035149A (en) * 2018-03-13 2018-12-18 杭州电子科技大学 A kind of license plate image based on deep learning goes motion blur method
CN109859111A (en) * 2018-11-20 2019-06-07 昆明理工大学 A kind of blind deblurring method of single image based on MAP method
CN110378844A (en) * 2019-06-14 2019-10-25 杭州电子科技大学 Motion blur method is gone based on the multiple dimensioned Image Blind for generating confrontation network is recycled
CN110610458A (en) * 2019-04-30 2019-12-24 北京联合大学 Method and system for GAN image enhancement interactive processing based on ridge regression

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
QI QING et al.: "Attention Network for Non-Uniform Deblurring", IEEE Access, vol. 8, 25 May 2020 (2020-05-25), pages 100044-100057 *
SUN TAO et al.: "Image Deblurring Model and Algorithm Based on Non-convex Total Variation and Low-rank Hybrid Regularization", Chinese Journal of Computers, vol. 43, no. 4, 8 February 2020 (2020-02-08), pages 643-652 *
CHEN SAIJIAN et al.: "Joint Super-resolution and Deblurring Method for Text Images Based on a Generative Adversarial Network", Journal of Computer Applications, vol. 40, no. 3, 19 September 2019 (2019-09-19), pages 859-864 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112419200A (en) * 2020-12-04 2021-02-26 宁波舜宇仪器有限公司 Image quality optimization method and display method
CN112419200B (en) * 2020-12-04 2024-01-19 宁波舜宇仪器有限公司 Image quality optimization method and display method
CN112597887A (en) * 2020-12-22 2021-04-02 深圳集智数字科技有限公司 Target identification method and device
CN112614072A (en) * 2020-12-29 2021-04-06 北京航空航天大学合肥创新研究院 Image restoration method and device, image restoration equipment and storage medium
CN112614072B (en) * 2020-12-29 2022-05-17 北京航空航天大学合肥创新研究院 Image restoration method and device, image restoration equipment and storage medium
CN112837232A (en) * 2021-01-13 2021-05-25 山东省科学院海洋仪器仪表研究所 Underwater image enhancement and detail recovery method

Also Published As

Publication number Publication date
CN111681188B (en) 2022-06-17

Similar Documents

Publication Publication Date Title
CN111681188B (en) Image deblurring method based on combination of image pixel prior and image gradient prior
Li et al. Single image dehazing via conditional generative adversarial network
CN110211045B (en) Super-resolution face image reconstruction method based on SRGAN network
CN111241958B (en) Video image identification method based on residual error-capsule network
CN110348330B (en) Face pose virtual view generation method based on VAE-ACGAN
US20190087726A1 (en) Hypercomplex deep learning methods, architectures, and apparatus for multimodal small, medium, and large-scale data representation, analysis, and applications
CN111488865B (en) Image optimization method and device, computer storage medium and electronic equipment
CN111489304B (en) Image deblurring method based on attention mechanism
CN111429355A (en) Image super-resolution reconstruction method based on generation countermeasure network
Li et al. Embedding Image Through Generated Intermediate Medium Using Deep Convolutional Generative Adversarial Network.
CN111275637A (en) Non-uniform motion blurred image self-adaptive restoration method based on attention model
CN110473142B (en) Single image super-resolution reconstruction method based on deep learning
CN111932529B (en) Image classification and segmentation method, device and system
CN115457568B (en) Historical document image noise reduction method and system based on generation countermeasure network
CN110599411A (en) Image restoration method and system based on condition generation countermeasure network
Min et al. Blind deblurring via a novel recursive deep CNN improved by wavelet transform
CN115131218A (en) Image processing method, image processing device, computer readable medium and electronic equipment
CN115293966A (en) Face image reconstruction method and device and storage medium
Hongmeng et al. A detection method for deepfake hard compressed videos based on super-resolution reconstruction using CNN
Tunc et al. Age group and gender classification using convolutional neural networks with a fuzzy logic-based filter method for noise reduction
CN115601257A (en) Image deblurring method based on local features and non-local features
CN113129237B (en) Depth image deblurring method based on multi-scale fusion coding network
Mo et al. The image inpainting algorithm used on multi-scale generative adversarial networks and neighbourhood
CN112990215B (en) Image denoising method, device, equipment and storage medium
CN113421212B (en) Medical image enhancement method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant