CN114187191A - Image deblurring method based on high-frequency-low-frequency information fusion - Google Patents
Image deblurring method based on high-frequency-low-frequency information fusion
- Publication number
- CN114187191A (application CN202111380902.9A)
- Authority
- CN
- China
- Prior art keywords
- frequency information
- network
- low
- image
- frequency
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/73—Deblurring; Sharpening
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
Abstract
The invention provides an image deblurring method based on high-frequency-low-frequency information fusion. First, the public GOPRO blur dataset is preprocessed to serve as the training set. A high-frequency-low-frequency information fusion network is then constructed, comprising a low-frequency information network, a high-frequency information network, and a fusion module: the low-frequency network reconstructs the low-frequency background and contour information of the blurred image, the high-frequency network reconstructs its high-frequency edge information, and the fusion module merges the encoded high-frequency and low-frequency features. The network is then trained, and the trained network maps a blurred image to the corresponding sharp image. By building an end-to-end deep neural network that fuses high- and low-frequency information together with edge information, the method deblurs effectively while keeping the model parameter count small and the computation fast.
Description
Technical Field
The invention belongs to the technical field of computer vision and image restoration, and particularly relates to an image deblurring method based on high-frequency and low-frequency information fusion.
Background
Image blur falls mainly into three categories: motion blur, defocus blur, and Gaussian blur. Motion blur is caused chiefly by relative movement between the camera and the subject during exposure and is the most common case; defocus blur arises when the focal distance is set incorrectly, and photographers sometimes use it deliberately for aesthetic effect; Gaussian blur is a common image-preprocessing operation and rarely occurs naturally. Image deblurring work targets motion blur above all. Early methods were built on the physics of blur formation, recovering a sharp image by estimating a blur kernel and then deconvolving. Because this is an ill-posed problem with a large solution space, later methods constrain the solution space with various forms of prior information. With the development of deep learning, more methods train neural network models to deblur end to end. For example, the paper "Deep Multi-scale Convolutional Neural Network for Dynamic Scene Deblurring" proposes a multi-scale network that extracts features from the blurred image at three different scales; the paper "Scale-recurrent Network for Deep Image Deblurring" builds on the multi-scale idea with a scale-recurrent structure, introducing an RNN to connect information across scales. Although these methods achieve a degree of deblurring, they do not fully exploit the nature of blur (blur stems from the loss of the image's high-frequency information), which limits their deblurring performance.
Deep-learning-based deblurring methods achieve good results, but they typically obtain their restoration capability from networks with many convolutional layers and channels, so the models have large parameter counts, training and testing take a long time, and they are unsuitable for real-time processing. In addition, many models make poor use of the structural information of the image itself, so their deblurring quality remains unsatisfactory.
Disclosure of Invention
To overcome the shortcomings of the prior art, the invention provides an image deblurring method based on high-frequency-low-frequency information fusion. First, the public GOPRO blur dataset is preprocessed to serve as the training set. A high-frequency-low-frequency information fusion network is then constructed, comprising a low-frequency information network, a high-frequency information network, and a fusion module: the low-frequency network reconstructs the low-frequency background and contour information of the blurred image, the high-frequency network reconstructs its high-frequency edge information, and the fusion module merges the encoded high-frequency and low-frequency features. The network is then trained, and the trained network maps a blurred image to the corresponding sharp image. By building an end-to-end deep neural network that fuses high- and low-frequency information together with edge information, the method deblurs effectively while keeping the model parameter count small and the computation fast.
An image deblurring method based on high-frequency-low-frequency information fusion is characterized by comprising the following steps:
the low-frequency information network adopts an encoder-decoder structure: the encoder's convolution layers repeatedly downsample and encode the input features, and the decoder's deconvolution layers repeatedly upsample and decode the encoded features; each convolution and deconvolution layer is followed by three multi-scale dilated convolution blocks and a ReLU activation layer, wherein each multi-scale dilated convolution block consists of three dilated convolution layers: two layers with dilation rates 2 and 3 perform multi-scale feature extraction, and a third layer with dilation rate 1 fuses the extracted features, the fused features being the block's output;
the high-frequency information network also adopts an encoder-decoder structure, in which each convolution and deconvolution layer is followed by two residual convolution blocks and a ReLU activation layer; each residual convolution block consists of two convolution layers, and the input features are passed through the two convolution layers and then added back to the input, giving the block's output; the channel count of every convolution layer in the high-frequency network is half that of the low-frequency network;
the fusion module concatenates the encoded features of the high-frequency and low-frequency networks along the channel dimension, fuses the concatenated features with a convolution layer, and feeds the fused features into the decoder of the low-frequency network, which outputs the final sharp image;
the loss function of the network is calculated as:
L = -(λ1·SSIM(x_gt, x_l) + λ2·SSIM(x_edge, x_h))   (1)
where L is the total network loss; λ1 = 1 is the loss weight of the low-frequency network and λ2 = 0.5 that of the high-frequency network; x_gt is the sharp image, x_l the output of the low-frequency network, x_edge the edge-information image of the sharp image, and x_h the output of the high-frequency network; SSIM(·,·) denotes the structural similarity of two images;
step 4, image deblurring: input the blurred image into the high-frequency-low-frequency information fusion network trained in step 3; the network's output is the deblurred image.
The beneficial effects of the invention are as follows: by fusing high-frequency and low-frequency information in an end-to-end deep neural network, deblurring can be performed effectively; the designed multi-scale dilated convolution block combines information at multiple scales with few parameters, substantially enlarging the model's receptive field and reconstructing low-frequency image content with pronounced edge information; and the SSIM-based loss function yields a better deblurring result. The constructed model has few parameters and computes quickly.
Drawings
Fig. 1 is a schematic diagram of a high-frequency-low-frequency information fusion network structure according to the present invention.
FIG. 2 is a block diagram of the structure of the multi-scale dilated convolution block of the present invention;
in the figure, AR=1 denotes a dilated convolution layer with dilation rate 1, AR=2 one with dilation rate 2, and AR=3 one with dilation rate 3; ReLU denotes the nonlinear activation function; the concatenation symbol denotes channel-wise concatenation, and the addition symbol denotes element-wise addition;
FIG. 3 is a comparison image of the results of deblurring using the method of the present invention;
in the figure, (a) is an input blurred image, (b) is an image obtained by the method of the invention, and (c) is a real clear image.
Detailed Description
The present invention will be further described with reference to the following drawings and examples, which include, but are not limited to, the following examples.
The invention provides an image deblurring method based on high-frequency-low-frequency information fusion, which comprises the following specific implementation processes:
1. data set preprocessing
The method uses GOPRO, currently the largest public blur dataset, as the network's training data. The GOPRO dataset contains 3214 blurred-sharp image pairs, split into a training set of 2103 pairs and a test set of 1111 pairs. Since blur arises from the loss of edge information, the invention extracts edge information as guidance: the Sobel operator is applied to each sharp image to obtain a corresponding edge-information image. Each image is then randomly cropped to 256 × 256, followed by random horizontal flipping, random ninety-degree rotation, and normalization.
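The Sobel step above can be sketched as follows. This is a pure-Python toy for illustration only (a real pipeline would use a library such as OpenCV's `cv2.Sobel` on full-size images); the function names are ours, not the patent's:

```python
# Sketch of the edge-extraction step: convolve with the two 3x3 Sobel
# kernels and take the gradient magnitude at each pixel.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def conv3x3(img, k):
    """'Valid' 3x3 convolution (really cross-correlation, as in most libraries)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * (w - 2) for _ in range(h - 2)]
    for i in range(h - 2):
        for j in range(w - 2):
            out[i][j] = sum(img[i + di][j + dj] * k[di][dj]
                            for di in range(3) for dj in range(3))
    return out

def sobel_edges(img):
    gx, gy = conv3x3(img, SOBEL_X), conv3x3(img, SOBEL_Y)
    return [[(gx[i][j] ** 2 + gy[i][j] ** 2) ** 0.5
             for j in range(len(gx[0]))]
            for i in range(len(gx))]

# A 5x5 image with a vertical step edge: strong response at the step,
# zero response in the flat region.
step = [[0, 0, 1, 1, 1]] * 5
edges = sobel_edges(step)  # 3x3 output, each row [4.0, 4.0, 0.0]
```

The edge-information images produced this way serve only as training targets for the high-frequency network; at inference time no Sobel pass is needed.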
2. Constructing a high-frequency-low-frequency information fusion network
In order to fully utilize the high-frequency and low-frequency information of the image for deblurring, the invention constructs a high-frequency-low-frequency information fusion network comprising a low-frequency information network, a high-frequency information network, and a fusion module, as shown in FIG. 1. A blurred image is input to the network, which outputs the corresponding sharp image.
(1) Low frequency information network
The low-frequency information network takes an encoder-decoder as its basic framework: the encoder repeatedly downsamples the input features with convolution layers for encoding, and the decoder repeatedly upsamples the encoded features with deconvolution layers for decoding. The network additionally incorporates multi-scale dilated convolution blocks, which enlarge the receptive field while restoring the image's low-frequency information well. Specifically, the network consists of convolution layers, multi-scale dilated convolution blocks, and deconvolution layers, with three multi-scale dilated convolution blocks and a ReLU activation layer after each convolution and deconvolution layer. The blurred image is input to the low-frequency network, which outputs a corresponding sharp image.
Although a conventional dilated convolution layer enlarges the receptive field while keeping the feature-map size unchanged, it produces a pronounced gridding artifact, which makes it hard to use in low-level vision tasks such as image deblurring. As shown in FIG. 2, the invention therefore designs a multi-scale dilated convolution block composed of three dilated convolution layers: two layers with dilation rates 2 and 3 first perform multi-scale feature extraction, and a third layer with dilation rate 1 then fuses the branches and reduces the channel count of the features back to that of the block's input.
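A minimal single-channel sketch of this block is given below. It is illustrative only: the kernels are placeholders for learned weights, channel concatenation is emulated by giving the fusion stage one kernel per branch and summing, and a real implementation would use a framework convolution (e.g. `torch.nn.Conv2d` with its `dilation` argument):

```python
def dilated_conv3x3(img, k, rate):
    """3x3 dilated conv, zero-padded to 'same' size; taps sit at offsets
    {-rate, 0, +rate}, so rate 1 is an ordinary 3x3 convolution."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            s = 0.0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ii, jj = i + di * rate, j + dj * rate
                    if 0 <= ii < h and 0 <= jj < w:
                        s += img[ii][jj] * k[di + 1][dj + 1]
            out[i][j] = s
    return out

def ms_dilated_block(x, k2, k3, kf2, kf3):
    """Multi-scale dilated block: parallel rate-2 and rate-3 branches,
    then a rate-1 'fusion' convolution merging the two branches."""
    b2 = dilated_conv3x3(x, k2, 2)   # wider context, rate 2
    b3 = dilated_conv3x3(x, k3, 3)   # widest context, rate 3
    f2 = dilated_conv3x3(b2, kf2, 1)
    f3 = dilated_conv3x3(b3, kf3, 1)
    return [[f2[i][j] + f3[i][j] for j in range(len(x[0]))]
            for i in range(len(x))]

# Sanity check with identity kernels: each branch passes x through,
# so the block output is 2x and the spatial size is preserved.
ID = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
x = [[0.0] * 9 for _ in range(9)]
x[4][4] = 1.0
y = ms_dilated_block(x, ID, ID, ID, ID)
```

With non-trivial kernels, the rate-2 and rate-3 taps reach ±2 and ±3 pixels, which is how the block grows the receptive field without extra layers.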
Table 1 shows an example of the structural parameters of the low-frequency information network, which comprises 3 convolution layers, 15 multi-scale dilated convolution blocks, and 3 deconvolution layers.
TABLE 1
(2) High frequency information network
The high-frequency information network likewise takes an encoder-decoder as its basic framework and uses residual blocks to recover the image's edge information. It consists of convolution layers, residual convolution blocks, and deconvolution layers, with two residual convolution blocks and a ReLU activation layer after each convolution and deconvolution layer. A residual convolution block consists of two convolution layers; the input features pass through both layers and are then added back to the input, giving the block's output. Fed a blurred image, the high-frequency network outputs a sharp edge-information image.
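The residual block described above can be sketched in the same single-channel toy style (placeholder kernels standing in for learned weights; a framework implementation would operate on multi-channel tensors):

```python
def conv3x3_same(img, k):
    """Ordinary 3x3 convolution, zero-padded so output size equals input size."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            out[i][j] = sum(img[i + di][j + dj] * k[di + 1][dj + 1]
                            for di in (-1, 0, 1) for dj in (-1, 0, 1)
                            if 0 <= i + di < h and 0 <= j + dj < w)
    return out

def residual_block(x, k1, k2):
    """Two convolutions, then add the input back (the skip connection)."""
    y = conv3x3_same(conv3x3_same(x, k1), k2)
    return [[y[i][j] + x[i][j] for j in range(len(x[0]))]
            for i in range(len(x))]

# With identity kernels the conv path reproduces x, so the output is 2x:
ID = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
out = residual_block([[1.0, 2.0], [3.0, 4.0]], ID, ID)
```

The skip connection is what lets the block focus on residual (high-frequency) detail: the network only needs to learn the correction on top of its input.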
The overall structure of the high-frequency network parallels that of the low-frequency network, except that each convolution layer is followed by two residual blocks rather than three dilated blocks, and every convolution layer has half as many channels as its low-frequency counterpart.
(3) Fusion module
The fusion module concatenates the encoded features of the high-frequency and low-frequency networks along the channel dimension (the encoded features being the output of each network's encoder for the input blurred image), fuses the concatenated features with a convolution layer, and feeds the fused features into the decoder of the low-frequency network, which outputs the final sharp image.
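The concatenate-then-convolve step can be illustrated with features represented as lists of 2-D channel maps; the fusion convolution here is a 1x1 (per-pixel) channel mix with placeholder weights, an assumption we make for brevity since the patent only says "a convolution layer":

```python
def channel_concat(a, b):
    """Features are lists of 2-D channel maps; concatenation is list joining."""
    return a + b

def conv1x1(feats, weights):
    """1x1 convolution: at every pixel, a linear mix of the input channels.
    weights[c_out][c_in] gives the mixing coefficient."""
    h, w = len(feats[0]), len(feats[0][0])
    return [[[sum(weights[co][ci] * feats[ci][i][j] for ci in range(len(feats)))
              for j in range(w)] for i in range(h)]
            for co in range(len(weights))]

# Toy: one encoded channel from each branch; fuse them into one channel
# by averaging, then (in the real network) hand the result to the decoder.
low_enc  = [[[1.0, 1.0], [1.0, 1.0]]]
high_enc = [[[3.0, 3.0], [3.0, 3.0]]]
fused = conv1x1(channel_concat(low_enc, high_enc), [[0.5, 0.5]])
```

In the actual network both encoders output many channels and the fusion weights are learned, so the module can decide per channel how much edge (high-frequency) evidence to inject into the low-frequency decoder.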
3. Network training
Inputting the preprocessed training data obtained in the step 1 into the high-frequency and low-frequency information fusion network constructed in the step 2, and performing network training by adopting Adam as an optimizer to obtain a trained network.
Most existing methods train the network with an L1 or L2 loss. In contrast, the invention adopts a combined loss based on SSIM, a measure that evaluates the similarity of two images using their overall structural information (structure, luminance, and contrast). Extensive experiments show that an SSIM loss outperforms L1 and L2 losses on the deblurring task. The SSIM formula is:
SSIM(x, y) = ((2·μx·μy + c1)(2·σxy + c2)) / ((μx² + μy² + c1)(σx² + σy² + c2))   (2)
where μx and μy are the means of images x and y, σx² and σy² their variances, σxy their covariance, and c1 and c2 are constants. SSIM lies between 0 and 1; the more the two images differ, the smaller its value.
The loss function of the high-frequency-low-frequency information fusion network is calculated according to the following formula:
L = -(λ1·SSIM(x_gt, x_l) + λ2·SSIM(x_edge, x_h))   (3)
where L is the total network loss; λ1 = 1 is the loss weight of the low-frequency network and λ2 = 0.5 that of the high-frequency network; x_gt is the sharp image, x_l the output of the low-frequency network, x_edge the edge-information image of the sharp image, and x_h the output of the high-frequency network; SSIM(·,·) denotes the structural similarity of two images.
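Equations (2) and (3) can be sketched directly. Note the simplification: practical SSIM implementations (e.g. scikit-image's `structural_similarity`) average a locally windowed version, whereas this toy computes the statistics once over the whole flattened image; the formula itself is the same:

```python
def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-window SSIM over whole flattened images with values in [0, 1];
    Eq. (2) evaluated with global means, variances, and covariance."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

def fusion_loss(x_gt, x_l, x_edge, x_h, lam1=1.0, lam2=0.5):
    """Total loss of Eq. (3): negated weighted sum of the two SSIM terms,
    so minimizing the loss maximizes both similarities."""
    return -(lam1 * ssim_global(x_gt, x_l) + lam2 * ssim_global(x_edge, x_h))

img = [0.1, 0.5, 0.9, 0.3]
# When both network outputs exactly match their targets, each SSIM term
# is 1 and the loss attains its minimum, -(1·1 + 0.5·1) = -1.5.
best = fusion_loss(img, img, img, img)
```

The negative sign is why the loss is bounded below by -(λ1 + λ2) rather than by zero, which is harmless for gradient-based training.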
4. Image deblurring
And (3) inputting the blurred image into the high-frequency and low-frequency information fusion network trained in the step (3), wherein the image output by the network is the image after deblurring processing.
One fifth of the image pairs in the GOPRO test set are extracted as a validation set and used to compare the effectiveness of different deblurring methods: the non-blind deblurring network (DL-MRF) of the paper "Learning a Convolutional Neural Network for Non-Uniform Motion Blur Removal", the deep deblurring network (DeepDeblur) of the paper "Deep Multi-scale Convolutional Neural Network for Dynamic Scene Deblurring", and the generative adversarial deblurring network (DeblurGAN-v2) of the paper "DeblurGAN-v2: Deblurring (Orders-of-Magnitude) Faster and Better". The peak signal-to-noise ratio (PSNR), structural similarity (SSIM), model parameter count, and inference time of each method are reported in Table 2. The proposed method has a very small parameter count and very short inference time while achieving the best PSNR and SSIM, fully demonstrating its effectiveness and superiority.
TABLE 2
Method | SSIM | PSNR | Model parameters | Inference time
---|---|---|---|---
Non-blind deblurring network | 0.8429 | 24.64 dB | 54.1 MB | 12.1 min
Deep deblurring network | 0.9135 | 29.08 dB | 303.6 MB | 3.1 s
Generative adversarial deblurring network | 0.9340 | 29.55 dB | 15.1 MB | 0.35 s
The method of the invention | 0.9483 | 30.25 dB | 5.89 MB | 0.078 s
Fig. 3 shows a result image deblurred by the method of the present invention; visually, the method deblurs well and recovers edge information distinctly.
Claims (1)
1. An image deblurring method based on high-frequency-low-frequency information fusion is characterized by comprising the following steps:
step 1, dataset preprocessing: for the GOPRO dataset, extract the edge information of each sharp image with the Sobel operator to obtain a corresponding edge-information image; then randomly crop each image to 256 × 256 and apply random horizontal flipping, random ninety-degree rotation, and normalization;
step 2, constructing a high-frequency-low-frequency information fusion network: the network comprises a low-frequency information network, a high-frequency information network, and a fusion module, wherein the low-frequency network reconstructs the low-frequency background and contour information of the blurred image, the high-frequency network reconstructs its high-frequency edge information, and the fusion module fuses the encoded high-frequency and low-frequency information;
the low-frequency information network adopts an encoder-decoder structure: the encoder's convolution layers repeatedly downsample and encode the input features, and the decoder's deconvolution layers repeatedly upsample and decode the encoded features; each convolution and deconvolution layer is followed by three multi-scale dilated convolution blocks and a ReLU activation layer, wherein each multi-scale dilated convolution block consists of three dilated convolution layers: two layers with dilation rates 2 and 3 perform multi-scale feature extraction, and a third layer with dilation rate 1 fuses the extracted features, the fused features being the block's output;
the high-frequency information network also adopts an encoder-decoder structure, in which each convolution and deconvolution layer is followed by two residual convolution blocks and a ReLU activation layer; each residual convolution block consists of two convolution layers, and the input features are passed through the two convolution layers and then added back to the input, giving the block's output; the channel count of every convolution layer in the high-frequency network is half that of the low-frequency network;
the fusion module concatenates the encoded features of the high-frequency and low-frequency networks along the channel dimension, fuses the concatenated features with a convolution layer, and feeds the fused features into the decoder of the low-frequency network, which outputs the final sharp image;
step 3, network training: inputting the preprocessed training data obtained in the step 1 into the high-frequency and low-frequency information fusion network constructed in the step 2, and performing network training by adopting Adam as an optimizer to obtain a trained network;
the loss function of the network is calculated as:
L = -(λ1·SSIM(x_gt, x_l) + λ2·SSIM(x_edge, x_h))   (1)
where L is the total network loss; λ1 = 1 is the loss weight of the low-frequency network and λ2 = 0.5 that of the high-frequency network; x_gt is the sharp image, x_l the output of the low-frequency network, x_edge the edge-information image of the sharp image, and x_h the output of the high-frequency network; SSIM(·,·) denotes the structural similarity of two images;
step 4, image deblurring: input the blurred image into the high-frequency-low-frequency information fusion network trained in step 3; the network's output is the deblurred image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111380902.9A CN114187191B (en) | 2021-11-20 | 2021-11-20 | Image deblurring method based on high-frequency-low-frequency information fusion |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111380902.9A CN114187191B (en) | 2021-11-20 | 2021-11-20 | Image deblurring method based on high-frequency-low-frequency information fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114187191A true CN114187191A (en) | 2022-03-15 |
CN114187191B CN114187191B (en) | 2024-02-27 |
Family
ID=80602246
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111380902.9A Active CN114187191B (en) | 2021-11-20 | 2021-11-20 | Image deblurring method based on high-frequency-low-frequency information fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114187191B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114723630A (en) * | 2022-03-31 | 2022-07-08 | 福州大学 | Image deblurring method and system based on cavity double-residual multi-scale depth network |
CN115953333A (en) * | 2023-03-15 | 2023-04-11 | 杭州魔点科技有限公司 | Dynamic backlight compensation method and system |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20190114340A (en) * | 2018-03-29 | 2019-10-10 | 한국과학기술원 | Image deblurring network processing methods and systems |
CN111028177A (en) * | 2019-12-12 | 2020-04-17 | 武汉大学 | Edge-based deep learning image motion blur removing method |
CN111539884A (en) * | 2020-04-21 | 2020-08-14 | 温州大学 | Neural network video deblurring method based on multi-attention machine mechanism fusion |
- 2021-11-20: application CN202111380902.9A filed; granted as CN114187191B, active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20190114340A (en) * | 2018-03-29 | 2019-10-10 | 한국과학기술원 | Image deblurring network processing methods and systems |
CN111028177A (en) * | 2019-12-12 | 2020-04-17 | 武汉大学 | Edge-based deep learning image motion blur removing method |
CN111539884A (en) * | 2020-04-21 | 2020-08-14 | 温州大学 | Neural network video deblurring method based on multi-attention machine mechanism fusion |
Non-Patent Citations (1)
Title |
---|
GUO Yecai; ZHU Wenjun: "Motion blur removal algorithm based on deep convolutional neural network", Journal of Nanjing University of Science and Technology, no. 03, 30 June 2020 (2020-06-30) *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114723630A (en) * | 2022-03-31 | 2022-07-08 | 福州大学 | Image deblurring method and system based on cavity double-residual multi-scale depth network |
CN115953333A (en) * | 2023-03-15 | 2023-04-11 | 杭州魔点科技有限公司 | Dynamic backlight compensation method and system |
Also Published As
Publication number | Publication date |
---|---|
CN114187191B (en) | 2024-02-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108520503B (en) | Face defect image restoration method based on self-encoder and generation countermeasure network | |
CN111199522B (en) | Single-image blind removal motion blurring method for generating countermeasure network based on multi-scale residual error | |
Li et al. | Single image dehazing via conditional generative adversarial network | |
CN108596841B (en) | Method for realizing image super-resolution and deblurring in parallel | |
CN114187191B (en) | Image deblurring method based on high-frequency-low-frequency information fusion | |
CN111179187B (en) | Single image rain removing method based on cyclic generation countermeasure network | |
CN110675329B (en) | Image deblurring method based on visual semantic guidance | |
CN109949222A (en) | Image super-resolution rebuilding method based on grapheme | |
Luo et al. | Lattice network for lightweight image restoration | |
CN113191969A (en) | Unsupervised image rain removing method based on attention confrontation generation network | |
CN112241939B (en) | Multi-scale and non-local-based light rain removal method | |
CN113837959A (en) | Image denoising model training method, image denoising method and image denoising system | |
CN113538258B (en) | Mask-based image deblurring model and method | |
CN116977651B (en) | Image denoising method based on double-branch and multi-scale feature extraction | |
CN116703750A (en) | Image defogging method and system based on edge attention and multi-order differential loss | |
CN115272131B (en) | Image mole pattern removing system and method based on self-adaptive multispectral coding | |
CN111047537A (en) | System for recovering details in image denoising | |
CN113129237B (en) | Depth image deblurring method based on multi-scale fusion coding network | |
CN113379641B (en) | Single image rain removing method and system based on self-coding convolutional neural network | |
CN114998142A (en) | Motion deblurring method based on dense feature multi-supervision constraint | |
CN115272113A (en) | Image deblurring method based on multi-scale frequency separation network | |
Li et al. | Image Super‐Resolution Using Lightweight Multiscale Residual Dense Network | |
CN113888417A (en) | Human face image restoration method based on semantic analysis generation guidance | |
Wen et al. | Overview of traditional denoising and deep learning-based denoising | |
Feng et al. | Coal mine image dust and fog clearing algorithm based on deep learning network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||