CN109410146A - A kind of image deblurring algorithm based on Bi-Skip-Net - Google Patents

Publication number: CN109410146A
Application number: CN201811298475.8A
Authority: CN (China)
Prior art keywords: features, feature, skip, shallow, image
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventors: 李革, 张毅伟, 王荣刚, 王文敏, 高文
Assignee (current and original): Peking University Shenzhen Graduate School
Application filed by Peking University Shenzhen Graduate School
Priority applications: CN201811298475.8A; PCT/CN2018/117634 (WO2020087607A1)

Classifications

    • G (Physics); G06 (Computing; calculating or counting)
    • G06T (Image data processing or generation, in general); G06T5/00 (Image enhancement or restoration); G06T5/73 (Deblurring; sharpening)
    • G06N (Computing arrangements based on specific computational models); G06N3/00 (Computing arrangements based on biological models); G06N3/02 (Neural networks); G06N3/04 (Architecture, e.g. interconnection topology); G06N3/045 (Combinations of networks)


Abstract

The invention relates to the field of digital image processing, in particular to an image deblurring method based on Bi-Skip-Net, in which a Bi-Skip-Net network restores blurred images. It aims to solve the problems of existing deep-learning deblurring algorithms: high time complexity, inaccurate texture restoration, and a checkerboard effect in the restored images. The disclosed Bi-Skip-Net network serves as the generator of a GAN (Generative Adversarial Network). Compared with the best existing algorithms, the invention reduces the running time by 0.1 s and improves image-restoration quality by 1 dB on average.

Description

An Image Deblurring Algorithm Based on Bi-Skip-Net

Technical Field

The invention relates to the field of digital image processing, in particular to an image deblurring method based on Bi-Skip-Net, which restores blurred images through the Bi-Skip-Net network.

Technical Background

Deblurring is a widely studied topic in image and video processing. Blur caused by camera shake seriously degrades the imaging quality and visual appearance of an image. As an important branch of image preprocessing, improvements in deblurring directly affect the performance of other computer-vision algorithms, such as foreground segmentation, object detection, and behavior analysis; deblurring also affects image coding performance. Studying a high-performance deblurring algorithm is therefore imperative.

References 1-3 introduce deblurring techniques for image and video processing and deep-learning deblurring algorithms. Reference 1: Kupyn O, Budzan V, Mykhailych M, et al. DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks. arXiv preprint arXiv:1711.07064, 2017. Reference 2: Nah S, Kim T H, Lee K M. Deep Multi-scale Convolutional Neural Network for Dynamic Scene Deblurring. CVPR, 2017, 1(2): 3. Reference 3: Sun J, Cao W, Xu Z, et al. Learning a Convolutional Neural Network for Non-uniform Motion Blur Removal. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015: 769-777.

Generally speaking, image deblurring algorithms divide into traditional algorithms based on probabilistic models and algorithms based on deep learning. Traditional algorithms use a convolution model to explain the cause of blur: camera shake maps to a blur-kernel trajectory, the PSF (Point Spread Function). Restoring a sharp image when the blur kernel is unknown is an ill-posed problem, so the kernel is usually estimated first, and the restored image is then obtained by deconvolution with the estimated kernel. Deep-learning deblurring algorithms instead use a deep network to capture the latent information of the image and thereby restore it; they may implement the two operations of blur-kernel estimation and non-blind deconvolution, or use a generative adversarial mechanism to restore the image. This patent aims to address the following shortcomings of deblurring algorithms:

1) high time complexity,

2) inaccurate texture restoration,

3) a checkerboard effect in the restored image.
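As context for the convolution model described above, the following NumPy sketch applies a known PSF to a sharp image (the traditional forward model, with the noise term omitted; the function name is illustrative and not part of the patented method):

```python
import numpy as np

def blur_with_psf(sharp, psf):
    """Traditional forward model: blurred = PSF (*) sharp, 'valid' region,
    noise omitted. `sharp` is a 2-D grayscale image, `psf` the blur-kernel
    trajectory; the kernel is flipped to perform true convolution."""
    kh, kw = psf.shape
    h, w = sharp.shape
    kernel = psf[::-1, ::-1]
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(sharp[i:i + kh, j:j + kw] * kernel)
    return out
```

Blind deblurring must invert this operation without knowing `psf`, which is why traditional pipelines first estimate the kernel and then deconvolve.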

Summary of the Invention

The invention proposes a Bi-Skip-Net network as the generator of a GAN (Generative Adversarial Network), aiming to overcome the shortcomings of existing deep-learning deblurring algorithms. Compared with the best existing algorithms, the invention reduces the running time by 0.1 s and improves image-restoration performance by 1 dB on average.

The technical solution provided by the invention is as follows.

The invention uses a generative adversarial network mechanism to restore blurred images, and designs a Bi-Skip-Net network as its generator. The specific steps are as follows:

Step 1): Input the blurred image and obtain shallow features through a convolution layer with a 7x7 kernel and stride 1.

Step 2): Pass the shallow features through 3 residual blocks to obtain deep features at the current scale.

Step 3): Downsample the deep features in a downsample-plus-residual pattern to obtain the shallow features at the next scale.

Step 4): Repeat steps 2 and 3 for the specified number of downsamplings n, obtaining shallow and deep features at each scale; no deep features are extracted at the smallest scale.

Step 5): Take the shallow features at the smallest scale as the base features.

Step 6): Pass the shallow features of the previous scale through a convolution layer with a 1x1 kernel and stride 1 to obtain reduced shallow features; pass the corresponding deep features through a convolution layer with a 3x3 kernel and stride 2 to obtain reduced deep features, concatenate them with the base features, and upsample; concatenate the upsampled features with the reduced shallow features to obtain the base features at the current scale.

Step 7): Repeat step 6 until the upsampling is complete.

Step 8): Pass the resulting base features through a convolution layer with a 7x7 kernel and stride 1 to obtain residual features.

Step 9): Add the residual features to the input image to obtain the restored image.

The Bi-Skip-Net-plus-residual pattern is adopted as the generator.

In step 4), the specified number of downsamplings n is 5.
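As a sanity check on the steps above, the spatial bookkeeping can be sketched in plain Python (assuming "same" padding for the 7x7 and 1x1 convolutions, stride-2 downsampling that halves each dimension, and stride-2 deconvolution that doubles it; these assumptions are mine, not stated in the patent):

```python
def scale_sizes(h, w, n):
    """Spatial size of the shallow features at each of the n+1 scales,
    assuming each downsampling halves height and width exactly."""
    sizes = [(h, w)]
    for _ in range(n):
        h, w = h // 2, w // 2
        sizes.append((h, w))
    return sizes

def output_size(h, w, n):
    """Size after n halvings (contract path) then n doublings (expand
    path). It matches the input, so the residual features of step 8 can
    be added pixel-wise in step 9, only when h and w are divisible by
    2**n; inputs are therefore typically padded to such a multiple."""
    for _ in range(n):
        h, w = h // 2, w // 2
    for _ in range(n):
        h, w = h * 2, w * 2
    return h, w
```

With the patent's n = 5, a 256x256 input passes through scales 256, 128, 64, 32, 16, and 8, and the expand path returns it to 256x256.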

The blurred image passes through the generator to produce a restored image. The discriminator's task is to distinguish the restored image from the sharp image as well as possible, while the generator's task is to fool the discriminator and degrade its ability to tell the two apart.

The Bi-Skip-Net network consists of three parts: the contract path (D), the skip path (S), and the expand path (U). The contract layers downsample to compress features, the skip layers connect deep features with shallow features, and the expand layers upsample. D*, S*, and U* denote the features at the corresponding downsampling scales.

For the feature operations at each sampling scale: in the contract path, the current features pass through 3 residual blocks (3xResBlock) to obtain deep features, and a residual pattern adding pooling and convolution produces the features of the next scale; in the skip path, a 1x1 convolution compresses the shallow features and a 3x3 convolution compresses the deep features; in the expand path, features are joined by concatenation (concat) and upsampled by 3x3 deconvolution.
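The skip-path compression can be illustrated in NumPy: a 1x1 convolution mixes channels only, so it reduces to a matrix product over the channel axis (the channel counts and names below are illustrative, not taken from the patent):

```python
import numpy as np

def conv1x1(x, weight):
    """1x1 convolution: x has shape (C_in, H, W), weight (C_out, C_in).
    Each output pixel is a linear mix of that pixel's input channels."""
    return np.einsum('oc,chw->ohw', weight, x)

def skip_concat(shallow, deep_reduced, w_shallow):
    """Compress shallow features with a 1x1 conv, then join them with the
    already-reduced deep features along the channel axis, as the skip
    path does before feeding the expand path's concat."""
    return np.concatenate([conv1x1(shallow, w_shallow), deep_reduced], axis=0)
```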

The invention has the following technical effects. Because it uses the Bi-Skip-Net network as the generator of a GAN (Generative Adversarial Network), it offers these advantages over the prior art:

1. Low time complexity. Traditional motion-deblurring methods take two steps, blur-kernel estimation and non-blind deconvolution, and both require many iterations to reach a good restoration; as a result, processing a single motion-blurred image takes a long time. The model designed here avoids the time cost of repeated iterative optimization.

2. Accurate texture recovery. In traditional methods, an inaccurate blur-kernel estimate leads to incorrect recovery of image information during restoration, and the non-blind deconvolution step often produces ringing artifacts around textures. The double-skip-connection network designed here extracts deep and shallow features at every scale; through these feature connections the network can, to a certain extent, recover more detail.

3. No checkerboard effect in the restored image. Most existing deep-learning methods perform upsampling with deconvolution layers, and each deconvolution introduces some aliasing, so the final restored image shows jagged artifacts, the checkerboard effect referred to in this invention.

For a better understanding of the concept and principle of the invention, it is described in detail below with reference to the accompanying drawings and embodiments. The description of specific embodiments does not limit the scope of protection of the invention in any way.

Description of Drawings

Figure 1 shows the generative adversarial network mechanism of the invention;

Figure 2 shows the Bi-Skip-Net network structure of the invention;

Figure 3 shows the feature operations at one sampling scale;

Figure 4 shows the generator design: Bi-Skip-Net plus residual;

Figures 5a-d show a subjective comparison between the invention and other algorithms, where:

Figure 5a: blurred image;

Figure 5b: restoration by Nah et al.;

Figure 5c: restoration by Kupyn et al.;

Figure 5d: restoration by Bi-Skip-Net.

Detailed Description

Figure 1 shows the generative adversarial network mechanism adopted by the invention. The blurred image passes through the generator to produce a restored image. The discriminator's task is to distinguish the restored image from the sharp image as well as possible, while the generator's task is to fool the discriminator and degrade its ability to tell the two apart.

The specific steps of the embodiment of the invention are as follows:

(1) Design the generator and the discriminator. The principle is shown in Figure 4: a blurred image of a building passes through the Bi-Skip-Net generator and yields a sharp picture of the building; any other blurred image can likewise be restored to a sharp picture with this model.

(2) Train the network with the following loss function,

(the original formula images are not reproduced in this text; the form below follows the surrounding description)

    L = L_adv + λ · L_cond    (Equation 1)

where L_adv is the adversarial loss function, L_cond is the conditional loss function, and λ is the weight of the conditional loss.

The discriminator D is optimized by maximizing the adversarial objective (Equation 2);

the generator G is optimized by minimizing Equation 3, the total loss above;

where L_cond is designed as

    L_cond = (1 / (c · w · h)) · Σ_l ‖L_l − S_l‖_α    (Equation 4)

in which L and S denote the model outputs and the ground truth at the different levels, α takes the value 1 or 2, and the whole conditional loss is normalized by the number of channels c, the width w, and the height h.
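A NumPy sketch of the conditional loss under the reading above (the exact formulas are images in the original publication, so the α-norm form and the λ value here are assumptions, not the patent's exact expression):

```python
import numpy as np

def conditional_loss(outputs, targets, alpha=1):
    """Multi-level content loss: per level, the alpha-power error summed
    and normalized by channels * width * height, then summed over levels."""
    total = 0.0
    for level_out, level_true in zip(outputs, targets):
        c, w, h = level_out.shape
        total += np.sum(np.abs(level_out - level_true) ** alpha) / (c * w * h)
    return total

def generator_loss(adv_loss, outputs, targets, lam=100.0, alpha=1):
    """Total loss of Equation 1: adversarial term plus weighted content
    term; lam (the lambda weight) is an illustrative value."""
    return adv_loss + lam * conditional_loss(outputs, targets, alpha)
```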

(3) Use the trained network as the final restoration model.

As shown in Figure 1, the method of this embodiment uses a generative adversarial network mechanism to restore blurred images. Figure 2 shows the Bi-Skip-Net network structure; following this structure, a Bi-Skip-Net network is designed as the generator.

The discriminator parameters for this Bi-Skip-Net network structure are listed in Table 1.

Table 1. Discriminator parameter table

| Layer | Type | Parameter dimension | Stride |
|-------|------|---------------------|--------|
| 1     | conv | 32x3x5x5            | 2      |
| 2     | conv | 64x32x5x5           | 1      |
| 3     | conv | 64x64x5x5           | 2      |
| 4     | conv | 128x64x5x5          | 1      |
| 5     | conv | 128x128x5x5         | 4      |
| 6     | conv | 256x128x5x5         | 1      |
| 7     | conv | 256x256x5x5         | 4      |
| 8     | conv | 512x256x5x5         | 1      |
| 9     | conv | 512x512x4x4         | 4      |
| 10    | fc   | 512x1x1x1           | -      |
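Reading each parameter dimension in Table 1 as out-channels x in-channels x kernel-height x kernel-width, a short script can verify that the channel counts chain correctly from a 3-channel input and count the convolution weights (this consistency check is mine, not part of the patent):

```python
# (out_ch, in_ch, k_h, k_w, stride) for the conv layers of Table 1
LAYERS = [
    (32, 3, 5, 5, 2), (64, 32, 5, 5, 1), (64, 64, 5, 5, 2),
    (128, 64, 5, 5, 1), (128, 128, 5, 5, 4), (256, 128, 5, 5, 1),
    (256, 256, 5, 5, 4), (512, 256, 5, 5, 1), (512, 512, 4, 4, 4),
]

def check_chain(layers, in_ch=3):
    """Assert each layer consumes exactly what the previous one produces;
    return the final channel count (fed to the 512-to-1 fc layer)."""
    for out_c, in_c, _, _, _ in layers:
        assert in_c == in_ch, f"layer expects {in_c} channels, got {in_ch}"
        in_ch = out_c
    return in_ch

def weight_count(layers):
    """Total convolution weights (biases excluded)."""
    return sum(o * i * kh * kw for o, i, kh, kw, _ in layers)
```

`check_chain(LAYERS)` returns 512, matching the 512x1x1x1 input of the final fc layer.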

As shown in Figure 2, the Bi-Skip-Net network designed in this embodiment consists of three parts: the contract path (D), comprising D0, D1, D2, and D3; the skip path (S), comprising S0, S1, S2, and S3; and the expand path (U), comprising U0, U1, U2, and U3. The contract layers downsample to compress features, the skip layers connect deep and shallow features, and the expand layers upsample. D* (D0-D3), S* (S0-S3), and U* (U0-U3) denote the features at the corresponding downsampling scales.

Figure 3 shows the feature operations at one sampling scale. In the contract (compression) path, the current features pass through 3 residual blocks (3xResBlock) to obtain deep features, and a residual pattern adding pooling and convolution produces the features of the next scale. In the skip (cross-connection) path, a 1x1 convolution compresses the shallow features and a 3x3 convolution compresses the deep features. In the expand path, features are joined by concat (concatenate) and upsampled by 3x3 deconvolution.

Figure 4 shows the generator design: Bi-Skip-Net plus residual. As shown in Figure 4, the Bi-Skip-Net-plus-residual pattern is finally adopted as the generator.

The comparison between the invention and other algorithms is detailed in Table 2, a test comparison on the GoPro dataset.

Table 2. Test comparison between the invention and other algorithms on the GoPro dataset

Figures 5a-d show a subjective comparison between the invention and other algorithms. Figure 5a is the blurred image, Figure 5b the restoration by Nah et al., Figure 5c the restoration by Kupyn et al., and Figure 5d the restoration by the Bi-Skip-Net of the invention. The word "HARDWARE" in the lower-left corner of the picture is illegible or barely legible in the other three images, while the invention restores it clearly. This subjective comparison shows that the invention restores blurred images markedly better.

It should be noted that the embodiments are published to aid further understanding of the invention; those skilled in the art will appreciate that various replacements and modifications are possible without departing from the spirit and scope of the invention and the appended claims. The invention is therefore not limited to what the embodiments disclose, and the scope of protection claimed is defined by the claims.

Claims (5)

1. An image deblurring method based on Bi-Skip-Net, comprising the following steps:
1) inputting a blurred image and obtaining shallow features through a convolution layer with a 7x7 kernel and stride 1;
2) passing the shallow features through 3 residual blocks to obtain deep features at the current scale;
3) downsampling the deep features in a downsample-plus-residual pattern to obtain shallow features at the next scale;
4) repeating steps 2 and 3 for the specified number of downsamplings n to obtain shallow and deep features at each scale, no deep features being extracted at the smallest scale;
5) taking the shallow features at the smallest scale as base features;
6) passing the shallow features of the previous scale through a convolution layer with a 1x1 kernel and stride 1 to obtain reduced shallow features; passing the corresponding deep features through a convolution layer with a 3x3 kernel and stride 2 to obtain reduced deep features, concatenating them with the base features, and upsampling; concatenating the upsampled features with the reduced shallow features to obtain the base features at the current scale;
7) repeating step 6 until the upsampling is complete;
8) passing the resulting base features through a convolution layer with a 7x7 kernel and stride 1 to obtain residual features;
9) adding the residual features to the input image to obtain the restored image;
10) adopting the Bi-Skip-Net-plus-residual pattern as the generator.

2. The image deblurring method of claim 1, wherein in step 4) the specified number of downsamplings is 5.

3. The image deblurring method of claim 1, wherein the Bi-Skip-Net network consists of three parts: a contract path (D), a skip path (S), and an expand path (U); the contract layers downsample to compress features, the skip layers connect deep and shallow features, and the expand layers upsample; D*, S*, and U* denote features at the corresponding downsampling scales.

4. The image deblurring method of claim 3, wherein for the feature operations at each sampling scale: in the contract path, the current features pass through 3 residual blocks (3xResBlock) to obtain deep features, and a residual pattern of pooling plus convolution produces the features of the next scale; in the skip path, a 1x1 convolution compresses the shallow features and a 3x3 convolution compresses the deep features; in the expand path, features are joined by concat and upsampled by 3x3 deconvolution.

5. The image deblurring method of claim 1, wherein the generator of step 10) is designed as follows:
(1) the network is trained with a loss comprising an adversarial loss function, a conditional loss function, and a weight λ on the conditional loss; the discriminator D is optimized by maximizing the adversarial objective, and the generator G is optimized by minimizing the total loss; in the conditional loss, L and S denote the model outputs and ground truth at different levels, α takes the value 1 or 2, and the whole conditional loss is normalized by the number of channels c, the width w, and the height h;
(2) the trained network is used as the final restoration model.
CN201811298475.8A 2018-11-02 2018-11-02 A kind of image deblurring algorithm based on Bi-Skip-Net Pending CN109410146A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201811298475.8A CN109410146A (en) 2018-11-02 2018-11-02 A kind of image deblurring algorithm based on Bi-Skip-Net
PCT/CN2018/117634 WO2020087607A1 (en) 2018-11-02 2018-11-27 Bi-skip-net-based image deblurring method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811298475.8A CN109410146A (en) 2018-11-02 2018-11-02 A kind of image deblurring algorithm based on Bi-Skip-Net

Publications (1)

Publication Number Publication Date
CN109410146A true CN109410146A (en) 2019-03-01

Family

ID=65471437

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811298475.8A Pending CN109410146A (en) 2018-11-02 2018-11-02 A kind of image deblurring algorithm based on Bi-Skip-Net

Country Status (2)

Country Link
CN (1) CN109410146A (en)
WO (1) WO2020087607A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110570375A (en) * 2019-09-06 2019-12-13 腾讯科技(深圳)有限公司 image processing method, image processing device, electronic device and storage medium
CN111612711A (en) * 2019-05-31 2020-09-01 北京理工大学 An Improved Image Deblurring Method Based on Generative Adversarial Networks
CN112102184A (en) * 2020-09-04 2020-12-18 西北工业大学 Image deblurring method based on Scale-Encoder-Decoder-Net network
CN113570516A (en) * 2021-07-09 2021-10-29 湖南大学 Image Blind Motion Deblurring Based on CNN-Transformer Hybrid Autoencoder

Families Citing this family (21)

Publication number Priority date Publication date Assignee Title
CN111986102B (en) * 2020-07-15 2024-02-27 万达信息股份有限公司 Digital pathological image deblurring method
CN112070658B (en) * 2020-08-25 2024-04-16 西安理工大学 Deep learning-based Chinese character font style migration method
CN112070693B (en) * 2020-08-27 2024-03-26 西安理工大学 Single dust image recovery method based on gray world adaptive network
CN112184590B (en) * 2020-09-30 2024-03-26 西安理工大学 Single dust image recovery method based on gray world self-guiding network
CN112330554B (en) * 2020-10-30 2024-01-19 西安工业大学 Structure learning method for deconvolution of astronomical image
CN112561819A (en) * 2020-12-17 2021-03-26 温州大学 Self-filtering image defogging algorithm based on self-supporting model
CN112634163B (en) * 2020-12-29 2024-10-15 南京大学 Method for removing image motion blur based on improved cyclic generation countermeasure network
CN113538263A (en) * 2021-06-28 2021-10-22 江苏威尔曼科技有限公司 Motion blur removing method, medium, and device based on improved DeblurgAN model
CN113592736B (en) * 2021-07-27 2024-01-12 温州大学 Semi-supervised image deblurring method based on fused attention mechanism
CN113610721B (en) * 2021-07-27 2024-09-24 河南大学 Image restoration method for generating countermeasure network based on partial convolution
CN113947589B (en) * 2021-10-26 2024-08-02 北京理工大学 Missile-borne image deblurring method based on countermeasure generation network
CN114119395B (en) * 2021-11-15 2024-06-11 北京理工大学 Image processing system and method integrating distortion detection and restoration
CN114240771A (en) * 2021-11-23 2022-03-25 无锡学院 Image deblurring system and method based on dual control network
CN114511465B (en) * 2022-02-21 2024-08-20 华东交通大学 Image restoration method and system based on improvement DCGAN
CN114723630B (en) * 2022-03-31 2024-09-06 福州大学 Image deblurring method and system based on cavity double-residual multi-scale depth network
CN114998121A (en) * 2022-05-17 2022-09-02 杭州电子科技大学 Image highlight removing method and system based on improved unet + + network
CN114936977A (en) * 2022-05-31 2022-08-23 东南大学 An Image Deblurring Method Based on Channel Attention and Cross-Scale Feature Fusion
CN114913095B (en) * 2022-06-08 2024-03-12 西北工业大学 Depth deblurring method based on domain adaptation
CN114841897B (en) * 2022-06-08 2024-03-15 西北工业大学 Depth deblurring method based on self-adaptive fuzzy kernel estimation
CN115760589A (en) * 2022-09-30 2023-03-07 浙江大学 Image optimization method and device for motion blurred images
CN117058038B (en) * 2023-08-28 2024-04-30 北京航空航天大学 A diffraction blurred image restoration method based on even convolution deep learning

Citations (4)

Publication number Priority date Publication date Assignee Title
CN106251303A (en) * 2016-07-28 2016-12-21 同济大学 A kind of image denoising method using the degree of depth full convolutional encoding decoding network
US20170365046A1 (en) * 2014-08-15 2017-12-21 Nikon Corporation Algorithm and device for image processing
CN107689034A (en) * 2017-08-16 2018-02-13 清华-伯克利深圳学院筹备办公室 A neural network training method, denoising method and device
CN108629743A (en) * 2018-04-04 2018-10-09 腾讯科技(深圳)有限公司 Processing method, device, storage medium and the electronic device of image

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8737687B2 (en) * 2011-12-30 2014-05-27 Honeywell International Inc. System and method for tracking a subject using raw images and tracking errors
CN108460742A (en) * 2018-03-14 2018-08-28 日照职业技术学院 An image restoration method based on a BP neural network
CN108711141B (en) * 2018-05-17 2022-02-15 重庆大学 Blind restoration method for motion-blurred images using an improved generative adversarial network

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170365046A1 (en) * 2014-08-15 2017-12-21 Nikon Corporation Algorithm and device for image processing
CN106251303A (en) * 2016-07-28 2016-12-21 同济大学 An image denoising method using a deep fully convolutional encoder-decoder network
CN107689034A (en) * 2017-08-16 2018-02-13 清华-伯克利深圳学院筹备办公室 A neural network training method, denoising method and device
CN108629743A (en) * 2018-04-04 2018-10-09 腾讯科技(深圳)有限公司 Image processing method, apparatus, storage medium, and electronic device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
KAI LIN et al.: "Adaptive Integration Skip Compensation Neural Networks for Removing Mixed Noise in Image", PCM 2018: Advances in Multimedia Information Processing *
OREST KUPYN et al.: "DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks", arXiv *
WEIWEI FU et al.: "Arrears Prediction For Electricity Customer Through Wgan-Gp", 2017 IEEE 2nd Information Technology, Networking, Electronic and Automation Control Conference (ITNEC) *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111612711A (en) * 2019-05-31 2020-09-01 北京理工大学 An Improved Image Deblurring Method Based on Generative Adversarial Networks
CN111612711B (en) * 2019-05-31 2023-06-09 北京理工大学 Image deblurring method based on an improved generative adversarial network
CN110570375A (en) * 2019-09-06 2019-12-13 腾讯科技(深圳)有限公司 image processing method, image processing device, electronic device and storage medium
CN110570375B (en) * 2019-09-06 2022-12-09 腾讯科技(深圳)有限公司 Image processing method, device, electronic device and storage medium
CN112102184A (en) * 2020-09-04 2020-12-18 西北工业大学 Image deblurring method based on Scale-Encoder-Decoder-Net network
CN113570516A (en) * 2021-07-09 2021-10-29 湖南大学 Image Blind Motion Deblurring Based on CNN-Transformer Hybrid Autoencoder
CN113570516B (en) * 2021-07-09 2022-07-22 湖南大学 Image Blind Motion Deblurring Based on CNN-Transformer Hybrid Autoencoder

Also Published As

Publication number Publication date
WO2020087607A1 (en) 2020-05-07

Similar Documents

Publication Publication Date Title
CN109410146A (en) An image deblurring algorithm based on Bi-Skip-Net
CN111709895B (en) Image blind deblurring method and system based on attention mechanism
CN109241982B (en) Target detection method based on deep and shallow layer convolutional neural network
Zhang et al. One-two-one networks for compression artifacts reduction in remote sensing
CN110782399A (en) Image deblurring method based on multitask CNN
CN107133923B (en) A non-blind deblurring method for blurred images based on an adaptive gradient sparse model
CN110766632A (en) Image denoising method based on channel attention mechanism and characteristic pyramid
CN109544475A (en) Bi-Level optimization method for image deblurring
CN111612741B (en) Accurate reference-free image quality evaluation method based on distortion recognition
CN110428382B (en) Efficient video enhancement method and device for mobile terminal and storage medium
CN107993208A (en) A nonlocal total-variation image restoration method based on sparse overlapping group prior constraints
CN107038688A (en) Image noise detection and denoising method based on the Hessian matrix
CN116051428A (en) A low-light image enhancement method based on joint denoising and super-resolution of deep learning
WO2023206343A1 (en) Image super-resolution method based on image pre-training strategy
CN105590296B (en) A single-frame image super-resolution method based on double-dictionary learning
CN104952051B (en) Low-rank image inpainting method based on Gaussian mixture models
CN117218013A (en) Event camera image processing method, training method, system, equipment and medium
KR102095444B1 (en) Method and Apparatus for Removing gain Linearity Noise Based on Deep Learning
CN116245765A (en) Image denoising method and system based on enhanced deep dilated convolutional neural network
CN104123707B (en) Single-image super-resolution reconstruction method based on a local rank prior
Wei et al. Image denoising with deep unfolding and normalizing flows
CN116757962A (en) Image denoising method and device
CN105787890B (en) Image denoising method using adaptive equidistant-template iterative mean filtering
CN112734655B (en) Low-light image enhancement method based on convolutional neural network enhanced CRM
Wang et al. RFFNet: Towards Robust and Flexible Fusion for Low-Light Image Denoising

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190301