CN111028177A - Edge-based deep learning image motion blur removing method - Google Patents


Info

Publication number
CN111028177A
CN111028177A
Authority
CN
China
Prior art keywords
image
edge
loss function
convolution
motion blur
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911275632.8A
Other languages
Chinese (zh)
Other versions
CN111028177B (en)
Inventor
姚剑
蒋佳芹
李俐俐
龚烨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN201911275632.8A priority Critical patent/CN111028177B/en
Publication of CN111028177A publication Critical patent/CN111028177A/en
Application granted granted Critical
Publication of CN111028177B publication Critical patent/CN111028177B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T5/73
    • G06T7/13 Edge detection
    • G06T2207/10024 Color image
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20201 Motion blur correction
    • Y02T10/40 Engine management systems

Abstract

The invention relates to image restoration technology, in particular to an edge-based deep learning method for removing image motion blur. The method extracts edges from a blurred image with a trained HED network and then extracts, with convolution layers, edge feature information that guides the motion deblurring process; a deblurring backbone network extracts multi-scale feature information from the blurred image, integrates image features and edge features at each scale with spatial feature transformation layers, and a decoding part gradually recovers a latent sharp image from the deepest image features; blurred-sharp image pairs serve as the training sample set, the total loss function is defined as the sum of a mean square error loss function and a perceptual loss function, and the deblurring backbone network is trained with the total loss function until it converges to optimal accuracy; finally, a motion-blurred image is input into the trained deblurring backbone network to obtain the deblurred result. The method effectively integrates image features and edge features, and the deblurring effect is pronounced.

Description

Edge-based deep learning image motion blur removing method
Technical Field
The invention belongs to the technical field of image restoration, and particularly relates to a method for removing motion blur of a deep learning image based on edges.
Background
During photographing, relative motion between the imaging device and objects in the scene produces motion blur, and important detail information in the captured image is lost. The process of recovering a latent sharp image from the degraded blurred image is called deblurring. Motion deblurring can recover sharp edges from images blurred by camera shake, fast-moving vehicles in the scene, and so on; it improves the quality of visual perception and facilitates subsequent high-level applications such as character recognition and target detection, and therefore has high research value and application prospects.
Existing image deblurring algorithms can generally be divided into traditional deblurring methods based on energy optimization and deblurring methods based on deep learning. Traditional energy-optimization-based deblurring can be further subdivided into globally consistent deblurring and globally inconsistent deblurring.
In conventional approaches, a motion-blurred image can be modeled as the convolution of a blur kernel with the sharp image, followed by the addition of additive noise. Energy-optimization-based deblurring comprises two stages, blur kernel estimation and image deconvolution. In the blur kernel estimation stage, the degradation model of the motion-blurred image is analyzed, an energy equation is established by combining prior statistical knowledge of the blur kernel and the sharp image, and a blur kernel estimate is obtained by minimizing this equation; once the blur kernel is obtained, a sharp image estimate is obtained by modeling and solving with the degradation model and prior knowledge of the sharp image. Globally consistent blur is generally caused by in-plane translation when the camera shoots a static scene; the blur kernel is then shared by the whole image, so an image pyramid can be built and the blur kernel recovered from coarse to fine. Globally inconsistent blur has complex causes, including camera rotation, dynamic targets and depth-of-field variation in a static scene; it is generally assumed that each small region of the blurred image shares one blur kernel, and a linear blur kernel library is usually established to fit the kernels of the small regions. Globally consistent deblurring works well, but the uniform-blur assumption is over-idealized; inconsistent blur with its complex causes is closer to the real world and its study has more practical application value, but under the traditional framework its modeling and solution are complicated and the results are unsatisfactory.
In recent years, deep neural networks have shown strong learning ability in the field of computer vision and have been applied to image motion deblurring. Deep learning methods are data-driven and do not strictly distinguish globally consistent from globally inconsistent deblurring. Early work used the strong feature expression capability of networks to learn a blur kernel estimate and then performed image deconvolution with traditional methods, but the deblurring effect did not improve much. Subsequent end-to-end deblurring network frameworks learn a mapping from blurred image to sharp image; compared with traditional energy-optimization methods, existing deep-learning-based inconsistent deblurring methods have made great progress in model building, model solving and deblurring effect, but deblurring at image edges remains incomplete.
Disclosure of Invention
The invention aims to provide a method for removing motion blur of a deep learning image by taking edge information as auxiliary information.
In order to achieve the purpose, the invention adopts the technical scheme that: an edge-based deep learning image motion blur removing method comprises the following steps:
step 1, extracting edges from a blurred image by using a trained HED network, and then extracting edge characteristic information guiding a motion blur removing process by using a convolutional layer;
step 2, extracting multi-scale feature information from the blurred image by the deblurring backbone network, integrating image features and edge features on each scale by using a spatial feature transformation layer, and gradually recovering a potential clear image from the deepest image features by a decoding part;
step 3, taking blurred-sharp image pairs as a training sample set, defining the sum of a mean square error loss function and a perceptual loss function as the total loss function, and training the deblurring backbone network with the total loss function until the network converges to optimal accuracy;
and 4, inputting the motion blur image into the deblurred backbone network trained in the step 3 to obtain a deblurred result.
In the above edge-based deep learning image motion blur removal method, obtaining the edge feature information in step 1 comprises the following sub-steps:
step 1.1, obtaining the blurred image edge map; a color blurred image of size W×H×3 is input into an HED network loaded with pre-trained weights to obtain a W×H×1 edge map, wherein W is the width of the original image and H is the height of the original image;
step 1.2, mining deep-level feature information of the edge map; taking the edge map output in step 1.1 as input, high-level edge feature information is extracted from the blurred edges through a series of convolution and nonlinear activation operations: the first convolution has a 1×1 kernel, the subsequent four convolutions have 3×3 kernels, and the spatial resolution of the image is kept unchanged throughout; the nonlinear activation adopts a leaky rectified linear unit (LeakyReLU), and the final output is high-dimensional edge feature information of size W×H×128.
In the above edge-based deep learning image motion blur removal method, the implementation of step 2 comprises the following sub-steps:
2.1, extracting blurred image features; a color blurred image of size W×H×3 is input to convolution layers formed by convolution and nonlinear activation, wherein the encoding stage can be divided into 4 processing blocks, the feature map of each block having size kW×kH×l, W being the width of the original image and H its height; k = 1, 0.5, 0.25, 0.25 and l = 32/k;
step 2.2, integrating blurred image features and edge information at different scales; the edge features output in step 1.2 and the current-scale image features obtained in step 2.1 are integrated by a spatial feature transformation residual block, which comprises, in order, a spatial feature transformation layer, a convolution layer, a spatial feature transformation layer and a convolution layer;
step 2.3, further mining blurred image features; dilated (hole) convolutions with different dilation rates are combined to enlarge the receptive field and mine further feature information; this part comprises 2 serial dilated-convolution residual blocks and 1 parallel dilated-convolution residual block;
and 2.4, gradually reconstructing the blurred image from the deep image characteristics.
In the above edge-based deep learning image motion blur removal method, the implementation of step 3 comprises the following sub-steps:
step 3.1, defining the mean square error loss function L_mse and the perceptual loss function L_p, respectively:

L_mse = (1 / (W·H)) Σ_{m,n} ‖I_c(m,n) − I_d(m,n)‖²

L_p = (1 / (W_{i,j}·H_{i,j})) Σ_{m,n} ‖φ_{i,j}(I_c)(m,n) − φ_{i,j}(I_d)(m,n)‖²

wherein I_c and I_d are the real sharp image and the deblurred image, respectively; m, n are the horizontal and vertical coordinate indices of the image; φ_{i,j} denotes the VGG19 feature map (weights pre-trained on ImageNet) at the j-th convolution before the i-th max-pooling layer; W_{i,j} and H_{i,j} are the width and height of that feature map, typically set to i = 3, j = 3;
step 3.2, defining the total loss function:
L_total = L_mse + λ × L_p
wherein λ is the weight of the perceptual loss function, set to 0.01;
Using the total loss function L_total, the network is trained until the entire network converges to optimal accuracy.
The invention has the beneficial effects that: 1. Strong feature learning and generalization ability. An end-to-end model is trained with a convolutional-neural-network-based deep learning method, so that inputting a motion-blurred image yields a sharp image with the same resolution as the input. The process needs no manually designed features given in advance; the network learns the required features from the training data and uses them appropriately, so the method has good generalization ability and performs stably even in severely blurred scenes.
2. The network structure is simple and easy to train. The deblurring backbone network is of a single scale, no additional sub-network (such as an edge extraction network) needs training, only the existing edge extraction network is used for acquiring edges, and guiding information provided by the edges is effectively utilized through an edge feature and image feature integration module. Therefore, the network designed by the invention has a simple structure and is easy to train.
3. The deblurring precision is improved, and the edge effect is obvious. By introducing the edge information and effectively integrating the edge information and the image information on a feature level through a spatial feature transformation layer, the deblurring effect is remarkably improved, particularly at the edge.
Drawings
FIG. 1 is a flowchart of the edge-based deep learning image motion blur removal method according to an embodiment of the present invention;
FIG. 2(a) is a schematic diagram illustrating a motion blur removal backbone network in the architecture of an edge-based deep learning motion blur removal network according to an embodiment of the present invention;
FIG. 2(b) is a block diagram of an edge-based feature integration module in the architecture of an edge-based deep learning deblurring network according to an embodiment of the present invention;
FIG. 2(c) is the parallel dilated-convolution residual block in the architecture of the edge-based deep learning deblurring network according to an embodiment of the present invention;
FIG. 3(a) is a blurred image of an experiment on a GOPRO test data set according to an embodiment of the present invention;
FIG. 3(b) is a deblurred image of an experiment on a GOPRO test data set according to an embodiment of the present invention;
FIG. 4(a) is a first blurred image on the Stereo Blur Dataset test data set according to one embodiment of the present invention;
FIG. 4(b) is a first deblurred image on a Stereo Blur Dataset test data set in accordance with an embodiment of the present invention;
FIG. 4(c) is a second blurred image on the Stereo Blur Dataset test data set according to one embodiment of the present invention;
FIG. 4(d) is a second deblurred image on the Stereo Blur Dataset test data set according to one embodiment of the present invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
Considering the weakness at image edges of end-to-end non-uniform deblurring methods based on deep learning, and the demonstrated feasibility, in traditional deblurring, of introducing image prior knowledge to shrink the solution space and obtain effective results, this embodiment provides a deep learning image motion deblurring method that uses edge information as auxiliary information, further improving the deblurring effect at edges. Compared with multi-scale deblurring network architectures, the method uses only a single-scale network architecture, greatly reducing the complexity and parameter count of the network. Compared with most single-scale deblurring network architectures, this embodiment introduces edge information so that the deblurring process focuses more on edge regions. Compared with existing single-scale deblurring architectures that consider edge information, this embodiment directly uses an existing edge extraction network to obtain edges and integrates image and edge information at the feature level via spatial feature transformation.
The present embodiment is implemented by the following technical solution, an edge-based deep learning image motion deblurring method, with the whole network structure shown in fig. 2(a), fig. 2(b) and fig. 2(c). The method mainly comprises two modules: the preprocessing of edge information and the motion deblurring backbone network. Unless otherwise specified, the convolution kernels used in the present embodiment are of size 3 × 3. The specific steps of motion blur removal are as follows:
the method comprises the following steps: extracting edges from the blurred image by using a trained HED (Hollistically-Nested Edge Detection) network, and then extracting Edge characteristic information guiding a motion deblurring process by using a convolution layer;
step two: the image deblurring backbone network of the coding-decoding structure extracts multi-scale feature information from a blurred image, a spatial feature transformation layer is used for integrating image features and edge features on each scale, and a decoding part gradually recovers a potential clear image from the deepest image features;
step three: taking blurred-sharp image pairs as the training sample set, defining the sum of the mean square error loss function and the perceptual loss function as the total loss function, and training the deblurring backbone network with the total loss function until it converges to optimal accuracy;
step four: and inputting the motion blur image into a trained network to obtain a deblurred result.
In specific implementation, as shown in fig. 2(a), 2(b), and 2(c), the edge-based deep learning image deblurring network framework includes the following steps:
s1, acquiring the edge characteristic information, comprising the following substeps:
S1.1, acquiring the blurred image edge map. The HED (Holistically-Nested Edge Detection) network is a neural network framework for extracting image edges; once trained, it outputs, for an input color image, an edge probability map of the same resolution with pixel values between 0 and 1. Inputting a color blurred image of size W × H × 3 into the HED network loaded with pre-trained weights yields a W × H × 1 edge map, where W represents the width and H the height of the original image.
S1.2, mining deep feature information of the edge map. Taking the edge map output in S1.1 as input, high-level edge feature information is extracted from the blurred edges through a series of convolution and nonlinear activation operations: the first convolution has a 1 × 1 kernel, the subsequent four convolutions have 3 × 3 kernels, and the spatial resolution of the image is kept unchanged throughout; the nonlinear activation is a leaky rectified linear unit (LeakyReLU), and the final output is high-dimensional edge feature information of size W × H × 128.
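The edge-feature extraction described above can be sketched in PyTorch. The layer counts and kernel sizes follow the text (one 1 × 1 convolution, four 3 × 3 convolutions, LeakyReLU activations, 128 output channels); the intermediate channel width and the LeakyReLU slope are assumptions, since only the output size is fixed by the text:

```python
import torch
import torch.nn as nn

class EdgeFeatureExtractor(nn.Module):
    """Expands a W x H x 1 HED edge map into W x H x 128 edge features.

    Assumption: all hidden layers already use 128 channels; the patent only
    fixes the 128-channel output."""
    def __init__(self, out_channels=128):
        super().__init__()
        layers = [nn.Conv2d(1, out_channels, kernel_size=1), nn.LeakyReLU(0.2)]
        for _ in range(4):
            # 3x3 convs with padding=1 keep the spatial resolution unchanged
            layers += [nn.Conv2d(out_channels, out_channels, 3, padding=1),
                       nn.LeakyReLU(0.2)]
        self.body = nn.Sequential(*layers)

    def forward(self, edge_map):       # edge_map: N x 1 x H x W
        return self.body(edge_map)     # N x 128 x H x W

edge = torch.rand(1, 1, 64, 64)        # stand-in for an HED edge probability map
feat = EdgeFeatureExtractor()(edge)
print(feat.shape)                      # torch.Size([1, 128, 64, 64])
```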
S2, the deblurring backbone network implements an end-to-end deblurring operation on the input blurred image, as shown in fig. 2(a), and includes the following sub-steps:
and S2.1, extracting the characteristics of the blurred image. A color blurred image of size W × H × 3 is input to a convolution layer including convolution and nonlinear activation, and the encoding stage is divided into 4 processing blocks, each having a feature map size of kW × kH × l, (k ═ 1,0.5,0.25,0.25, and l ═ 32/k).
S2.2, integrating image features and edge information at different scales. The edge features obtained in S1.2 and the current-scale image features obtained in S2.1 are integrated by a spatial feature transformation residual block, which comprises, in order, a spatial feature transformation layer, a convolution layer, a spatial feature transformation layer and a convolution layer, as shown in fig. 2(b).
Taking the 1st spatial feature transformation residual block at the original resolution scale as an example (k = 1, image feature size W × H × 32): the edge features are first brought to the current scale kW × kH × l; two convolution layers then generate the modulation parameters of the spatial feature transformation layer, a gain γ and a bias β (the 1st convolution has input W × H × 32 and output W × H × 32, the 2nd convolution has input W × H × 32 and output W × H × l, with k = 1, l = 32/k). The gain γ is multiplied pixel by pixel with the image features, and the bias β is then added pixel by pixel to the result, yielding the adjusted image features. The adjusted features pass through a 3 × 3 convolution, a second spatial feature transformation layer and a second convolution, and the block input is finally added to this output through the residual connection. The same procedure is applied at the coarser scales, where the edge features are first resized to the corresponding resolution kW × kH before the gain and bias are generated.
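A minimal PyTorch sketch of such a spatial feature transformation (SFT) residual block follows. The modulation rule (gain × features + bias, both predicted from edge features) and the layer order (SFT, conv, SFT, conv, skip) follow the text; the depth of the γ/β branches and the activation placement are assumptions:

```python
import torch
import torch.nn as nn

class SFTLayer(nn.Module):
    """Predicts a per-pixel gain (gamma) and bias (beta) from edge features
    and modulates the image features as gamma * F + beta."""
    def __init__(self, img_ch=32, edge_ch=128):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(edge_ch, img_ch, 3, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(img_ch, img_ch, 3, padding=1))
        self.gamma = branch()
        self.beta = branch()

    def forward(self, img_feat, edge_feat):
        # gamma modulates multiplicatively, beta additively, pixel by pixel
        return self.gamma(edge_feat) * img_feat + self.beta(edge_feat)

class SFTResBlock(nn.Module):
    """SFT layer - conv - SFT layer - conv, with a residual skip."""
    def __init__(self, img_ch=32, edge_ch=128):
        super().__init__()
        self.sft1 = SFTLayer(img_ch, edge_ch)
        self.conv1 = nn.Conv2d(img_ch, img_ch, 3, padding=1)
        self.sft2 = SFTLayer(img_ch, edge_ch)
        self.conv2 = nn.Conv2d(img_ch, img_ch, 3, padding=1)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, img_feat, edge_feat):
        out = self.conv1(self.act(self.sft1(img_feat, edge_feat)))
        out = self.conv2(self.sft2(out, edge_feat))
        return img_feat + out          # residual connection

f = torch.rand(1, 32, 16, 16)          # image features at the current scale
e = torch.rand(1, 128, 16, 16)         # edge features resized to that scale
out = SFTResBlock()(f, e)              # same shape as the image features
```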
S2.3, further mining blurred image features. Dilated (hole) convolutions with different dilation rates are combined to enlarge the receptive field and mine more feature information; this part comprises 2 serial dilated-convolution residual blocks and 1 parallel dilated-convolution residual block. A standard residual module adds the input to the result of a convolution-nonlinear activation-convolution operation on that input; the serial and parallel dilated-convolution blocks are both variations of this standard residual. In the 2 serial dilated-convolution residual blocks, the dilation rate of the 1st convolution is changed from the original 1 to 2 and to 3, respectively, with input and output both of size kW × kH × 128 (k = 0.25). In the parallel dilated-convolution block, shown in fig. 2(c), the input passes through four parallel dilated convolutions with dilation rates 1, 2, 3 and 4, each with input and output kW × kH × 128 (k = 0.25); the results of the 4 convolutions are concatenated along the channel dimension, giving features of size kW × kH × 512 (k = 0.25); a subsequent convolution with dilation rate 1 reduces the number of channels back to 128, with input kW × kH × 512 and output kW × kH × 128 (k = 0.25); finally, the result of the preceding operations is added to the block input.
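The parallel dilated-convolution residual block can be sketched as follows; the branch rates (1-4), channel counts (128 in, 512 concatenated, 128 fused) and the residual skip follow the text, while setting each branch's padding equal to its dilation rate is the standard trick to keep the spatial size fixed:

```python
import torch
import torch.nn as nn

class ParallelDilatedResBlock(nn.Module):
    def __init__(self, ch=128):
        super().__init__()
        # four parallel 3x3 convs with dilation rates 1..4;
        # padding = rate keeps the spatial resolution unchanged
        self.branches = nn.ModuleList(
            nn.Conv2d(ch, ch, 3, padding=r, dilation=r) for r in (1, 2, 3, 4))
        # rate-1 conv fuses the 4*ch concatenated channels back to ch
        self.fuse = nn.Conv2d(4 * ch, ch, 3, padding=1)

    def forward(self, x):
        cat = torch.cat([b(x) for b in self.branches], dim=1)  # N x 512 x h x w
        return x + self.fuse(cat)                              # residual skip

x = torch.rand(1, 128, 16, 16)   # deepest-scale features (k = 0.25)
y = ParallelDilatedResBlock()(x)  # same shape as the input
```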
S2.4, gradually reconstructing the blurred image from the deep image features. First, the image features at the smallest resolution scale, kW × kH × 128 (k = 0.25), are processed by 1 convolution followed by 3 residual modules. The image features are then upsampled to kW × kH × 64 (k = 0.5) by transpose convolution, followed again by 1 convolution and 3 residual modules, except that the input of the convolution is the current-scale image features concatenated along the channel dimension with the encoder features of the corresponding scale. Similarly, the image features are then upsampled by transpose convolution to kW × kH × 32 (k = 1), followed by 1 convolution and 3 residual modules, the convolution input again being the channel-wise concatenation of the current-scale features and the corresponding encoder features. A convolution operation then turns the kW × kH × 32 (k = 1) features into a W × H × 3 color image, which represents the network-learned difference between the blurred image and the sharp image; finally, the input image and this difference map are added to obtain the final deblurred image.
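One decoding stage of the above can be sketched as follows. The 2× transpose-conv upsampling, the channel-wise concatenation with the encoder skip features, and the final residual-image prediction follow the text; the transpose-conv kernel/stride parameters are assumptions, and the 3 follow-up residual modules are elided for brevity:

```python
import torch
import torch.nn as nn

class DecoderStage(nn.Module):
    """Transpose conv doubles the resolution; the result is concatenated
    with the same-scale encoder features and fused by one convolution
    (the 3 residual modules described in the text are elided here)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, 4, stride=2, padding=1)
        self.fuse = nn.Conv2d(2 * out_ch, out_ch, 3, padding=1)

    def forward(self, x, skip):
        x = self.up(x)                                  # upsample 2x
        return self.fuse(torch.cat([x, skip], dim=1))   # merge with skip

deep = torch.rand(1, 128, 16, 16)   # deepest features, k = 0.25
skip = torch.rand(1, 64, 32, 32)    # encoder features at k = 0.5
up = DecoderStage(128, 64)(deep, skip)

# At the full scale, a final conv maps the W x H x 32 features to a 3-channel
# difference image, which is added to the blurred input:
#   deblurred = blurred + to_rgb(features)     # residual prediction
```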
S3, defining a loss function, training the network based on the training sample set, and comprising the following substeps:
S3.1, the mean square error loss function L_mse is used to ensure the fidelity of the deblurring result, and the perceptual loss function L_p is used to improve the detail quality of the deblurring result; the two loss functions are defined as follows:

L_mse = (1 / (W·H)) Σ_{m,n} ‖I_c(m,n) − I_d(m,n)‖²

L_p = (1 / (W_{i,j}·H_{i,j})) Σ_{m,n} ‖φ_{i,j}(I_c)(m,n) − φ_{i,j}(I_d)(m,n)‖²

wherein I_c and I_d are the real sharp image and the deblurred image, respectively; m, n are the horizontal and vertical coordinate indices of the image; φ_{i,j} denotes the VGG19 feature map (weights pre-trained on ImageNet) at the j-th convolution before the i-th max-pooling layer; W_{i,j} and H_{i,j} are the width and height of that feature map, usually set to i = 3, j = 3.
S3.2, defining a total loss function,
L_total = L_mse + λ × L_p
where λ is the weight of the perceptual loss function, typically set to 0.01. Using the total loss function L_total, the network is trained until the entire network converges to optimal accuracy.
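The total loss above can be sketched in PyTorch. The L_mse + λ·L_p combination and λ = 0.01 follow the text; `feat_fn` stands in for a frozen, ImageNet-pretrained VGG19 truncated at the 3rd convolution before the 3rd max-pool (conv3_3), which the caller is assumed to supply:

```python
import torch
import torch.nn.functional as F

def total_loss(deblurred, sharp, feat_fn, lam=0.01):
    """L_total = L_mse + lam * L_p.

    feat_fn: assumed to be a frozen VGG19 feature extractor truncated at
    the conv3_3 activation (i = 3, j = 3 in the text)."""
    l_mse = F.mse_loss(deblurred, sharp)                    # pixel fidelity
    l_p = F.mse_loss(feat_fn(deblurred), feat_fn(sharp))    # perceptual term
    return l_mse + lam * l_p

a = torch.rand(1, 3, 32, 32)
loss = total_loss(a, a, feat_fn=lambda x: x)  # identity stand-in for VGG
print(float(loss))                            # 0.0 for identical images
```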
S4, the motion-blurred image is input into the network trained in S3 to obtain a more thoroughly deblurred result.
The present embodiment was applied to deblur part of the experimental data; example results are shown in fig. 3(a), fig. 3(b), fig. 4(a), fig. 4(b), fig. 4(c) and fig. 4(d). It can be seen that the present embodiment deblurs images with different degrees of blur stably and accurately.
It should be understood that parts of the specification not set forth in detail are well within the prior art.
Although specific embodiments of the present invention have been described above with reference to the accompanying drawings, it will be appreciated by those skilled in the art that these are merely illustrative and that various changes or modifications may be made to these embodiments without departing from the principles and spirit of the invention. The scope of the invention is only limited by the appended claims.

Claims (4)

1. A deep learning image motion blur removing method based on edges is characterized by comprising the following steps:
step 1, extracting edges from a blurred image by using a trained HED network, and then extracting edge characteristic information guiding a motion blur removing process by using a convolutional layer;
step 2, extracting multi-scale feature information from the blurred image by the deblurring backbone network, integrating image features and edge features on each scale by using a spatial feature transformation layer, and gradually recovering a potential clear image from the deepest image features by a decoding part;
step 3, taking blurred-sharp image pairs as a training sample set, defining the sum of a mean square error loss function and a perceptual loss function as the total loss function, and training the deblurring backbone network with the total loss function until the network converges to optimal accuracy;
and 4, inputting the motion blur image into the deblurred backbone network trained in the step 3 to obtain a deblurred result.
2. The edge-based deep learning image motion blur removal method as claimed in claim 1, wherein obtaining the edge feature information in step 1 comprises the following sub-steps:
step 1.1, obtaining the blurred image edge map; a color blurred image of size W×H×3 is input into an HED network loaded with pre-trained weights to obtain a W×H×1 edge map, wherein W is the width of the original image and H is the height of the original image;
step 1.2, mining deep-level feature information of the edge map; taking the edge map output in step 1.1 as input, high-level edge feature information is extracted from the blurred edges through a series of convolution and nonlinear activation operations: the first convolution has a 1×1 kernel, the subsequent four convolutions have 3×3 kernels, and the spatial resolution of the image is kept unchanged throughout; the nonlinear activation adopts a leaky rectified linear unit (LeakyReLU), and the final output is high-dimensional edge feature information of size W×H×128.
3. The edge-based deep learning image motion blur removal method as claimed in claim 2, wherein the implementation of step 2 comprises the following sub-steps:
2.1, extracting blurred image features; a color blurred image of size W×H×3 is input to convolution layers formed by convolution and nonlinear activation, wherein the encoding stage can be divided into 4 processing blocks, the feature map of each block having size kW×kH×l, W being the width of the original image and H its height; k = 1, 0.5, 0.25, 0.25 and l = 32/k;
step 2.2, integrating blurred image features and edge information at different scales; the edge features output in step 1.2 and the current-scale image features obtained in step 2.1 are integrated by a spatial feature transformation residual block, which comprises, in order, a spatial feature transformation layer, a convolution layer, a spatial feature transformation layer and a convolution layer;
step 2.3, further mining the characteristics of the blurred image; combining the hole convolutions with different hole rates, and increasing the receptive field so as to further mine characteristic information, wherein the characteristic information comprises 2 serial-hole convolution residual blocks and 1 parallel-hole convolution residual block;
and 2.4, gradually reconstructing the blurred image from the deep image characteristics.
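The spatial feature transform residual block of step 2.2 can be sketched as below: a small conditioning network predicts a per-pixel scale and shift from the edge features, which modulate the image features before each convolution, with a residual connection around the whole block. The channel counts (64 image-feature channels, 128 edge-feature channels) are assumptions not fixed by the claim.

```python
# Sketch of the spatial feature transform (SFT) residual block of step 2.2:
# SFT layer - conv - SFT layer - conv, wrapped in a residual connection.
# Channel counts are illustrative assumptions.
import torch
import torch.nn as nn

class SFTLayer(nn.Module):
    """Predicts per-pixel scale and shift from the edge-feature condition
    and modulates the image features: out = scale * x + shift."""
    def __init__(self, n_feat: int, n_cond: int):
        super().__init__()
        self.scale = nn.Conv2d(n_cond, n_feat, 3, padding=1)
        self.shift = nn.Conv2d(n_cond, n_feat, 3, padding=1)

    def forward(self, x, cond):
        return x * self.scale(cond) + self.shift(cond)

class SFTResBlock(nn.Module):
    def __init__(self, n_feat: int = 64, n_cond: int = 128):
        super().__init__()
        self.sft1 = SFTLayer(n_feat, n_cond)
        self.conv1 = nn.Conv2d(n_feat, n_feat, 3, padding=1)
        self.sft2 = SFTLayer(n_feat, n_cond)
        self.conv2 = nn.Conv2d(n_feat, n_feat, 3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x, cond):
        h = self.act(self.conv1(self.sft1(x, cond)))
        h = self.conv2(self.sft2(h, cond))
        return x + h  # residual connection

block = SFTResBlock()
out = block(torch.rand(1, 64, 32, 32), torch.rand(1, 128, 32, 32))
print(out.shape)  # torch.Size([1, 64, 32, 32])
```

The edge features must first be resampled to the spatial size of the current encoder scale before being passed as `cond`.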
4. The edge-based deep learning image motion blur removing method of claim 1, wherein the implementation of step 3 comprises the following sub-steps:
step 3.1, defining the mean square error loss function L_mse and the perceptual loss function L_p respectively:
L_{mse} = \frac{1}{W \times H} \sum_{m=1}^{W} \sum_{n=1}^{H} \left( I_c(m,n) - I_d(m,n) \right)^2

L_p = \frac{1}{W_{i,j} \times H_{i,j}} \sum_{m=1}^{W_{i,j}} \sum_{n=1}^{H_{i,j}} \left( \phi_{i,j}(I_c)(m,n) - \phi_{i,j}(I_d)(m,n) \right)^2
wherein I_c and I_d are the real sharp image and the deblurred image, respectively; m, n are the horizontal and vertical coordinate indices of the image; φ_{i,j} denotes the feature map of a VGG19 network with weights pre-trained on ImageNet, taken at the j-th convolution before the i-th max-pooling layer; W_{i,j} and H_{i,j} are the width and height of that feature map; typically i = 3 and j = 3;
step 3.2, defining the total loss function:
L_total = L_mse + λ × L_p
wherein λ is the weight of the perceptual loss function and is set to 0.01;
using the total loss function L_total to train the network until the entire network converges to its best accuracy.
CN201911275632.8A 2019-12-12 2019-12-12 Edge-based deep learning image motion blur removing method Active CN111028177B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911275632.8A CN111028177B (en) 2019-12-12 2019-12-12 Edge-based deep learning image motion blur removing method


Publications (2)

Publication Number Publication Date
CN111028177A true CN111028177A (en) 2020-04-17
CN111028177B (en) 2023-07-21

Family

ID=70206421

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911275632.8A Active CN111028177B (en) 2019-12-12 2019-12-12 Edge-based deep learning image motion blur removing method

Country Status (1)

Country Link
CN (1) CN111028177B (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111583143A (en) * 2020-04-30 2020-08-25 广州大学 Complex image deblurring method
CN111695421A (en) * 2020-04-30 2020-09-22 北京迈格威科技有限公司 Image recognition method and device and electronic equipment
CN111815536A (en) * 2020-07-15 2020-10-23 电子科技大学 Motion blur restoration method based on contour enhancement strategy
CN111986102A (en) * 2020-07-15 2020-11-24 万达信息股份有限公司 Digital pathological image deblurring method
CN112465730A (en) * 2020-12-18 2021-03-09 辽宁石油化工大学 Motion video deblurring method
CN112488946A (en) * 2020-12-03 2021-03-12 重庆邮电大学 Single-scale motion blur image frame restoration method for cab environment
CN112634153A (en) * 2020-12-17 2021-04-09 中山大学 Image deblurring method based on edge enhancement
CN112767277A (en) * 2021-01-27 2021-05-07 同济大学 Depth feature sequencing deblurring method based on reference image
CN112991194A (en) * 2021-01-29 2021-06-18 电子科技大学 Infrared thermal wave image deblurring method based on depth residual error network
CN113191984A (en) * 2021-05-24 2021-07-30 清华大学深圳国际研究生院 Depth learning-based motion blurred image joint restoration and classification method and system
CN113205464A (en) * 2021-04-30 2021-08-03 作业帮教育科技(北京)有限公司 Image deblurring model generation method, image deblurring method and electronic equipment
CN114187191A (en) * 2021-11-20 2022-03-15 西北工业大学 Image deblurring method based on high-frequency-low-frequency information fusion
CN114359082A (en) * 2021-12-24 2022-04-15 复旦大学 Gastroscope image deblurring algorithm based on self-built data pair
CN114549361A (en) * 2022-02-28 2022-05-27 齐齐哈尔大学 Improved U-Net model-based image motion blur removing method
CN114998156A (en) * 2022-06-30 2022-09-02 同济大学 Image motion deblurring method based on multi-patch multi-scale network
CN117593591A (en) * 2024-01-16 2024-02-23 天津医科大学总医院 Tongue picture classification method based on medical image segmentation

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103198319A (en) * 2013-04-11 2013-07-10 武汉大学 Method of extraction of corner of blurred image in mine shaft environment
CN103761710A (en) * 2014-01-08 2014-04-30 西安电子科技大学 Image blind deblurring method based on edge self-adaption
US20140140626A1 (en) * 2012-11-19 2014-05-22 Adobe Systems Incorporated Edge Direction and Curve Based Image De-Blurring
US20150131898A1 (en) * 2013-11-12 2015-05-14 Microsoft Corporation Blind image deblurring with cascade architecture
WO2018045602A1 (en) * 2016-09-07 2018-03-15 华中科技大学 Blur kernel size estimation method and system based on deep learning
CN107871310A (en) * 2017-10-26 2018-04-03 武汉大学 A kind of single image for being become more meticulous based on fuzzy core is blind to go motion blur method
US20180150684A1 (en) * 2016-11-30 2018-05-31 Shenzhen AltumView Technology Co., Ltd. Age and gender estimation using small-scale convolutional neural network (cnn) modules for embedded systems
US20180173994A1 (en) * 2016-12-15 2018-06-21 WaveOne Inc. Enhanced coding efficiency with progressive representation
CN109035149A (en) * 2018-03-13 2018-12-18 杭州电子科技大学 A kind of license plate image based on deep learning goes motion blur method
CN109087256A (en) * 2018-07-19 2018-12-25 北京飞搜科技有限公司 A kind of image deblurring method and system based on deep learning
CN110033415A (en) * 2019-03-20 2019-07-19 东南大学 A kind of image deblurring method based on Retinex algorithm
CN110060215A (en) * 2019-04-16 2019-07-26 深圳市商汤科技有限公司 Image processing method and device, electronic equipment and storage medium
CN110378844A (en) * 2019-06-14 2019-10-25 杭州电子科技大学 Motion blur method is gone based on the multiple dimensioned Image Blind for generating confrontation network is recycled


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Xiangyu Xu et al.: "Motion Blur Kernel Estimation via Deep Learning", IEEE Transactions on Image Processing *
Ren Jingjing; Fang Xianyong; Chen Shangwen; Wang Linbo; Zhou Jian: "Image deblurring based on fast convolutional neural networks", Journal of Computer-Aided Design & Computer Graphics, no. 08 *
Wu Congzhong; Chen Xi; Ji Dong; Zhan Shu: "Image denoising combining deep residual learning and perceptual loss", Journal of Image and Graphics, no. 10 *
Mao Yong et al.: "Deep-learning-based motion deblurring technique for license plate images", pages 29 *

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111695421A (en) * 2020-04-30 2020-09-22 北京迈格威科技有限公司 Image recognition method and device and electronic equipment
CN111583143A (en) * 2020-04-30 2020-08-25 广州大学 Complex image deblurring method
CN111695421B (en) * 2020-04-30 2023-09-22 北京迈格威科技有限公司 Image recognition method and device and electronic equipment
CN111986102B (en) * 2020-07-15 2024-02-27 万达信息股份有限公司 Digital pathological image deblurring method
CN111815536A (en) * 2020-07-15 2020-10-23 电子科技大学 Motion blur restoration method based on contour enhancement strategy
CN111986102A (en) * 2020-07-15 2020-11-24 万达信息股份有限公司 Digital pathological image deblurring method
CN111815536B (en) * 2020-07-15 2022-10-14 电子科技大学 Motion blur restoration method based on contour enhancement strategy
CN112488946A (en) * 2020-12-03 2021-03-12 重庆邮电大学 Single-scale motion blur image frame restoration method for cab environment
CN112488946B (en) * 2020-12-03 2024-04-09 重庆邮电大学 Single-scale motion blurred image frame restoration method for cab environment
CN112634153A (en) * 2020-12-17 2021-04-09 中山大学 Image deblurring method based on edge enhancement
CN112634153B (en) * 2020-12-17 2023-10-20 中山大学 Image deblurring method based on edge enhancement
CN112465730A (en) * 2020-12-18 2021-03-09 辽宁石油化工大学 Motion video deblurring method
CN112767277B (en) * 2021-01-27 2022-06-07 同济大学 Depth feature sequencing deblurring method based on reference image
CN112767277A (en) * 2021-01-27 2021-05-07 同济大学 Depth feature sequencing deblurring method based on reference image
CN112991194A (en) * 2021-01-29 2021-06-18 电子科技大学 Infrared thermal wave image deblurring method based on depth residual error network
CN113205464A (en) * 2021-04-30 2021-08-03 作业帮教育科技(北京)有限公司 Image deblurring model generation method, image deblurring method and electronic equipment
CN113191984B (en) * 2021-05-24 2023-04-18 清华大学深圳国际研究生院 Deep learning-based motion blurred image joint restoration and classification method and system
CN113191984A (en) * 2021-05-24 2021-07-30 清华大学深圳国际研究生院 Depth learning-based motion blurred image joint restoration and classification method and system
CN114187191A (en) * 2021-11-20 2022-03-15 西北工业大学 Image deblurring method based on high-frequency-low-frequency information fusion
CN114187191B (en) * 2021-11-20 2024-02-27 西北工业大学 Image deblurring method based on high-frequency-low-frequency information fusion
CN114359082B (en) * 2021-12-24 2023-01-06 复旦大学 Gastroscope image deblurring algorithm based on self-built data pair
CN114359082A (en) * 2021-12-24 2022-04-15 复旦大学 Gastroscope image deblurring algorithm based on self-built data pair
CN114549361B (en) * 2022-02-28 2023-06-30 齐齐哈尔大学 Image motion blur removing method based on improved U-Net model
CN114549361A (en) * 2022-02-28 2022-05-27 齐齐哈尔大学 Improved U-Net model-based image motion blur removing method
CN114998156A (en) * 2022-06-30 2022-09-02 同济大学 Image motion deblurring method based on multi-patch multi-scale network
CN114998156B (en) * 2022-06-30 2023-06-20 同济大学 Image motion deblurring method based on multi-patch multi-scale network
CN117593591A (en) * 2024-01-16 2024-02-23 天津医科大学总医院 Tongue picture classification method based on medical image segmentation

Also Published As

Publication number Publication date
CN111028177B (en) 2023-07-21

Similar Documents

Publication Publication Date Title
CN111028177B (en) Edge-based deep learning image motion blur removing method
CN112233038B (en) True image denoising method based on multi-scale fusion and edge enhancement
Tian et al. Deep learning on image denoising: An overview
CN108921799B (en) Remote sensing image thin cloud removing method based on multi-scale collaborative learning convolutional neural network
Chang et al. HSI-DeNet: Hyperspectral image restoration via convolutional neural network
CN111915530B (en) End-to-end-based haze concentration self-adaptive neural network image defogging method
US20240062530A1 (en) Deep perceptual image enhancement
CN112257766B (en) Shadow recognition detection method in natural scene based on frequency domain filtering processing
CN112241939B (en) Multi-scale and non-local-based light rain removal method
CN114463218B (en) Video deblurring method based on event data driving
CN114331886A (en) Image deblurring method based on depth features
Fang et al. High-resolution optical flow and frame-recurrent network for video super-resolution and deblurring
Zhao et al. Deep pyramid generative adversarial network with local and nonlocal similarity features for natural motion image deblurring
Zhao et al. Skip-connected deep convolutional autoencoder for restoration of document images
Zhou et al. Sparse representation with enhanced nonlocal self-similarity for image denoising
Huang et al. Deep gaussian scale mixture prior for image reconstruction
Chen et al. Attention-based Broad Self-guided Network for Low-light Image Enhancement
Guo et al. Image blind deblurring using an adaptive patch prior
CN114862711B (en) Low-illumination image enhancement and denoising method based on dual complementary prior constraints
Ren et al. Enhanced latent space blind model for real image denoising via alternative optimization
CN110648291B (en) Unmanned aerial vehicle motion blurred image restoration method based on deep learning
CN113902647A (en) Image deblurring method based on double closed-loop network
Han et al. MPDNet: An underwater image deblurring framework with stepwise feature refinement module
Fazlali et al. Atmospheric turbulence removal in long-range imaging using a data-driven-based approach
Wen et al. Patch-wise blind image deblurring via Michelson channel prior

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant