CN117670727A - Image deblurring model and method based on residual intensive U-shaped network - Google Patents


Info

Publication number
CN117670727A
Authority
CN
China
Prior art keywords
convolution
network
residual
image
gradient
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410129316.4A
Other languages
Chinese (zh)
Other versions
CN117670727B (en)
Inventor
喻春雨
张俊
韩鼎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications filed Critical Nanjing University of Posts and Telecommunications
Priority to CN202410129316.4A priority Critical patent/CN117670727B/en
Publication of CN117670727A publication Critical patent/CN117670727A/en
Application granted granted Critical
Publication of CN117670727B publication Critical patent/CN117670727B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The invention provides an image deblurring model and method based on a residual dense U-shaped network. A gradient residual dense hybrid U-shaped sub-network is applied in the first stage to coarsely acquire the multi-resolution features of a blurred image and to extract its edge information; a gradient residual dense multi-scale convolution attention U-shaped sub-network is applied in the second, third and fourth stages to obtain finer multi-resolution features of the blurred image; and a feature supervision attention module is introduced to connect adjacent stages and perform feature fusion. The image deblurring model and method based on the residual dense U-shaped network make network training more stable, greatly reduce the loss of feature-map information, and thereby restore the blurred image.

Description

Image deblurring model and method based on residual intensive U-shaped network
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to an image deblurring model and method based on a residual dense U-shaped network.
Background
At present, image deblurring technology based on deep learning is developing steadily. It mainly relies on convolutional neural networks, and the general procedure for deblurring an image with a convolutional neural network is as follows. First, a set of blurred images and their corresponding sharp images is prepared as a dataset and preprocessed, for example by cropping, scaling and rotation, to increase the diversity and richness of the data; the processed dataset is split into a training set and a test set. Second, a convolutional neural network suited to image deblurring is designed, typically comprising convolution layers, deconvolution layers, residual blocks, a generator network and a discriminator network. The convolution layers perform convolution operations on the input image to extract its features; the deconvolution layers recover low-resolution feature maps to high-resolution images, realizing upsampling; the residual blocks address the vanishing-gradient problem in deep network training; the generator network maps a blurred image to a sharp image; and the discriminator network distinguishes the generated sharp image from a real sharp image. The network is then trained on the prepared blurred dataset, and its parameters are continuously adjusted with optimization algorithms such as back-propagation and gradient descent so that performance gradually improves. Finally, the model is tested on the test set, and its deblurring performance is evaluated on the processed images with metrics such as Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM) and Root Mean Square Error (RMSE).
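To make the training and evaluation workflow above concrete, the following minimal PyTorch sketch shows a generic training step with an L1 restoration loss and a PSNR metric. It is an illustrative stand-in, not the architecture or training scheme claimed by this patent; the function names, the L1 loss and the optimizer handling are assumptions.

```python
import torch
import torch.nn.functional as F

def psnr(pred: torch.Tensor, target: torch.Tensor, max_val: float = 1.0) -> torch.Tensor:
    """Peak Signal-to-Noise Ratio between a restored image and its sharp reference."""
    mse = F.mse_loss(pred, target)
    return 10.0 * torch.log10(max_val ** 2 / mse)

def train_one_epoch(model, loader, optimizer, device="cpu"):
    """One pass over (blurred, sharp) image pairs with a pixel-wise L1 restoration loss."""
    model.train()
    for blurred, sharp in loader:
        blurred, sharp = blurred.to(device), sharp.to(device)
        restored = model(blurred)
        loss = F.l1_loss(restored, sharp)
        optimizer.zero_grad()
        loss.backward()      # back-propagation
        optimizer.step()     # gradient-based parameter update
    return model
```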
Deep learning methods have achieved success in image deblurring, but most of them share a common shortcoming: the detail features of the blurred image are insufficiently extracted and its edge features are severely lost, so the restored image remains blurred in its details.
In view of this, there is a need to propose an image deblurring model based on a residual dense U-shaped network to solve the above-mentioned problems.
Disclosure of Invention
The invention aims to provide an image deblurring model based on a residual dense U-shaped network which can restore a blurred image.
In order to achieve the above purpose, the present invention adopts the following technical scheme, including:
a gradient residual dense hybrid U-shaped sub-network, applied in the first stage, for coarsely obtaining multi-resolution features of a blurred image;
a gradient residual dense multi-scale convolution attention U-shaped sub-network, applied in the second, third and fourth stages, for obtaining finer multi-resolution features of the blurred image;
and a feature supervision attention module SAM that fuses the multi-scale feature information of different stages.
As a further improvement of the present invention, the gradient residual dense hybrid U-shaped sub-network comprises:
a gradient residual dense hybrid encoding network, the gradient residual dense hybrid encoding network comprising: three channel attention modules that highlight the important features of the blurred blocks; three groups of dense residual hybrid U-shaped modules GRDB_RSU4Mix Block×L1; and two downsampling modules, each of which applies an upsampling function with a scale factor of 0.5 followed by a 1×1 convolution layer, so that each downsampling operation reduces the feature scale to 1/2 of its original size and increases the number of channels by ex_factor, with ex_factor set to 32;
a gradient residual dense hybrid decoding network, the gradient residual dense hybrid decoding network comprising: three channel attention modules that highlight the important features of the blurred blocks; three groups of dense residual hybrid U-shaped modules GRDB_RSU4Mix Block×L1; and two groups of upsampling modules adopting two groups of convolution layers with 1×1 convolution kernels;
and a skip connection module adopting a group of channel attention modules, through which the feature information extracted by the gradient residual dense hybrid encoding network is sent into the gradient residual dense hybrid decoding network.
As a further improvement of the invention, the dense residual hybrid U-shaped module GRDB_RSU4Mix Block×L1 comprises:
a dense residual module GRDB Block, in which two groups of convolution layers with 3×3 convolution kernels and two GELU activation layers are densely connected; finally, a Sobel gradient operation is added in the residual branch to compute the gradient magnitude of the features, and a 1×1 convolution layer in the skip connection eliminates the difference between the input and output channel dimensions (illustrated in the sketch below);
a residual hybrid U-shaped module RSU4Mix Block, the residual hybrid U-shaped module RSU4Mix Block comprising:
an input depth convolution layer comprising a set of depth convolution layers with 3×3 convolution kernels, which converts the input feature into an intermediate input feature with 64 channels;
a depth convolution encoding sub-network comprising three groups of depth convolution layers with 3×3 convolution kernels and two groups of max-pooling layers that perform the downsampling operation; the encoded features are output after passing through this sub-network;
a depth convolution decoding sub-network comprising three groups of depth convolution layers with 3×3 convolution kernels, two groups of feature mixing modules Mix Block and three groups of residual connection modules; the depth convolution decoding sub-network fuses local features and multi-scale features;
and a cascaded depth convolution module comprising a set of depth convolution layers with 3×3 convolution kernels, which transmits the features extracted by the depth convolution encoding sub-network to the depth convolution decoding sub-network.
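A sketch of the dense residual module GRDB Block referenced above. The densely connected 3×3 convolution + GELU stages, the Sobel gradient operation on the residual branch and the 1×1 convolution skip come from the text; the growth width, the local fusion convolution and the exact position of the 1×1 convolution relative to the Sobel branch are assumptions made to keep the block self-contained.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SobelGradient(nn.Module):
    """Fixed Sobel filters returning the per-channel gradient magnitude."""
    def __init__(self):
        super().__init__()
        gx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        self.register_buffer("kx", gx.reshape(1, 1, 3, 3))
        self.register_buffer("ky", gx.t().reshape(1, 1, 3, 3))

    def forward(self, x):
        c = x.shape[1]
        gx = F.conv2d(x, self.kx.expand(c, 1, 3, 3), padding=1, groups=c)
        gy = F.conv2d(x, self.ky.expand(c, 1, 3, 3), padding=1, groups=c)
        return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

class GRDBBlock(nn.Module):
    """Densely connected 3x3 conv + GELU stages with a Sobel-gradient residual branch."""
    def __init__(self, in_ch: int, growth: int = 32):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv2d(in_ch, growth, 3, padding=1), nn.GELU())
        self.conv2 = nn.Sequential(nn.Conv2d(in_ch + growth, growth, 3, padding=1), nn.GELU())
        self.fuse = nn.Conv2d(in_ch + 2 * growth, in_ch, 1)   # local feature fusion (assumed)
        self.sobel = SobelGradient()
        self.skip = nn.Conv2d(in_ch, in_ch, 1)                # 1x1 conv on the gradient skip path

    def forward(self, x):
        f1 = self.conv1(x)
        f2 = self.conv2(torch.cat([x, f1], dim=1))            # dense connection
        out = self.fuse(torch.cat([x, f1, f2], dim=1))
        return out + self.skip(self.sobel(x))                 # gradient residual branch
```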
As a further improvement of the present invention, the residual hybrid U-shaped module RSU4Mix Block comprises:
an input depth convolution layer comprising a set of depth convolution layers with 3×3 convolution kernels, which converts the input feature into an intermediate input feature with 64 channels;
a depth convolution encoding sub-network comprising three groups of depth convolution layers with 3×3 convolution kernels and two groups of max-pooling layers that perform the downsampling operation; the encoded features are output after passing through this sub-network;
a depth convolution decoding sub-network comprising three groups of depth convolution layers with 3×3 convolution kernels, two groups of feature mixing modules Mix Block and three groups of residual connection modules; the depth convolution decoding sub-network fuses local features and multi-scale features;
and a cascaded depth convolution module comprising a set of depth convolution layers with 3×3 convolution kernels, which transmits the features extracted by the depth convolution encoding sub-network to the depth convolution decoding sub-network.
As a further improvement of the present invention, the operation of the feature mixing module Mix Block is as follows:
the feature information processed by a downsampling layer and the information of the next upsampling layer are adaptively mixed so that the information of the two layers is fused, and the fused features are transmitted to the next upsampling layer. The adaptive mixing operation can be formulated as:

$$F_{up}^{\,i+1} = \lambda \cdot F_{down}^{\,i-1} + (1 - \lambda) \cdot F_{up}^{\,i}$$

where $F_{down}^{\,i-1}$ and $F_{up}^{\,i}$ are the feature map from the (i-1)-th downsampling layer and the feature map of the i-th upsampling layer respectively, $F_{up}^{\,i+1}$ denotes the feature map passed to the (i+1)-th upsampling layer, i is 2 or 3, and $\lambda$ is the adaptive mixing factor used to fuse the (i-1)-th downsampled feature map and the i-th upsampled feature map; $\lambda$ is determined by the Sigmoid operator, which keeps it in the range (0, 1).
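A minimal sketch of the adaptive mixing just formulated, assuming the mixing factor λ is the sigmoid of a learnable scalar (one plausible reading of "determined by the Sigmoid operator"); the fused map is then always a convex combination of the encoder and decoder features.

```python
import torch
import torch.nn as nn

class MixBlock(nn.Module):
    """Adaptively blend a downsampling-path feature map with an upsampling-path one."""
    def __init__(self):
        super().__init__()
        # Learnable scalar; its sigmoid is the mixing factor, so it stays in (0, 1).
        self.theta = nn.Parameter(torch.zeros(1))

    def forward(self, f_down, f_up):
        lam = torch.sigmoid(self.theta)            # adaptive mixing factor
        return lam * f_down + (1.0 - lam) * f_up   # fused map passed to the next upsampling layer

# Minimal usage
mix = MixBlock()
f_down = torch.randn(1, 64, 32, 32)   # feature map from the (i-1)-th downsampling layer
f_up = torch.randn(1, 64, 32, 32)     # feature map of the i-th upsampling layer
fused = mix(f_down, f_up)             # same shape as the inputs
```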
As a further refinement of the invention, the gradient residual dense multi-scale convolution attention U-shaped sub-network comprises:
a gradient residual dense multi-scale convolution attention encoding network, the gradient residual dense multi-scale convolution attention encoding network comprising: three channel attention modules that highlight the important features of the blurred blocks; three groups of gradient residual dense multi-scale convolution attention modules GRDB_MSCA Block×L1; and two downsampling modules, each of which applies an upsampling function with a scale factor of 0.5 followed by a 1×1 convolution layer, so that each downsampling operation reduces the feature scale to 1/2 of its original size and increases the number of channels by ex_factor, with ex_factor set to 32;
a gradient residual dense multi-scale convolution attention decoding network;
and a skip connection module adopting a group of channel attention modules, through which the feature information extracted by the gradient residual dense multi-scale convolution attention encoding network is sent into the gradient residual dense multi-scale convolution attention decoding network.
As a further improvement of the invention, the gradient residual dense multi-scale convolution attention module GRDB_MSCA Block×L1 comprises:
a dense residual module GRDB Block, in which two groups of convolution layers with 3×3 convolution kernels and two GELU activation layers are densely connected; finally, a Sobel gradient operation is added in the residual branch to compute the gradient magnitude of the features, and a 1×1 convolution layer in the skip connection eliminates the difference between the input and output channel dimensions;
a multi-scale convolution attention module MSCA Block.
As a further improvement of the present invention, the multi-scale convolution attention module MSCA Block comprises:
a first layer comprising a set of 5×5 depthwise separable convolutions, which aggregates image information;
a second layer comprising a set of 7×7 ordinary convolution layers, a set of 11×11 ordinary convolution layers and a set of 21×21 ordinary convolution layers;
a third layer comprising a set of 7×7 depth over-parameterized convolution layers, a set of 11×11 depth over-parameterized convolution layers and a set of 21×21 depth over-parameterized convolution layers, which extracts deep feature information;
a fourth layer comprising a set of 1×1 depth over-parameterized convolution layers, which models the relationship between channels.
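A sketch of the MSCA Block layer stack under stated assumptions: the three kernel sizes are arranged as parallel branches whose outputs are summed, which is consistent with the formula given in the next paragraph, and both the ordinary and the depth over-parameterized convolutions are approximated by plain depthwise convolutions to keep the 11×11 and 21×21 kernels affordable; a faithful DO-Conv implementation would replace them in practice.

```python
import torch
import torch.nn as nn

class MSCABlock(nn.Module):
    """Multi-scale convolution attention: aggregate context at several kernel sizes,
    turn it into an attention map with a 1x1 conv, and re-weight the input feature."""
    def __init__(self, ch: int):
        super().__init__()
        # Layer 1: 5x5 depthwise-separable conv aggregating local image information.
        self.local = nn.Sequential(
            nn.Conv2d(ch, ch, 5, padding=2, groups=ch),   # depthwise
            nn.Conv2d(ch, ch, 1),                         # pointwise
        )
        # Layers 2-3: one branch per kernel size (7, 11, 21); depthwise stand-ins
        # for the ordinary and depth over-parameterized convolutions of the text.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(ch, ch, k, padding=k // 2, groups=ch),
                nn.Conv2d(ch, ch, k, padding=k // 2, groups=ch),
            )
            for k in (7, 11, 21)
        ])
        # Layer 4: 1x1 conv modelling the relationship between channels.
        self.channel_mix = nn.Conv2d(ch, ch, 1)

    def forward(self, x):
        base = self.local(x)
        att = base + sum(branch(base) for branch in self.branches)
        att = self.channel_mix(att)   # attention map
        return att * x                # element-wise re-weighting of the input feature

# msca = MSCABlock(64); msca(torch.randn(1, 64, 32, 32)) keeps the input shape.
```

The final element-wise product mirrors the Out = Att ⊗ F relation formulated in the next paragraph.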
As a further improvement of the present invention, the steps of extracting features by the multi-scale convolution attention module are as follows:

$$\mathrm{Att} = \mathrm{DOConv}_{1\times 1}\Big(\sum_{i=0}^{3}\mathrm{Scale}_i\big(\mathrm{DWConv}(F)\big)\Big), \qquad \mathrm{Out} = \mathrm{Att} \otimes F$$

where Out, Att and F denote the output, the attention map and the input feature respectively; ⊗ denotes element-wise matrix multiplication; DOConv denotes the depth over-parameterized convolution; DWConv consists of two convolution layers, the first being a depthwise convolution (Depthwise Conv) in which each convolution kernel corresponds to one output channel and the second a pointwise convolution (Pointwise Conv); and Scale_i, i ∈ {0, 1, 2, 3}, denotes the i-th multi-scale branch used to combine the inputs.
It is another object of the present invention to provide an image deblurring method based on a residual dense U-shaped network which can restore a blurred image.
In order to achieve the above purpose, the present invention adopts the following technical scheme, including:
in the first stage, dividing a blurred image into eight blurred blocks of equal size, sending the eight blurred blocks into the gradient residual dense hybrid U-shaped sub-network, coarsely extracting the multi-resolution features of the blurred image and extracting its edge information, splicing the processed image blocks in pairs, and inputting them into the second stage through a feature supervision attention module SAM;
in the second stage, dividing the blurred image into four blurred blocks of equal size, performing feature fusion with the image blocks processed in the first stage, then sending them into a gradient residual dense multi-scale convolution attention U-shaped sub-network to obtain finer multi-resolution features of the blurred image, splicing the processed image blocks in pairs, and inputting them into the third stage through the feature supervision attention module SAM;
in the third stage, dividing the blurred image into two blurred blocks of equal size, performing feature fusion with the image blocks processed in the second stage, then sending them into the gradient residual dense multi-scale convolution attention U-shaped sub-network to obtain finer multi-resolution features of the blurred image, splicing the processed image blocks in pairs, and inputting them into the fourth stage through the feature supervision attention module SAM;
and in the fourth stage, after performing feature fusion between the complete image and the image block processed in the third stage, obtaining the final sharp image through the gradient residual dense multi-scale convolution attention U-shaped sub-network.
The beneficial effects of the invention are as follows:
the invention provides four-stage progressive extraction of characteristic information of a blurred image, introduces a gradient residual error intensive mixing U-shaped subnetwork, roughly acquires multi-resolution characteristics of the blurred image, and extracts edge information of the blurred image. Meanwhile, a gradient residual dense multi-scale convolution attention U-shaped sub-network is introduced for obtaining the multi-resolution characteristic of a finer blurred image. The fuzzy image is processed from coarse to fine, so that the detail characteristics of the fuzzy image are more effectively saved, and the recovery work of the fuzzy image is more facilitated.
Drawings
FIG. 1 is a block diagram of an image deblurring model based on a multi-stage progressive residual dense U-type network and an attention mechanism.
Fig. 2 is a diagram of gradient residual dense hybrid U-shaped subnetwork structure in an image deblurring model based on a multi-stage progressive residual dense U-shaped network and an attention mechanism.
FIG. 3 is a structure diagram of the dense residual module GRDB Block in the dense residual hybrid U-shaped module (GRDB_RSU4Mix Block×L1).
FIG. 4 is a structure diagram of the residual hybrid U-shaped module RSU4Mix Block in the dense residual hybrid U-shaped module (GRDB_RSU4Mix Block×L1).
Fig. 5 is a diagram of a gradient residual dense multi-scale convolution attention U-shaped sub-network structure in an image deblurring model based on a multi-stage progressive residual dense U-shaped network and an attention mechanism.
FIG. 6 is a structure diagram of the multi-scale convolution attention module (MSCA Block) in the gradient residual dense multi-scale convolution attention module (GRDB_MSCA Block×L1).
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in detail with reference to the accompanying drawings and specific embodiments.
It should be noted that, in order to avoid obscuring the present invention due to unnecessary details, only structures and/or processing steps closely related to aspects of the present invention are shown in the drawings, and other details not greatly related to the present invention are omitted.
For the purpose of promoting an understanding of the principles and advantages of the invention, reference will now be made to the drawings and specific examples.
Referring to FIG. 1, the present invention provides an image deblurring model based on a multi-stage progressive residual dense U-shaped network and an attention mechanism, which preserves more edge features while deblurring the image. The image deblurring model comprises:
referring to fig. 2, the gradient residual dense hybrid U-shaped subnetwork comprises:
a gradient residual dense hybrid coding network;
the jump connection module adopts a group of channel attention modules, and the characteristic information extracted by the coding network is sent to the decoding network through the module;
gradient residual dense hybrid decoding network.
The gradient residual dense mixed U-shaped sub-network is used for roughly extracting multi-resolution characteristics of the blurred image, extracting edge information of the blurred image, splicing the processed image blocks in pairs, and inputting the processed image blocks into the second stage through a characteristic supervision attention module SAM.
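Three auxiliary components recur in the description above: the channel attention modules used on the encoder, decoder and skip paths, the downsampling modules built from an interpolation with scale factor 0.5 plus a 1×1 convolution, and the feature supervision attention module SAM that hands features from one stage to the next. Their internals are not fully specified in this text, so the following sketch is an assumption-laden stand-in: the channel attention is squeeze-and-excitation style, "increased by ex_factor" is read as adding ex_factor channels, and the SAM follows the supervised-attention design common in multi-stage restoration networks rather than this patent's exact module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style gate that highlights important feature channels."""
    def __init__(self, ch: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.GELU(),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)   # re-weight channels

class DownSample(nn.Module):
    """Halve the spatial size by interpolation, then widen the channels with a 1x1 conv."""
    def __init__(self, in_ch: int, ex_factor: int = 32):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, in_ch + ex_factor, kernel_size=1)  # assumed "+ ex_factor" channels

    def forward(self, x):
        x = F.interpolate(x, scale_factor=0.5, mode="bilinear", align_corners=False)
        return self.proj(x)

class SAM(nn.Module):
    """Supervised attention module (assumed design): emit a stage-wise restored image
    and use it to gate the features handed to the next stage."""
    def __init__(self, ch: int, img_ch: int = 3):
        super().__init__()
        self.to_feat = nn.Conv2d(ch, ch, 3, padding=1)
        self.to_img = nn.Conv2d(ch, img_ch, 3, padding=1)
        self.to_attn = nn.Conv2d(img_ch, ch, 3, padding=1)

    def forward(self, feat, img_in):
        restored = self.to_img(feat) + img_in          # per-stage restored image (can be supervised)
        attn = torch.sigmoid(self.to_attn(restored))   # attention map derived from that image
        out = self.to_feat(feat) * attn + feat         # re-weighted features for the next stage
        return out, restored
```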
Referring to FIG. 3, the dense residual module GRDB Block of the dense residual hybrid U-shaped module (GRDB_RSU4Mix Block×L1) comprises:
two groups of convolution layers with 3×3 convolution kernels and two GELU activation layers that are densely connected; finally, a Sobel gradient operation is added to the residual branch to compute the gradient magnitude of the features, and a 1×1 convolution layer in the skip connection eliminates the difference between the input and output channel dimensions.
Referring to FIG. 4, the residual hybrid U-shaped module RSU4Mix Block of the dense residual hybrid U-shaped module (GRDB_RSU4Mix Block×L1) comprises:
an input depth convolution layer comprising a set of depth convolution layers with 3×3 convolution kernels, which converts the input feature into an intermediate input feature with 64 channels;
a depth convolution encoding sub-network comprising three groups of depth convolution layers with 3×3 convolution kernels and two groups of max-pooling layers that perform the downsampling operation; the encoded features are output after passing through this sub-network;
a cascaded depth convolution module comprising a set of depth convolution layers with 3×3 convolution kernels, which transmits the features extracted by the encoding sub-network to the decoding sub-network;
a depth convolution decoding sub-network comprising three groups of depth convolution layers with 3×3 convolution kernels, two groups of feature mixing modules Mix Block and three groups of residual connection modules used to fuse local features and multi-scale features.
Referring to FIG. 5, the gradient residual dense multi-scale convolution attention U-shaped sub-network comprises:
a gradient residual dense multi-scale convolution attention encoding network;
a skip connection module, which adopts a group of channel attention modules through which the feature information extracted by the encoding network is sent to the decoding network;
a gradient residual dense multi-scale convolution attention decoding network.
Referring to FIG. 6, the multi-scale convolution attention module (MSCA Block) of the gradient residual dense multi-scale convolution attention module (GRDB_MSCA Block×L1) comprises:
a first layer comprising a set of 5×5 depthwise separable convolutions that effectively gather image information;
a second layer comprising a set of 7×7 ordinary convolution layers, a set of 11×11 ordinary convolution layers and a set of 21×21 ordinary convolution layers;
a third layer comprising a set of 7×7 depth over-parameterized convolution layers, a set of 11×11 depth over-parameterized convolution layers and a set of 21×21 depth over-parameterized convolution layers, which extracts deep feature information;
a fourth layer comprising a set of 1×1 depth over-parameterized convolution layers, which models the relationship between channels.
In summary, the invention provides an image deblurring model and method based on a residual dense U-shaped network: a gradient residual dense hybrid U-shaped sub-network is introduced in the first stage, a gradient residual dense multi-scale convolution attention U-shaped sub-network is introduced in the second, third and fourth stages, and a feature supervision attention module is connected between every two adjacent stages for feature fusion. This makes network training more stable, greatly reduces the loss of feature-map information, and thereby restores the blurred image.
The above embodiments are only for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made thereto without departing from the spirit and scope of the technical solution of the present invention.

Claims (10)

1. An image deblurring model based on a residual dense U-shaped network, comprising:
a gradient residual dense hybrid U-shaped sub-network, applied in the first stage, for coarsely obtaining multi-resolution features of a blurred image;
a gradient residual dense multi-scale convolution attention U-shaped sub-network, applied in the second, third and fourth stages, for obtaining finer multi-resolution features of the blurred image;
and a feature supervision attention module SAM that fuses the multi-scale feature information of different stages.
2. The image deblurring model based on a residual dense U-shaped network of claim 1, wherein said gradient residual dense hybrid U-shaped sub-network comprises:
a gradient residual dense hybrid encoding network, the gradient residual dense hybrid encoding network comprising: three channel attention modules that highlight the important features of the blurred blocks; three groups of dense residual hybrid U-shaped modules GRDB_RSU4Mix Block×L1; and two downsampling modules, each of which applies an upsampling function with a scale factor of 0.5 followed by a 1×1 convolution layer, so that each downsampling operation reduces the feature scale to 1/2 of its original size and increases the number of channels by ex_factor, with ex_factor set to 32;
a gradient residual dense hybrid decoding network, the gradient residual dense hybrid decoding network comprising: three channel attention modules that highlight the important features of the blurred blocks; three groups of dense residual hybrid U-shaped modules GRDB_RSU4Mix Block×L1; and two groups of upsampling modules adopting two groups of convolution layers with 1×1 convolution kernels;
and a skip connection module adopting a group of channel attention modules, through which the feature information extracted by the gradient residual dense hybrid encoding network is sent into the gradient residual dense hybrid decoding network.
3. The image deblurring model based on a residual dense U-shaped network of claim 2, wherein said dense residual hybrid U-shaped module GRDB_RSU4Mix Block×L1 comprises:
a dense residual module GRDB Block, in which two groups of convolution layers with 3×3 convolution kernels and two GELU activation layers are densely connected; finally, a Sobel gradient operation is added in the residual branch to compute the gradient magnitude of the features, and a 1×1 convolution layer in the skip connection eliminates the difference between the input and output channel dimensions;
a residual hybrid U-shaped module RSU4Mix Block, the residual hybrid U-shaped module RSU4Mix Block comprising:
an input depth convolution layer comprising a set of depth convolution layers with 3×3 convolution kernels, which converts the input feature into an intermediate input feature with 64 channels;
a depth convolution encoding sub-network comprising three groups of depth convolution layers with 3×3 convolution kernels and two groups of max-pooling layers that perform the downsampling operation; the encoded features are output after passing through this sub-network;
a depth convolution decoding sub-network comprising three groups of depth convolution layers with 3×3 convolution kernels, two groups of feature mixing modules Mix Block and three groups of residual connection modules; the depth convolution decoding sub-network fuses local features and multi-scale features;
and a cascaded depth convolution module comprising a set of depth convolution layers with 3×3 convolution kernels, which transmits the features extracted by the depth convolution encoding sub-network to the depth convolution decoding sub-network.
4. An image deblurring model based on a residual dense U-shaped network according to claim 3, wherein said residual hybrid U-shaped module RSU4Mix Block comprises:
an input depth convolution layer comprising a set of depth convolution layers with 3×3 convolution kernels, which converts the input feature into an intermediate input feature with 64 channels;
a depth convolution encoding sub-network comprising three groups of depth convolution layers with 3×3 convolution kernels and two groups of max-pooling layers that perform the downsampling operation; the encoded features are output after passing through this sub-network;
a depth convolution decoding sub-network comprising three groups of depth convolution layers with 3×3 convolution kernels, two groups of feature mixing modules Mix Block and three groups of residual connection modules; the depth convolution decoding sub-network fuses local features and multi-scale features;
and a cascaded depth convolution module comprising a set of depth convolution layers with 3×3 convolution kernels, which transmits the features extracted by the depth convolution encoding sub-network to the depth convolution decoding sub-network.
5. The image deblurring model based on a residual dense U-shaped network of claim 4, wherein said feature mixing module Mix Block operates as follows:
the feature information processed by a downsampling layer and the information of the next upsampling layer are adaptively mixed so that the information of the two layers is fused, and the fused features are transmitted to the next upsampling layer, the adaptive mixing operation being formulated as:

$$F_{up}^{\,i+1} = \lambda \cdot F_{down}^{\,i-1} + (1 - \lambda) \cdot F_{up}^{\,i}$$

where $F_{down}^{\,i-1}$ and $F_{up}^{\,i}$ are the feature map from the (i-1)-th downsampling layer and the feature map of the i-th upsampling layer respectively, $F_{up}^{\,i+1}$ denotes the feature map passed to the (i+1)-th upsampling layer, i is 2 or 3, and $\lambda$ is the adaptive mixing factor used to fuse the (i-1)-th downsampled feature map and the i-th upsampled feature map; $\lambda$ is determined by the Sigmoid operator, which keeps it in the range (0, 1).
6. An image deblurring model based on a residual dense U-shaped network according to claim 1, wherein the gradient residual dense multi-scale convolution attention U-shaped sub-network comprises:
a gradient residual dense multi-scale convolution attention encoding network, the gradient residual dense multi-scale convolution attention encoding network comprising: three channel attention modules that highlight the important features of the blurred blocks; three groups of gradient residual dense multi-scale convolution attention modules GRDB_MSCA Block×L1; and two downsampling modules, each of which applies an upsampling function with a scale factor of 0.5 followed by a 1×1 convolution layer, so that each downsampling operation reduces the feature scale to 1/2 of its original size and increases the number of channels by ex_factor, with ex_factor set to 32;
a gradient residual dense multi-scale convolution attention decoding network;
and a skip connection module adopting a group of channel attention modules, through which the feature information extracted by the gradient residual dense multi-scale convolution attention encoding network is sent into the gradient residual dense multi-scale convolution attention decoding network.
7. The image deblurring model based on a residual dense U-shaped network of claim 6, wherein said gradient residual dense multi-scale convolution attention module GRDB_MSCA Block×L1 comprises:
a dense residual module GRDB Block, in which two groups of convolution layers with 3×3 convolution kernels and two GELU activation layers are densely connected; finally, a Sobel gradient operation is added in the residual branch to compute the gradient magnitude of the features, and a 1×1 convolution layer in the skip connection eliminates the difference between the input and output channel dimensions;
a multi-scale convolution attention module MSCA Block.
8. The image deblurring model based on a residual dense U-shaped network of claim 7, wherein said multi-scale convolution attention module MSCA Block comprises:
a first layer comprising a set of 5×5 depthwise separable convolutions, which aggregates image information;
a second layer comprising a set of 7×7 ordinary convolution layers, a set of 11×11 ordinary convolution layers and a set of 21×21 ordinary convolution layers;
a third layer comprising a set of 7×7 depth over-parameterized convolution layers, a set of 11×11 depth over-parameterized convolution layers and a set of 21×21 depth over-parameterized convolution layers, which extracts deep feature information;
a fourth layer comprising a set of 1×1 depth over-parameterized convolution layers, which models the relationship between channels.
9. The image deblurring model based on a residual dense U-shaped network of claim 8, wherein the step of extracting features by the multi-scale convolution attention module is:

$$\mathrm{Att} = \mathrm{DOConv}_{1\times 1}\Big(\sum_{i=0}^{3}\mathrm{Scale}_i\big(\mathrm{DWConv}(F)\big)\Big), \qquad \mathrm{Out} = \mathrm{Att} \otimes F$$

where Out, Att and F denote the output, the attention map and the input feature respectively; ⊗ denotes element-wise matrix multiplication; DOConv denotes the depth over-parameterized convolution; DWConv consists of two convolution layers, the first being a depthwise convolution (Depthwise Conv) in which each convolution kernel corresponds to one output channel and the second a pointwise convolution (Pointwise Conv); and Scale_i, i ∈ {0, 1, 2, 3}, denotes the i-th multi-scale branch used to combine the inputs.
10. An image deblurring method based on a residual dense U-shaped network, characterized by comprising the following steps:
in the first stage, dividing a blurred image into eight blurred blocks of equal size, sending the eight blurred blocks into a gradient residual dense hybrid U-shaped sub-network, coarsely extracting the multi-resolution features of the blurred image and extracting its edge information, splicing the processed image blocks in pairs, and inputting them into the second stage through a feature supervision attention module SAM;
in the second stage, dividing the blurred image into four blurred blocks of equal size, performing feature fusion with the image blocks processed in the first stage, then sending them into a gradient residual dense multi-scale convolution attention U-shaped sub-network to obtain finer multi-resolution features of the blurred image, splicing the processed image blocks in pairs, and inputting them into the third stage through the feature supervision attention module SAM;
in the third stage, dividing the blurred image into two blurred blocks of equal size, performing feature fusion with the image blocks processed in the second stage, then sending them into the gradient residual dense multi-scale convolution attention U-shaped sub-network to obtain finer multi-resolution features of the blurred image, splicing the processed image blocks in pairs, and inputting them into the fourth stage through the feature supervision attention module SAM;
and in the fourth stage, after performing feature fusion between the complete image and the image block processed in the third stage, obtaining the final sharp image through the gradient residual dense multi-scale convolution attention U-shaped sub-network.
CN202410129316.4A 2024-01-31 2024-01-31 Image deblurring model and method based on residual intensive U-shaped network Active CN117670727B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410129316.4A CN117670727B (en) 2024-01-31 2024-01-31 Image deblurring model and method based on residual intensive U-shaped network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410129316.4A CN117670727B (en) 2024-01-31 2024-01-31 Image deblurring model and method based on residual intensive U-shaped network

Publications (2)

Publication Number Publication Date
CN117670727A true CN117670727A (en) 2024-03-08
CN117670727B CN117670727B (en) 2024-05-14

Family

ID=90068364

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410129316.4A Active CN117670727B (en) 2024-01-31 2024-01-31 Image deblurring model and method based on residual intensive U-shaped network

Country Status (1)

Country Link
CN (1) CN117670727B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116523800A (en) * 2023-07-03 2023-08-01 南京邮电大学 Image noise reduction model and method based on residual dense network and attention mechanism
CN116758121A (en) * 2023-06-25 2023-09-15 哈尔滨工业大学 Infrared image and visible light image registration fusion method based on wearable helmet

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116758121A (en) * 2023-06-25 2023-09-15 哈尔滨工业大学 Infrared image and visible light image registration fusion method based on wearable helmet
CN116523800A (en) * 2023-07-03 2023-08-01 南京邮电大学 Image noise reduction model and method based on residual dense network and attention mechanism

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LINFENG TANG et al.: "Image fusion in the loop of high-level vision tasks: A semantic-aware real-time infrared and visible image fusion network", Information Fusion, vol. 28, no. 42, 1 January 2022 (2022-01-01), pages 1-15 *

Also Published As

Publication number Publication date
CN117670727B (en) 2024-05-14


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant