CN117670727B - Image deblurring model and method based on residual dense U-shaped network - Google Patents

Image deblurring model and method based on residual dense U-shaped network

Info

Publication number
CN117670727B
CN117670727B (application CN202410129316.4A)
Authority
CN
China
Prior art keywords
convolution
network
residual
dense
image
Prior art date
Legal status
Active
Application number
CN202410129316.4A
Other languages
Chinese (zh)
Other versions
CN117670727A
Inventor
喻春雨
张俊
韩鼎
Current Assignee
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications
Priority to CN202410129316.4A
Publication of CN117670727A
Application granted
Publication of CN117670727B
Legal status: Active

Landscapes

  • Image Processing (AREA)

Abstract

The invention provides an image deblurring model and method based on a residual dense U-shaped network. A gradient residual dense hybrid U-shaped sub-network is applied in the first stage to coarsely acquire multi-resolution features of a blurred image and to extract its edge information. A gradient residual dense multi-scale convolution attention U-shaped sub-network is applied in the second, third and fourth stages to acquire finer multi-resolution features of the blurred image. A feature supervision attention module (SAM) connects adjacent stages and performs feature fusion. The model and method make network training more stable, greatly reduce the loss of feature-map information, and thereby restore the blurred image.

Description

Image deblurring model and method based on residual dense U-shaped network
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to an image deblurring model and method based on a residual dense U-shaped network.
Background
At present, image deblurring technology based on deep learning is developing steadily. It is mainly built on convolutional neural networks, and the general procedure for deblurring an image with a convolutional neural network is as follows. First, a set of blurred images and their corresponding sharp images are prepared as a dataset, which is preprocessed (for example by cropping, scaling and rotation) to increase data diversity and richness; the processed dataset is divided into a training set and a test set. Second, a convolutional neural network suitable for image deblurring is designed, generally comprising convolution layers, deconvolution layers, residual blocks, a generator network, a discriminator network and the like. The convolution layers perform convolution operations on the input image to extract image features; the deconvolution layers restore low-resolution feature maps to high-resolution images, realizing upsampling; the residual blocks alleviate the vanishing-gradient problem in deep network training; the generator network maps a blurred image to a sharp image; and the discriminator network distinguishes a generated sharp image from a real sharp image.
The convolutional neural network model is then trained on the prepared blurred dataset, and its parameters are continuously adjusted with optimization algorithms such as back propagation and gradient descent to gradually improve performance. Finally, the model is evaluated on the test set for the deblurring task, for example with three widely recognized objective metrics: Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM) and Root Mean Square Error (RMSE).
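As a minimal illustration of two of these evaluation metrics (PSNR and RMSE; SSIM is omitted for brevity), the following NumPy sketch computes them for 8-bit images. The function names and the [0, 255] value range are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def rmse(ref, test):
    """Root Mean Square Error between two images with values in [0, 255]."""
    diff = ref.astype(np.float64) - test.astype(np.float64)
    return np.sqrt(np.mean(diff ** 2))

def psnr(ref, test, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB; higher means closer to the reference."""
    e = rmse(ref, test)
    if e == 0:
        return float("inf")  # identical images
    return 20.0 * np.log10(peak / e)

sharp = np.full((4, 4), 100.0)
restored = sharp + 10.0              # constant error of 10 per pixel
print(rmse(sharp, restored))         # 10.0
print(round(psnr(sharp, restored), 2))
```

A restored image identical to the reference gives infinite PSNR; a constant error of 10 gives 20·log10(255/10) ≈ 28.13 dB.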
Deep learning methods have achieved success in the image deblurring field, but most share two shortcomings: insufficient extraction of the detail features of blurred images and serious loss of edge features, so the restored images remain blurred in their details.
In view of this, there is a need for an image deblurring model based on a residual dense U-shaped network that solves the above problems.
Disclosure of Invention
The invention aims to provide an image deblurring model based on a residual dense U-shaped network that can restore a blurred image.
In order to achieve the above purpose, the present invention adopts the following technical scheme, including:
A gradient residual dense hybrid U-shaped sub-network, applied in the first stage, for coarsely acquiring multi-resolution features of a blurred image;
A gradient residual dense multi-scale convolution attention U-shaped sub-network, applied in the second, third and fourth stages, for acquiring finer multi-resolution features of the blurred image;
A feature supervision attention module SAM, which fuses the multi-scale feature information of different stages.
As a further improvement of the present invention, the gradient residual dense hybrid U-shaped subnetwork comprises:
A gradient residual dense hybrid encoding network, comprising: three channel attention modules that highlight important features of the blurred blocks; three sets of dense residual hybrid U-shaped modules GRDB_RSU4MixBlock×L1; and two downsampling modules, each of which applies an interpolation (upsampling) function with a scale factor of 0.5 followed by a 1×1 convolution layer, so that each downsampling operation reduces the feature scale to 1/2 of the original size and increases the number of channels by ex_factor, with ex_factor set to 32;
A gradient residual dense hybrid decoding network, comprising: three channel attention modules that highlight important features of the blurred blocks; three sets of dense residual hybrid U-shaped modules GRDB_RSU4MixBlock×L1; and two upsampling modules, each adopting two sets of convolution layers with 1×1 convolution kernels;
A skip connection module, which adopts a set of channel attention modules; the feature information extracted by the gradient residual dense hybrid encoding network is sent into the gradient residual dense hybrid decoding network through the channel attention modules.
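The channel arithmetic of one such downsampling module can be sketched in NumPy as follows. Nearest-neighbour subsampling stands in for the unspecified interpolation function, and the 1×1 convolution uses random weights; both choices are illustrative assumptions:

```python
import numpy as np

def downsample_module(x, ex_factor=32, rng=None):
    """Halve the spatial scale, then apply a 1x1 convolution that adds
    ex_factor output channels. x has shape (C, H, W)."""
    rng = rng or np.random.default_rng(0)
    c, h, w = x.shape
    # scale factor 0.5: nearest-neighbour subsampling stands in for interpolation
    down = x[:, ::2, ::2]                          # (C, H/2, W/2)
    # a 1x1 convolution is a per-pixel linear map over the channel dimension
    weight = rng.standard_normal((c + ex_factor, c))
    return np.einsum('oc,chw->ohw', weight, down)

x = np.ones((64, 128, 128))
y = downsample_module(x)
print(y.shape)   # channels 64 -> 64 + 32 = 96, spatial scale halved
```

This reproduces the stated bookkeeping: each pass shrinks the feature map to 1/2 size and grows the channel count by ex_factor = 32.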
As a further improvement of the present invention, the dense residual mixing U-shaped module GRDB _rsu4mix block×l 1 includes:
A dense residual module GRDB Block, comprising two sets of convolution layers with 3×3 convolution kernels and two sets of GELU activation layers, densely connected; finally, a Sobel gradient operation is added in the residual branch to compute the gradient magnitude of the feature, and a 1×1 convolution layer is used in the skip connection to eliminate the mismatch between the input and output channel dimensions;
A residual hybrid U-shaped module RSU4Mix Block, comprising:
an input depth convolution layer, comprising a set of depth convolution layers with 3×3 convolution kernels; the input depth convolution layer converts the input feature x into an intermediate input feature F_in with 64 channels;
a deep convolution encoding sub-network, comprising three sets of depth convolution layers with 3×3 convolution kernels and two sets of maximum pooling layers that implement the downsampling operation; the feature F_enc is output after passing through the deep convolution encoding sub-network;
a deep convolution decoding sub-network, comprising three sets of depth convolution layers with 3×3 convolution kernels, two sets of feature mixing modules Mix Block and three sets of residual connection modules; the deep convolution decoding sub-network fuses local features and multi-scale features;
a cascaded depth convolution module, comprising a set of depth convolution layers with 3×3 convolution kernels, for transmitting the feature F_enc extracted by the deep convolution encoding sub-network into the deep convolution decoding sub-network.
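The Sobel gradient operation used in the GRDB Block's residual branch can be sketched in NumPy as below. This is a plain single-channel version with zero padding; how the patent pads and batches the operation over feature channels is not specified, so those details are assumptions:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = SOBEL_X.T

def conv2d_same(img, kernel):
    """3x3 'same' convolution with zero padding (cross-correlation form)."""
    p = np.pad(img, 1)
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.float64)
    for i in range(3):
        for j in range(3):
            out += kernel[i, j] * p[i:i + h, j:j + w]
    return out

def sobel_magnitude(img):
    """Gradient magnitude sqrt(Gx^2 + Gy^2) of a 2-D feature map."""
    gx = conv2d_same(img, SOBEL_X)
    gy = conv2d_same(img, SOBEL_Y)
    return np.sqrt(gx ** 2 + gy ** 2)

# a vertical step edge: the magnitude responds strongly at the edge columns
img = np.concatenate([np.zeros((5, 3)), np.ones((5, 3))], axis=1)
print(sobel_magnitude(img))
```

Flat regions give zero response while the columns adjacent to the step respond strongly, which is why the branch helps preserve edge information.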
As a further improvement of the present invention, the residual hybrid U-shaped module RSU4Mix Block comprises:
an input depth convolution layer, comprising a set of depth convolution layers with 3×3 convolution kernels; the input depth convolution layer converts the input feature x into an intermediate input feature F_in with 64 channels;
a deep convolution encoding sub-network, comprising three sets of depth convolution layers with 3×3 convolution kernels and two sets of maximum pooling layers that implement the downsampling operation; the feature F_enc is output after passing through the deep convolution encoding sub-network;
a deep convolution decoding sub-network, comprising three sets of depth convolution layers with 3×3 convolution kernels, two sets of feature mixing modules Mix Block and three sets of residual connection modules; the deep convolution decoding sub-network fuses local features and multi-scale features;
a cascaded depth convolution module, comprising a set of depth convolution layers with 3×3 convolution kernels, for transmitting the feature F_enc extracted by the deep convolution encoding sub-network into the deep convolution decoding sub-network.
As a further improvement of the present invention, the operation of the feature mixing module Mix Block is as follows:
the feature information processed by a downsampling layer and the information of the next upsampling layer undergo an adaptive mixing operation, which fuses the information of the two layers, and the mixed feature information is transmitted to the next upsampling layer; the adaptive mixing operation can be formulated as follows:
F_up^(i+1) = λ ⊙ F_down^(i-1) + (1-λ) ⊙ F_up^(i)
wherein F_down^(i-1) and F_up^(i) are the features from the (i-1)-th downsampling layer and the i-th upsampling layer respectively, F_up^(i+1) represents the feature map of the (i+1)-th upsampling layer, where i is 2 or 3, and λ represents the adaptive mixing factor used to fuse the (i-1)-th downsampled feature map and the i-th upsampled feature map, where λ is determined by the Sigmoid operator σ.
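A minimal NumPy sketch of this adaptive mixing, with a scalar learnable parameter theta passed through a sigmoid; the parameter's name and scalar shape are assumptions, since the patent does not specify them:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def mix_block(f_down, f_up, theta):
    """Adaptive mix: lam * f_down + (1 - lam) * f_up, with lam = Sigmoid(theta)."""
    lam = sigmoid(theta)
    return lam * f_down + (1.0 - lam) * f_up

f_down = np.full((2, 2), 4.0)   # feature from the (i-1)-th downsampling layer
f_up = np.full((2, 2), 8.0)     # feature from the i-th upsampling layer
out = mix_block(f_down, f_up, theta=0.0)  # sigmoid(0) = 0.5: an even blend
print(out)
```

Because the sigmoid keeps λ in (0, 1), the output is always a convex blend of the two feature maps: theta = 0 mixes them evenly, and a large positive theta weights the downsampled feature almost entirely.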
As a further refinement of the invention, the gradient residual dense multi-scale convolution attention U-shaped sub-network comprises:
A gradient residual dense multi-scale convolution attention encoding network, comprising: three channel attention modules that highlight important features of the blurred blocks; three sets of gradient residual dense multi-scale convolution attention modules GRDB_MSCABlock×L1; and two downsampling modules, each of which applies an interpolation (upsampling) function with a scale factor of 0.5 followed by a 1×1 convolution layer, so that each downsampling operation reduces the feature scale to 1/2 of the original size and increases the number of channels by ex_factor, with ex_factor set to 32;
gradient residual dense multi-scale convolution attention decoding network;
A skip connection module, which adopts a set of channel attention modules; the feature information extracted by the gradient residual dense multi-scale convolution attention encoding network is sent into the gradient residual dense multi-scale convolution attention decoding network through the channel attention modules.
As a further refinement of the present invention, the gradient residual dense multi-scale convolution attention module GRDB_MSCABlock×L1 comprises:
A dense residual module GRDB Block, comprising two sets of convolution layers with 3×3 convolution kernels and two sets of GELU activation layers, densely connected; finally, a Sobel gradient operation is added in the residual branch to compute the gradient magnitude of the feature, and a 1×1 convolution layer is used in the skip connection to eliminate the mismatch between the input and output channel dimensions;
the multi-scale convolution attention module MSCA Block.
As a further improvement of the present invention, the multi-scale convolution attention module MSCA Block includes:
a first layer, comprising a set of 5×5 depth separable convolutions; the first layer aggregates image information;
a second layer, comprising a set of 7×7 ordinary convolution layers, a set of 11×11 ordinary convolution layers and a set of 21×21 ordinary convolution layers;
a third layer, comprising a set of 7×7 depth over-parameterized convolution layers, a set of 11×11 depth over-parameterized convolution layers and a set of 21×21 depth over-parameterized convolution layers; the third layer extracts deep feature information;
a fourth layer, comprising a set of 1×1 depth over-parameterized convolution layers; the fourth layer models the channel relationship.
As a further improvement of the present invention, the multi-scale convolution attention module extracts features as follows:
Att = DoConv_1×1( Σ_{i=0}^{3} Scale_i( DWConv(F) ) )
Out = Att ⊗ F
wherein Out, Att and F are the output, the attention map and the input feature respectively, ⊗ represents element-wise matrix multiplication, and DoConv is the depth over-parameterized convolution, which comprises two convolution layers: the first layer uses Depthwise Conv, in which each convolution kernel corresponds to one output channel, and the second layer uses Pointwise Conv; Scale_i denotes the i-th branch connected to the input, where i ∈ {0, 1, 2, 3} and Scale_0 is the identity connection.
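The depthwise-then-pointwise channel wiring that DoConv's two layers describe can be sketched in NumPy as below. Random weights are used, so this illustrates only the wiring, not the patent's over-parameterized training scheme; the function names are illustrative:

```python
import numpy as np

def depthwise_conv3x3(x, kernels):
    """Each 3x3 kernel convolves exactly one channel (Depthwise Conv).
    x: (C, H, W), kernels: (C, 3, 3); zero padding, 'same' output size."""
    c, h, w = x.shape
    p = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros_like(x, dtype=np.float64)
    for i in range(3):
        for j in range(3):
            out += kernels[:, i, j][:, None, None] * p[:, i:i + h, j:j + w]
    return out

def pointwise_conv(x, weight):
    """1x1 convolution mixing channels (Pointwise Conv). weight: (C_out, C_in)."""
    return np.einsum('oc,chw->ohw', weight, x)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))
y = depthwise_conv3x3(x, rng.standard_normal((4, 3, 3)))  # per-channel spatial mixing
z = pointwise_conv(y, rng.standard_normal((6, 4)))        # cross-channel mixing
print(z.shape)
```

The depthwise stage mixes only spatially within each channel; the pointwise stage then mixes across channels, which is what makes the pair much cheaper than a full 3×3 convolution.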
It is another object of the present invention to provide an image deblurring method based on a residual dense U-shaped network that can restore blurred images.
In order to achieve the above purpose, the present invention adopts the following technical scheme, including:
In the first stage, the blurred image is divided into eight blurred blocks of the same size, which are sent into the gradient residual dense hybrid U-shaped sub-network to coarsely extract multi-resolution features of the blurred image and extract its edge information; the processed image blocks are spliced in pairs and input into the second stage through the feature supervision attention module SAM;
In the second stage, the blurred image is divided into four blurred blocks of the same size, which are feature-fused with the image blocks processed in the first stage and then sent into the gradient residual dense multi-scale convolution attention U-shaped sub-network to obtain finer multi-resolution features of the blurred image; the processed image blocks are spliced in pairs and input into the third stage through the feature supervision attention module SAM;
In the third stage, the blurred image is divided into two blurred blocks of the same size, which are feature-fused with the image blocks processed in the second stage and then sent into the gradient residual dense multi-scale convolution attention U-shaped sub-network to obtain finer multi-resolution features of the blurred image; the processed image blocks are spliced in pairs and input into the fourth stage through the feature supervision attention module SAM;
In the fourth stage, after the complete image is feature-fused with the image blocks processed in the third stage, the gradient residual dense multi-scale convolution attention U-shaped sub-network produces the final sharp image.
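The 8/4/2/1 block scheme of the four stages can be sketched as a lossless split-and-merge in NumPy. The grid layouts (2×4, 2×2, 2×1) are an assumption, since the patent specifies only the number of equal-size blocks per stage:

```python
import numpy as np

def split_blocks(img, rows, cols):
    """Split an (H, W) image into rows*cols equal-size blocks, row-major order."""
    h, w = img.shape
    assert h % rows == 0 and w % cols == 0
    return [img[r * h // rows:(r + 1) * h // rows,
                c * w // cols:(c + 1) * w // cols]
            for r in range(rows) for c in range(cols)]

def merge_blocks(blocks, rows, cols):
    """Inverse of split_blocks: reassemble the full image."""
    return np.block([[blocks[r * cols + c] for c in range(cols)]
                     for r in range(rows)])

img = np.arange(64.0).reshape(8, 8)
stages = [(2, 4), (2, 2), (2, 1), (1, 1)]   # 8, 4, 2, 1 blocks per stage
for rows, cols in stages:
    blocks = split_blocks(img, rows, cols)
    assert len(blocks) == rows * cols
    assert np.array_equal(merge_blocks(blocks, rows, cols), img)
print("all stages round-trip")
```

Each stage's split covers the image exactly and merges back without loss, which is the precondition for the coarse-to-fine fusion between stages.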
The beneficial effects of the invention are as follows:
The invention extracts the feature information of a blurred image progressively over four stages. It introduces a gradient residual dense hybrid U-shaped sub-network to coarsely acquire multi-resolution features of the blurred image and extract its edge information, and a gradient residual dense multi-scale convolution attention U-shaped sub-network to acquire finer multi-resolution features. Processing the blurred image from coarse to fine preserves its detail features more effectively and thus better supports restoration of the blurred image.
Drawings
FIG. 1 is a block diagram of an image deblurring model based on a multi-stage progressive residual dense U-type network and an attention mechanism.
Fig. 2 is a diagram of gradient residual dense hybrid U-shaped subnetwork structure in an image deblurring model based on a multi-stage progressive residual dense U-shaped network and an attention mechanism.
Fig. 3 is a block diagram of the dense residual module GRDB Block in the dense residual hybrid U-shaped module (GRDB_RSU4MixBlock×L1).
Fig. 4 is a block diagram of the residual hybrid U-shaped module RSU4Mix Block in the dense residual hybrid U-shaped module (GRDB_RSU4MixBlock×L1).
Fig. 5 is a diagram of a gradient residual dense multi-scale convolution attention U-shaped sub-network structure in an image deblurring model based on a multi-stage progressive residual dense U-shaped network and an attention mechanism.
Fig. 6 is a block diagram of the multi-scale convolution attention module (MSCA Block) in the gradient residual dense multi-scale convolution attention module (GRDB_MSCABlock×L1).
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in detail with reference to the accompanying drawings and specific embodiments.
It should be noted that, to avoid obscuring the present invention with unnecessary detail, the drawings show only structures and/or processing steps closely related to aspects of the invention, and other details not closely related to the invention are omitted.
For the purpose of promoting an understanding of the principles and advantages of the invention, reference will now be made to the drawings and specific examples.
Referring to fig. 1, the present invention provides an image deblurring model based on a multi-stage progressive residual dense U-shaped network and an attention mechanism, which preserves more edge features while deblurring the image. The image deblurring model comprises:
Referring to fig. 2, the gradient residual dense hybrid U-shaped subnetwork comprises:
A gradient residual dense hybrid coding network;
A skip connection module, which adopts a set of channel attention modules; the feature information extracted by the encoding network is sent into the decoding network through this module;
gradient residual dense hybrid decoding network.
The gradient residual dense hybrid U-shaped sub-network is used to coarsely extract multi-resolution features of the blurred image and extract its edge information; the processed image blocks are spliced in pairs and input into the second stage through the feature supervision attention module SAM.
Referring to fig. 3, the dense residual module GRDB Block in the dense residual hybrid U-shaped module (GRDB_RSU4MixBlock×L1) comprises:
two sets of convolution layers with 3×3 convolution kernels and two sets of GELU activation layers, densely connected; finally, a Sobel gradient operation is added in the residual branch to compute the gradient magnitude of the feature, and a 1×1 convolution layer is used in the skip connection to eliminate the mismatch between the input and output channel dimensions.
Referring to fig. 4, the residual hybrid U-shaped module RSU4Mix Block in the dense residual hybrid U-shaped module (GRDB_RSU4MixBlock×L1) comprises:
an input depth convolution layer, comprising a set of depth convolution layers with 3×3 convolution kernels; this convolution block converts the input feature x into an intermediate input feature F_in with 64 channels;
a deep convolution encoding sub-network, comprising three sets of depth convolution layers with 3×3 convolution kernels and two sets of maximum pooling layers that implement the downsampling operation; the feature F_enc is output after passing through the deep convolution encoding sub-network;
a cascaded depth convolution module, comprising a set of depth convolution layers with 3×3 convolution kernels, for transmitting the features extracted by the encoding sub-network into the decoding sub-network;
a deep convolution decoding sub-network, comprising three sets of depth convolution layers with 3×3 convolution kernels, two sets of feature mixing modules Mix Block, and three sets of residual connection modules used to fuse local features and multi-scale features.
Referring to fig. 5, a gradient residual dense multi-scale convolution attention U-shaped subnetwork, comprising:
Gradient residual dense multi-scale convolution attention coding network;
A skip connection module, which adopts a set of channel attention modules; the feature information extracted by the encoding network is sent into the decoding network through this module;
gradient residual dense multi-scale convolution attention decoding network.
Referring to fig. 6, the multi-scale convolution attention module (MSCA Block) in the gradient residual dense multi-scale convolution attention module (GRDB_MSCABlock×L1) comprises:
a first layer, comprising a set of 5×5 depth separable convolutions, which effectively aggregates image information;
a second layer, comprising a set of 7×7 ordinary convolution layers, a set of 11×11 ordinary convolution layers and a set of 21×21 ordinary convolution layers;
a third layer, comprising a set of 7×7 depth over-parameterized convolution layers, a set of 11×11 depth over-parameterized convolution layers and a set of 21×21 depth over-parameterized convolution layers, which extracts deep feature information;
a fourth layer, comprising a set of 1×1 depth over-parameterized convolution layers, which models the channel relationship.
In summary, the invention provides an image deblurring model and method based on a residual dense U-shaped network. A gradient residual dense hybrid U-shaped sub-network is introduced in the first stage, a gradient residual dense multi-scale convolution attention U-shaped sub-network is introduced in the second, third and fourth stages, and a feature supervision attention module connects each pair of adjacent stages for feature fusion. This makes network training more stable and greatly reduces the loss of feature-map information, thereby restoring the blurred image.
The above embodiments are intended only to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the preferred embodiments, those skilled in the art will understand that modifications and equivalents may be made without departing from the spirit and scope of the technical solution of the invention.

Claims (8)

1. An image deblurring method based on a residual dense U-shaped network, characterized by comprising the following steps:
In the first stage, dividing the blurred image into eight blurred blocks of the same size, sending them into a gradient residual dense hybrid U-shaped sub-network to coarsely extract multi-resolution features of the blurred image and extract its edge information, splicing the processed image blocks in pairs, and inputting them into the second stage through a feature supervision attention module SAM;
In the second stage, dividing the blurred image into four blurred blocks of the same size, feature-fusing them with the image blocks processed in the first stage, sending them into a gradient residual dense multi-scale convolution attention U-shaped sub-network to obtain finer multi-resolution features of the blurred image, splicing the processed image blocks in pairs, and inputting them into the third stage through the feature supervision attention module SAM;
In the third stage, dividing the blurred image into two blurred blocks of the same size, feature-fusing them with the image blocks processed in the second stage, sending them into the gradient residual dense multi-scale convolution attention U-shaped sub-network to obtain finer multi-resolution features of the blurred image, splicing the processed image blocks in pairs, and inputting them into the fourth stage through the feature supervision attention module SAM;
In the fourth stage, after the complete image is feature-fused with the image blocks processed in the third stage, obtaining the final sharp image through the gradient residual dense multi-scale convolution attention U-shaped sub-network;
The gradient residual dense hybrid U-shaped sub-network comprises:
a gradient residual dense hybrid encoding network, comprising: three channel attention modules that highlight important features of the blurred blocks; three sets of dense residual hybrid U-shaped modules GRDB_RSU4MixBlock×L1; and two downsampling modules, each of which applies an interpolation (upsampling) function with a scale factor of 0.5 followed by a 1×1 convolution layer, so that each downsampling operation reduces the feature scale to 1/2 of the original size and increases the number of channels by ex_factor, with ex_factor set to 32;
a gradient residual dense hybrid decoding network, comprising: three channel attention modules that highlight important features of the blurred blocks; three sets of dense residual hybrid U-shaped modules GRDB_RSU4MixBlock×L1; and two upsampling modules, each adopting two sets of convolution layers with 1×1 convolution kernels;
a skip connection module, which adopts a set of channel attention modules; the feature information extracted by the gradient residual dense hybrid encoding network is sent into the gradient residual dense hybrid decoding network through the channel attention modules;
the gradient residual dense multi-scale convolution attention U-shaped subnetwork comprises:
a gradient residual dense multi-scale convolution attention encoding network, comprising: three channel attention modules that highlight important features of the blurred blocks; three sets of gradient residual dense multi-scale convolution attention modules GRDB_MSCABlock×L1; and two downsampling modules, each of which applies an interpolation (upsampling) function with a scale factor of 0.5 followed by a 1×1 convolution layer, so that each downsampling operation reduces the feature scale to 1/2 of the original size and increases the number of channels by ex_factor, with ex_factor set to 32;
gradient residual dense multi-scale convolution attention decoding network;
and a skip connection module, which adopts a set of channel attention modules; the feature information extracted by the gradient residual dense multi-scale convolution attention encoding network is sent into the gradient residual dense multi-scale convolution attention decoding network through the channel attention modules.
2. The image deblurring method based on a residual dense U-shaped network according to claim 1, characterized in that the image deblurring model based on a residual dense U-shaped network comprises:
a gradient residual dense hybrid U-shaped sub-network, applied in the first stage, for coarsely acquiring multi-resolution features of a blurred image;
a gradient residual dense multi-scale convolution attention U-shaped sub-network, applied in the second, third and fourth stages, for acquiring finer multi-resolution features of the blurred image;
and a feature supervision attention module SAM, which fuses the multi-scale feature information of different stages.
3. The image deblurring method based on a residual dense U-shaped network according to claim 1, characterized in that the dense residual hybrid U-shaped module GRDB_RSU4MixBlock×L1 comprises:
a dense residual module GRDB Block, comprising two sets of convolution layers with 3×3 convolution kernels and two sets of GELU activation layers, densely connected; finally, a Sobel gradient operation is added in the residual branch to compute the gradient magnitude of the feature, and a 1×1 convolution layer is used in the skip connection to eliminate the mismatch between the input and output channel dimensions;
a residual hybrid U-shaped module RSU4Mix Block, comprising:
an input depth convolution layer, comprising a set of depth convolution layers with 3×3 convolution kernels; the input depth convolution layer converts the input feature x into an intermediate input feature F_in with 64 channels;
a deep convolution encoding sub-network, comprising three sets of depth convolution layers with 3×3 convolution kernels and two sets of maximum pooling layers that implement the downsampling operation; the feature F_enc is output after passing through the deep convolution encoding sub-network;
a deep convolution decoding sub-network, comprising three sets of depth convolution layers with 3×3 convolution kernels, two sets of feature mixing modules Mix Block and three sets of residual connection modules; the deep convolution decoding sub-network fuses local features and multi-scale features;
a cascaded depth convolution module, comprising a set of depth convolution layers with 3×3 convolution kernels, for transmitting the feature F_enc extracted by the deep convolution encoding sub-network into the deep convolution decoding sub-network.
4. The image deblurring method based on a residual dense U-shaped network according to claim 3, characterized in that the residual hybrid U-shaped module RSU4Mix Block comprises:
an input depth convolution layer comprising a set of depth convolution layers of 3 x 3 convolution kernels, the input depth convolution layer to input features (/>) Intermediate input feature/>, which becomes 64 channels ();
The deep convolution coding sub-network comprises three groups of deep convolution layers with 3 multiplied by 3 convolution kernels, two groups of maximum pooling layers simulate downsampling operation, and the characteristics are output after passing through the deep convolution coding sub-network
The deep convolution decoding sub-network comprises three groups of deep convolution layers with 3 multiplied by 3 convolution kernels, two groups of feature mixing modules Mix Block and three groups of residual error connecting modules, and the deep convolution decoding sub-network fuses local features and multi-scale features;
a concatenated depth convolution module comprising a set of depth convolution layers of 3 x3 convolution kernels for encoding said features extracted by said sub-network And transmitting the data to the depth convolution decoding sub-network.
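The RSU4Mix module's encoder/decoder structure can be sketched as a small U-shaped network. This is a simplified illustration under stated assumptions: "depth convolution layer" is read as a 3×3 depthwise + 1×1 pointwise pair, the Mix Block fusion is approximated by element-wise addition of skip features, and the 64-channel width comes from the claim; the helper names (`dwconv`, `RSU4Mix`) are invented here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def dwconv(cin: int, cout: int) -> nn.Sequential:
    """3x3 depthwise + 1x1 pointwise pair (one reading of 'depth convolution layer')."""
    return nn.Sequential(
        nn.Conv2d(cin, cin, 3, padding=1, groups=cin),
        nn.Conv2d(cin, cout, 1),
        nn.GELU(),
    )

class RSU4Mix(nn.Module):
    """Simplified sketch: three encoder levels with two max-pool downsamplings,
    a concatenated module at the bottom, and three decoder levels with
    residual (skip) connections, closed by an outer residual."""

    def __init__(self, cin: int, mid: int = 64):
        super().__init__()
        self.inp = dwconv(cin, mid)  # input layer -> 64 channels
        self.enc = nn.ModuleList([dwconv(mid, mid) for _ in range(3)])
        self.mid = dwconv(mid, mid)  # concatenated depth convolution module
        self.dec = nn.ModuleList([dwconv(mid, mid) for _ in range(3)])
        self.out = nn.Conv2d(mid, cin, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.inp(x)
        skips = []
        for i, enc in enumerate(self.enc):
            h = enc(h)
            skips.append(h)
            if i < 2:                     # two max-pool downsamplings
                h = F.max_pool2d(h, 2)
        h = self.mid(h)
        for i, dec in enumerate(self.dec):
            skip = skips[-(i + 1)]
            if h.shape[-2:] != skip.shape[-2:]:
                h = F.interpolate(h, size=skip.shape[-2:],
                                  mode="bilinear", align_corners=False)
            h = dec(h + skip)             # residual connection + decode
        return self.out(h) + x            # outer residual
```

A learned Mix Block (claim 5) would replace the plain `h + skip` addition with an adaptive weighted blend.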
5. The image deblurring method based on a residual dense U-shaped network according to claim 4, wherein the feature mixing module Mix Block operates as follows:
carrying out an adaptive mixing operation on the feature information processed by the downsampling layer and the information of the next upsampling layer so as to fuse the information of the two layers, and transmitting the fused feature information to the next upsampling layer, the adaptive mixing operation being formulated as:

F_up(i+1) = α ⊙ F_down(i−1) + (1 − α) ⊙ F_up(i)

where F_down(i−1) and F_up(i) are the features from the (i−1)-th downsampling layer and the i-th upsampling layer respectively, F_up(i+1) represents the feature map of the (i+1)-th upsampling layer, where i = 2, 3; α represents the adaptive mixing operation factor used to fuse the (i−1)-th downsampled feature map and the i-th upsampled feature map, where α is determined by the Sigmoid operator σ.
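The adaptive blend above can be demonstrated in a few lines of NumPy. This is an illustrative sketch: the scalar parameter `theta` stands in for whatever learnable quantity the Sigmoid operator is applied to, which the claim does not specify.

```python
import numpy as np

def sigmoid(x):
    """Sigmoid operator sigma, mapping any real input into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def mix_block(f_down, f_up, theta):
    """Adaptive mixing of an encoder (downsampling) feature map with a
    decoder (upsampling) feature map: F_up(i+1) = a*F_down(i-1) + (1-a)*F_up(i),
    where a = sigmoid(theta). `theta` is a hypothetical learnable parameter."""
    alpha = sigmoid(theta)                        # blend factor in (0, 1)
    return alpha * f_down + (1.0 - alpha) * f_up  # element-wise blend

# theta = 0 gives alpha = 0.5: an equal blend of the two feature maps
blended = mix_block(np.ones((2, 2)), np.zeros((2, 2)), 0.0)
```

Because the Sigmoid keeps the factor strictly inside (0, 1), the output is always a convex combination of the two feature maps, which stabilizes training relative to an unconstrained weight.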
6. The image deblurring method based on a residual dense U-shaped network according to claim 1, wherein the gradient residual dense multi-scale convolution attention module GRDB_MSCA Block × L1 comprises:
the dense residual module GRDB Block, the dense residual module GRDB Block comprising two groups of convolution layers with 3×3 convolution kernels and two groups of GELU activation layers which are densely connected; finally, a Sobel gradient operation is added in the residual branch to calculate the gradient magnitude of the feature, and a 1×1 convolution layer skip connection is used to eliminate the difference between the input and output channel dimensions; and
the multi-scale convolution attention module MSCA Block.
7. The image deblurring method based on a residual dense U-shaped network according to claim 6, wherein the multi-scale convolution attention module MSCA Block comprises:
a first layer comprising a set of 5×5 depthwise separable convolutions, the first layer aggregating image information;
a second layer comprising a set of 7×7 ordinary convolution layers, a set of 11×11 ordinary convolution layers and a set of 21×21 ordinary convolution layers;
a third layer comprising a set of 7×7 depth over-parameterized convolution layers, a set of 11×11 depth over-parameterized convolution layers and a set of 21×21 depth over-parameterized convolution layers, the third layer extracting deep feature information;
a fourth layer comprising a set of 1×1 depth over-parameterized convolution layers, the fourth layer modeling the channel relationships.
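A multi-scale convolution attention module of this shape can be sketched as follows. This is a hypothetical simplification: the large 7/11/21 kernels are decomposed into 1×k and k×1 depthwise strip convolutions (a standard trick for such attention modules), the second and third layers are collapsed into a single set of branches, and the depth over-parameterized convolutions are approximated by plain depthwise convolutions; the class name `MSCA` and all channel widths are assumptions.

```python
import torch
import torch.nn as nn

class MSCA(nn.Module):
    """Sketch of multi-scale convolution attention: a 5x5 depthwise
    aggregation, three multi-scale strip-convolution branches (7/11/21),
    a 1x1 convolution mixing channels, and element-wise gating of the input."""

    def __init__(self, ch: int):
        super().__init__()
        self.conv5 = nn.Conv2d(ch, ch, 5, padding=2, groups=ch)  # aggregate local info
        self.scales = nn.ModuleList()
        for k in (7, 11, 21):  # multi-scale branches as 1xk then kx1 strips
            self.scales.append(nn.Sequential(
                nn.Conv2d(ch, ch, (1, k), padding=(0, k // 2), groups=ch),
                nn.Conv2d(ch, ch, (k, 1), padding=(k // 2, 0), groups=ch),
            ))
        self.channel_mix = nn.Conv2d(ch, ch, 1)  # 1x1 models channel relations

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        base = self.conv5(x)
        # identity branch plus the three multi-scale branches
        att = base + sum(s(base) for s in self.scales)
        att = self.channel_mix(att)
        return att * x  # Out = Att (element-wise) F
```

The final element-wise product gates the input feature with the learned attention map, matching the formulation recited in claim 8.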
8. The image deblurring method based on a residual dense U-shaped network according to claim 7, wherein the feature extraction performed by the multi-scale convolution attention module is expressed as:
Att = Conv_1×1( Σ_{i=0}^{3} Scale_i( DoConv(F) ) ), Out = Att ⊗ F

where Att and F are the attention map and the input feature respectively, and Out is the output; ⊗ represents the multiplication of corresponding element matrices; DoConv is the depth over-parameterized convolution, which comprises two layers of convolution, wherein the first layer uses Depthwise Conv, in which each convolution kernel corresponds to one output channel, and the second layer uses Pointwise Conv; Scale_i is used for connecting the input branches, wherein i ∈ {0, 1, 2, 3} and Scale_0 denotes the identity connection.
CN202410129316.4A 2024-01-31 2024-01-31 Image deblurring model and method based on residual intensive U-shaped network Active CN117670727B (en)

Publications (2)

Publication Number Publication Date
CN117670727A CN117670727A (en) 2024-03-08
CN117670727B true CN117670727B (en) 2024-05-14

Family

ID=90068364


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116523800A (en) * 2023-07-03 2023-08-01 南京邮电大学 Image noise reduction model and method based on residual dense network and attention mechanism
CN116758121A (en) * 2023-06-25 2023-09-15 哈尔滨工业大学 Infrared image and visible light image registration fusion method based on wearable helmet


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Image fusion in the loop of high-level vision tasks: A semantic-aware real-time infrared and visible image fusion network; Linfeng Tang et al.; Information Fusion; 2022-01-01; Vol. 28, No. 42; pp. 1-15 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant