CN114429422A - Image super-resolution reconstruction method and system based on residual channel attention network - Google Patents



Publication number
CN114429422A
Authority
CN
China
Prior art keywords
image
resolution
feature extraction
attention
resolution reconstruction
Prior art date
Legal status
Pending
Application number
CN202111581236.5A
Other languages
Chinese (zh)
Inventor
王春兴
栗亚星
孙建德
乔建苹
Current Assignee
Shandong Normal University
Original Assignee
Shandong Normal University
Priority date
Filing date
Publication date
Application filed by Shandong Normal University filed Critical Shandong Normal University
Priority to CN202111581236.5A priority Critical patent/CN114429422A/en
Publication of CN114429422A publication Critical patent/CN114429422A/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformation in the plane of the image
    • G06T3/40: Scaling the whole image or part thereof
    • G06T3/4053: Super resolution, i.e. output image resolution higher than sensor resolution
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/048: Activation functions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformation in the plane of the image
    • G06T3/40: Scaling the whole image or part thereof
    • G06T3/4046: Scaling the whole image or part thereof using neural networks

Abstract

The invention belongs to the technical field of image processing and provides an image super-resolution reconstruction method and system based on a residual channel attention network, in which a high-resolution reconstructed image is obtained from a low-resolution image to be reconstructed and an image super-resolution reconstruction model. The construction of the model comprises shallow feature extraction and deep feature extraction: shallow features are obtained through a shallow feature channel; a deep feature extraction model is built on the residual channel attention network, and deep features are extracted using the shallow feature extraction model and the deep feature extraction model. The deep feature extraction model comprises a plurality of pixel-and-channel attention (PACA) networks; within each PACA network a channel attention unit, a pixel attention unit and an Inception unit are arranged in parallel, and a residual connection is added around them at the outer layer. The network can thus extract more, and more useful, information, realizing higher-precision super-resolution reconstruction.

Description

Image super-resolution reconstruction method and system based on residual channel attention network
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to an image super-resolution reconstruction method and system based on a residual channel attention network.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
At present, image super-resolution methods can be mainly classified into the following three categories: interpolation-based super-resolution reconstruction, reconstruction-based super-resolution reconstruction, and learning-based super-resolution reconstruction.
The interpolation-based super-resolution method is the simplest way to raise resolution, but its reconstruction quality is limited. Reconstruction-based super-resolution improves detail, but its performance degrades as the scale factor grows, and the method is time-consuming.
Disclosure of Invention
To solve at least one technical problem in the background art, the invention provides an image super-resolution reconstruction method and system based on a residual channel attention network. It offers an improved way of raising image resolution using channel attention, avoiding problems such as vanishing gradients while improving image resolution.
In order to achieve the purpose, the invention adopts the following technical scheme:
the invention provides an image super-resolution reconstruction method based on a residual channel attention network, which comprises the following steps:
acquiring a low-resolution image to be reconstructed;
obtaining a high-resolution reconstructed image from the low-resolution image to be reconstructed and the image super-resolution reconstruction model; the construction of the model comprises shallow feature extraction and deep feature extraction: shallow features are obtained through a shallow feature channel; a deep feature extraction model is built on the residual channel attention network, and deep features are extracted using the shallow feature extraction model and the deep feature extraction model; the deep feature extraction model comprises a plurality of pixel-and-channel attention (PACA) networks, in each of which a channel attention unit, a pixel attention unit and an Inception unit are arranged in parallel, with a residual connection added around them at the outer layer.
The second aspect of the present invention provides an image super-resolution reconstruction system based on a residual channel attention network, comprising:
an image acquisition module configured to: acquiring a low-resolution image to be reconstructed;
a high resolution image reconstruction module configured to: obtain a high-resolution reconstructed image from the low-resolution image to be reconstructed and the image super-resolution reconstruction model; the construction of the model comprises shallow feature extraction and deep feature extraction, where shallow features are obtained through a shallow feature channel,
a deep feature extraction model is built on the residual channel attention network, and deep features are extracted using the shallow feature extraction model and the deep feature extraction model; the deep feature extraction model comprises a plurality of pixel-and-channel attention (PACA) networks, in each of which a channel attention unit, a pixel attention unit and an Inception unit are arranged in parallel, with a residual connection added around them at the outer layer.
A third aspect of the invention provides a computer-readable storage medium.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method for residual channel attention network based image super resolution reconstruction as described above.
A fourth aspect of the invention provides a computer apparatus.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor when executing the program implementing the steps in the method for super-resolution image reconstruction based on a residual channel attention network as described above.
Compared with the prior art, the invention has the beneficial effects that:
the method is characterized in that high-precision image super-resolution is achieved by using a depth pixel and channel attention (PACA) network, the PACA network is deeper than the former method based on a convolutional neural network, better reconstruction effect is achieved, a large amount of low-frequency information is bypassed by using residual learning, more useful high-frequency information is focused on learning, the interdependency between characteristic channels is considered by using the pixel and channel attention (PACA) network, the representation capability of the network is improved while fewer parameters are introduced, and the applicability of the network to different scales is improved while the width of the network is increased by using an inclusion structure and an attention mechanism.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, are included to provide a further understanding of the invention; they illustrate exemplary embodiments of the invention and together with the description serve to explain, not limit, the invention.
FIG. 1 is a schematic diagram of a super-resolution image reconstruction process in an embodiment of the present invention;
FIG. 2 is a schematic diagram of a super-resolution image reconstruction network according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a deep feature extraction model according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a CA unit structure in an embodiment of the present invention;
FIG. 5 is a schematic diagram of the internal principle of a CA unit in an embodiment of the invention;
FIG. 6 is a schematic diagram of a PA unit structure in an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an Inception unit in an embodiment of the invention;
fig. 8(a) is a low-resolution image to be reconstructed, and fig. 8(b) is a high-resolution image after reconstruction.
Detailed Description
The invention is further described with reference to the following figures and examples.
It is to be understood that the following detailed description is exemplary and is intended to provide further explanation of the invention as claimed. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
Example one
Vision is the primary way of acquiring external information, and images, as the direct carrier of visual information, accurately and intuitively reflect the varied information they contain. The resolution of an image refers to the number of pixels it contains and can be used to measure image quality. A high-resolution image has rich details and carries a large amount of information, so it not only gives better visual perception but also makes the contained information easier to understand and process. Image super-resolution takes one or more low-resolution images as input and, combining image processing, artificial intelligence, computer vision and related knowledge, reconstructs the corresponding high-resolution image on top of the existing imaging system by means of a specific algorithm and processing flow.
At present, image super-resolution methods can be mainly classified into the following three categories: interpolation-based, reconstruction-based, and learning-based super-resolution reconstruction.
(1) Interpolation-based SR reconstruction assumes that a newly added pixel is related only to the pixel values around it; it mainly includes nearest-neighbor interpolation, bilinear interpolation, and bicubic interpolation.
(2) Reconstruction-based SR techniques exploit prior information from a large number of images to help complete the reconstruction of the HR image. Classical reconstruction-based super-resolution algorithms include iterative back-projection, projection onto convex sets, and maximum a posteriori probability. These methods are computationally expensive.
(3) Learning-based SR methods first split the image data into blocks and build sample libraries of low-resolution and high-resolution images, then learn the mapping between corresponding LR and HR patches, and finally reconstruct the high-resolution image from a low-resolution input using the learned mapping. Common learning-based methods are manifold learning, sparse representation, and deep learning. The most studied and effective approach at present is deep-learning-based image super-resolution.
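As an illustration of the interpolation family in (1), nearest-neighbor upscaling can be sketched in a few lines of plain Python; the function name and toy image below are illustrative, not part of the patent:

```python
def nearest_neighbor_upscale(img, scale):
    """Upscale a 2D grayscale image (list of lists) by an integer factor
    using nearest-neighbor interpolation: each new pixel copies the
    closest source pixel."""
    h, w = len(img), len(img[0])
    out = []
    for i in range(h * scale):
        row = []
        for j in range(w * scale):
            row.append(img[i // scale][j // scale])
        out.append(row)
    return out

img = [[0, 10],
       [20, 30]]
# each source pixel becomes a 2x2 block in the output
print(nearest_neighbor_upscale(img, 2))
```

Bilinear and bicubic interpolation replace the block copy with a weighted average of the 4 or 16 nearest source pixels, which is why they look smoother but blur edges.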
In recent years, with the development of deep learning, image super-resolution reconstruction has advanced as well. In 2017, Lim et al. built the very wide network EDSR and the very deep network MDSR (about 165 layers) from simplified residual blocks; their modifications were mainly to ResNet. The authors removed unnecessary blocks in the residual structure, such as the batch normalization (BN) layer, and the results proved this effective: because the BN layer normalizes the features, it removes the range flexibility of the network, so deleting it is preferable and allows the model to be enlarged to improve result quality. In addition, since a BN layer consumes the same amount of memory as the preceding convolutional layer, removing it also reduced the GPU memory usage of EDSR. The EDSR authors argue that the simplest way to improve a network model's performance is to increase the number of parameters, by stacking more layers in the convolutional neural network or increasing the number of filters.
However, it is difficult to achieve better improvement by simply stacking the residual blocks to build a deeper network. Whether deeper networks can further contribute to the image SR, and how to build very deep trainable networks remains to be explored.
As shown in fig. 1, to solve the above problem, the present embodiment provides an image super-resolution reconstruction method based on a residual channel attention network, including:
s1: acquiring a low-resolution image to be reconstructed;
s2: obtaining a high-resolution reconstruction image according to the low-resolution image to be reconstructed and the image super-resolution reconstruction model; the construction process of the image super-resolution reconstruction model comprises the following steps:
s201: obtaining shallow layer characteristics through a shallow layer characteristic channel, and extracting shallow layer characteristics X by adopting a first convolution network0
That is, the first convolutional network is represented as one convolutional layer:
X0=HSFE(ILR)(1)
wherein HSFE () represents the convolution function, and the kernel size is 3X3, so as to obtain the shallow feature X0
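A minimal single-channel sketch of such a 3×3 convolution layer H_SFE, written in plain Python with zero padding, a toy kernel, and no bias or learned weights (all names and values are illustrative):

```python
def conv2d_3x3(img, kernel):
    """Zero-padded 3x3 convolution over a single-channel image, a toy
    stand-in for the shallow feature extraction layer H_SFE."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            acc = 0.0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < h and 0 <= jj < w:  # zero padding
                        acc += kernel[di + 1][dj + 1] * img[ii][jj]
            out[i][j] = acc
    return out

identity = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
img = [[1.0, 2.0], [3.0, 4.0]]
print(conv2d_3x3(img, identity))  # identity kernel reproduces the input
```

A real layer learns C such kernels per input channel and sums across channels; the spatial arithmetic is the same.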
As shown in figs. 2-3, S202: a deep feature extraction model is constructed based on the residual channel attention network, and the deep feature X_1 is extracted from the shallow feature X_0 by the deep feature extraction model.
The deep feature extraction model comprises a plurality of pixel-and-channel attention (PACA) networks. Each PACA network contains a channel attention unit (CA), a pixel attention unit (PA), and an Inception unit; the three units are arranged in parallel, and a residual structure is added around the parallel group. The channel attention unit, pixel attention unit and Inception unit process the input shallow features simultaneously, and the processing results of the units are then fused.
The process of image processing of the deep feature extraction model comprises the following steps:
selectively enhancing useful information and suppressing useless information by re-weighting the filter responses of all channels, based on the channel attention unit (CA), to obtain a one-dimensional (C×1×1) attention feature vector;
obtaining an attention map using a 1×1 convolutional layer and a sigmoid function, based on the pixel attention unit (PA), and then multiplying it by the input features to generate a 3D (C×H×W) matrix as the attention feature;
based on the Inception unit, first reducing the number of channels by 1×1 convolution to aggregate visual information, then performing feature extraction and pooling at different scales to obtain multi-scale information (the Inception unit has several parallel branches), and finally superposing the features for output, i.e. stacking the output features of the four branches.
As shown in figs. 4-5, the specific process of obtaining the one-dimensional (C×1×1) attention feature vector includes:
Firstly, global spatial information related to each channel is converted into a channel descriptor using global average pooling. Let the input be X = [X_1, ..., X_c, ..., X_C], consisting of C feature maps of size H×W; the descriptor z is obtained by shrinking X through the spatial dimensions H×W.
The c-th element of z is determined by equation (2):
z_c = H_GP(X_c) = (1 / (H×W)) Σ_{i=1}^{H} Σ_{j=1}^{W} X_c(i, j)    (2)
where X_c(i, j) is the value of the c-th feature map X_c at location (i, j) and H_GP(·) denotes the global pooling function. Such channel statistics can be seen as a collection of local descriptors whose statistics help to express the entire image.
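Equation (2) is ordinary global average pooling. A plain-Python sketch over nested lists, with illustrative names and a toy two-channel input:

```python
def global_avg_pool(features):
    """Squeeze step of channel attention: collapse each of the C feature
    maps (H x W) to a single channel descriptor z_c, the mean over all
    H*W positions, as in equation (2)."""
    z = []
    for fmap in features:                       # one H x W map per channel
        total = sum(sum(row) for row in fmap)
        count = len(fmap) * len(fmap[0])
        z.append(total / count)
    return z

# C = 2 channels, each a 2 x 2 feature map
X = [[[1.0, 3.0], [5.0, 7.0]],
     [[0.0, 0.0], [0.0, 4.0]]]
print(global_avg_pool(X))  # [4.0, 1.0]
```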
In order to fully capture the dependency between channels from the aggregated information through the global averaging pool, a gating mechanism is introduced in CA.
The gating mechanism should satisfy two conditions: first, it must be able to learn the nonlinear interactions between channels; second, it must learn a non-mutually-exclusive relationship, since multiple channel-wise features can be emphasized at once rather than a single channel being activated exclusively.
In this embodiment, a simple gating mechanism with a sigmoid function is used, as shown in equation (3):
s = f(W_U δ(W_D z))    (3)
where f(·) and δ(·) denote the sigmoid gate and the ReLU function, respectively. W_D is the weight set of the convolutional layer that performs channel reduction with reduction ratio r. After ReLU activation, the low-dimensional signal is sent to a channel-upscaling layer with weight set W_U, which increases the channel count by the ratio r. The final channel statistic s is then used to rescale the input X_c; that is, the channel statistic is multiplied element-wise with the input feature map of the CA unit:
X̂_c = s_c · X_c
Through this series of operations, the network focuses on features that carry more useful information.
As shown in fig. 6, the pixel attention unit (PA) works as follows: PA generates a 3D attention map instead of a 1D attention vector or a 2D map. This attention mechanism introduces fewer additional parameters but produces better SR results.
Channel attention is intended to obtain a one-dimensional (C×1×1) attention feature vector, while spatial attention is intended to obtain a two-dimensional (1×H×W) attention map.
Here C is the number of channels, and H and W are the height and width of the feature, respectively. Unlike these, pixel attention generates a 3D (C×H×W) matrix as the attention feature; in other words, pixel attention produces an attention coefficient for every pixel of the feature map.
Pixel attention uses only a 1×1 convolutional layer and a sigmoid function to obtain the attention map, which is then multiplied by the input features.
Denoting the input and output feature maps as x_{k-1} and x_k respectively, the PA layer is computed as
x_k = f_PA(x_{k-1}) · x_{k-1}    (4)
where f_PA(·) is a 1×1 convolutional layer followed by a sigmoid function.
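Equation (4) can be sketched as follows, with the 1×1 convolution written out as a per-pixel channel mix; the identity weight matrix is a toy stand-in for learned weights:

```python
import math

def pixel_attention(x, w):
    """Pixel attention sketch of equation (4): a 1x1 convolution with
    weight matrix w (C_out x C_in) mixes channels independently at every
    pixel, a sigmoid turns the result into a per-pixel, per-channel gate
    (a full C x H x W attention map), and the gate multiplies the input
    feature. Toy weights, no bias."""
    C, H, W = len(x), len(x[0]), len(x[0][0])
    out = [[[0.0] * W for _ in range(H)] for _ in range(C)]
    for i in range(H):
        for j in range(W):
            for co in range(C):
                a = sum(w[co][ci] * x[ci][i][j] for ci in range(C))
                gate = 1.0 / (1.0 + math.exp(-a))   # sigmoid
                out[co][i][j] = gate * x[co][i][j]  # f_PA(x) * x
    return out

x = [[[1.0, -1.0]], [[2.0, 0.0]]]                   # C=2, H=1, W=2
y = pixel_attention(x, w=[[1.0, 0.0], [0.0, 1.0]])  # identity channel mix
print(y)
```

Note that every position (c, i, j) gets its own coefficient, which is what distinguishes pixel attention from the channel (C×1×1) and spatial (1×H×W) variants above.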
As shown in fig. 7, the Inception unit works as follows:
in the concept, generally, the most secure method for improving the network performance is to increase the width and depth of the network, which is accompanied by side effects.
Firstly, the deeper and wider networks usually mean that huge parameters exist, when the data amount is small, the trained networks are easy to over-fit, and when the networks have the deeper depth, the gradient disappearance phenomenon is easy to cause, the two side effects restrict the development of the deeper and wider convolutional neural networks, and the Incep network well solves the two problems.
According to the Incep structure, the number of channels is reduced through 1x1 convolution to gather information, feature extraction and pooling of different scales are performed to obtain information of multiple scales, and finally, the features are output in a superposition mode, so that the width of a network is increased, and the applicability of the network to different scales is also increased.
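A toy 1-D sketch of the multi-branch idea, with moving averages standing in for the different convolution kernel sizes; the 1×1 channel-reduction step is omitted, and all names and widths are illustrative:

```python
def avg_filter(x, k):
    """Same-padded moving average of width k over a 1D signal, a
    stand-in for a k x k convolution branch."""
    n, r = len(x), k // 2
    out = []
    for i in range(n):
        window = x[max(0, i - r):min(n, i + r + 1)]
        out.append(sum(window) / len(window))
    return out

def max_filter(x, k):
    """Same-padded max pooling of width k."""
    n, r = len(x), k // 2
    return [max(x[max(0, i - r):min(n, i + r + 1)]) for i in range(n)]

def inception_branches(x):
    """Inception-style module sketch: parallel branches with different
    receptive-field sizes, whose outputs are stacked ('concatenated')
    as extra channels."""
    return [
        x[:],              # 1x1-like branch: pass-through
        avg_filter(x, 3),  # 3x3-like branch
        avg_filter(x, 5),  # 5x5-like branch
        max_filter(x, 3),  # pooling branch
    ]

out = inception_branches([1.0, 4.0, 2.0, 8.0])
print(len(out))  # 4 branches concatenated
```

Each branch sees the same input at a different scale, which is why the concatenated output adapts better to features of varying size.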
S203: the deep feature X_1 is expanded to obtain the enlarged deep feature X_2.
The deep feature X_1 is enlarged by an up-sampling module:
X_2 = H_US(X_1)    (5)
where H_US denotes the function of the up-sampling module.
S204: the enlarged deep feature X_2 is reconstructed by a second convolutional layer to obtain a high-resolution image with the same number of channels as the low-resolution image to be reconstructed.
X_2 denotes the enlarged features; after one convolutional layer, the reconstruction is
I_SR = H_REC(X_2) = H_RPACA(I_LR)    (6)
where H_REC(·) denotes the reconstruction layer and H_RPACA(·) denotes the function of the entire network.
S205: the proposed network structure is optimized by a loss function; several loss functions exist, such as L1, L2, adversarial loss, and perceptual loss.
To demonstrate the effectiveness of the invention, the L1 loss function is used.
Existing image super-resolution methods have used the CA, PA and Inception modules, but only independently in their respective networks; the invention combines the three so that each contributes its own advantages.
The present invention sets all convolutional layers, except the one after concat and those in the upsampling module, to 3×3, using padding to keep the size fixed. The network was trained on the 800 training images of DIV2K and tested on four standard benchmark datasets: Set5, Set14, B100, and Urban100. SR results were evaluated with PSNR and SSIM on the Y channel of the transformed YCbCr space.
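PSNR, the first of the two reported metrics, is straightforward to compute from the mean squared error; a plain-Python sketch for grayscale images with toy data:

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio between two same-sized grayscale
    images (lists of lists), the metric used to score SR results:
    PSNR = 10 * log10(peak^2 / MSE)."""
    h, w = len(ref), len(ref[0])
    mse = sum((ref[i][j] - test[i][j]) ** 2
              for i in range(h) for j in range(w)) / (h * w)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

a = [[100, 100], [100, 100]]
b = [[100, 100], [100, 110]]  # one pixel off by 10 -> MSE = 25
print(round(psnr(a, b), 2))
```

SSIM is more involved (local luminance, contrast, and structure terms over sliding windows) and is usually taken from a library rather than hand-rolled.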
Fig. 8(a) shows the original low-resolution image to be reconstructed, and fig. 8(b) the reconstructed high-resolution image; the method proposed by the invention (RPACA) was also compared with other methods, giving the following quantitative analysis.
Table 1: quantitative comparison with other methods (the table is reproduced as an image in the original publication; its numerical values are not recoverable here).
Example two
The embodiment provides an image super-resolution reconstruction system based on a residual channel attention network, which comprises: an image acquisition module configured to: acquiring a low-resolution image to be reconstructed;
a high resolution image reconstruction module configured to: obtain a high-resolution reconstructed image from the low-resolution image to be reconstructed and the image super-resolution reconstruction model; the construction of the model comprises shallow feature extraction and deep feature extraction: shallow features are obtained through a shallow feature channel; a deep feature extraction model is built on the residual channel attention network, and deep features are extracted using the shallow feature extraction model and the deep feature extraction model; the deep feature extraction model comprises a plurality of pixel-and-channel attention (PACA) networks, in each of which a channel attention unit, a pixel attention unit and an Inception unit are arranged in parallel, with a residual connection added around them at the outer layer.
EXAMPLE III
The present embodiment provides a computer readable storage medium having stored thereon a computer program which, when being executed by a processor, carries out the steps of the residual channel attention network based image super resolution reconstruction method as described above.
Example four
The present embodiment provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the program, the processor implements the steps in the image super-resolution reconstruction method based on the residual channel attention network as described above.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. The image super-resolution reconstruction method based on the residual channel attention network is characterized by comprising the following steps of:
acquiring a low-resolution image to be reconstructed;
obtaining a high-resolution reconstructed image from the low-resolution image to be reconstructed and the image super-resolution reconstruction model; the construction of the image super-resolution reconstruction model comprises: shallow feature extraction and deep feature extraction, wherein shallow features are obtained through a shallow feature channel, a deep feature extraction model is constructed based on the residual channel attention network, and deep features are extracted according to the shallow feature extraction model and the deep feature extraction model; the deep feature extraction model comprises a plurality of pixel-and-channel attention networks, in each of which a channel attention unit, a pixel attention unit and an Inception unit are arranged in parallel, with a residual structure added in parallel at the outer layer.
2. The image super-resolution reconstruction method based on the residual channel attention network according to claim 1, characterized in that the image processing performed by the deep feature extraction model comprises:
obtaining, by the channel attention unit, a one-dimensional attention feature vector by re-weighting the filter responses of all channels;
obtaining, by the pixel attention unit, an attention map using a 1×1 convolutional layer and a sigmoid function, and multiplying the attention map by the input features to generate a three-dimensional matrix as the attention features;
in the Inception unit, reducing the number of channels through a 1×1 convolution to aggregate visual information, performing feature extraction and pooling at different scales to obtain multi-scale information, and finally superposing and outputting the features.
3. The image super-resolution reconstruction method based on the residual channel attention network according to claim 2, characterized in that the specific process of obtaining the one-dimensional attention feature vector comprises:
converting the channel-wise global spatial information into channel descriptors by global average pooling;
introducing a gating mechanism to fully capture the inter-channel dependencies from the aggregated information;
obtaining final channel statistics for rescaling the input features, and multiplying the channel statistics by the input feature map of the channel attention unit.
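The channel-attention computation of claim 3 can be sketched in NumPy as follows. The two-layer gating with a channel-reduction ratio, and all shapes and weights, are illustrative assumptions rather than the patent's actual parameters.

```python
import numpy as np

def channel_attention(x, w_down, w_up):
    """Channel attention per claim 3: global average pooling turns each
    channel's spatial information into a channel descriptor; a gating
    mechanism (channel reduction + ReLU, then expansion + sigmoid) captures
    inter-channel dependencies; the resulting channel statistics rescale
    the input feature map.
    x: (C, H, W) features; w_down: (C//r, C); w_up: (C, C//r)."""
    z = x.mean(axis=(1, 2))                  # global average pooling -> (C,)
    s = np.maximum(w_down @ z, 0.0)          # channel reduction + ReLU
    s = 1.0 / (1.0 + np.exp(-(w_up @ s)))    # expansion + sigmoid gate in (0, 1)
    return x * s[:, None, None]              # rescale each channel

rng = np.random.default_rng(0)
C, r = 8, 2
x = rng.standard_normal((C, 6, 6))
out = channel_attention(x, rng.standard_normal((C // r, C)) * 0.1,
                        rng.standard_normal((C, C // r)) * 0.1)
print(out.shape)  # (8, 6, 6)
```

Because the gate values lie in (0, 1), each channel of the output is a damped copy of the input channel, which is exactly the "rescaling" the claim describes.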
4. The image super-resolution reconstruction method based on the residual channel attention network according to claim 2, characterized in that the process of generating the three-dimensional matrix as the attention features comprises: obtaining an attention map using a 1×1 convolutional layer and a sigmoid function, and multiplying the attention map by the input features.
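The pixel-attention step of claim 4 can be sketched as follows; a 1×1 convolution is equivalent to a linear map over the channel axis applied at every pixel, which is how it is implemented here. The weight shape is an illustrative assumption.

```python
import numpy as np

def pixel_attention(x, w):
    """Pixel attention per claim 4: a 1x1 convolution followed by a sigmoid
    produces a 3-D attention map, which is multiplied element-wise with the
    input features. x: (C, H, W); w: (C_out, C) 1x1-conv weights."""
    a = np.einsum('oc,chw->ohw', w, x)       # 1x1 convolution over channels
    a = 1.0 / (1.0 + np.exp(-a))             # sigmoid attention map in (0, 1)
    return x * a                             # attended (C, H, W) features

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 5, 5))
att = pixel_attention(x, rng.standard_normal((4, 4)) * 0.1)
print(att.shape)  # (4, 5, 5)
```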
5. The image super-resolution reconstruction method based on the residual channel attention network according to claim 2, characterized in that the Inception unit comprises a plurality of convolutional layers connected in parallel, the Inception unit performing a plurality of convolution or pooling operations on the input image in parallel.
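The Inception unit of claims 2 and 5 can be sketched as below. The branches are simplified to further 1×1 convolutions plus a 3×3 average-pooling branch; the patent does not fix the actual kernel sizes or branch count, so those are assumptions of this sketch.

```python
import numpy as np

def conv1x1(x, w):
    # pointwise (1x1) convolution: mixes channels, leaves spatial dims alone
    return np.einsum('oc,chw->ohw', w, x)

def inception_unit(x, w_reduce, branch_ws):
    """Inception-style unit per claim 5: a 1x1 convolution first reduces the
    channel count to aggregate visual information, parallel branches then
    extract features (here: 1x1 convolutions and a 3x3 average-pooling
    branch with stride 1 and edge padding), and the branch outputs are
    concatenated (superposed) along the channel axis."""
    x = conv1x1(x, w_reduce)
    outs = [conv1x1(x, w) for w in branch_ws]
    padded = np.pad(x, ((0, 0), (1, 1), (1, 1)), mode='edge')
    pooled = sum(padded[:, i:i + x.shape[1], j:j + x.shape[2]]
                 for i in range(3) for j in range(3)) / 9.0
    outs.append(pooled)                      # pooling branch, same shape as x
    return np.concatenate(outs, axis=0)

rng = np.random.default_rng(2)
x = rng.standard_normal((8, 6, 6))
w_reduce = rng.standard_normal((4, 8)) * 0.1               # 8 -> 4 channels
branches = [rng.standard_normal((4, 4)) * 0.1 for _ in range(2)]
y = inception_unit(x, w_reduce, branches)
print(y.shape)  # (12, 6, 6): two conv branches + pooling branch, 4 channels each
```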
6. The image super-resolution reconstruction method based on the residual channel attention network according to claim 1, characterized in that after the deep features are obtained, they are enlarged to yield the upscaled deep features, and the upscaled deep features are reconstructed through a convolutional layer to obtain a high-resolution image with the same number of channels as the low-resolution image to be reconstructed.
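Claim 6 only states that the deep features are enlarged before the final reconstruction convolution; sub-pixel rearrangement (pixel shuffle) is a common way to realize this step in super-resolution networks and is assumed here for illustration.

```python
import numpy as np

def pixel_shuffle(x, s):
    """Sub-pixel upsampling: rearranges a (C*s^2, H, W) feature tensor into
    (C, H*s, W*s), so the spatial resolution grows by the scale factor s
    while the channel count shrinks by s^2."""
    c2, h, w = x.shape
    c = c2 // (s * s)
    x = x.reshape(c, s, s, h, w)
    x = x.transpose(0, 3, 1, 4, 2)           # interleave the s x s sub-grids
    return x.reshape(c, h * s, w * s)

x = np.arange(2 * 4 * 3 * 3, dtype=float).reshape(2 * 4, 3, 3)
y = pixel_shuffle(x, 2)
print(y.shape)  # (2, 6, 6)
```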
7. The image super-resolution reconstruction method based on the residual channel attention network according to claim 1, characterized in that, during the training of the image super-resolution reconstruction model, a loss function is used to constrain the difference between the reconstructed high-resolution image and the corresponding ground-truth high-resolution image, and the parameters of the model are continuously adjusted until the model converges, completing the training of the model.
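Claim 7 does not name a specific loss function; mean absolute error (L1) is a common choice for training super-resolution networks and is assumed in this sketch.

```python
import numpy as np

def l1_loss(sr, hr):
    """Mean absolute error between the reconstructed image `sr` and the
    ground-truth high-resolution image `hr`. L1 is a common super-resolution
    training loss; the patent only states that a loss function constrains
    the reconstruction during training."""
    return float(np.abs(sr - hr).mean())

hr = np.ones((3, 8, 8))
sr = np.full((3, 8, 8), 0.75)
print(l1_loss(sr, hr))  # 0.25
```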
8. An image super-resolution reconstruction system based on a residual channel attention network, characterized by comprising:
an image acquisition module configured to acquire a low-resolution image to be reconstructed; and
a high-resolution image reconstruction module configured to obtain a high-resolution reconstructed image from the low-resolution image to be reconstructed using an image super-resolution reconstruction model; wherein the construction of the image super-resolution reconstruction model comprises shallow feature extraction and deep feature extraction: shallow features are obtained through a shallow feature extraction channel; a deep feature extraction model is built on the residual channel attention network, and deep features are extracted from the shallow features by the deep feature extraction model; the deep feature extraction model comprises a plurality of pixel-and-channel attention blocks, in each of which a channel attention unit, a pixel attention unit and an Inception unit are arranged in parallel, with a residual connection added in parallel at the outer layer.
9. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the image super-resolution reconstruction method based on the residual channel attention network according to any one of claims 1 to 7.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the program, implements the steps of the image super-resolution reconstruction method based on the residual channel attention network according to any one of claims 1 to 7.
CN202111581236.5A 2021-12-22 2021-12-22 Image super-resolution reconstruction method and system based on residual channel attention network Pending CN114429422A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111581236.5A CN114429422A (en) 2021-12-22 2021-12-22 Image super-resolution reconstruction method and system based on residual channel attention network


Publications (1)

Publication Number Publication Date
CN114429422A true CN114429422A (en) 2022-05-03

Family

ID=81311513

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111581236.5A Pending CN114429422A (en) 2021-12-22 2021-12-22 Image super-resolution reconstruction method and system based on residual channel attention network

Country Status (1)

Country Link
CN (1) CN114429422A (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114757832A (en) * 2022-06-14 2022-07-15 之江实验室 Face super-resolution method and device based on cross convolution attention antagonistic learning
CN114972041A (en) * 2022-07-28 2022-08-30 中国人民解放军国防科技大学 Polarization radar image super-resolution reconstruction method and device based on residual error network
CN114972041B (en) * 2022-07-28 2022-10-21 中国人民解放军国防科技大学 Polarization radar image super-resolution reconstruction method and device based on residual error network
CN115358932A (en) * 2022-10-24 2022-11-18 山东大学 Multi-scale feature fusion face super-resolution reconstruction method and system
CN115908206A (en) * 2023-03-13 2023-04-04 中国石油大学(华东) Remote sensing image defogging method based on dynamic characteristic attention network
CN116188584B (en) * 2023-04-23 2023-06-30 成都睿瞳科技有限责任公司 Method and system for identifying object polishing position based on image
CN116188584A (en) * 2023-04-23 2023-05-30 成都睿瞳科技有限责任公司 Method and system for identifying object polishing position based on image
CN117078516A (en) * 2023-08-11 2023-11-17 济宁安泰矿山设备制造有限公司 Mine image super-resolution reconstruction method based on residual mixed attention
CN117078516B (en) * 2023-08-11 2024-03-12 济宁安泰矿山设备制造有限公司 Mine image super-resolution reconstruction method based on residual mixed attention
CN116934598A (en) * 2023-09-19 2023-10-24 湖南大学 Multi-scale feature fusion light-weight remote sensing image superdivision method and system
CN116934598B (en) * 2023-09-19 2023-12-01 湖南大学 Multi-scale feature fusion light-weight remote sensing image superdivision method and system
CN117291846A (en) * 2023-11-27 2023-12-26 北京大学第三医院(北京大学第三临床医学院) OCT system applied to throat microsurgery and image denoising method
CN117291846B (en) * 2023-11-27 2024-02-27 北京大学第三医院(北京大学第三临床医学院) OCT system applied to throat microsurgery and image denoising method

Similar Documents

Publication Publication Date Title
CN114429422A (en) Image super-resolution reconstruction method and system based on residual channel attention network
Wu et al. Fast end-to-end trainable guided filter
Du et al. Super-resolution reconstruction of single anisotropic 3D MR images using residual convolutional neural network
WO2017106998A1 (en) A method and a system for image processing
Sun et al. Lightweight image super-resolution via weighted multi-scale residual network
Chen et al. Multi-attention augmented network for single image super-resolution
Zhou et al. Volume upscaling with convolutional neural networks
Du et al. Accelerated super-resolution MR image reconstruction via a 3D densely connected deep convolutional neural network
CN110136067B (en) Real-time image generation method for super-resolution B-mode ultrasound image
CN111507462A (en) End-to-end three-dimensional medical image super-resolution reconstruction method and system
Li et al. Single image super-resolution reconstruction based on genetic algorithm and regularization prior model
CN104463785A (en) Method and device for amplifying ultrasound image
CN112070752A (en) Method, device and storage medium for segmenting auricle of medical image
US20210074034A1 (en) Methods and apparatus for neural network based image reconstruction
CN116188272B (en) Two-stage depth network image super-resolution reconstruction method suitable for multiple fuzzy cores
Sindel et al. Learning from a handful volumes: MRI resolution enhancement with volumetric super-resolution forests
Rashid et al. Single MR image super-resolution using generative adversarial network
Du et al. Expectation-maximization attention cross residual network for single image super-resolution
Shen et al. Local to non-local: Multi-scale progressive attention network for image restoration
CN114638761A (en) Hyperspectral image panchromatic sharpening method, device and medium
Du et al. X-ray image super-resolution reconstruction based on a multiple distillation feedback network
Wang et al. Reconstructed densenets for image super-resolution
Wang et al. Two-stream deep sparse network for accurate and efficient image restoration
Jiang et al. Multi-dimensional visual data completion via weighted hybrid graph-Laplacian
Patel et al. Deep Learning in Medical Image Super-Resolution: A Survey

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination