CN115131242A - Lightweight super-resolution reconstruction method based on attention and distillation mechanism - Google Patents


Info

Publication number
CN115131242A
CN115131242A (application CN202210744329.3A)
Authority
CN
China
Prior art keywords
attention
distillation
feature extraction
module
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210744329.3A
Other languages
Chinese (zh)
Other versions
CN115131242B (en)
Inventor
曾坤
李培榕
杨钰
林贵敏
Current Assignee
Minjiang University
Original Assignee
Minjiang University
Priority date
Filing date
Publication date
Application filed by Minjiang University
Priority claimed from CN202210744329.3A
Publication of CN115131242A
Application granted
Publication of CN115131242B
Legal status: Active
Anticipated expiration

Classifications

    • G06T 3/4053 — Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution (G PHYSICS; G06 COMPUTING; G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL)
    • G06T 3/4046 — Scaling of whole images or parts thereof using neural networks
    • G06T 2207/20081 — Training; Learning (indexing scheme for image analysis or image enhancement; special algorithmic details)
    • Y02D 10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a lightweight super-resolution reconstruction method based on attention and a distillation mechanism. The method learns the mapping between low-resolution and high-resolution images through a convolutional neural network. First, the proposed Attention Convolution Attention (ACA) module integrates the distilled features more effectively, making the distillation network more efficient, while its parameters are strictly controlled: one ACA module has only half the parameters of a single 3×3 convolution. Second, the proposed feature extraction module combining feature distillation and attention (DAB) extracts features efficiently, and stacking several such modules yields a lightweight super-resolution reconstruction network (DAN) based on attention and distillation mechanisms.

Description

Lightweight super-resolution reconstruction method based on attention and distillation mechanism
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a lightweight super-resolution reconstruction method and system based on attention and distillation mechanisms.
Background
At present, image super-resolution reconstruction based on convolutional neural networks has made great progress. The initial work was SRCNN, proposed by Dong et al. [2]: they enlarged the input LR image with bicubic interpolation before feeding it into the network, but this clearly adds redundant computational cost and loses high-frequency detail. A research team at Seoul National University in Korea proposed EDSR [4], a network based on ResNet [3], which contributed greatly to super-resolution reconstruction: it not only applied residual networks to super-resolution and greatly increased the network depth, but also creatively removed batch normalization, and it was the winning entry of the NTIRE 2017 super-resolution reconstruction challenge. RCAN, proposed by Zhang et al. [5], introduced channel attention into the super-resolution reconstruction task for the first time and deepened the network further.
However, because of their huge computation and excessive parameters, such networks can only be deployed on hardware with large computing capacity. The main task of lightweight image super-resolution reconstruction networks is to reconstruct images using only low parameter counts and computation. Among lightweight networks, IDN [6], for example, has only 553K parameters, but its results are not satisfactory. This is because most current neural networks improve performance by stacking convolutional layers: VDSR [7] is a 20-layer network, and the improved RCAN has 800 layers.
To address this problem, researchers began exploring lightweight and efficient image super-resolution reconstruction networks. For example, DRRN [8] uses multi-path local residual learning and passes the features in the identity mapping to the following network in branch form. MemNet [9] provides a gating unit to receive the features of each previous module (referred to as "memory"). ECBSR extracts features over multiple paths, making the network more efficient. Hui et al. made breakthrough progress in lightweight super-resolution networks: their original Information Distillation Network (IDN) extracted more useful information with fewer convolutional layers, and their improved method IMDN [10] won the AIM 2019 championship. Liu et al. proposed RFDN [11] based on IMDN, which explicitly divides the extracted features into two parts: one part is output directly, while the important features of the other part are extracted through a distillation mechanism and output to subsequent network modules. RFDN won the AIM 2020-ESR championship.
The prior art documents to which reference may be made include:
[1] Liu J, Tang J, Wu G. Residual feature distillation network for lightweight image super-resolution[C]. Proceedings of the European Conference on Computer Vision. 2020: 41-55.
[2] Dong C, Loy C, He K, et al. Learning a deep convolutional network for image super-resolution[C]. European Conference on Computer Vision. 2014: 184-199.
[3] He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 770-778.
[4] Lim B, Son S, Kim H, et al. Enhanced deep residual networks for single image super-resolution[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 2017: 136-144.
[5] Zhang Y, Li K, Li K, et al. Image super-resolution using very deep residual channel attention networks[C]. Proceedings of the European Conference on Computer Vision. 2018: 286-301.
[6] Hui Z, Wang X, Gao X. Fast and accurate single image super-resolution via information distillation network[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 723-731.
[7] Kim J, Lee J K, Lee K M. Accurate image super-resolution using very deep convolutional networks[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 1646-1654.
[8] Tai Y, Yang J, Liu X. Image super-resolution via deep recursive residual network[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 3147-3155.
[9] Tai Y, Yang J, Liu X, et al. MemNet: a persistent memory network for image restoration[C]. Proceedings of the IEEE International Conference on Computer Vision. 2017: 4539-4547.
[10] Hui Z, Gao X, Yang Y, et al. Lightweight image super-resolution with information multi-distillation network[C]. Proceedings of the 27th ACM International Conference on Multimedia. 2019: 2024-2032.
[11] Liu J, Tang J, Wu G. Residual feature distillation network for lightweight image super-resolution[C]. European Conference on Computer Vision. 2020: 41-55.
Disclosure of Invention
In order to further optimize and improve the prior art, the invention provides a lightweight super-resolution reconstruction method based on attention and distillation mechanisms, which facilitates image super-resolution reconstruction using only low parameter counts and computation.
The method learns the mapping between low-resolution and high-resolution images through a convolutional neural network. First, the proposed Attention Convolution Attention (ACA) module integrates the distilled features more effectively, making the distillation network more efficient, while its parameters are strictly controlled: one ACA module has only half the parameters of a single 3×3 convolution. Second, the proposed feature extraction module combining feature distillation and attention (DAB) extracts features efficiently, and stacking several such modules yields a lightweight super-resolution reconstruction network (DAN) based on attention and distillation mechanisms.
The method introduces a combined attention mechanism that accurately locates the more important feature information among the distilled features and enhances its representation. Balancing model size against reconstruction quality, a feature extraction module combining feature distillation and attention (DAB) is designed, and on this basis a lightweight super-resolution reconstruction network (DAN) based on attention and feature distillation mechanisms is realized.
The experimental results show that this network structure achieves a good balance between model size and reconstruction quality.
In order to achieve the purpose, the invention adopts the technical scheme that:
a light-weight super-resolution reconstruction method based on attention and distillation mechanism is characterized in that a feature extraction network combining attention and feature distillation is used as a depth network for single image super-resolution reconstruction, and an input low-resolution image is mapped to a high-resolution image through the network;
The feature extraction network combining attention and feature distillation comprises a shallow feature extraction module, a nonlinear deep feature extraction module, and an up-sampling reconstruction module.
The shallow feature extraction module uses a 3×3 convolution to extract shallow features from the input low-resolution image;
the nonlinear deep feature extraction module is formed by stacking a plurality of feature extraction modules combining feature distillation and attention;
the up-sampling reconstruction module uses sub-pixel convolution up-sampling.
Further, the feature extraction module combining feature distillation and attention comprises an RFDB feature distillation module and an attention convolution attention module; the attention convolution attention module consists of a cooperative attention module, a 1×1 convolution, and an enhanced spatial attention module, and integrates the information after distillation. With this module, more representative features can be flexibly aggregated and extracted, so that context and intermediate features interact well, which benefits the reconstruction of high-quality SR images.
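A minimal sketch of the attention–convolution–attention pattern is given below. The cooperative attention and enhanced spatial attention blocks are not fully specified in this passage, so simple channel- and spatial-attention stand-ins are used; only the attention → 1×1 convolution → attention ordering follows the text.

```python
import torch
import torch.nn as nn

class ChannelAttentionStub(nn.Module):
    """Squeeze-and-excitation-style stand-in for the cooperative attention block."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
    def forward(self, x):
        return x * self.gate(x)

class SpatialAttentionStub(nn.Module):
    """Single-map stand-in for the enhanced spatial attention (ESA) block."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(channels, 1, 3, padding=1), nn.Sigmoid())
    def forward(self, x):
        return x * self.gate(x)

class ACASketch(nn.Module):
    """Attention -> 1x1 convolution -> attention, applied to distilled features."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.attn1 = ChannelAttentionStub(in_channels)
        self.fuse = nn.Conv2d(in_channels, out_channels, 1)  # fuses while reducing channels
        self.attn2 = SpatialAttentionStub(out_channels)
    def forward(self, x):
        return self.attn2(self.fuse(self.attn1(x)))

print(ACASketch(48, 24)(torch.rand(2, 48, 8, 8)).shape)  # torch.Size([2, 24, 8, 8])
```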
Further, with the input low-resolution image I_LR and the high-resolution image I_HR as the target, the corresponding super-resolution image I_SR is obtained through the network; the process is described by the formula:

I_SR = H_DAN(I_LR)

where H_DAN(·) represents the feature extraction network combining attention and feature distillation.

The network is optimized with an L1 loss function over N pairs of training data {(I_LR^i, I_HR^i)}_{i=1}^N; the loss function is defined as:

L(Θ) = (1/N) Σ_{i=1}^N || H_DAN(I_LR^i) − I_HR^i ||_1

where Θ denotes the learnable parameters of the network model. After training, the model parameters of the feature extraction network combining attention and feature distillation are obtained and used directly for image super-resolution reconstruction.
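The L1 training objective described above can be sketched in PyTorch as follows; the single convolution standing in for H_DAN is purely illustrative (it does not upsample, so LR and HR tensors share a size here only for simplicity).

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 3, 3, padding=1)  # stand-in for H_DAN
l1 = nn.L1Loss()
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)

lr_batch = torch.rand(2, 3, 16, 16)    # I_LR
hr_batch = torch.rand(2, 3, 16, 16)    # I_HR target

optimizer.zero_grad()
sr_batch = model(lr_batch)             # I_SR = H_DAN(I_LR)
loss = l1(sr_batch, hr_batch)          # (1/N) * sum ||H_DAN(I_LR^i) - I_HR^i||_1
loss.backward()
optimizer.step()
```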
Further, the nonlinear deep feature extraction module is formed by stacking n feature extraction modules combining feature distillation and attention:

F_k = H_k(F_{k-1}), k = 1, ..., n

where H_k denotes the k-th feature extraction module combining feature distillation and attention, and F_{k-1} and F_k denote its input and output features, respectively. The output features F_1, ..., F_n are finally concatenated, after which the channels are first reduced with a 1×1 convolution and a 3×3 convolution then smoothly aggregates the features, described by the formula:

y = H_assemble(Concat(F_1, ..., F_n)) + F_0

where y denotes the output of the nonlinear deep feature extraction module and H_assemble is a 1×1 convolution followed by a 3×3 convolution.
The proposed attention convolution attention module is placed after the feature distillation operation. The distillation operation retains important information in the network, but the subsequent operations matter just as much: the distilled information must be integrated, otherwise the network cannot distinguish which features distilled from different layers are important and which are less so. The attention convolution attention module is therefore designed specifically for integrating distilled features, and it integrates them better than the single 1×1 convolution used previously.
Also provided is a lightweight super-resolution reconstruction system based on attention and distillation mechanisms, characterized by comprising a memory, a processor, and computer program instructions stored in the memory and executable by the processor, wherein the computer program instructions, when executed by the processor, implement the method described above.
Compared with the prior art, the beneficial effects of the invention and its optimized scheme are mainly two. 1. The proposed attention convolution attention module integrates the distilled features more effectively, making the distillation network more efficient, while its parameters are strictly controlled: one attention convolution attention module has only half the parameters of a 3×3 convolution. 2. The RFDB module is re-optimized: based on full ablation results, the redundant residual connections are removed, which improves the results without increasing the parameters, and an attention convolution attention module is added at the end. The invention calls this integrated new module the feature extraction module combining feature distillation and attention, and stacking several such modules constitutes the feature extraction network combining attention and feature distillation.
Drawings
FIG. 1 is a schematic diagram of a feature extraction network combining attention and feature distillation in an embodiment of the present invention.
FIG. 2 is a schematic diagram of a feature extraction module combining attention and feature distillation according to an embodiment of the present invention.
FIG. 3 is a block diagram of an attention convolution attention module according to an embodiment of the present invention.
Fig. 4 is a comparison graph of the reconstruction results of the present method and four other methods in the embodiment of the present invention.
Detailed Description
The invention is further explained below with reference to the drawings and the embodiments.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
As shown in fig. 1, the feature extraction network combining attention and feature distillation in this embodiment is divided into three parts: a shallow feature extraction module, a nonlinear deep feature extraction module, and an up-sampling reconstruction module.
In the shallow feature extraction part, a 3×3 convolution extracts shallow features from the input image I_LR:

F_0 = C_s(I_LR)

where C_s(·) denotes the shallow extraction module and F_0 the extracted shallow features.
Next comes the nonlinear deep feature extraction module, which is the key to the super-resolution reconstruction effect. This embodiment stacks n feature extraction modules combining feature distillation and attention to extract image features step by step; the process is described by the formula:

F_k = H_k(F_{k-1}), k = 1, ..., n

where H_k denotes the k-th feature extraction module combining feature distillation and attention, and F_{k-1} and F_k denote its input and output features, respectively. The outputs F_1, ..., F_n of the stacked modules are finally concatenated. These features then need to be aggregated; to reduce the parameters of this operation, the channels are first reduced with a 1×1 convolution, and a 3×3 convolution then smoothly aggregates the features:

y = H_assemble(Concat(F_1, ..., F_n)) + F_0

where y denotes the output of the nonlinear deep feature extraction module and H_assemble is the 1×1 convolution followed by the 3×3 convolution.
The output of the nonlinear deep feature extraction module serves as the input of the up-sampling reconstruction module. This embodiment uses sub-pixel convolution up-sampling.
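Sub-pixel convolution up-sampling can be sketched with PyTorch's PixelShuffle: a convolution expands the channel count by scale², then PixelShuffle rearranges those channels into space. The channel count and scale below are illustrative.

```python
import torch
import torch.nn as nn

scale, channels = 2, 48
upsampler = nn.Sequential(
    nn.Conv2d(channels, 3 * scale ** 2, 3, padding=1),  # 48 -> 12 channels
    nn.PixelShuffle(scale))                             # 12 x H x W -> 3 x 2H x 2W

features = torch.rand(1, channels, 24, 24)
sr = upsampler(features)
print(sr.shape)  # torch.Size([1, 3, 48, 48])
```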
As shown in fig. 2, the feature extraction module combining feature distillation and attention in this embodiment consists of two parts: the top half uses the feature distillation structure from RFDB, and the bottom half uses the attention convolution attention module which, as shown in fig. 3, consists of a cooperative attention module, a 1×1 convolution, and an enhanced spatial attention module [1]. Under the constraints on parameters and computation, preventing information loss and exploiting the intermediate features become very important; with this module, less information is lost and more accurate features are obtained, while model complexity is reduced, promoting application in real scenarios.
Specifically, as shown in fig. 3, the attention convolution attention module first pinpoints the more important feature information in the input with joint attention and enhances its representation. A 1×1 convolution then lets the network fuse the features while reducing the number of channels. Finally, to maximize the module's effectiveness, it is best combined with spatial attention: an enhanced spatial attention block is appended, which obtains more representative features when working at the end of a residual block. With this module, the distilled features are effectively integrated and the network's expression of important information is enhanced.
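The top-half distillation structure can be sketched as follows. The patent does not detail the RFDB internals here, so this follows the general residual-feature-distillation pattern of RFDN [11]: at each step a 1×1 "distillation" convolution keeps part of the features while a 3×3 convolution refines the rest, and the kept parts are concatenated for integration (a 1×1 convolution stands in for the ACA stage). All sizes are illustrative.

```python
import torch
import torch.nn as nn

class DistillationSketch(nn.Module):
    def __init__(self, channels=48, distilled=24, steps=3):
        super().__init__()
        self.distill = nn.ModuleList(nn.Conv2d(channels, distilled, 1) for _ in range(steps))
        self.refine = nn.ModuleList(nn.Conv2d(channels, channels, 3, padding=1) for _ in range(steps))
        self.act = nn.LeakyReLU(0.05, inplace=True)
        self.integrate = nn.Conv2d(distilled * steps, channels, 1)  # stand-in for the ACA stage
    def forward(self, x):
        kept, f = [], x
        for d, r in zip(self.distill, self.refine):
            kept.append(self.act(d(f)))   # distilled branch: kept as-is
            f = self.act(r(f))            # refined branch: fed onward
        return self.integrate(torch.cat(kept, dim=1)) + x

x = torch.rand(1, 48, 16, 16)
print(DistillationSketch()(x).shape)  # torch.Size([1, 48, 16, 16])
```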
The embodiment also provides a lightweight super-resolution reconstruction system based on attention and distillation mechanism, which comprises a memory, a processor and computer program instructions stored on the memory and capable of being executed by the processor, and when the computer program instructions are executed by the processor, the method can be realized.
To verify the effect of this embodiment, experiments with scaling factors 2 and 4 were carried out in the provided test example; the training patches are cropped to 240×240 for scale factor 2 and 320×320 for scale factor 4. Random horizontal flipping and 90-degree rotation augment the data set. The ADAM optimizer is used with β1 = 0.9, β2 = 0.999, and ε = 10^-8. The initial learning rate is 5 × 10^-4, each epoch trains 500 mini-batches, and the learning rate decays by a factor of 0.5 every 400 epochs. The batch size is set to 128 and the network is trained for 1300 epochs with the L1 loss. The learning rate is then set to 1 × 10^-4 and a further 1000 epochs are trained for fine-tuning. All experimental results in the table were obtained with the machine learning library PyTorch (version 1.9.0) and trained on an NVIDIA 3090 GPU.
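The optimizer and learning-rate schedule reported above can be written down as follows; the one-layer model is a stand-in, and only the hyperparameter values come from the text.

```python
import torch

model = torch.nn.Conv2d(3, 3, 3, padding=1)  # stand-in network
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4,
                             betas=(0.9, 0.999), eps=1e-8)
# halve the learning rate every 400 epochs, as described
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=400, gamma=0.5)

for epoch in range(1300):
    # ... 500 mini-batches of size 128 would be trained here with the L1 loss ...
    optimizer.step()   # no-op here (no gradients); keeps the step ordering idiomatic
    scheduler.step()

print(optimizer.param_groups[0]["lr"])  # 5e-4 * 0.5**3 = 6.25e-05 after 1300 epochs
```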
The proposed method is compared with several representative lightweight image super-resolution reconstruction networks. Table 1 shows the comparison results for scale factors 2 and 4 on 5 data sets. The proposed method clearly stands out among these networks: it keeps the model parameters small while achieving superior super-resolution reconstruction.
In fig. 4, the method of this embodiment is compared intuitively with other methods on the Urban100 data set. Comparing the super-resolution (SR) results side by side shows that the proposed method reconstructs more accurate details with less distortion than the other methods. All these results demonstrate the effectiveness and superiority of the proposed DAN.
TABLE 1
(Table 1 is reproduced as images in the original publication; it reports the comparison of the proposed method with representative lightweight networks at scale factors 2 and 4 on five benchmark data sets.)
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is directed to preferred embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow. However, any simple modification, equivalent change and modification of the above embodiments according to the technical essence of the present invention are within the protection scope of the technical solution of the present invention.

Claims (5)

1. A lightweight super-resolution reconstruction method based on attention and distillation mechanisms, characterized in that a feature extraction network combining attention and feature distillation is used as the deep network for single-image super-resolution reconstruction, and an input low-resolution image is mapped to a high-resolution image through the network;
the feature extraction network combining attention and feature distillation comprises: the device comprises a shallow layer feature extraction module, a nonlinear deep layer feature extraction module and an up-sampling reconstruction module;
the shallow feature extraction module uses a 3×3 convolution to extract shallow features from the input low-resolution image;
the nonlinear deep layer feature extraction module is formed by stacking a plurality of feature extraction modules combining feature distillation and attention;
the up-sampling reconstruction module adopts a sub-pixel convolution up-sampling method.
2. The attention and distillation mechanism-based lightweight super-resolution reconstruction method according to claim 1, characterized in that: the feature extraction module combining feature distillation and attention comprises an RFDB feature distillation module and an attention convolution attention module; the attention convolution attention module consists of a cooperative attention module, a 1×1 convolution, and an enhanced spatial attention module to integrate the information after distillation.
3. The attention and distillation mechanism-based lightweight super-resolution reconstruction method according to claim 1, characterized in that: with the input low-resolution image I_LR and the high-resolution image I_HR as the target, the corresponding super-resolution image I_SR is obtained through the network; the process is described by the formula:

I_SR = H_DAN(I_LR)

where H_DAN(·) represents the feature extraction network combining attention and feature distillation;

the network is optimized with an L1 loss function over N pairs of training data {(I_LR^i, I_HR^i)}_{i=1}^N; the loss function is defined as:

L(Θ) = (1/N) Σ_{i=1}^N || H_DAN(I_LR^i) − I_HR^i ||_1

where Θ denotes the learnable parameters of the network model; after training, the model parameters of the feature extraction network combining attention and feature distillation are obtained and used directly for image super-resolution reconstruction.
4. The attention and distillation mechanism-based lightweight super-resolution reconstruction method according to claim 2, characterized in that:
the nonlinear deep layer feature extraction module is formed by stacking n feature extraction modules combining feature distillation and attention;
F k =H k (F k-1 ),k=1,...,n
wherein H k Feature extraction Module, F, representing the k-th combination of feature distillation and attention k-1 And F k Features representing the input and output of the kth feature extraction module combined with attention, respectively; output features F of each feature extraction network combining attention and feature distillation k And finally to be concatenated, after which the channels are first reduced using a 1 x 1 convolution, and then a3 x 3 convolution is used to smoothly aggregate the features, which is described by the formula:
y=H assemble (Concat(F 1 ,...,F n ))+F 0
wherein y represents the output of the non-linear deep feature extraction module, H assemble Is a 1 x 1 convolution followed by a3 x 3 convolution.
5. A lightweight super-resolution reconstruction system based on attention and distillation mechanisms, comprising a memory, a processor, and computer program instructions stored in the memory and executable by the processor, wherein the computer program instructions, when executed by the processor, implement the method of any of claims 1 to 4.
CN202210744329.3A 2022-06-28 2022-06-28 Light-weight super-resolution reconstruction method based on attention and distillation mechanism Active CN115131242B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210744329.3A CN115131242B (en) 2022-06-28 2022-06-28 Light-weight super-resolution reconstruction method based on attention and distillation mechanism

Publications (2)

Publication Number Publication Date
CN115131242A true CN115131242A (en) 2022-09-30
CN115131242B CN115131242B (en) 2023-08-29

Family

ID=83380326

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210744329.3A Active CN115131242B (en) 2022-06-28 2022-06-28 Light-weight super-resolution reconstruction method based on attention and distillation mechanism

Country Status (1)

Country Link
CN (1) CN115131242B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180137603A1 (en) * 2016-11-07 2018-05-17 Umbo Cv Inc. Method and system for providing high resolution image through super-resolution reconstruction
CN112508794A (en) * 2021-02-03 2021-03-16 中南大学 Medical image super-resolution reconstruction method and system
US20210133925A1 (en) * 2019-11-05 2021-05-06 Moxa Inc. Device and Method of Handling Image Super-Resolution
CN113240580A (en) * 2021-04-09 2021-08-10 暨南大学 Lightweight image super-resolution reconstruction method based on multi-dimensional knowledge distillation
CN113837946A (en) * 2021-10-13 2021-12-24 中国电子技术标准化研究院 Lightweight image super-resolution reconstruction method based on progressive distillation network
CN114519667A (en) * 2022-01-10 2022-05-20 武汉图科智能科技有限公司 Image super-resolution reconstruction method and system


Also Published As

Publication number Publication date
CN115131242B (en) 2023-08-29


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant