CN113837946B - Lightweight image super-resolution reconstruction method based on progressive distillation network - Google Patents

Lightweight image super-resolution reconstruction method based on progressive distillation network

Info

Publication number
CN113837946B
CN113837946B (application CN202111191958.XA)
Authority
CN
China
Prior art keywords
convolution
image
resolution
super
channel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111191958.XA
Other languages
Chinese (zh)
Other versions
CN113837946A (en)
Inventor
范科峰
洪开
徐洋
孙文龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Electronics Standardization Institute
Original Assignee
China Electronics Standardization Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Electronics Standardization Institute filed Critical China Electronics Standardization Institute
Priority to CN202111191958.XA priority Critical patent/CN113837946B/en
Publication of CN113837946A publication Critical patent/CN113837946A/en
Application granted granted Critical
Publication of CN113837946B publication Critical patent/CN113837946B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4007Interpolation-based scaling, e.g. bilinear interpolation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4046Scaling the whole image or part thereof using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4053Super resolution, i.e. output image resolution higher than sensor resolution

Abstract

The disclosure relates to a lightweight image super-resolution reconstruction method based on a progressive distillation network, in the technical field of image super-resolution reconstruction. The method improves an existing super-resolution convolutional neural network model as follows: according to the principle of progressively enlarging the receptive field with convolution kernels of different dilation rates, two different progressive distillation connection combinations replace the original feature distillation connections, and asymmetric dilated-convolution residual blocks are adopted, so that the network can fully extract edge and texture feature information of the image with few parameters. A channel shuffle structure improves the hierarchical features of the distillation network connections and further improves feature sharing between channels, thereby improving the accuracy of image super-resolution reconstruction. A multi-scale spatial attention module adaptively re-calibrates the weights of the fused features. The post-upsampling reconstruction part uses an upsampling method based on a three-dimensional pixel attention mechanism, further improving the efficiency of super-resolution image reconstruction.

Description

Lightweight image super-resolution reconstruction method based on progressive distillation network
Technical Field
The disclosure relates to the technical field of image super-resolution reconstruction, in particular to a light-weight image super-resolution reconstruction method based on a progressive distillation network.
Background
With the popularization and rapid development of image display devices, mobile devices, and network infrastructure, the requirements on image and video quality keep increasing, and obtaining higher-resolution images or videos in a more economical and practical way is becoming ever more important. In practice, however, the acquisition and processing of digital images are affected by many factors that reduce image quality, so the images shown on display devices are often far from ideal. How to obtain high-quality images economically and effectively is therefore a problem that urgently needs to be solved.
Single-image super-resolution is a technique that reconstructs a high-resolution image from a low-resolution image through a mapping function. It is an ill-posed problem, because a single low-resolution image can correspond to multiple high-resolution images. To address this, deep-learning-based image super-resolution reconstruction methods learn the non-linear mapping between low-resolution and high-resolution images from a large amount of external data, and then use the learned correspondence to obtain the high-resolution image directly. Such algorithms generally consist of two processes: training, which learns the mapping relationship, and reconstruction, which produces the final result.
Disclosure of Invention
In order to overcome the problems in the related art, a lightweight image super-resolution reconstruction method based on a progressive distillation network is provided.
According to a first aspect of the embodiments of the present disclosure, there is provided a lightweight image super-resolution reconstruction method based on a progressive distillation network, comprising:
decoupling a channel separation mechanism in parallel into a 1 × 1 convolution and a core feature extraction module by providing a progressive distillation module;
reconstructing a high-resolution image by providing an attention upsampling module;
training a convolutional neural network to obtain a high-resolution image; and
reconstructing the super-resolution image with the trained convolutional neural network.
Optionally, in an implementation, decoupling the channel separation mechanism in parallel into the 1 × 1 convolution and the core feature extraction module by providing the progressive distillation module includes:
the output channels of the 1 × 1 convolution number half of the input channels, and the generated image features are used for channel merging; the image features generated by the core feature extraction module are used to further refine the features processed by the previous layer;
the above operation is iterated three times, and a 3 × 3 convolution layer then feeds the channel merging layer to obtain the merged image features.
Optionally, in one implementation, the attention upsampling module employs a voxel attention mechanism.
Optionally, in an implementation, the core feature extraction module includes an asymmetric convolution residual module with dilated convolution and an asymmetric convolution residual module, where:
the asymmetric convolution residual module with dilated convolution consists of four parts: a 1 × 3 dilated convolution (DConv), a 3 × 1 dilated convolution, a ReLU, and an identity mapping;
the asymmetric convolution residual module consists of four parts: a 1 × 3 convolution, a 3 × 1 convolution, a ReLU, and an identity mapping.
Optionally, in an implementation, reconstructing the high-resolution image by providing the attention upsampling module includes:
using six progressive distillation modules, where the first three core extraction modules adopt asymmetric convolution residual modules without dilated convolution and the last three core extraction modules adopt asymmetric convolution residual modules with dilated convolution.
Optionally, in an implementation, a channel shuffle layer operation is performed on the merged image features to obtain a processed merged image.
Optionally, in an implementation, a multi-scale spatial attention mechanism is applied to the processed merged image to obtain an output feature map.
Optionally, in an implementation, applying the multi-scale spatial attention mechanism to the processed merged image to obtain the output feature map includes:
reducing the number of channels with a 1 × 1 convolution;
enlarging the receptive field of the attention mechanism with a combination of strided convolution and max pooling;
making adjustments with dilated convolution, an upsampling function, and a 1 × 1 convolution; and
modeling the channel relationship with a 1 × 1 convolution and outputting a feature map.
Optionally, in an implementation, making the adjustments with the dilated convolution, the upsampling function, and the 1 × 1 convolution includes:
further enlarging the receptive field with dilated convolutions of different dilation rates;
restoring the spatial dimension and the channel dimension with the upsampling function and a 1 × 1 convolution; and
applying a combination of a 1 × 1 convolution and a depth-wise convolution with negligible parameters to the input features to enhance the weights of the key information of the features.
Optionally, in an implementation, reconstructing the super-resolution image with the trained convolutional neural network includes:
inputting the image I_LR to be super-resolved into the trained convolutional neural network to obtain an output image I_SR;
performing bilinear-interpolation upsampling on the image I_LR to obtain an image I_BI of the same size as I_SR; and
adding the pixel values of the corresponding pixels of I_BI and I_SR to obtain the super-resolution image I_F.
The technical solution provided by the embodiments of the disclosure can have the following beneficial effects. According to the principle of progressively enlarging the receptive field with convolution kernels of different dilation rates, asymmetric dilated-convolution residual blocks are used as the core feature extraction module of the distillation network; a channel shuffle structure improves the hierarchical features of the distillation network connections and further improves feature sharing between channels, thereby improving the accuracy of image super-resolution reconstruction; and an attention mechanism adaptively re-calibrates the weights of the fused features. Compared with conventional networks, the method effectively improves the quality of the super-resolution reconstructed image while reducing network parameters and accelerating computation, improving the overall efficiency of super-resolution image reconstruction.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart illustrating a method for lightweight super-resolution image reconstruction based on a progressive distillation network according to an exemplary embodiment.
Fig. 2 is a block diagram of a single local progressive distillation network module shown in accordance with an exemplary embodiment.
Fig. 3 is a diagram illustrating a network architecture in accordance with an example embodiment.
FIG. 4 is a diagram illustrating the effect of super-resolution reconstruction of the same image by different models according to an exemplary embodiment.
Fig. 5 is a table comparing an exemplary embodiment with prior-art methods on the ×2, ×3, and ×4 super-resolution tasks.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
With the popularization and rapid development of image display devices, mobile devices, and network infrastructure, the requirements on image and video quality keep increasing, and obtaining higher-resolution images or videos in a more economical and practical way is becoming ever more important. In practice, however, the acquisition and processing of digital images are affected by many factors that reduce image quality, so the images shown on display devices are often far from ideal. How to obtain high-quality images economically and effectively is therefore a problem that urgently needs to be solved.
Single-image super-resolution is a technique that reconstructs a high-resolution image from a low-resolution image through a mapping function, but it is an ill-posed problem, because a single low-resolution image can correspond to multiple high-resolution images. To address this, deep-learning-based image super-resolution reconstruction methods learn the non-linear mapping between low-resolution and high-resolution images from a large amount of external data, and then use the learned correspondence to obtain the high-resolution image directly. Such algorithms generally consist of two processes: training, which learns the mapping relationship, and reconstruction, which produces the final result.
As networks become deeper, super-resolution performance keeps improving, but such networks rely on stacking convolutional blocks or increasing the number of convolution channels to extract more effective features. This leads to an excessive increase in network parameters, huge computational cost, and high demands on computer memory, making the networks unsuitable for small devices and real-world scenarios. It is therefore necessary to propose a method that reduces the network size, the number of parameters, and the amount of computation while maintaining the super-resolution reconstruction effect.
Based on the above problems, the present disclosure provides a lightweight image super-resolution reconstruction method based on a progressive distillation network. Referring to fig. 1 to fig. 3, the method specifically includes the following steps:
101, decoupling the channel separation mechanism in parallel into a 1 × 1 convolution and a core feature extraction module by providing a progressive distillation module;
the method comprises the following steps of (1) replacing Feature distillation connection of an RFDN super-resolution reconstruction model by a designed progressive distillation module, wherein the progressive distillation module consists of a 1 multiplied by 1 convolution layer, a core Feature extraction layer, a channel merging layer, a channel shuffling layer and a multi-scale space attention mechanism layer;
the progressive distillation module is used for decoupling a channel separation mechanism into a 1 × 1 convolution (generating extraction characteristics) and a core characteristic extraction module (further refining the characteristics for coarse extraction), wherein an output channel of the 1 × 1 convolution is half of an input channel so as to further reduce network parameters, and the generated image characteristics are used for channel combination; the image features generated by the core feature extraction module are used for further refining the processed features of the previous layer; the process is carried out for three cycles, and a conventional 3 multiplied by 3 convolution layer is adopted to enter a channel merging layer smoothly after the third core feature extraction module refines the generated image features; therefore, four parts of combined image features are obtained in the channel combination layer.
The distillation layer operation is a convolution layer with a 1 × 1 kernel, 48 input channels, and 24 output channels; the input 48-channel feature maps are distilled into 24-channel feature maps to obtain the distilled features.
the core feature extraction module of the progressive distillation module is divided into two types, wherein one type is an asymmetric convolution residual module with expansion convolution, and the asymmetric convolution residual module consists of four parts, namely 1 multiplied by 3DConv (expansion convolution), 3 multiplied by 1DConv (expansion convolution), reLU (activation function) and identity mapping; the other asymmetric convolution residual module consists of four parts, namely 1 multiplied by 3Conv, 3 multiplied by 1Conv, reLU (activation function) and identity mapping; decomposing the common 3 × 3 convolution into a group of 1 × 3 convolution and 3 × 1 convolution, and reducing 33% of convolution calculation amount under the condition of ensuring that the convolution receptive field is not reduced; however, in the core feature extraction module, only a single convolution processes the upper layer features, and still because the narrow receptive field cannot sufficiently extract the texture information of the image, the receptive field of the convolution kernel is amplified to different degrees by adopting the expansion convolutions with different expansion rates, so that the edge and texture feature information of the image can be extracted;
Since the dilated convolution inserts holes between pixel units, some information is lost when extracting features, especially in the lower layers of the network. The best super-resolution reconstruction performance is therefore achieved by combining the two core feature extraction modules. The disclosure uses six progressive distillation modules: the first three core extraction modules adopt asymmetric convolution residual modules without dilated convolution to avoid losing important feature information, and the last three adopt asymmetric convolution residual modules with dilated convolution to fully extract the edge and texture feature information of the image.
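As an illustration only, a minimal PyTorch sketch of the asymmetric convolution residual block described above is given below; the class name, the channel argument, and the placement of the residual addition after the ReLU are assumptions, since the text only specifies the 1 × 3 convolution, 3 × 1 convolution, ReLU, and identity mapping, with optional dilation.

```python
import torch
import torch.nn as nn

class AsymConvResBlock(nn.Module):
    """Asymmetric convolution residual block: 1x3 conv -> 3x1 conv -> ReLU, plus identity."""
    def __init__(self, channels: int, dilation: int = 1):
        super().__init__()
        # Padding keeps the spatial size unchanged for any dilation rate.
        self.conv1x3 = nn.Conv2d(channels, channels, kernel_size=(1, 3),
                                 padding=(0, dilation), dilation=(1, dilation))
        self.conv3x1 = nn.Conv2d(channels, channels, kernel_size=(3, 1),
                                 padding=(dilation, 0), dilation=(dilation, 1))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.act(self.conv3x1(self.conv1x3(x)))
        return out + x  # identity mapping (residual connection)
```

With dilation = 1 this reduces to the plain asymmetric residual module used in the first three modules; a dilation rate greater than 1 gives the dilated variant used in the last three.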
The channel merging layer operation calls the Concatenate function to merge the four groups of distilled image features, obtaining image features with 96 channels.
The channel shuffle layer operation shuffles the merged 96-channel image features, further improving feature sharing between channels and ensuring that the merged feature information flows between different groups, thereby improving the accuracy of image super-resolution reconstruction.
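For illustration, the following sketch (my reconstruction, not the original implementation) wires a 1 × 1 distillation convolution and a core block together three times, appends a 3 × 3 branch, concatenates the four branches, and shuffles the channels, following the description above. The channel widths (48 in, 4 × 24 = 96 merged) match the numbers given, while the helper names and the reuse of the AsymConvResBlock sketch are assumptions.

```python
import torch
import torch.nn as nn

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Reorder channels so that merged feature information flows between groups."""
    b, c, h, w = x.shape
    x = x.view(b, groups, c // groups, h, w).transpose(1, 2).contiguous()
    return x.view(b, c, h, w)

class ProgressiveDistillationModule(nn.Module):
    def __init__(self, channels: int = 48):
        super().__init__()
        distilled = channels // 2  # 1x1 output channels are half of the input channels
        self.distill = nn.ModuleList([nn.Conv2d(channels, distilled, 1) for _ in range(3)])
        self.refine = nn.ModuleList([AsymConvResBlock(channels) for _ in range(3)])
        self.last = nn.Conv2d(channels, distilled, 3, padding=1)  # conventional 3x3 branch

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        branches, feat = [], x
        for distill, refine in zip(self.distill, self.refine):
            branches.append(distill(feat))   # coarse distilled features kept for merging
            feat = refine(feat)              # progressively refined features
        branches.append(self.last(feat))     # fourth branch after the third refinement
        merged = torch.cat(branches, dim=1)  # e.g. 4 x 24 = 96 channels
        return channel_shuffle(merged, groups=4)
```

The multi-scale spatial attention described next would then operate on the shuffled 96-channel output.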
After the channel shuffle layer, a multi-scale spatial attention mechanism adaptively re-calibrates the weights of the fused features. First, a 1 × 1 convolution reduces the number of channels to keep the whole attention mechanism lightweight; a combination of strided convolution and max pooling then enlarges the receptive field of the attention mechanism; dilated convolutions with different dilation rates further enlarge the receptive field to extract more detailed features; and an upsampling function and a 1 × 1 convolution restore the spatial and channel dimensions relative to the preceding layers. In parallel, a combination of a 1 × 1 convolution and a depth-wise convolution with negligible parameters is applied directly to the input features to enhance the weights of the key information of the features.
Finally, the channel relationship is modeled by a 1 × 1 convolution so that the number of output feature channels equals the number of input feature channels, and a feature map is output.
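A hedged sketch of how such a multi-scale spatial attention layer could be assembled is shown below; the dilation rates (2 and 4), the channel-reduction ratio, and the final sigmoid gating that re-weights the input are assumptions not stated in the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleSpatialAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        mid = channels // reduction
        self.reduce = nn.Conv2d(channels, mid, 1)                 # 1x1 conv: shrink channels
        self.down = nn.Sequential(                                # strided conv + max pooling
            nn.Conv2d(mid, mid, 3, stride=2, padding=1),
            nn.MaxPool2d(kernel_size=2, stride=2))
        self.dilated = nn.Sequential(                             # dilated convs, different rates
            nn.Conv2d(mid, mid, 3, padding=2, dilation=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, 3, padding=4, dilation=4))
        self.expand = nn.Conv2d(mid, channels, 1)                 # restore the channel dimension
        self.local = nn.Sequential(                               # cheap branch on the raw input
            nn.Conv2d(channels, channels, 1),
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels))  # depth-wise conv
        self.out = nn.Conv2d(channels, channels, 1)               # model the channel relationship

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a = self.dilated(self.down(self.reduce(x)))
        a = F.interpolate(a, size=x.shape[-2:], mode='bilinear',
                          align_corners=False)                    # restore the spatial dimension
        w = torch.sigmoid(self.out(self.expand(a) + self.local(x)))
        return x * w                                              # re-calibrate the fused features
```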
102, reconstructing a high-resolution image by providing an attention upsampling module;
it should be noted that the present disclosure employs a pixel-based attention upsampling module instead of sub-pixels employed in most super-resolution reconstruction methods, and the pixel-based attention upsampling module innovatively applies a three-dimensional pixel attention mechanism to an upsampling portion, so that reconstruction performance is significantly improved.
103, training the convolutional neural network to obtain a high-resolution image;
it should be noted that the specific training process is as follows:
the present disclosure trains using RGB tiles cut out of low resolution images to a size of 64 x 64, and augments the training data by random horizontal flipping and 90 ° rotation;
Adam is selected as the optimizer, with parameters β₁ = 0.9, β₂ = 0.999, and ε = 10⁻⁸;
The learning rate is initially set to 5 × 10⁻⁴ and is halved every 2 × 10⁵ mini-batches for finer learning;
the original image set and the output image set are taken as the input of the L1 loss function

$L_1 = \frac{1}{N}\sum_{i=1}^{N}\left\lVert I_{SR}^{(i)} - I_{HR}^{(i)} \right\rVert_1$

and the network parameters are updated; the trained convolutional neural network is obtained after a preset number of training epochs;
in the formula, $\lVert \cdot \rVert_1$ denotes the L1 norm, $I_{SR}^{(i)}$ is the i-th high-resolution image super-resolved by the network model of the present disclosure, and $I_{HR}^{(i)}$ is the corresponding original high-resolution ground-truth image.
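For reference, a minimal training-loop sketch matching the hyper-parameters above (Adam with β₁ = 0.9, β₂ = 0.999, ε = 10⁻⁸, initial learning rate 5 × 10⁻⁴ halved every 2 × 10⁵ mini-batches, L1 loss) might look as follows; the model, data loader, and step budget are placeholders.

```python
import torch
import torch.nn as nn

def train(model, loader, total_steps, device="cuda"):
    model = model.to(device).train()
    criterion = nn.L1Loss()
    optimizer = torch.optim.Adam(model.parameters(), lr=5e-4,
                                 betas=(0.9, 0.999), eps=1e-8)
    # Halve the learning rate every 2 x 10^5 mini-batches.
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=200_000, gamma=0.5)
    step = 0
    while step < total_steps:
        for lr_img, hr_img in loader:              # 64x64 LR patches and their HR counterparts
            lr_img, hr_img = lr_img.to(device), hr_img.to(device)
            loss = criterion(model(lr_img), hr_img)  # L1 loss against the ground truth
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            scheduler.step()
            step += 1
            if step >= total_steps:
                break
    return model
```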
Referring to fig. 4, step 104: reconstructing the super-resolution image with the trained convolutional neural network.
It should be noted that the reconstruction is as follows:
the image I_LR to be super-resolved is input into the trained convolutional neural network to obtain an output image I_SR;
bilinear-interpolation upsampling is performed on the image I_LR to obtain an image I_BI of the same size as I_SR;
the pixel values of the corresponding pixels of I_BI and I_SR are added to obtain the super-resolution image I_F.
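A short inference sketch of this reconstruction step, with hypothetical names and an assumed scale factor, is:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def reconstruct(model, i_lr: torch.Tensor, scale: int = 4) -> torch.Tensor:
    """Final reconstruction: network output plus a bilinearly upsampled copy of the input."""
    model.eval()
    i_sr = model(i_lr)                                          # output image I_SR
    i_bi = F.interpolate(i_lr, scale_factor=scale,
                         mode='bilinear', align_corners=False)  # upsampled image I_BI
    return i_sr + i_bi                                          # super-resolution image I_F
```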
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (4)

1. A lightweight image super-resolution reconstruction method based on a progressive distillation network is characterized by comprising the following steps:
decoupling a channel separation mechanism in parallel into a 1 × 1 convolution and a core feature extraction module by providing a progressive distillation module, wherein the output channels of the 1 × 1 convolution number half of the input channels and the generated image features are used for channel merging; the image features generated by the core feature extraction module are used to refine the features processed by the previous layer; the operation is iterated three times, and a 3 × 3 convolution layer feeds a channel merging layer to obtain merged image features; a channel shuffle layer operation is applied to the merged image features to obtain a processed merged image; the core feature extraction module comprises an asymmetric convolution residual module with dilated convolution and an asymmetric convolution residual module, wherein the asymmetric convolution residual module with dilated convolution consists of four parts: a 1 × 3 dilated convolution (DConv), a 3 × 1 dilated convolution, a ReLU, and an identity mapping; and the asymmetric convolution residual module consists of four parts: a 1 × 3 convolution, a 3 × 1 convolution, a ReLU, and an identity mapping;
reconstructing a high-resolution image by setting an attention up-sampling module;
training a convolutional neural network to obtain a high-resolution image;
reconstructing the super-resolution image with the trained convolutional neural network, wherein the image I_LR to be super-resolved is input into the trained convolutional neural network to obtain an output image I_SR;
bilinear-interpolation upsampling is performed on the image I_LR to obtain an image I_BI of the same size as I_SR;
the pixel values of the corresponding pixels of I_BI and I_SR are added to obtain a super-resolution image I_F;
obtaining an output feature map by applying a multi-scale spatial attention mechanism to the processed merged image, which comprises:
reducing the number of channels with a 1 × 1 convolution;
enlarging the receptive field of the attention mechanism with a combination of strided convolution and max pooling;
making adjustments with dilated convolution, an upsampling function, and a 1 × 1 convolution; and
modeling the channel relationship with a 1 × 1 convolution and outputting a feature map;
wherein making the adjustments with the dilated convolution, the upsampling function, and the 1 × 1 convolution comprises:
enlarging the receptive field with dilated convolutions of different dilation rates;
restoring the spatial dimension and the channel dimension with the upsampling function and a 1 × 1 convolution; and
applying a combination of a 1 × 1 convolution and a depth-wise convolution with negligible parameters to the input features to enhance the weights of the key information of the features.
2. The method of claim 1, wherein the attention upsampling module employs a voxel attention mechanism.
3. The method of claim 2, wherein reconstructing the high resolution image by providing an attention up-sampling module comprises:
using six progressive distillation modules, wherein the first three core extraction modules adopt asymmetric convolution residual modules without dilated convolution, and the last three core extraction modules adopt asymmetric convolution residual modules with dilated convolution.
4. The method of claim 1, wherein a multi-scale spatial attention mechanism is applied to the processed merged image to obtain an output feature map.
CN202111191958.XA 2021-10-13 2021-10-13 Lightweight image super-resolution reconstruction method based on progressive distillation network Active CN113837946B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111191958.XA CN113837946B (en) 2021-10-13 2021-10-13 Lightweight image super-resolution reconstruction method based on progressive distillation network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111191958.XA CN113837946B (en) 2021-10-13 2021-10-13 Lightweight image super-resolution reconstruction method based on progressive distillation network

Publications (2)

Publication Number Publication Date
CN113837946A CN113837946A (en) 2021-12-24
CN113837946B (en) 2022-12-06

Family

ID=78968777

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111191958.XA Active CN113837946B (en) 2021-10-13 2021-10-13 Lightweight image super-resolution reconstruction method based on progressive distillation network

Country Status (1)

Country Link
CN (1) CN113837946B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114708148A (en) * 2022-04-12 2022-07-05 中国电子技术标准化研究院 Infrared image super-resolution reconstruction method based on transfer learning
CN114782256B (en) * 2022-06-21 2022-09-02 腾讯科技(深圳)有限公司 Image reconstruction method and device, computer equipment and storage medium
CN115131242B (en) * 2022-06-28 2023-08-29 闽江学院 Light-weight super-resolution reconstruction method based on attention and distillation mechanism
CN116228546B (en) * 2023-05-04 2023-07-14 北京蔚领时代科技有限公司 Image superdivision method and system based on channel recombination

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109345449B (en) * 2018-07-17 2020-11-10 西安交通大学 Image super-resolution and non-uniform blur removing method based on fusion network
CN109949224B (en) * 2019-02-26 2023-06-30 北京悦图遥感科技发展有限公司 Deep learning-based cascade super-resolution reconstruction method and device
CN111047515B (en) * 2019-12-29 2024-01-09 兰州理工大学 Attention mechanism-based cavity convolutional neural network image super-resolution reconstruction method
CN111161150B (en) * 2019-12-30 2023-06-23 北京工业大学 Image super-resolution reconstruction method based on multi-scale attention cascade network
CN111553403B (en) * 2020-04-23 2023-04-18 山东大学 Smog detection method and system based on pseudo-3D convolutional neural network
CN111861880B (en) * 2020-06-05 2022-08-30 昆明理工大学 Image super-fusion method based on regional information enhancement and block self-attention
CN112017116B (en) * 2020-07-23 2024-02-23 西北大学 Image super-resolution reconstruction network based on asymmetric convolution and construction method thereof
CN113096019B (en) * 2021-04-28 2023-04-18 中国第一汽车股份有限公司 Image reconstruction method, image reconstruction device, image processing equipment and storage medium

Also Published As

Publication number Publication date
CN113837946A (en) 2021-12-24

Similar Documents

Publication Publication Date Title
CN113837946B (en) Lightweight image super-resolution reconstruction method based on progressive distillation network
CN106683067B (en) Deep learning super-resolution reconstruction method based on residual sub-images
CN110136062B (en) Super-resolution reconstruction method combining semantic segmentation
CN108122197B (en) Image super-resolution reconstruction method based on deep learning
CN110020989B (en) Depth image super-resolution reconstruction method based on deep learning
CN111242846B (en) Fine-grained scale image super-resolution method based on non-local enhancement network
CN111640060A (en) Single image super-resolution reconstruction method based on deep learning and multi-scale residual dense module
CN110930306B (en) Depth map super-resolution reconstruction network construction method based on non-local perception
CN111951164B (en) Image super-resolution reconstruction network structure and image reconstruction effect analysis method
CN112767283A (en) Non-uniform image defogging method based on multi-image block division
CN112580473A (en) Motion feature fused video super-resolution reconstruction method
CN115797176A (en) Image super-resolution reconstruction method
CN114841859A (en) Single-image super-resolution reconstruction method based on lightweight neural network and Transformer
Li et al. D2c-sr: A divergence to convergence approach for real-world image super-resolution
CN113436094B (en) Gray level image automatic coloring method based on multi-view attention mechanism
CN115170921A (en) Binocular stereo matching method based on bilateral grid learning and edge loss
Yang Super resolution using dual path connections
Yang et al. Depth map super-resolution via multilevel recursive guidance and progressive supervision
CN113674151A (en) Image super-resolution reconstruction method based on deep neural network
Yang et al. Enhanced two-phase residual network for single image super-resolution
CN113793269B (en) Super-resolution image reconstruction method based on improved neighborhood embedding and priori learning
CN117391959B (en) Super-resolution reconstruction method and system based on multi-granularity matching and multi-scale aggregation
Dong et al. Trans-GAN network for image super-resolution reconstruction
CN114022360B (en) Rendered image super-resolution system based on deep learning
Synthiya Vinothini et al. Attention-Based SRGAN for Super Resolution of Satellite Images

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant