CN114331831A - Light-weight single-image super-resolution reconstruction method - Google Patents

Light-weight single-image super-resolution reconstruction method

Info

Publication number
CN114331831A
Authority
CN
China
Prior art keywords
image
resolution
super
network
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111398659.3A
Other languages
Chinese (zh)
Inventor
刘云清
蒋一纯
朱德鹏
詹伟达
石艳丽
郝子强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changchun University of Science and Technology
Original Assignee
Changchun University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changchun University of Science and Technology filed Critical Changchun University of Science and Technology
Priority to CN202111398659.3A priority Critical patent/CN114331831A/en
Publication of CN114331831A publication Critical patent/CN114331831A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A lightweight single-image super-resolution reconstruction method, belonging to the field of image super-resolution reconstruction, aims to solve the problem that existing super-resolution methods have high space and time complexity. The method comprises the following steps: constructing a network model, in which the entire network comprises four main modules: a shallow feature extraction module, a deep feature extraction module, an information fusion module and an up-sampling module; preparing a data set, in which the data set is subjected to simulated degradation and the resulting high- and low-resolution image pairs are used to train the whole convolutional neural network; training the network model; minimizing the loss value; fine-tuning the model; and saving the model, in which the final model parameters are frozen so that, whenever super-resolution reconstruction is required, the image and the network parameters can be loaded directly into the network to obtain the final super-resolution image. While maintaining high reconstruction quality, the method greatly reduces the parameter count and computation of the network and is well suited to implementation on embedded devices.

Description

Light-weight single-image super-resolution reconstruction method
Technical Field
The invention relates to a lightweight single-image super-resolution reconstruction method and belongs to the field of image super-resolution reconstruction.
Background
Single-image super-resolution is widely used in applications such as infrared imaging, remote sensing and medical imaging. Image super-resolution combines external prior knowledge with internal structural information to restore image detail and increase resolution. Traditional super-resolution techniques rely only on hand-designed constraints, cannot introduce additional external information, and cannot adapt to complicated image degradation processes, so their reconstruction quality is poor. It is therefore necessary to acquire sufficiently rich external information and, in a deep-learning-based manner, construct a mapping from low-resolution to high-resolution images, so as to finally reconstruct a high-quality image closer to the true high-resolution one. However, most existing deep-learning-based super-resolution methods rely on complex network structures, which gives the network enormous spatial and computational complexity, makes deployment on mobile devices very difficult, and limits the practicality of these methods.
Chinese patent publication CN111353940B, entitled "Image super-resolution reconstruction method based on deep-learning iterative up-down sampling", first downsamples a high-resolution image to a low-resolution image; it then extracts features from the input image through a series of up-down sampling residual modules, splices the outputs of all up-sampling modules, and finally obtains the fused image through a 3 × 3 reconstruction convolution layer. The repeated up-sampling operations introduce redundant information while greatly increasing the size of the network and its computation time, and the very deep network structure results in a complex implementation process and low efficiency.
Disclosure of Invention
The invention provides a light-weight single-image super-resolution reconstruction method, aiming at solving the problem that the existing super-resolution method is high in space complexity and time complexity. On the premise of keeping higher reconstruction quality, the method greatly reduces the parameter quantity and the calculation quantity of the network, and is more suitable for being realized on embedded equipment.
The technical scheme for solving the technical problem is as follows:
a light-weight single-image super-resolution reconstruction method is characterized by comprising the following steps:
step 1, constructing a network model: the entire network comprises four main modules: a shallow feature extraction module, a deep feature extraction module, an information fusion module and an up-sampling module; the shallow feature extraction module consists of two convolution layers and preliminarily extracts the structural features of the image; the deep feature extraction module is formed by stacking sixteen identical lightweight dual-information-stream residual blocks, which extract image features in sequence once the shallow information is input, so as to obtain the deep information of the image; the information fusion module fuses and screens the deep information output by each lightweight dual-information-stream residual block; the up-sampling module fuses the shallow features output by the shallow feature extraction module with the deep features output by the information fusion module and then performs pixel recombination to obtain the final super-resolution image;
step 2, preparing a data set: the data set is subjected to simulated degradation, and the resulting high- and low-resolution image pairs are used to train the whole convolutional neural network;
step 3, training the network model: an optimizer is selected and its parameters are set, and the high- and low-resolution images of the data set prepared in step 2 are input into the neural network model constructed in step 1 for training;
step 4, minimizing the loss value: the loss value between the super-resolution image output by the network and the real high-resolution image is minimized; when the loss value reaches a set threshold or the number of training iterations reaches a set upper limit, the model parameters are considered fully trained and are saved;
step 5, fine-tuning the model: a dedicated training schedule is used to fine-tune the model and obtain the best-performing model parameters, further improving the super-resolution reconstruction capability of the model;
step 6, saving the model: the final model parameters are frozen; whenever super-resolution reconstruction is required, the image and the network parameters are loaded directly into the network to obtain the final super-resolution image.
The lightweight dual-information-stream residual block in step 1 consists of two branches, a multiplicative branch and an additive branch. The multiplicative branch consists of a 1 × 1 convolution I, a depthwise separable convolution I, a 1 × 1 convolution II and a depthwise separable convolution II, which sequentially extract the multiplicative component of the inverse image-degradation process; the additive branch consists of a 1 × 1 convolution III, a depthwise separable convolution III, a 1 × 1 convolution IV and a depthwise separable convolution IV, which sequentially extract the additive component of the inverse image-degradation process. The output feature is generated by summing the product of the multiplicative component and the input feature with the input feature and the additive component.
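As an illustrative sketch (not the patent's authoritative implementation), the dual-branch residual block described above might look as follows in PyTorch; the channel width (32), the leaky slope (0.05) and the placement of the activation are assumptions the text does not specify:

```python
import torch
import torch.nn as nn

class DualStreamResidualBlock(nn.Module):
    """Sketch of the lightweight dual-information-stream residual block:
    out = x * mul(x) + x + add(x). Each branch chains a 1x1 convolution and a
    depthwise-separable convolution (depthwise 3x3 + pointwise 1x1) twice."""

    def __init__(self, channels=32):
        super().__init__()

        def branch():
            return nn.Sequential(
                nn.Conv2d(channels, channels, 1),            # 1x1 convolution
                nn.Conv2d(channels, channels, 3, padding=1,
                          groups=channels),                  # depthwise 3x3
                nn.Conv2d(channels, channels, 1),            # pointwise 1x1
                nn.LeakyReLU(0.05, inplace=True),            # assumed slope
                nn.Conv2d(channels, channels, 1),            # 1x1 convolution
                nn.Conv2d(channels, channels, 3, padding=1,
                          groups=channels),                  # depthwise 3x3
                nn.Conv2d(channels, channels, 1),            # pointwise 1x1
            )

        self.mul_branch = branch()
        self.add_branch = branch()

    def forward(self, x):
        # product of the multiplicative component and the input feature,
        # plus the input feature, plus the additive component
        return x * self.mul_branch(x) + x + self.add_branch(x)
```

With stride and padding of 1 throughout, the block preserves the spatial size of its input, so sixteen such blocks can be stacked directly.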
The information fusion module in step 1 is a 1 × 1 convolution layer; the module splices the output feature maps of the lightweight dual-information-stream residual blocks at all levels and then fuses and screens the features of all levels.
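The fusion step above amounts to channel-wise concatenation followed by one 1 × 1 convolution. A minimal sketch, assuming a 32-channel feature width:

```python
import torch
import torch.nn as nn

# Outputs of all sixteen residual blocks are concatenated along the channel
# axis and fused/screened by a single 1x1 convolution.
channels, n_blocks = 32, 16
fuse = nn.Conv2d(channels * n_blocks, channels, kernel_size=1)

block_outputs = [torch.randn(1, channels, 8, 8) for _ in range(n_blocks)]
fused = fuse(torch.cat(block_outputs, dim=1))   # back to `channels` channels
```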
The up-sampling module in step 1 consists of two convolution layers and a pixel recombination layer; convolution layers three and four compress the feature-map channels, and pixel recombination combines the feature maps of different channels to output the super-resolution image directly.
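Pixel recombination corresponds to PyTorch's `nn.PixelShuffle`. A sketch of the up-sampling module under assumed values (4× scale, 32-channel features, RGB output):

```python
import torch
import torch.nn as nn

# Two 3x3 convolutions compress the features to scale**2 * 3 channels, then
# PixelShuffle rearranges those channels into an upscaled 3-channel image.
scale, channels = 4, 32
upsampler = nn.Sequential(
    nn.Conv2d(channels, channels, 3, padding=1),          # convolution layer three
    nn.Conv2d(channels, 3 * scale ** 2, 3, padding=1),    # convolution layer four
    nn.PixelShuffle(scale),                               # pixel recombination
)
sr = upsampler(torch.randn(1, channels, 16, 16))          # 16x16 -> 64x64
```

Because the channels are compressed before the shuffle, no extra channel-expanding convolutions are needed after it, which matches the simplification the text claims.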
In step 1, the activation functions after all convolutions in the network are leaky rectified linear units, all down-sampling and batch normalization operations are removed, and all convolution operations use a stride and padding of 1.
In step 4, the loss value is computed by a loss function that combines structural similarity and pixel loss, so that the obtained super-resolution image stays consistent with the high-resolution image in edges, color and brightness and better approaches the real high-resolution image.
In step 5, a cosine annealing schedule is used to fine-tune the model parameters.
The invention has the following beneficial effects:
1. The method is based on deep learning and adopts a multi-level feature fusion and screening mechanism that fully exploits useful information at different depths, enhancing the super-resolution reconstruction effect without making the network difficult to train.
2. A dual-information-stream structure is added to the residual block to learn additive and multiplicative information, improving the network's ability to construct the mapping from low-resolution to high-resolution images.
3. The feature extraction structure inside the residual block is improved: two groups of 1 × 1 convolutions and depthwise separable convolutions greatly reduce the parameter count and computation of the network while preserving performance.
4. The structure of the up-sampling module is improved: the obtained image features are first compressed in dimension, and the super-resolution image is then obtained directly through pixel recombination, avoiding the large number of convolution layers that would otherwise expand the dimension before pixel recombination and effectively simplifying image reconstruction.
Drawings
FIG. 1 is a flow chart of the lightweight single-image super-resolution reconstruction method.
FIG. 2 is a network structure diagram of the lightweight single-image super-resolution reconstruction method.
FIG. 3 shows the specific composition of the lightweight dual-information-stream residual block of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
As shown in fig. 1, the lightweight single-image super-resolution reconstruction method specifically comprises the following steps:
Step 1, construct the network model. The entire network comprises four main modules: a shallow feature extraction module, a deep feature extraction module, an information fusion module and an up-sampling module. The shallow feature extraction module consists of convolution layers one and two with 3 × 3 kernels and stride and padding of 1, and extracts shallow image information. The deep feature extraction module extracts deep image information from the shallow image information output by the shallow feature extraction module and comprises 16 lightweight dual-information-stream residual blocks, each with a multiplicative branch and an additive branch. The multiplicative branch consists of a 1 × 1 convolution I, a depthwise separable convolution I, a 1 × 1 convolution II and a depthwise separable convolution II, which sequentially extract the multiplicative component of the inverse image-degradation process; the additive branch consists of a 1 × 1 convolution III, a depthwise separable convolution III, a 1 × 1 convolution IV and a depthwise separable convolution IV, which sequentially extract the additive component. The output feature is generated by summing the product of the multiplicative component and the input feature with the input feature and the additive component. The information fusion module consists of a 1 × 1 convolution layer; the output feature maps of the residual blocks at all levels are spliced and then fused and screened.
The up-sampling module consists of convolution layer three, convolution layer four and a pixel recombination layer, with 3 × 3 kernels and stride and padding of 1. The shallow information output by convolution layer two and the deep information output by the information fusion module are decoded by convolution layers three and four, and the super-resolution image is then output through pixel recombination.
Step 2, prepare the data set; the DIV2K and Flickr2K data sets are used during training. The images are first degraded by bicubic downsampling, each degraded image is paired with its original high-resolution image, and the resulting high-low resolution image pairs are used to train the whole convolutional neural network.
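The simulated degradation described above can be sketched with PyTorch's bicubic interpolation; the 4× scale factor here is an assumption for illustration:

```python
import torch
import torch.nn.functional as F

# Each high-resolution patch is bicubically downsampled by the target scale
# factor to form its low-resolution counterpart in the training pair.
def make_lr(hr, scale=4):
    """hr: (N, C, H, W) tensor with H and W divisible by `scale`."""
    return F.interpolate(hr, scale_factor=1 / scale,
                         mode='bicubic', align_corners=False)

hr = torch.rand(1, 3, 256, 256)   # simulated 256x256 high-resolution patch
lr = make_lr(hr)                  # low-resolution member of the image pair
```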
Step 3, train the network model; select an optimizer, set the corresponding parameters, and input the data set prepared in step 2 into the constructed network model for training.
Step 4, minimize the loss function value. The loss between the network output and the label is minimized; when the number of training iterations reaches the set limit or the loss falls within the set range, the model parameters are considered fully trained and are saved. During training the loss function combines structural similarity and pixel loss, so that the obtained super-resolution image stays consistent with the high-resolution image in edges, color and brightness and better approaches the real high-resolution image.
Step 5, fine-tune the model. The model is fine-tuned with a cosine annealing schedule, further improving its super-resolution reconstruction capability.
Step 6, save the model parameters. The final model parameters are frozen; whenever super-resolution reconstruction is required, the image and the network parameters are loaded directly into the network to obtain the final super-resolution image.
Embodiment:
The network model of step 1 is shown in fig. 2: the entire network comprises four main modules, namely a shallow feature extraction module, a deep feature extraction module, an information fusion module and an up-sampling module. The shallow feature extraction module consists of convolution layers one and two with 3 × 3 kernels and stride and padding of 1, and extracts shallow image information. The deep feature extraction module extracts deep image information from the shallow image information output by the shallow feature extraction module and comprises 16 lightweight dual-information-stream residual blocks; as shown in fig. 3, each block has a multiplicative branch and an additive branch. The multiplicative branch consists of a 1 × 1 convolution I, a depthwise separable convolution I, a 1 × 1 convolution II and a depthwise separable convolution II, which sequentially extract the multiplicative component of the inverse image-degradation process; the additive branch consists of a 1 × 1 convolution III, a depthwise separable convolution III, a 1 × 1 convolution IV and a depthwise separable convolution IV, which sequentially extract the additive component. The output feature is generated by summing the product of the multiplicative component and the input feature with the input feature and the additive component. The information fusion module consists of a 1 × 1 convolution layer; the output feature maps of the residual blocks at all levels are spliced and then fused and screened.
The up-sampling module consists of convolution layer three, convolution layer four and a pixel recombination layer, with 3 × 3 kernels and stride and padding of 1. The shallow information output by convolution layer two and the deep information output by the information fusion module are decoded by convolution layers three and four, and the super-resolution image is then output through pixel recombination. To suit low-level vision tasks and retain more structural information, the activation functions after all convolutions in the network are leaky rectified linear units, and all down-sampling and batch normalization operations are removed. The leaky rectified linear unit is defined as follows:
LReLU(x) = x, if x ≥ 0;  LReLU(x) = αx, if x < 0, where α is a small positive leakage coefficient.
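A numerical sketch of this leaky-corrected linear unit; the slope value 0.05 is an assumption, since the patent leaves the leakage coefficient unspecified:

```python
import numpy as np

# Leaky rectified linear unit: pass positive inputs through unchanged,
# scale negative inputs by a small positive slope alpha.
def leaky_relu(x, alpha=0.05):
    return np.where(x >= 0, x, alpha * x)

x = np.array([-2.0, -0.5, 0.0, 1.5])
y = leaky_relu(x)   # -> [-0.1, -0.025, 0.0, 1.5]
```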
The data sets in step 2 are DIV2K and Flickr2K. The DIV2K data set contains 800 high-resolution images and the Flickr2K data set contains 2650 high-resolution images. All images are cropped into 256 × 256 patches, and the low-resolution images are then obtained by bicubic downsampling and combined with the originals into high-low resolution image pairs. To expand the amount of data, flip transformations are then applied to the images.
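The flip-based data expansion can be sketched as follows; restricting the augmentation to horizontal and vertical mirrors is an assumption, since the text only says "flip transformed":

```python
import numpy as np

# Each training patch is augmented with its mirror images to expand the data.
def flip_augment(patch):
    """patch: (H, W, C) array; returns the patch plus two flipped variants."""
    return [patch, np.fliplr(patch), np.flipud(patch)]

patch = np.arange(12).reshape(2, 2, 3)
variants = flip_augment(patch)   # three variants per original patch
```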
In step 4, the loss function is computed from the network output and the label, and a better super-resolution reconstruction effect is achieved by minimizing it. The loss function combines structural similarity and pixel loss. The structural similarity is calculated as follows:
SSIM(x,y) = [l(x,y)]^α · [c(x,y)]^β · [s(x,y)]^γ
where l(x,y) is the luminance comparison function, c(x,y) the contrast comparison function and s(x,y) the structure comparison function; the three functions are defined as follows:
l(x,y) = (2μxμy + C1) / (μx² + μy² + C1)
c(x,y) = (2σxσy + C2) / (σx² + σy² + C2)
s(x,y) = (σxy + C3) / (σxσy + C3)
In practical applications α, β and γ are all set to 1 and C3 = C2/2, so the structural similarity formula can be expressed as:
SSIM(x,y) = [(2μxμy + C1)(2σxy + C2)] / [(μx² + μy² + C1)(σx² + σy² + C2)]
Here x and y denote corresponding windows of size N × N in the two images; μx and μy are the means of x and y, which serve as luminance estimates; σx and σy are the standard deviations of x and y, which serve as contrast estimates; and σxy is the covariance of x and y, which serves as a measure of structure. C1 and C2 are small constants that prevent the denominator from being zero, typically taken as 0.01 and 0.03 respectively. By this definition, the structural similarity of the entire image is calculated as follows:
MSSIM(X,Y) = (1/MN) · Σi Σj SSIM(xij, yij)
X and Y denote the two images being compared, MN is the total number of windows, and xij and yij are the corresponding local windows of the two images. Structural similarity is symmetric and takes values in [0, 1]; the closer the value is to 1, the greater the structural similarity and the smaller the difference between the two images. Its difference from 1 can therefore be minimized directly through network optimization, giving the structural similarity loss:
SSIMloss=1-MSSIM(L,O)
L and O denote the label and the network output, respectively. Optimizing the structural similarity loss gradually reduces the difference between the output image and the label, making the two images closer in brightness and contrast and perceptually more similar, so the generated image has higher quality.
The pixel loss function is defined as follows:
Ploss = (1/N) · Σ |out − label|, where N is the total number of pixels
out and label represent the output and label of the network.
The overall loss function is defined as:
Tloss=Ploss+SSIMloss
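A minimal numerical sketch of this combined loss, under simplifying assumptions: SSIM is computed over the whole image instead of N × N windows, the pixel loss is taken as a mean absolute error, and C1, C2 follow the common 0.01²/0.03² convention for unit-range images:

```python
import numpy as np

C1, C2 = 0.01 ** 2, 0.03 ** 2   # assumed stabilizing constants

def ssim(x, y):
    # single-window SSIM: means, variances and covariance over the whole image
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2) /
            ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2)))

def total_loss(out, label):
    # Tloss = Ploss + SSIMloss: mean absolute pixel error plus 1 - SSIM
    return np.abs(out - label).mean() + (1.0 - ssim(out, label))

label = np.random.rand(16, 16)
loss_same = total_loss(label, label)   # identical images give (near) zero loss
```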
The number of training epochs is set to 100. The number of images input to the network in each batch is 16 to 32, with the upper limit determined mainly by the performance of the graphics processor; within this range the training is more stable and yields better results. The learning rate is set to 0.0001, which ensures training speed while avoiding exploding gradients, and is halved every 50 epochs so that the parameters can better approach their optimal values. The adaptive moment estimation (Adam) optimizer is chosen because, after bias correction, the learning rate of each iteration stays within a definite range, keeping the parameters relatively stable. The loss threshold is set to about 0.003; below this value the training of the whole network is considered essentially complete.
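The optimizer configuration above (Adam at 1e-4, halved every 50 epochs) can be sketched with PyTorch's built-in step scheduler; the dummy parameter stands in for the network weights:

```python
import torch

# Adam at lr = 1e-4, learning rate multiplied by 0.5 every 50 epochs.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.Adam(params, lr=1e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.5)

for epoch in range(100):
    optimizer.step()      # the actual training step would go here
    scheduler.step()
# after 100 epochs the learning rate has been halved twice: 1e-4 -> 2.5e-5
```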
The period of the cosine annealing method used in the step 5 is 10 training periods, the maximum value is 0.0001, and the minimum value is 0.000001.
In the step 6, after the network training is completed, all parameters in the network need to be stored, and then images with any size are input to obtain a super-resolution reconstruction result.
The implementations of convolution, depthwise separable convolution, splicing, up-sampling and pixel recombination are algorithms well known to those skilled in the art; the specific procedures and methods can be found in the corresponding textbooks or technical literature.
The invention can obtain the super-resolution reconstruction effect with higher quality by constructing the light-weighted single-image super-resolution reconstruction method, has less parameters and calculated amount compared with the prior complex network due to the light-weighted structure, and can be applied to various mobile devices. The feasibility and the superiority of the method are further verified by calculating the relevant indexes of the image obtained by the existing method. The correlation indexes of the prior art and the method proposed by the present invention are shown in table 1:
[Table 1, reproduced as an image in the original publication: comparison of parameter count, computation, peak signal-to-noise ratio and structural similarity between existing methods and the proposed method.]
As the table shows, the proposed method not only has fewer parameters and less computation but also achieves higher peak signal-to-noise ratio and structural similarity, further indicating that the method is not only lighter but also delivers better super-resolution reconstruction quality.

Claims (7)

1. A lightweight single-image super-resolution reconstruction method, characterized by comprising the following steps:
step 1, constructing a network model: the entire network comprises four main modules: a shallow feature extraction module, a deep feature extraction module, an information fusion module and an up-sampling module; the shallow feature extraction module consists of two convolution layers and preliminarily extracts the structural features of the image; the deep feature extraction module is formed by stacking sixteen identical lightweight dual-information-stream residual blocks, which extract image features in sequence once the shallow information is input, so as to obtain the deep information of the image; the information fusion module fuses and screens the deep information output by each lightweight dual-information-stream residual block; the up-sampling module fuses the shallow features output by the shallow feature extraction module with the deep features output by the information fusion module and then performs pixel recombination to obtain the final super-resolution image;
step 2, preparing a data set: the data set is subjected to simulated degradation, and the resulting high- and low-resolution image pairs are used to train the whole convolutional neural network;
step 3, training the network model: an optimizer is selected and its parameters are set, and the high- and low-resolution images of the data set prepared in step 2 are input into the neural network model constructed in step 1 for training;
step 4, minimizing the loss value: the loss value between the super-resolution image output by the network and the real high-resolution image is minimized; when the loss value reaches a set threshold or the number of training iterations reaches a set upper limit, the model parameters are considered fully trained and are saved;
step 5, fine-tuning the model: a dedicated training schedule is used to fine-tune the model and obtain the best-performing model parameters, further improving the super-resolution reconstruction capability of the model;
step 6, saving the model: the final model parameters are frozen; whenever super-resolution reconstruction is required, the image and the network parameters are loaded directly into the network to obtain the final super-resolution image.
2. The lightweight single-image super-resolution reconstruction method according to claim 1, wherein the lightweight dual-information-stream residual block in step 1 consists of two branches, a multiplicative branch and an additive branch; the multiplicative branch consists of a 1 × 1 convolution I, a depthwise separable convolution I, a 1 × 1 convolution II and a depthwise separable convolution II, which sequentially extract the multiplicative component of the inverse image-degradation process; the additive branch consists of a 1 × 1 convolution III, a depthwise separable convolution III, a 1 × 1 convolution IV and a depthwise separable convolution IV, which sequentially extract the additive component of the inverse image-degradation process; and the output feature is generated by summing the product of the multiplicative component and the input feature with the input feature and the additive component.
3. The lightweight single-image super-resolution reconstruction method according to claim 1, wherein the information fusion module in step 1 is a 1 × 1 convolution layer which splices the output feature maps of the lightweight dual-information-stream residual blocks at all levels and then fuses and screens the features of all levels.
4. The lightweight single-image super-resolution reconstruction method according to claim 1, wherein the up-sampling module in step 1 consists of two convolution layers and pixel recombination; convolution layers three and four compress the feature-map channels, and pixel recombination combines the feature maps of different channels to output the super-resolution image directly.
5. The lightweight single-image super-resolution reconstruction method according to claim 1, wherein in step 1 the activation functions after all convolution layers in the network are leaky rectified linear units, all down-sampling and batch normalization operations are removed, and all convolution operations use a stride and padding of 1.
6. The lightweight single-image super-resolution reconstruction method according to claim 1, wherein the loss value in step 4 is computed by a loss function that combines structural similarity and pixel loss, so that the obtained super-resolution image stays consistent with the high-resolution image in edges, color and brightness and better approaches the real high-resolution image.
7. The lightweight single-image super-resolution reconstruction method according to claim 1, wherein a cosine annealing schedule is used to fine-tune the model parameters in step 5.
CN202111398659.3A 2021-11-19 2021-11-19 Light-weight single-image super-resolution reconstruction method Pending CN114331831A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111398659.3A CN114331831A (en) 2021-11-19 2021-11-19 Light-weight single-image super-resolution reconstruction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111398659.3A CN114331831A (en) 2021-11-19 2021-11-19 Light-weight single-image super-resolution reconstruction method

Publications (1)

Publication Number Publication Date
CN114331831A true CN114331831A (en) 2022-04-12

Family

ID=81046455

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111398659.3A Pending CN114331831A (en) 2021-11-19 2021-11-19 Light-weight single-image super-resolution reconstruction method

Country Status (1)

Country Link
CN (1) CN114331831A (en)


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115239557A (en) * 2022-07-11 2022-10-25 河北大学 Light-weight X-ray image super-resolution reconstruction method
CN115239557B (en) * 2022-07-11 2023-10-24 河北大学 Light X-ray image super-resolution reconstruction method
CN115841423A (en) * 2022-12-12 2023-03-24 之江实验室 Wide-field illumination fluorescence super-resolution microscopic imaging method based on deep learning
CN116402679A (en) * 2022-12-28 2023-07-07 长春理工大学 Lightweight infrared super-resolution self-adaptive reconstruction method
CN116402679B (en) * 2022-12-28 2024-05-28 长春理工大学 Lightweight infrared super-resolution self-adaptive reconstruction method
CN115761684A (en) * 2023-01-10 2023-03-07 常熟理工学院 AGV target recognition and attitude angle resolving method and system based on machine vision
CN116993592A (en) * 2023-09-27 2023-11-03 城云科技(中国)有限公司 Construction method, device and application of image super-resolution reconstruction model
CN116993592B (en) * 2023-09-27 2023-12-12 城云科技(中国)有限公司 Construction method, device and application of image super-resolution reconstruction model

Similar Documents

Publication Publication Date Title
CN114092330B (en) Light-weight multi-scale infrared image super-resolution reconstruction method
CN114331831A (en) Light-weight single-image super-resolution reconstruction method
CN109903228B (en) Image super-resolution reconstruction method based on convolutional neural network
CN113362223B (en) Image super-resolution reconstruction method based on attention mechanism and two-channel network
CN111429347A (en) Image super-resolution reconstruction method and device and computer-readable storage medium
CN112435191B (en) Low-illumination image enhancement method based on fusion of multiple neural network structures
CN112801901A (en) Image deblurring algorithm based on block multi-scale convolution neural network
CN110717868B (en) Video high dynamic range inverse tone mapping model construction and mapping method and device
CN111681166A (en) Image super-resolution reconstruction method of stacked attention mechanism coding and decoding unit
CN111951164B (en) Image super-resolution reconstruction network structure and image reconstruction effect analysis method
CN111402128A (en) Image super-resolution reconstruction method based on multi-scale pyramid network
CN111640060A (en) Single image super-resolution reconstruction method based on deep learning and multi-scale residual dense module
CN113902658B (en) RGB image-to-hyperspectral image reconstruction method based on dense multiscale network
CN112699844A (en) Image super-resolution method based on multi-scale residual error level dense connection network
CN115393186A (en) Face image super-resolution reconstruction method, system, device and medium
CN116402679A (en) Lightweight infrared super-resolution self-adaptive reconstruction method
CN113298744B (en) End-to-end infrared and visible light image fusion method
CN114359039A (en) Knowledge distillation-based image super-resolution method
CN114022356A (en) River course flow water level remote sensing image super-resolution method and system based on wavelet domain
CN112150360A (en) IVUS image super-resolution reconstruction method based on dense residual error network
CN116668738A (en) Video space-time super-resolution reconstruction method, device and storage medium
CN116029908A (en) 3D magnetic resonance super-resolution method based on cross-modal and cross-scale feature fusion
CN116362995A (en) Tooth image restoration method and system based on standard prior
CN114219738A (en) Single-image multi-scale super-resolution reconstruction network structure and method
CN113077385A (en) Video super-resolution method and system based on countermeasure generation network and edge enhancement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination