CN111047514B - Single image super-resolution method - Google Patents


Info

Publication number
CN111047514B
CN111047514B (application CN201911215983.XA)
Authority
CN
China
Prior art keywords
saak
transformation
resolution
image
super
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911215983.XA
Other languages
Chinese (zh)
Other versions
CN111047514A (en)
Inventor
张永兵
李晶晶
季向阳
王好谦
戴琼海
杨芳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen International Graduate School of Tsinghua University
Original Assignee
Shenzhen International Graduate School of Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen International Graduate School of Tsinghua University
Priority to CN201911215983.XA
Publication of CN111047514A
Application granted
Publication of CN111047514B
Active legal status
Anticipated expiration legal status

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformation in the plane of the image
    • G06T 3/40 Scaling the whole image or part thereof
    • G06T 3/4053 Super resolution, i.e. output image resolution higher than sensor resolution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a super-resolution method for a single image, comprising the following steps: up-sampling a single low-resolution image to obtain an image P1; inputting P1 into a VDSR network and outputting a high-resolution image P2; performing the Saak transform on P1 and P2 with transform kernels T1 and T2 respectively, so that P1 and P2 each yield 2n² Saak feature maps; the transform kernels T1 and T2 are computed from P1 and P2 respectively; the convolution kernel of the Saak transform is n×n, where n is a natural number; selecting the first m of the 2n² Saak feature maps of P1 as a training set to train a convolutional neural network, inputting the selected m Saak feature maps into the convolutional neural network, and outputting m feature maps of P1, where 1 ≤ m ≤ n²; selecting the last 2n²−m of the 2n² Saak feature maps of P2 and combining them with the m feature maps of P1 output by the convolutional neural network to form 2n² maps, then performing the inverse Saak transform to obtain a high-resolution image P3; and fusing P2 and P3 to obtain the final high-resolution image P4.

Description

Single image super-resolution method
Technical Field
The invention relates to the field of computer vision and digital image processing, in particular to a super-resolution method for a single image.
Background
Single-image super-resolution reconstruction is an image processing technique that uses a computer to recover a high-resolution image from a low-resolution one, and it has long been an active direction in computer vision. High resolution means the image has a high pixel density and can provide more detail, detail that often plays a key role in applications. Super-resolution is therefore widely applied in fields such as video surveillance, object detection, face recognition, and autonomous driving.
Super-resolution is typically an ill-posed problem because of the information asymmetry between high- and low-resolution images: a single low-resolution image corresponds to infinitely many high-resolution solutions. Most current super-resolution algorithms, whether traditional methods or deep learning models, assume the low-resolution image was obtained by down-sampling a high-resolution image with a known blur kernel (such as Gaussian blur or bicubic interpolation), and then attempt to solve for the high-resolution image from the low-resolution one.
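As a concrete illustration of this assumed degradation model, the following sketch (a minimal numpy implementation written for this explanation, not code from the patent) blurs a grayscale image with a normalized Gaussian kernel and then decimates it by the scale factor:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Build a normalized 2-D Gaussian blur kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def degrade(hr, scale=2, sigma=1.0):
    """Simulate LR = downsample(blur(HR)): blur with a 5x5 Gaussian
    kernel, then keep every `scale`-th pixel (grayscale image)."""
    k = gaussian_kernel(5, sigma)
    pad = 2  # half the 5x5 kernel width
    padded = np.pad(hr, pad, mode='edge')
    blurred = np.zeros_like(hr, dtype=float)
    h, w = hr.shape
    for i in range(h):
        for j in range(w):
            blurred[i, j] = np.sum(padded[i:i+5, j:j+5] * k)
    return blurred[::scale, ::scale]

hr = np.random.rand(16, 16)
lr = degrade(hr, scale=2)
print(lr.shape)  # (8, 8)
```

Recovering `hr` from `lr` is the inverse of this pipeline, which is exactly what makes the problem ill-posed.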
Single-image super-resolution algorithms fall into two main directions. One is traditional methods, such as in-image interpolation and external-sample dictionary learning, which improve results by adding various regularization constraints; the other is deep-learning-based image super-resolution, which learns the mapping from low-resolution to high-resolution images with a neural network. Deep-learning-based image super-resolution currently achieves the best performance.
The learning-based approach mainly refers to the sparse dictionary learning method introduced from machine learning: a high-resolution image set is first passed through a degradation model to obtain the corresponding low-resolution image set; the two sets are used as training data to learn a dictionary that describes image features, and high-resolution images are reconstructed through the mapping given by the dictionary. This approach restores images well, but the quality of the learned mapping depends on the quantity and quality of the training set, the computation is heavy, and the processing efficiency is low.
Image quality assessment is an important component of evaluating super-resolution results, and it comprises objective and subjective evaluation. Common objective indices include mean square error (MSE), peak signal-to-noise ratio (PSNR), and the structural similarity index (SSIM), but objective indices often fail to reflect human subjective visual quality. Subjective evaluation mainly relies on methods such as manual scoring and averaging; these reflect visual quality effectively, but the results vary from person to person because subjective perception differs.
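For reference, the two most common objective indices named above can be computed in a few lines. This is a generic illustration (a peak value of 255 is assumed, as for 8-bit images; SSIM is omitted for brevity), not code from the patent:

```python
import numpy as np

def mse(a, b):
    """Mean square error between two images."""
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    m = mse(a, b)
    return float('inf') if m == 0 else 10.0 * np.log10(peak**2 / m)

ref = np.full((4, 4), 100.0)
noisy = ref + 10.0           # uniform error of 10 gives MSE = 100
print(psnr(ref, noisy))       # 10*log10(255^2/100), about 28.13 dB
```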
The above background disclosure is only for the purpose of assisting understanding of the inventive concept and technical solutions of the present invention, and does not necessarily belong to the prior art of the present patent application, and should not be used for evaluating the novelty and inventive step of the present application in the case that there is no clear evidence that the above content is disclosed before the filing date of the present patent application.
Disclosure of Invention
The main aim of the invention is to provide a super-resolution method for a single image that constrains the training process of a deep network by introducing the Saak transform, so as to recover the high-frequency texture detail of the image; the details obtained by super-resolution are thereby more vivid and better matched to human visual perception.
The invention provides the following technical scheme for achieving the purpose:
a single image super-resolution method comprises the following steps:
S1, up-sampling a single low-resolution image to obtain an image P1;
S2, taking P1 as the input of a deep learning network VDSR and outputting a high-resolution image P2;
S3, performing the Saak transform on P1 with transform kernel T1 to obtain 2n² Saak feature maps of P1, and performing the Saak transform on P2 with transform kernel T2 to obtain 2n² Saak feature maps of P2; the transform kernel T1 is computed from P1 and the transform kernel T2 from P2; the sliding-window size of the Saak transform kernel is n×n, where n is a natural number;
S4, selecting the first m of the 2n² Saak feature maps of P1 as a training set, training a convolutional neural network with a back-propagation algorithm, taking the first m Saak feature maps as the input of the convolutional neural network, and outputting m feature maps of P1; where 1 ≤ m ≤ n²;
S5, selecting the last 2n²−m of the 2n² Saak feature maps of P2 and combining them with the m feature maps of P1 output by the convolutional neural network in step S4 to form 2n² maps, then performing the inverse Saak transform to obtain a high-resolution image P3;
S6, fusing P2 and P3 to obtain a final high-resolution image P4.
Further, the up-sampling in step S1 uses bicubic interpolation, nearest-neighbor interpolation, or bilinear interpolation.
Further, the loss function L of the deep learning network VDSR in step S2 is:

L = (1/N) Σ_{i=1}^{N} ( ‖SR(x_i) − y_i‖² + w · ‖Saak(SR(x_i)) − Saak(y_i)‖² )

where N is the number of pictures processed per batch in each iteration, x_i is an input picture, SR(x_i) is the image output by the deep learning network VDSR, y_i is the label corresponding to input picture x_i, w is the weight of the Saak term in the loss function, and Saak(·) denotes the Saak transform of its argument.
Further, a 20-layer network structure is selected for VDSR training; the batch size is set to 64, the momentum parameter to 0.9, and the weight-decay parameter to 0.0001.
Further, computing the transform kernel T1 from P1 in step S3 specifically comprises: applying the Karhunen-Loève transform to the vector matrix of P1 to obtain the corresponding covariance matrix; taking the negative of each vector in the covariance matrix and combining the negated vectors with the covariance matrix to form a new matrix; and applying a nonlinear unit to filter each vector in the new matrix, obtaining transform kernel T1. The transform kernel T2 is computed from P2 in the same way.
Further, when the convolutional neural network is trained in step S4, the batch size is set to 4, the momentum parameter of the Adam optimizer to 0.9, and the initial learning rate to 10⁻⁴; after 40000 iterations the learning rate is set to 10⁻⁵.
Further, the inverse Saak transform in step S5 is performed by deconvolving the combined 2n² maps with the transform kernel T1.
Further, in step S6, P2 and P3 are fused by weighted averaging.
Compared with the prior art, the invention is beneficial in the following way. Existing deep-learning super-resolution techniques use the per-pixel L2 distance as the loss function, which ignores the structural information of the image and treats all local regions alike. The invention combines deep learning with the Saak transform and exploits the transform's concentration of different frequency information, constraining the high-frequency band information of the image during network learning so as to recover its structural and edge information; the resulting high-resolution image is finer and sharper and better matches the human visual effect.
Drawings
Fig. 1 is a schematic diagram of a single-image super-resolution method according to an embodiment of the present invention.
Detailed Description
The invention is further described with reference to the following figures and detailed description of embodiments. It should be understood that in the technical field related to super-resolution of images, the low-resolution images and the high-resolution images are only relative concepts, are not defined by specific numerical divisions of resolution, and do not belong to the uncertain terms identified in the examination guidelines.
This embodiment of the invention provides a single-image super-resolution method; with reference to fig. 1, the method comprises the following steps S1 to S6.
Step S1, up-sampling a single low-resolution image (denoted P0) by bicubic interpolation, nearest-neighbor interpolation, bilinear interpolation, or the like, to obtain an image P1; the image P1 has a higher resolution than the image P0.
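As an illustration of step S1, a minimal bilinear up-sampling routine for a grayscale image might look as follows. This is an illustrative sketch only; the patent equally allows bicubic or nearest-neighbor interpolation:

```python
import numpy as np

def upsample_bilinear(img, scale=2):
    """Bilinear up-sampling of a grayscale image by an integer factor."""
    h, w = img.shape
    H, W = h * scale, w * scale
    out = np.empty((H, W), dtype=float)
    for i in range(H):
        for j in range(W):
            # map the output pixel back to fractional source coordinates
            y = min(i / scale, h - 1.0)
            x = min(j / scale, w - 1.0)
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            dy, dx = y - y0, x - x0
            out[i, j] = (img[y0, x0] * (1 - dy) * (1 - dx)
                         + img[y0, x1] * (1 - dy) * dx
                         + img[y1, x0] * dy * (1 - dx)
                         + img[y1, x1] * dy * dx)
    return out

p0 = np.arange(9, dtype=float).reshape(3, 3)   # stand-in for the LR image P0
p1 = upsample_bilinear(p0, 2)                  # stand-in for the up-sampled P1
print(p1.shape)  # (6, 6)
```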
Step S2, taking P1 as the input of the deep learning network VDSR and outputting a high-resolution image P2. Here VDSR is short for Very Deep Convolutional Networks for Super-Resolution; the network used in this step is trained beforehand with the loss function:
L = (1/N) Σ_{i=1}^{N} ( ‖SR(x_i) − y_i‖² + w · ‖Saak(SR(x_i)) − Saak(y_i)‖² )    (1)
where N is the number of pictures processed per batch in each iteration, x_i is an input picture, SR(x_i) is the image output by the VDSR network, y_i is the label corresponding to input picture x_i, w is the weight of the Saak term in the loss function, and Saak(·) denotes the Saak transform of its argument. During training, a 20-layer network structure is selected, the batch size is set to 64, the momentum parameter to 0.9, and the weight-decay parameter to 0.0001; VDSR is trained with the back-propagation gradient algorithm, and after 40000 iterations the learning rate of the network begins to change. On an Nvidia Tesla K80 GPU, after about one day the loss curve stops decreasing, the network converges, and training is complete.
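The structure of this training objective can be sketched as follows. The function `saak_placeholder` below is a hypothetical stand-in (simple horizontal differences) for the real Saak transform; it exists only to make the two-term form of the loss concrete and is not the patent's transform:

```python
import numpy as np

def saak_placeholder(img):
    # Stand-in for the Saak transform: any fixed feature map works for
    # illustrating the loss shape; here, horizontal pixel differences.
    return np.diff(img, axis=1)

def vdsr_saak_loss(sr_batch, hr_batch, w=0.1):
    """L = (1/N) * sum_i ( ||SR(x_i)-y_i||^2 + w*||Saak(SR(x_i))-Saak(y_i)||^2 )."""
    n = len(sr_batch)
    total = 0.0
    for sr, y in zip(sr_batch, hr_batch):
        total += np.sum((sr - y) ** 2)                                  # pixel term
        total += w * np.sum((saak_placeholder(sr) - saak_placeholder(y)) ** 2)  # Saak term
    return total / n

sr = [np.ones((4, 4))]    # pretend network output
hr = [np.zeros((4, 4))]   # pretend label
print(vdsr_saak_loss(sr, hr, w=0.1))  # 16.0 (pixel term only; the Saak diffs cancel)
```

The weight `w = 0.1` here is an assumed illustrative value; the patent does not disclose a specific weight.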
Step S3, performing the Saak transform on the obtained images P1 and P2 respectively, where Saak stands for Subspace approximation with augmented kernels. The Saak transform with kernel T1 applied to P1 yields 2n² Saak feature maps of P1; the Saak transform with kernel T2 applied to P2 yields 2n² Saak feature maps of P2. The transform kernel T1 is computed from P1 and T2 from P2; the sliding-window size of the Saak transform kernel is n×n, where n is a natural number.
The calculation process of the transformation kernel is specifically as follows:
Apply the Karhunen-Loève transform to the vector matrix of P1 to obtain the corresponding covariance matrix; take the negative of each vector in the covariance matrix and combine the negated vectors with the covariance matrix to form a new matrix; then apply a nonlinear unit to filter each vector in the new matrix, obtaining transform kernel T1. The transform kernel T2 is computed from P2 in the same way, which is not repeated here.
The vectors in the resulting transform kernel are mutually orthogonal and form an orthogonal basis, so the high-dimensional representation vector of a picture has a unique expression in the space spanned by this basis, and the loss of picture information in the low-dimensional space is minimal.
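A minimal sketch of this kernel construction, under the assumption that kernels are derived by PCA (the Karhunen-Loève basis) over non-overlapping n×n patches and then augmented with their negatives so the later ReLU loses no sign information. This is a simplified one-stage reading of the Saak kernel derivation, not the patent's exact procedure:

```python
import numpy as np

def saak_kernels(img, n=2):
    """Derive 2*n^2 Saak-style kernels from one image: eigenvectors of the
    patch covariance matrix, sorted by descending eigenvalue, each paired
    with its negation (the 'augmented kernel' step)."""
    h, w = img.shape
    patches = [img[i:i+n, j:j+n].ravel()
               for i in range(0, h - n + 1, n)
               for j in range(0, w - n + 1, n)]
    X = np.array(patches)
    X = X - X.mean(axis=0)
    cov = X.T @ X / len(X)
    _, vecs = np.linalg.eigh(cov)          # columns: orthonormal eigenvectors,
    basis = vecs.T[::-1]                   # ascending order, so reverse rows
    return np.concatenate([basis, -basis], axis=0)

img = np.random.rand(8, 8)
K = saak_kernels(img, n=2)
print(K.shape)  # (8, 4): 2*n^2 = 8 kernels, each with n*n = 4 taps
```

The first n² rows form an orthonormal basis, which is what guarantees the unique representation mentioned above.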
With the transform kernel available, the specific procedure for performing the Saak transform on an image is a well-known technique: a sliding convolution of the transform kernel over the image to be transformed yields the Saak feature maps, and the detailed operation is not repeated here. If the sliding-window (i.e., convolution-kernel) size is set to n×n, one image yields 2n² Saak feature maps after the transform, comprising the first 2n²−1 AC component maps and a final DC component map, each gathering different spectral energy; the block structure information of an image is usually concentrated in the first half of the AC component maps, i.e., the middle and high frequency bands. In a preferred embodiment of the invention n = 2 is chosen, as in the example of fig. 1, so P1 and P2 each yield 8 Saak feature maps.
Step S4, selecting the first m of the 2n² Saak feature maps of P1 as a training set, training a convolutional neural network with a back-propagation algorithm, taking the first m Saak feature maps as the network input, and outputting m feature maps of P1. As mentioned above, since the block structure information of an image is usually concentrated in the first half of the AC component maps, the value of m preferably satisfies 1 ≤ m ≤ n². In the preferred embodiment shown in fig. 1, the first 3 Saak feature maps of P1 are selected to train the CNN, and these 3 maps serve as the CNN input, producing 3 corresponding output feature maps.
The convolutional neural network CNN used in step S4 may specifically adopt a VDSR network. During training, the loss function is the same as equation (1); the batch size is set to 4, the momentum parameter of the Adam optimizer to 0.9, and the initial learning rate to 10⁻⁴, which is changed to 10⁻⁵ after 40000 iterations. On an Nvidia Tesla K80 GPU, training takes approximately one day.
Step S5, selecting the last 2n²−m of the 2n² Saak feature maps of P2, combining them with the m feature maps of P1 output by the convolutional neural network in step S4 to form 2n² maps, and performing the inverse Saak transform to obtain a high-resolution image P3 with emphasized local texture details. Since the convolution kernel of the Saak transform is n×n, the inverse Saak transform requires 2n² maps. Taking fig. 1 as an example, the CNN outputs only 3 feature maps, so the last 5 of the 8 Saak feature maps of P2 are selected and combined with the CNN output to form the 8 maps for the inverse transform. The inverse Saak transform may be performed by deconvolving the combined 8 maps with the transform kernel T1; the specific operation is well known in the art and is not repeated here.
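The forward/inverse pair can be illustrated end-to-end with a toy orthonormal basis. This is a simplified one-stage sketch written for this explanation, not the patent's implementation; the point it demonstrates is that pairing every kernel with its negation makes the transform exactly invertible despite the ReLU:

```python
import numpy as np

def saak_forward(img, kernels, n=2):
    """Apply all 2*n^2 kernels to non-overlapping n x n blocks with ReLU,
    producing one feature map per kernel."""
    h, w = img.shape
    maps = np.zeros((len(kernels), h // n, w // n))
    for bi in range(h // n):
        for bj in range(w // n):
            block = img[bi*n:(bi+1)*n, bj*n:(bj+1)*n].ravel()
            maps[:, bi, bj] = np.maximum(kernels @ block, 0.0)  # ReLU
    return maps

def saak_inverse(maps, kernels, n=2):
    """Because every kernel is paired with its negation, applying the
    transpose of the kernel matrix to the rectified responses recovers
    each n x n block exactly (K^T ReLU(K b) = b for orthonormal rows)."""
    _, H, W = maps.shape
    img = np.zeros((H * n, W * n))
    for bi in range(H):
        for bj in range(W):
            block = kernels.T @ maps[:, bi, bj]
            img[bi*n:(bi+1)*n, bj*n:(bj+1)*n] = block.reshape(n, n)
    return img

B = np.eye(4)                  # trivial orthonormal basis for the demo
K = np.concatenate([B, -B])    # 2*n^2 = 8 augmented kernels
img = np.random.rand(8, 8)
rec = saak_inverse(saak_forward(img, K), K)
print(np.allclose(rec, img))   # True
```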
Step S6, finally fusing the high-resolution image P2 output by the VDSR network with the high-resolution image P3 obtained in step S5 by a simple and fast weighted average to obtain the final high-resolution image P4, thereby realizing super-resolution of the single image.
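Step S6 amounts to a per-pixel weighted average. In the sketch below, the weight `alpha` is an assumed illustrative value, since the patent does not disclose specific fusion weights:

```python
import numpy as np

def fuse(p2, p3, alpha=0.5):
    """Weighted-average fusion of two HR estimates: alpha balances the
    VDSR output P2 against the Saak-detail image P3."""
    return alpha * p2 + (1.0 - alpha) * p3

p2 = np.full((4, 4), 0.8)   # stand-in for the VDSR output
p3 = np.full((4, 4), 0.4)   # stand-in for the Saak-detail image
p4 = fuse(p2, p3, alpha=0.5)
print(float(p4[0, 0]))
```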
The foregoing is a more detailed description of the invention in connection with specific preferred embodiments, and the invention is not to be construed as limited to these specific details. For those skilled in the art to which the invention pertains, several equivalent substitutions or obvious modifications may be made without departing from the spirit of the invention, and all such variants are considered to fall within the scope of the invention.

Claims (7)

1. A single image super-resolution method is characterized by comprising the following steps:
S1, up-sampling a single low-resolution image to obtain an image P1;
S2, taking P1 as the input of a deep learning network VDSR and outputting a high-resolution image P2;
S3, performing the Saak transform on P1 with transform kernel T1 to obtain 2n² Saak feature maps of P1; performing the Saak transform on P2 with transform kernel T2 to obtain 2n² Saak feature maps of P2; the transform kernel T1 is computed from P1 and the transform kernel T2 from P2; the sliding-window size of the Saak transform kernel is n×n, where n is a natural number;
the computation of the transform kernel T1 from P1 specifically comprises:
applying the Karhunen-Loève transform to the vector matrix of P1 to obtain a corresponding covariance matrix;
taking the negative of each vector in the covariance matrix and combining the negated vectors with the covariance matrix to form a new matrix;
applying a nonlinear unit to filter each vector in the new matrix, obtaining transform kernel T1;
the transform kernel T2 is computed from P2 in the same way that T1 is computed from P1;
S4, selecting the first m of the 2n² Saak feature maps of P1 as a training set, training a convolutional neural network with a back-propagation algorithm, taking the first m Saak feature maps as the input of the convolutional neural network, and outputting m feature maps of P1; where 1 ≤ m ≤ n²;
S5, selecting the last 2n²−m of the 2n² Saak feature maps of P2 and combining them with the m feature maps of P1 output by the convolutional neural network in step S4 to form 2n² maps, then performing the inverse Saak transform to obtain a high-resolution image P3;
S6, fusing P2 and P3 to obtain a final high-resolution image P4.
2. The single-image super-resolution method according to claim 1, wherein the up-sampling in step S1 is performed by bicubic interpolation, nearest-neighbor interpolation, or bilinear interpolation.
3. The single-image super-resolution method according to claim 1, wherein the loss function L of the deep learning network VDSR in step S2 is:

L = (1/N) Σ_{i=1}^{N} ( ‖SR(x_i) − y_i‖² + w · ‖Saak(SR(x_i)) − Saak(y_i)‖² )

where N is the number of pictures processed per batch in each iteration, x_i is an input picture, SR(x_i) is the image output by the deep learning network VDSR, y_i is the label corresponding to input picture x_i, w is the weight of the Saak term in the loss function, and Saak(·) denotes the Saak transform of its argument.
4. The single-image super-resolution method according to claim 3, wherein a 20-layer network structure is selected for VDSR training, the batch size is set to 64, the momentum parameter to 0.9, and the weight-decay parameter to 0.0001.
5. The single-image super-resolution method according to claim 1, wherein, when the convolutional neural network is trained in step S4, the batch size is set to 4, the momentum parameter of the Adam optimizer to 0.9, and the initial learning rate to 10⁻⁴; after 40000 iterations the learning rate is set to 10⁻⁵.
6. The single-image super-resolution method according to claim 1, wherein the inverse Saak transform in step S5 is performed by deconvolving the combined 2n² maps with the transform kernel T1.
7. The single-image super-resolution method according to claim 1, wherein in step S6 P2 and P3 are fused by weighted averaging.
CN201911215983.XA 2019-12-02 2019-12-02 Single image super-resolution method Active CN111047514B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911215983.XA CN111047514B (en) 2019-12-02 2019-12-02 Single image super-resolution method


Publications (2)

Publication Number Publication Date
CN111047514A CN111047514A (en) 2020-04-21
CN111047514B true CN111047514B (en) 2023-04-18

Family

ID=70234400

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911215983.XA Active CN111047514B (en) 2019-12-02 2019-12-02 Single image super-resolution method

Country Status (1)

Country Link
CN (1) CN111047514B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106067161A (en) * 2016-05-24 2016-11-02 深圳市未来媒体技术研究院 A kind of method that image is carried out super-resolution
CN109741260A (en) * 2018-12-29 2019-05-10 天津大学 A kind of efficient super-resolution method based on depth back projection network
CN109829855A (en) * 2019-01-23 2019-05-31 南京航空航天大学 A kind of super resolution ratio reconstruction method based on fusion multi-level features figure


Also Published As

Publication number Publication date
CN111047514A (en) 2020-04-21

Similar Documents

Publication Publication Date Title
Suryanarayana et al. Accurate magnetic resonance image super-resolution using deep networks and Gaussian filtering in the stationary wavelet domain
CN110570353B (en) Super-resolution reconstruction method for generating single image of countermeasure network by dense connection
Zhang et al. Image restoration: From sparse and low-rank priors to deep priors [lecture notes]
Yu et al. A unified learning framework for single image super-resolution
Zhang et al. CCR: Clustering and collaborative representation for fast single image super-resolution
Ren et al. Single image super-resolution via adaptive high-dimensional non-local total variation and adaptive geometric feature
CN112734646B (en) Image super-resolution reconstruction method based on feature channel division
Sun et al. Lightweight image super-resolution via weighted multi-scale residual network
CN109636721B (en) Video super-resolution method based on countermeasure learning and attention mechanism
Tang et al. Deep inception-residual Laplacian pyramid networks for accurate single-image super-resolution
CN112837224A (en) Super-resolution image reconstruction method based on convolutional neural network
CN107590775B (en) Image super-resolution amplification method using regression tree field
CN111861886B (en) Image super-resolution reconstruction method based on multi-scale feedback network
Guo et al. Multiscale semilocal interpolation with antialiasing
Luo et al. Lattice network for lightweight image restoration
CN111340696B (en) Convolutional neural network image super-resolution reconstruction method fused with bionic visual mechanism
Chen et al. Single image super resolution using local smoothness and nonlocal self-similarity priors
Yang et al. Image super-resolution based on deep neural network of multiple attention mechanism
CN112163998A (en) Single-image super-resolution analysis method matched with natural degradation conditions
CN112184552B (en) Sub-pixel convolution image super-resolution method based on high-frequency feature learning
CN111899166A (en) Medical hyperspectral microscopic image super-resolution reconstruction method based on deep learning
CN116957964A (en) Small sample image generation method and system based on diffusion model
CN111047514B (en) Single image super-resolution method
CN115880158A (en) Blind image super-resolution reconstruction method and system based on variational self-coding
Hasan et al. Single image super-resolution using back-propagation neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant