CN110674862A - Super-resolution method based on neighborhood regression of internal sample - Google Patents


Publication number: CN110674862A (application CN201910889471.5A; granted as CN110674862B)
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: image, resolution, block, vector, low
Inventors: 端木春江, 沈碧婷
Applicant and assignee: Zhejiang Normal University (CJNU)
Legal status: Granted; Active (the legal status and assignee listed are assumptions by Google Patents, not legal conclusions)

Classifications

    • G06F 18/214 — Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/23213 — Pattern recognition; clustering; non-hierarchical techniques using statistics or function optimisation with a fixed number of clusters, e.g. K-means clustering
    • G06T 3/4038 — Geometric image transformations; scaling of whole images or parts thereof; image mosaicing, e.g. composing plane images from plane sub-images
    • G06T 5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 2207/20021 — Indexing scheme for image analysis; special algorithmic details; dividing image into blocks, subimages or windows


Abstract

The invention provides an image super-resolution method comprising an off-line training process and an on-line super-resolution magnification process. In the off-line process, the center and the mapping matrix of each class of horizontal gradient patches and of vertical gradient patches are learned from training images. In the on-line process, a low-resolution image block vector is first extracted; its horizontal gradient class is determined from the vector, and from that class a high-resolution horizontal gradient block is obtained. Similarly, the vertical gradient class is determined and a high-resolution vertical gradient block is obtained from it. All the obtained horizontal gradient blocks are then stitched into a horizontal gradient image, and all the obtained vertical gradient blocks into a vertical gradient image. Finally, an iterative method solves for the output high-resolution image from the low-resolution image, the horizontal gradient image, and the vertical gradient image.

Description

Super-resolution method based on neighborhood regression of internal sample
Technical Field
The invention relates to single-image super-resolution in digital image processing: obtaining one high-resolution image from only one low-resolution image, where the output should be sharp and of good quality. The technique has wide application, including video surveillance, medical image processing, image magnification on the Internet, and remote-sensing image processing.
Background
According to the principle they are based on, current super-resolution reconstruction methods fall into three major categories: interpolation-based methods, reconstruction-based methods, and learning-based methods. Learning-based methods train a model on a sample library, and the reconstructed high-resolution images carry richer detail information, so they have become a research hotspot in the field.
Exploiting the fact that image signals admit sparse representations, Yang applied this property to image super-resolution reconstruction and proposed a super-resolution method based on sparse representation (SCSR): the training stage jointly trains on the high- and low-resolution images of a training set to obtain a coupled pair of high- and low-resolution dictionaries, and the reconstruction stage multiplies the high-resolution dictionary by the corresponding sparse coefficients to reconstruct each high-resolution image block. Zeyde made important improvements on the basis of Yang's work, reducing the dimensionality of the low-resolution block features with a principal component analysis algorithm and solving for the sparse representation coefficients with an optimized matching-pursuit algorithm, improving both the quality and the running time of the reconstruction.
Dai proposed the jointly optimized regressors super-resolution method (JOR), which alternately optimizes the dictionary training and clustering steps with an expectation-maximization algorithm, so that better mapping functions can be constructed and image quality is improved. Dong proposed a nonlinear regression super-resolution reconstruction method based on a deep convolutional network (SRCNN), whose image quality improves further as the number of network layers increases.
Disclosure of Invention
The content of the invention is as follows:
1. constructing a training set of multi-level high-resolution images
Small image blocks in natural images tend to recur many times inside the image, both at the same scale and across scales. A high-resolution image block may therefore reappear in many down-sampled versions of the image. Based on this principle, a rich multi-level training set of high-resolution images can be constructed. The invention constructs such a training set from the input high-resolution image set and, unlike traditional methods, applies an iterative back-projection method when constructing the images of the low-resolution training set. Let I_0 be one image of the input high-resolution image set; the invention uses I_0 as the high-resolution image to construct a multi-level training set, taking I_0 and its multi-level down-sampled versions as the training set. This process is expressed by equation (1):
H_i = (I_0) ↓ p^i    (1)
where H_i denotes the image of the i-th layer of the high-resolution training set, ↓ denotes the down-sampling operator, and p^i is the scale factor of the down-sampling used to construct the i-th-layer high-resolution image, with p = 0.98 and i = 1, 2, …, 20. That is, from the top-layer high-resolution image I_0, a 20-layer training set of high-resolution images is constructed, the resolution of each layer being p times that of the previous layer.
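The multi-level construction of equation (1) can be sketched as follows. This is an illustrative sketch only: the patent does not specify the down-sampling operator, so the simple bilinear resampler `bilinear_resize` written here stands in for ↓, and all function names are chosen for this example.

```python
import numpy as np

def bilinear_resize(img, scale):
    """Bilinear resize of a 2-D array by a real-valued scale factor
    (a stand-in for the patent's unspecified down-sampling operator)."""
    h, w = img.shape
    nh, nw = max(1, int(round(h * scale))), max(1, int(round(w * scale)))
    ys = np.linspace(0, h - 1, nh)
    xs = np.linspace(0, w - 1, nw)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def build_hr_pyramid(I0, p=0.98, levels=20):
    """Eq. (1): H_i = (I_0) down-sampled by p**i, for i = 0..levels."""
    return [bilinear_resize(I0, p ** i) for i in range(levels + 1)]

img = np.arange(10000, dtype=float).reshape(100, 100)
pyr = build_hr_pyramid(img)
```

With p = 0.98 and 20 layers, a 100 × 100 top layer shrinks gradually to roughly 67 × 67 at the bottom layer, so every layer is still a usable training image.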
2. Constructing a training set of low resolution images
For the constructed high-resolution image training set, the images of the low-resolution training set are initialized by blurring and down-sampling the high-resolution images, as expressed by equation (2):
I_i = (H_i ⊗ B_i) ↓ s    (2)
where I_i denotes the low-resolution image of the i-th layer, ⊗ denotes the convolution operator, B_i denotes a Gaussian template, and s denotes the down-sampling scale factor.
To expand the images in the low-resolution training set, the invention flips and rotates the initial images of the set. To enhance the initial low-resolution images, the invention further performs iterative back projection on each low-resolution image of the training set, with objective function (3). The objective enforces consistency with the image Ĩ_i obtained by bicubic interpolation enlargement of the initial low-resolution image I_i of the i-th layer, where D denotes the matrix of the down-sampling operator and B denotes the blur matrix of the image degradation process. Solving (3) yields the iterative back-projection update (4): at each step the reprojection residual is up-sampled (↑), filtered with a Gaussian filter template q, and added back to the current estimate, producing I_i^(t+1) from I_i^t, the low-resolution image of the i-th layer after t iterations. This iterative process is repeated for T iterations, until successive estimates are almost equal; the result I_i^T is then the image of the i-th layer of the low-resolution training image set.
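The back-projection refinement of equations (3)–(4) can be illustrated with a minimal sketch. The patent's exact operators are not recoverable from the document (the formulas appear only as images), so this sketch makes illustrative assumptions: a 3 × 3 binomial filter `blur3` stands in for the blur B and the Gaussian template q, 2× block averaging `down2` for the down-sampling D, pixel replication `up2` for ↑, and the refinement is applied to an enlargement whose re-degradation should reproduce the low-resolution image.

```python
import numpy as np

def blur3(x):
    """Separable 3x3 binomial blur (stand-in for the Gaussian template q / blur B)."""
    p = np.pad(x, ((1, 1), (0, 0)), mode='edge')
    x = (p[:-2] + 2 * p[1:-1] + p[2:]) / 4.0
    p = np.pad(x, ((0, 0), (1, 1)), mode='edge')
    return (p[:, :-2] + 2 * p[:, 1:-1] + p[:, 2:]) / 4.0

def down2(x):
    """2x block-average down-sampling (stand-in for D)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up2(x):
    """2x pixel-replication up-sampling (stand-in for the operator ↑)."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def ibp_refine(lr_img, iters=10):
    """Iterated back projection in the spirit of Eqs. (3)-(4): refine an
    enlargement x so that blurring + down-sampling it reproduces lr_img."""
    x = up2(lr_img)                          # simple initial enlargement
    for _ in range(iters):
        residual = lr_img - down2(blur3(x))  # reprojection error at LR scale
        x = x + blur3(up2(residual))         # back-project the residual
    return x

rng = np.random.default_rng(0)
lr0 = rng.random((16, 16))
hr_est = ibp_refine(lr0)
```

Each iteration is non-expansive for this operator pair, so the reprojection error never grows and typically shrinks, which is the property the patent relies on to sharpen its training images.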
3. Constructing a training set of image blocks with high and low resolutions, clustering the image blocks, and extracting various centers and mapping matrixes
For the high-resolution image H_i of the i-th layer and the corresponding low-resolution image L_i, horizontal and vertical gradient extraction operations are performed to obtain the horizontal high-resolution gradient image HH_i, the vertical high-resolution gradient image VH_i, the horizontal low-resolution gradient image HL_i, and the vertical low-resolution gradient image VL_i:
HH_i = H_i ⊗ gh,  VH_i = H_i ⊗ gv,  HL_i = L_i ⊗ gh,  VL_i = L_i ⊗ gv    (5)
where ⊗ is the convolution sign, gh is the horizontal gradient template and gv is the vertical gradient template; in the invention, gh = [−1, 1] and gv = [−1, 1]^T. The images are then divided into blocks. Blocking HH_i and HL_i, with the requirement that the high-resolution blocks of all layers have the same size and the low-resolution blocks of all layers have the same size, yields the high-resolution horizontal gradient image blocks BHH_j and the corresponding low-resolution horizontal gradient image blocks BHL_j, where j denotes the index of the image block. Blocking VH_i and VL_i in the same way yields the high-resolution vertical gradient image blocks BVH_j and the corresponding low-resolution vertical gradient image blocks BVL_j.
Then all low-resolution image blocks BHL_j are clustered with the K-means method to obtain the class c(j) of each block, 1 ≤ c(j) ≤ K; i.e., all blocks are grouped into K classes. The obtained image blocks are then vectorized by scanning the block elements from top to bottom and from left to right: vectorizing an image block Bl of size m × n gives a vector vBl in which the element of Bl in row i1 and column j1 becomes the ((i1 − 1) × n + j1)-th element of the vector. In this way, all low-resolution image blocks BHL_j are vectorized into vectors vhl_j, and all high-resolution image blocks BHH_j into vectors vhh_j. For each class c1, 1 ≤ c1 ≤ K, stacking all low-resolution vectors vhl_j of the class as columns gives a matrix MHL(c1); stacking the corresponding high-resolution vectors vhh_j in the same order as when stacking MHL(c1) gives the matrix MHH(c1). The mapping matrix of class c1 can thus be obtained as
P_c1 = MHH(c1) × ( (MHL(c1))^T × MHL(c1) + λI )^(−1) × (MHL(c1))^T    (6)
The center position vector Chl(c1) of class c1 is then determined from the centers of all the low-resolution vectors in class c1.
Then all image blocks BVL_j are clustered with the K-means method to obtain the class c2 of each block, 1 ≤ c2 ≤ K; i.e., all blocks are grouped into K classes. The obtained blocks are vectorized with the same top-to-bottom, left-to-right scan: all low-resolution image blocks BVL_j are vectorized into vectors vvl_j, and all high-resolution image blocks BVH_j into vectors vvh_j. For each class c2, stacking all low-resolution vectors vvl_j of the class as columns gives a matrix MVL(c2); stacking the corresponding high-resolution vectors vvh_j in the same order as when stacking MVL(c2) gives the matrix MVH(c2). The mapping matrix of class c2 can thus be obtained as
P_c2 = MVH(c2) × ( (MVL(c2))^T × MVL(c2) + λI )^(−1) × (MVL(c2))^T    (7)
The center position vector Cvl(c2) of class c2 is then determined from the centers of all the low-resolution vectors in class c2.
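The training computation of point 3 — K-means clustering followed by the per-class ridge mapping of equations (6)/(7) — can be sketched as below. The plain K-means here is a stand-in for whatever implementation the patent assumes, and its first-K-rows initialization assumes pre-shuffled rows; `lam` plays the role of λ.

```python
import numpy as np

def kmeans(X, K, iters=20):
    """Plain K-means on the rows of X. Initializes centers with the
    first K rows, so the rows are assumed pre-shuffled."""
    centers = X[:K].astype(float).copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for k in range(K):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return labels, centers

def class_mapping(MHL, MHH, lam=0.1):
    """Eq. (6)/(7): P = MHH (MHL^T MHL + lam*I)^(-1) MHL^T, where the
    columns of MHL / MHH are the low-/high-res vectors of one class."""
    N = MHL.shape[1]
    return MHH @ np.linalg.inv(MHL.T @ MHL + lam * np.eye(N)) @ MHL.T

# Clustering demo: two well-separated 2-D clusters.
X = np.array([[0., 0.], [10., 10.], [0.2, 0.], [10., 9.8], [0., 0.3], [9.7, 10.]])
labels, centers = kmeans(X, 2)

# Mapping demo: when a true linear map generated the pairs, Eq. (6)
# with a tiny lam recovers a map that reproduces the high-res vectors.
rng = np.random.default_rng(1)
lows = rng.standard_normal((4, 30))     # 4-dim low-res vectors, 30 samples
P_true = rng.standard_normal((16, 4))   # hypothetical ground-truth mapping
highs = P_true @ lows
P_est = class_mapping(lows, highs, lam=1e-8)
```

The ridge term λI keeps the N × N Gram matrix invertible even when the class has few or nearly collinear samples, which is why the patent notes that λ must be tuned experimentally.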
4. Determining the high-resolution horizontal and vertical gradient blocks corresponding to each on-line low-resolution image block
For a low-resolution block B_l in the on-line image, let HB be its horizontal gradient block. Vectorizing HB with the same top-to-bottom, left-to-right scan gives the vector vh. Among the class centers Chl(c1) stored in the training phase, the nearest one determines the class c1*:
c1* = arg min_c1 ‖ vh − Chl(c1) ‖₂²    (8)
where ‖x‖₂² is the square of the two-norm of the vector x. Multiplying the vector vh by the mapping matrix P_c1* of class c1* gives the high-resolution vector
vhh = P_c1* × vh    (9)
Inverse vectorization of vhh then yields the high-resolution image block HLB of size (m × s) × (n × s), where s is the amplification factor described above and m × n is the size of the low-resolution block described above: the j2-th element of vhh becomes the element of HLB in row ⌊(j2 − 1)/(n × s)⌋ + 1 and column ((j2 − 1) mod (n × s)) + 1, where ⌊y⌋ denotes the largest integer not exceeding y and mod denotes the remainder operation in division.
Similarly, let VB be the vertical gradient block of the low-resolution block B_l in the on-line image. Vectorizing VB gives the vector vv. Among the class centers Cvl(c2) stored in the training phase, the nearest one determines the class c2*:
c2* = arg min_c2 ‖ vv − Cvl(c2) ‖₂²    (10)
Multiplying the vector vv by the mapping matrix P_c2* of class c2* gives the high-resolution vector
vvh = P_c2* × vv    (11)
Performing the same inverse vectorization on vvh yields the high-resolution image block HVB of size (m × s) × (n × s).
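The on-line step of point 4 — vectorize, find the nearest class center (Eq. 8), apply the class mapping (Eq. 9), and reshape — can be sketched as below. The function name, the toy 2 × 2 → 4 × 4 mappings, and the row-major scan (matching the ((i1 − 1) × n + j1) ordering used above) are illustrative choices, not taken from the patent.

```python
import numpy as np

def sr_gradient_block(low_block, centers, mappings, s):
    """One on-line block: nearest stored center picks the class,
    that class's mapping matrix produces the high-resolution vector,
    and inverse vectorization reshapes it to (m*s) x (n*s)."""
    m, n = low_block.shape
    v = low_block.reshape(-1)          # top-to-bottom, left-to-right scan (row-major)
    c_star = int(np.argmin(((centers - v) ** 2).sum(axis=1)))  # Eq. (8)
    vh = mappings[c_star] @ v          # Eq. (9)
    return vh.reshape(m * s, n * s), c_star

# Toy setup: two class centers and trivial 16x4 mappings that copy the
# low-resolution vector into the first high-resolution row.
centers = np.array([[0., 0., 0., 0.], [10., 10., 10., 10.]])
mappings = [np.eye(16, 4), np.eye(16, 4)]
blk, c = sr_gradient_block(np.array([[1., 2.], [3., 4.]]), centers, mappings, 2)
```

Because the class search is a nearest-center lookup rather than a nearest-neighbor search over all training vectors, the on-line cost per block is only K distance evaluations plus one matrix-vector product.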
5. Process for obtaining magnified horizontal gradient images and magnified vertical gradient images
The operation described in point 4 above is performed on every low-resolution image block to obtain the corresponding high-resolution blocks HLB, which are then stitched into the high-resolution image Igh. In the conventional method, the pixel values of the overlapping portion of two adjacent image blocks are determined by averaging during stitching. The invention improves on this by weighting each block in proportion to the variance of its pixel values: for blocks HLB1 and HLB2, the pixel values in the overlap region are
Igh(i, j) = HLB1(m1, n1) * w1 + HLB2(m2, n2) * w2    (12)
where (m1, n1) is the position of the (i, j)-th image pixel inside block HLB1, (m2, n2) is its position inside block HLB2, the weight w1 of block HLB1 is proportional to the variance of the pixel values in HLB1, the weight w2 of block HLB2 is proportional to the variance of the pixel values in HLB2, and w1 + w2 = 1. Because image blocks with larger variance tend to carry sharper image details, this weighting improves the sharpness of the image details.
The operation of point 4 applied to each low-resolution image block likewise gives the corresponding high-resolution blocks HVB, which are stitched together with the same weighting method to obtain the high-resolution image Igv.
Thus the enlarged horizontal gradient image Igh and the enlarged vertical gradient image Igv are obtained.
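The variance-proportional blending of equation (12) for a single overlap region can be sketched as below; the function name and the flat-block fallback (plain averaging when both variances are zero, a case the patent does not discuss) are choices made for this example.

```python
import numpy as np

def blend_overlap(b1, b2):
    """Eq. (12): weight each block's contribution to the overlap region
    in proportion to its pixel-value variance, with w1 + w2 = 1."""
    v1, v2 = b1.var(), b2.var()
    if v1 + v2 == 0.0:
        w1 = w2 = 0.5              # both blocks flat: fall back to averaging
    else:
        w1, w2 = v1 / (v1 + v2), v2 / (v1 + v2)
    return w1 * b1 + w2 * b2, (w1, w2)

b1 = np.full((2, 2), 5.0)                 # flat block, variance 0
b2 = np.array([[0., 10.], [0., 10.]])     # detailed block, variance 25
out, (w1, w2) = blend_overlap(b1, b2)
```

In this extreme example the flat block receives weight 0, so the overlap keeps the detailed block's pixels intact instead of washing them out by averaging, which is exactly the sharpness argument made above.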
6. Process for obtaining a final high resolution image
From the low-resolution image I_L input in the on-line process, the enlarged horizontal gradient image Igh obtained above, and the enlarged vertical gradient image Igv, the final high-resolution image is obtained by optimizing the objective function
I_H* = arg min_{I_H} ‖ I_H ⊗ gh − Igh ‖₂² + ‖ I_H ⊗ gv − Igv ‖₂² + ‖ D B I_H − I_L ‖₂²    (13)
where gh and gv are the horizontal and vertical gradient extraction templates described above, D is the down-sampling matrix described above, and B is the blur matrix representing the image degradation process described above. To solve (13), the invention uses an iterative algorithm: the first two terms are updated by the gradient descent method, the last term by the iterative back-projection method, and the two updates alternate until convergence.
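The gradient-domain reconstruction of equation (13) can be illustrated with a simplified sketch. Two assumptions are made here for self-containment: the degradation D B is replaced by plain s × s block averaging, and all three terms are minimized by a single gradient-descent loop rather than the patent's alternating gradient-descent / back-projection scheme.

```python
import numpy as np

def grad_h(x):
    g = np.zeros_like(x); g[:, :-1] = x[:, 1:] - x[:, :-1]; return g

def grad_v(x):
    g = np.zeros_like(x); g[:-1, :] = x[1:, :] - x[:-1, :]; return g

def grad_h_T(y):
    """Adjoint of grad_h (needed for the gradient of the first term)."""
    d = np.zeros_like(y)
    d[:, 0] = -y[:, 0]
    d[:, 1:-1] = y[:, :-2] - y[:, 1:-1]
    d[:, -1] = y[:, -2]
    return d

def grad_v_T(y):
    d = np.zeros_like(y)
    d[0, :] = -y[0, :]
    d[1:-1, :] = y[:-2, :] - y[1:-1, :]
    d[-1, :] = y[-2, :]
    return d

def downsample(x, s):
    """s-by-s block averaging (simplified stand-in for D B)."""
    h, w = x.shape
    return x.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def upsample_T(r, s):
    """Adjoint of block averaging: replication divided by s*s."""
    return np.repeat(np.repeat(r, s, axis=0), s, axis=1) / (s * s)

def reconstruct(lr_img, Igh, Igv, s, iters=300, step=0.1):
    """Gradient descent on Eq. (13) with the simplified data term."""
    x = np.repeat(np.repeat(lr_img, s, 0), s, 1).astype(float)
    for _ in range(iters):
        g = (grad_h_T(grad_h(x) - Igh)
             + grad_v_T(grad_v(x) - Igv)
             + upsample_T(downsample(x, s) - lr_img, s))
        x = x - step * g
    return x

hr = np.outer(np.sin(np.linspace(0, 3, 8)), np.cos(np.linspace(0, 3, 8)))
lr_img = downsample(hr, 2)
recon = reconstruct(lr_img, grad_h(hr), grad_v(hr), s=2)
```

With consistent gradient images the objective is a positive-definite quadratic (the gradient terms fix everything but the mean, which the data term pins down), so the descent converges to the true high-resolution image.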
Drawings
1. Fig. 1 is a flow chart of the proposed method. Fig. 1(a) is a flow chart of an off-line training phase of the proposed method, and fig. 1(b) is a flow chart of an on-line super-resolution magnification phase of the proposed method.
2. Fig. 2 is a diagram of the effect of gradient images on different images in the proposed method.
3. Fig. 3 is a comparison graph of the visual effect of the magnified images on the baby image by various super-resolution methods.
4. Fig. 4 is a comparison graph of the visual effect of images magnified on a head image by various super-resolution methods.
5. Fig. 5 is a comparison graph of the visual effect of images magnified on a woman image by various super-resolution methods.
Detailed Description
The method comprises an off-line training process and an on-line image super-resolution magnification process. In the off-line training process, the center and mapping matrix of each class of horizontal gradient image patch vectors, and the center and mapping matrix of each class of vertical gradient image patch vectors, are generated from the training images. In the on-line process of super-resolution magnification of a low-resolution image, a low-resolution image block is first extracted and vectorized; the horizontal gradient class of the block is determined from this vector, the mapping matrix of that class is multiplied by the vector to obtain a high-resolution vector, and inverse vectorization of this vector gives a high-resolution horizontal gradient block. Similarly, the vertical gradient class is determined from the low-resolution block vector, the mapping matrix of that class is multiplied by the vector to obtain a high-resolution vector, and inverse vectorization gives a high-resolution vertical gradient block. All the obtained high-resolution horizontal gradient blocks are then stitched into a high-resolution horizontal gradient image, and all the obtained high-resolution vertical gradient blocks into a high-resolution vertical gradient image. Finally, the output high-resolution image is solved from the low-resolution image, the high-resolution horizontal gradient image and the high-resolution vertical gradient image by an iterative method combining gradient descent and back projection.
The steps of constructing the internal image training set in the off-line training process are as follows:
Step A1) Input an image I_0 from the initial image training set and take it as the initial high-resolution image.
Step A2) Use I_0 as the high-resolution image to construct a multi-level training set of high-resolution images, taking I_0 and its multi-level down-sampled versions as the training set. This process is expressed by equation (14):
H_i = (I_0) ↓ p^i    (14)
where H_i denotes the image of the i-th layer of the high-resolution training set, ↓ denotes the down-sampling operator, and p^i is the scale factor of the down-sampling used to construct the i-th-layer high-resolution image, with p = 0.98 and i = 1, 2, …, 20. That is, from the top-layer high-resolution image I_0 a 20-layer training set of high-resolution images is constructed, the resolution of each layer being p times that of the previous layer.
Step A3) For the constructed high-resolution image training set, take the images obtained by blurring and down-sampling its images as the images of the low-resolution training set, as expressed by equation (15):
I_i = (H_i ⊗ B_i) ↓ s    (15)
where I_i denotes the low-resolution image of the i-th layer, i = 1, 2, …, 20, ⊗ denotes the convolution operator, B_i denotes a Gaussian template, and s denotes the down-sampling scale factor.
To expand the images in the low-resolution training set, the invention flips and rotates the initial images of the set. To enhance the initial low-resolution images, the invention performs iterative back projection on each low-resolution image of the training set, as in equation (16), where I_i^t denotes the low-resolution image of the i-th layer after t iterations, D denotes the matrix of the down-sampling operator, B denotes the blur matrix of the image degradation process, s denotes the amplification factor, q denotes the filter template corresponding to the blur matrix of the image degradation process, ↑ denotes the up-sampling operation, and Ĩ_i denotes the image obtained by bicubic interpolation enlargement of the initial low-resolution image I_i of the i-th layer. This iterative process is repeated for T iterations, until successive estimates are almost equal; the result I_i^T is then the image of the i-th layer of the low-resolution training image set.
Step A4) Input the next image of the initial image training set and apply steps A2 and A3 to it, obtaining further high-resolution images and the corresponding low-resolution images at all levels.
After the training image set has been constructed, the low-resolution image block vectors, the high-resolution horizontal gradient vectors and the high-resolution vertical gradient vectors must be constructed and clustered, and the center of each class and the mapping matrix within each class must be determined and stored for use in the on-line determination of the horizontal and vertical gradient images. The steps are as follows:
Step B1) For the high-resolution image H_i of the i-th layer and the corresponding low-resolution image L_i, perform the horizontal and vertical gradient extraction operations to obtain the horizontal high-resolution gradient image HH_i, the vertical high-resolution gradient image VH_i, the horizontal low-resolution gradient image HL_i and the vertical low-resolution gradient image VL_i:
HH_i = H_i ⊗ gh,  VH_i = H_i ⊗ gv,  HL_i = L_i ⊗ gh,  VL_i = L_i ⊗ gv    (17)
where ⊗ is the convolution sign, gh is the horizontal gradient template and gv is the vertical gradient template; in the invention, gh = [−1, 1] and gv = [−1, 1]^T.
Step B2) Perform the blocking operation on the images, requiring that among the extracted image blocks the high-resolution blocks of all levels have the same size and the low-resolution blocks of all levels have the same size. Blocking the images HH_i and HL_i gives the high-resolution horizontal gradient image blocks BHH_j and the corresponding low-resolution horizontal gradient image blocks BHL_j, where j denotes the index of the image block; blocking the images VH_i and VL_i in the same way gives the high-resolution vertical gradient image blocks BVH_j and the corresponding low-resolution vertical gradient image blocks BVL_j.
Step B3) Perform the blocking operation described in step B2 on the high-resolution and low-resolution images of all levels of all training sets.
step B4), vectorizing the obtained image block, wherein the vectorizing process adopts a method of extracting image block elements from top to bottom and from left to right, namely, a vector can be obtained after vectorizing an image block Bl with the size of m multiplied by n
Figure BSA0000190693520000083
Wherein B isI in l1Line, j1The elements in the column are
Figure BSA0000190693520000085
(ii) of (i)1-1)*m+j1Element such that BHL is applied to all of the low resolution horizontal gradient image blocksjVectorizing to obtain a vector
Figure BSA0000190693520000084
BHH for all high resolution horizontal gradient image blocksjVectorizing to obtain a vector
Figure BSA0000190693520000091
For all low resolution vertical gradient image blocks BVLjVectorizing to obtain a vector
Figure BSA0000190693520000092
For all high-resolution vertical gradient image blocks BVHjVectorizing to obtain a vector
Figure BSA0000190693520000093
Step B5) Cluster all horizontal gradient low-resolution image block vectors vhl_j with the K-means method to obtain the class c1 of each vector, 1 ≤ c1 ≤ K; i.e., the horizontal gradient low-resolution image block vectors are grouped into K classes. Each horizontal gradient high-resolution image block vector vhh_j belongs to the class of its corresponding horizontal gradient low-resolution image block vector vhl_j. For each class c1, 1 ≤ c1 ≤ K, stacking all low-resolution vectors vhl_j of the class as columns gives the matrix MHL(c1); stacking the corresponding high-resolution vectors vhh_j in the same order as when stacking MHL(c1) gives the matrix MHH(c1). The mapping matrix of class c1 is thus
P_c1 = MHH(c1) × ( (MHL(c1))^T × MHL(c1) + λI )^(−1) × (MHL(c1))^T    (18)
where λ is a parameter that needs to be optimized experimentally. The center position vector Chl(c1) of class c1 is then determined from the centers of all the horizontal gradient low-resolution vectors in class c1.
Step B6) Cluster all vertical gradient low-resolution image block vectors vvl_j with the K-means method to obtain the class c2 of each vector, 1 ≤ c2 ≤ K; i.e., all the vectors are grouped into K classes. For each class c2, stacking all low-resolution vectors vvl_j of the class as columns gives the matrix MVL(c2); stacking the corresponding high-resolution vectors vvh_j in the same order as when stacking MVL(c2) gives the matrix MVH(c2). The mapping matrix of class c2 is thus
P_c2 = MVH(c2) × ( (MVL(c2))^T × MVL(c2) + λI )^(−1) × (MVL(c2))^T    (19)
The center position vector Cvl(c2) of class c2 is then determined from the centers of all the low-resolution vectors in class c2.
Step B7) The class centers C_c1^h and C_c2^v of the obtained classes and the mapping matrices Pc1 and Pc2 of each class are stored.
In this way, the off-line training phase proposed by the present invention is completed.
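Steps B5 to B7 reduce to a K-means clustering of the patch vectors followed by storing per-class centers. A small self-contained Lloyd iteration as one plausible stand-in for the K-means step (K, the iteration count, and the toy data are illustrative):

```python
import numpy as np

def kmeans(vectors, K, n_iter=20, seed=0):
    """Plain Lloyd K-means. vectors: (N, d). Returns (centers, labels)."""
    rng = np.random.default_rng(seed)
    centers = vectors[rng.choice(len(vectors), K, replace=False)]
    for _ in range(n_iter):
        # squared two-norm distance of every vector to every center
        d2 = ((vectors[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(1)
        for k in range(K):
            if np.any(labels == k):          # guard against empty classes
                centers[k] = vectors[labels == k].mean(0)
    return centers, labels

# Toy patch vectors around two very well-separated centers.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.01, (50, 5)),
                  rng.normal(100.0, 0.01, (50, 5))])
centers, labels = kmeans(data, K=2)
```

Each stored center plays the role of C_c1^h or C_c2^v; the per-class sample matrices for equations (18) and (19) are gathered from `labels`.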
The input of the online amplification stage of the invention is a low-resolution image together with the class centers and mapping matrices obtained in the training stage, and the output is a high-resolution image. The specific implementation steps are as follows.
Step C1) The input low-resolution image is divided into blocks to obtain low-resolution image blocks LLB_j1 of size m × n, where j1 is the index of the image block in the image. The overlapping area between horizontally adjacent image blocks is m × (n-2), and the overlapping area between vertically adjacent image blocks is (m-2) × n.
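The overlaps m × (n−2) and (m−2) × n described in step C1 correspond to sliding an m × n window with a step of 2 pixels in each direction. A sketch of that blocking (the function name and the 8 × 8 toy image are illustrative):

```python
import numpy as np

def extract_blocks(img, m, n, step=2):
    """Slide an m x n window with the given step; horizontal neighbours
    overlap by m x (n - step), vertical neighbours by (m - step) x n."""
    H, W = img.shape
    return [img[r:r + m, c:c + n]
            for r in range(0, H - m + 1, step)
            for c in range(0, W - n + 1, step)]

img = np.arange(64, dtype=float).reshape(8, 8)
blocks = extract_blocks(img, 4, 4)      # a 3 x 3 grid of 4 x 4 blocks
```

Horizontally adjacent blocks share their last/first two columns, matching the m × (n−2) overlap.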
Step C2) For this image block LLB_j1, the horizontal gradient is computed to obtain the horizontal gradient block HB_j1, i.e.

HB_j1 = gh ⊗ LLB_j1 (20)

where gh is the template for obtaining the horizontal gradient and ⊗ denotes convolution.
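With gh = [−1, 1] and gv = [−1, 1]ᵀ as given in the training phase, the gradient blocks are plain finite differences. A sketch (the zero-padding at the trailing edge is an assumption; the patent does not specify boundary handling):

```python
import numpy as np

def horizontal_gradient(block):
    """Apply gh = [-1, 1]: out[i, j] = block[i, j+1] - block[i, j],
    with the last column zero-padded to keep the block size."""
    g = np.zeros_like(block, dtype=float)
    g[:, :-1] = block[:, 1:] - block[:, :-1]
    return g

def vertical_gradient(block):
    """Apply gv = [-1, 1]^T: out[i, j] = block[i+1, j] - block[i, j]."""
    g = np.zeros_like(block, dtype=float)
    g[:-1, :] = block[1:, :] - block[:-1, :]
    return g

ramp = np.tile(np.arange(5.0), (4, 1))   # every row is 0 1 2 3 4
gh_out = horizontal_gradient(ramp)
```

On a horizontal ramp the horizontal gradient is constant and the vertical gradient is zero, which is a quick sanity check for the templates.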
Step C3) The high-resolution horizontal gradient block is determined. The low-resolution horizontal gradient block HB_j1 in the online image is vectorized by scanning it sequentially from top to bottom and from left to right, yielding a vector v_j1^hl. According to this vector and the class centers C_c1^h stored during the training phase, the nearest class center and its class c1* are found, i.e.

c1* = argmin_c1 ||v_j1^hl − C_c1^h||_2^2 (21)

where ||x||_2^2 is the square of the two-norm of the vector x. Then, according to the vector v_j1^hl and the mapping matrix P_c1* of class c1*, the vector v_j1^hh is obtained by multiplying them, i.e.

v_j1^hh = P_c1* × v_j1^hl

Then, the vector v_j1^hh is inverse-vectorized to obtain a high-resolution image block HHB_j1 of size (m × s) × (n × s), where s is the magnification factor as described above and m × n is the size of the low-resolution block as described above; that is, the value of the j2-th element of the vector v_j1^hh is the value of the element on row ⌊j2/(m·s)⌋ + 1, column (j2 mod (m·s)) of the image block HHB_j1, where ⌊y⌋ denotes the largest integer less than y.
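Steps C3 and C6 share the same per-block procedure: vectorize, find the nearest stored class center in the sense of equation (21), multiply by that class's mapping matrix, and reshape back into a block. A toy sketch (the column-major scan order, the 2 × 2 block, and s = 2 are illustrative assumptions):

```python
import numpy as np

def upscale_block(lr_block, centers, mappings, hr_shape):
    """centers: (K, d_l); mappings: list of K (d_h, d_l) matrices."""
    v = lr_block.flatten(order="F")      # top-to-bottom, left-to-right scan
    c_star = np.argmin(((centers - v) ** 2).sum(axis=1))    # eq. (21)
    v_h = mappings[c_star] @ v           # high-resolution vector
    return v_h.reshape(hr_shape, order="F"), c_star

# Toy setup: two classes; class 1's mapping doubles and tiles the vector.
centers = np.stack([np.zeros(4), np.ones(4) * 10.0])
mappings = [np.zeros((16, 4)), 2.0 * np.tile(np.eye(4), (4, 1))]
block = np.full((2, 2), 10.0)
hr, c_star = upscale_block(block, centers, mappings, (4, 4))
```

A constant block of value 10 is nearest to the second center, so the second (doubling) mapping is applied.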
Step C4) The enlarged horizontal gradient image is obtained. Performing the operations described in steps C2 and C3 above on each low-resolution image block yields the corresponding high-resolution horizontal gradient image block HHB_j1. These blocks are then spliced together to obtain the high-resolution image Igh. During splicing, a weighting method is adopted in which the weight of each block is proportional to the variance of its pixel values; that is, for block HHB_o1 and block HHB_o2, the pixel values in the overlapping region are:

Igh(i, j) = HHB_o1(m1, n1) * w1 + HHB_o2(m2, n2) * w2 (22)

where (m1, n1) is the position index of the (i, j)-th pixel of the image in block HHB_o1, (m2, n2) is the position index of the (i, j)-th pixel of the image in block HHB_o2, w1 is the weight of block HHB_o1, proportional to the variance σ_o1² of the pixel values in block HHB_o1, w2 is the weight of block HHB_o2, proportional to the variance σ_o2² of the pixel values in block HHB_o2, and w1 + w2 = 1.
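The blend of equation (22), with weights proportional to block variance and normalized so that w1 + w2 = 1, can be sketched for a single pair of overlapping pixels as follows (the names and the small epsilon guard are illustrative):

```python
import numpy as np

def blend_pixel(val1, var1, val2, var2, eps=1e-12):
    """Variance-proportional weights with w1 + w2 = 1 (eq. 22)."""
    w1 = var1 / (var1 + var2 + eps)
    w2 = 1.0 - w1
    return val1 * w1 + val2 * w2

b1 = np.array([[0.0, 4.0], [4.0, 0.0]])   # high-variance block
b2 = np.full((2, 2), 2.0)                 # flat block, zero variance
out = blend_pixel(b1[0, 1], b1.var(), b2[0, 1], b2.var())
```

The flat block receives zero weight, so the blended pixel is taken entirely from the detailed block; equal variances give an even 50/50 average.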
Step C5) For this image block LLB_j1, the vertical gradient is computed to obtain the vertical gradient block VB_j1, i.e.

VB_j1 = gv ⊗ LLB_j1 (23)

where gv is the template for obtaining the vertical gradient.
Step C6) The high-resolution vertical gradient block is determined. The low-resolution vertical gradient block VB_j1 in the online image is vectorized by scanning it sequentially from top to bottom and from left to right, yielding a vector v_j1^vl. According to this vector and the class centers C_c2^v stored during the training phase, the nearest class center and its class c2* are found, i.e.

c2* = argmin_c2 ||v_j1^vl − C_c2^v||_2^2 (24)

Then, according to the vector v_j1^vl and the mapping matrix P_c2* of class c2*, the vector v_j1^vh is obtained by multiplying them, i.e.

v_j1^vh = P_c2* × v_j1^vl

Then, the vector v_j1^vh is likewise inverse-vectorized to obtain a high-resolution vertical gradient image block HVB_j1 of size (m × s) × (n × s).
Step C7) The enlarged vertical gradient image is obtained. The operations described in steps C5 and C6 above are performed on each low-resolution image block to obtain the corresponding high-resolution vertical gradient image block HVB_j1. These blocks are then spliced together to obtain the high-resolution image Igv. During splicing, a weighting method is adopted in which the weight of each block is proportional to the variance of its pixel values; that is, for block HVB_o1 and block HVB_o2, the pixel values in the overlapping region are:

Igv(i, j) = HVB_o1(m1, n1) * w1 + HVB_o2(m2, n2) * w2 (25)

where (m1, n1) is the position index of the (i, j)-th pixel of the image in block HVB_o1, (m2, n2) is the position index of the (i, j)-th pixel of the image in block HVB_o2, w1 is the weight of block HVB_o1, proportional to the variance σ_o1² of the pixel values in block HVB_o1, w2 is the weight of block HVB_o2, proportional to the variance σ_o2² of the pixel values in block HVB_o2, and w1 + w2 = 1.
Step C8) The low-resolution image IL input in the online process is amplified with the bicubic interpolation method to obtain the initial high-resolution image Ic.
Step C9) Based on the low-resolution image IL input in the online process, the enlarged horizontal gradient image Igh obtained above, and the enlarged vertical gradient image Igv, the final high-resolution image is obtained by optimizing the objective function, i.e.

Ih = argmin_I { ||gh ⊗ I − Igh||_2^2 + ||gv ⊗ I − Igv||_2^2 + λ_L ||DBI − IL||_2^2 } (26)

where gh and gv are the horizontal and vertical gradient extraction templates, respectively, as described above, D is the downsampling matrix, B is the blurring matrix representing the image degradation process, as described above, and λ_L is a balance parameter. To solve (26), the invention adopts an iterative algorithm for iterative optimization. The initial condition of this iteration is the high-resolution image

Ih(0) = Ic (27)

where 0 denotes the 0th iteration and Ic is the image obtained in step C8. The first two terms in equation (26) are updated with the gradient descent method, and the last term is updated with the iterative back-projection method; the two updates are performed alternately until convergence. The converged image is the output high-resolution image.
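The alternation described for equations (26)–(27) can be sketched as follows. This is a simplified stand-in: nearest-neighbour replication replaces bicubic interpolation, the blur B is omitted, and the step size and iteration count are illustrative assumptions, not the patent's exact choices:

```python
import numpy as np

def hgrad(x):                         # gh = [-1, 1], last column zero
    g = np.zeros_like(x); g[:, :-1] = x[:, 1:] - x[:, :-1]; return g

def vgrad(x):                         # gv = [-1, 1]^T, last row zero
    g = np.zeros_like(x); g[:-1, :] = x[1:, :] - x[:-1, :]; return g

def hgrad_T(y):                       # adjoint of hgrad
    x = np.zeros_like(y); x[:, 1:] += y[:, :-1]; x[:, :-1] -= y[:, :-1]; return x

def vgrad_T(y):                       # adjoint of vgrad
    x = np.zeros_like(y); x[1:, :] += y[:-1, :]; x[:-1, :] -= y[:-1, :]; return x

def reconstruct(IL, Igh, Igv, s=2, n_iter=300, eta=0.1):
    Ih = np.kron(IL, np.ones((s, s)))   # stand-in for bicubic init, eq. (27)
    for _ in range(n_iter):
        # gradient-descent update of the two gradient-fidelity terms of (26)
        Ih = Ih - eta * (hgrad_T(hgrad(Ih) - Igh) + vgrad_T(vgrad(Ih) - Igv))
        # back-projection update of the data-fidelity term (blur omitted)
        Ih += np.kron(IL - Ih[::s, ::s], np.ones((s, s))) / (s * s)
    return Ih

HT = np.add.outer(np.arange(8.0), np.arange(8.0))   # smooth ground truth
IL, Igh, Igv = HT[::2, ::2], hgrad(HT), vgrad(HT)
Ih = reconstruct(IL, Igh, Igv)
```

With exact gradient images, the ground truth is a fixed point of both updates, so the iterate should move from the blocky initialization toward it.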
Results and analysis of the experiments
All experiments of the invention were carried out in MATLAB. To verify the validity and feasibility of the proposed method, the standard image test set Set5 was used. The invention does not adopt an external natural-image training set; for each input low-resolution image, 20 images generated from internal samples of that image and of the images obtained by flipping and rotating it are used. In the experiments, the parameter λ is 0.15, the size of the low-resolution image block is 9 × 9, and the number of extracted image blocks is one million; that is, there are one million low-resolution image blocks and one million corresponding high-resolution image blocks. The iteration number t of the iterative back-projection method is 20, and the peak signal-to-noise ratio (PSNR) is used to judge the quality of the images.
The classical image super-resolution methods Zeyde, ANR, A+, JOR, and SRCNN were selected for comparison with the proposed method. Table 1 shows the experimental results of 3 × 3 image amplification. It can be seen from the results that the average peak signal-to-noise ratio (PSNR) of the proposed method is 2.53 dB higher than that of the Zeyde method, 1 dB higher than that of the ANR method, 0.34 dB higher than that of the A+ method, 0.35 dB higher than that of the JOR method, and 0.54 dB higher than that of the SRCNN method. This indicates that the proposed method is superior to these traditional super-resolution methods.
Table 1. Experimental results at 3 × 3 image magnification
To further analyze the effectiveness of the proposed algorithm, Fig. 2 shows the horizontal and vertical gradient images of the baby, head, and woman images at a magnification of 3 × 3, and Figs. 3, 4, and 5 show the visual-effect comparison images of these three images at a magnification of 3 × 3, respectively. In terms of human perception, the images reconstructed by the proposed method have richer detail information and no false edges.

Claims (3)

1. An image super-resolution method, characterized by comprising an offline training process and an online image super-resolution amplification process. In the offline training process, the center and the mapping matrix of each class of horizontal-gradient image block vectors and the center and the mapping matrix of each class of vertical-gradient image block vectors are generated from training images. In the online super-resolution amplification process of a low-resolution image, a low-resolution image block is first extracted and vectorized; the horizontal-gradient class to which the image block belongs is determined from the vector and the class centers stored in the training stage; a high-resolution vector is obtained by multiplying the mapping matrix of that class with the vector; and the high-resolution horizontal gradient block is obtained by inverse-vectorizing it. Similarly, the vertical-gradient class of the low-resolution image block vector is determined, a high-resolution vector is obtained by multiplying the mapping matrix of that class with the vector, and this vector is inverse-vectorized to obtain a high-resolution vertical gradient block. All the obtained high-resolution horizontal gradient blocks are then spliced together to obtain a high-resolution horizontal gradient image, and all the obtained high-resolution vertical gradient blocks are likewise spliced together to obtain a high-resolution vertical gradient image. Finally, according to the input low-resolution image, the obtained high-resolution horizontal gradient image, and the obtained high-resolution vertical gradient image, an iterative gradient descent method and a back-projection method are used to solve for the output high-resolution image.
2. The image super-resolution method according to claim 1, characterized in that: the process of constructing an internal image training set in the offline training process comprises the following steps A1 to A4, and the process of vectorizing the image blocks, clustering them, and determining and storing the center and the mapping matrix of each class comprises the following steps B1 to B7;
step A1) an image I0 in the initial image training set is input and taken as an initial high-resolution image;
step A2) the invention takes I0 as a high-resolution image and constructs a training set of multi-level high-resolution images, in which the image I0 and its multi-level down-sampled images serve as the training set of multi-level high-resolution images; this process can be represented by the following equation (14):

Hi = (I0) ↓ p^i (14)

where Hi represents the image of the i-th layer in the high-resolution image training set, ↓ represents the down-sampling operator, p^i is the down-sampling scaling factor for constructing the i-th layer high-resolution image, p = 0.98, and i = 1, 2, ..., 20; in this way, one image I0 can be used to construct a training set of 20 layers of high-resolution images, the resolution of each layer being reduced by a factor of p relative to the previous layer,
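Equation (14) builds a pyramid of 20 slightly downscaled copies of I0. A sketch of this construction (the nearest-neighbour resampling stands in for the unspecified ↓ operator):

```python
import numpy as np

def downsample(img, factor):
    """Nearest-neighbour stand-in for the down-sampling operator of eq. (14);
    the patent does not fix the interpolation kernel."""
    h = max(1, int(round(img.shape[0] * factor)))
    w = max(1, int(round(img.shape[1] * factor)))
    rows = (np.arange(h) / factor).astype(int).clip(0, img.shape[0] - 1)
    cols = (np.arange(w) / factor).astype(int).clip(0, img.shape[1] - 1)
    return img[np.ix_(rows, cols)]

def build_pyramid(I0, p=0.98, levels=20):
    """H_i = I0 downscaled by p**i, for i = 1..levels (eq. 14)."""
    return [downsample(I0, p ** i) for i in range(1, levels + 1)]

I0 = np.random.default_rng(3).random((100, 100))
pyr = build_pyramid(I0)
```

Each level shrinks by 2% relative to the previous one, so a 100 × 100 input yields a first level of 98 × 98 and a twentieth level of roughly two-thirds the original side length.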
step A3) after the high-resolution image training set is constructed, the images obtained by blurring and down-sampling the images in the high-resolution image training set are taken as the images in the low-resolution image training set; this process can be represented by equation (15):

Ii = (Bi ⊗ Hi) ↓ s (15)

where Ii represents the low-resolution image of the i-th layer, i = 1, 2, ..., 20, ⊗ represents the convolution operator, Bi represents the Gaussian template in the image degradation process, and s represents the down-sampling scale factor,
in order to enlarge the low-resolution image training set, the invention flips and rotates the initial images in the training set and also performs the operations of steps A2 to A3 on the images obtained by flipping and rotating; in addition, in order to enhance the initial low-resolution images, the invention iteratively optimizes the low-resolution images in the training set with the iterative back-projection method, i.e.

Ji(t+1) = Ji(t) + q ⊗ ((Ii − D B Ji(t)) ↑ s) (16)

where Ji(t) represents the image of the i-th layer after t iterations, D represents the matrix of the down-sampling operator, B represents the matrix corresponding to the Gaussian template, s represents the amplification factor, q represents the filter template corresponding to the blur matrix in the image degradation process, ↑ s represents the up-sampling operation, and the initial image Ji(0) is the image obtained by bicubic interpolation amplification of the i-th layer low-resolution initial image Ii; after T iterations, upon convergence, D B Ji(T) and Ii are basically the same, and D B Ji(T) is taken as the image in the training image set of the i-th layer low resolution,
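A simplified sketch of the iterative back-projection of equation (16), with a 1-2-1 separable blur standing in for the Gaussian template B and nearest-neighbour replication standing in for both the bicubic amplification and the template q (all illustrative assumptions):

```python
import numpy as np

def blur3(x):
    """Simple 1-2-1 separable blur as a stand-in for the Gaussian template."""
    p = np.pad(x, 1, mode="edge")
    x = (p[:-2] + 2 * p[1:-1] + p[2:]) / 4.0        # vertical pass
    return (x[:, :-2] + 2 * x[:, 1:-1] + x[:, 2:]) / 4.0   # horizontal pass

def ibp(I_l, s=2, T=20):
    """Iterative back-projection in the spirit of eq. (16)."""
    J = np.kron(I_l, np.ones((s, s)))               # J^(0): upscaled initial image
    for _ in range(T):
        residual = I_l - blur3(J)[::s, ::s]         # I_i - D B J^(t)
        J = J + np.kron(residual, np.ones((s, s)))  # back-project the residual
    return J

I_l = np.add.outer(np.arange(4.0), np.arange(4.0)) / 6.0
J = ibp(I_l)
res = np.abs(I_l - blur3(J)[::2, ::2]).max()        # consistency after T steps
```

The back-projection drives the re-degraded image D B J toward the observed low-resolution image, which is the convergence criterion stated after equation (16).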
step A4) the images in the next initial image training set are input, and the above operations A2 and A3 are performed on them to obtain more high-resolution images and corresponding low-resolution images at more levels,

after the construction of the image training set is completed, the low-resolution image block vectors, the low-resolution horizontal gradient vectors, the high-resolution horizontal gradient vectors, the low-resolution vertical gradient vectors and the high-resolution vertical gradient vectors are constructed; these vectors are clustered, and the center of each class and the mapping matrix of each class are determined and stored, for determining the horizontal-gradient high-resolution image and the vertical-gradient high-resolution image in the online stage; this process comprises the following steps B1 to B7;
step B1) this operation is performed on all images in the training set: on the high-resolution image Hi obtained at the i-th layer and the corresponding low-resolution image Li, the horizontal-gradient and vertical-gradient image extraction operations are performed to obtain the horizontal high-resolution gradient image HHi, the vertical high-resolution gradient image VHi, the horizontal low-resolution gradient image HLi and the vertical low-resolution gradient image VLi; that is,

HHi = gh ⊗ Hi,  VHi = gv ⊗ Hi,  HLi = gh ⊗ Li,  VLi = gv ⊗ Li (17)

where the symbol ⊗ is the convolution sign, gh is the horizontal gradient template and gv is the vertical gradient template; in the present invention, gh = [-1, 1] and gv = [-1, 1]^T,
step B2) a blocking operation is performed on the images; when blocking an image, it is required that, among the extracted image blocks, the high-resolution image blocks at all levels have the same size and the low-resolution image blocks at all levels have the same size; by blocking the images HHi and HLi, the high-resolution horizontal gradient image blocks BHHj and the corresponding low-resolution horizontal gradient image blocks BHLj can be obtained, where j denotes the index of the image block; a similar blocking operation is performed on the images VHi and VLi, so that the high-resolution vertical gradient image blocks BVHj and the corresponding low-resolution vertical gradient image blocks BVLj can be obtained, where j denotes the index of the image block,
step B3) the blocking operations described in steps B1 to B2 above are performed on all the high-resolution images and low-resolution images obtained at all levels of the training set,
step B4) all the obtained image blocks are vectorized, the vectorization extracting the image block elements from top to bottom and from left to right; that is, vectorizing an image block Bl of size m × n yields a vector v^Bl in which the element on row i1, column j1 of Bl is the ((i1 - 1) * m + j1)-th element of v^Bl; in this way, vectorizing all the low-resolution horizontal gradient image blocks BHLj yields the vectors v_j^hl, vectorizing all the high-resolution horizontal gradient image blocks BHHj yields the vectors v_j^hh, vectorizing all the low-resolution vertical gradient image blocks BVLj yields the vectors v_j^vl, and vectorizing all the high-resolution vertical gradient image blocks BVHj yields the vectors v_j^vh,
step B5) then, all the horizontal-gradient low-resolution image block vectors v_j^hl are clustered with the K-means method to obtain the class c1 to which each vector belongs, 1 ≤ c1 ≤ K; that is, the horizontal-gradient low-resolution image block vectors are grouped into K classes, and each horizontal-gradient high-resolution image block vector v_j^hh is assigned to the class of its corresponding horizontal-gradient low-resolution image block vector v_j^hl; then, for each class c1, 1 ≤ c1 ≤ K, all the low-resolution vectors v_j^hl in this class are stacked to obtain a matrix MHL(c1), in which each low-resolution vector v_j^hl is one column; the high-resolution vectors v_j^hh corresponding to all the low-resolution vectors in this class c1 are stacked in the same order as when stacking MHL(c1), which yields a matrix MHH(c1), and thus the mapping matrix of class c1 is

Pc1 = MHH(c1) × ((MHL(c1))^T × MHL(c1) + λI)^-1 × (MHL(c1))^T (18)

where λ is a parameter that needs to be optimized experimentally; then, the center position vector C_c1^h of class c1 is determined as the center of all the horizontal-gradient low-resolution vectors in class c1,
step B6) for all the vertical-gradient low-resolution image block vectors v_j^vl, clustering is performed with the K-means method to obtain the class c2 to which each vector belongs, 1 ≤ c2 ≤ K; that is, all the vertical-gradient image block vectors are grouped into K classes; for each class c2, all the low-resolution vectors v_j^vl in this class are stacked to obtain a matrix MVL(c2), in which each low-resolution vector v_j^vl is one column; the high-resolution vectors v_j^vh corresponding to all the low-resolution vectors in this class c2 are stacked in the same order as when stacking MVL(c2), which yields a matrix MVH(c2), and thus the mapping matrix of class c2 is

Pc2 = MVH(c2) × ((MVL(c2))^T × MVL(c2) + λI)^-1 × (MVL(c2))^T (19)

then, the center position vector C_c2^v of class c2 is determined as the center of all the low-resolution vectors in class c2,
step B7) the class centers C_c1^h and C_c2^v of the obtained classes and the mapping matrices Pc1 and Pc2 of each class are stored.
3. The image super-resolution method according to claim 1, characterized in that: the input of the online amplification stage is a low-resolution image together with the class centers and mapping matrices obtained in the training stage, and the output is a high-resolution image; this stage is implemented as described in the following steps C1 to C9,
step C1) the input low-resolution image is divided into blocks to obtain low-resolution image blocks LLB_j1 of size m × n, where j1 is the index of the image block in the image; the overlapping area between horizontally adjacent image blocks is m × (n-2), and the overlapping area between vertically adjacent image blocks is (m-2) × n,
step C2) for this image block LLB_j1, the horizontal gradient is computed to obtain the horizontal gradient block HB_j1, i.e.

HB_j1 = gh ⊗ LLB_j1 (20)

where gh is the template for obtaining the horizontal gradient,
step C3) the high-resolution horizontal gradient block is determined; the low-resolution horizontal gradient block HB_j1 in the online image is vectorized by scanning it sequentially from top to bottom and from left to right, yielding a vector v_j1^hl; according to this vector and the class centers C_c1^h stored during the training phase, the nearest class center and its class c1* are found, i.e.

c1* = argmin_c1 ||v_j1^hl − C_c1^h||_2^2 (21)

where ||x||_2^2 is the square of the two-norm of the vector x; then, according to the vector v_j1^hl and the mapping matrix P_c1* of class c1*, the vector v_j1^hh is obtained by multiplying them, i.e.

v_j1^hh = P_c1* × v_j1^hl

then, the vector v_j1^hh is inverse-vectorized to obtain a high-resolution image block HHB_j1 of size (m × s) × (n × s), where s is the magnification factor as described above and m × n is the size of the low-resolution block as described above; that is, the value of the j2-th element of the vector v_j1^hh is the value of the element on row ⌊j2/(m·s)⌋ + 1, column (j2 mod (m·s)) of the image block HHB_j1, where ⌊y⌋ represents the largest integer less than y and mod represents the remainder operation in division,
step C4) the enlarged horizontal gradient image is obtained; performing the operations described in steps C2 and C3 above on each low-resolution image block yields the corresponding high-resolution horizontal gradient image block HHB_j1; these blocks are then spliced together to obtain the high-resolution image Igh; during splicing, a weighting method is adopted in which the weight of each block is proportional to the variance of its pixel values; that is, for block HHB_o1 and block HHB_o2, the pixel values in the overlapping region are:

Igh(i, j) = HHB_o1(m1, n1) * w1 + HHB_o2(m2, n2) * w2 (22)

where (m1, n1) is the position index of the (i, j)-th pixel of the image in block HHB_o1, (m2, n2) is the position index of the (i, j)-th pixel of the image in block HHB_o2, w1 is the weight of block HHB_o1, proportional to the variance σ_o1² of the pixel values in block HHB_o1, w2 is the weight of block HHB_o2, proportional to the variance σ_o2² of the pixel values in block HHB_o2, and w1 + w2 = 1,
step C5) for the image block LLB_j1 obtained in step C1, the vertical gradient is computed to obtain the vertical gradient block VB_j1, i.e.

VB_j1 = gv ⊗ LLB_j1 (23)

where gv is the template for obtaining the vertical gradient,
step C6) the high-resolution vertical gradient block is determined; the low-resolution vertical gradient block VB_j1 in the online image is vectorized by scanning it sequentially from top to bottom and from left to right, yielding a vector v_j1^vl; according to this vector and the class centers C_c2^v stored during the training phase, the nearest class center and its class c2* are found, i.e.

c2* = argmin_c2 ||v_j1^vl − C_c2^v||_2^2 (24)

then, according to the vector v_j1^vl and the mapping matrix P_c2* of class c2*, the vector v_j1^vh is obtained by multiplying them, i.e.

v_j1^vh = P_c2* × v_j1^vl

then, the vector v_j1^vh is likewise inverse-vectorized to obtain a high-resolution vertical gradient image block HVB_j1 of size (m × s) × (n × s),
step C7) the operations described in steps C5 and C6 above are performed on each low-resolution image block to obtain the corresponding high-resolution vertical gradient image block HVB_j1; these blocks are then spliced together to obtain the high-resolution image Igv; during splicing, a weighting method is adopted in which the weight of each block is proportional to the variance of its pixel values; that is, for block HVB_o1 and block HVB_o2, the pixel values in the overlapping region are:

Igv(i, j) = HVB_o1(m1, n1) * w1 + HVB_o2(m2, n2) * w2 (25)

where (m1, n1) is the position index of the (i, j)-th pixel of the image in block HVB_o1, (m2, n2) is the position index of the (i, j)-th pixel of the image in block HVB_o2, w1 is the weight of block HVB_o1, proportional to the variance σ_o1² of the pixel values in block HVB_o1, w2 is the weight of block HVB_o2, proportional to the variance σ_o2² of the pixel values in block HVB_o2, and w1 + w2 = 1,
step C8) based on the low-resolution image IL input in the online process, the bicubic interpolation amplification method is adopted to obtain the initial high-resolution image Ic,
step C9) based on the low-resolution image IL input in the online process, the enlarged horizontal gradient image Igh obtained above and the enlarged vertical gradient image Igv, the final high-resolution image is obtained by optimizing the objective function, i.e.

Ih = argmin_I { ||gh ⊗ I − Igh||_2^2 + ||gv ⊗ I − Igv||_2^2 + β ||DBI − IL||_2^2 } (26)

where gh and gv are the horizontal and vertical gradient extraction templates described above, D is the downsampling matrix, B is the matrix representing the image degradation process, and β is a balance parameter; to solve (26), the invention adopts an iterative algorithm to iteratively optimize the image; the initial condition of the iteration is the high-resolution image

Ih(0) = Ic (27)

where 0 denotes the 0th iteration; the first two terms in equation (26) are updated with the gradient descent method, the last term is updated with the iterative back-projection method, and the two updates are performed alternately until convergence; the converged image is the output high-resolution image.
CN201910889471.5A 2019-09-12 2019-09-12 Super-resolution method based on neighborhood regression of internal sample Active CN110674862B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910889471.5A CN110674862B (en) 2019-09-12 2019-09-12 Super-resolution method based on neighborhood regression of internal sample

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910889471.5A CN110674862B (en) 2019-09-12 2019-09-12 Super-resolution method based on neighborhood regression of internal sample

Publications (2)

Publication Number Publication Date
CN110674862A true CN110674862A (en) 2020-01-10
CN110674862B CN110674862B (en) 2023-05-26

Family

ID=69078277

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910889471.5A Active CN110674862B (en) 2019-09-12 2019-09-12 Super-resolution method based on neighborhood regression of internal sample

Country Status (1)

Country Link
CN (1) CN110674862B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021258529A1 (en) * 2020-06-22 2021-12-30 北京大学深圳研究生院 Image resolution reduction and restoration method, device, and readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108257108A (en) * 2018-02-07 2018-07-06 浙江师范大学 A kind of super-resolution image reconstruction method and system
CN108764368A (en) * 2018-06-07 2018-11-06 西安邮电大学 A kind of image super-resolution rebuilding method based on matrix mapping
CN109215093A (en) * 2018-07-27 2019-01-15 深圳先进技术研究院 Low dosage PET image reconstruction method, device, equipment and storage medium
CN110084750A (en) * 2019-04-12 2019-08-02 浙江师范大学 Single image super-resolution method based on multilayer ridge regression

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108257108A (en) * 2018-02-07 2018-07-06 浙江师范大学 A kind of super-resolution image reconstruction method and system
CN108764368A (en) * 2018-06-07 2018-11-06 西安邮电大学 A kind of image super-resolution rebuilding method based on matrix mapping
CN109215093A (en) * 2018-07-27 2019-01-15 深圳先进技术研究院 Low dosage PET image reconstruction method, device, equipment and storage medium
CN110084750A (en) * 2019-04-12 2019-08-02 浙江师范大学 Single image super-resolution method based on multilayer ridge regression

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021258529A1 (en) * 2020-06-22 2021-12-30 北京大学深圳研究生院 Image resolution reduction and restoration method, device, and readable storage medium

Also Published As

Publication number Publication date
CN110674862B (en) 2023-05-26

Similar Documents

Publication Publication Date Title
Anwar et al. Densely residual laplacian super-resolution
Yuan et al. Unsupervised image super-resolution using cycle-in-cycle generative adversarial networks
CN108475415B (en) Method and system for image processing
Ren et al. Single image super-resolution via adaptive high-dimensional non-local total variation and adaptive geometric feature
Li et al. Learning a deep dual attention network for video super-resolution
US8538200B2 (en) Systems and methods for resolution-invariant image representation
Ren et al. Single image super-resolution using local geometric duality and non-local similarity
CN110689483B (en) Image super-resolution reconstruction method based on depth residual error network and storage medium
Liu et al. Effective image super resolution via hierarchical convolutional neural network
CN108830792B (en) Image super-resolution method using multi-class dictionary
CN111626927B (en) Binocular image super-resolution method, system and device adopting parallax constraint
Fan et al. Compressed multi-scale feature fusion network for single image super-resolution
CN111553867B (en) Image deblurring method and device, computer equipment and storage medium
Liang et al. Improved non-local iterative back-projection method for image super-resolution
Liu et al. Multi-scale residual hierarchical dense networks for single image super-resolution
CN110097503B (en) Super-resolution method based on neighborhood regression
Tang et al. Deep residual networks with a fully connected reconstruction layer for single image super-resolution
CN108765287B (en) Image super-resolution method based on non-local mean value
Yang et al. Hierarchical accumulation network with grid attention for image super-resolution
CN110674862B (en) Super-resolution method based on neighborhood regression of internal sample
CN115867933A (en) Computer-implemented method, computer program product and system for processing images
CN116188272B (en) Two-stage depth network image super-resolution reconstruction method suitable for multiple fuzzy cores
CN116029943A (en) Infrared image super-resolution enhancement method based on deep learning
Ren et al. Compressed image restoration via deep deblocker driven unified framework
Krishna et al. A Trained CNN based Resolution Enhancement of Digital Images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant