CN108090873B - Pyramid face image super-resolution reconstruction method based on regression model - Google Patents

Pyramid face image super-resolution reconstruction method based on regression model Download PDF

Info

Publication number
CN108090873B
CN108090873B CN201711381261.2A CN201711381261A CN108090873B CN 108090873 B CN108090873 B CN 108090873B CN 201711381261 A CN201711381261 A CN 201711381261A CN 108090873 B CN108090873 B CN 108090873B
Authority
CN
China
Prior art keywords
image
resolution
low
face image
resolution face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201711381261.2A
Other languages
Chinese (zh)
Other versions
CN108090873A (en
Inventor
于明
熊敏
刘依
郭迎春
于洋
师硕
毕容甲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hebei University of Technology
Original Assignee
Hebei University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hebei University of Technology filed Critical Hebei University of Technology
Priority to CN201711381261.2A priority Critical patent/CN108090873B/en
Publication of CN108090873A publication Critical patent/CN108090873A/en
Application granted granted Critical
Publication of CN108090873B publication Critical patent/CN108090873B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T3/4076Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution using the original low-resolution images to iteratively correct the high-resolution images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4007Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Battery Electrode And Active Subsutance (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a pyramid face image super-resolution reconstruction method based on a regression model, which relates to the enhancement or restoration of images and utilizes the characteristic that the images have non-local similarity, searching similar blocks of reconstructed image blocks in corresponding characteristic images of the low-resolution face images in the test set to obtain a position set of all the similar blocks, taking face image blocks of all the low-resolution images in the position set in the training set as a low-resolution training set corresponding to the low-resolution face image blocks in the test set, and constructing a constraint condition by using the sum of the distance between the characteristic image blocks corresponding to the low-resolution face image blocks in the test set and the characteristic image blocks corresponding to the low-resolution face image blocks in the training set and the distance between the characteristic image blocks corresponding to the face image blocks after interpolation amplification of the low-resolution images in the test set and the characteristic image blocks corresponding to the high-resolution image blocks in the training set; the method overcomes the defects in the face image reconstruction process in the prior art.

Description

Pyramid face image super-resolution reconstruction method based on regression model
Technical Field
The technical scheme of the invention relates to image enhancement or restoration, in particular to a pyramid face image super-resolution reconstruction method based on a regression model.
Background
In the process of image acquisition, due to the limitation of an imaging system and the influence of environmental factors, the acquired image and a real scene often have deviation. How to improve the spatial resolution of images and improve the image quality has been an important problem that is solved by image acquisition technology. With the development of science and technology, the performance of hardware equipment of an imaging system is better, but the method for improving the image quality by improving the hardware system needs high cost. On the basis that the hardware level reaches a certain height, the method for improving the image quality through the software technology becomes an economic and effective method, and Super Resolution (SR) is an effective method based on the method.
In a broad sense, the image super-resolution reconstruction method is mainly classified into a super-resolution reconstruction method based on a plurality of images and a super-resolution reconstruction method based on a single image. The latter has a wide application range and a good learning effect, and therefore, has become one of the important points of research of numerous scholars in recent years. For example, document 1 proposes a method for multi-layer face super-resolution reconstruction based on neighborhood embedding and an intermediate dictionary with position limitation, which performs super-resolution reconstruction by using manifold limitation of a local geometric structure of an image block, captures a degradation process of an image, and enhances consistency between a reconstructed high-resolution face image and an original high-resolution image by a method for constructing an intermediate dictionary, so as to reconstruct a high-quality face image, but a mapping relationship obtained by directly applying a low-resolution image to the high-resolution image is not completely suitable for the high-resolution image, and a difference between the low-resolution image and the high-resolution image easily causes an error of image reconstruction. CN103824272B discloses a face super-resolution reconstruction method based on K neighbor re-recognition, in which the method updates the recognized neighbor image blocks by using the geometric information of the low-resolution manifold and the high-resolution manifold, the weight coefficient is calculated from the re-recognized neighbor image blocks, the information provided by the high-resolution image makes up the deficiency of the information provided by the low-resolution image, and the quality of the reconstructed image is greatly improved, however, in the method, after one search of the K neighbor image blocks, one search of the neighbor blocks is performed again in all the high-resolution image blocks, and then the image block with the most repetition times is selected as the training image block, and the efficiency of the reconstruction method is reduced by the two search processes and the one comparison process. The two face super-resolution reconstruction methods based on neighborhood embedding both have the defect that the image is easy to generate a fuzzy phenomenon due to over-fitting or under-fitting. In order to solve the problem, sparse priori knowledge is introduced into face super-resolution reconstruction, CN103325104A proposes a face image super-resolution reconstruction method based on iterative sparse expression, in which a high-resolution face estimation image is linearly expressed by using a high-resolution face image dictionary, the obtained high-resolution face estimation result is converged to a stable value by using a local linear regression method, and a final reconstructed face image is obtained. The super-resolution reconstruction method based on neighborhood embedding and sparse expression has the defect that the reconstructed face image still cannot meet the requirement of people on high-quality images. In order to fully utilize the characteristics of similarity of different face images, document 2 proposes a face image super-resolution reconstruction method based on position blocks, which directly uses face image blocks at the same positions in a training set to form a set to reconstruct a face image, assuming that image blocks at the same positions of different face images have the same image structure. Document 3 proposes to select and reconstruct image blocks in the same category as the input block in the training set by adding a low rank constraint, but this method has a drawback of being overly dependent on the training set and not utilizing the properties of the input image itself. Document 4 proposes to construct a weight matrix according to the distance between an input block and a block at the same position in a training set to solve a mapping matrix, however, in the method based on the position block, the mapping relationship between face image blocks is trained from low-resolution face image blocks, the relationship between high-resolution face image blocks is not considered, which may affect the reconstruction effect of the face image, and the reconstruction process of the face image cannot reflect the attenuation process of the image, and the reconstructed high-resolution face image has the defect of a local ghost phenomenon.
In summary, the prior art of the face image super-resolution reconstruction method has the defects that the problem that the difference existing between high-resolution images affects the quality of the reconstructed images is not solved, the defect that the reconstruction process of the face images cannot truly reflect the degradation process of the face images exists, and the phenomenon that the reconstructed face images still have local ghosts still exists.
The prior art papers referred to in the above text are derived from the following:
document 1: jiang, J., Hu, R., Wang, Z., & Han, Z. (2014), Face super-resolution video multimedia layer-constrained iterative neighbor embedding and intermediate differential learning IEEE Transactions on Image Processing,23(10), 4220-.
Document 2: ma, x, Zhang, j, & Qi, c. (2010), hall accounting face by position-patch, pattern Recognition,43(6), 2224-.
Document 3: gao, G., ding, X.Y., Huang, P., Zhou, Q., Wu, S., & Yue, D. (2016.) Local-Constrained Double Low-Rank reproduction for efficient surface halogen, IEEE Access,4, 8775-.
Document 4: jiang, J., Chen, C., Ma, J., Wang, Z., & Hu, R. (2017). SRLSP A face image super-resolution algorithm with local structure rule IEEE Transactions on Multimedia,19(1),27-40.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: providing a pyramid face image super-resolution reconstruction method based on a regression model, extracting gradient features from high-resolution and low-resolution face images in a training set respectively to obtain corresponding gradient feature images, and then partitioning the high-resolution and low-resolution images in the training set and the corresponding gradient feature images in an overlapping manner respectively; extracting gradient features from the low-resolution face images in the test set to obtain feature images, searching reconstructed image blocks in the feature images by using non-local similarity to obtain similar blocks, and expanding a training set of the reconstructed face image blocks by using the low-resolution face image blocks with the same positions as the similar blocks in the training set; during reconstruction, a constraint condition is constructed by using the distances between the feature image blocks corresponding to the low-resolution face image blocks in the test set and the feature image blocks corresponding to the interpolated and amplified face image blocks and the feature image blocks corresponding to the face image blocks in the training set, so that the reconstruction regression process is smoother; and carrying out different-scale blocking construction on the low-resolution face image and the amplified high-resolution face image in the test set to construct a pyramid model so as to realize face super-resolution reconstruction. The method of the invention overcomes the problems that the difference existing between high-resolution images in a training set is not considered to influence the quality of the reconstructed image when the face image is reconstructed in the prior art, and the defects that the degradation process of the face image cannot be truly reflected in the face image reconstruction process and the reconstructed face image still has a local ghost phenomenon.
The technical scheme adopted by the invention for solving the technical problem is as follows: the super-resolution reconstruction method of the pyramid face image based on the regression model comprises the following specific steps:
A. training the low-resolution face image set and the high-resolution face image set in the training set:
the first step is to expand a low-resolution face image set and a high-resolution face image set in a training set:
according to the symmetric characteristics of the face images, the low-resolution face image set and the high-resolution face image set in the training set are expanded in a left-right turning mode, the size of the images is unchanged, the number of the images is expanded by two times, and the expanded low-resolution face image sets are obtained respectively
Figure GDA0002893272690000031
And extended high resolution face image set
Figure GDA0002893272690000032
Wherein l represents a low resolution image, with a size of a pixels, h represents a high resolution image, with a size of (d a) pixels, d being a multiple, and M represents the number of images;
secondly, expanding the low-resolution face image set PlAnd a high resolution face image set PhRespectively extracting gradient features:
for the extended low-resolution face image set PlAnd a high resolution face image set PhRespectively extracting a first-order gradient and a second-order gradient from each human face image to form a gradient feature as a component, and obtaining a low-resolution human face image set PlMedium low resolution face gradient feature image set
Figure GDA0002893272690000033
And a high resolution face image set PhHigh resolution face gradient feature image set
Figure GDA0002893272690000034
Thirdly, expanding the high-resolution face image set PhAnd corresponding high-resolution face gradient characteristic image set G thereofhRespectively partitioning:
for the extended high-resolution face image set PhEach of the face images in (1)
Figure GDA0002893272690000035
And corresponding high-resolution human face gradient characteristic image
Figure GDA0002893272690000036
Respectively performing overlapped blocks, each block having a size of R1*R1Pixel, R1The numerical value of (A) is 8-12, and the overlapping mode is that K is respectively overlapped between the current block and the upper and lower adjacent image blocks1Column pixels, and left and right adjacent image blocksOverlap of K1Column pixels, and 0 ≤ K1≤R12, then for each high resolution face image, in order from top to bottom and from left to right
Figure GDA0002893272690000037
And its corresponding gradient feature image
Figure GDA0002893272690000038
The number of all the blocks is 1,2, and U, which is the total number of each image block, and the image blocks with the same number are called the image blocks at the same position, thereby completing the process of expanding the high-resolution face image set PhAnd corresponding high-resolution face gradient characteristic image set G thereofhRespectively partitioning;
fourthly, the extended low-resolution face image set P is processedlAnd corresponding low-resolution face gradient characteristic image set G thereoflRespectively partitioning:
and the high-resolution face image set PhThe block dividing mode is the same, and the extended low-resolution face image set P is subjected tolEach low resolution face image of
Figure GDA0002893272690000041
And corresponding low-resolution face gradient characteristic image
Figure GDA0002893272690000042
Respectively performing overlapped blocks with each block size of (R)1/d)*(R1D) pixel, R1The number of the image blocks is 8-12, and the overlapping mode is that K is overlapped between the current image block and the upper and lower adjacent image blocks1D lines of pixels, and the overlap K between the left and right adjacent image blocks1D columns of pixels, and then applying the sequence from top to bottom and from left to right to each low-resolution face image
Figure GDA0002893272690000043
And its corresponding gradient feature image
Figure GDA0002893272690000044
The number of all the blocks is 1,2, a, U, U is the total number of each image block, and the image blocks with the same number are called the image blocks at the same position, thereby completing the low-resolution face image set P after expansionlAnd corresponding low-resolution face gradient characteristic image set G thereoflRespectively partitioning;
at this point, finishing the A. training set low-resolution face image set PlAnd a high resolution face image set PhThe training process of (2);
B. and (3) testing the reconstruction process of the low-resolution face image in the set:
fifthly, amplifying the low-resolution face images in the test set to obtain an amplified high-resolution face image:
inputting the low-resolution face image to be tested into a computer to obtain a low-resolution face image I in a test settlAmplifying a certain low-resolution face image in the test set by adopting a bicubic interpolation mode to obtain an amplified image serving as an amplified high-resolution face image I in the test setthTo make the amplified high-resolution face image I in the test setthAnd high-resolution face image in training set
Figure GDA0002893272690000045
The sizes are equal;
sixthly, carrying out low-resolution face image I in the test settlAnd enlarged high resolution face image IthRespectively extracting gradient features:
respectively extracting the low-resolution face image I in the test set obtained in the fifth steptlAnd enlarged high resolution face image IthThe first-order gradient and the second-order gradient are used as components to form respective gradient features, and low-resolution face gradient feature images g corresponding to the gradient features are obtainedtlAnd high-resolution human face gradient feature image gth
Seventhly, amplifying the high-resolution face image I in the test setthAnd its corresponding high scoreResolution human face gradient characteristic image gthPartitioning:
amplifying the high-resolution face image I in the test set obtained in the fifth stepthAnd the corresponding high-resolution face gradient characteristic image g in the sixth stepthRespectively performing overlapped partitioning, each block having a size of R1*R1Pixel, R1The numerical value of (A) is 8-12, so that the block size is the same as that of the high-resolution face image in the training set, and the overlapping mode is that K is overlapped between the current image block and the upper and lower adjacent image blocks1Line pixels, overlap K with left and right adjacent image blocks1The method comprises the following steps of (1) column pixels, numbering all blocks of each face image respectively in an order from top to bottom and from left to right, wherein the numbering is 1, 2.
Eighthly, testing the low-resolution face image I in the settlAnd corresponding low-resolution face gradient characteristic image g thereoftlPartitioning:
for the low resolution face image I in the test set obtained in the fifth steptlAnd the corresponding low-resolution face gradient characteristic image g in the sixth steptlRespectively performing overlapped blocks with each block size of (R)1/d)*(R1/d),R1The numerical value of (A) is 8-12, so that the block size is the same as the block size of the low-resolution face image in the training set, and the overlapping mode is that K is overlapped between the current image block and the upper and lower adjacent image blocks1D lines of pixels, and the overlap K between the left and right adjacent image blocks1The method comprises the following steps of (1)/d columns of pixels, numbering all blocks of each face image respectively in an order from top to bottom and from left to right, wherein the numbering is 1, 2.
Ninth, using the low resolution face image I in the test settlCorresponding low-resolution face gradient characteristic image gtlNumbering similar blocks:
according to the following stepsSequentially comparing the low-resolution face images I in the test set obtained in the eighth step from left to righttlThe image block of (1) is reconstructed, for example, the jth image block is reconstructed, and a low-resolution face image I in a test set is utilizedtlCorresponding low-resolution face gradient characteristic image gtlNon-local similarity of (1), low resolution face image I in test settlFinding out the similar block of the jth image block, and setting the low-resolution face image I in the test settlCorresponding low-resolution face gradient characteristic image gtlThe jth human face gradient characteristic image block is gtl,jFor the low-resolution face gradient feature image gtlScanning all the face image blocks in the image block list from top to bottom and from left to right, wherein the scanned image blocks are not repeated with the jth image block, calculating Euclidean distances between the scanned face gradient characteristic image blocks and the jth face gradient characteristic image block, then sequencing the distances of all the low-resolution face gradient characteristic image blocks according to the sequence of the distances from small to large, and taking the first n blocks with the smallest distance as the jth low-resolution face gradient characteristic image block gtl,jThe number set of the similar image blocks of the low-resolution human face gradient feature image is set as [ v ]1,v2,...,vn]The set of the low-resolution face gradient characteristic image blocks corresponding to the number set is
Figure GDA0002893272690000051
Thereby completing the utilization of the low-resolution face image I in the test settlCorresponding low-resolution face gradient characteristic image gtlThe process of numbering the similar blocks;
step ten, solving the extended low-resolution face gradient characteristic image set G in the training set by using the position number of the similar blocklA set of image blocks of all images at the same number:
the low-resolution face gradient characteristic image set G after the expansion of the training set in the second step is carried outl1,2, M face images
Figure GDA0002893272690000052
The number set of the human face feature image block with the middle number of j and the similar low-resolution human face gradient feature image block in the ninth step is [ v1,v2,...,vn]The same image blocks in the set
Figure GDA0002893272690000053
Then the extended low-resolution face gradient characteristic image set G in the training setlThe serial number set [ v ] of the image block with the serial number j in all the images and the similar low-resolution human face gradient characteristic image block1,v2,...,vn]Set of image blocks of
Figure GDA0002893272690000054
Comprises the following steps:
Figure GDA0002893272690000055
to facilitate writing, will
Figure GDA0002893272690000056
Is recorded as:
Figure GDA0002893272690000057
wherein M (1+ n) represents M face images, and each face image has 1+ n image blocks;
the eleventh step, the position number of the similar block is used for solving the high-resolution face gradient characteristic image set G after being expanded in the training sethA set of image blocks of all images at the same number:
the high-resolution face gradient characteristic image set G after the expansion of the training set in the second step is carried outh1,2, M images
Figure GDA0002893272690000058
Middle number j and the ninth stepThe number set of similar low-resolution human face gradient characteristic image blocks in (1) is [ v ]1,v2,...,vn]The image block composition set
Figure GDA0002893272690000061
Then the high-resolution face gradient characteristic image set G after expansion in the training sethWherein all images are numbered j and [ v ]1,v2,...,vn]Set of image blocks of
Figure GDA0002893272690000062
Comprises the following steps:
Figure GDA0002893272690000063
to facilitate writing, will
Figure GDA0002893272690000064
Is recorded as:
Figure GDA0002893272690000065
the twelfth step, the position number of the similar block is used to solve the extended low-resolution face image set PlThe image blocks of all the face images at the same number are combined into a set:
the extended low-resolution face image set P in the first stepl1,2, M face images
Figure GDA0002893272690000066
J and the number set of the similar low-resolution human face gradient characteristic image blocks in the ninth step is [ v1,v2,...,vn]The image block composition set
Figure GDA0002893272690000067
Then P islWherein all images are numbered j and [ v ]1,v2,...,vn]Group of picture blocksSet of (a) and (b)
Figure GDA0002893272690000068
Comprises the following steps:
Figure GDA0002893272690000069
to facilitate writing, will
Figure GDA00028932726900000610
Is recorded as:
Figure GDA00028932726900000611
step thirteen, the position number of the similar block is used for solving the extended high-resolution face image set PhThe image blocks of all the face images at the same number are combined into a set:
the extended high-resolution face image set P in the first steph1,2, M face images
Figure GDA00028932726900000612
J and the number set of the similar low-resolution human face gradient characteristic image blocks in the ninth step is [ v1,v2,...,vn]The image block composition set
Figure GDA00028932726900000613
Then P ishWherein all images are numbered j and [ v ]1,v2,...,vn]The image block composition set
Figure GDA00028932726900000614
Comprises the following steps:
Figure GDA00028932726900000615
to facilitate writing, will
Figure GDA00028932726900000616
Is recorded as:
Figure GDA00028932726900000617
fourthly, calculating a weight matrix corresponding to the jth human face image block:
the low resolution face image I in the eighth step test set is calculated by the following formula (9)tlThe jth human face image block g of the corresponding gradient characteristic imagetl,jObtained by the tenth step
Figure GDA00028932726900000618
Euclidean distance set of all human face image blocks
Figure GDA00028932726900000619
And then the following formula (10) is used for calculating the high-resolution face image I amplified in the seventh step test setthCorresponding high-resolution face gradient characteristic image gthJ-th block image block gth,jAs in the tenth step above
Figure GDA00028932726900000620
Set of Euclidean distances of all image blocks
Figure GDA00028932726900000621
Figure GDA0002893272690000071
Figure GDA0002893272690000072
After obtaining the above distance, the weight matrix W of the jth blockjThe following equation (11) is obtained:
Figure GDA0002893272690000073
wherein α is a smoothing factor;
and fifteenth, calculating a mapping matrix corresponding to the jth face image block:
recording the mapping process of the jth high-resolution face image block obtained from the jth low-resolution face image block in the training set as a simple mapping relation to obtain a formula:
Figure GDA0002893272690000074
wherein A isjAnd (3) a mapping matrix for the jth face image block is represented, T represents the transpose of the matrix, and the optimal mapping matrix is obtained by the following formula (13):
Figure GDA0002893272690000075
since the high-resolution face image blocks and the low-resolution face image blocks are not in a simple mapping relationship, performing smooth constraint on the formula (13) by using the distance matrix obtained in the fourteenth step to obtain the following smooth regression formula (14):
Figure GDA0002893272690000076
wherein
Figure GDA0002893272690000077
Where tr () is the trace of the matrix, adding a regularization term to make the mapping process smoother yields the following equation (15):
Figure GDA0002893272690000078
wherein
Figure GDA0002893272690000079
F represents Frobenius norm, and lambda is used for balancing reconstruction error and AjThe mapping matrix corresponding to the jth block image is obtained by simplification:
Figure GDA00028932726900000710
wherein E represents an identity matrix;
sixthly, reconstructing the low-resolution face image blocks in the test set to obtain high-resolution face image blocks:
by passing
Figure GDA00028932726900000711
Obtaining a low-resolution face image I in a test settlFace image block I intl,jHigh-frequency information of corresponding high-resolution face image block, and then interpolating the high-frequency information to Itl,jTo obtain a reconstructed face image block I'th,j
Seventeenth, combining all the reconstructed image blocks into a reconstructed high-resolution face image:
combining all the reconstructed face image blocks according to the serial numbers in the sequence from top to bottom and from left to right, averaging the overlapped parts in the combination process to obtain a reconstructed high-resolution face image I'th
Eighteenth, constructing a pyramid face super-resolution reconstruction model:
(18.1) to I 'obtained in the seventeenth step'thDimensionality reduction is carried out by using a nearest neighbor interpolation method to obtain a dimensionality-reduced low-resolution face image I'tlThe face image after dimension reduction is combined with the ItlAre the same in size;
(18.2) reconstructing all the low-resolution facial images in the training set by the steps from the first step to the seventeenth step, and reconstructing the ith low-resolution facial image in the training set
Figure GDA0002893272690000081
The process of reconstruction is as follows:
Figure GDA0002893272690000082
in the training set as low-resolution face images in the test set
Figure GDA0002893272690000083
And
Figure GDA0002893272690000084
as a training set, obtaining a high-resolution image by utilizing the reconstruction from the first step to the seventeenth step
Figure GDA0002893272690000085
Then using nearest neighbor interpolation method to pair
Figure GDA0002893272690000086
Reducing the vitamin content to obtain
Figure GDA0002893272690000087
(18.3) taking the block size of the high-resolution face image as R2*R2Pixel, R2Has a value of 6 to 10, and R2≠R1The number of pixels overlapped between the high resolution image blocks is K2The block size of the low resolution face image is (R)2/d)*(R2D) pixels, wherein d is a reduction multiple and has the same value as d in the first step, and the number of overlapped pixels among the low-resolution image blocks is K2L 'from (18.1)'tlAs a low-resolution face image in the test set, obtained (18.2)
Figure GDA0002893272690000088
And
Figure GDA0002893272690000089
as a training set, performing a face image super-resolution reconstruction process again to obtain a final reconstructed face image;
and finishing the reconstruction process of the low-resolution face image in the test set B, and finally finishing the super-resolution reconstruction of the pyramid face image based on the regression model.
In the above pyramid face image super-resolution reconstruction method based on the regression model, in the first step, the sizes of the low-resolution face image set and the high-resolution face image set in the extended training set are (d a) b pixels, d is a multiple, and the value of d is 2; the third step is to carry out the expansion of the high-resolution face image set PhAnd corresponding high-resolution face gradient characteristic image set G thereofhRespectively overlapping K between the image blocks adjacent to the left and the right in the block division1Column pixels, the K1The value of (A) is 4; the fourth step is that the extended low-resolution face image set P is processedlAnd corresponding low-resolution face gradient characteristic image set G thereoflEach block in the block is respectively divided into the size (R)1/d)*(R1A/d) pixel, the value of d being 2; overlap K with left and right adjacent image blocks1A/d column of pixels, the K1The value of (A) is 4; the seventh step is to test the amplified high-resolution face image I in the setthAnd corresponding high-resolution face gradient characteristic image g thereofthThe mode of overlapping in the blocks is that the current image block and the upper and lower adjacent image blocks are overlapped by K1Line pixels, overlap K with left and right adjacent image blocks1Column pixels, the K1The value of (A) is 4; the eighth step, for the low resolution face image I in the test settlAnd corresponding low-resolution face gradient characteristic image g thereoftlEach block in the block is made to have a size of (R)1/d)*(R1D), the value of d being 2; overlap K with left and right adjacent image blocks1A/d column of pixels, the K1The value of (A) is 4; and eighteenth step, the number of overlapped pixels between the high-resolution image blocks in (18.3) for constructing the pyramid face super-resolution reconstruction model is K2K is the same as2The value of (A) is 4; the block size of the low resolution face image is (R)2/d)*(R2D) pixel, d is the reduction multiple and is the same as the value of d in the first step, and the value of d is 2.
Known techniques used in the present invention include: gradient features, non-local similarity, and linear regression.
The invention has the beneficial effects that: compared with the prior art, the invention has the prominent substantive characteristics and remarkable progress as follows:
(1) the invention utilizes the characteristic that the image has non-local similarity, searches the low-resolution face image in the test set for the similar blocks of the reconstructed image block in the corresponding characteristic image to obtain the position set of all the similar blocks, uses the face image block of all the low-resolution images in the training set in the position set as the low-resolution training set corresponding to the low-resolution face image block in the test set instead of the method described in the above documents 1,2, 3 and 4, only uses the set formed by all the face image blocks at a certain position in the low-resolution face image in the training set, or compares the distances between all the face image blocks in the low-resolution training set and the low-resolution face image block in the test set, and uses the set formed by some face image blocks with the nearest distance as the low-resolution training set And (4) collecting.
(2) Compared with the method recorded in the prior art CN103824272B, the method directly combines the distance between the low-resolution face image blocks in the test set and the distance between the high-resolution face image blocks in the training set by using the distance between the feature image blocks corresponding to the low-resolution face image blocks in the test set and the feature image blocks corresponding to the low-resolution face image blocks in the training set after interpolation amplification of the low-resolution images in the test set, and constructs the constraint condition by using the sum of the distances between the feature image blocks corresponding to the high-resolution face image blocks in the training set, and only needs to search the similar blocks once in one feature image to obtain the positions of the similar blocks without sequencing the distances between the low-resolution face image blocks in the test set and sequencing the distances between the high-resolution face image blocks in the test set and the high-resolution face image blocks in the training set, when the distance is calculated, all the low-resolution face image blocks and all the high-resolution face image blocks in the training set do not need to be searched, accurate constraint conditions are guaranteed to be obtained, meanwhile, the method has higher searching efficiency, and has prominent substantive characteristics.
(3) The invention constructs the pyramid model of the face image reconstruction according to different block sizes, ensures that the reconstruction process of the face image covers a plurality of different scales, thereby effectively fusing the characteristics of the face images with different scales, ensuring that the image details are recovered more clearly, and the pyramid model overcomes the problem that the existing face super-resolution reconstruction method can not truly reflect the image degradation process, ensures that the reconstructed face image is closer to the real face image, and also overcomes the problems that the difference existing between high-resolution images during the face image reconstruction in the prior art can not influence the quality of the reconstructed image and the defect that the face image reconstruction process can not truly reflect the degradation process of the face image.
(4) According to the invention, the data set is expanded in a left-right overturning mode according to the left-right symmetrical characteristic of the face image, a training set with richer information is obtained, and the purpose of reconstructing an input block by having enough abundant similar image blocks under the condition of a small sample is ensured.
(5) According to the method, the characteristic that the images have non-local similarity is utilized, a set consisting of the input blocks and the image blocks at the positions with the same number as the similar blocks in the training set is constructed through the non-local similarity of the images, the block sets at the positions with the same number in the face image super-resolution reconstruction method based on the position blocks are enriched, and the face image reconstruction effect is guaranteed.
(6) According to the method, the weight matrix is constructed through the sum of the distance between the low-resolution image block in the test set and the low-resolution image block in the training set and the distance between the interpolation amplification image block in the test set and the high-resolution image block in the training set, the low-resolution image information and the high-resolution image information can be simultaneously utilized, the defect that the reconstructed image is inaccurate when the low-resolution image has large difference is avoided, the image reconstruction process is smoother through the constraint of the weight, and the image detail is recovered more accurately.
Drawings
The invention is further illustrated with reference to the following figures and examples.
FIG. 1 is a schematic block flow diagram of the method of the present invention.
Fig. 2 is a schematic diagram of a blocking process of a high-resolution face image in the method.
Fig. 3 is a schematic diagram of an interpolation process in the method of the present invention.
FIG. 4 is an example of samples in the FERET database and CAS-PEAL-R1 database, an example of samples in the FERET database for the first activity, and an example of samples in the CAS-PEAL-R1 database for the second activity.
Fig. 5 is a diagram showing the effect of reconstructing an image by applying Bicubic, ANR, a +, LINE, and SRLSP in the FERET database and six different methods of the present invention.
FIG. 6 is a diagram illustrating the effect of reconstructing an image by applying Bicubic, ANR, A +, LINE, SRLSP and six different methods according to the present invention in the CAS-PEAL-R1 database.
Detailed Description
The embodiment shown in fig. 1 shows that the process of the method of the present invention comprises:
Figure GDA0002893272690000101
a, training a low-resolution face image set and a high-resolution face image set in a training set: expanding the low resolution face image set and the high resolution face image set in the training set → expanding the low resolution face image set PlAnd a high resolution face image set PhRespectively extracting gradient features → pair of extended high-resolution face image set PhAnd corresponding high-resolution face gradient characteristic image set G thereofhPartitioning → respectively, to the extended low resolution face image set PlAnd corresponding low-resolution face gradient characteristic image set G thereoflPartitioning → separately
Figure GDA0002893272690000102
And B, the reconstruction process of the low-resolution face image in the test set comprises the following steps: amplifying the low-resolution face image in the test set to obtain an amplified high-resolution face image → amplifying the low-resolution face image I in the test settlAnd enlarged high resolution face image IthSeparate extraction of gradient features → amplification of high resolution face image I in test setthAnd corresponding high-resolution face gradient characteristic image g thereofthPartitioning → for low resolution face image I in test settlAnd corresponding low-resolution face gradient characteristic image g thereoftlPartitioning → Using the Low resolution face image I in the test settlCorresponding low-resolution face gradient characteristic image gtlNumbering similar blocks → numbering the positions of the similar blocks to obtain the extended low-resolution face gradient feature image set G in the training setlSet formed by image blocks of all images at the same number → the position number of the similar block is utilized to solve the high-resolution face gradient characteristic image set G expanded in the training sethSet of all images in the same number composed of image blocks → extended low resolution face image set P by position number of similar blockslSet formed by image blocks of all face images at the same number → position number of similar blocks is utilized to solve extended high-resolution face image set PhThe set of all the face images at the same number → the weight matrix corresponding to the jth face image block->Calculating a mapping matrix corresponding to the jth face image block → reconstructing the low-resolution face image blocks in the test set to obtain high-resolution face image blocks → combining all the reconstructed image blocks to obtain a reconstructed high-resolution face image → constructing a pyramid face super-resolution reconstruction model.
The example shown in FIG. 2 shows that in the figure, R1Is a block size, K1The method comprises the step of partitioning the high-resolution face image into blocks with the size of R for overlapping pixels1*R1And K is respectively overlapped between the current image block and the upper and lower adjacent image blocks1The row pixels are respectively overlapped with the left and right adjacent image blocks by K1And columns of pixels. The blocking process of the low-resolution face image is similar to the above.
The embodiment shown in fig. 3 indicates that, in the diagram, LR represents low resolution, HR represents high resolution, each black dot represents a low-resolution pixel point in a low-resolution face image block, and each white dot represents a high-resolution pixel point in reconstructed high-frequency information; the interpolation process in the method of the invention is as follows: inputting an LR image → an LR image block → adding HR information → outputting an HR image block, namely sequentially interpolating the obtained high-frequency information to a low-resolution face image block from top to bottom and from left to right to obtain a reconstructed high-resolution face image block.
The embodiment shown in FIG. 4 shows sample examples in the FERET database and CAS-PEAL-R1 database, sample examples in the FERET database for the first activity, and sample examples in the CAS-PEAL-R1 database for the second activity. The FERET database comprises 200 persons, in the embodiment, 80 men each have one front face image and 70 women each have one front face image to form a training set, and 28 men each have one front face image and 22 women each have one front face image to be used for testing are selected from the rest persons; the CAS-PEAL-R1 database contains 1040 individuals, and in the embodiment, a training set consisting of 103 positive face images of each male and 97 positive face images of each female is randomly selected from the CAS-PEAL-R1 database, and 57 positive face images of each male and 43 positive face images of each female are randomly selected for testing.
The embodiment shown in fig. 5 shows that an effect diagram of reconstructing an image by applying Bicubic, ANR, a +, LINE, SRLSP, and six different methods of the present invention to a FERET database, where each LINE represents the same face image in the FERET database, and each LINE is sequentially 5 selected face images from top to bottom. For each LINE, representing a high-resolution face image and an original high-resolution face image reconstructed by using Bicubic, ANR, A +, LINE and SRLSP and the method from left to right in sequence, wherein the Bicubic method is used as a basic comparison, so that the face image obtained by the Bicubic method is most blurred, the details of the ANR and A + methods at the positions of eyes and mouths are recovered more blurrily, and although the details of the LINE and SRLSP methods are recovered better, the image local ghost phenomenon is more serious. The method of the invention overcomes the ghost phenomenon of the image while ensuring the detail recovery, and obtains the optimal reconstructed face image.
The embodiment shown in fig. 6 shows an effect diagram of reconstructing an image by applying Bicubic, ANR, a +, LINE, SRLSP and six different methods of the present invention to a CAS-PEAL-R1 database, where each row represents the same face image in a CAS-PEAL-R1 database, and each row is sequentially 5 selected face images from top to bottom. For each LINE, representing a high-resolution face image and an original high-resolution face image reconstructed by using Bicubic, ANR, A +, LINE and SRLSP and the method from left to right in sequence, wherein the Bicubic method is used as a basic comparison, so that the face image obtained by the Bicubic method is most blurred, the details of the ANR and A + methods at the positions of eyes and mouths are recovered to be more blurred, and although the details of the LINE and SRLSP methods are recovered to be better, a local ghost phenomenon occurs, and an edge sawtooth phenomenon also occurs in a partial image. The method not only can recover the details of the image most clearly, but also overcomes the local ghost phenomenon and the edge sawtooth phenomenon existing in other methods, and obtains the optimal reconstructed face image.
Example 1
The super-resolution reconstruction method for the pyramid face image based on the regression model comprises the following specific steps:
A. training the low-resolution face image set and the high-resolution face image set in the training set:
the first step is to expand a low-resolution face image set and a high-resolution face image set in a training set:
according to the symmetric characteristics of the face images, the low-resolution face image set and the high-resolution face image set in the training set are expanded in a left-right turning mode, the size of the images is unchanged, the number of the images is expanded by two times, and the expanded low-resolution face image sets are obtained respectively
Figure GDA0002893272690000121
And extended high resolution face image set
Figure GDA0002893272690000122
Wherein l represents a low resolution image, having a size of a pixels, h represents a high resolution image, having a size of (d a) pixels, d being a multiple, d having a value of 2, and M represents the number of images;
secondly, expanding the low-resolution face image set PlAnd a high resolution face image set PhRespectively extracting gradient features:
for the extended low-resolution face image set PlAnd a high resolution face image set PhRespectively extracting a first-order gradient and a second-order gradient from each human face image to form a gradient feature as a component, and obtaining a low-resolution human face image set PlMedium low resolution face gradient feature image set
Figure GDA0002893272690000123
And a high resolution face image set PhHigh resolution face gradient feature image set
Figure GDA0002893272690000124
Thirdly, expanding the high-resolution face image set PhAnd corresponding high-resolution face gradient characteristic image set G thereofhRespectively partitioning:
for the extended high-resolution face image set PhEach of the face images in (1)
Figure GDA0002893272690000125
And corresponding high-resolution human face gradient characteristic image
Figure GDA0002893272690000126
Respectively performing overlapped blocks, each block having a size of R1*R1Pixel, R1The value of (2) is 8, and the overlapping mode is that K is respectively overlapped between the current block and the upper and lower adjacent image blocks1Line pixels, overlap K with left and right adjacent image blocks1Column pixels, K1Is 4, then taken from top to bottom and from leftSequence to right for each high resolution face image
Figure GDA0002893272690000127
And its corresponding gradient feature image
Figure GDA0002893272690000128
The number of all the blocks is 1,2, and U, which is the total number of each image block, and the image blocks with the same number are called the image blocks at the same position, thereby completing the process of expanding the high-resolution face image set PhAnd corresponding high-resolution face gradient characteristic image set G thereofhRespectively partitioning;
fourthly, the extended low-resolution face image set P is processedlAnd corresponding low-resolution face gradient characteristic image set G thereoflRespectively partitioning:
and the high-resolution face image set PhThe block dividing mode is the same, and the extended low-resolution face image set P is subjected tolEach low resolution face image of
Figure GDA0002893272690000129
And corresponding low-resolution face gradient characteristic image
Figure GDA00028932726900001210
Respectively performing overlapped blocks with each block size of (R)1/d)*(R1D) pixel, R1The numerical value of (d) is 8, the numerical value of d is 2, and the overlapping mode is that the current image block and the upper and lower adjacent image blocks are overlapped by K1D lines of pixels, and the overlap K between the left and right adjacent image blocks1Column/d pixels, K1Is 4, and then each low resolution face image is processed in a top-to-bottom and left-to-right order
Figure GDA00028932726900001211
And its corresponding gradient feature image
Figure GDA00028932726900001212
The number of all the blocks is 1,2, a, U, U is the total number of each image block, and the image blocks with the same number are called the image blocks at the same position, thereby completing the low-resolution face image set P after expansionlAnd corresponding low-resolution face gradient characteristic image set G thereoflRespectively partitioning;
at this point, finishing the A. training set low-resolution face image set PlAnd a high resolution face image set PhThe training process of (2);
B. and (3) testing the reconstruction process of the low-resolution face image in the set:
fifthly, amplifying the low-resolution face images in the test set to obtain an amplified high-resolution face image:
inputting the low-resolution face image to be tested into a computer to obtain a low-resolution face image I in a test settlAmplifying a certain low-resolution face image in the test set by adopting a bicubic interpolation mode to obtain an amplified image serving as an amplified high-resolution face image I in the test setthTo make the amplified high-resolution face image I in the test setthAnd high-resolution face image in training set
Figure GDA0002893272690000131
The sizes are equal;
sixthly, carrying out low-resolution face image I in the test settlAnd enlarged high resolution face image IthRespectively extracting gradient features:
respectively extracting the low-resolution face image I in the test set obtained in the fifth steptlAnd enlarged high resolution face image IthThe first-order gradient and the second-order gradient are used as components to form respective gradient features, and low-resolution face gradient feature images g corresponding to the gradient features are obtainedtlAnd high-resolution human face gradient feature image gth
Seventhly, amplifying the high-resolution face image I in the test setthAnd corresponding high-resolution face gradient characteristic image g thereofthPartitioning:
amplifying the high-resolution face image I in the test set obtained in the fifth stepthAnd the corresponding high-resolution face gradient characteristic image g in the sixth stepthRespectively performing overlapped partitioning, each block having a size of R1*R1Pixel, R1The numerical value of (2) is 8, so that the block size is the same as that of the high-resolution face image in the training set, and the overlapping mode is that the current image block and the upper and lower adjacent image blocks are overlapped by K1Line pixels, overlap K with left and right adjacent image blocks1Column pixels, K1The number of the image blocks is 4, then all the blocks of each human face image are numbered in sequence from top to bottom and from left to right, the number is 1,2, the.
Eighthly, testing the low-resolution face image I in the settlAnd corresponding low-resolution face gradient characteristic image g thereoftlPartitioning:
for the low resolution face image I in the test set obtained in the fifth steptlAnd the corresponding low-resolution face gradient characteristic image g in the sixth steptlRespectively performing overlapped blocks with each block size of (R)1/d)*(R1/d),R1The numerical value of (d) is 8, the numerical value of d is 2, the block size is the same as the block size of the low-resolution face image in the training set, and the overlapping mode is that K is overlapped between the current image block and the upper and lower adjacent image blocks1D lines of pixels, and the overlap K between the left and right adjacent image blocks1Column/d pixels, K1The number of the image blocks is 4, then all the blocks of each human face image are numbered in sequence from top to bottom and from left to right, the number is 1,2, the.
Ninth, using the low resolution face image I in the test settlCorresponding low-resolution face gradient characteristic image gtlNumbering similar blocks:
according to the following stepsSequentially comparing the low-resolution face images I in the test set obtained in the eighth step from left to righttlThe image block of (1) is reconstructed, for example, the jth image block is reconstructed, and a low-resolution face image I in a test set is utilizedtlCorresponding low-resolution face gradient characteristic image gtlNon-local similarity of (1), low resolution face image I in test settlFinding out the similar block of the jth image block, and setting the low-resolution face image I in the test settlCorresponding low-resolution face gradient characteristic image gtlThe jth human face gradient characteristic image block is gtl,jFor the low-resolution face gradient feature image gtlScanning all the face image blocks in the image block list from top to bottom and from left to right, wherein the scanned image blocks are not repeated with the jth image block, calculating Euclidean distances between the scanned face gradient characteristic image blocks and the jth face gradient characteristic image block, then sequencing the distances of all the low-resolution face gradient characteristic image blocks according to the sequence of the distances from small to large, and taking the first n blocks with the smallest distance as the jth low-resolution face gradient characteristic image block gtl,jThe number set of the similar image blocks of the low-resolution human face gradient feature image is set as [ v ]1,v2,...,vn]The set of the low-resolution face gradient characteristic image blocks corresponding to the number set is
Figure GDA0002893272690000141
Thereby completing the utilization of the low-resolution face image I in the test settlCorresponding low-resolution face gradient characteristic image gtlThe process of numbering the similar blocks;
step ten, solving the extended low-resolution face gradient characteristic image set G in the training set by using the position number of the similar blocklA set of image blocks of all images at the same number:
the low-resolution face gradient characteristic image set G after the expansion of the training set in the second step is carried outl1,2, M face images
Figure GDA0002893272690000142
The number set of the human face feature image block with the middle number of j and the similar low-resolution human face gradient feature image block in the ninth step is [ v1,v2,...,vn]The same image blocks in the set
Figure GDA0002893272690000143
Then the extended low-resolution face gradient characteristic image set G in the training setlThe serial number set [ v ] of the image block with the serial number j in all the images and the similar low-resolution human face gradient characteristic image block1,v2,...,vn]Set of image blocks of
Figure GDA0002893272690000144
Comprises the following steps:
Figure GDA0002893272690000145
to facilitate writing, will
Figure GDA0002893272690000146
Is recorded as:
Figure GDA0002893272690000147
wherein M (1+ n) represents M face images, and each face image has 1+ n image blocks;
the eleventh step, the position number of the similar block is used for solving the high-resolution face gradient characteristic image set G after being expanded in the training sethA set of image blocks of all images at the same number:
the high-resolution face gradient characteristic image set G after the expansion of the training set in the second step is carried outh1,2, M images
Figure GDA0002893272690000148
Middle number j and the ninth stepThe number set of similar low-resolution human face gradient characteristic image blocks in (1) is [ v ]1,v2,...,vn]The image block composition set
Figure GDA0002893272690000149
Then the high-resolution face gradient characteristic image set G after expansion in the training sethWherein all images are numbered j and [ v ]1,v2,...,vn]Set of image blocks of
Figure GDA00028932726900001410
Comprises the following steps:
Figure GDA00028932726900001411
to facilitate writing, will
Figure GDA0002893272690000151
Is recorded as:
Figure GDA0002893272690000152
the twelfth step, the position number of the similar block is used to solve the extended low-resolution face image set PlThe image blocks of all the face images at the same number are combined into a set:
the extended low-resolution face image set P in the first stepl1,2, M face images
Figure GDA0002893272690000153
J and the number set of the similar low-resolution human face gradient characteristic image blocks in the ninth step is [ v1,v2,...,vn]The image block composition set
Figure GDA0002893272690000154
Then P islWherein all images are numbered j and [ v ]1,v2,...,vn]Group of picture blocksSet of (a) and (b)
Figure GDA0002893272690000155
Comprises the following steps:
Figure GDA0002893272690000156
to facilitate writing, will
Figure GDA0002893272690000157
Is recorded as:
Figure GDA0002893272690000158
step thirteen, the position number of the similar block is used for solving the extended high-resolution face image set PhThe image blocks of all the face images at the same number are combined into a set:
the extended high-resolution face image set P in the first steph1,2, M face images
Figure GDA0002893272690000159
J and the number set of the similar low-resolution human face gradient characteristic image blocks in the ninth step is [ v1,v2,...,vn]The image block composition set
Figure GDA00028932726900001510
Then P ishWherein all images are numbered j and [ v ]1,v2,...,vn]The image block composition set
Figure GDA00028932726900001511
Comprises the following steps:
Figure GDA00028932726900001512
to facilitate writing, will
Figure GDA00028932726900001513
Is recorded as:
Figure GDA00028932726900001514
fourthly, calculating a weight matrix corresponding to the jth human face image block:
the low resolution face image I in the eighth step test set is calculated by the following formula (9)tlThe jth human face image block g of the corresponding gradient characteristic imagetl,jObtained by the tenth step
Figure GDA00028932726900001515
Euclidean distance between all face image blocks
Figure GDA00028932726900001516
And then the following formula (10) is used for calculating the high-resolution face image I amplified in the seventh step test setthCorresponding high-resolution face gradient characteristic image gthJ-th block image block gth,jAs in the tenth step above
Figure GDA00028932726900001517
Set of Euclidean distances of all image blocks
Figure GDA00028932726900001518
Figure GDA00028932726900001519
Figure GDA00028932726900001520
After obtaining the above distance, the weight matrix W of the jth blockjThe following equation (11) is obtained:
Figure GDA0002893272690000161
wherein α is a smoothing factor;
and fifteenth, calculating a mapping matrix corresponding to the jth face image block:
recording the mapping process of the jth high-resolution face image block obtained from the jth low-resolution face image block in the training set as a simple mapping relation to obtain a formula:
Figure GDA0002893272690000162
wherein A isjAnd (3) a mapping matrix for the jth face image block is represented, T represents the transpose of the matrix, and the optimal mapping matrix is obtained by the following formula (13):
Figure GDA0002893272690000163
since the high-resolution face image blocks and the low-resolution face image blocks are not in a simple mapping relationship, performing smooth constraint on the formula (13) by using the distance matrix obtained in the fourteenth step to obtain the following smooth regression formula (14):
Figure GDA0002893272690000164
wherein
Figure GDA0002893272690000165
Where tr () is the trace of the matrix, adding a regularization term to make the mapping process smoother yields the following equation (15):
Figure GDA0002893272690000166
wherein
Figure GDA0002893272690000167
F represents Frobenius norm, and lambda is used for balancing reconstruction error and AjThe mapping matrix corresponding to the jth block image is obtained by simplification:
Figure GDA0002893272690000168
wherein E represents an identity matrix;
sixthly, reconstructing the low-resolution face image blocks in the test set to obtain high-resolution face image blocks:
by passing
Figure GDA0002893272690000169
Obtaining a low-resolution face image I in a test settlFace image block I intl,jHigh-frequency information of corresponding high-resolution face image block, and then interpolating the high-frequency information to Itl,jTo obtain a reconstructed face image block I'th,j
Seventeenth, combining all the reconstructed image blocks into a reconstructed high-resolution face image:
combining all the reconstructed face image blocks according to the serial numbers in the sequence from top to bottom and from left to right, averaging the overlapped parts in the combination process to obtain a reconstructed high-resolution face image I'th
Eighteenth, constructing a pyramid face super-resolution reconstruction model:
(18.1) to I 'obtained in the seventeenth step'thDimensionality reduction is carried out by using a nearest neighbor interpolation method to obtain a dimensionality-reduced low-resolution face image I'tlThe face image after dimension reduction is combined with the ItlAre the same in size;
(18.2) reconstructing all the low-resolution facial images in the training set by the steps from the first step to the seventeenth step, and reconstructing the ith low-resolution facial image in the training set
Figure GDA0002893272690000171
The process of reconstruction is as follows:
Figure GDA0002893272690000172
in the training set as low-resolution face images in the test set
Figure GDA0002893272690000173
And
Figure GDA0002893272690000174
as a training set, using the high-resolution images reconstructed from the first step to the seventeenth step
Figure GDA0002893272690000175
Then using nearest neighbor interpolation method to pair
Figure GDA0002893272690000176
Reducing the vitamin content to obtain
Figure GDA0002893272690000177
(18.3) taking the block size of the high-resolution face image as R2*R2Pixel, R2Is 6, and the number of pixels overlapped between the high resolution image blocks is K2,K2Has a value of 4, and the block size of the low-resolution face image is (R)2/d)*(R2D) pixels, wherein d is a reduction multiple and has the same value as d in the first step and 2, and the number of overlapped pixels among the low-resolution image blocks is K2L 'from (18.1)'tlAs a low-resolution face image in the test set, obtained (18.2)
Figure GDA0002893272690000178
And
Figure GDA0002893272690000179
as a training set, performing a face image super-resolution reconstruction process again to obtain a final reconstructed face image;
and finishing the reconstruction process of the low-resolution face image in the test set B, and finally finishing the super-resolution reconstruction of the pyramid face image based on the regression model.
Example 2
Except for R in the third step1Is 10, R in the fourth step1Is 10, R in the seventh step1Is 10, R in the eighth step1Is 10, R in the eighteenth step (18.3)2Except that the numerical value of (2) is 8, the same as in example 1 was conducted.
Example 3
Except for R in the third step1Is 12, R in the fourth step1Is 12, R in the seventh step1Is 12, R in the eighth step1Is 12, R in the eighteenth step (18.3)2The same as example 1 except that the numerical value of (1) was 10.
The known techniques used in the above embodiments are: gradient features, non-local similarity, and linear regression.

Claims (2)

1. The super-resolution reconstruction method of the pyramid face image based on the regression model is characterized by comprising the following specific steps of:
A. training the low-resolution face image set and the high-resolution face image set in the training set:
the first step is to expand a low-resolution face image set and a high-resolution face image set in a training set:
according to the symmetric characteristics of the face images, the low-resolution face image set and the high-resolution face image set in the training set are expanded in a left-right turning mode, the size of the images is unchanged, the number of the images is expanded by two times, and the expanded low-resolution face image sets are obtained respectively
Figure FDA0002893272680000011
And extended high resolution face image set
Figure FDA0002893272680000012
Where l represents the low resolution image with a pixels size a x b, h represents the high resolution image with a pixels size (d x a) x (d x b) imagePrime, d is a multiple, M represents the number of images;
secondly, expanding the low-resolution face image set PlAnd a high resolution face image set PhRespectively extracting gradient features:
for the extended low-resolution face image set PlAnd a high resolution face image set PhRespectively extracting a first-order gradient and a second-order gradient from each human face image to form a gradient feature as a component, and obtaining a low-resolution human face image set PlMedium low resolution face gradient feature image set
Figure FDA0002893272680000013
And a high resolution face image set PhHigh resolution face gradient feature image set
Figure FDA0002893272680000014
Thirdly, expanding the high-resolution face image set PhAnd corresponding high-resolution face gradient characteristic image set G thereofhRespectively partitioning:
for the extended high-resolution face image set PhEach of the face images in (1)
Figure FDA0002893272680000015
And corresponding high-resolution human face gradient characteristic image
Figure FDA0002893272680000016
Respectively performing overlapped blocks, each block having a size of R1*R1Pixel, R1The numerical value of (A) is 8-12, and the overlapping mode is that K is respectively overlapped between the current block and the upper and lower adjacent image blocks1Line pixels, overlap K with left and right adjacent image blocks1Column pixels, and 0 ≤ K1≤R12, then for each high resolution face image, in order from top to bottom and from left to right
Figure FDA0002893272680000017
And its corresponding gradient feature image
Figure FDA0002893272680000018
The number of all the blocks is 1,2, and U, which is the total number of each image block, and the image blocks with the same number are called the image blocks at the same position, thereby completing the process of expanding the high-resolution face image set PhAnd corresponding high-resolution face gradient characteristic image set G thereofhRespectively partitioning;
fourthly, the extended low-resolution face image set P is processedlAnd corresponding low-resolution face gradient characteristic image set G thereoflRespectively partitioning:
and the high-resolution face image set PhThe block dividing mode is the same, and the extended low-resolution face image set P is subjected tolEach low resolution face image of
Figure FDA0002893272680000019
And corresponding low-resolution face gradient characteristic image
Figure FDA00028932726800000110
Respectively performing overlapped blocks with each block size of (R)1/d)*(R1D) pixel, R1The number of the image blocks is 8-12, and the overlapping mode is that K is overlapped between the current image block and the upper and lower adjacent image blocks1D lines of pixels, and the overlap K between the left and right adjacent image blocks1D columns of pixels, and then applying the sequence from top to bottom and from left to right to each low-resolution face image
Figure FDA00028932726800000111
And its corresponding gradient feature image
Figure FDA00028932726800000112
All the blocks are numbered respectively, the number is 1,2, and U is the total number of the blocks of each image, and the numbers are the sameThe image blocks are called as image blocks at the same position, thereby completing the low-resolution face image set P after expansionlAnd corresponding low-resolution face gradient characteristic image set G thereoflRespectively partitioning;
at this point, finishing the A. training set low-resolution face image set PlAnd a high resolution face image set PhThe training process of (2);
B. and (3) testing the reconstruction process of the low-resolution face image in the set:
fifthly, amplifying the low-resolution face images in the test set to obtain an amplified high-resolution face image:
inputting the low-resolution face image to be tested into a computer to obtain a low-resolution face image I in a test settlAmplifying a certain low-resolution face image in the test set by adopting a bicubic interpolation mode to obtain an amplified image serving as an amplified high-resolution face image I in the test setthTo make the amplified high-resolution face image I in the test setthAnd high-resolution face image in training set
Figure FDA0002893272680000021
The sizes are equal;
sixthly, carrying out low-resolution face image I in the test settlAnd enlarged high resolution face image IthRespectively extracting gradient features:
respectively extracting the low-resolution face image I in the test set obtained in the fifth steptlAnd enlarged high resolution face image IthThe first-order gradient and the second-order gradient are used as components to form respective gradient features, and low-resolution face gradient feature images g corresponding to the gradient features are obtainedtlAnd high-resolution human face gradient feature image gth
Seventhly, amplifying the high-resolution face image I in the test setthAnd corresponding high-resolution face gradient characteristic image g thereofthPartitioning:
amplifying the high-resolution face image in the test set obtained in the fifth stepIthAnd the corresponding high-resolution face gradient characteristic image g in the sixth stepthRespectively performing overlapped partitioning, each block having a size of R1*R1Pixel, R1The numerical value of (A) is 8-12, so that the block size is the same as that of the high-resolution face image in the training set, and the overlapping mode is that K is overlapped between the current image block and the upper and lower adjacent image blocks1Line pixels, overlap K with left and right adjacent image blocks1The method comprises the following steps of (1) column pixels, numbering all blocks of each face image respectively in an order from top to bottom and from left to right, wherein the numbering is 1, 2.
Eighthly, testing the low-resolution face image I in the settlAnd corresponding low-resolution face gradient characteristic image g thereoftlPartitioning:
for the low resolution face image I in the test set obtained in the fifth steptlAnd the corresponding low-resolution face gradient characteristic image g in the sixth steptlRespectively performing overlapped blocks with each block size of (R)1/d)*(R1/d),R1The numerical value of (A) is 8-12, so that the block size is the same as the block size of the low-resolution face image in the training set, and the overlapping mode is that K is overlapped between the current image block and the upper and lower adjacent image blocks1D lines of pixels, and the overlap K between the left and right adjacent image blocks1The method comprises the following steps of (1)/d columns of pixels, numbering all blocks of each face image respectively in an order from top to bottom and from left to right, wherein the numbering is 1, 2.
Ninth, using the low resolution face image I in the test settlCorresponding low-resolution face gradient characteristic image gtlNumbering similar blocks:
sequentially from top to bottom and from left to right for the low-resolution face images I in the test set obtained in the eighth steptlThe image block of (1) is reconstructed, for example, the jth image block is reconstructed, and the method is beneficial toWith low-resolution face image I in test settlCorresponding low-resolution face gradient characteristic image gtlNon-local similarity of (1), low resolution face image I in test settlFinding out the similar block of the jth image block, and setting the low-resolution face image I in the test settlCorresponding low-resolution face gradient characteristic image gtlThe jth human face gradient characteristic image block is gtl,jFor the low-resolution face gradient feature image gtlScanning all the face image blocks in the image block list from top to bottom and from left to right, wherein the scanned image blocks are not repeated with the jth image block, calculating Euclidean distances between the scanned face gradient characteristic image blocks and the jth face gradient characteristic image block, then sequencing the distances of all the low-resolution face gradient characteristic image blocks according to the sequence of the distances from small to large, and taking the first n blocks with the smallest distance as the jth low-resolution face gradient characteristic image block gtl,jThe number set of the similar image blocks of the low-resolution human face gradient feature image is set as [ v ]1,v2,...,vn]The set of the low-resolution face gradient characteristic image blocks corresponding to the number set is
Figure FDA0002893272680000031
Thereby completing the utilization of the low-resolution face image I in the test settlCorresponding low-resolution face gradient characteristic image gtlThe process of numbering the similar blocks;
step ten, solving the extended low-resolution face gradient characteristic image set G in the training set by using the position number of the similar blocklA set of image blocks of all images at the same number:
the low-resolution face gradient characteristic image set G after the expansion of the training set in the second step is carried outl1,2, M face images
Figure FDA0002893272680000032
The human face feature image block with the middle number j and the similar low-resolution human face gradient feature in the ninth stepThe number set of image blocks is [ v ]1,v2,...,vn]The same image blocks in the set
Figure FDA0002893272680000033
Then the extended low-resolution face gradient characteristic image set G in the training setlThe serial number set [ v ] of the image block with the serial number j in all the images and the similar low-resolution human face gradient characteristic image block1,v2,...,vn]Set of image blocks of
Figure FDA0002893272680000034
Comprises the following steps:
Figure FDA0002893272680000035
to facilitate writing, will
Figure FDA0002893272680000036
Is recorded as:
Figure FDA0002893272680000037
wherein M (1+ n) represents M face images, and each face image has 1+ n image blocks;
the eleventh step, the position number of the similar block is used for solving the high-resolution face gradient characteristic image set G after being expanded in the training sethA set of image blocks of all images at the same number:
the high-resolution face gradient characteristic image set G after the expansion of the training set in the second step is carried outh1,2, M images
Figure FDA0002893272680000038
J and the number set of the similar low-resolution human face gradient characteristic image blocks in the ninth step is [ v1,v2,...,vn]The image block composition set
Figure FDA0002893272680000039
Then the high-resolution face gradient characteristic image set G after expansion in the training sethWherein all images are numbered j and [ v ]1,v2,...,vn]Set of image blocks of
Figure FDA00028932726800000310
Comprises the following steps:
Figure FDA00028932726800000311
to facilitate writing, will
Figure FDA00028932726800000312
Is recorded as:
Figure FDA0002893272680000041
the twelfth step, the position number of the similar block is used to solve the extended low-resolution face image set PlThe image blocks of all the face images at the same number are combined into a set:
the extended low-resolution face image set P in the first stepl1,2, M face images
Figure FDA0002893272680000042
J and the number set of the similar low-resolution human face gradient characteristic image blocks in the ninth step is [ v1,v2,...,vn]The image block composition set
Figure FDA0002893272680000043
Then P islWherein all images are numbered j and [ v ]1,v2,...,vn]Image block ofSet of compositions
Figure FDA0002893272680000044
Comprises the following steps:
Figure FDA0002893272680000045
to facilitate writing, will
Figure FDA0002893272680000046
Is recorded as:
Figure FDA0002893272680000047
step thirteen, the position number of the similar block is used for solving the extended high-resolution face image set PhThe image blocks of all the face images at the same number are combined into a set:
the extended high-resolution face image set P in the first steph1,2, M face images
Figure FDA0002893272680000048
J and the number set of the similar low-resolution human face gradient characteristic image blocks in the ninth step is [ v1,v2,...,vn]The image block composition set
Figure FDA0002893272680000049
Then P ishWherein all images are numbered j and [ v ]1,v2,...,vn]The image block composition set
Figure FDA00028932726800000410
Comprises the following steps:
Figure FDA00028932726800000411
to facilitate writing, will
Figure FDA00028932726800000412
Is recorded as:
Figure FDA00028932726800000413
fourthly, calculating a weight matrix corresponding to the jth human face image block:
the low resolution face image I in the eighth step test set is calculated by the following formula (9)tlThe jth human face image block g of the corresponding gradient characteristic imagetl,jObtained by the tenth step
Figure FDA00028932726800000414
Euclidean distance set of all human face image blocks
Figure FDA00028932726800000415
And then the following formula (10) is used for calculating the high-resolution face image I amplified in the seventh step test setthCorresponding high-resolution face gradient characteristic image gthJ-th block image block gth,jAs in the tenth step above
Figure FDA00028932726800000416
Set of Euclidean distances of all image blocks
Figure FDA00028932726800000417
Figure FDA00028932726800000418
Figure FDA00028932726800000419
After obtaining the above distance, the weight matrix W of the jth blockjThe following equation (11) is obtained:
Figure FDA0002893272680000051
wherein α is a smoothing factor;
and fifteenth, calculating a mapping matrix corresponding to the jth face image block:
recording the mapping process of the jth high-resolution face image block obtained from the jth low-resolution face image block in the training set as a simple mapping relation to obtain a formula:
Figure FDA0002893272680000052
wherein A isjAnd (3) a mapping matrix for the jth face image block is represented, T represents the transpose of the matrix, and the optimal mapping matrix is obtained by the following formula (13):
Figure FDA0002893272680000053
since the high-resolution face image blocks and the low-resolution face image blocks are not in a simple mapping relationship, performing smooth constraint on the formula (13) by using the distance matrix obtained in the fourteenth step to obtain the following smooth regression formula (14):
Figure FDA0002893272680000054
wherein
Figure FDA0002893272680000055
Where tr () is the trace of the matrix, adding a regularization term to make the mapping process smoother yields the following equation (15):
Figure FDA0002893272680000056
wherein
Figure FDA0002893272680000057
F represents Frobenius norm, and lambda is used for balancing reconstruction error and AjThe mapping matrix corresponding to the jth block image is obtained by simplification:
Figure FDA0002893272680000058
wherein E represents an identity matrix;
sixthly, reconstructing the low-resolution face image blocks in the test set to obtain high-resolution face image blocks:
by passing
Figure FDA0002893272680000059
Obtaining a low-resolution face image I in a test settlFace image block I intl,jHigh-frequency information of corresponding high-resolution face image block, and then interpolating the high-frequency information to Itl,jTo obtain a reconstructed face image block I'th,j
Seventeenth, combining all the reconstructed image blocks into a reconstructed high-resolution face image:
combining all the reconstructed face image blocks according to the serial numbers in the sequence from top to bottom and from left to right, averaging the overlapped parts in the combination process to obtain a reconstructed high-resolution face image I'th
Eighteenth, constructing a pyramid face super-resolution reconstruction model:
(18.1) to I 'obtained in the seventeenth step'thDimensionality reduction is carried out by using a nearest neighbor interpolation method to obtain a dimensionality-reduced low-resolution face image I'tlThe face image after dimension reduction is combined with the ItlAre the same in size;
(18.2) reconstructing all the low-resolution facial images in the training set by the steps from the first step to the seventeenth step, and reconstructing the ith low-resolution facial image in the training set
Figure FDA0002893272680000061
The process of reconstruction is as follows:
Figure FDA0002893272680000062
in the training set as low-resolution face images in the test set
Figure FDA0002893272680000063
And
Figure FDA0002893272680000064
as a training set, obtaining a high-resolution image by utilizing the reconstruction from the first step to the seventeenth step
Figure FDA0002893272680000065
Then using nearest neighbor interpolation method to pair
Figure FDA0002893272680000066
Reducing the vitamin content to obtain
Figure FDA0002893272680000067
(18.3) taking the block size of the high-resolution face image as R2*R2Pixel, R2Has a value of 6 to 10, and R2≠R1The number of pixels overlapped between the high resolution image blocks is K2The block size of the low resolution face image is (R)2/d)*(R2D) pixels, wherein d is a reduction multiple and has the same value as d in the first step, and the number of overlapped pixels among the low-resolution image blocks is K2L 'from (18.1)'tlAs a low-resolution face image in the test set, obtained (18.2)
Figure FDA0002893272680000068
And
Figure FDA0002893272680000069
as a training set, performing a face image super-resolution reconstruction process again to obtain a final reconstructed face image;
and finishing the reconstruction process of the low-resolution face image in the test set B, and finally finishing the super-resolution reconstruction of the pyramid face image based on the regression model.
2. The regression model-based pyramid face image super-resolution reconstruction method of claim 1, wherein: the first step, the size of the low-resolution face image set and the high-resolution face image set in the training set is (d a) b pixels, d is a multiple, and the value of d is 2; the third step is to carry out the expansion of the high-resolution face image set PhAnd corresponding high-resolution face gradient characteristic image set G thereofhRespectively overlapping K between the image blocks adjacent to the left and the right in the block division1Column pixels, the K1The value of (A) is 4; the fourth step is that the extended low-resolution face image set P is processedlAnd corresponding low-resolution face gradient characteristic image set G thereoflEach block in the block is respectively divided into the size (R)1/d)*(R1A/d) pixel, the value of d being 2; overlap K with left and right adjacent image blocks1A/d column of pixels, the K1The value of (A) is 4; the seventh step is to test the amplified high-resolution face image I in the setthAnd corresponding high-resolution face gradient characteristic image g thereofthThe mode of overlapping in the blocks is that the current image block and the upper and lower adjacent image blocks are overlapped by K1Line pixels, overlap K with left and right adjacent image blocks1Column pixels, the K1The value of (A) is 4; the eighth step, for the low resolution face image I in the test settlAnd corresponding low-resolution face gradient characteristic image g thereoftlEach block in the block is made to have a size of (R)1/d)*(R1D), the value of d being 2; to the left and rightOverlap K between adjacent image blocks1A/d column of pixels, the K1The value of (A) is 4; and eighteenth step, the number of overlapped pixels between the high-resolution image blocks in (18.3) for constructing the pyramid face super-resolution reconstruction model is K2K is the same as2The value of (A) is 4; the block size of the low resolution face image is (R)2/d)*(R2D) pixel, d is the reduction multiple and is the same as the value of d in the first step, and the value of d is 2.
CN201711381261.2A 2017-12-20 2017-12-20 Pyramid face image super-resolution reconstruction method based on regression model Expired - Fee Related CN108090873B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711381261.2A CN108090873B (en) 2017-12-20 2017-12-20 Pyramid face image super-resolution reconstruction method based on regression model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711381261.2A CN108090873B (en) 2017-12-20 2017-12-20 Pyramid face image super-resolution reconstruction method based on regression model

Publications (2)

Publication Number Publication Date
CN108090873A CN108090873A (en) 2018-05-29
CN108090873B true CN108090873B (en) 2021-03-05

Family

ID=62177638

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711381261.2A Expired - Fee Related CN108090873B (en) 2017-12-20 2017-12-20 Pyramid face image super-resolution reconstruction method based on regression model

Country Status (1)

Country Link
CN (1) CN108090873B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109559278B (en) * 2018-11-28 2019-08-09 山东财经大学 Super resolution image reconstruction method and system based on multiple features study
CN109949240B (en) * 2019-03-11 2021-05-04 厦门美图之家科技有限公司 Image processing method and computing device
CN110189255B (en) * 2019-05-29 2023-01-17 电子科技大学 Face detection method based on two-stage detection

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102842115A (en) * 2012-05-31 2012-12-26 哈尔滨工业大学(威海) Compressed sensing image super-resolution reconstruction method based on double dictionary learning
CN103093444A (en) * 2013-01-17 2013-05-08 西安电子科技大学 Image super-resolution reconstruction method based on self-similarity and structural information constraint
CN105550988A (en) * 2015-12-07 2016-05-04 天津大学 Super-resolution reconstruction algorithm based on improved neighborhood embedding and structure self-similarity
CN107067367A (en) * 2016-09-08 2017-08-18 南京工程学院 A kind of Image Super-resolution Reconstruction processing method
CN107341776A (en) * 2017-06-21 2017-11-10 北京工业大学 Single frames super resolution ratio reconstruction method based on sparse coding and combinatorial mapping

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2908285A1 (en) * 2014-02-13 2015-08-19 Thomson Licensing Method for performing super-resolution on single images and apparatus for performing super-resolution on single images

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102842115A (en) * 2012-05-31 2012-12-26 哈尔滨工业大学(威海) Compressed sensing image super-resolution reconstruction method based on double dictionary learning
CN103093444A (en) * 2013-01-17 2013-05-08 西安电子科技大学 Image super-resolution reconstruction method based on self-similarity and structural information constraint
CN105550988A (en) * 2015-12-07 2016-05-04 天津大学 Super-resolution reconstruction algorithm based on improved neighborhood embedding and structure self-similarity
CN107067367A (en) * 2016-09-08 2017-08-18 南京工程学院 A kind of Image Super-resolution Reconstruction processing method
CN107341776A (en) * 2017-06-21 2017-11-10 北京工业大学 Single frames super resolution ratio reconstruction method based on sparse coding and combinatorial mapping

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Face Super-Resolution via Multilayer Locality-Constrained Iterative Neighbor Embedding and Intermediate Dictionary Learning;Junjun Jiang;《IEEE TRANSACTIONS ON IMAGE PROCESSING》;20141031;第4220-4231页 *
Hallucinatingfacebyposition-patch;Xiang Ma 等;《Pattern Recognition》;20101231;第2224–2236页 *
Locality-Constrained Double Low-Rank Representation for Effective Face Hallucination;Guangwei Gao 等;《IEEE Access》;20161231;第1-7页 *
SRLSP: A Face Image Super-Resolution Algorithm Using Smooth Regression with Local Structure Prior;Junjun Jiang 等;《IEEE TRANSACTIONS ON MULTIMEDIA》;20161231;第1-14页 *
基于MAP 框架的金字塔人脸超分辨率算法;薛翠红 等;《计算机工程》;20120531;第206-211页 *
基于约束块重建的人脸超分辨率方法;卫保国 等;《计算机仿真》;20111231;第277-280页 *

Also Published As

Publication number Publication date
CN108090873A (en) 2018-05-29

Similar Documents

Publication Publication Date Title
CN105741252B (en) Video image grade reconstruction method based on rarefaction representation and dictionary learning
CN106127684B (en) Image super-resolution Enhancement Method based on forward-backward recutrnce convolutional neural networks
CN110111256B (en) Image super-resolution reconstruction method based on residual distillation network
CN101976435B (en) Combination learning super-resolution method based on dual constraint
CN103093444B (en) Image super-resolution reconstruction method based on self-similarity and structural information constraint
CN110119780A (en) Based on the hyperspectral image super-resolution reconstruction method for generating confrontation network
CN109255755A (en) Image super-resolution rebuilding method based on multiple row convolutional neural networks
CN101615290B (en) Face image super-resolution reconstructing method based on canonical correlation analysis
Zhang et al. Collaborative network for super-resolution and semantic segmentation of remote sensing images
CN110473142B (en) Single image super-resolution reconstruction method based on deep learning
CN108090873B (en) Pyramid face image super-resolution reconstruction method based on regression model
Shi et al. Exploiting multi-scale parallel self-attention and local variation via dual-branch transformer-CNN structure for face super-resolution
CN104504672B (en) Low-rank sparse neighborhood insertion ultra-resolution method based on NormLV features
CN110852326B (en) Handwriting layout analysis and multi-style ancient book background fusion method
Wang et al. A group-based embedding learning and integration network for hyperspectral image super-resolution
Li et al. Spectral feature fusion networks with dual attention for hyperspectral image classification
CN112598604A (en) Blind face restoration method and system
CN108257093A (en) The single-frame images ultra-resolution method returned based on controllable core and Gaussian process
Wang et al. Group shuffle and spectral-spatial fusion for hyperspectral image super-resolution
CN115526779A (en) Infrared image super-resolution reconstruction method based on dynamic attention mechanism
Liu et al. Fine-grained image inpainting with scale-enhanced generative adversarial network
CN116664435A (en) Face restoration method based on multi-scale face analysis map integration
Deng et al. Multiple frame splicing and degradation learning for hyperspectral imagery super-resolution
CN107085826A (en) Based on the non local single image super resolution ratio reconstruction method for returning priori of weighted overlap-add
Xu et al. AS 3 ITransUNet: Spatial-Spectral Interactive Transformer U-Net with Alternating Sampling for Hyperspectral Image Super-Resolution

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210305

Termination date: 20211220