CN111292238A - Face image super-resolution reconstruction method based on orthogonal partial least squares - Google Patents


Publication number
CN111292238A
CN111292238A (application CN202010069636.7A)
Authority
CN
China
Prior art keywords: resolution, image, residual, face, low
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010069636.7A
Other languages
Chinese (zh)
Other versions
CN111292238B (en)
Inventor
袁运浩
李进
李云
强继朋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yangzhou University
Original Assignee
Yangzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yangzhou University
Priority to CN202010069636.7A
Publication of CN111292238A
Application granted
Publication of CN111292238B
Legal status: Active
Anticipated expiration: not listed

Classifications

    • G06T 3/4053: Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06F 18/2132: Feature extraction based on discrimination criteria, e.g. discriminant analysis
    • G06F 18/21322: Rendering the within-class scatter matrix non-singular
    • G06F 18/2135: Feature extraction based on approximation criteria, e.g. principal component analysis
    • G06F 18/21328: Rendering the within-class scatter matrix non-singular involving subspace restrictions, e.g. nullspace techniques
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a face image super-resolution reconstruction method based on orthogonal partial least squares (OPLS), comprising the following steps: 1. extract features with OPLS, mapping the face images into a subspace in which the covariance between corresponding high- and low-resolution image matrices is maximized, and reconstruct the high-resolution global face of a low-resolution input image in that subspace using the idea of neighborhood reconstruction; 2. divide the face residual into several overlapping blocks and apply neighborhood reconstruction to each block to construct high-resolution residual compensation; 3. output the final high-resolution face image as the global face plus the residual compensation. The method is superior in both the contour and the detail of the super-resolution results, scores higher on objective indices than classical algorithms, and performs better under multiple poses and different scaling factors, showing satisfactory super-resolution reconstruction performance, strong robustness, and practical feasibility for market implementation.

Description

Face image super-resolution reconstruction method based on orthogonal partial least squares
Technical Field
The invention relates to the field of super-resolution reconstruction, in particular to a face image super-resolution reconstruction method based on orthogonal partial least squares.
Background
Multivariate analysis methods are often used for feature extraction in super-resolution reconstruction, among which Principal Component Analysis (PCA) and Canonical Correlation Analysis (CCA) are popular. The feature extraction step typically reduces the dimensionality of the data and suppresses noise. PCA extracts the useful information of a face by preserving appropriate dimensions and filtering noise. Wang et al. proposed a framework that generates high-resolution faces by deriving the linear combination coefficients of images with PCA. Huang et al. proposed a super-resolution method that uses CCA to extract the relationship between high- and low-resolution images. Orthogonal Partial Least Squares (OPLS) is also a multivariate analysis method, but unlike CCA, which maximizes the correlation between data sets, OPLS is based on Partial Least Squares (PLS): it projects variables orthogonally onto latent structures, separating them into predictive and irrelevant parts, and seeks projection vectors that maximize the covariance between different data sets and predict output labels; its projection vectors are therefore more discriminative.
Low resolution caused by factors such as pose change, long distance, and illumination makes face recognition difficult, and classical face recognition algorithms cannot handle low-resolution face images well. To solve this problem, researchers have proposed many effective face super-resolution algorithms. Face image super-resolution, also called face hallucination, generates a corresponding high-resolution face image from an input low-resolution face image and is widely used in video surveillance. Existing face image super-resolution algorithms can be roughly classified into three categories: interpolation-based, learning-based, and reconstruction-based. Learning-based methods predict the high-resolution image by learning the relationship between high- and low-resolution training sets; recently many researchers have combined deep learning with learning-based super-resolution to great success, e.g. methods based on convolutional neural network models perform very well. Interpolation-based methods generate high-resolution images by predicting unknown pixel information, but because no new information is introduced, the results are often blurred. Reconstruction-based methods construct high-resolution images using prior knowledge and constraint information, but still perform poorly on the details of the output.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a face image super-resolution reconstruction method based on orthogonal partial least squares.
The purpose of the invention is realized as follows: a face image super-resolution reconstruction method based on orthogonal partial least squares comprises the following steps:
Step 1: extract features of the high- and low-resolution images in the training set: use PCA to reduce the dimensionality of the data, then use the OPLS method to extract features and compute projection vectors; project the image matrices into a subspace in which the covariance between the high- and low-resolution image matrices is maximized; project the input low-resolution face image into the same subspace; and construct the high-resolution global face corresponding to the input face in the subspace by neighborhood reconstruction;
Step 2: compute the high- and low-resolution face residual image sets; divide each residual image into several mutually overlapping blocks of equal side length; project the blocks into the subspace using OPLS; construct the high-resolution face residual blocks in the subspace by the neighborhood reconstruction method; and synthesize the residual blocks into the high-resolution face residual compensation;
Step 3: the finally reconstructed high-resolution face image is the high-resolution global face plus the high-resolution face residual compensation.
As a further limitation of the present invention, the high resolution global face reconstruction in step 1 includes the following steps:
(1) For the high-resolution image set X = [x_1, x_2, …, x_m] and the low-resolution image set Y = [y_1, y_2, …, y_m] in the training set, remove the means μ_x and μ_y, and extract features from each set with PCA:

$$\tilde{X} = T_x^T (X - \mu_x \mathbf{1}^T), \qquad \tilde{Y} = T_y^T (Y - \mu_y \mathbf{1}^T)$$

where T_x and T_y are the PCA transformation matrices of the high- and low-resolution images, respectively, and 1 denotes a column vector whose elements are all 1;
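Step (1) above can be sketched in code. The helper below is illustrative, not from the patent; it assumes the images are vectorized into the columns of a d × m matrix and keeps enough principal components to retain a given variance ratio (98% in the experiments):

```python
import numpy as np

def pca_features(X, var_ratio=0.98):
    """Remove the mean of the column-sample matrix X (d x m) and project onto
    the leading principal components retaining `var_ratio` of the variance.
    Returns (T, mu, features) with features = T^T (X - mu 1^T)."""
    mu = X.mean(axis=1, keepdims=True)             # mean face mu (d x 1)
    Xc = X - mu                                    # X - mu 1^T
    U, s, _ = np.linalg.svd(Xc, full_matrices=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)    # cumulative variance ratio
    k = int(np.searchsorted(energy, var_ratio)) + 1
    T = U[:, :k]                                   # PCA transform matrix T (d x k)
    return T, mu, T.T @ Xc                         # principal-component features

# Example: 64-dimensional "images", 30 training samples
rng = np.random.default_rng(0)
X = rng.standard_normal((64, 30))
T, mu, feats = pca_features(X)
```

The same helper would be applied separately to the high- and low-resolution sets to obtain T_x and T_y.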
(2) Center the principal-component features $\tilde{X}$ and $\tilde{Y}$ by subtracting their means $\mu_{\tilde{x}}$ and $\mu_{\tilde{y}}$ to obtain $\hat{X}$ and $\hat{Y}$; then use OPLS to extract features from $\hat{X}$ and $\hat{Y}$ by solving the following generalized eigenvalue problem:

$$C_{xy} C_{yx} w_x = \lambda C_{xx} w_x, \qquad C_{yx} C_{xy} w_y = \lambda C_{yy} w_y$$

where $C_{xy} = E[\hat{X}\hat{Y}^T] = C_{yx}^T$, $C_{xx} = E[\hat{X}\hat{X}^T]$, $C_{yy} = E[\hat{Y}\hat{Y}^T]$, and $E[\cdot]$ denotes the mathematical expectation, such that the following conditions are satisfied:

$$w_x^T C_{xx} w_x = 1, \qquad w_y^T C_{yy} w_y = 1$$
From $W_x$ and $W_y$, the d pairs of eigenvectors with the largest eigenvalues are selected to form the projection matrices $V_x$ and $V_y$, and the principal components $\hat{X}$ and $\hat{Y}$ are projected into the subspace to obtain $C_x = V_x^T \hat{X}$ and $C_y = V_y^T \hat{Y}$;
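The generalized eigenvalue step can be sketched numerically. The form below assumes the common OPLS eigenproblem C_xy C_yx w = λ C_xx w; the function name and the small ridge regularization are assumptions for illustration, not details from the patent:

```python
import numpy as np

def opls_directions(Xh, Yh, d, reg=1e-6):
    """Top-d eigenvectors of C_xy C_yx w = lambda C_xx w.

    Xh, Yh: centered feature matrices (rows = dimensions, cols = samples).
    A small ridge `reg` keeps C_xx invertible (numerical assumption)."""
    m = Xh.shape[1]
    Cxx = Xh @ Xh.T / m + reg * np.eye(Xh.shape[0])   # E[X X^T] (regularized)
    Cxy = Xh @ Yh.T / m                               # E[X Y^T]
    M = np.linalg.solve(Cxx, Cxy @ Cxy.T)             # Cxx^{-1} Cxy Cyx
    vals, vecs = np.linalg.eig(M)
    order = np.argsort(-vals.real)[:d]                # d largest eigenvalues
    return vecs[:, order].real                        # projection matrix (dims x d)

rng = np.random.default_rng(1)
Xh = rng.standard_normal((10, 50)); Xh -= Xh.mean(axis=1, keepdims=True)
Yh = 0.5 * Xh[:8] + 0.1 * rng.standard_normal((8, 50))
Yh -= Yh.mean(axis=1, keepdims=True)
Vx = opls_directions(Xh, Yh, d=3)
```

The low-resolution side V_y would be obtained analogously by swapping the roles of the two views.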
(3) For the low-resolution test image $I_t$, likewise compute its principal component $\tilde{c}_t = T_y^T (I_t - \mu_y)$ and use the projection matrix $V_y$ to project it into the same subspace:

$$C_t = V_y^T (\tilde{c}_t - \mu_{\tilde{y}})$$

Use the neighborhood reconstruction method to find the k nearest neighbors $C_{y_1}, \ldots, C_{y_k}$ of $C_t$ in the subspace, and compute the corresponding weights $w_i$:

$$w_i = \frac{\sum_j (K^{-1})_{ij}}{\sum_p \sum_q (K^{-1})_{pq}}$$

where $K_{ij} = (C_t - C_{y_i})^T (C_t - C_{y_j})$. Next, construct the feature of the high-resolution global face in the subspace with these weights:

$$C_g = \sum_{i=1}^{k} w_i C_{x_i}$$

Convert it from the subspace features back to the pixel domain to obtain the high-resolution global face image $I_g$:

$$I_g = T_x (V_x C_g + \mu_{\tilde{x}}) + \mu_x$$
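The neighborhood-reconstruction weights follow a locally-linear-embedding style solution of the local Gram system K_ij = (C_t - C_yi)^T (C_t - C_yj). A minimal sketch, with a regularization term added as an assumption for numerical stability:

```python
import numpy as np

def neighbor_weights(ct, neighbors, reg=1e-8):
    """Reconstruction weights of query `ct` (d,) from its k nearest
    neighbors `neighbors` (d x k): minimize ||ct - sum_i w_i n_i||^2
    subject to sum_i w_i = 1."""
    D = ct[:, None] - neighbors                       # columns C_t - C_yi (d x k)
    K = D.T @ D                                       # local Gram matrix K_ij
    K += reg * np.trace(K) * np.eye(K.shape[0])       # regularize (assumption)
    w = np.linalg.solve(K, np.ones(K.shape[0]))       # solve K w = 1
    return w / w.sum()                                # normalize to sum to 1

rng = np.random.default_rng(2)
N = rng.standard_normal((5, 4))                       # 4 neighbors in a 5-D subspace
ct = N @ np.array([0.4, 0.3, 0.2, 0.1])               # query inside the neighbor span
w = neighbor_weights(ct, N)
```

Because the query here lies exactly in the affine span of its neighbors, the recovered weights closely match the generating coefficients.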
As a further limitation of the present invention, said residual compensation in step 2 comprises the steps of:
(1) For all low-resolution images Y in the training set, obtain the high-resolution global face image set $Y_g$ using the steps above; then compute the residual set of the high-resolution image set $R_x = X - Y_g$, the residual set of the low-resolution image set $R_y = Y - Y'_g$, and the residual of the input low-resolution test image $R_t = I_t - I'_g$, where $Y'_g$ and $I'_g$ denote the results of downsampling $Y_g$ and $I_g$, respectively;
(2) Divide all residual images into several mutually overlapping residual blocks of equal size; N denotes the number of blocks, and for each position i there is a set of residual blocks taken from all high-resolution training images and a set taken from all low-resolution training images. Reduce the dimensionality of the residual blocks by PCA. For each residual block of the test image, the corresponding training set is formed jointly from the training residual blocks at that position and at the eight surrounding positions; the k′ nearest neighbors are searched among these training residual blocks to construct the weight set of that position, and the high-resolution residual block is reconstructed with those weights. The computed residual blocks are combined to obtain $R_h$, averaging the overlapping regions; $R_h$ is converted back to the pixel domain to obtain the high-resolution residual face $I_r$.
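The block division and overlap averaging used for residual compensation can be sketched as follows (the helper names are illustrative; the experiments use, e.g., 16 × 16 blocks with an 8-pixel overlap):

```python
import numpy as np

def split_blocks(img, size, step):
    """Split `img` into overlapping size x size blocks with stride `step`,
    returning the blocks and their top-left positions."""
    H, W = img.shape
    pos = [(r, c) for r in range(0, H - size + 1, step)
                  for c in range(0, W - size + 1, step)]
    return [img[r:r + size, c:c + size] for r, c in pos], pos

def merge_blocks(blocks, pos, shape, size):
    """Recombine overlapping blocks, averaging the overlapped regions."""
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    for b, (r, c) in zip(blocks, pos):
        acc[r:r + size, c:c + size] += b
        cnt[r:r + size, c:c + size] += 1
    return acc / np.maximum(cnt, 1)      # average where blocks overlap

img = np.arange(64, dtype=float).reshape(8, 8)
blocks, pos = split_blocks(img, size=4, step=2)
merged = merge_blocks(blocks, pos, img.shape, 4)
```

Splitting and re-merging an unmodified image is lossless, which is a useful sanity check before inserting the reconstructed residual blocks.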
As a further limitation of the present invention, the finally reconstructed high-resolution face image in step 3 is $I_h = I_g + I_r$.
Compared with the prior art, the invention has the following beneficial effects: it uses orthogonal partial least squares for feature extraction, maps the images into a subspace in which the covariance between corresponding high- and low-resolution image matrices is maximized, reconstructs the high-resolution global face of the low-resolution input image in that subspace using the idea of neighborhood reconstruction, divides the face residual into several overlapping blocks, applies neighborhood reconstruction to each block to construct the high-resolution residual compensation, and finally outputs the high-resolution face image as the global face plus the residual compensation.
Drawings
FIG. 1 is a flow chart of an implementation of the present invention.
Fig. 2 is a comparison graph of the results of the global face experiment of the three methods.
FIG. 3 is a comparison graph of the results of super-resolution reconstruction on a CAS-PEAL database by the five methods.
Fig. 4 is a comparison graph of the super-resolution reconstruction results of the five methods on the FERET database.
Detailed Description
As shown in fig. 1, a face image super-resolution reconstruction method based on orthogonal partial least squares includes the following steps:
Step 1: extract features of the high- and low-resolution images in the training set: use PCA to reduce the dimensionality of the data, then use the OPLS method to extract features and compute projection vectors; project the image matrices into a subspace in which the covariance between the high- and low-resolution image matrices is maximized; project the input low-resolution face image into the same subspace; and construct the high-resolution global face corresponding to the input face in the subspace by neighborhood reconstruction;
the high-resolution global face reconstruction in the step 1 comprises the following steps:
(1) For the high-resolution image set X = [x_1, x_2, …, x_m] and the low-resolution image set Y = [y_1, y_2, …, y_m] in the training set, remove the means μ_x and μ_y, and extract features from each set with PCA:

$$\tilde{X} = T_x^T (X - \mu_x \mathbf{1}^T), \qquad \tilde{Y} = T_y^T (Y - \mu_y \mathbf{1}^T)$$

where T_x and T_y are the PCA transformation matrices of the high- and low-resolution images, respectively, and 1 denotes a column vector whose elements are all 1;
(2) Center the principal-component features $\tilde{X}$ and $\tilde{Y}$ by subtracting their means $\mu_{\tilde{x}}$ and $\mu_{\tilde{y}}$ to obtain $\hat{X}$ and $\hat{Y}$; then use OPLS to extract features from $\hat{X}$ and $\hat{Y}$ by solving the following generalized eigenvalue problem:

$$C_{xy} C_{yx} w_x = \lambda C_{xx} w_x, \qquad C_{yx} C_{xy} w_y = \lambda C_{yy} w_y$$

where $C_{xy} = E[\hat{X}\hat{Y}^T] = C_{yx}^T$, $C_{xx} = E[\hat{X}\hat{X}^T]$, $C_{yy} = E[\hat{Y}\hat{Y}^T]$, and $E[\cdot]$ denotes the mathematical expectation, such that the following conditions are satisfied:

$$w_x^T C_{xx} w_x = 1, \qquad w_y^T C_{yy} w_y = 1$$
From $W_x$ and $W_y$, the d pairs of eigenvectors with the largest eigenvalues are selected to form the projection matrices $V_x$ and $V_y$, and the principal components $\hat{X}$ and $\hat{Y}$ are projected into the subspace to obtain $C_x = V_x^T \hat{X}$ and $C_y = V_y^T \hat{Y}$;
(3) For the low-resolution test image $I_t$, likewise compute its principal component $\tilde{c}_t = T_y^T (I_t - \mu_y)$ and use the projection matrix $V_y$ to project it into the same subspace:

$$C_t = V_y^T (\tilde{c}_t - \mu_{\tilde{y}})$$

Use the neighborhood reconstruction method to find the k nearest neighbors $C_{y_1}, \ldots, C_{y_k}$ of $C_t$ in the subspace, and compute the corresponding weights $w_i$:

$$w_i = \frac{\sum_j (K^{-1})_{ij}}{\sum_p \sum_q (K^{-1})_{pq}}$$

where $K_{ij} = (C_t - C_{y_i})^T (C_t - C_{y_j})$. Next, construct the feature of the high-resolution global face in the subspace with these weights:

$$C_g = \sum_{i=1}^{k} w_i C_{x_i}$$

Convert it from the subspace features back to the pixel domain to obtain the high-resolution global face image $I_g$:

$$I_g = T_x (V_x C_g + \mu_{\tilde{x}}) + \mu_x$$
Step 2: compute the high- and low-resolution face residual image sets; divide each residual image into several mutually overlapping blocks of equal side length; project the blocks into the subspace using OPLS; construct the high-resolution face residual blocks in the subspace by the neighborhood reconstruction method; and synthesize the residual blocks into the high-resolution face residual compensation;
the residual compensation in step 2 comprises the following steps:
(1) For all low-resolution images Y in the training set, obtain the high-resolution global face image set $Y_g$ using the steps above; then compute the residual set of the high-resolution image set $R_x = X - Y_g$, the residual set of the low-resolution image set $R_y = Y - Y'_g$, and the residual of the input low-resolution test image $R_t = I_t - I'_g$, where $Y'_g$ and $I'_g$ denote the results of downsampling $Y_g$ and $I_g$, respectively;
(2) Divide all residual images into several mutually overlapping residual blocks of equal size; N denotes the number of blocks, and for each position i there is a set of residual blocks taken from all high-resolution training images and a set taken from all low-resolution training images. Reduce the dimensionality of the residual blocks by PCA. For each residual block of the test image, the corresponding training set is formed jointly from the training residual blocks at that position and at the eight surrounding positions; the k′ nearest neighbors are searched among these training residual blocks to construct the weight set of that position, and the high-resolution residual block is reconstructed with those weights. The computed residual blocks are combined to obtain $R_h$, averaging the overlapping regions; $R_h$ is converted back to the pixel domain to obtain the high-resolution residual face $I_r$.
Step 3: the finally reconstructed high-resolution face image is the high-resolution global face plus the high-resolution face residual compensation, $I_h = I_g + I_r$.
The invention can be further illustrated by the following experiments:
to test the effectiveness of the present invention, comparative tests were performed using the CAS-PEAL database and the FERET database, respectively. 1040 facial images are selected from the CAS-PEAL database, one image is selected for each person, the high-resolution image is 96 multiplied by 96, a 24 multiplied by 24 low-resolution image set is obtained by down-sampling the high-resolution image set, the zoom factor is four times, 1000 high-resolution images and 1000 low-resolution images serve as training sets, and the rest 40 high-resolution images and 40 low-resolution images serve as test sets. And selecting 1400 face images of 200 persons from the FERET database, wherein 7 face images with different postures and illumination are selected for each person. The first image of each person was used as the test set, and the remaining six were used as the training set, i.e., the training set contained 1400 high-resolution and low-resolution images and the test set contained 200 high-resolution and low-resolution images. The high resolution image size is 80 × 80, and the low resolution image has three sizes: 40 × 40, 27 × 27, and 20 × 20, with scaling factors of 2, 3, and 4, respectively.
Experiment 1 global face reconstruction comparative experiment based on CAS-PEAL database:
in this experiment, the present invention retained 98% of the PCA variance contribution, the number of neighborhoods K in the global face reconstruction was 350, and OPLS retained 100 dimensions. The CLLR-SR method retains 98% of PCA variance contribution rate, and the neighborhood number K is 300. The neighborhood number K in the SR2DCCA method is 400. The experiment is based on a CAS-PEAL data set, and the number of training images is 1000, and the number of testing images is 40. The obtained global face result is shown in fig. 2, and the following table shows average peak signal-to-noise ratio (PSNR) and Structural Similarity (SSIM) index scores of the global face in the three methods, and it can be seen that compared with the other two methods, the global face reconstruction result of the invention is more excellent in subjective visual effect and objective indexes of PSNR and SSIM.
TABLE 1 PSNR and SSIM index scores of the global face for the three methods

            The invention   CLLR-SR   SR2DCCA
PSNR (dB)   26.95           24.25     25.57
SSIM        82.29%          75.95%    79.62%
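Of the two reported indices, PSNR has a simple closed form; a generic sketch for 8-bit images (not code from the patent) is below. SSIM additionally compares local luminance, contrast, and structure statistics between the two images.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-size images."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((8, 8), 100.0)
noisy = ref + 10.0                      # constant error of 10 gray levels
score = psnr(ref, noisy)                # mse = 100 -> 10*log10(65025/100)
```

Higher PSNR means the reconstruction is numerically closer to the ground-truth high-resolution image.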
Experiment 2 super-resolution reconstruction contrast experiment based on CAS-PEAL database:
in this experiment, the global face part parameters are consistent with those of experiment 1, the number of neighborhoods K of the residual compensation part of the present invention is 250, the residual block is 16 × 16, and the overlap is 8 pixels. The CLLR-SR method has a neighborhood number K of 300, a residual block size of 16 × 16, and 8 pixels overlapped. The neighborhood number for the SR2DCCA method is set to 800. The RAISR method has an angle of 24, intensity of 3, coherence of 3, and block size of 11 × 11. The experiment is based on a CAS-PEAL data set, and the number of training images is 1000, and the number of testing images is 40. The comparison graph of the super-resolution reconstruction result is shown in fig. 3, the following table shows the average PSNR and SSIM index scores of the super-resolution results of the five methods, and it can be seen that the method has more advantages in the aspects of the profile and detail content of the result, and the objective index score is also higher than that of the classical algorithm.
TABLE 2 PSNR and SSIM index scores of the super-resolution results of the five methods

            The invention   CLLR-SR   SR2DCCA   RAISR    Bicubic
PSNR (dB)   30.01           28.95     27.59     29.36    28.11
SSIM        89.57%          88.07%    81.07%    89.33%   87.03%
Experiment 3 super-resolution reconstruction contrast experiment based on the FERET database:
in order to test the robustness of the invention, the experiment adopts a FERET data set which comprises face images under various postures and illumination conditions. The training set contains 1200 images and the test set contains 200 images. And respectively testing the super-resolution conditions under 2-time, 3-time and 4-time scaling factors. In this experiment, the present invention retained 98% of PCA variance contribution, 100 dimensions for OPLS, 350 for the number of neighbors in the global face reconstruction, 250 for the number of neighbors in the residual compensation step, 12 × 12 for the size of the residual block, overlapping 10 pixels. In the CLLR-SR method, the PCA variance contribution ratio is also maintained at 98%, the number of neighborhoods of the global face reconstruction part is 300, and the number of neighborhoods of the residual compensation part is 200. In the SR2DCCA method, the dimensionality is preserved by 80, the number of neighborhoods in the global face reconstruction is 400, and the number of neighborhoods in the residual compensation is 800. The parameters of the RAISR method are the same as those in experiment 2. The obtained 2-time, 3-time and 4-time super-resolution reconstruction results are shown in fig. 4, and the following table shows average PSNR and SSIM index scores of the super-resolution reconstruction results of the five methods at 2-time, 3-time and 4-time respectively. Therefore, the invention has excellent performance under multiple postures and different zoom factors.
TABLE 3 PSNR, SSIM index score of the global face and super-resolution result
(Table 3 appears only as an image in the original document; its values are not recoverable here.)
In summary, the invention uses the Orthogonal Partial Least Squares (OPLS) method for feature extraction, maps the face images into a subspace in which the covariance between corresponding high- and low-resolution image matrices is maximized, and reconstructs the high-resolution global face of a low-resolution input image in that subspace using the idea of neighborhood reconstruction. The invention divides the face residual into several overlapping blocks and applies neighborhood reconstruction to each block to construct the high-resolution residual compensation. Finally, the high-resolution face image output by the algorithm is the global face plus the residual compensation. The super-resolution results show that the method performs well both in subjective visual quality and in the objective PSNR and SSIM indices.
The invention is not limited to the above embodiments. Based on the technical solutions disclosed herein, those skilled in the art may substitute or modify some technical features without creative effort, and all such substitutions and modifications fall within the protection scope of the invention.

Claims (4)

1. A face image super-resolution reconstruction method based on orthogonal partial least squares is characterized by comprising the following steps:
Step 1: extract features of the high- and low-resolution images in the training set: use PCA to reduce the dimensionality of the data, then use the OPLS method to extract features and compute projection vectors; project the image matrices into a subspace in which the covariance between the high- and low-resolution image matrices is maximized; project the input low-resolution face image into the same subspace; and construct the high-resolution global face corresponding to the input face in the subspace by neighborhood reconstruction;
Step 2: compute the high- and low-resolution face residual image sets; divide each residual image into several mutually overlapping blocks of equal side length; project the blocks into the subspace using OPLS; construct the high-resolution face residual blocks in the subspace by the neighborhood reconstruction method; and synthesize the residual blocks into the high-resolution face residual compensation;
Step 3: the finally reconstructed high-resolution face image is the high-resolution global face plus the high-resolution face residual compensation.
2. The super-resolution reconstruction method for facial images based on orthogonal partial least squares as claimed in claim 1, wherein the high resolution global face reconstruction in step 1 comprises the following steps:
(1) For the high-resolution image set X = [x_1, x_2, …, x_m] and the low-resolution image set Y = [y_1, y_2, …, y_m] in the training set, remove the means μ_x and μ_y, and extract features from each set with PCA:

$$\tilde{X} = T_x^T (X - \mu_x \mathbf{1}^T), \qquad \tilde{Y} = T_y^T (Y - \mu_y \mathbf{1}^T)$$

where T_x and T_y are the PCA transformation matrices of the high- and low-resolution images, respectively, and 1 denotes a column vector whose elements are all 1;
(2) Center the principal-component features $\tilde{X}$ and $\tilde{Y}$ by subtracting their means $\mu_{\tilde{x}}$ and $\mu_{\tilde{y}}$ to obtain $\hat{X}$ and $\hat{Y}$; then use OPLS to extract features from $\hat{X}$ and $\hat{Y}$ by solving the following generalized eigenvalue problem:

$$C_{xy} C_{yx} w_x = \lambda C_{xx} w_x, \qquad C_{yx} C_{xy} w_y = \lambda C_{yy} w_y$$

where $C_{xy} = E[\hat{X}\hat{Y}^T] = C_{yx}^T$, $C_{xx} = E[\hat{X}\hat{X}^T]$, $C_{yy} = E[\hat{Y}\hat{Y}^T]$, and $E[\cdot]$ denotes the mathematical expectation, such that the following conditions are satisfied:

$$w_x^T C_{xx} w_x = 1, \qquad w_y^T C_{yy} w_y = 1$$
From $W_x$ and $W_y$, the d pairs of eigenvectors with the largest eigenvalues are selected to form the projection matrices $V_x$ and $V_y$, and the principal components $\hat{X}$ and $\hat{Y}$ are projected into the subspace to obtain $C_x = V_x^T \hat{X}$ and $C_y = V_y^T \hat{Y}$;
(3) for the low-resolution test image It, likewise compute its principal component pt = Ty^T (It − μy) and center it with μPy to obtain p̃t; using the projection matrix Vy, project it into the same subspace:

Ct = Vy^T p̃t;

use the neighborhood reconstruction method to search the k nearest neighbors Cy1, …, Cyk of Ct in the subspace and compute the corresponding weights w:

w = K^(−1) 1 / (1^T K^(−1) 1)

where Kij = (Ct − Cyi)^T (Ct − Cyj); next, construct the feature of the high-resolution global face in the subspace using the weights:

Cg = Σi wi Cxi;

convert Cg from the subspace feature back to the pixel domain to obtain the high-resolution global face image Ig:

Ig = Tx ((Vx^T)^+ Cg + μPx) + μx

where (Vx^T)^+ denotes the pseudo-inverse of Vx^T.
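The neighborhood reconstruction weights above can be sketched as follows. The helper name is hypothetical, and the small regularization term is added only so the local Gram matrix stays invertible; it is not part of the claim:

```python
import numpy as np

def neighbor_weights(ct, Cy):
    """Reconstruction weights for ct from its k nearest neighbors.

    ct: (d,) subspace feature of the test image; Cy: (d, k) matrix whose
    columns are the k nearest neighbor features. Solves K w = 1 with
    K_ij = (ct - Cy_i)^T (ct - Cy_j), then normalizes so sum(w) = 1.
    """
    D = ct[:, None] - Cy                                 # (d, k) differences
    K = D.T @ D                                          # local Gram matrix
    K = K + 1e-8 * (np.trace(K) + 1.0) * np.eye(K.shape[0])  # keep invertible
    w = np.linalg.solve(K, np.ones(K.shape[0]))
    return w / w.sum()
```

The high-resolution subspace feature is then obtained as `Cg = Cx_neighbors @ w`, where `Cx_neighbors` holds the high-resolution features of the same k training samples.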
3. The super-resolution reconstruction method for facial images based on orthogonal partial least squares as claimed in claim 1, wherein the residual compensation in step 2 comprises the following steps:
(1) for all low-resolution images Y in the training set, use the above method to find their high-resolution global face image set Yg, and obtain the residual set of the high-resolution image set Rx = X − Yg, the residual set of the low-resolution image set Ry = Y − Y′g, and the residual of the input low-resolution test image Rt = It − I′g, where Y′g and I′g respectively denote the results of downsampling Yg and Ig;
(2) divide all residual images into a number of equally sized, mutually overlapping residual blocks {Rx_i} and {Ry_i}, i = 1, 2, …, N, where N is the number of blocks, Rx_i denotes the set of residual blocks of all high-resolution training images located at position i, and Ry_i denotes the set of residual blocks of all low-resolution training images located at position i; reduce the dimensionality of the residual blocks by PCA; for each residual block of the test image, the candidate training set is jointly formed by the blocks of Rx_i and Ry_i at the corresponding position and at the eight surrounding positions, and the k′ nearest neighbors are searched among the training residual blocks to construct the weight set w_i of that position, computed in the same way as in claim 2; the resulting high-resolution residual blocks Rh_i are combined to obtain Rh, averaging the overlapping regions; convert Rh to the pixel domain to obtain the high-resolution residual face Ir.
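A simplified sketch of the block-wise residual compensation described above. All names are hypothetical, the data layout (dicts of flattened patches keyed by block position) is an assumption, and inverse-distance weights are used as a simple stand-in for the patent's K-based neighborhood weights:

```python
import numpy as np

def residual_compensation(Rt, Ry_blocks, Rx_blocks, patch, k):
    """Block-wise residual compensation (illustrative sketch).

    Rt: (H, W) low-resolution test residual image. Ry_blocks / Rx_blocks:
    dicts mapping a block position (r, c) to (n_train, patch*patch) arrays
    of low-/high-resolution training residual patches at that position.
    Returns the high-resolution residual Rh, averaging overlapped regions.
    """
    H, W = Rt.shape
    Rh = np.zeros((H, W))
    counts = np.zeros((H, W))
    for (r, c), Ylo in Ry_blocks.items():
        q = Rt[r:r + patch, c:c + patch].ravel()    # test block at this position
        d2 = ((Ylo - q) ** 2).sum(axis=1)           # distances to training blocks
        nn = np.argsort(d2)[:k]                     # k' nearest neighbors
        w = 1.0 / (d2[nn] + 1e-8)                   # inverse-distance weights
        w /= w.sum()                                # (stand-in for K-based weights)
        hi = (w[:, None] * Rx_blocks[(r, c)][nn]).sum(axis=0)
        Rh[r:r + patch, c:c + patch] += hi.reshape(patch, patch)
        counts[r:r + patch, c:c + patch] += 1
    counts[counts == 0] = 1
    return Rh / counts                              # average overlapped regions
```

The final image of claim 4 is then simply `Ih = Ig + Ir` in the pixel domain.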
4. The super-resolution reconstruction method for facial images based on orthogonal partial least squares as claimed in claim 3, wherein the final reconstructed high-resolution face image in step 3 is Ih = Ig + Ir.
CN202010069636.7A 2020-01-21 2020-01-21 Face image super-resolution reconstruction method based on orthogonal partial least square Active CN111292238B (en)


Publications (2)

Publication Number Publication Date
CN111292238A (en) 2020-06-16
CN111292238B CN111292238B (en) 2023-08-08



Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101615290A (en) * 2009-07-29 2009-12-30 西安交通大学 A kind of face image super-resolution reconstruction method based on canonical correlation analysis
CN106097250A (en) * 2016-06-22 2016-11-09 江南大学 A kind of based on the sparse reconstructing method of super-resolution differentiating canonical correlation


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YUN-HAO YUAN: "LEARNING SIMULTANEOUS FACE SUPER-RESOLUTION USING MULTISET PARTIAL", 《IEEE》 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116402691A (en) * 2023-06-05 2023-07-07 四川轻化工大学 Image super-resolution method and system based on rapid image feature stitching
CN116402691B (en) * 2023-06-05 2023-08-04 四川轻化工大学 Image super-resolution method and system based on rapid image feature stitching



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant