CN111292238A - Face image super-resolution reconstruction method based on orthogonal partial least squares - Google Patents
Face image super-resolution reconstruction method based on orthogonal partial least squares
- Publication number
- CN111292238A CN111292238A CN202010069636.7A CN202010069636A CN111292238A CN 111292238 A CN111292238 A CN 111292238A CN 202010069636 A CN202010069636 A CN 202010069636A CN 111292238 A CN111292238 A CN 111292238A
- Authority
- CN
- China
- Prior art keywords
- resolution
- image
- residual
- face
- low
- Prior art date
- Legal status: Granted (the listed status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G06T3/4053: Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
- G06F18/2132: Feature extraction based on discrimination criteria, e.g. discriminant analysis
- G06F18/21322: Rendering the within-class scatter matrix non-singular
- G06F18/21328: Rendering the within-class scatter matrix non-singular involving subspace restrictions, e.g. nullspace techniques
- G06F18/2135: Feature extraction based on approximation criteria, e.g. principal component analysis
- Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses a face image super-resolution reconstruction method based on orthogonal partial least squares, comprising the following steps: 1. perform feature extraction with orthogonal partial least squares (OPLS), mapping the face images into a subspace in which the covariance between corresponding high-resolution and low-resolution image matrices is maximized, and reconstruct the high-resolution global face of a low-resolution input image in that subspace using the idea of neighborhood reconstruction; 2. divide the face residual into several overlapping blocks and apply neighborhood reconstruction to the block at each position to construct high-resolution residual compensation; 3. output the final high-resolution face image as the global face plus the residual compensation. The method is superior in both the contour and the detail content of the super-resolution results, its objective index scores exceed those of classical algorithms, and it performs well under multiple poses and different scaling factors, showing satisfactory super-resolution reconstruction performance, strong robustness, and practical feasibility for deployment.
Description
Technical Field
The invention relates to the field of super-resolution reconstruction, in particular to a face image super-resolution reconstruction method based on orthogonal partial least squares.
Background
Multivariate analysis methods are often used for feature extraction in super-resolution reconstruction; Principal Component Analysis (PCA) and Canonical Correlation Analysis (CCA) are among the most popular. The feature extraction step typically reduces the dimensionality of the data and suppresses noise: PCA extracts the useful information of a face by retaining an appropriate number of dimensions and filtering out noise. Wang et al. proposed a framework for generating high-resolution faces by deriving linear combination coefficients of images with PCA. Huang et al. proposed a super-resolution method that uses CCA to extract the relationship between high-resolution and low-resolution images. Orthogonal Partial Least Squares (OPLS), also a multivariate analysis method, differs from CCA, which maximizes data correlation. OPLS is based on Partial Least Squares (PLS): it projects continuous variables orthogonally onto latent structures, thereby separating the variables into predictive and irrelevant components, and seeks the feature vectors that maximize the covariance between different data sets and predict the output labels; its projection vectors are therefore more discriminative.
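The covariance-maximizing idea behind PLS-style projections can be illustrated with a minimal NumPy sketch (illustrative only, not the patented algorithm): the paired projection directions are the leading singular vectors of the cross-covariance matrix between the two centered data sets, whereas CCA would maximize correlation instead.

```python
# Illustrative sketch only: PLS-style direction pairs maximizing covariance
# are the singular vectors of the cross-covariance matrix.
import numpy as np

def pls_directions(X, Y, d):
    """X: (n, px), Y: (n, py), both column-centered; returns d direction pairs."""
    Cxy = X.T @ Y / (X.shape[0] - 1)       # cross-covariance matrix (px, py)
    U, s, Vt = np.linalg.svd(Cxy, full_matrices=False)
    return U[:, :d], Vt[:d].T              # Wx: (px, d), Wy: (py, d)

# Toy data: feature 0 of X and Y share a latent factor, the rest is noise.
rng = np.random.default_rng(0)
z = rng.standard_normal((200, 1))
X = np.hstack([z + 0.1 * rng.standard_normal((200, 1)),
               rng.standard_normal((200, 3))])
Y = np.hstack([z + 0.1 * rng.standard_normal((200, 1)),
               rng.standard_normal((200, 3))])
X -= X.mean(axis=0)
Y -= Y.mean(axis=0)
Wx, Wy = pls_directions(X, Y, 1)  # both directions load mostly on feature 0
```

Because the shared factor sits in feature 0 of both views, the first component of each direction vector dominates.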
Low resolution caused by factors such as pose variation, long distance, and illumination conditions makes face recognition difficult, and classical face recognition algorithms do not handle low-resolution face images well. To address this problem, researchers have proposed many effective face super-resolution algorithms. Face image super-resolution, also called face hallucination, generates a corresponding high-resolution face image from an input low-resolution face image, and is widely applied in video surveillance. Existing face image super-resolution algorithms fall roughly into three categories: interpolation-based, learning-based, and reconstruction-based. Learning-based methods predict the high-resolution image by learning the relationship between high-resolution and low-resolution training sets; recently, many researchers have combined deep learning with learning-based super-resolution to great success, e.g. super-resolution based on convolutional neural network models performs excellently. Interpolation-based methods generate high-resolution images by predicting unknown pixel information, but because no new information is introduced, the results are often very blurred. Reconstruction-based methods build high-resolution images using prior knowledge and constraint information, but still perform poorly on the details of the output.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a face image super-resolution reconstruction method based on orthogonal partial least squares.
The purpose of the invention is realized as follows: a face image super-resolution reconstruction method based on orthogonal partial least squares comprises the following steps:
step 1, extracting features from the high-resolution and low-resolution images in the training set: PCA is used to reduce the dimensionality of the data, the OPLS method is used to extract features, and projection vectors are computed that project the image matrices into a subspace in which the covariance between the high-resolution and low-resolution image matrices is maximized; the input low-resolution face image is projected into the same subspace, and the high-resolution global face corresponding to the input face is constructed in that subspace by neighborhood reconstruction;
step 2, computing the high-resolution and low-resolution face residual image sets, dividing each residual image into several overlapping blocks of equal side length, projecting the blocks into a subspace with OPLS, constructing the high-resolution face residual blocks in the subspace by the neighborhood reconstruction method, and synthesizing the residual blocks into the high-resolution face residual compensation;
and step 3, the finally reconstructed high-resolution face image is the high-resolution global face plus the high-resolution face residual compensation.
As a further limitation of the present invention, the high resolution global face reconstruction in step 1 includes the following steps:
(1) For the high-resolution image set X = [x1, x2, …, xm] and the low-resolution image set Y = [y1, y2, …, ym] in the training set, subtract the means μx and μy and extract features from each set with PCA: Px = Tx^T(X − μx·1^T) and Py = Ty^T(Y − μy·1^T), where Tx and Ty are the PCA transformation matrices of the high-resolution and low-resolution images respectively, and 1 denotes a column vector whose elements are all 1;
(2) Center the principal-component features Px and Py by their means μPx and μPy to obtain Qx and Qy; then perform OPLS feature extraction on Qx and Qy by solving the generalized eigenvalue problems Cxy·Cxy^T·Wx = Λ·Cxx·Wx and Cxy^T·Cxy·Wy = Λ·Cyy·Wy, where Cxx = Qx·Qx^T, Cyy = Qy·Qy^T, and Cxy = Qx·Qy^T; from Wx and Wy select the d eigenvector pairs with the largest eigenvalues to form the projection matrices Vx and Vy, and project the principal components into the subspace to obtain Cx = Vx^T·Qx and Cy = Vy^T·Qy;
(3) For the test low-resolution image It, likewise compute and center its principal component to obtain qt, and project it into the same subspace with the projection matrix Vy: Ct = Vy^T·qt; search the k nearest neighbors of Ct in the subspace using the neighborhood reconstruction method and compute the corresponding weights w = K^(-1)·1 / (1^T·K^(-1)·1), where Kij = (Ct − Cyi)^T(Ct − Cyj) and Cyi denotes the i-th neighbor feature; next, construct the feature of the high-resolution global face in the subspace from the weights, Cg = Σi wi·Cxi, and convert it from the subspace features back to the pixel domain (inverting the projection and PCA transforms and adding back the means) to obtain the high-resolution global face image Ig.
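The neighborhood-reconstruction weights can be sketched as follows. This is an illustrative NumPy sketch: the closed-form solve w = K^(-1)·1 / (1^T·K^(-1)·1) and the small ridge regularization are assumptions based on the standard neighbor-embedding formulation, with Kij = (Ct − Cyi)^T(Ct − Cyj) as defined in the text.

```python
# Illustrative sketch of the sum-to-one neighborhood reconstruction weights.
import numpy as np

def neighbor_weights(ct, neighbors, reg=1e-6):
    """ct: (d,) test feature; neighbors: (k, d) nearest subspace features."""
    D = ct[None, :] - neighbors                 # difference vectors (k, d)
    K = D @ D.T                                 # Gram matrix K_ij
    K = K + reg * np.trace(K) * np.eye(len(K))  # regularize a singular K
    w = np.linalg.solve(K, np.ones(len(K)))
    return w / w.sum()                          # weights sum to one

ct = np.array([1.0, 0.0])
neigh = np.array([[1.0, 0.1], [0.8, -0.1], [1.4, 0.0]])
w = neighbor_weights(ct, neigh)
recon = w @ neigh  # weighted neighbor combination approximating ct
```

Applying the same weights to the corresponding high-resolution neighbor features yields the high-resolution global face feature.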
As a further limitation of the present invention, said residual compensation in step 2 comprises the steps of:
(1) For all low-resolution images in the training set, obtain their high-resolution global face image set Yg by the steps above; compute the residual set of the high-resolution image set Rx = X − Yg, the residual set of the low-resolution image set Ry = Y − Y′g, and the residual of the input low-resolution test image Rt = It − I′g, where Y′g and I′g denote the results of downsampling Yg and Ig respectively;
(2) Divide all residual images into N equal-sized, mutually overlapping residual blocks, where N denotes the number of blocks; at each position i, collect the set of residual blocks of all high-resolution training images and the set of residual blocks of all low-resolution training images located at that position; reduce the dimensionality of the residual blocks with PCA; for each residual block of the test image, form its corresponding training set jointly from the training residual blocks at the same position and at the eight surrounding positions, and search the k′ nearest neighbors among those training residual blocks to construct the weight set for that position.
Combine the computed high-resolution residual blocks to obtain Rh, averaging the overlapped regions; convert Rh back to the pixel domain to obtain the high-resolution residual face Ir.
As a further limitation of the present invention, the finally reconstructed high-resolution face image in step 3 is Ih = Ig + Ir.
Compared with the prior art, the invention has the following beneficial effects: it extracts features with orthogonal partial least squares and maps them into a subspace in which the covariance between the corresponding high-resolution and low-resolution image matrices is maximized; it reconstructs the high-resolution global face of the low-resolution input image in that subspace using the idea of neighborhood reconstruction; it divides the face residual into several overlapping blocks and applies neighborhood reconstruction to the block at each position to construct high-resolution residual compensation; and the final high-resolution face image output by the algorithm is the global face plus the residual compensation.
Drawings
FIG. 1 is a flow chart of an implementation of the present invention.
Fig. 2 is a comparison graph of the results of the global face experiment of the three methods.
FIG. 3 is a comparison graph of the results of super-resolution reconstruction on a CAS-PEAL database by the five methods.
Fig. 4 is a comparison graph of the super-resolution reconstruction results of the five methods on the FERET database.
Detailed Description
As shown in fig. 1, a face image super-resolution reconstruction method based on orthogonal partial least squares includes the following steps:
step 1, extracting features from the high-resolution and low-resolution images in the training set: PCA is used to reduce the dimensionality of the data, the OPLS method is used to extract features, and projection vectors are computed that project the image matrices into a subspace in which the covariance between the high-resolution and low-resolution image matrices is maximized; the input low-resolution face image is projected into the same subspace, and the high-resolution global face corresponding to the input face is constructed in that subspace by neighborhood reconstruction;
the high-resolution global face reconstruction in the step 1 comprises the following steps:
(1) For the high-resolution image set X = [x1, x2, …, xm] and the low-resolution image set Y = [y1, y2, …, ym] in the training set, subtract the means μx and μy and extract features from each set with PCA: Px = Tx^T(X − μx·1^T) and Py = Ty^T(Y − μy·1^T), where Tx and Ty are the PCA transformation matrices of the high-resolution and low-resolution images respectively, and 1 denotes a column vector whose elements are all 1;
(2) Center the principal-component features Px and Py by their means μPx and μPy to obtain Qx and Qy; then perform OPLS feature extraction on Qx and Qy by solving the generalized eigenvalue problems Cxy·Cxy^T·Wx = Λ·Cxx·Wx and Cxy^T·Cxy·Wy = Λ·Cyy·Wy, where Cxx = Qx·Qx^T, Cyy = Qy·Qy^T, and Cxy = Qx·Qy^T; from Wx and Wy select the d eigenvector pairs with the largest eigenvalues to form the projection matrices Vx and Vy, and project the principal components into the subspace to obtain Cx = Vx^T·Qx and Cy = Vy^T·Qy;
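The OPLS projection step can be sketched as follows. This is an illustrative NumPy sketch under stated assumptions: the generalized eigenproblem form Cxy·Cxy^T·W = Λ·Cxx·W (and symmetrically for Y) and the small ridge term `reg` follow the standard orthonormalized-PLS formulation and are not taken verbatim from the source.

```python
# Illustrative OPLS-style projection sketch (assumed standard formulation).
import numpy as np

def opls_projections(Xc, Yc, d, reg=1e-6):
    """Xc: (px, n), Yc: (py, n), centered, columns are samples."""
    n = Xc.shape[1]
    Cxx = Xc @ Xc.T / n + reg * np.eye(Xc.shape[0])
    Cyy = Yc @ Yc.T / n + reg * np.eye(Yc.shape[0])
    Cxy = Xc @ Yc.T / n

    def top_vecs(A, B, d):
        # d leading eigenvectors of the generalized problem A w = lambda B w
        vals, vecs = np.linalg.eig(np.linalg.solve(B, A))
        order = np.argsort(-vals.real)
        return vecs[:, order[:d]].real

    Vx = top_vecs(Cxy @ Cxy.T, Cxx, d)
    Vy = top_vecs(Cxy.T @ Cxy, Cyy, d)
    return Vx, Vy

# Toy data: row 0 of both views carries a shared latent factor.
rng = np.random.default_rng(1)
z = rng.standard_normal(300)
Xc = np.vstack([z, rng.standard_normal((3, 300))])
Yc = np.vstack([z + 0.05 * rng.standard_normal(300),
                rng.standard_normal((2, 300))])
Xc = Xc - Xc.mean(axis=1, keepdims=True)
Yc = Yc - Yc.mean(axis=1, keepdims=True)
Vx, Vy = opls_projections(Xc, Yc, 1)
Cx, Cy = Vx.T @ Xc, Vy.T @ Yc  # subspace features, as in the text
```

In practice the PCA step preceding OPLS keeps Cxx and Cyy well conditioned, which is what the ridge term stands in for here.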
(3) For the test low-resolution image It, likewise compute and center its principal component to obtain qt, and project it into the same subspace with the projection matrix Vy: Ct = Vy^T·qt; search the k nearest neighbors of Ct in the subspace using the neighborhood reconstruction method and compute the corresponding weights w = K^(-1)·1 / (1^T·K^(-1)·1), where Kij = (Ct − Cyi)^T(Ct − Cyj) and Cyi denotes the i-th neighbor feature; next, construct the feature of the high-resolution global face in the subspace from the weights, Cg = Σi wi·Cxi, and convert it from the subspace features back to the pixel domain (inverting the projection and PCA transforms and adding back the means) to obtain the high-resolution global face image Ig.
Step 2, computing the high-resolution and low-resolution face residual image sets, dividing each residual image into several overlapping blocks of equal side length, projecting the blocks into a subspace with OPLS, constructing the high-resolution face residual blocks in the subspace by the neighborhood reconstruction method, and synthesizing the residual blocks into the high-resolution face residual compensation;
the residual compensation in step 2 comprises the following steps:
(1) For all low-resolution images in the training set, obtain their high-resolution global face image set Yg by the steps above; compute the residual set of the high-resolution image set Rx = X − Yg, the residual set of the low-resolution image set Ry = Y − Y′g, and the residual of the input low-resolution test image Rt = It − I′g, where Y′g and I′g denote the results of downsampling Yg and Ig respectively;
(2) Divide all residual images into N equal-sized, mutually overlapping residual blocks, where N denotes the number of blocks; at each position i, collect the set of residual blocks of all high-resolution training images and the set of residual blocks of all low-resolution training images located at that position; reduce the dimensionality of the residual blocks with PCA; for each residual block of the test image, form its corresponding training set jointly from the training residual blocks at the same position and at the eight surrounding positions, and search the k′ nearest neighbors among those training residual blocks to construct the weight set for that position.
Combine the computed high-resolution residual blocks to obtain Rh, averaging the overlapped regions; convert Rh back to the pixel domain to obtain the high-resolution residual face Ir.
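The block splitting and overlap-averaging used by the residual compensation can be sketched as follows (illustrative; the 16-pixel block with an 8-pixel overlap mirrors the parameters reported in experiment 2, and the stride equals block size minus overlap):

```python
# Illustrative sketch: split an image into equal, mutually overlapping
# blocks, then recombine by averaging the overlapped regions.
import numpy as np

def split_blocks(img, size, overlap):
    stride = size - overlap
    h, w = img.shape
    blocks, coords = [], []
    for i in range(0, h - size + 1, stride):
        for j in range(0, w - size + 1, stride):
            blocks.append(img[i:i + size, j:j + size].copy())
            coords.append((i, j))
    return blocks, coords

def merge_blocks(blocks, coords, shape, size):
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    for b, (i, j) in zip(blocks, coords):
        acc[i:i + size, j:j + size] += b
        cnt[i:i + size, j:j + size] += 1.0
    return acc / cnt  # average where blocks overlap

img = np.arange(96 * 96, dtype=float).reshape(96, 96)  # 96 x 96 as in CAS-PEAL
blocks, coords = split_blocks(img, 16, 8)
out = merge_blocks(blocks, coords, img.shape, 16)  # exact round trip
```

In the method itself each low-resolution block is replaced by a reconstructed high-resolution residual block before merging; the round trip here only demonstrates that the split/merge machinery is lossless.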
Step 3, the finally reconstructed high-resolution face image is the high-resolution global face plus the high-resolution face residual compensation: Ih = Ig + Ir.
The invention can be further illustrated by the following experiments:
to test the effectiveness of the present invention, comparative tests were performed using the CAS-PEAL database and the FERET database, respectively. 1040 facial images are selected from the CAS-PEAL database, one image is selected for each person, the high-resolution image is 96 multiplied by 96, a 24 multiplied by 24 low-resolution image set is obtained by down-sampling the high-resolution image set, the zoom factor is four times, 1000 high-resolution images and 1000 low-resolution images serve as training sets, and the rest 40 high-resolution images and 40 low-resolution images serve as test sets. And selecting 1400 face images of 200 persons from the FERET database, wherein 7 face images with different postures and illumination are selected for each person. The first image of each person was used as the test set, and the remaining six were used as the training set, i.e., the training set contained 1400 high-resolution and low-resolution images and the test set contained 200 high-resolution and low-resolution images. The high resolution image size is 80 × 80, and the low resolution image has three sizes: 40 × 40, 27 × 27, and 20 × 20, with scaling factors of 2, 3, and 4, respectively.
Experiment 1 global face reconstruction comparative experiment based on CAS-PEAL database:
In this experiment, the invention retained 98% of the PCA variance contribution; the neighbor number K in the global face reconstruction was 350, and OPLS retained 100 dimensions. The CLLR-SR method retains 98% of the PCA variance contribution with neighbor number K = 300. The neighbor number K in the SR2DCCA method is 400. The experiment is based on the CAS-PEAL data set, with 1000 training images and 40 test images. The resulting global faces are shown in fig. 2, and the following table gives the average peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) scores of the global faces for the three methods; compared with the other two methods, the global face reconstruction of the invention is superior both in subjective visual quality and in the objective PSNR and SSIM indices.
TABLE 1 PSNR and SSIM scores of the global face for the three methods

| | The invention | CLLR-SR | SR2DCCA |
| --- | --- | --- | --- |
| PSNR (dB) | 26.95 | 24.25 | 25.57 |
| SSIM | 82.29% | 75.95% | 79.62% |
Experiment 2 super-resolution reconstruction contrast experiment based on CAS-PEAL database:
In this experiment, the global face parameters are consistent with those of experiment 1; the neighbor number K of the residual compensation part of the invention is 250, with 16 × 16 residual blocks overlapping by 8 pixels. The CLLR-SR method uses neighbor number K = 300, residual block size 16 × 16, and an 8-pixel overlap. The neighbor number of the SR2DCCA method is set to 800. The RAISR method uses angle 24, strength 3, coherence 3, and block size 11 × 11. The experiment is based on the CAS-PEAL data set, with 1000 training images and 40 test images. The super-resolution comparison is shown in fig. 3, and the following table gives the average PSNR and SSIM scores of the super-resolution results of the five methods; the method has clear advantages in both the contour and the detail content of the results, and its objective scores also exceed those of the classical algorithms.
TABLE 2 PSNR and SSIM scores of the super-resolution results of the five methods

| | The invention | CLLR-SR | SR2DCCA | RAISR | Bicubic |
| --- | --- | --- | --- | --- | --- |
| PSNR (dB) | 30.01 | 28.95 | 27.59 | 29.36 | 28.11 |
| SSIM | 89.57% | 88.07% | 81.07% | 89.33% | 87.03% |
Experiment 3 super-resolution reconstruction contrast experiment based on the FERET database:
To test the robustness of the invention, this experiment uses the FERET data set, which contains face images under various poses and illumination conditions. The training set contains 1200 images and the test set contains 200 images. Super-resolution is tested at 2x, 3x, and 4x scaling factors. In this experiment, the invention retained 98% of the PCA variance contribution, OPLS retained 100 dimensions, the neighbor number in the global face reconstruction is 350, the neighbor number in the residual compensation step is 250, and the residual blocks are 12 × 12 with a 10-pixel overlap. In the CLLR-SR method, the PCA variance contribution rate is likewise kept at 98%, the neighbor number of the global face reconstruction part is 300, and that of the residual compensation part is 200. In the SR2DCCA method, 80 dimensions are retained, the neighbor number in the global face reconstruction is 400, and that in the residual compensation is 800. The parameters of the RAISR method are the same as in experiment 2. The 2x, 3x, and 4x super-resolution reconstruction results are shown in fig. 4, and the following table gives the average PSNR and SSIM scores of the five methods at each scaling factor. The invention performs well under multiple poses and different scaling factors.
TABLE 3 PSNR, SSIM index score of the global face and super-resolution result
In summary, the invention performs feature extraction with the Orthogonal Partial Least Squares (OPLS) method, maps the face images into a subspace in which the covariance between corresponding high-resolution and low-resolution image matrices is maximized, and reconstructs the high-resolution global face of a low-resolution input image in that subspace using the idea of neighborhood reconstruction. It divides the face residual into several overlapping blocks and applies neighborhood reconstruction to the block of each region to construct high-resolution residual compensation. Finally, the high-resolution face image output by the algorithm is the global face plus the residual compensation. The super-resolution reconstruction results show that the method performs better in subjective visual quality and in the objective PSNR and SSIM indices.
The present invention is not limited to the above-mentioned embodiments, and based on the technical solutions disclosed in the present invention, those skilled in the art can make some substitutions and modifications to some technical features without creative efforts according to the disclosed technical contents, and these substitutions and modifications are all within the protection scope of the present invention.
Claims (4)
1. A face image super-resolution reconstruction method based on orthogonal partial least squares is characterized by comprising the following steps:
step 1, extracting features from the high-resolution and low-resolution images in the training set: PCA is used to reduce the dimensionality of the data, the OPLS method is used to extract features, and projection vectors are computed that project the image matrices into a subspace in which the covariance between the high-resolution and low-resolution image matrices is maximized; the input low-resolution face image is projected into the same subspace, and the high-resolution global face corresponding to the input face is constructed in that subspace by neighborhood reconstruction;
step 2, computing the high-resolution and low-resolution face residual image sets, dividing each residual image into several overlapping blocks of equal side length, projecting the blocks into a subspace with OPLS, constructing the high-resolution face residual blocks in the subspace by the neighborhood reconstruction method, and synthesizing the residual blocks into the high-resolution face residual compensation;
and step 3, the finally reconstructed high-resolution face image is the high-resolution global face plus the high-resolution face residual compensation.
2. The super-resolution reconstruction method for facial images based on orthogonal partial least squares as claimed in claim 1, wherein the high resolution global face reconstruction in step 1 comprises the following steps:
(1) For the high-resolution image set X = [x1, x2, …, xm] and the low-resolution image set Y = [y1, y2, …, ym] in the training set, subtract the means μx and μy and extract features from each set with PCA: Px = Tx^T(X − μx·1^T) and Py = Ty^T(Y − μy·1^T), where Tx and Ty are the PCA transformation matrices of the high-resolution and low-resolution images respectively, and 1 denotes a column vector whose elements are all 1;
(2) Center the principal-component features Px and Py by their means μPx and μPy to obtain Qx and Qy; then perform OPLS feature extraction on Qx and Qy by solving the generalized eigenvalue problems Cxy·Cxy^T·Wx = Λ·Cxx·Wx and Cxy^T·Cxy·Wy = Λ·Cyy·Wy, where Cxx = Qx·Qx^T, Cyy = Qy·Qy^T, and Cxy = Qx·Qy^T; from Wx and Wy select the d eigenvector pairs with the largest eigenvalues to form the projection matrices Vx and Vy, and project the principal components into the subspace to obtain Cx = Vx^T·Qx and Cy = Vy^T·Qy;
(3) For the test low-resolution image It, likewise compute and center its principal component to obtain qt, and project it into the same subspace with the projection matrix Vy: Ct = Vy^T·qt; search the k nearest neighbors of Ct in the subspace using the neighborhood reconstruction method and compute the corresponding weights w = K^(-1)·1 / (1^T·K^(-1)·1), where Kij = (Ct − Cyi)^T(Ct − Cyj) and Cyi denotes the i-th neighbor feature; next, construct the feature of the high-resolution global face in the subspace from the weights, Cg = Σi wi·Cxi, and convert it from the subspace features back to the pixel domain to obtain the high-resolution global face image Ig.
3. The super-resolution reconstruction method for facial images based on orthogonal partial least squares as claimed in claim 1, wherein the residual compensation in step 2 comprises the following steps:
(1) For all low-resolution images in the training set, obtain their high-resolution global face image set Yg by the above steps; compute the residual set of the high-resolution image set Rx = X − Yg, the residual set of the low-resolution image set Ry = Y − Y′g, and the residual of the input low-resolution test image Rt = It − I′g, where Y′g and I′g denote the results of downsampling Yg and Ig respectively;
(2) Divide all residual images into N equal-sized, mutually overlapping residual blocks, where N denotes the number of blocks; at each position i, collect the set of residual blocks of all high-resolution training images and the set of residual blocks of all low-resolution training images located at that position; reduce the dimensionality of the residual blocks with PCA; for each residual block of the test image, form its corresponding training set jointly from the training residual blocks at the same position and at the eight surrounding positions, and search the k′ nearest neighbors among those training residual blocks to construct the weight set for that position; combine the computed high-resolution residual blocks to obtain Rh, averaging the overlapped regions, and convert Rh back to the pixel domain to obtain the high-resolution residual face Ir.
4. The super-resolution facial image reconstruction method based on orthogonal partial least squares as claimed in claim 3, wherein the finally reconstructed high-resolution face image in step 3 is Ih = Ig + Ir.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010069636.7A CN111292238B (en) | 2020-01-21 | 2020-01-21 | Face image super-resolution reconstruction method based on orthogonal partial least square |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111292238A true CN111292238A (en) | 2020-06-16 |
CN111292238B CN111292238B (en) | 2023-08-08 |
Family
ID=71023472
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010069636.7A Active CN111292238B (en) | 2020-01-21 | 2020-01-21 | Face image super-resolution reconstruction method based on orthogonal partial least square |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111292238B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116402691A (en) * | 2023-06-05 | 2023-07-07 | 四川轻化工大学 | Image super-resolution method and system based on rapid image feature stitching |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101615290A (en) * | 2009-07-29 | 2009-12-30 | 西安交通大学 | A kind of face image super-resolution reconstruction method based on canonical correlation analysis |
CN106097250A (en) * | 2016-06-22 | 2016-11-09 | 江南大学 | A kind of based on the sparse reconstructing method of super-resolution differentiating canonical correlation |
Non-Patent Citations (1)
Title |
---|
YUN-HAO YUAN: "LEARNING SIMULTANEOUS FACE SUPER-RESOLUTION USING MULTISET PARTIAL LEAST SQUARES", IEEE * |
Also Published As
Publication number | Publication date |
---|---|
CN111292238B (en) | 2023-08-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110119780B (en) | Hyper-spectral image super-resolution reconstruction method based on generation countermeasure network | |
CN106952228B (en) | Super-resolution reconstruction method of single image based on image non-local self-similarity | |
CN112750082B (en) | Human face super-resolution method and system based on fusion attention mechanism | |
CN110111256B (en) | Image super-resolution reconstruction method based on residual distillation network | |
Huang et al. | Deep hyperspectral image fusion network with iterative spatio-spectral regularization | |
CN112070670B (en) | Face super-resolution method and system of global-local separation attention mechanism | |
CN111080567A (en) | Remote sensing image fusion method and system based on multi-scale dynamic convolution neural network | |
Liu et al. | Variational autoencoder for reference based image super-resolution | |
CN109272452A (en) | Learn the method for super-resolution network in wavelet field jointly based on bloc framework subband | |
CN108830791B (en) | Image super-resolution method based on self sample and sparse representation | |
CN106097250B (en) | A kind of sparse reconstructing method of super-resolution based on identification canonical correlation | |
CN112686817B (en) | Image completion method based on uncertainty estimation | |
CN106600533B (en) | Single image super resolution ratio reconstruction method | |
Bao et al. | SCTANet: A spatial attention-guided CNN-transformer aggregation network for deep face image super-resolution | |
CN113379597A (en) | Face super-resolution reconstruction method | |
CN114332625A (en) | Remote sensing image colorizing and super-resolution method and system based on neural network | |
CN115578262A (en) | Polarization image super-resolution reconstruction method based on AFAN model | |
CN110084750B (en) | Single image super-resolution method based on multi-layer ridge regression | |
CN109559278B (en) | Super resolution image reconstruction method and system based on multiple features study | |
CN111292237B (en) | Face image super-resolution reconstruction method based on two-dimensional multi-set partial least square | |
CN111292238B (en) | Face image super-resolution reconstruction method based on orthogonal partial least square | |
Thuan et al. | Edge-focus thermal image super-resolution using generative adversarial network | |
CN111611962A (en) | Face image super-resolution identification method based on fractional order multi-set partial least square | |
CN111275624B (en) | Face image super-resolution reconstruction and identification method based on multi-set typical correlation analysis | |
CN114511470B (en) | Attention mechanism-based double-branch panchromatic sharpening method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||