CN109712069B - Face image multilayer reconstruction method based on CCA space - Google Patents
- Publication number: CN109712069B (application CN201811322383.9A)
- Authority: CN (China)
- Prior art keywords: resolution, low, dictionary, face image, image
- Prior art date
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses a face image multilayer reconstruction method based on CCA space. First, the trained low-resolution face images, the trained high-resolution face images and the tested low-resolution face image are divided into large-size blocks to obtain a low-resolution dictionary and a high-resolution dictionary. Second, the two types of dictionaries undergo one CCA mapping, and the two types of once-mapped dictionaries are sparsely updated. The two types of updated dictionaries are then inversely mapped back to image space, and the two types of back-mapped dictionaries undergo CCA mapping again. Next, the dictionaries are sorted by computing the Euclidean distance between each column vector in the two types of re-mapped dictionaries and the column vector of the corresponding image block of the tested low-resolution image, and a one-layer reconstructed high-resolution face image is obtained by a smoothness-based super-resolution reconstruction method. Finally, a smaller block size is selected, the process is repeated with the one-layer reconstructed high-resolution image introduced as a constraint, and a two-layer reconstructed high-resolution face image is obtained. The advantage is that the reconstruction is effective.
Description
Technical Field
The invention relates to a face image reconstruction technology, in particular to a face image multilayer reconstruction method based on a CCA (Canonical Correlation Analysis) space.
Background
Face recognition in video surveillance is an important but difficult and complex problem. For a face image in a surveillance video, edge structure information is unclear and details are blurred owing to insufficient light, an overly large distance between the face and the monitoring device, and other factors. To solve these problems, prior information must be used to improve the resolution of the face image, that is, to reconstruct a high-resolution (HR) face image from the observed low-resolution (LR) face image. This is the super-resolution (SR) reconstruction problem for face images, and the reconstructed high-resolution face image provides more detailed information for face recognition and analysis.
Super-resolution reconstruction of face images has attracted wide attention in the field of computer vision, and related organizations at home and abroad have studied in depth how to better reconstruct face images. For example, J. Jiang, R. Hu, Z. Wang, and Z. Han, "Noise Robust Face Hallucination via Locality-Constrained Representation", IEEE Trans. Multimedia, vol. 16, no. 5, pp. 1268-1281, Aug. 2014, propose a block-based locality-constrained representation model (LCR); reconstruction results show that it can reduce the effect of noise on the super-resolution reconstruction process. On this basis, Jiang J., Ma J., Chen C., et al., "Noise Robust Face Image Super-Resolution Through Smooth Sparse Representation" [J], IEEE Transactions on Cybernetics, 2017, PP(99): 1-12, provide a smoothness-based super-resolution reconstruction method (SSR), which achieves certain smoothing and denoising effects.
The reconstruction techniques proposed by Jiang et al. rest on the assumption that the high-resolution dictionary and the low-resolution dictionary are highly correlated and have similar structural distributions, and face image reconstruction is performed on that basis. On the one hand, although face images share a certain similarity in structure and content, high-resolution and low-resolution dictionaries constructed directly in the face image space cannot satisfy the highly-correlated condition, so the reconstruction effect is not ideal. On the other hand, the above techniques all adopt a single-layer reconstruction mode, that is, reconstruction based on a fixed block size. In block-based reconstruction, the block size of the observed low-resolution face image is very important: if the block size is small, the number of blocks is large and the detail information of the reconstructed high-resolution face image is abundant, but its structural information is not easy to grasp; if the block size is large, the number of blocks is small and the structural information is easy to grasp, but the detail information is not rich. Therefore, a new face image reconstruction method needs to be researched for low-resolution face images whose edge structure information is unclear and whose details are blurred.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a face image multilayer reconstruction method based on CCA space, such that the high-resolution face image obtained with the reconstruction method has clear edge structure information and sharp details, and the reconstruction effect is good.
The technical scheme adopted by the invention to solve the above technical problem is as follows: a face image multilayer reconstruction method based on CCA space, characterized by comprising the following steps:
Step one: select a face image database, wherein the face image database comprises at least two low-resolution face images and the high-resolution face image corresponding to each low-resolution face image; correspondingly record the n-th low-resolution face image and its corresponding high-resolution face image in the face image database as X_n^L and X_n^H, and record the tested low-resolution face image as Y^L; wherein n is a positive integer, 1 ≤ n ≤ N, N represents the total number of low-resolution face images contained in the face image database, N ≥ 2, the widths of X_n^L, X_n^H and Y^L are all W, and the heights of X_n^L, X_n^H and Y^L are all H;
Step two: divide each low-resolution face image in the face image database into S_1 mutually overlapping image blocks of size k_1 × k_1 by adopting a sliding-window technique, recording the s_1-th image block of X_n^L as X_{n,s_1}^L; similarly, divide the high-resolution face image corresponding to each low-resolution face image in the face image database into S_1 mutually overlapping image blocks of size k_1 × k_1, recording the s_1-th image block of X_n^H as X_{n,s_1}^H; and divide Y^L into S_1 mutually overlapping image blocks of size k_1 × k_1 using the sliding-window technique, recording the s_1-th image block of Y^L as Y_{s_1}^L; wherein the size of the sliding window is k_1 × k_1, k_1 = 5, 7, 9 or 11, the sliding step of the sliding window is 1 pixel, S_1 = (W − k_1 + 1) × (H − k_1 + 1), and s_1 is a positive integer with 1 ≤ s_1 ≤ S_1;
Step three: arrange the pixel values of all pixels in each image block of each low-resolution face image in the face image database into a corresponding column vector, recording the column vector corresponding to X_{n,s_1}^L as x_{n,s_1}^L; similarly, record the column vector corresponding to X_{n,s_1}^H as x_{n,s_1}^H, and record the column vector corresponding to Y_{s_1}^L as y_{s_1}^L. Then, for each patch position, gather the column vectors corresponding to the image blocks at that same position in all low-resolution face images of the database into one low-resolution dictionary, forming S_1 low-resolution dictionaries; record the low-resolution dictionary formed from the s_1-th image blocks of all low-resolution face images as D_{s_1}^L = [x_{1,s_1}^L, x_{2,s_1}^L, …, x_{N,s_1}^L]. Similarly, gather the column vectors at the same position in all high-resolution face images into S_1 high-resolution dictionaries, recording the s_1-th as D_{s_1}^H = [x_{1,s_1}^H, x_{2,s_1}^H, …, x_{N,s_1}^H]. The dimensions of x_{n,s_1}^L, x_{n,s_1}^H and y_{s_1}^L are all (k_1 × k_1) × 1; the dimensions of D_{s_1}^L and D_{s_1}^H are both (k_1 × k_1) × N; the n-th column vector of D_{s_1}^L is x_{n,s_1}^L, and the n-th column vector of D_{s_1}^H is x_{n,s_1}^H;
Step four: compute the projection matrices corresponding to each low-resolution dictionary and each high-resolution dictionary by canonical correlation analysis, recording the projection matrix corresponding to D_{s_1}^L as P_{s_1}^L and the projection matrix corresponding to D_{s_1}^H as P_{s_1}^H; wherein the dimensions of P_{s_1}^L and P_{s_1}^H are both L × (k_1 × k_1), L represents the dimension of the CCA space, and L ∈ {1, 2, …, k_1 × k_1};
Step five: map each low-resolution dictionary from image space to CCA space to obtain the corresponding once-mapped low-resolution dictionary, recording the once-mapped low-resolution dictionary corresponding to D_{s_1}^L as C_{s_1}^L = P_{s_1}^L D_{s_1}^L; similarly, map each high-resolution dictionary from image space to CCA space, recording the once-mapped high-resolution dictionary corresponding to D_{s_1}^H as C_{s_1}^H = P_{s_1}^H D_{s_1}^H; wherein the dimensions of C_{s_1}^L and C_{s_1}^H are both L × N;
Step six: compute the sparse coefficient vector of each once-mapped low-resolution dictionary, recording the sparse coefficient vector of C_{s_1}^L as α_{s_1}, obtained by α_{s_1} = argmin_α ||P_{s_1}^L y_{s_1}^L − C_{s_1}^L α||_2^2 + λ_1 ||α||_1. Then perform one sparse update on each once-mapped low-resolution dictionary using its sparse coefficient vector to obtain the updated dictionary of each once-mapped low-resolution dictionary; record the updated dictionary of C_{s_1}^L as U_{s_1}^L: if the n-th element of α_{s_1} is a non-zero element, extract the n-th column vector of C_{s_1}^L, and form U_{s_1}^L from all column vectors so extracted, kept in their original order. Similarly, perform one sparse update on each once-mapped high-resolution dictionary; record the updated dictionary of C_{s_1}^H as U_{s_1}^H: if the n-th element of α_{s_1} is non-zero, extract the n-th column vector of C_{s_1}^H, and form U_{s_1}^H from all extracted column vectors in their original order. Wherein the dimension of α_{s_1} is N × 1, argmin() denotes solving for the residual minimum, "|| ||_2" is the l_2-norm regular-term operator, "|| ||_1" is the l_1-norm regular-term operator, λ_1 is a constant with λ_1 ∈ (0,1), the dimensions of U_{s_1}^L and U_{s_1}^H are both L × M, and M denotes the total number of non-zero elements in α_{s_1}, with 1 ≤ M < N;
step seven: the updated dictionary of each once-mapped low-resolution dictionary is reversely mapped back to the image space from the CCA space to obtain a corresponding reflection low-resolution dictionary, and the dictionary is subjected to image matchingThe corresponding retroreflection low resolution dictionary is &>Similarly, each high resolution word is mapped onceThe dictionary updated by the dictionary is reversely mapped back to the image space from the CCA space to obtain a corresponding reflection high-resolution dictionary, and the dictionary is/are>The corresponding reflection high resolution dictionary is recorded as>Wherein it is present>Andhas a dimension of (k) 1 ×k 1 )×M;
Step eight: compute the projection matrices corresponding to each back-mapped low-resolution dictionary and each back-mapped high-resolution dictionary, recording the projection matrix corresponding to B_{s_1}^L as Q_{s_1}^L and the projection matrix corresponding to B_{s_1}^H as Q_{s_1}^H; wherein the dimensions of Q_{s_1}^L and Q_{s_1}^H are both L × (k_1 × k_1), L represents the dimension of the CCA space, and L ∈ {1, 2, …, k_1 × k_1};
Step nine: map each back-mapped low-resolution dictionary from image space to CCA space to obtain the corresponding re-mapped low-resolution dictionary, recording the re-mapped low-resolution dictionary corresponding to B_{s_1}^L as R_{s_1}^L = Q_{s_1}^L B_{s_1}^L; similarly, map each back-mapped high-resolution dictionary from image space to CCA space, recording the re-mapped high-resolution dictionary corresponding to B_{s_1}^H as R_{s_1}^H = Q_{s_1}^H B_{s_1}^H. Map each image block of Y^L from image space to CCA space to obtain the once-mapped block corresponding to each image block, recording the once-mapped block corresponding to Y_{s_1}^L as Z_{s_1}; arrange the values of the once-mapped block corresponding to each image block into a corresponding column vector, recording the column vector corresponding to Z_{s_1} as z_{s_1} = Q_{s_1}^L y_{s_1}^L. Wherein the dimensions of R_{s_1}^L and R_{s_1}^H are both L × M, and the dimension of z_{s_1} is L × 1;
Step ten: compute the Euclidean distance between each column vector in each re-mapped low-resolution dictionary and the column vector of the once-mapped block at the corresponding position of the tested low-resolution image; that is, for R_{s_1}^L, compute the Euclidean distance between each of its column vectors and z_{s_1}, obtaining M Euclidean distances for R_{s_1}^L. Then sort the M Euclidean distances obtained for each re-mapped low-resolution dictionary from large to small; according to this ordering, adjust the positions of all column vectors in each re-mapped low-resolution dictionary and recombine them into the corresponding recombined low-resolution dictionary, recording the recombined low-resolution dictionary corresponding to R_{s_1}^L as E_{s_1}^L: the 1st column vector of E_{s_1}^L has the largest Euclidean distance to z_{s_1}, and the last column vector of E_{s_1}^L has the smallest Euclidean distance to z_{s_1}; wherein the dimension of E_{s_1}^L is L × M;
Similarly, compute the Euclidean distance between each column vector in each re-mapped high-resolution dictionary and the column vector of the once-mapped block at the corresponding position; that is, for R_{s_1}^H, compute the Euclidean distance between each of its column vectors and z_{s_1}, obtaining M Euclidean distances for R_{s_1}^H. Then sort the M Euclidean distances obtained for each re-mapped high-resolution dictionary from large to small; according to this ordering, adjust the positions of all column vectors in each re-mapped high-resolution dictionary and recombine them into the corresponding recombined high-resolution dictionary, recording the recombined high-resolution dictionary corresponding to R_{s_1}^H as E_{s_1}^H: the 1st column vector of E_{s_1}^H has the largest Euclidean distance to z_{s_1}, and the last column vector of E_{s_1}^H has the smallest Euclidean distance to z_{s_1}; wherein the dimension of E_{s_1}^H is L × M;
Step eleven: compute the first sparse coefficient vector of each image block of Y^L, recording the first sparse coefficient vector of Y_{s_1}^L as c_{s_1}, obtained by c_{s_1} = argmin_c ||z_{s_1} − E_{s_1}^L c||_2^2 + λ_2 ||c||_1 + λ_3 Σ_{m=2}^{M} (c_m − c_{m−1})^2. Then record the high-resolution face image obtained after one-layer reconstruction of Y^L as Y^{H,1}; record the region of Y^{H,1} corresponding in position to Y_{s_1}^L as Y_{s_1}^{H,1}, and record the column vector formed by arranging the pixel values of all pixels of Y_{s_1}^{H,1} as y_{s_1}^{H,1} = E_{s_1}^H c_{s_1}. Wherein the dimension of c_{s_1} is M × 1, m is a positive integer with 1 ≤ m ≤ M, λ_2 and λ_3 are constants with λ_2 ∈ (0,1) and λ_3 ∈ (0,1), c_m denotes the m-th element of c_{s_1}, c_{m−1} denotes the (m−1)-th element of c_{s_1}, and the m-th column vector of E_{s_1}^H weights c_m in the reconstruction;
Step twelve: change the size of the sliding window to k_2 × k_2; then, following the process of step two to step ten in the same manner, obtain S_2 recombined low-resolution dictionaries and S_2 recombined high-resolution dictionaries, recording the s_2-th recombined low-resolution dictionary as E_{s_2}^L and the s_2-th recombined high-resolution dictionary as E_{s_2}^H. Then compute the second sparse coefficient vector of each image block of Y^L, recording the second sparse coefficient vector of the s_2-th image block Y_{s_2}^L as c'_{s_2}, obtained by c'_{s_2} = argmin_c ||z_{s_2} − E_{s_2}^L c||_2^2 + λ_2 ||c||_1 + λ_3 Σ_{m=2}^{M} (c_m − c_{m−1})^2 + λ_4 ||q_{s_2} − E_{s_2}^H c||_2^2. Then record the high-resolution face image obtained after two-layer reconstruction of Y^L as Y^{H,2}; record the region of Y^{H,2} corresponding in position to Y_{s_2}^L as Y_{s_2}^{H,2}, and record the column vector formed by arranging the pixel values of all pixels of Y_{s_2}^{H,2} as y_{s_2}^{H,2} = E_{s_2}^H c'_{s_2}. Wherein the dimensions of E_{s_2}^L and E_{s_2}^H are L × M; k_2 = 3, 5, 7 or 9 with 1 < k_2 < k_1; S_2 represents the total number of mutually overlapping image blocks of size k_2 × k_2 into which each low-resolution face image in the database, its corresponding high-resolution face image, and Y^L are divided by the sliding-window technique, S_2 = (W − k_2 + 1) × (H − k_2 + 1), and 1 ≤ s_2 ≤ S_2; the dimension of c'_{s_2} is M × 1; z_{s_2} is obtained in the same manner as z_{s_1}, i.e. by projecting the column vector of Y_{s_2}^L into CCA space with the corresponding projection matrix; q_{s_2} is obtained in the same manner, i.e. by projecting into CCA space the column vector formed by arranging the pixel values of all pixels in the region of Y^{H,1} corresponding in position to Y_{s_2}^L; c_m and c_{m−1} denote the m-th and (m−1)-th elements of the coefficient vector; and λ_4 is a constant with λ_4 ∈ (0,1).
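The second-layer objective of step twelve adds an inter-layer constraint tying the coefficients to the one-layer reconstructed region at the same position. A minimal sketch under stated assumptions — ISTA is our choice of solver (the patent does not name one), the chain Laplacian encodes the smoothness term, and `lam4 = 0.1` is an illustrative value (the patent only states λ_4 ∈ (0,1)):

```python
import numpy as np

def second_layer_code(D_l, D_h, z, q, lam2=0.3, lam3=0.001, lam4=0.1, n_iter=1000):
    """Sketch of the step-twelve coefficients via ISTA on
    ||z - D_l c||^2 + lam2*||c||_1 + lam3*sum_m (c_m - c_{m-1})^2
                    + lam4*||q - D_h c||^2,
    where z is the CCA-space vector of the tested block and q is the
    CCA-space vector of the one-layer reconstructed region at the same
    position (the inter-layer constraint)."""
    M = D_l.shape[1]
    # chain Laplacian T so that c^T T c = sum_m (c_m - c_{m-1})^2
    T = 2 * np.eye(M) - np.eye(M, k=1) - np.eye(M, k=-1)
    T[0, 0] = T[-1, -1] = 1.0
    # the smooth part is quadratic: c^T A c - 2 b^T c + const
    A = D_l.T @ D_l + lam3 * T + lam4 * (D_h.T @ D_h)
    b = D_l.T @ z + lam4 * (D_h.T @ q)
    t = 0.5 / np.linalg.norm(A, 2)          # step size below 1/Lipschitz
    c = np.zeros(M)
    for _ in range(n_iter):
        g = c - t * 2.0 * (A @ c - b)       # gradient step on the quadratic
        c = np.sign(g) * np.maximum(np.abs(g) - t * lam2, 0.0)  # l1 prox
    return c
```

The reconstructed two-layer patch vector would then be the recombined high-resolution dictionary times the returned coefficients.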
Compared with the prior art, the invention has the advantages that:
1) The method maps both the low-resolution dictionary and the high-resolution dictionary from image space to CCA space, which enhances the correlation between the two dictionaries. Meanwhile, redundant information and noise exist in the low-resolution and high-resolution dictionaries and would harm the noise resistance and reconstruction effect of the method; therefore the updated dictionary of the once-mapped low-resolution dictionary and the updated dictionary of the once-mapped high-resolution dictionary are inversely mapped from CCA space back to image space, and the back-mapped low-resolution and high-resolution dictionaries are then mapped from image space to CCA space again. That is, two CCA mappings are adopted, which improves the noise resistance and reconstruction effect of the method.
2) The method adopts a two-layer reconstruction mode. First, the tested low-resolution face image is divided into larger image blocks for one-layer reconstruction, so as to grasp the structural information of the reconstructed high-resolution face image; then, taking the one-layer reconstructed high-resolution face image as a constraint, the next reconstruction is carried out with smaller blocks to recover the detail information of the high-resolution face image. The high-resolution face image obtained by two-layer reconstruction has clear edge structure information, sharp details, and a good reconstruction effect.
Drawings
FIG. 1 is a general flow diagram of the process of the present invention;
FIG. 2 is a noisy face image;
FIG. 3 is a high-resolution face image obtained by reconstructing the noisy face image shown in FIG. 2 using the existing smoothness-based super-resolution reconstruction method (SSR);
FIG. 4 is a high-resolution face image obtained by reconstructing the noisy face image shown in FIG. 2 by adding one CCA mapping to the existing smoothness-based super-resolution reconstruction method (SSR) and performing single-layer (i.e., one-layer) reconstruction;
FIG. 5 is a high-resolution face image obtained by reconstructing the noisy face image shown in FIG. 2 by adding two CCA mappings to the existing smoothness-based super-resolution reconstruction method (SSR) and performing single-layer (i.e., one-layer) reconstruction;
FIG. 6 is a high-resolution face image reconstructed from the noisy face image shown in FIG. 2 by the method of the present invention;
FIG. 7 is the real high-resolution face image corresponding to the noisy face image shown in FIG. 2.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings and embodiments.
The invention provides a human face image multilayer reconstruction method based on a CCA space, the general flow block diagram of which is shown in figure 1, and the method comprises the following steps:
The method comprises the following steps. Step one: select a face image database, wherein the face image database comprises at least two low-resolution face images and the high-resolution face image corresponding to each low-resolution face image; correspondingly record the n-th low-resolution face image and its corresponding high-resolution face image in the face image database as X_n^L and X_n^H, and record the tested low-resolution face image as Y^L. Wherein n is a positive integer, 1 ≤ n ≤ N, N represents the total number of low-resolution face images contained in the face image database, N ≥ 2; for example, N = 360 may be taken. The widths of X_n^L, X_n^H and Y^L are all W, and the heights of X_n^L, X_n^H and Y^L are all H.
Step two: divide each low-resolution face image in the face image database into S_1 mutually overlapping image blocks of size k_1 × k_1 by adopting a sliding-window technique, recording the s_1-th image block of X_n^L as X_{n,s_1}^L; similarly, divide the high-resolution face image corresponding to each low-resolution face image in the face image database into S_1 mutually overlapping image blocks of size k_1 × k_1, recording the s_1-th image block of X_n^H as X_{n,s_1}^H; and divide Y^L into S_1 mutually overlapping image blocks of size k_1 × k_1 using the sliding-window technique, recording the s_1-th image block of Y^L as Y_{s_1}^L. Wherein the size of the sliding window is k_1 × k_1, k_1 = 5, 7, 9 or 11 (in this embodiment k_1 = 5), the sliding step of the sliding window is 1 pixel, S_1 = (W − k_1 + 1) × (H − k_1 + 1), and s_1 is a positive integer with 1 ≤ s_1 ≤ S_1.
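The sliding-window blocking of step two can be sketched as follows; this is a minimal NumPy sketch, and the function name is illustrative rather than from the patent:

```python
import numpy as np

def extract_patches(img, k, step=1):
    """Extract all mutually overlapping k x k blocks of an image with a
    sliding window of step 1 pixel, as in step two: for a W x H image the
    number of blocks is S = (W - k + 1) * (H - k + 1)."""
    H, W = img.shape
    patches = [img[i:i + k, j:j + k]
               for i in range(0, H - k + 1, step)
               for j in range(0, W - k + 1, step)]
    return np.stack(patches)            # shape (S, k, k)
```

For a 6 × 6 image and k = 5 this yields (6 − 5 + 1) × (6 − 5 + 1) = 4 overlapping blocks.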
Step three: arrange the pixel values of all pixels in each image block of each low-resolution face image in the face image database into a corresponding column vector, recording the column vector corresponding to X_{n,s_1}^L as x_{n,s_1}^L; similarly, record the column vector corresponding to X_{n,s_1}^H as x_{n,s_1}^H, and record the column vector corresponding to Y_{s_1}^L as y_{s_1}^L. Then, for each patch position, gather the column vectors corresponding to the image blocks at that same position in all low-resolution face images of the database into one low-resolution dictionary, forming S_1 low-resolution dictionaries; record the low-resolution dictionary formed from the s_1-th image blocks of all low-resolution face images as D_{s_1}^L = [x_{1,s_1}^L, x_{2,s_1}^L, …, x_{N,s_1}^L]. Similarly, gather the column vectors at the same position in all high-resolution face images into S_1 high-resolution dictionaries, recording the s_1-th as D_{s_1}^H = [x_{1,s_1}^H, x_{2,s_1}^H, …, x_{N,s_1}^H]. The dimensions of x_{n,s_1}^L, x_{n,s_1}^H and y_{s_1}^L are all (k_1 × k_1) × 1; the dimensions of D_{s_1}^L and D_{s_1}^H are both (k_1 × k_1) × N; the n-th column vector of D_{s_1}^L is x_{n,s_1}^L, and the n-th column vector of D_{s_1}^H is x_{n,s_1}^H.
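Building one dictionary per patch position from the N training images, as step three describes, might look like the following; a hedged, self-contained sketch with illustrative names:

```python
import numpy as np

def build_dictionaries(images, k):
    """For every patch position s, stack the vectorised s-th block of each
    of the N training images as one column of a (k*k) x N dictionary.
    Returns an array of shape (S, k*k, N); dicts[s] is the dictionary for
    patch position s, its n-th column being the n-th image's block."""
    N = len(images)
    H, W = images[0].shape
    S = (W - k + 1) * (H - k + 1)
    dicts = np.empty((S, k * k, N))
    for n, img in enumerate(images):
        s = 0
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                dicts[s, :, n] = img[i:i + k, j:j + k].ravel()
                s += 1
    return dicts
```

Applied once to the low-resolution training set and once to the high-resolution set, this produces the S_1 paired dictionaries of step three.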
Step four: compute the projection matrices corresponding to each low-resolution dictionary and each high-resolution dictionary by canonical correlation analysis, recording the projection matrix corresponding to D_{s_1}^L as P_{s_1}^L and the projection matrix corresponding to D_{s_1}^H as P_{s_1}^H. Wherein the dimensions of P_{s_1}^L and P_{s_1}^H are both L × (k_1 × k_1), L represents the dimension of the CCA space, and L ∈ {1, 2, …, k_1 × k_1}. P_{s_1}^L and P_{s_1}^H can be obtained by reference to David R. Hardoon et al., "Canonical Correlation Analysis: An Overview with Application to Learning Methods" [J], Neural Computation, 2004, 2639-2664.
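A classical CCA fit in the style of Hardoon et al. (2004) can be sketched as below. This is an illustrative implementation, not the patent's: the `reg` ridge term is an assumption added for numerical stability, and columns are treated as samples to match the dictionary layout.

```python
import numpy as np

def cca_projections(X, Y, L, reg=1e-6):
    """Classical CCA between two d x n sample matrices (columns = samples).
    Returns Wx (L x d1) and Wy (L x d2) so that Wx @ X and Wy @ Y live in
    the L-dimensional CCA space."""
    Xc = X - X.mean(axis=1, keepdims=True)
    Yc = Y - Y.mean(axis=1, keepdims=True)
    n = X.shape[1]
    Cxx = Xc @ Xc.T / n + reg * np.eye(X.shape[0])
    Cyy = Yc @ Yc.T / n + reg * np.eye(Y.shape[0])
    Cxy = Xc @ Yc.T / n

    def inv_sqrt(C):
        # symmetric inverse square root via the eigendecomposition
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(np.maximum(w, 1e-12))) @ V.T

    # singular vectors of the whitened cross-covariance give the directions
    K = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    U, _, Vt = np.linalg.svd(K)
    Wx = (inv_sqrt(Cxx) @ U[:, :L]).T
    Wy = (inv_sqrt(Cyy) @ Vt[:L].T).T
    return Wx, Wy
```

With strongly correlated inputs, the first pair of canonical variates produced by these projections is almost perfectly correlated, which is the property the patent relies on when pairing the low- and high-resolution dictionaries.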
Step five: map each low-resolution dictionary from image space to CCA space to obtain the corresponding once-mapped low-resolution dictionary, recording the once-mapped low-resolution dictionary corresponding to D_{s_1}^L as C_{s_1}^L = P_{s_1}^L D_{s_1}^L; similarly, map each high-resolution dictionary from image space to CCA space, recording the once-mapped high-resolution dictionary corresponding to D_{s_1}^H as C_{s_1}^H = P_{s_1}^H D_{s_1}^H. The dimensions of C_{s_1}^L and C_{s_1}^H are both L × N.
Step six: compute the sparse coefficient vector of each once-mapped low-resolution dictionary, recording the sparse coefficient vector of C_{s_1}^L as α_{s_1}, obtained by α_{s_1} = argmin_α ||P_{s_1}^L y_{s_1}^L − C_{s_1}^L α||_2^2 + λ_1 ||α||_1. Then perform one sparse update on each once-mapped low-resolution dictionary using its sparse coefficient vector to obtain the updated dictionary of each once-mapped low-resolution dictionary; record the updated dictionary of C_{s_1}^L as U_{s_1}^L: if the n-th element of α_{s_1} is a non-zero element, extract the n-th column vector of C_{s_1}^L, and form U_{s_1}^L from all column vectors so extracted, kept in their original order. Similarly, perform one sparse update on each once-mapped high-resolution dictionary; record the updated dictionary of C_{s_1}^H as U_{s_1}^H: if the n-th element of α_{s_1} is non-zero, extract the n-th column vector of C_{s_1}^H, and form U_{s_1}^H from all extracted column vectors in their original order. Wherein the dimension of α_{s_1} is N × 1, argmin() denotes solving for the residual minimum, "|| ||_2" is the l_2-norm regular-term operator, "|| ||_1" is the l_1-norm regular-term operator, λ_1 is a constant with λ_1 ∈ (0,1), generally given as λ_1 = 0.1, 0.3 or 0.5 (in this embodiment λ_1 = 0.3), the dimensions of U_{s_1}^L and U_{s_1}^H are both L × M, and M denotes the total number of non-zero elements in α_{s_1}, with 1 ≤ M < N.
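The sparse update of step six (an l_2 + l_1 objective followed by keeping only the columns with non-zero coefficients) can be sketched with ISTA; the solver choice is ours, as the patent only specifies the objective:

```python
import numpy as np

def sparse_select(D, y, lam=0.3, n_iter=500):
    """Solve alpha = argmin ||y - D @ alpha||_2^2 + lam * ||alpha||_1 by
    iterative soft-thresholding, then keep only the dictionary columns
    whose coefficient is non-zero (the 'sparse update' of step six).
    Returns (D_kept, indices_of_kept_columns)."""
    t = 0.5 / (np.linalg.norm(D, 2) ** 2)   # step below 1/Lipschitz of the gradient
    alpha = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = alpha - t * 2.0 * (D.T @ (D @ alpha - y))          # gradient step
        alpha = np.sign(g) * np.maximum(np.abs(g) - t * lam, 0.0)  # soft-threshold
    keep = np.flatnonzero(np.abs(alpha) > 1e-8)
    return D[:, keep], keep
```

The same index set `keep` computed from the low-resolution dictionary is applied to the paired high-resolution dictionary, so both updated dictionaries keep M columns in their original order.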
Step seven: inversely map the updated dictionary of each once-mapped low-resolution dictionary from CCA space back to image space to obtain the corresponding back-mapped low-resolution dictionary, recording the back-mapped low-resolution dictionary corresponding to U_{s_1}^L as B_{s_1}^L; similarly, inversely map the updated dictionary of each once-mapped high-resolution dictionary from CCA space back to image space, recording the back-mapped high-resolution dictionary corresponding to U_{s_1}^H as B_{s_1}^H. The dimensions of B_{s_1}^L and B_{s_1}^H are both (k_1 × k_1) × M.
Step eight: compute the projection matrices corresponding to each back-mapped low-resolution dictionary and each back-mapped high-resolution dictionary, recording the projection matrix corresponding to B_{s_1}^L as Q_{s_1}^L and the projection matrix corresponding to B_{s_1}^H as Q_{s_1}^H. Wherein the dimensions of Q_{s_1}^L and Q_{s_1}^H are both L × (k_1 × k_1), L represents the dimension of the CCA space, and L ∈ {1, 2, …, k_1 × k_1}. Q_{s_1}^L and Q_{s_1}^H can be obtained by reference to David R. Hardoon et al., "Canonical Correlation Analysis: An Overview with Application to Learning Methods" [J], Neural Computation, 2004, 2639-2664.
Step nine: map each back-mapped low-resolution dictionary from image space to CCA space to obtain the corresponding re-mapped low-resolution dictionary, recording the re-mapped low-resolution dictionary corresponding to B_{s_1}^L as R_{s_1}^L = Q_{s_1}^L B_{s_1}^L; similarly, map each back-mapped high-resolution dictionary from image space to CCA space, recording the re-mapped high-resolution dictionary corresponding to B_{s_1}^H as R_{s_1}^H = Q_{s_1}^H B_{s_1}^H. Map each image block of Y^L from image space to CCA space to obtain the once-mapped block corresponding to each image block, recording the once-mapped block corresponding to Y_{s_1}^L as Z_{s_1}; arrange the values of the once-mapped block corresponding to each image block into a corresponding column vector, recording the column vector corresponding to Z_{s_1} as z_{s_1} = Q_{s_1}^L y_{s_1}^L. The dimensions of R_{s_1}^L and R_{s_1}^H are both L × M, and the dimension of z_{s_1} is L × 1.
Step ten: compute the Euclidean distance between each column vector in each re-mapped low-resolution dictionary and the column vector of the once-mapped block at the corresponding position of the tested low-resolution image; that is, for R_{s_1}^L, compute the Euclidean distance between each of its column vectors and z_{s_1}, obtaining M Euclidean distances for R_{s_1}^L. Then sort the M Euclidean distances obtained for each re-mapped low-resolution dictionary from large to small; according to this ordering, adjust the positions of all column vectors in each re-mapped low-resolution dictionary and recombine them into the corresponding recombined low-resolution dictionary, recording the recombined low-resolution dictionary corresponding to R_{s_1}^L as E_{s_1}^L: the 1st column vector of E_{s_1}^L has the largest Euclidean distance to z_{s_1}, and the last column vector of E_{s_1}^L has the smallest Euclidean distance to z_{s_1}. The dimension of E_{s_1}^L is L × M.
Similarly, compute the Euclidean distance between each column vector of each re-mapped high-resolution dictionary and $\hat{y}^{s_1}$, obtaining M Euclidean distances for $\hat{D}_h^{s_1}$; sort them from large to small, adjust the positions of all column vectors of $\hat{D}_h^{s_1}$ according to this ordering, and recombine them into the corresponding reorganized high-resolution dictionary $R_h^{s_1}$: the 1st column vector of $R_h^{s_1}$ has the largest Euclidean distance to $\hat{y}^{s_1}$, and the last column vector has the smallest. Here $R_h^{s_1}$ has dimension $L\times M$.
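The distance-based column reordering of step ten can be sketched as follows; this NumPy sketch (the function name and the toy data are illustrative, not part of the patent) reorders dictionary columns by descending Euclidean distance to the mapped test block:

```python
import numpy as np

def reorganize_by_distance(D, y):
    """Reorder the columns of D (L x M) by descending Euclidean distance
    to the mapped test-block vector y (length L): the 1st column ends up
    farthest from y and the last column closest, as step ten requires."""
    d = np.linalg.norm(D - y[:, None], axis=0)  # M distances, one per column
    order = np.argsort(-d)                      # indices sorted large -> small
    return D[:, order], order

D = np.array([[0., 3., 1.],
              [0., 4., 0.]])  # three atoms in a 2-D CCA space
y = np.zeros(2)
R, order = reorganize_by_distance(D, y)
print(order)  # column 1 (farthest from y) first, column 0 (closest) last
```

The same routine is applied to the re-mapped high-resolution dictionary with its own distances, so that the smoothness term of the next step acts on atoms ordered by similarity to the test block.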
Step eleven: compute the first sparse coefficient vector of each image block of $I_t$; the first sparse coefficient vector of the $s_1$-th image block is denoted $w^{s_1}$ and is calculated by

$w^{s_1}=\arg\min_w\left\{\|\hat{y}^{s_1}-R_l^{s_1}w\|_2^2+\lambda_2\sum_{m=2}^{M}(w_m-w_{m-1})^2+\lambda_3\|w\|_1\right\}$,

where $w_m$ denotes the m-th element of $w$ and $w_{m-1}$ the (m-1)-th element, so that the $\lambda_2$ term penalizes differences between coefficients of adjacent (distance-sorted) atoms. The high-resolution face image obtained after the one-layer reconstruction is denoted $I_1$: the reconstructed $s_1$-th block in the CCA space is $R_h^{s_1}w^{s_1}$, i.e. the sum of the m-th column vectors of $R_h^{s_1}$ weighted by $w^{s_1}$; back-projecting each reconstructed block into the image space and averaging overlapping pixels gives the column vector formed by arranging the pixel values of the region of $I_1$ corresponding to the position of the $s_1$-th block. Here $w^{s_1}$ has dimension M×1, m is a positive integer with 1 ≤ m ≤ M, and $\lambda_2$ and $\lambda_3$ are constants with $\lambda_2\in(0,1)$, generally $\lambda_2$ = 0.1, 0.3 or 0.5 ($\lambda_2$ = 0.3 in this example), and $\lambda_3\in(0,1)$ ($\lambda_3$ = 0.001 in this example).
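A minimal sketch of the step-eleven objective, solved by proximal gradient descent (ISTA); the solver choice, the difference-matrix construction and all names are our own assumptions — the patent specifies only the objective, not the optimizer:

```python
import numpy as np

def smooth_sparse_code(D, y, lam2=0.3, lam3=0.001, n_iter=1000):
    """Minimize ||y - D w||_2^2 + lam2 * sum_m (w_m - w_{m-1})^2 + lam3 * ||w||_1.
    The lam2 term couples coefficients of neighbouring (distance-sorted)
    dictionary atoms. D: L x M, y: length-L mapped test block."""
    L, M = D.shape
    T = -np.eye(M)[:-1] + np.eye(M, k=1)[:-1]          # (M-1) x M first differences
    lip = 2 * np.linalg.norm(D, 2) ** 2 + 8 * lam2     # gradient Lipschitz bound
    step = 1.0 / lip
    w = np.zeros(M)
    for _ in range(n_iter):
        grad = 2 * D.T @ (D @ w - y) + 2 * lam2 * T.T @ (T @ w)
        w = w - step * grad
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam3, 0.0)  # l1 prox
    return w

rng = np.random.default_rng(0)
D = rng.standard_normal((10, 6))
w_true = np.array([0.2, 0.3, 0.4, 0.4, 0.3, 0.2])      # a smooth coefficient profile
y = D @ w_true
w = smooth_sparse_code(D, y)
print(np.round(np.linalg.norm(D @ w - y), 3))
```

The reconstructed high-resolution block is then obtained from the high-resolution dictionary and `w`, and overlapping blocks are averaged to assemble the one-layer image.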
Step twelve: change the size of the sliding window to $k_2\times k_2$; then, following the process of step two through step ten in the same manner, obtain $S_2$ reorganized low-resolution dictionaries and $S_2$ reorganized high-resolution dictionaries, the $s_2$-th of which are denoted $R_l^{s_2}$ and $R_h^{s_2}$. Compute the second sparse coefficient vector of each image block of $I_t$, denoting that of the $s_2$-th image block as $w^{s_2}$; it is calculated by solving the same smooth sparse objective as in step eleven, with the smoothness weight $\lambda_4$ in place of $\lambda_2$ and with an additional constraint term that ties the high-resolution estimate $R_h^{s_2}w$ to $Q_h^{s_2}c_1^{s_2}$, where $Q_h^{s_2}$ is the projection matrix obtained in the same manner as in step eight for the second block size, and $c_1^{s_2}$ is the column vector formed by arranging the pixel values of the region of $I_1$ corresponding to the position of the $s_2$-th block. The high-resolution face image obtained after the two-layer reconstruction is denoted $I_2$: each reconstructed block $R_h^{s_2}w^{s_2}$ is back-projected into the image space and overlapping pixels are averaged to form $I_2$. Here $R_l^{s_2}$ and $R_h^{s_2}$ have dimension $L\times M$; $k_2$ = 3, 5, 7 or 9 with $1<k_2<k_1$ (if $k_1$ = 5 then $k_2$ = 3; if $k_1$ = 7 then $k_2$ = 3 or 5; if $k_1$ = 9 then $k_2$ = 3, 5 or 7; if $k_1$ = 11 then $k_2$ = 3, 5, 7 or 9); $S_2$ denotes the total number of mutually overlapping $k_2\times k_2$ image blocks into which each low-resolution face image in the face image database, each corresponding high-resolution face image and $I_t$ are divided by the sliding-window technique, $S_2=(W-k_2+1)\times(H-k_2+1)$, $1\le s_2\le S_2$; $w^{s_2}$ has dimension M×1; and $\lambda_4$ is a constant with $\lambda_4\in(0,1)$, generally $\lambda_4$ = 0.1, 0.3 or 0.5 ($\lambda_4$ = 0.3 in this example).
To further illustrate the feasibility and effectiveness of the method of the present invention, experiments were conducted with it.
Here, the method of the present invention was tested on the FEI face data set. The FEI face data set contains two high-resolution face images for each of 200 different persons (100 men and 100 women): one with a neutral expression and one with a smiling expression. Each high-resolution face image in the FEI face data set was down-sampled to obtain a corresponding low-resolution face image of size 30 × 25, and Gaussian noise with standard deviation σ = 10 or σ = 30 was added to all the low-resolution face images. In the experiment, the 360 high-resolution face images of 180 randomly selected persons, together with the low-resolution face image corresponding to each, form the training set, and the 40 low-resolution face images of the remaining 20 persons serve as the tested low-resolution face images.
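The preparation of the tested images can be sketched as follows; the patent does not state the exact downsampling kernel, so the block-averaging below is one assumed choice, and all names are our own:

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade(hr, factor, sigma):
    """Block-average downsample a high-resolution image by `factor`,
    then add Gaussian noise of standard deviation `sigma`."""
    H, W = hr.shape
    lr = hr[:H - H % factor, :W - W % factor]
    lr = lr.reshape(H // factor, factor, W // factor, factor).mean(axis=(1, 3))
    return lr + rng.normal(0.0, sigma, lr.shape)

hr = rng.uniform(0, 255, (120, 100))  # stand-in for a high-resolution face
lr10 = degrade(hr, 4, sigma=10)       # noisy 30 x 25 test image, sigma = 10
lr30 = degrade(hr, 4, sigma=30)       # noisy 30 x 25 test image, sigma = 30
print(lr10.shape)  # (30, 25)
```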
To verify its effectiveness, the method of the present invention is compared with other existing face super-resolution methods: the smooth super-resolution reconstruction method (SSR) proposed in Jiang J, Ma J, Chen C, et al. Noise Robust Face Image Super-Resolution Through Smooth Sparse Representation [J]. IEEE Transactions on Cybernetics, 2017, PP(99): 1-12; SSR with one added CCA mapping and single-layer reconstruction (only one image-block size is considered in the reconstruction process); and SSR with two added CCA mappings and single-layer reconstruction (again only one image-block size is considered). The method of the present invention adds two CCA mappings and two-layer reconstruction on the basis of SSR: two image-block sizes are considered in the reconstruction process, the large-size image blocks are reconstructed first, and the result of that reconstruction then constrains the second reconstruction with the small-size image blocks. The image-block size of the first division of the low-resolution and high-resolution face images in the training set and of the tested low-resolution face images is 5 × 5, and that of the second division is 3 × 3.
The tested low-resolution face images were reconstructed with the smoothing-based super-resolution reconstruction method (SSR), with SSR plus one CCA mapping and single-layer reconstruction (abbreviated "CCA single layer"), with SSR plus two CCA mappings and single-layer reconstruction (abbreviated "2CCA single layer"), and with the method of the present invention. Table 1 gives the average PSNR and SSIM of the high-resolution face images obtained by reconstructing the 40 tested low-resolution face images with each method under the different noise environments (σ = 10 and σ = 30). As the data in Table 1 show, under severe noise the method of the present invention improves PSNR by 0.75 and SSIM by 0.0675 over the SSR method; in both the PSNR and the SSIM index it is also superior to the single-layer reconstruction methods, i.e. the CCA single layer and 2CCA single layer methods.
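The PSNR figures in Table 1 follow the standard definition; a NumPy sketch (the helper name is ours — SSIM, the other metric in the table, is usually taken from `skimage.metrics.structural_similarity` rather than reimplemented):

```python
import numpy as np

def psnr(ref, rec, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image
    and its reconstruction."""
    mse = np.mean((ref.astype(float) - rec.astype(float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

ref = np.full((30, 25), 100.0)
rec = ref + 5.0                   # constant error of 5 gray levels, MSE = 25
print(round(psnr(ref, rec), 2))   # 10*log10(255^2 / 25) -> 34.15
```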
Table 1 gives the average PSNR and SSIM of the high-resolution face images obtained by reconstructing the 40 tested low-resolution face images with each of the above methods under different noise environments (σ = 10 and σ = 30)
FIG. 2 shows a noisy face image; FIG. 3 shows the high-resolution face image reconstructed from the noisy face image of FIG. 2 by the existing smoothing-based super-resolution reconstruction method (SSR); FIG. 4 shows the high-resolution face image reconstructed by adding one CCA mapping to SSR with single-layer (i.e. one-layer) reconstruction; FIG. 5 shows the high-resolution face image reconstructed by adding two CCA mappings to SSR with single-layer reconstruction; FIG. 6 shows the high-resolution face image reconstructed by the method of the present invention; and FIG. 7 shows the real high-resolution face image corresponding to the noisy face image of FIG. 2. Comparing FIG. 3, FIG. 4, FIG. 5 and FIG. 6 with FIG. 7, it is apparent that the high-resolution face image of FIG. 6 has clear edge structure information and clear details, the reconstruction effect is good, and it is closest to the real high-resolution face image of FIG. 7.
Claims (1)
1. A face image multilayer reconstruction method based on CCA space, characterized by comprising the following steps:
step one: select a face image database containing at least two low-resolution face images and the high-resolution face image corresponding to each low-resolution face image; denote the n-th low-resolution face image in the face image database and its corresponding high-resolution face image as $I_l^n$ and $I_h^n$ respectively, and denote the tested low-resolution face image as $I_t$; where n is a positive integer, 1 ≤ n ≤ N, N denotes the total number of low-resolution face images contained in the face image database, N ≥ 2, the widths of $I_l^n$, $I_h^n$ and $I_t$ are all W, and their heights are all H;
step two: dividing each low-resolution face image in face image database into S by adopting sliding window technology 1 Each overlapping with a dimension of k 1 ×k 1 Image blocks ofS of (1) 1 Each image block is recorded as>Similarly, a sliding window technology is adopted to divide the high-resolution face image corresponding to each low-resolution face image in the face image database into S 1 Each overlapping with a dimension of k 1 ×k 1 Will->S of (1) 1 Each image block is recorded as>Will be picked up using a sliding window technique>Is divided into 1 Each overlapping with a dimension of k 1 ×k 1 Will->S of (1) 1 Each image block is recorded as>Wherein the size of the sliding window is k 1 ×k 1 ,k 1 =5,7,9,11, sliding step of sliding window is 1 pixel point, S 1 =(W-k 1 +1)×(H-k 1 +1),s 1 Is a positive integer, s is not less than 1 1 ≤S 1 ;
step three: arrange the pixel values of all pixel points of each image block of each low-resolution face image in the face image database into a corresponding column vector, denoting the one for $B_l^{n,s_1}$ as $x_l^{n,s_1}$; similarly, denote the column vector for $B_h^{n,s_1}$ as $x_h^{n,s_1}$, and the column vector for $B_t^{s_1}$ as $y^{s_1}$; then form a low-resolution dictionary from the column vectors corresponding to the image blocks at the same position in all low-resolution face images in the face image database, yielding $S_1$ low-resolution dictionaries, the one formed from the $s_1$-th image blocks being $D_l^{s_1}=[x_l^{1,s_1},\dots,x_l^{N,s_1}]$; similarly form $S_1$ high-resolution dictionaries from all high-resolution face images, the $s_1$-th being $D_h^{s_1}=[x_h^{1,s_1},\dots,x_h^{N,s_1}]$; where $x_l^{n,s_1}$, $x_h^{n,s_1}$ and $y^{s_1}$ all have dimension $(k_1\times k_1)\times 1$, and $D_l^{s_1}$ and $D_h^{s_1}$ have dimension $(k_1\times k_1)\times N$;
step four: compute the projection matrix corresponding to each low-resolution dictionary and each high-resolution dictionary by canonical correlation analysis, denoting the projection matrix for $D_l^{s_1}$ as $P_l^{s_1}$ and that for $D_h^{s_1}$ as $P_h^{s_1}$; where $P_l^{s_1}$ and $P_h^{s_1}$ have dimension $L\times(k_1\times k_1)$, L denotes the dimension of the CCA space, and $L\in\{1,2,\dots,k_1\times k_1\}$;
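Step four leaves the CCA computation itself unspecified beyond the projection dimensions; a standard SVD-based derivation of the paired projection matrices can be sketched as follows (the regularization `eps` and all names are our additions):

```python
import numpy as np

def cca_projections(X, Y, L, eps=1e-6):
    """Return projection matrices Pl (L x d1) and Ph (L x d2) mapping the
    columns of X (d1 x N) and Y (d2 x N) into a shared L-dim CCA space."""
    X = X - X.mean(axis=1, keepdims=True)
    Y = Y - Y.mean(axis=1, keepdims=True)
    N = X.shape[1]
    Cxx = X @ X.T / N + eps * np.eye(X.shape[0])   # regularized covariances
    Cyy = Y @ Y.T / N + eps * np.eye(Y.shape[0])
    Cxy = X @ Y.T / N

    def inv_sqrt(C):
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    Kx, Ky = inv_sqrt(Cxx), inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(Kx @ Cxy @ Ky)        # whitened cross-covariance
    Pl = (Kx @ U[:, :L]).T                         # top-L canonical directions
    Ph = (Ky @ Vt[:L, :].T).T
    return Pl, Ph

X = np.random.default_rng(1).standard_normal((25, 200))   # e.g. 5x5 low-res patches
Y = np.random.default_rng(2).standard_normal((9, 25)) @ X # correlated 3x3 view
Pl, Ph = cca_projections(X, Y, L=5)
print(Pl.shape, Ph.shape)  # (5, 25) (5, 9)
```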
step five: map each low-resolution dictionary from the image space into the CCA space to obtain the corresponding once-mapped low-resolution dictionary $\tilde{D}_l^{s_1}=P_l^{s_1}D_l^{s_1}$; similarly, map each high-resolution dictionary from the image space into the CCA space to obtain the corresponding once-mapped high-resolution dictionary $\tilde{D}_h^{s_1}=P_h^{s_1}D_h^{s_1}$; where $\tilde{D}_l^{s_1}$ and $\tilde{D}_h^{s_1}$ both have dimension $L\times N$;
step six: calculate each primary mapSparse coefficient vectors of the low resolution dictionaryIs marked as->By counting->Obtaining; then carrying out once sparse updating on the once mapped low-resolution dictionary by using the sparse coefficient vector of each once mapped low-resolution dictionary to obtain the updated dictionary of each once mapped low-resolution dictionary, and then judging whether the updated dictionary is the same as the updated dictionary or not>The updated dictionary is recorded asIf/or>Will be a non-zero element, will->The nth column vector of (a) is extracted and the slave is then taken>All column vectors extracted in (are) formed in the original order->Similarly, sparsely updating each once-mapped high-resolution dictionary to obtain a dictionary updated by each once-mapped high-resolution dictionary, and combining>The updated dictionary is recorded as +>If/or>Will be a non-zero element, will->The nth column vector of (a) is extracted and the slave is then taken>All column vectors extracted in (a) are formed in original order->Wherein it is present>Dimension of (a) is Nx 1, argmin () represents solving the residual minimum value, the symbol "| | | | non-conducting phosphor 2 Is "is 2 Norm regular term operation symbol, symbol "| | | | non-woven phosphor 1 Is "is 1 Norm regular term operator sign, λ 1 Is a constant, λ 1 ∈(0,1),/>And &>Has dimension L × M, M denotes->The total number of the non-zero elements in the alloy is more than or equal to 1, and M is less than N;
step seven: the updated dictionary of each once-mapped low-resolution dictionary is reversely mapped from the CCA space back to the image spaceTo obtain a corresponding reflection low resolution dictionaryThe corresponding retroreflection low resolution dictionary is &>Similarly, the updated dictionary of each once-mapping high-resolution dictionary is back-mapped to the image space from the CCA space to obtain a corresponding reflection high-resolution dictionary, and the dictionary is/are>The corresponding reflection high resolution dictionary is recorded as>Wherein it is present>And &>Has a dimension of (k) 1 ×k 1 )×M;
step eight: compute the projection matrix corresponding to each back-mapped low-resolution dictionary and each back-mapped high-resolution dictionary, denoting the one for $\bar{D}_l^{s_1}$ as $Q_l^{s_1}$ and that for $\bar{D}_h^{s_1}$ as $Q_h^{s_1}$; where $Q_l^{s_1}$ and $Q_h^{s_1}$ have dimension $L\times(k_1\times k_1)$, L denotes the dimension of the CCA space, and $L\in\{1,2,\dots,k_1\times k_1\}$;
step nine: map each back-mapped low-resolution dictionary from the image space into the CCA space to obtain the corresponding re-mapped low-resolution dictionary $\hat{D}_l^{s_1}=Q_l^{s_1}\bar{D}_l^{s_1}$; similarly, map each back-mapped high-resolution dictionary from the image space into the CCA space to obtain the corresponding re-mapped high-resolution dictionary $\hat{D}_h^{s_1}=Q_h^{s_1}\bar{D}_h^{s_1}$; map each image block of $I_t$ from the image space into the CCA space to obtain its corresponding once-mapped block, and arrange the values of the once-mapped block corresponding to $B_t^{s_1}$ into the column vector $\hat{y}^{s_1}=Q_l^{s_1}y^{s_1}$; where $\hat{D}_l^{s_1}$ and $\hat{D}_h^{s_1}$ have dimension $L\times M$, and $\hat{y}^{s_1}$ has dimension $L\times 1$;
step ten: computing the sum of vectors per column in each remapped low resolution dictionaryThe euclidean distance of the column vector corresponding to the primary mapping block corresponding to each image block in (b), for->Calculate->And/or for each column vector in>Is based on the Euclidean distance of->Obtaining M Euclidean distances; then, aiming at each M Euclidean distances obtained by re-mapping the low-resolution dictionary, sequencing the M Euclidean distances from large to small; then according to the magnitude sequence of M Euclidean distances obtained by aiming at each remapped low-resolution dictionary, carrying out position adjustment on all column vectors in each remapped low-resolution dictionary, recombining to obtain a corresponding recombined low-resolution dictionary, and then combining>The corresponding recombined low resolution dictionary is marked>The 1 st column vector and +>Has a maximum Euclidean distance, and>and/or the last column vector in (b)>Has the smallest euclidean distance; wherein +>Dimension of (d) is L × M;
similarly, calculate the sum of each column vector in each remapped high resolution dictionaryIs used for ^ ing the Euclidean distance of the column vector corresponding to the primary mapping block corresponding to each image block in the>Calculate->And/or for each column vector in>Is based on the Euclidean distance of->Obtaining M Euclidean distances; then, aiming at each M Euclidean distances obtained by re-mapping the high-resolution dictionary, sequencing the M Euclidean distances from large to small; then according to the sequence of the M Euclidean distances obtained by aiming at each high-resolution dictionary re-mapped, carrying out position adjustment on all column vectors in each high-resolution dictionary re-mapped, recombining to obtain a corresponding recombined high-resolution dictionary, and then combining>Corresponding reorganized high resolution dictionaryThe 1 st column vector and +>Has the largest Euclidean distance and is greater than or equal to>And the last column vector and->Has the smallest euclidean distance; wherein it is present>Dimension of (d) is L × M;
step eleven: computingWill @, for each image block of the first sparse coefficient vector>Is marked as £ the first sparse coefficient vector of>By passingCalculating to obtain; will then->The high-resolution face image obtained after reconstruction in one layer is recorded as ^ er>Will->In and->The area corresponding to the position is marked as->Will->The column vector formed by arranging the pixel values of all the pixel points is recorded as ^ er> Wherein it is present>The dimension of M is multiplied by 1,m is a positive integer, M is more than or equal to 1 and less than or equal to M, and lambda is 2 And λ 3 Are all constant, λ 2 ∈(0,1),λ 3 ∈(0,1),/>Represents->The mth element of (4), is selected>Represents->The mth column vector of (4), based on the number of cells in the column->Represents->M-1 element of (1); />
step twelve: change the size of the sliding window to $k_2\times k_2$; then, following the process of step two through step ten in the same manner, obtain $S_2$ reorganized low-resolution dictionaries and $S_2$ reorganized high-resolution dictionaries, the $s_2$-th of which are denoted $R_l^{s_2}$ and $R_h^{s_2}$; compute the second sparse coefficient vector of each image block of $I_t$, denoting that of the $s_2$-th image block as $w^{s_2}$, by solving the same smooth sparse objective as in step eleven, with the smoothness weight $\lambda_4$ in place of $\lambda_2$ and with an additional constraint term tying the high-resolution estimate $R_h^{s_2}w$ to $Q_h^{s_2}c_1^{s_2}$, where $Q_h^{s_2}$ is the projection matrix obtained in the same manner as in step eight for the second block size and $c_1^{s_2}$ is the column vector formed by arranging the pixel values of the region of $I_1$ corresponding to the position of the $s_2$-th block; the high-resolution face image obtained after the two-layer reconstruction is denoted $I_2$: each reconstructed block $R_h^{s_2}w^{s_2}$ is back-projected into the image space and overlapping pixels are averaged to form $I_2$; where $R_l^{s_2}$ and $R_h^{s_2}$ have dimension $L\times M$; $k_2$ = 3, 5, 7 or 9 with $1<k_2<k_1$ (if $k_1$ = 5 then $k_2$ = 3; if $k_1$ = 7 then $k_2$ = 3 or 5; if $k_1$ = 9 then $k_2$ = 3, 5 or 7; if $k_1$ = 11 then $k_2$ = 3, 5, 7 or 9); $S_2$ denotes the total number of mutually overlapping $k_2\times k_2$ image blocks into which each low-resolution face image in the face image database, each corresponding high-resolution face image and $I_t$ are divided by the sliding-window technique, $S_2=(W-k_2+1)\times(H-k_2+1)$, $1\le s_2\le S_2$; $w^{s_2}$ has dimension M×1; and $\lambda_4$ is a constant with $\lambda_4\in(0,1)$.
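The second-layer solve of step twelve can be sketched as follows. The patent's exact second-layer formula is a figure we could not recover, so this is a guessed but structurally consistent form: the first-layer fidelity, smoothness and $l_1$ terms are kept, and an extra quadratic term (with an assumed weight `gamma`, our own parameter) ties the estimate to the CCA-mapped first-layer block `c1`:

```python
import numpy as np

def second_layer_code(Dl, Dh, y, c1, lam4=0.3, lam3=0.001, gamma=1.0, n_iter=1000):
    """Minimize ||y - Dl w||^2 + gamma*||c1 - Dh w||^2
               + lam4 * sum_m (w_m - w_{m-1})^2 + lam3 * ||w||_1
    by proximal gradient. Dl, Dh: L x M; y, c1: length L."""
    M = Dl.shape[1]
    T = -np.eye(M)[:-1] + np.eye(M, k=1)[:-1]          # first-difference matrix
    lip = (2 * np.linalg.norm(Dl, 2) ** 2
           + 2 * gamma * np.linalg.norm(Dh, 2) ** 2
           + 8 * lam4)                                  # Lipschitz bound
    step = 1.0 / lip
    w = np.zeros(M)
    for _ in range(n_iter):
        grad = (2 * Dl.T @ (Dl @ w - y)
                + 2 * gamma * Dh.T @ (Dh @ w - c1)      # first-layer constraint
                + 2 * lam4 * T.T @ (T @ w))
        w = w - step * grad
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam3, 0.0)
    return w

rng = np.random.default_rng(1)
Dl, Dh = rng.standard_normal((8, 5)), rng.standard_normal((8, 5))
w0 = np.array([0.1, 0.2, 0.3, 0.2, 0.1])
y, c1 = Dl @ w0, Dh @ w0       # consistent low-res block and first-layer block
w = second_layer_code(Dl, Dh, y, c1)
```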
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811322383.9A CN109712069B (en) | 2018-11-08 | 2018-11-08 | Face image multilayer reconstruction method based on CCA space |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109712069A CN109712069A (en) | 2019-05-03 |
CN109712069B true CN109712069B (en) | 2023-04-07 |
Family
ID=66254200
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811322383.9A Active CN109712069B (en) | 2018-11-08 | 2018-11-08 | Face image multilayer reconstruction method based on CCA space |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109712069B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110942495A (en) * | 2019-12-12 | 2020-03-31 | 重庆大学 | CS-MRI image reconstruction method based on analysis dictionary learning |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101615290A (en) * | 2009-07-29 | 2009-12-30 | 西安交通大学 | A kind of face image super-resolution reconstruction method based on canonical correlation analysis |
CN101697197A (en) * | 2009-10-20 | 2010-04-21 | 西安交通大学 | Method for recognizing human face based on typical correlation analysis spatial super-resolution |
CN107169928A (en) * | 2017-05-12 | 2017-09-15 | 武汉华大联创智能科技有限公司 | A kind of human face super-resolution algorithm for reconstructing learnt based on deep layer Linear Mapping |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10430922B2 (en) * | 2016-09-08 | 2019-10-01 | Carnegie Mellon University | Methods and software for generating a derived 3D object model from a single 2D image |
Non-Patent Citations (1)
Title |
---|
Yao Zhengyuan; Guo Lijun; Zhang Rong. Smooth sparse super-resolution face reconstruction based on CCA space. Transducer and Microsystem Technologies, 2018, (04), full text. *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||