Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, one object of the invention is to provide a face feature template protection method based on a complex matrix, which can protect the privacy of a face image and the security of its data on the basis of an image local variance map and two-dimensional principal component analysis.
Another object of the invention is to provide a face feature template protection system based on a complex matrix.
In order to achieve the above object, an embodiment of the present invention provides a face feature template protection method based on a complex matrix, including: S1, acquiring a plurality of gray-scale face images, and calculating a local variance map corresponding to each gray-scale face image; S2, constructing a plurality of complex matrices according to the plurality of gray-scale face images and the plurality of local variance maps; S3, calculating a covariance matrix according to the training images among the plurality of gray-scale face images and the plurality of complex matrices, and generating an eigenvector matrix according to the covariance matrix; S4, scrambling the eigenvector matrix to generate a scrambling matrix, and projecting the plurality of complex matrices with the scrambling matrix to generate the features corresponding to the face images; and S5, comparing the test images with the training images among the plurality of gray-scale face images, and identifying the features corresponding to the face images through a classifier.
The face feature template protection method based on the complex matrix not only uses the gray information of the face image but also makes full use of its detail information, thereby effectively improving recognition accuracy. At the same time, the eigenvector matrix is scrambled, so the security and privacy of the face image are fully taken into account. When a face image feature template is attacked or compromised, a new feature template can be reissued by modifying the scrambling matrix to replace the original template, and the method can be applied in the field of identity recognition and authentication.
In addition, the face feature template protection method based on the complex matrix according to the above embodiment of the present invention may further have the following additional technical features:
Further, in an embodiment of the present invention, the local variance map corresponding to each gray-scale face image is calculated as:

f_i^V(x, y) = (1/L) Σ_{(s,t)∈Ω(x,y)} [f_i(s, t) − μ(x, y)]^2,

wherein f_i^V(x, y) is the local variance map of the i-th gray-scale face image, i = 1, 2, …, M, M is the number of gray-scale face images, the size of each gray-scale face image is W1 × W2, Ω(x, y) is the neighborhood of pixel (x, y), L represents the total number of pixels in the neighborhood, and μ(x, y) represents the mean gray value of the neighborhood.
Further, in an embodiment of the present invention, S2 further includes:
taking the gray-scale face image f_i(x, y) as the real component and the local variance map f_i^V(x, y) as the imaginary component, constructing the plurality of complex matrices f_i^c(x, y), specifically expressed as:

f_i^c(x, y) = f_i(x, y) + f_i^V(x, y)·i,

wherein f_i(x, y) is a gray-scale face image and f_i^V(x, y) is its local variance map.
Further, in an embodiment of the present invention, S3 specifically includes:
S31, calculating the average value f̄ of the training images among the plurality of gray-scale face images, and generating the covariance matrix G according to the average value f̄ and the plurality of complex matrices f_i^c, wherein the covariance matrix is:

G = (1/N) Σ_{i=1}^{N} (f_i^c − f̄)^T (f_i^c − f̄),

wherein f̄ is the average value of the training images among the plurality of gray-scale face images, N is the number of training images, and T represents the transpose of a matrix;
S32, performing eigenvalue decomposition on the covariance matrix, and selecting the eigenvectors corresponding to the first d largest eigenvalues to generate the eigenvector matrix W = [X1, X2, …, Xd].
Further, in an embodiment of the present invention, scrambling the eigenvector matrix to generate a scrambling matrix specifically includes:
obtaining an identity matrix of dimension W2 × W2, and performing row transformation and/or column transformation on the identity matrix to generate a new matrix R;
transforming the eigenvector matrix W with the new matrix R to obtain the scrambling matrix, wherein the scrambling matrix W' is:

W' = R^T W.
In order to achieve the above object, an embodiment of another aspect of the present invention provides a face feature template protection system based on a complex matrix, including:
a calculation module, configured to acquire a plurality of gray-scale face images and calculate a local variance map corresponding to each gray-scale face image;
a construction module, configured to construct a plurality of complex matrices according to the plurality of gray-scale face images and the plurality of local variance maps;
a generating module, configured to calculate a covariance matrix according to the training images among the plurality of gray-scale face images and the plurality of complex matrices, and to generate an eigenvector matrix according to the covariance matrix;
a projection module, configured to scramble the eigenvector matrix to generate a scrambling matrix, and to project the plurality of complex matrices with the scrambling matrix to generate the features corresponding to the face images;
and a recognition protection module, configured to compare the test images with the training images among the plurality of gray-scale face images, and to recognize the features corresponding to the face images through a classifier.
The face feature template protection system based on the complex matrix not only uses the gray information of the face image but also makes full use of its detail information, thereby effectively improving recognition accuracy. At the same time, the eigenvector matrix is scrambled, so the security and privacy of the face image are fully taken into account. When a face image feature template is attacked or compromised, a new feature template can be reissued by modifying the scrambling matrix to replace the original template, and the system can be applied in the field of identity recognition and authentication.
In addition, the face feature template protection system based on the complex matrix according to the above embodiment of the present invention may further have the following additional technical features:
Further, in an embodiment of the present invention, the local variance map corresponding to each gray-scale face image is calculated as:

f_i^V(x, y) = (1/L) Σ_{(s,t)∈Ω(x,y)} [f_i(s, t) − μ(x, y)]^2,

wherein f_i^V(x, y) is the local variance map of the i-th gray-scale face image, i = 1, 2, …, M, M is the number of gray-scale face images, the size of each gray-scale face image is W1 × W2, Ω(x, y) is the neighborhood of pixel (x, y), L represents the total number of pixels in the neighborhood, and μ(x, y) represents the mean gray value of the neighborhood.
Further, in one embodiment of the invention, the construction module is specifically configured to:
take the gray-scale face image f_i(x, y) as the real component and the local variance map f_i^V(x, y) as the imaginary component, and construct the plurality of complex matrices f_i^c(x, y), specifically expressed as:

f_i^c(x, y) = f_i(x, y) + f_i^V(x, y)·i,

wherein f_i(x, y) is a gray-scale face image and f_i^V(x, y) is its local variance map.
Further, in an embodiment of the present invention, the generating module further includes: a first generation unit and a decomposition unit;
the first generation unit is used for calculating the average value of training images in the plurality of gray-scale face images
According to the average value
And said plurality of complex matrices f
i cGenerating the covariance matrix, wherein the covariance matrix is as follows:
wherein the content of the first and second substances,
the average value of training images in a plurality of gray level face images is shown, and T represents the transposition of a matrix;
the decomposition unit is used for decomposing the eigenvalues of the covariance matrix, selecting the eigenvectors corresponding to the first d largest eigenvalues and generating the eigenvector matrix W ═ X1,X2,…,Xd]。
Further, in an embodiment of the present invention, scrambling the eigenvector matrix to generate a scrambling matrix specifically includes:
obtaining an identity matrix of dimension W2 × W2, and performing row transformation and/or column transformation on the identity matrix to generate a new matrix R;
transforming the eigenvector matrix W with the new matrix R to obtain the scrambling matrix, wherein the scrambling matrix W' is:

W' = R^T W.
additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer throughout to elements that are the same or similar or have the same or similar functions. The embodiments described below with reference to the drawings are illustrative; they serve to explain the invention and are not to be construed as limiting it.
The following describes a face feature template protection method and system based on a complex matrix according to an embodiment of the present invention with reference to the accompanying drawings.
First, a face feature template protection method based on a complex matrix proposed according to an embodiment of the present invention will be described with reference to the accompanying drawings.
Fig. 1 is a flowchart of a face feature template protection method based on a complex matrix according to an embodiment of the present invention.
As shown in fig. 1, the face feature template protection method based on complex matrix includes the following steps:
in step S1, a plurality of gray-scale face images are obtained, and a local variance map corresponding to each gray-scale face image is calculated.
Specifically, assume that there are M gray-scale face images f_i(x, y) (i = 1, 2, …, M), each of size W1 × W2. First, the M local variance maps corresponding to the M gray-scale face images are calculated. With an m × m neighborhood Ω(x, y) around each pixel, the local variance is:

f_i^V(x, y) = (1/L) Σ_{(s,t)∈Ω(x,y)} [f_i(s, t) − μ(x, y)]^2,

wherein L = m^2 represents the total number of pixels in the neighborhood and μ(x, y) represents the mean gray value of the neighborhood. It should be noted that the local variance at image boundary pixels is obtained by symmetric padding.
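As an illustration, step S1 can be sketched in Python with NumPy. This is a minimal sketch under the assumptions stated above (m × m neighborhoods, L = m^2 pixels per neighborhood, symmetric padding at the boundary); the function name is ours, not from the patent.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_variance_map(f, m=3):
    """Local variance map f_i^V(x, y) of a grayscale image f over an
    m x m neighborhood; boundary pixels use symmetric padding."""
    pad = m // 2
    fp = np.pad(f.astype(float), pad, mode="symmetric")
    windows = sliding_window_view(fp, (m, m))     # one m x m window per pixel
    mu = windows.mean(axis=(-2, -1))              # neighborhood mean mu(x, y)
    # mean squared deviation from the neighborhood mean, i.e. (1/L) * sum(...)
    return ((windows - mu[..., None, None]) ** 2).mean(axis=(-2, -1))
```

Because `sliding_window_view` yields exactly one window per pixel of the padded image, the output keeps the original W1 × W2 size.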
In step S2, a plurality of complex matrices are constructed from the plurality of grayscale face images and the plurality of local variance maps.
Further, S2 further includes: taking the gray-scale face image f_i(x, y) as the real component and the local variance map f_i^V(x, y) as the imaginary component, constructing a plurality of complex matrices f_i^c(x, y), specifically expressed as:

f_i^c(x, y) = f_i(x, y) + f_i^V(x, y)·i,

wherein f_i(x, y) is a gray-scale face image and f_i^V(x, y) is its local variance map.
Specifically, the original gray-scale image is used as the real component and the local variance map as the imaginary component to construct the complex matrix f_i^c(x, y):

f_i^c(x, y) = f_i(x, y) + f_i^V(x, y)·i.
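In NumPy, the construction of step S2 is essentially a one-liner; the sketch below follows the formula above (the function name is ours).

```python
import numpy as np

def complex_face_matrix(f, f_v):
    """Build f_i^c(x, y) = f_i(x, y) + f_i^V(x, y) * i:
    grayscale image as real part, local variance map as imaginary part."""
    return f.astype(float) + 1j * np.asarray(f_v, dtype=float)
```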
in step S3, a covariance matrix is calculated from the training images of the plurality of gray-scale face images and the plurality of complex matrices, and an eigenvector matrix is generated from the covariance matrix.
Further, S3 further includes:
S31, calculating the average value f̄ of the training images among the plurality of gray-scale face images, and generating the covariance matrix G according to the average value f̄ and the plurality of complex matrices f_i^c, wherein the covariance matrix is:

G = (1/N) Σ_{i=1}^{N} (f_i^c − f̄)^T (f_i^c − f̄),

wherein f̄ is the average value of the training images among the plurality of gray-scale face images, N is the number of training images, and T represents the transpose of a matrix;
S32, performing eigenvalue decomposition on the covariance matrix, and selecting the eigenvectors corresponding to the first d largest eigenvalues to generate the eigenvector matrix W = [X1, X2, …, Xd].
Specifically, first the average value of the training set images is calculated:

f̄ = (1/N) Σ_{i=1}^{N} f_i^c;

the covariance matrix is then:

G = (1/N) Σ_{i=1}^{N} (f_i^c − f̄)^T (f_i^c − f̄).

Then eigenvalue decomposition is performed on the covariance matrix, and the eigenvectors corresponding to the first d largest eigenvalues are selected to obtain the eigenvector matrix W = [X1, X2, …, Xd].
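Step S3 (2DPCA over the complex matrices) can be sketched as follows. One assumption to flag: because the matrices are complex, the sketch uses the conjugate transpose so that G is Hermitian and `eigh` applies; the text writes a plain transpose. The function name is ours.

```python
import numpy as np

def eigenface_matrix(A_list, d):
    """2DPCA on complex face matrices: form the W2 x W2 covariance
    G = (1/N) * sum_i (A_i - mean)^T (A_i - mean), then keep the
    eigenvectors of the d largest eigenvalues.  The conjugate transpose
    is used here so G is Hermitian (our assumption; the text writes T)."""
    A = np.stack(A_list)                  # N complex matrices, shape (N, W1, W2)
    mean = A.mean(axis=0)                 # average training image f-bar
    C = A - mean
    # G[j, k] = (1/N) * sum over n, i of conj(C[n, i, j]) * C[n, i, k]
    G = np.einsum("nij,nik->jk", C.conj(), C) / len(A_list)
    vals, vecs = np.linalg.eigh(G)        # eigenvalues in ascending order
    return vecs[:, ::-1][:, :d]           # W = [X1, X2, ..., Xd]
```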
In step S4, the feature vector matrix is scrambled to generate a scramble matrix, and a plurality of complex matrices are projected by the scramble matrix to generate features corresponding to the face image.
Further, scrambling the eigenvector matrix to generate a scrambling matrix specifically includes:
obtaining an identity matrix of dimension W2 × W2, and performing row transformation and/or column transformation on the identity matrix to generate a new matrix R;
transforming the eigenvector matrix W with the new matrix R to obtain the scrambling matrix, wherein the scrambling matrix W' is:

W' = R^T W.

Specifically, an identity matrix of size W2 × W2 is generated and subjected to one or more row and/or column transformations to obtain a new matrix R; the eigenvector matrix W obtained in the previous step is then transformed according to:

W' = R^T W.

The scrambling matrix W' generated by this transformation is used to project each complex matrix f_i^c(x, y), obtaining the features corresponding to the face image.
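Step S4 can be sketched as below. As one admissible choice of row transformation, the sketch permutes the rows of the identity matrix at random; any invertible row/column transformation of the identity would do. The function name is ours.

```python
import numpy as np

def scrambled_projection(W, A_list, rng):
    """Scramble the eigenvector matrix with a randomly row-permuted
    W2 x W2 identity matrix R (one choice of row transformation) and
    project each complex matrix: features Y_i = A_i @ W', W' = R^T W."""
    n = W.shape[0]
    R = np.eye(n)[rng.permutation(n)]     # row-transformed identity matrix
    W_prime = R.T @ W                     # scrambling matrix W' = R^T W
    return W_prime, [A @ W_prime for A in A_list]
```

Since R is random and secret, losing a template only exposes W' = R^T W; reissuing a template amounts to drawing a new R.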
In step S5, a test image of the plurality of grayscale face images is compared with the training image, and features corresponding to the face images are identified by the classifier.
Specifically, the test image is compared with the training images, and a nearest-neighbor classifier based on the Euclidean distance is used for classification to obtain the recognition rate.
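The nearest-neighbor step can be sketched as follows; the Frobenius norm handles the complex-valued features via the modulus of each entry. The function name is ours.

```python
import numpy as np

def nearest_neighbor_label(test_feat, train_feats, train_labels):
    """1-NN classifier: assign the label of the training feature closest
    to the test feature under the (Frobenius) Euclidean distance."""
    dists = [np.linalg.norm(test_feat - t) for t in train_feats]
    return train_labels[int(np.argmin(dists))]
```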
Principal component analysis has been successfully applied to face recognition, but face recognition methods based on principal component analysis are not revocable. A biometric recognition method based on scrambled principal component analysis protects the security and privacy of the biometric features without affecting recognition accuracy; when a face image feature template is attacked or compromised, a new feature template can be reissued. The human eye is usually sensitive to high-frequency information, because the high-frequency part of an image usually reflects its structural information, and the local variance of an image characterizes this structural information well.
The method of the embodiment of the invention is based on the image local variance map and two-dimensional principal component analysis: it not only uses the gray information of the face image but also makes full use of its detail information, effectively improving recognition accuracy. At the same time, the eigenvector matrix is scrambled, so the security and privacy of the face image are fully taken into account. When a face image feature template is attacked or compromised, a new feature template can be reissued by modifying the scrambling matrix to replace the original template, and the method can be applied in the field of identity recognition and authentication.
The face feature template protection method based on complex matrix according to the embodiment of the present invention is described below with an embodiment.
The RaFD face image library was used, containing 67 subjects with 8 images each. For each subject, 6 images were selected for training and 2 for testing; the image size is 64 × 64, and the neighborhood sizes are 3 × 3, 5 × 5, and 7 × 7.
After the above steps are performed, the results are shown in fig. 2: fig. 2(a) is the original gray-scale face image, and figs. 2(b), 2(c), and 2(d) are the local variance maps for 3 × 3, 5 × 5, and 7 × 7 neighborhoods, respectively. Table 1 compares the recognition rates (%) of the method according to the embodiment of the present invention with those of the method based on scrambled two-dimensional principal component analysis (RP-2DPCA); it can be seen that using the image local variance map effectively improves the recognition rate.
TABLE 1
Compared with the traditional principal component analysis method, the face feature template protection method based on the complex matrix incorporates image structure information into the principal component analysis, which increases the amount of information carried and thereby improves recognition accuracy; and since the matrix R is random, the feature template can be reissued when the registered biometric data is lost or stolen.
Next, a face feature template protection system based on a complex matrix proposed according to an embodiment of the present invention is described with reference to the drawings.
Fig. 3 is a schematic structural diagram of a face feature template protection system based on a complex matrix according to an embodiment of the present invention.
As shown in fig. 3, the protection system includes: a computing module 100, a building module 200, a generating module 300, a projection module 400, and an identification protection module 500.
The calculating module 100 is configured to obtain a plurality of gray-scale face images, and calculate a local variance map corresponding to each gray-scale face image.
The construction module 200 is configured to construct a plurality of complex matrices according to the plurality of gray-scale face images and the plurality of local variance maps.
The generating module 300 is configured to calculate a covariance matrix according to a training image of the plurality of gray-level face images and the plurality of complex matrices, and generate an eigenvector matrix according to the covariance matrix.
The projection module 400 is configured to scramble the feature vector matrix to generate a scrambling matrix, and project the plurality of complex matrices by using the scrambling matrix to generate features corresponding to the face image.
The recognition protection module 500 is configured to compare a test image with a training image in a plurality of gray-scale face images, and recognize features corresponding to the face images through a classifier.
The protection system 10 can protect the privacy of the face image and the security of data.
Further, in an embodiment of the present invention, the local variance map corresponding to each gray-scale face image is calculated as:

f_i^V(x, y) = (1/L) Σ_{(s,t)∈Ω(x,y)} [f_i(s, t) − μ(x, y)]^2,

wherein f_i^V(x, y) is the local variance map of the i-th gray-scale face image, i = 1, 2, …, M, M is the number of gray-scale face images, the size of each gray-scale face image is W1 × W2, Ω(x, y) is the neighborhood of pixel (x, y), L represents the total number of pixels in the neighborhood, and μ(x, y) represents the mean gray value of the neighborhood.
Further, in one embodiment of the invention, the construction module is specifically configured to:
take the gray-scale face image f_i(x, y) as the real component and the local variance map f_i^V(x, y) as the imaginary component, and construct the plurality of complex matrices f_i^c(x, y), specifically expressed as:

f_i^c(x, y) = f_i(x, y) + f_i^V(x, y)·i,

wherein f_i(x, y) is a gray-scale face image and f_i^V(x, y) is its local variance map.
Further, in an embodiment of the present invention, the generating module further includes: a first generation unit and a decomposition unit;
a first generation unit, configured to calculate the average value f̄ of the training images among the plurality of gray-scale face images, and to generate the covariance matrix G according to the average value f̄ and the plurality of complex matrices f_i^c, wherein the covariance matrix is:

G = (1/N) Σ_{i=1}^{N} (f_i^c − f̄)^T (f_i^c − f̄),

wherein f̄ is the average value of the training images among the plurality of gray-scale face images, N is the number of training images, and T represents the transpose of a matrix;
a decomposition unit, configured to perform eigenvalue decomposition on the covariance matrix and to select the eigenvectors corresponding to the first d largest eigenvalues to generate the eigenvector matrix W = [X1, X2, …, Xd].
Further, in an embodiment of the present invention, scrambling the eigenvector matrix to generate a scrambling matrix specifically includes:
obtaining an identity matrix of dimension W2 × W2, and performing row transformation and/or column transformation on the identity matrix to generate a new matrix R; transforming the eigenvector matrix W with the new matrix R to obtain the scrambling matrix, wherein the scrambling matrix W' is:

W' = R^T W.
it should be noted that the foregoing explanation of the embodiment of the face feature template protection method based on the complex matrix is also applicable to the apparatus of this embodiment, and is not repeated herein.
According to the face feature template protection system based on the complex matrix provided by the embodiment of the invention, not only is the gray information of the face image used, but its detail information is also fully utilized, thereby effectively improving recognition accuracy. At the same time, the eigenvector matrix is scrambled, so the security and privacy of the face image are fully taken into account. When a face image feature template is attacked or compromised, a new feature template can be reissued by modifying the scrambling matrix to replace the original template, and the system can be applied in the field of identity recognition and authentication.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.