CN110991228A - Improved PCA face recognition algorithm resistant to illumination influence - Google Patents


Info

Publication number
CN110991228A
CN110991228A
Authority
CN
China
Prior art keywords
matrix
image
illumination
face
calculating
Prior art date
Legal status
Pending
Application number
CN201911015153.2A
Other languages
Chinese (zh)
Inventor
王海涛
苏南溪
Current Assignee
Qingdao Zhongke Zhibao Technology Co ltd
Original Assignee
Qingdao Zhongke Zhibao Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Qingdao Zhongke Zhibao Technology Co ltd filed Critical Qingdao Zhongke Zhibao Technology Co ltd
Priority to CN201911015153.2A
Publication of CN110991228A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the field of image processing and discloses an improved PCA face recognition algorithm resistant to illumination influence, comprising the following steps: (1) selecting a training sample set; (2) classifying the images according to angle and illumination, and dividing the image data into several sub-datasets; (3) calculating the mean vector of each sub-dataset, the centralized data matrix, and the covariance matrix; (4) calculating the eigenvalues of the covariance matrix, selecting the largest k eigenvalues, and solving their corresponding eigenvectors; after a weight coefficient is applied to each eigenvector, arranging the eigenvectors by columns into a transformation matrix W; (5) calculating and storing the projection matrices of all images in the training sample set; (6) calculating the projection matrix of the face to be recognized, and traversing the projection matrices of all face images in the training sample set to perform matching calculation and obtain a matching result. The invention reduces the influence of illumination and improves the recognition rate.

Description

Improved PCA face recognition algorithm resistant to illumination influence
Technical Field
The invention belongs to the field of image processing, and particularly relates to an improved PCA face recognition algorithm with illumination influence resistance.
Background
PCA (Principal Component Analysis) is a commonly used and effective method for processing, compressing, and extracting information based on a variable covariance matrix. Its basic principle is to extract the principal components of human faces by the K-L transform to form an eigenface space; during recognition, a test image is projected into this space to obtain a group of projection coefficients, which are compared with those of each face image for identification.
The K-L transform takes an orthogonal matrix, formed by the normalized orthogonal eigenvectors of the covariance matrix of the original data, as its transformation matrix, and realizes data compression in the transform domain. It has the properties of decorrelation and energy concentration, is the transform with the minimum distortion under the mean-square-error measure, and removes the correlation between the original data. PCA forms its K-L transformation matrix from the eigenvectors corresponding to the first k largest eigenvalues of the covariance matrix.
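As an illustrative aside (not part of the patent), the decorrelation property of the K-L transform described above can be checked numerically. The numpy sketch below uses arbitrary toy data and dimensions: it centres the data, eigen-decomposes the covariance matrix, and verifies that the transformed data has a diagonal covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: 100 samples of correlated 3-D vectors (columns are samples).
X = rng.normal(size=(3, 100))
X[1] += 0.9 * X[0]                      # introduce correlation

C = X - X.mean(axis=1, keepdims=True)   # centre the data
Sigma = C @ C.T / C.shape[1]            # covariance matrix

# K-L transform: orthonormal eigenvectors of the covariance matrix.
eigvals, eigvecs = np.linalg.eigh(Sigma)
order = np.argsort(eigvals)[::-1]       # sort by decreasing eigenvalue
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

Y = eigvecs.T @ C                       # transformed (decorrelated) data
cov_Y = Y @ Y.T / Y.shape[1]
# cov_Y is diagonal: the transform has removed the correlation.
```

The diagonal entries of `cov_Y` are exactly the sorted eigenvalues, illustrating the energy-concentration property that PCA exploits when it keeps only the first k components.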
The number of principal components is chosen so that the cumulative variance of the retained components accounts for a sufficient percentage of the total variance; once retaining one more principal component increases the cumulative variance only slightly, no further components are retained.
Implementing conventional PCA theoretically requires many assumptions, so in many cases it cannot identify faces as effectively as theory suggests. In particular, the conventional PCA algorithm requires that the training data conform to a Gaussian distribution; when the probability distribution of the investigated data does not meet this requirement, variance and covariance no longer describe noise and redundancy properly, a feature subspace that well reflects the training space cannot be obtained, and the recognition rate of PCA is consequently relatively low. The main drawbacks are:
1. The probability distribution of the investigated data may not satisfy the Gaussian distribution, so the method cannot resist the influence of noise;
2. Images are strongly affected by illumination changes; traditional PCA does not consider the illumination influence, and all eigenvectors carry the same weight.
Disclosure of Invention
In order to meet the actual requirements in the field of image processing, the invention overcomes the defects in the prior art and solves the technical problems described above.
In order to solve the technical problems, the invention adopts the technical scheme that: an improved PCA face recognition algorithm resistant to illumination effects, comprising the steps of:
(1) selecting a training sample set, wherein the training sample set comprises a plurality of face targets, each face target provides s images as training samples, and each image is written as a column vector; the column vectors are arranged into a data matrix:
X = (X_1, X_2, ..., X_n);
wherein n denotes the number of images and n/s denotes the number of face targets;
(2) classifying the images according to angle and illumination, and dividing the image data matrix into j sub-datasets S_1, S_2, ..., S_j, with j ≤ n, wherein j denotes the number of sub-datasets and S denotes a collection of sample images sharing the same attribute; assuming the i-th sub-dataset contains n_i data vectors, then n_1 + n_2 + ... + n_j = n; the sub-dataset S_i can be expressed as:
S_i = (S_{i1}, S_{i2}, ..., S_{in_i}), i = 1, 2, ..., j;
(3) calculating the mean vectors μ_1, μ_2, ..., μ_j of all sub-datasets S_1, S_2, ..., S_j; then calculating the centralized data matrix C and the covariance matrix Σ; the calculation formulas are respectively:
μ_i = (S_{i1} + S_{i2} + ... + S_{in_i}) / n_i;
C = (S_1 - μ_1, S_2 - μ_2, ..., S_j - μ_j);
Σ = (1/n) C C^T;
wherein μ_i denotes the mean vector of the i-th sub-dataset and S_{it} denotes the t-th vector in the i-th sub-dataset;
(4) calculating the eigenvalues of the covariance matrix, selecting the largest k eigenvalues, and solving, in order from largest to smallest, the corresponding eigenvectors e_1, e_2, ..., e_k; after a weight coefficient is applied to each eigenvector, the eigenvectors are arranged by columns into a transformation matrix W;
(5) calculating and storing the projection matrices of all images X_1 to X_n in the training sample set;
(6) calculating the projection matrices chZ_i of the face Z to be recognized, and traversing the projection matrices of all face images in the training sample set to perform matching calculation and obtain a matching result.
In the step (4), among the k eigenvectors, the first three principal components e_1, e_2, e_3 are each given a weight coefficient of 0.8, the fourth and fifth eigenvectors e_4, e_5 are each given a weight of 1.2, and the transformation matrix W is arranged by columns, namely:
W = (0.8e_1, 0.8e_2, 0.8e_3, 1.2e_4, 1.2e_5, e_6, ..., e_k).
In the step (5), the projection matrix of the n-th image X_n in the training sample set is calculated as:
Q_n = W^T (X_n - μ_m);
wherein W^T is the transpose of the transformation matrix W, and μ_m is the mean vector of the m-th sub-dataset to which the image X_n belongs.
In the step (6), the projection matrices chZ_i of the face Z and the matching calculation are given respectively by:
chZ_i = W^T (Z - μ_i);
Q = min ||Q_i - chZ_i||;
wherein the minimum is attained for the k-th individual, and the matching result is that the face Z is the k-th individual.
In the step (4), the value of k is not less than 5.
In the step (1), each face target provides s = 4 images as training samples.
Compared with the prior art, the invention has the following beneficial effects:
(1) the invention performs sub-dataset partitioning as a pre-processing step before image processing, so that the training samples within each sub-dataset conform to a Gaussian distribution, which is more effective than the traditional PCA algorithm;
(2) the invention partitions the data into sub-datasets and then applies the traditional PCA calculation; after the eigenvectors are obtained, the weighting reduces the proportion of the first three principal components and increases the proportion of the fourth and fifth components, reducing the influence of illumination on the result and improving the recognition rate;
(3) the eigenvectors obtained by partitioning and processing the sub-datasets better reflect the face attributes, further improving the recognition rate.
Drawings
FIG. 1 is a schematic flow chart of the improved PCA face recognition algorithm resistant to illumination influence according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of face recognition using the improved PCA face recognition algorithm resistant to illumination influence according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of another face recognition example using the improved PCA face recognition algorithm according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of another face recognition example using the improved PCA face recognition algorithm according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of another face recognition example using the improved PCA face recognition algorithm according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of another face recognition example using the improved PCA face recognition algorithm according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of another face recognition example using the improved PCA face recognition algorithm according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to specific embodiments and accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention; all other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, an embodiment of the present invention provides an improved PCA face recognition algorithm with illumination resistance, which includes the following steps:
(1) selecting a training sample set, wherein the training sample set comprises a plurality of face targets, each face target provides s images as training samples, and each image is written as a column vector; the column vectors are arranged into a data matrix:
X = (X_1, X_2, ..., X_n); (1)
where n denotes the number of images and n/s denotes the number of face targets.
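Step (1) can be sketched as follows in numpy; the image size, image counts, and random pixel values are illustrative stand-ins, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical training set: 20 face targets, s = 4 images each,
# every image a 32x32 grey-scale array standing in for a real photograph.
num_targets, s, h, w = 20, 4, 32, 32
images = rng.integers(0, 256, size=(num_targets * s, h, w))

# Write each image as a column vector and arrange them into X = (X_1, ..., X_n).
X = np.stack([img.reshape(-1) for img in images], axis=1).astype(float)

n = X.shape[1]          # number of images
num_faces = n // s      # number of face targets = n / s
```

Each column of `X` is one flattened training image, matching the data matrix of formula (1).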
(2) Classifying the images according to angle and illumination, and dividing the image data matrix into j sub-datasets S_1, S_2, ..., S_j, with j ≤ n, where j denotes the number of sub-datasets and S denotes a collection of sample images sharing the same attribute; assuming the i-th sub-dataset contains n_i data matrices (images), then n_1 + n_2 + ... + n_j = n; the sub-dataset S_i can be expressed as:
S_i = (S_{i1}, S_{i2}, ..., S_{in_i}), i = 1, 2, ..., j; (2)
When the image data matrix is classified, images whose angles or illumination tend to be consistent are placed in the same sub-dataset, so the images within each sub-dataset have similar characteristics; because every sub-dataset contains only images with consistent angle or illumination, j ≤ n. Grouping by angle or illumination is a subjective judgment; finding similarities between images so that each sub-dataset better conforms to a Gaussian distribution is prior art, but the grouping differs from image set to image set and constitutes the pre-processing stage of the whole method. By grouping according to the angle and illumination of the images, the image characteristics of each sub-dataset tend to be consistent, ensuring that the images inside a sub-dataset satisfy the Gaussian distribution condition.
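Since the patent leaves the angle/illumination grouping to subjective judgment, the sketch below simply assumes each image already carries hypothetical angle and illumination labels (assigned during pre-processing) and collects the columns that share both labels into one sub-dataset:

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(2)
n, dim = 12, 16
X = rng.normal(size=(dim, n))                      # column-vector images
# Hypothetical per-image attributes assigned during pre-processing.
angles = rng.choice(["frontal", "profile"], size=n)
lights = rng.choice(["bright", "dim"], size=n)

# Images whose angle and illumination agree fall into the same sub-dataset.
groups = defaultdict(list)
for t in range(n):
    groups[(angles[t], lights[t])].append(t)

subsets = [X[:, idx] for idx in groups.values()]   # S_1 ... S_j
j = len(subsets)
```

Every image lands in exactly one sub-dataset, so the sub-dataset sizes n_1, ..., n_j sum to n and j ≤ n, as required by step (2).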
(3) Computing the mean vectors μ_1, μ_2, ..., μ_j of all sub-datasets; then calculating the centralized data matrix C and the covariance matrix Σ; the calculation formulas are respectively:
μ_i = (S_{i1} + S_{i2} + ... + S_{in_i}) / n_i; (3)
C = (S_1 - μ_1, S_2 - μ_2, ..., S_j - μ_j); (4)
Σ = (1/n) C C^T; (5)
where the covariance matrix is calculated from the centralized data matrix C obtained after partitioning the data set; μ_i denotes the mean vector of the i-th sub-dataset, and S_{it} denotes the t-th vector in the i-th sub-dataset.
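The per-sub-dataset mean, centering, and covariance computations of step (3) can be sketched as follows; the sub-dataset sizes and dimensions are arbitrary toy values:

```python
import numpy as np

rng = np.random.default_rng(3)
dim = 8
# Three toy sub-datasets with n_1 = 4, n_2 = 3, n_3 = 5 column vectors.
subsets = [rng.normal(size=(dim, ni)) for ni in (4, 3, 5)]
n = sum(S.shape[1] for S in subsets)

# Mean vector of each sub-dataset: mu_i = (S_i1 + ... + S_in_i) / n_i.
mus = [S.mean(axis=1, keepdims=True) for S in subsets]

# Centralized data matrix C = (S_1 - mu_1, ..., S_j - mu_j):
# every sub-dataset is centred on its OWN mean before concatenation.
C = np.hstack([S - mu for S, mu in zip(subsets, mus)])

# Covariance matrix over all centred columns: Sigma = (1/n) C C^T.
Sigma = C @ C.T / n
```

Centering each sub-dataset on its own mean (rather than one global mean) is the block pre-processing that lets every sub-dataset better satisfy the Gaussian assumption.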
(4) Computing C^T C to obtain the eigenvalues of the covariance matrix, selecting the largest k eigenvalues, and solving, in order from largest to smallest, the corresponding eigenvectors e_1, e_2, ..., e_k; after a weight coefficient is applied to each eigenvector, the eigenvectors are arranged by columns into the transformation matrix W.
Specifically, among the k eigenvectors, the first three principal components e_1, e_2, e_3 are each given a weight coefficient of 0.8, the fourth and fifth eigenvectors e_4, e_5 are each given a weight of 1.2, and the transformation matrix W is arranged by columns, namely:
W = (0.8e_1, 0.8e_2, 0.8e_3, 1.2e_4, 1.2e_5, e_6, ..., e_k); (6)
specifically, the value of k is a positive integer of 5 or more.
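Step (4), including the C^T C computation and the 0.8/1.2 re-weighting of formula (6), might be sketched as below. The data is random toy data and k = 6 is an arbitrary choice satisfying k ≥ 5; eigen-decomposing the small n x n matrix C^T C instead of the huge dim x dim covariance is the standard eigenface trick, since C v is an eigenvector of C C^T whenever v is an eigenvector of C^T C:

```python
import numpy as np

rng = np.random.default_rng(4)
dim, n = 200, 12                      # pixel dimension >> sample count
C = rng.normal(size=(dim, n))
C -= C.mean(axis=1, keepdims=True)    # toy centralized data matrix

# Eigen-decompose the small n x n matrix C^T C; C v is then an
# eigenvector of C C^T with the same eigenvalue.
small = C.T @ C
vals, vecs = np.linalg.eigh(small)
order = np.argsort(vals)[::-1]        # largest eigenvalues first
vals, vecs = vals[order], vecs[:, order]

k = 6                                 # keep k >= 5 components
E = C @ vecs[:, :k]
E /= np.linalg.norm(E, axis=0)        # normalised eigenvectors e_1 ... e_k

# Re-weight: 0.8 on the first three components, 1.2 on the 4th and 5th.
weights = np.ones(k)
weights[:3] = 0.8
weights[3:5] = 1.2
W = E * weights                       # transformation matrix W (by columns)
```

Down-weighting the leading components follows the patent's observation that the first few eigenfaces mostly encode illumination rather than identity.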
(5) Calculating and storing the projection matrices of all images X_1 to X_n in the training sample set;
Specifically, the projection matrix of the n-th image X_n in the training sample set is calculated as:
Q_n = W^T (X_n - μ_m); (7)
where W^T is the transpose of the transformation matrix W, and μ_m is the mean vector of the m-th sub-dataset in which the image X_n lies. When the projection of each image is calculated, the weight coefficients from step (4) are already carried by the eigenvectors used; that is, different coefficients are applied to different eigenvectors to resist the influence of illumination changes, so the key information of the face is preserved and the recognition accuracy is improved.
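Formula (7) could be sketched as below; W, the sub-dataset means, and the image-to-sub-dataset assignment are random placeholders, with each image projected against the mean of its own sub-dataset:

```python
import numpy as np

rng = np.random.default_rng(5)
dim, k = 64, 6
W = rng.normal(size=(dim, k))              # weighted transformation matrix
subset_means = [rng.normal(size=(dim, 1)) for _ in range(3)]
subset_of = [0, 0, 1, 1, 2, 2]             # sub-dataset index of each image
X = rng.normal(size=(dim, len(subset_of)))

# Q_n = W^T (X_n - mu_m), with mu_m the mean of X_n's own sub-dataset.
Q = np.hstack([W.T @ (X[:, [t]] - subset_means[m])
               for t, m in enumerate(subset_of)])
```

Each column of `Q` is one stored training projection, ready for the traversal in step (6).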
(6) Calculating the projection matrices chZ_i of the face Z to be recognized, and traversing the projection matrices of all face images in the training sample set to perform matching calculation and obtain a matching result.
Specifically, the projection matrices chZ_i of the face Z and the matching calculation are given respectively by:
chZ_i = W^T (Z - μ_i); (8)
Q = min ||Q_i - chZ_i||; (9)
where the minimum is attained for the k-th individual, and the matching result is that the face Z is the k-th individual.
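The matching of formulas (8) and (9) might be sketched as below; the stored projections, means, and W are random placeholders, so the resulting match index only demonstrates the mechanics of projecting Z against each sub-dataset mean and taking the nearest stored projection:

```python
import numpy as np

rng = np.random.default_rng(6)
dim, k, j = 64, 6, 5
W = rng.normal(size=(dim, k))
mus = rng.normal(size=(dim, j))            # sub-dataset mean vectors
Q_train = rng.normal(size=(k, 20))         # stored training projections

Z = rng.normal(size=(dim, 1))              # face to be recognised

# chZ_i = W^T (Z - mu_i) for every sub-dataset mean, then match to the
# stored projection at the smallest distance.
chZ = np.stack([W.T @ (Z - mus[:, [i]]) for i in range(j)])   # (j, k, 1)
dists = np.array([[np.linalg.norm(Q_train[:, [t]] - chZ[i])
                   for t in range(Q_train.shape[1])]
                  for i in range(j)])
i_best, t_best = np.unravel_index(np.argmin(dists), dists.shape)
# t_best indexes the matched training image, i.e. the claimed identity of Z.
```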
Twenty persons were selected as face targets and 4 images were chosen for each, giving a total of 80 images as training samples to form the training set; the test set contains the same persons as the training set, but with facial expressions different from those in the training set, in order to verify the effect. As shown in figs. 2 to 7, in the experimental results the left side is a test-set image, a real photograph taken with a mobile phone, and the right side is the successfully matched training-set image. The experimental results show that people can be successfully recognized while making different expressions and at different shooting angles, and the recognition rate reaches more than 98%.
The embodiments of the present invention have been described in detail with reference to the accompanying drawings, but the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of those skilled in the art without departing from the gist of the present invention.

Claims (6)

1. An improved PCA face recognition algorithm resistant to illumination effects, comprising the steps of:
(1) selecting a training sample set, wherein the training sample set comprises a plurality of face targets, each face target provides s images as training samples, and each image is written as a column vector; the column vectors are arranged into a data matrix:
X = (X_1, X_2, ..., X_n);
wherein n denotes the number of images and n/s denotes the number of face targets;
(2) classifying the images according to angle and illumination, and dividing the image data matrix into j sub-datasets S_1, S_2, ..., S_j, with j ≤ n, wherein j denotes the number of sub-datasets and S denotes a collection of sample images sharing the same attribute; assuming the i-th sub-dataset contains n_i data vectors, then n_1 + n_2 + ... + n_j = n; the sub-dataset S_i can be expressed as:
S_i = (S_{i1}, S_{i2}, ..., S_{in_i}), i = 1, 2, ..., j;
(3) calculating the mean vectors μ_1, μ_2, ..., μ_j of all sub-datasets S_1, S_2, ..., S_j; then calculating the centralized data matrix C and the covariance matrix Σ; the calculation formulas are respectively:
μ_i = (S_{i1} + S_{i2} + ... + S_{in_i}) / n_i;
C = (S_1 - μ_1, S_2 - μ_2, ..., S_j - μ_j);
Σ = (1/n) C C^T;
wherein μ_i denotes the mean vector of the i-th sub-dataset and S_{it} denotes the t-th vector in the i-th sub-dataset;
(4) calculating the eigenvalues of the covariance matrix, selecting the largest k eigenvalues, and solving, in order from largest to smallest, the corresponding eigenvectors e_1, e_2, ..., e_k; after a weight coefficient is applied to each eigenvector, the eigenvectors are arranged by columns into a transformation matrix W;
(5) calculating and storing the projection matrices of all images X_1 to X_n in the training sample set;
(6) calculating the projection matrices chZ_i of the face Z to be recognized, and traversing the projection matrices of all face images in the training sample set to perform matching calculation and obtain a matching result.
2. The improved PCA face recognition algorithm resistant to illumination influence as claimed in claim 1, wherein in the step (4), among the k eigenvectors, the first three principal components e_1, e_2, e_3 are each given a weight coefficient of 0.8, the fourth and fifth eigenvectors e_4, e_5 are each given a weight of 1.2, and the transformation matrix W is arranged by columns, namely:
W = (0.8e_1, 0.8e_2, 0.8e_3, 1.2e_4, 1.2e_5, e_6, ..., e_k).
3. The improved PCA face recognition algorithm resistant to illumination influence as claimed in claim 1, wherein in the step (5), the projection matrix of the n-th image X_n in the training sample set is calculated as:
Q_n = W^T (X_n - μ_m);
wherein W^T is the transpose of the transformation matrix W, and μ_m is the mean vector of the m-th sub-dataset to which the image X_n belongs.
4. The improved PCA face recognition algorithm resistant to illumination influence as claimed in claim 1, wherein in the step (6), the projection matrices chZ_i of the face Z and the matching calculation are given respectively by:
chZ_i = W^T (Z - μ_i);
Q = min ||Q_i - chZ_i||;
wherein the minimum is attained for the k-th individual, and the matching result is that the face Z is the k-th individual.
5. The improved PCA face recognition algorithm resistant to illumination influence as claimed in claim 1, wherein in the step (4), the value of k is not less than 5.
6. The improved PCA face recognition algorithm resistant to illumination influence as claimed in claim 1, wherein in the step (1), each face target provides s = 4 images as training samples.
CN201911015153.2A 2019-10-24 2019-10-24 Improved PCA face recognition algorithm resistant to illumination influence Pending CN110991228A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911015153.2A CN110991228A (en) 2019-10-24 2019-10-24 Improved PCA face recognition algorithm resistant to illumination influence


Publications (1)

Publication Number Publication Date
CN110991228A 2020-04-10

Family

ID=70082315

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911015153.2A Pending CN110991228A (en) 2019-10-24 2019-10-24 Improved PCA face recognition algorithm resistant to illumination influence

Country Status (1)

Country Link
CN (1) CN110991228A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113177879A (en) * 2021-04-30 2021-07-27 北京百度网讯科技有限公司 Image processing method, image processing apparatus, electronic device, and storage medium
CN114419383A (en) * 2022-01-21 2022-04-29 北部湾大学 Image illumination correction algorithm based on principal component analysis

Citations (4)

Publication number Priority date Publication date Assignee Title
CN101281598A (en) * 2008-05-23 2008-10-08 清华大学 Method for recognizing human face based on amalgamation of multicomponent and multiple characteristics
CN101587543A (en) * 2009-06-19 2009-11-25 电子科技大学 Face recognition method
CN105354555A (en) * 2015-11-17 2016-02-24 南京航空航天大学 Probabilistic graphical model-based three-dimensional face recognition method
CN109903271A (en) * 2019-01-29 2019-06-18 福州大学 Placenta implantation B ultrasonic image feature extraction and verification method


Non-Patent Citations (2)

Title
赵鑫; 汪维家; 曾雅云; 熊才伟; 任彦嘉: "改进的模块PCA人脸识别新算法" (An improved modular PCA face recognition algorithm), 计算机工程与应用 (Computer Engineering and Applications), no. 02, pages 161-176 *


Similar Documents

Publication Publication Date Title
Asadi et al. A comparative study of face recognition with principal component analysis and cross-correlation technique
Kotropoulos et al. Frontal face authentication using discriminating grids with morphological feature vectors
CN110781766B (en) Grassman manifold discriminant analysis image recognition method based on characteristic spectrum regularization
CN107679539B (en) Single convolution neural network local information and global information integration method based on local perception field
CN110991228A (en) Improved PCA face recognition algorithm resistant to illumination influence
CN108710836B (en) Lip detection and reading method based on cascade feature extraction
CN114398611A (en) Bimodal identity authentication method, device and storage medium
Adami et al. A universal anti-spoofing approach for contactless fingerprint biometric systems
Lin et al. Domestic activities clustering from audio recordings using convolutional capsule autoencoder network
CN112800882A (en) Mask face posture classification method based on weighted double-flow residual error network
CN111931757A (en) Finger vein quick sorting method and device based on MDLBP block histogram and PCA dimension reduction
CN110287973B (en) Image feature extraction method based on low-rank robust linear discriminant analysis
Kekre et al. Transform based face recognition with partial and full feature vector using DCT and Walsh transform
CN107657223B (en) Face authentication method based on rapid processing multi-distance metric learning
CN115273202A (en) Face comparison method, system, equipment and storage medium
Ahmad et al. Palmprint recognition using local and global features
Lenc et al. Confidence Measure for Automatic Face Recognition.
Ambeth Kumar et al. Footprint based recognition system
Zafeiriou et al. Learning discriminant person-specific facial models using expandable graphs
Raghavendra et al. Qualitative weight assignment for multimodal biometric fusion
KR100634527B1 (en) Apparatus and method for processing image on the based of layers
CN113837161B (en) Identity recognition method, device and equipment based on image recognition
Pietkiewicz et al. Recognition of maritime objects based on FLIR images using the method of eigenimages
Motlicek et al. Bi-modal authentication in mobile environments using session variability modelling
CN113486875B (en) Cross-domain face representation attack detection method and system based on word separation and self-adaptation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination