Summary of the Invention
In view of this, the present invention provides a face recognition method which, compared with conventional methods, reduces the amount of computation and improves recognition efficiency.
To achieve the above object, the present invention provides the following technical solution:
A face recognition method, including:
S10: extracting features at key points in a collected facial image of a user to be identified, and forming a feature vector from the features of the key points;
S11: computing the feature vector with a training matrix to generate a model, wherein the training matrix is composed of the covariance matrices obtained by inputting the feature vectors, extracted from sample facial images, into a joint Bayesian model for training;
S12: comparing the model with the facial images in a sample library to identify the user.
Optionally, step S10 specifically includes:
performing multi-scale scaling on the facial image; for the same key point, extracting and concatenating features from the facial image at each scale, and then concatenating the features of all the key points to form the feature vector.
Optionally, step S10 further includes: performing dimension compression on the feature vector.
Optionally, extracting features at key points in the facial image specifically includes:
selecting a plurality of key points in the facial image, and extracting the local binary pattern feature at each key point.
Optionally, the local binary pattern feature extracted at a key point is described as:
LBP_{P,R} = sum_{p=0}^{P-1} s(g_p - g_c) * 2^p
where g_c denotes the gray value of the center pixel, g_p denotes the gray value of the p-th neighborhood pixel, P denotes the number of neighborhood points, R denotes the neighborhood radius, and the function s is defined as:
s(x) = 1 if x >= 0, and s(x) = 0 otherwise.
Optionally, when dimension compression is performed on the feature vector, the computing chip is controlled, during matrix multiplication, to preferentially access contiguous memory regions and to perform the operations in parallel.
Optionally, collecting the facial image of the user to be identified includes:
calculating a projection matrix of the collected facial image according to an average face model, calculating the face angle according to the projection matrix, and selecting, from the collected facial images, a facial image whose face angle is within a preset range as the input facial image.
Optionally, comparing the model with the facial images in the sample library to identify the user is implemented by an evaluation function, specifically:
construct C(x, x_j) = 1 if r(x, x_j) >= th, and C(x, x_j) = 0 otherwise, where th denotes a threshold and r is the log-likelihood ratio;
where X_i denotes the i-th person and N(X_i) denotes the set of sample facial images of the i-th person;
the evaluation function is expressed as:
V(X_i) = 1 if C(x, x_j) = 1 for every sample x_j in N(X_i), and V(X_i) = 0 otherwise.
As can be seen from the above technical solution, in the face recognition method provided by the present invention, features are first extracted at key points in the collected facial image of the user to be identified, and a feature vector is formed from the features of the key points; the feature vector is computed with a training matrix to generate a model, the training matrix being composed of the covariance matrices obtained by inputting the feature vectors, extracted from sample facial images, into a joint Bayesian model for training; the generated model is then compared with the facial images in the sample library to identify the user. Compared with existing face recognition algorithms based on deep learning, the face recognition method of the present invention requires fewer samples for model training and less computation, and can therefore improve recognition efficiency.
Detailed Description of the Embodiments
To enable those skilled in the art to better understand the technical solutions of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art on the basis of the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
An embodiment of the present invention provides a face recognition method. Referring to Fig. 1, which is a flowchart of the face recognition method provided by this embodiment, the method includes the following steps:
S10: extracting features at key points in the collected facial image of the user to be identified, and forming a feature vector from the features of the key points.
A key point in a facial image is a position that exhibits an obvious feature, such as the region of the nose or the eyes. After the feature at each key point is extracted, the features of the key points are concatenated to form the feature vector.
S11: computing the feature vector with a training matrix to generate a model, wherein the training matrix is composed of the covariance matrices obtained by inputting the feature vectors, extracted from sample facial images, into a joint Bayesian model for training.
In this embodiment, model training is performed with a joint Bayesian model: the feature vectors obtained by extracting features from the sample facial images are input into the joint Bayesian model for computation, and the resulting covariance matrices form the training matrix.
S12: comparing the model with the facial images in the sample library to identify the user.
In the face recognition method of this embodiment, features are first extracted at key points in the collected facial image of the user to be identified, and a feature vector is formed from the features of the key points; the feature vector is computed with a training matrix to generate a model, the training matrix being composed of the covariance matrices obtained by inputting the feature vectors, extracted from sample facial images, into a joint Bayesian model for training; the generated model is then compared with the facial images in the sample library to identify the user. Compared with existing face recognition algorithms based on deep learning, the method of this embodiment requires fewer samples for model training and less computation, can improve recognition efficiency, and can be applied in a variety of settings.
The face recognition method of this embodiment is described in detail below with reference to a specific implementation. The method includes the following steps:
S10: extracting features at key points in the collected facial image of the user to be identified, and forming a feature vector from the features of the key points.
The collected facial image may first be normalized in size and angle and converted to a grayscale image.
In this method, local binary pattern (Local Binary Patterns, LBP) features are used for image feature extraction. The specific procedure is to select a plurality of key points in the facial image and extract the local binary pattern feature at each key point.
The local binary pattern feature is extracted by taking the gray value of the window's center pixel as a threshold and comparing it with the gray values of the 8 adjacent pixels: a surrounding pixel is set to 1 if its value is greater than or equal to the threshold, and to 0 otherwise.
Specifically, define the function:
s(x) = 1 if x >= 0, and s(x) = 0 otherwise.
The local binary pattern feature extracted at a key point is then described as:
LBP_{P,R} = sum_{p=0}^{P-1} s(g_p - g_c) * 2^p
where g_c denotes the gray value of the center pixel, g_p denotes the gray value of the p-th neighborhood pixel, P denotes the number of neighborhood points, and R denotes the neighborhood radius.
After feature extraction at a key point, a string of binary digits is obtained. In the classical case P = 8 and R = 1.0, which yields an 8-bit string with 256 possible states. In practice these 256 states do not occur with equal probability; to compress the number of states, the different states are distinguished by the number of 0/1 transitions in the bit string. In more than 90% of cases, the string contains at most two 0/1 transitions.
Define:
U(LBP_{P,R}) = |s(g_{P-1} - g_c) - s(g_0 - g_c)| + sum_{p=1}^{P-1} |s(g_p - g_c) - s(g_{p-1} - g_c)|
When U <= 2, there are 58 distinct LBP patterns; all patterns with U > 2 are classified as one and the same state. The number of states is thereby compressed to 58 + 1 = 59.
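As an illustration of the LBP and uniform-pattern computation described above (a minimal sketch; the function names are hypothetical and not part of the patent), the P = 8, R = 1 case can be written as:

```python
import numpy as np

def lbp_8_1(img, y, x):
    """Basic LBP for P = 8, R = 1: compare the 8 neighbors of pixel
    (y, x) with the center pixel and pack the results into one byte."""
    center = img[y, x]
    # Neighbor offsets, ordered clockwise starting from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for p, (dy, dx) in enumerate(offsets):
        if img[y + dy, x + dx] >= center:   # s(g_p - g_c) = 1 when g_p >= g_c
            code |= 1 << p
    return code

def transitions(code, bits=8):
    """U value: the number of 0/1 transitions in the circular bit pattern."""
    pattern = [(code >> p) & 1 for p in range(bits)]
    return sum(pattern[p] != pattern[(p + 1) % bits] for p in range(bits))
```

For 8-bit codes, exactly 58 patterns satisfy `transitions(code) <= 2`; grouping all others into one bin gives the 58 + 1 = 59 states described above.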
A plurality of key points are chosen by fitting on the collected facial image, selecting positions that exhibit obvious features, such as the eyes, nose, mouth, eyebrows, and face contour; feature extraction is then performed at the key points that work best. For example, 68 facial key points can be fitted on the facial image, and the best-performing among them selected for feature extraction.
Preferably, in a specific implementation of this embodiment, to increase the generality of the model, this step further includes performing multi-scale scaling on the facial image: for the same key point, features are extracted from the facial image at each scale and concatenated, and the features of all key points are then concatenated to form the feature vector.
During feature extraction, pyramid multi-scale scaling is applied to the facial image. For the same key point, a feature is extracted from the image at each scale and the features are concatenated; the features of all key points are then concatenated to form the feature vector. By applying pyramid multi-scale scaling to the facial image, fine features are extracted at the large scales while coarse features are extracted at the small scales, which improves the precision of the feature extraction. The number of scaling steps can be tuned and tested so that the best result is achieved at an acceptable amount of computation.
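The pyramid scaling and concatenation step can be sketched as follows (a simplified illustration using naive subsampling rather than a smoothed pyramid; `extract` stands for any per-key-point feature extractor such as LBP, and all names are hypothetical):

```python
import numpy as np

def pyramid(img, levels=3):
    """Build a simple image pyramid by halving the resolution at each
    level (a stand-in for proper smoothed pyramid scaling)."""
    out = [img]
    for _ in range(levels - 1):
        out.append(out[-1][::2, ::2])
    return out

def multiscale_feature(img, keypoints, extract, levels=3):
    """For each key point, extract a feature at every pyramid level and
    concatenate; then concatenate across key points into one vector."""
    feats = []
    for (y, x) in keypoints:
        for lvl, im in enumerate(pyramid(img, levels)):
            s = 2 ** lvl                  # key-point coordinates shrink with the image
            feats.append(extract(im, y // s, x // s))
    return np.asarray(feats, dtype=float)
```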
Preferably, this step also includes performing dimension compression on the feature vector.
The features extracted at the key points of the facial image are concatenated to form the feature vector, which is high-dimensional: on the order of 10k to 100k dimensions. In this method, principal component analysis (Principal Component Analysis, PCA) is used to compress the feature vector, and the compressed feature vector has 200 to 2000 dimensions. Reducing the dimensionality of the feature vector reduces the amount of computation in subsequent operations.
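The PCA compression can be sketched with a plain SVD (an illustrative implementation, not necessarily the one used by the method; in practice the projection would be fitted on the training set and reused for every new feature vector):

```python
import numpy as np

def pca_fit(X, k):
    """Fit PCA on the rows of X (one feature vector per row) and keep
    the top-k principal directions."""
    mean = X.mean(axis=0)
    # SVD of the centered data; the rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def pca_transform(X, mean, components):
    """Project feature vectors onto the retained components."""
    return (X - mean) @ components.T
```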
Preferably, in this method, when dimension compression is performed on the feature vector, the computation is optimized for the computing chip (i.e., the CPU): during matrix multiplication, the computing chip is controlled to preferentially access contiguous memory regions and to perform the operations in parallel. This greatly increases the computation speed, by more than a factor of ten.
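The memory-layout point can be illustrated as follows (a sketch only: forcing a contiguous layout before a BLAS-backed multiply lets the kernel stream over sequential addresses, and the multiply itself runs in parallel across cores):

```python
import numpy as np

def fast_project(features, components):
    """Project feature vectors onto PCA components with both operands
    laid out contiguously in memory, so the matrix multiply reads
    sequential addresses; the BLAS-backed `@` also parallelizes the work."""
    f = np.ascontiguousarray(features, dtype=np.float64)
    c = np.ascontiguousarray(components, dtype=np.float64)
    return f @ c.T
```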
In addition, when the facial image of the user to be identified is collected, the user's face can be captured automatically by a camera: the user turns his or her head in front of the camera for a few seconds to complete the automatic collection. Since face length and interpupillary distance do not differ greatly between people, the projection matrix of the collected facial image can be calculated from an average face model, and the face angle calculated from the projection matrix; facial images whose face angle lies within a preset range are then selected from the collected images as input. The projection matrix is calculated from the average face model and the detected face model, and the face angle is estimated, with the face taken to be at 0 degrees when directly facing the camera. If the face angle is too large, accurate key points cannot be extracted, which affects recognition accuracy. Therefore, images with a suitable angle are chosen from the collected facial images as the input facial images. Sample images can also be added manually, and the sample images are continuously updated as the user uses the system, so as to achieve the best result.
S11: computing the feature vector with a training matrix to generate a model, wherein the training matrix is composed of the covariance matrices obtained by inputting the feature vectors, extracted from sample facial images, into a joint Bayesian model for training.
In this method, model training is performed with a joint Bayesian model. Its basic idea is to decompose a face into two parts: a component a representing the differences between the same facial region of different persons, and a component b representing the differences of the same facial region of the same person under different conditions. The variables a and b follow the Gaussian distributions N(0, Sa) and N(0, Sb), respectively.
The log-likelihood ratio r(x1, x2) of two faces can be obtained by computing the covariance matrices Sa and Sb. Let Hs denote the hypothesis that the two faces belong to the same person and Hd the hypothesis that they belong to different persons; the similarity of the two faces is discriminated by the log-likelihood ratio, whose formula is described as:
r(x1, x2) = x1' A x1 + x2' A x2 - 2 x1' G x2
where A = (Sa + Sb)^{-1} - (F + G), F = Sb^{-1}, and G = -(2Sa + Sb)^{-1} Sa Sb^{-1}. The iteration threshold can be set to 10^{-6}.
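Given covariance matrices Sa and Sb (however obtained), the matrices A and G and the log-likelihood ratio above can be computed as a direct transcription of the formulas (the training that produces Sa and Sb is not shown, and the function names are hypothetical):

```python
import numpy as np

def joint_bayes_matrices(Sa, Sb):
    """Closed-form A and G from the between-person covariance Sa and
    the within-person covariance Sb."""
    F = np.linalg.inv(Sb)                                   # F = Sb^-1
    G = -np.linalg.inv(2 * Sa + Sb) @ Sa @ np.linalg.inv(Sb)
    A = np.linalg.inv(Sa + Sb) - (F + G)
    return A, G

def log_likelihood_ratio(x1, x2, A, G):
    """r(x1, x2) = x1'Ax1 + x2'Ax2 - 2 x1'Gx2; a larger value means the
    two faces are more likely the same person."""
    return float(x1 @ A @ x1 + x2 @ A @ x2 - 2 * x1 @ G @ x2)
```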
During training, thousands of same-person and different-person data pairs are generated at random from the sample facial images according to their labels (i.e., the IDs of different persons, each person's facial images sharing one ID), and the covariance matrices are computed with an iterative algorithm. The iterative algorithm may perform its iterations with the EM (expectation-maximization) algorithm, or another type of iterative algorithm may be used.
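The random generation of same-person and different-person pairs can be sketched as follows (the data layout and function name are hypothetical; the EM covariance update itself is omitted):

```python
import random

def make_pairs(samples_by_id, n_pairs, seed=0):
    """samples_by_id: dict mapping a person ID to a list of feature
    vectors. Returns n_pairs tuples (x1, x2, same), alternating
    same-person and different-person pairs, as training data for the
    covariance estimation."""
    rng = random.Random(seed)
    # Same-person pairs need IDs with at least two samples.
    ids = [i for i, s in samples_by_id.items() if len(s) >= 2]
    all_ids = list(samples_by_id)
    pairs = []
    for k in range(n_pairs):
        if k % 2 == 0:                        # same-person pair
            pid = rng.choice(ids)
            x1, x2 = rng.sample(samples_by_id[pid], 2)
            pairs.append((x1, x2, True))
        else:                                 # different-person pair
            a, b = rng.sample(all_ids, 2)
            pairs.append((rng.choice(samples_by_id[a]),
                          rng.choice(samples_by_id[b]), False))
    return pairs
```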
S12: comparing the model with the facial images in the sample library to identify the user.
In this embodiment, comparing the model with the facial images in the sample library to identify the user is implemented by an evaluation function, specifically:
construct C(x, x_j) = 1 if r(x, x_j) >= th, and C(x, x_j) = 0 otherwise, where th denotes a threshold;
where X_i denotes the i-th person and N(X_i) denotes the set of sample facial images of the i-th person;
the evaluation function is expressed as:
V(X_i) = 1 if C(x, x_j) = 1 for every sample x_j in N(X_i), and V(X_i) = 0 otherwise.
If V(X_i) = 1, the user is identified as the i-th person.
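A sketch of the thresholded decision rule described above (hypothetical helper names; `r_values` holds the log-likelihood ratios between the probe feature vector and each sample of person i):

```python
def compare(r_value, th):
    """C(x1, x2): 1 when the log-likelihood ratio clears the threshold."""
    return 1 if r_value >= th else 0

def evaluate(r_values, th):
    """V(Xi): identify the probe as person i only if every sample of
    person i in the library clears the threshold."""
    return 1 if all(compare(r, th) == 1 for r in r_values) else 0
```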
The face recognition method of this embodiment requires few samples for training and iterates quickly; samples can be added later without retraining, so the development cost is low. The method also involves little computation, runs fast, and has high accuracy, making it especially suitable for embedded environments.
A face recognition method provided by the present invention has been described in detail above. Specific examples have been used herein to explain the principle and implementation of the invention, and the description of the above embodiments is intended only to help in understanding the method of the invention and its core idea. It should be pointed out that those skilled in the art may make improvements and modifications to the invention without departing from its principle, and such improvements and modifications also fall within the protection scope of the claims of the present invention.