CN103136504B - Face identification method and device - Google Patents

Face identification method and device

Info

Publication number
CN103136504B
CN103136504B (application CN201110385670.6A)
Authority
CN
China
Prior art keywords
feature
facial image
sample
weight value
cluster
Prior art date
Legal status
Active
Application number
CN201110385670.6A
Other languages
Chinese (zh)
Other versions
CN103136504A (en)
Inventor
黄磊 (Huang Lei)
彭菲 (Peng Fei)
Current Assignee
Hanwang Technology Co Ltd
Original Assignee
Hanwang Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hanwang Technology Co Ltd filed Critical Hanwang Technology Co Ltd
Priority to CN201110385670.6A priority Critical patent/CN103136504B/en
Publication of CN103136504A publication Critical patent/CN103136504A/en
Application granted granted Critical
Publication of CN103136504B publication Critical patent/CN103136504B/en

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a face identification method and device. The method comprises: a cluster-feature extraction step of extracting a cluster feature from a preprocessed facial image; a determining step of determining, from the cluster feature extracted from the facial image, which cluster-feature class obtained by prior training the facial image matches; a recognition-feature extraction step of extracting P kinds of recognition features from the preprocessed facial image, where P is a natural number greater than 1; and a calculation step of computing, respectively, the similarities between the P recognition features and their corresponding features in a pre-registered face template, and of determining, according to the cluster-feature class found in the determining step, the best weight combination of the P recognition features for weighted fusion, so as to obtain the overall similarity between the facial image and the face template. The method of the invention can effectively improve face recognition performance.

Description

Face identification method and device
Technical field
The present invention relates to digital image processing and computer-vision-based pattern recognition, and in particular to a face identification method and device.
Background technology
Biometric identification is an effective technology for identity verification; among biometric technologies, face recognition, and the technologies fused with it, have developed fastest in recent years.
To improve the performance of face recognition classifiers, weighted fusion of multiple features is commonly adopted at present. Different features differ in recognition performance, and weighting means fusing different features with different weights. The weight of each feature is determined by the characteristics of the feature itself (separability, discriminability, etc.), so different fused features correspond to different fusion weights: features with good recognition performance are given larger weights, and features with poor recognition performance smaller ones. The patent application published as CN101276421A proposes fusing facial-component similarities R1, R2, R3, R4, R5 and a Gabor face-feature similarity R6 by a weighted-sum rule to obtain an overall face similarity R0, with the fusion coefficients set to 24:15:7.5:6:9:41 respectively.
In face recognition applications, however, factors such as illumination conditions and pose affect the result. Under different illumination conditions and different poses, the recognition performance of each feature is not constant, so the best fusion weights under different conditions also change. If the fusion weights of the features are fixed while recognizing faces under varying conditions, recognition performance will decline.
Summary of the invention
The object of the invention is to propose an adaptive feature-weighting scheme that solves the problem that fixed-weight schemes lose recognition performance when the recognition conditions change, e.g. when illumination variation or pose variation alters the relative performance of the different features.
The invention provides a face identification method, comprising: a cluster-feature extraction step of extracting a cluster feature from a preprocessed facial image; a determining step of determining, from the cluster feature extracted from said facial image, the cluster-feature class, obtained by prior training, that said facial image matches; a recognition-feature extraction step of extracting P kinds of recognition features from the preprocessed facial image, where P is a natural number greater than 1; and a calculation step of computing, respectively, the similarities between said P recognition features and their corresponding features in a pre-registered face template, and of determining, according to the cluster-feature class found in said determining step, the best weight combination of said P recognition features for weighted fusion, so as to obtain the overall similarity between said facial image and said face template.
The invention also provides a face identification device, comprising: a cluster-feature extraction unit for extracting a cluster feature from a preprocessed facial image; a determining unit for determining, from the cluster feature extracted from said facial image, the cluster-feature class, obtained by prior training, that said facial image matches; a recognition-feature extraction unit for extracting P kinds of recognition features from the preprocessed facial image, where P is a natural number greater than 1; and a computing unit for computing, respectively, the similarities between said P recognition features and their corresponding features in a pre-registered face template, and for determining, according to the cluster-feature class determined by said determining unit, the best weight combination of said P recognition features for weighted fusion, so as to obtain the overall similarity between said facial image and said face template.
The face identification method and device provided by the invention employ an adaptive weighting scheme. Compared with fixed-weight schemes, this scheme is more flexible and keeps the multi-feature fusion performance during face recognition at, or close to, its best.
Accompanying drawing explanation
Fig. 1 is the schematic block diagram of the face identification method of the embodiment of the present invention.
Fig. 2 is the flow block diagram of the face identification method of the embodiment of the present invention.
Fig. 3 is the detailed flowchart of the face identification method of the embodiment of the present invention.
Fig. 4 is a schematic diagram of the LBP computation process.
Fig. 5 is a schematic diagram showing some Uniform LBP patterns.
Fig. 6 is a schematic diagram of the basic process of face recognition using LBP features.
Fig. 7 is the structural block diagram of the face identification device of the embodiment of the present invention.
Embodiment
In one embodiment of the invention, a cluster feature is first extracted from every facial image of the training sample set, and unsupervised clustering (i.e. clustering samples of unknown class), such as the K-means algorithm, divides the training set into K classes. Recognition features are then extracted from the training images, and for the K classes of training samples, K groups of best weight combinations of the recognition features are computed. The best weights are obtained by, e.g., maximizing the recognition rate, minimizing the equal error rate, or maximizing the pass rate on each class of samples. Then, at recognition time, the cluster feature used for judging the cluster class and the multiple recognition features used for identification are extracted from the test facial image. According to its cluster feature, the image to be recognized is assigned to the class of the nearest K-means cluster, the corresponding best weights are looked up, the multiple recognition features are fused, and the recognition result is obtained.
The cluster feature may be an illumination-estimation feature, a pose feature, or any other feature with a large influence on recognition. For different recognition environments, the cluster feature can be chosen according to the conditions that affect the recognition result; the invention imposes no limitation in this respect.
Fig. 1 is the schematic block diagram of the face identification method of the embodiment of the present invention. As shown in Fig. 1, the method comprises a training process and a recognition process. In the training process, the training sample set is first divided into K classes according to the cluster feature, and then the K groups of best weight combinations of the recognition features are computed for the training samples of the K classes. As shown in the flow on the left of Fig. 1, the training process comprises a sample collection step, a sample cluster-feature extraction step, a classifying step, and a best-weight calculation step. In the recognition process, the class of the tested object is first determined, and the best weights of the recognition features for that class are then selected for feature fusion, yielding the best recognition result. As shown in the flow on the right of Fig. 1, the recognition process comprises a cluster-feature extraction step, a determining step, a recognition-feature extraction step, and a calculation step. The implementation of the training and recognition processes is described in detail below.
The present embodiment describes the method of the invention using an illumination-estimation feature as the cluster feature. Fig. 2 is the flow block diagram of the face identification method of the embodiment, and Fig. 3 is its detailed flowchart. The method is described below with reference to Figs. 2 and 3.
As shown in the flow on the left of Fig. 3, in one embodiment the training process mainly comprises the steps of constructing the training sample set, extracting illumination-estimation features, classifying the training sample set, extracting the recognition features of each class of training samples, and computing the best weights of each recognition feature for each class of samples.
As shown in step S1, the training sample set is constructed. It contains samples under various illumination conditions, distributed as uniformly as possible. The training set used in the invention consists of infrared face images collected around the clock, covering light intensities from dim to bright. The collected training samples usually need to be preprocessed first. In embodiments of the invention, the preprocessing applied to the original facial images mainly comprises face localization, image alignment, size adjustment, and normalization of the image's gray level and variance. After preprocessing, all images have the same size, their gray levels are unified to a standard level, and the gray levels are clearly distinguishable.
As shown in step S2, illumination-estimation features are extracted from the preprocessed training samples. For each input training sample, features related to illumination — referred to in the invention as illumination-estimation features — are extracted. In embodiments of the invention these include, but are not limited to, the gray mean and variance of the image. Extracting an image's gray mean and variance is well known to those skilled in the art and is not repeated here.
As shown in step S3, the training sample set is classified. Specifically, according to the illumination-estimation feature extracted from each sample, the samples of the training set are classified by illumination. In embodiments of the invention, the K-means algorithm (or another clustering method) automatically divides the training set into K classes according to the illumination-estimation features, K being a natural number, and yields the cluster centres of the K classes.
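As an illustration of step S3, the sketch below clusters scalar illumination features (gray means) with a plain K-means loop. This is a minimal, hypothetical Python example — the patent does not prescribe an implementation, and the function name and toy values are invented for illustration.

```python
import random

def kmeans_1d(values, k, iters=50, seed=0):
    """Plain K-means on scalar illumination features (e.g. gray means)."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    for _ in range(iters):
        # assign each sample to its nearest cluster centre
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        # recompute each centre as the mean of its cluster
        new_centers = [sum(c) / len(c) if c else centers[i]
                       for i, c in enumerate(clusters)]
        if new_centers == centers:   # converged
            break
        centers = new_centers
    return centers

# toy gray-mean values: a dark group and a bright group
gray_means = [20, 25, 22, 200, 210, 205]
centers = sorted(kmeans_1d(gray_means, k=2))
print(centers)  # two centres, one near 22.3 and one near 205
```

The returned centres are what step S7 later compares against to assign a test image to an illumination class.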
As shown in step S4, recognition features are extracted from the samples of each of the K classes of training samples; the similarities between the P extracted recognition features and the corresponding features of the face templates are computed; and the fused similarities obtained by weighting the individual similarities with different weights are evaluated, so as to obtain, for each of the K classes, the weight combination of the P recognition features corresponding to the maximum pass rate, as shown in Fig. 2, where P is a natural number greater than 1.
First, P (P ≥ 2) kinds of features used for identification are extracted from each of the K classes of training samples; these include, but are not limited to, global features such as global Fourier features and local features such as local binary pattern (LBP) features.
The present embodiment takes the LBP feature as an example. The local binary pattern (LBP) feature was first proposed by T. Ojala et al. in the study of texture classification; it is a method of characterizing the neighborhood of a pixel by a binary pattern. Because it is insensitive to illumination changes and simple and fast to compute, the operator is now widely used in fields such as texture analysis and image recognition.
The original LBP operator defines, for each pixel of the image, a 3 × 3 texture cell centred on that pixel, and then binarizes the other 8 pixels of the window using the gray value of the centre pixel as a threshold. Reading the 0/1 values of the surrounding points in a fixed order forms a binary number, which is the descriptor of the point, as shown in Fig. 4. Since only the magnitude of each point relative to the centre gray value is needed, the overall brightness has no effect on the LBP value.
In an embodiment of the invention, the improved Uniform LBP (ULBP) operator is adopted. Uniform LBP refers to those binary patterns whose LBP bit string contains at most two transitions, from 1 to 0 or from 0 to 1. There are 58 such LBP descriptors. If, on the circle of the LBP texture cell, points whose gray value is lower than the centre's are marked as white and the remaining points as black, a Uniform LBP is one in which the white points and the black points each form a single contiguous segment. Fig. 5 shows 9 kinds of Uniform LBP; rotating them yields all 58 Uniform LBPs. Uniform LBPs mainly describe micro-features of texture such as lines, edges, and corners; all remaining patterns are grouped into one additional class, called the non-uniform pattern, which supplements the Uniform LBPs. An image can thus be described by the 58 Uniform LBPs plus the non-uniform class, i.e. represented by 59 LBP descriptors. The improved operator is called the ULBP operator.
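The uniform-pattern definition above can be checked mechanically: counting the 8-bit codes whose circular bit string has at most two 0↔1 transitions indeed gives 58, and a 3 × 3 cell yields an LBP code by thresholding its 8 neighbours against the centre. A minimal sketch — the neighbour reading order and the use of ≥ as the threshold test are one fixed choice among several; the patent only requires that the order be fixed:

```python
def is_uniform(pattern, bits=8):
    """A pattern is 'uniform' if its circular bit string has at most
    two 0->1 / 1->0 transitions."""
    transitions = sum(
        ((pattern >> i) & 1) != ((pattern >> ((i + 1) % bits)) & 1)
        for i in range(bits)
    )
    return transitions <= 2

# exactly 58 of the 256 possible 8-bit codes are uniform
uniform_codes = [p for p in range(256) if is_uniform(p)]
print(len(uniform_codes))  # 58

def lbp_code(cell):
    """LBP code of the centre of a 3x3 cell: threshold the 8 neighbours
    against the centre gray value, reading bits clockwise from top-left."""
    center = cell[1][1]
    neighbours = [cell[0][0], cell[0][1], cell[0][2], cell[1][2],
                  cell[2][2], cell[2][1], cell[2][0], cell[1][0]]
    return sum(1 << i for i, v in enumerate(neighbours) if v >= center)

cell = [[6, 5, 2],
        [7, 6, 1],
        [9, 8, 7]]
print(lbp_code(cell))  # 241: neighbours at bit positions 0,4,5,6,7 are >= 6
```

The code 241 (binary 11110001) has exactly two circular transitions, so it falls into one of the 58 uniform bins.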
The prior art provides a method of extracting the ULBP feature of a face: the image is first divided into multiple small region blocks, a ULBP histogram is extracted within each block, and finally the histograms of all blocks are concatenated into one vector, as shown in Fig. 6. Representing the image by this vector has two advantages: (1) the local histograms describe the texture information of the image, and (2) the concatenated histogram describes the spatial structure of the image. The ULBP histogram feature therefore contains both the shape information and the texture information of the image. As the resulting ULBP vector is of very high dimension, PCA or LDA dimensionality reduction is generally needed.
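The block-histogram construction can be sketched as follows. For brevity the toy grid of per-pixel pattern codes uses 4 possible codes instead of the 59 ULBP bins; this is an illustrative simplification, and the grid values are invented.

```python
def blockwise_histogram(codes, block, bins):
    """Split a 2-D grid of per-pixel pattern codes into non-overlapping
    block x block regions, histogram each region, and concatenate the
    histograms into one feature vector."""
    h, w = len(codes), len(codes[0])
    feature = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            hist = [0] * bins
            for y in range(by, min(by + block, h)):
                for x in range(bx, min(bx + block, w)):
                    hist[codes[y][x]] += 1
            feature.extend(hist)
    return feature

# 4x4 grid of toy pattern codes (4 possible codes), split into 2x2 blocks
codes = [[0, 1, 2, 2],
         [1, 0, 2, 3],
         [3, 3, 0, 0],
         [3, 3, 1, 0]]
vec = blockwise_histogram(codes, block=2, bins=4)
print(len(vec))  # 4 blocks x 4 bins = 16
```

With 59 ULBP bins and many blocks the vector becomes long, which is why the text applies PCA or LDA afterwards.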
After the recognition features are extracted, the best weight combinations of the recognition features are computed for each of the K classes of samples separately; i.e., for the partitioned K classes of training samples, K groups of best weights of the recognition features are computed. A best weight combination can be obtained by maximizing the recognition rate, minimizing the equal error rate, maximizing the pass rate, etc. on each class of samples. The present embodiment maximizes the recognition rate and takes the weights at which the recognition rate is maximal as the best weights; the specific implementation is as follows.
Suppose the standard template set $T$ contains $M$ face samples, $T=\{t_1, t_2, \dots, t_M\}$, where template $t_i$ has label $\mathrm{label}t_i$ $(i=1,\dots,M)$; the label can be understood as the identity mark of the face. Let $X_k=\{x_1, x_2, \dots, x_N\}$ be the subset of the training set $X$ assigned to the $k$-th illumination class, containing $N$ face samples, where training sample $x_j$ has label $\mathrm{label}x_j$ $(j=1,\dots,N)$. There are $P$ kinds of features used for identification. Denoting the weight of the $l$-th feature in the best weight combination of the $k$-th class as $\omega_k^l$, the best weight combination of the $k$-th class can be expressed as $W_k=\{\omega_k^1,\dots,\omega_k^P\}$, and the overall set of best weights as $W=\{W_k,\, k=1,\dots,K\}$.
Given a training sample $x_n$ and a template $t_m$, let the Euclidean distances between their corresponding features (used to assess similarity to the template) be $s_{m,n}^1, \dots, s_{m,n}^P$. For a given weight combination $\omega_1, \dots, \omega_P$, the fused distance between training sample $x_n$ and template $t_m$ is $s_{m,n}^0 = \omega_1 s_{m,n}^1 + \dots + \omega_P s_{m,n}^P$.
Extending this to the whole template set $T$: for a given training sample $x_n$, the sequence of fused distances to the template set is $\{s_{1,n}^0, \dots, s_{M,n}^0\}$, and the distance to the closest template in $T$ is $s_n^0 = \min_m s_{m,n}^0$. Suppose the template attaining this minimum distance is $t_m$; by the nearest-neighbor rule, under the given weights sample $x_n$ matches template $t_m$. The following judgment is then made: if $\mathrm{label}t_m = \mathrm{label}x_n$, the recognition is correct; otherwise it is an error.
Extending this further to the whole training subset $X_k$ and the whole template set $T$: for the training set $X_k$, the sequence of minimum fused distances to $T$ is $\{s_1^0, \dots, s_N^0\}$, with corresponding matching-template labels $\mathrm{label}r_j$ $(j=1,\dots,N)$. The number of correctly recognized samples is $\mathrm{RecNum} = \sum_{j=1}^{N} (\mathrm{label}x_j == \mathrm{label}r_j)$, and the recognition rate is $\mathrm{RecRate} = \mathrm{RecNum}/N \times 100\%$.
Traversing the weight combinations with a fixed step yields different recognition rates; the weight combination corresponding to the maximum recognition rate is recorded as the best weight combination $W_k$ of this class, and the similarity value corresponding to a false acceptance rate of 0.1% is recorded as the threshold $T_k$.
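The fixed-step traversal can be sketched for $P=2$ features under the constraint $\omega_1+\omega_2=1$ (the patent does not state this normalization; it is assumed here to keep the search one-dimensional). The toy distance matrices are invented so that the second feature separates the identities and the first does not:

```python
def recognition_rate(weights, feat_dists, sample_labels, template_labels):
    """Fraction of samples whose nearest template under the fused
    distance carries the sample's own label (the RecRate above)."""
    correct = 0
    for n, true_label in enumerate(sample_labels):
        fused = [sum(w * feat_dists[p][m][n] for p, w in enumerate(weights))
                 for m in range(len(template_labels))]
        best_m = min(range(len(fused)), key=fused.__getitem__)
        correct += (template_labels[best_m] == true_label)
    return correct / len(sample_labels)

def best_weights(feat_dists, sample_labels, template_labels, step=0.1):
    """Fixed-step traversal of (w1, w2) with w1 + w2 = 1, keeping the
    combination with the highest recognition rate."""
    best_rate, best_w = -1.0, None
    for i in range(int(round(1 / step)) + 1):
        w = (i * step, 1.0 - i * step)
        rate = recognition_rate(w, feat_dists, sample_labels, template_labels)
        if rate > best_rate:
            best_rate, best_w = rate, w
    return best_rate, best_w

# feat_dists[p][m][n]: distance between template m and sample n
# under feature p; feature 2 is discriminative, feature 1 is misleading
feat_dists = [
    [[0.9, 0.1], [0.1, 0.9]],   # feature 1 (misleading)
    [[0.1, 0.9], [0.9, 0.1]],   # feature 2 (discriminative)
]
rate, w = best_weights(feat_dists, ["A", "B"], ["A", "B"])
print(rate, w)  # recognition rate 1.0, all weight on the second feature
```

In practice the traversal runs over $P$-dimensional weight grids per illumination class, and the separate FAR-0.1% threshold $T_k$ is read off the impostor-score distribution, which this sketch omits.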
The best weight combinations and thresholds of the other classes of training samples are all obtained by the same method.
As shown in the flow on the right of Fig. 3, in one embodiment the recognition process mainly comprises the steps of capturing a facial image, extracting the illumination-estimation feature, determining the illumination class, selecting the best weights, extracting the recognition features, and weighted multi-feature fusion to obtain the recognition result. The steps of extracting the illumination-estimation feature and the recognition features are similar to the corresponding steps of the training process and are not repeated here.
As shown in step S5, in the recognition process a facial image is first captured and preprocessed in the same way as in training; then, as shown in step S6, the illumination-estimation feature is extracted from the preprocessed image.
Next, as shown in step S7, the illumination class is determined from the illumination-estimation feature. To classify the illumination of the test image, the invention assigns the test image, according to its extracted illumination-estimation feature (e.g. the gray mean or variance of the image), to the illumination class of the closest K-means cluster of the trained sample model. Concretely, the class can be found by computing the Euclidean distance between the test image's illumination-estimation feature and the cluster centres. If the test image is assigned to the $k$-th illumination class, the corresponding best weight combination is $W_k$ and the corresponding threshold is $T_k$. Then, as shown in step S8, the P recognition features are extracted.
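Step S7's class assignment reduces to a nearest-centre lookup. A minimal sketch with hypothetical (gray mean, gray variance) centres — the numbers are invented for illustration:

```python
def nearest_cluster(feature, centers):
    """Index of the cluster centre closest, by Euclidean distance, to
    the illumination feature vector (here: gray mean, gray variance)."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(range(len(centers)), key=lambda k: dist2(feature, centers[k]))

# hypothetical cluster centres from training: dark, medium, bright
centers = [(25.0, 40.0), (120.0, 55.0), (210.0, 30.0)]
k = nearest_cluster((115.0, 50.0), centers)
print(k)  # 1 -> look up weight combination W_k and threshold T_k
```

The returned index selects which precomputed $(W_k, T_k)$ pair the fusion in step S9 uses.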
Subsequently, in step S9, the similarities between the P recognition features and the corresponding P recognition features of the face template are computed, and weighted fusion is carried out with the best weight combination of the P recognition features corresponding to the obtained illumination class, yielding the fused similarity, as shown in Fig. 2. Specifically, the similarity of each recognition feature to its corresponding feature in the face template is computed — e.g. the similarity between a region of the facial image (such as the eyes or forehead) and the corresponding region of the face template — and the weight of each recognition feature in the fusion is determined according to the illumination class, so as to obtain the overall similarity between the facial image and the face template.
In the invention, suppose again that the template set $T$ contains $M$ face samples, $T=\{t_1, t_2, \dots, t_M\}$, template $t_i$ having label $\mathrm{label}t_i$ $(i=1,\dots,M)$. Taking the Euclidean distance between the $l$-th feature values of the face $y$ to be recognized and of template $t_m$ as the similarity $s_m^l$, and fusing the similarities with the best weights $W_k$ obtained in step S4, the fused distance between $y$ and $t_m$ is $s_m^0 = \omega_k^1 s_m^1 + \omega_k^2 s_m^2 + \dots + \omega_k^P s_m^P$. Traversing the whole template set $T$, the sequence of fused distances between $y$ and $T$ is $\{s_1^0, \dots, s_M^0\}$, and the closest template distance is $s^0 = \min_m s_m^0$. If the template attaining this minimum distance is $t_m$, then the face $y$ matches template $t_m$.
Finally, the minimum Euclidean distance obtained in step S9 — i.e. the maximum similarity $s^0$ — is compared with the predetermined threshold $T_k$. If $s^0 \ge T_k$, recognition succeeds and the result is the label $\mathrm{label}t_m$ of template $t_m$; if $s^0 < T_k$, the sample is rejected.
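Step S9 and the final decision can be sketched together: fuse the P per-feature similarities with the class weights $W_k$, take the best template, and accept only if the fused similarity reaches $T_k$. All names, weights, and scores below are illustrative, and the sketch follows the similarity reading (larger is better) used in the acceptance rule above:

```python
def fused_similarity(feat_sims, weights):
    """Weighted fusion of the P per-feature similarities between the
    probe face and one template."""
    return sum(w * s for w, s in zip(weights, feat_sims))

def recognize(sims_per_template, weights, threshold, labels):
    """Fuse the similarities for every template, take the best match,
    and accept only if its fused similarity reaches the class
    threshold T_k; otherwise reject."""
    fused = [fused_similarity(s, weights) for s in sims_per_template]
    best = max(range(len(fused)), key=fused.__getitem__)
    if fused[best] >= threshold:
        return labels[best]
    return None  # rejection

# P = 2 per-feature similarities of the probe against 3 templates
sims = [(0.55, 0.60), (0.90, 0.85), (0.40, 0.30)]
w_k, t_k = (0.3, 0.7), 0.80   # looked up for illumination class k
result = recognize(sims, w_k, t_k, ["alice", "bob", "carol"])
print(result)  # bob (fused similarity 0.865 >= 0.80)
```

Only the lookup of $(W_k, T_k)$ differs between illumination classes; the fusion and decision logic are identical for all K classes.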
Finally, as shown in Fig. 2, the recognition result is output according to the obtained fused similarity.
The embodiments of the invention have been described with an illumination-estimation feature as the cluster feature. It should be clear that if pose has a large influence on recognition, a pose feature can equally be adopted as the cluster feature to realize the above method. First, training samples covering various pose conditions are collected and preprocessed; pose features are extracted from the preprocessed training samples; the samples of the training set are classified according to the extracted pose features, and the best weight combinations of the P recognition features corresponding to the samples of the K pose classes are obtained. At recognition time, the pose feature of the captured facial image to be recognized is first extracted, the pose class k to which it belongs is determined, and the best weight combination $W_k$ of the P recognition features for that pose class and the similarity threshold $T_k$ are obtained; then the P recognition features of the facial image are extracted, their similarities to the corresponding P recognition features of the face template are computed, weighted fusion is performed with the best weight combination of the P recognition features of the obtained pose class, yielding the fused similarity; and the recognition decision is made from the fused similarity and the threshold. The pose feature may be the coordinates of the nose tip and the mouth corners in the preprocessed facial image: since preprocessing usually normalizes the image with the eye coordinates as reference points, fixing the two eyes at certain coordinates, clustering on the mouth corners and nose tip as the pose feature determines the pose class well.
The invention also provides a face identification device. As shown in Fig. 7, the device comprises a cluster-feature extraction unit 701, a determining unit 702, a recognition-feature extraction unit 703, and a computing unit 704. The cluster-feature extraction unit 701, with reference to the cluster-feature extraction step shown in Fig. 1, extracts a cluster feature from a preprocessed facial image. The determining unit 702, with reference to the determining step shown in Fig. 1, determines, from the cluster feature extracted from said facial image, the cluster-feature class, obtained by prior training, that said facial image matches. The recognition-feature extraction unit 703, with reference to the recognition-feature extraction step shown in Fig. 1, extracts P kinds of recognition features from the preprocessed facial image, where P is a natural number greater than 1. The computing unit 704, with reference to the calculation step shown in Fig. 1, computes, respectively, the similarities between said P recognition features and their corresponding features in a pre-registered face template, and determines, according to the cluster-feature class determined by the determining unit, the best weight combination of said P recognition features for weighted fusion, so as to obtain the overall similarity between said facial image and said face template. When embodiments of the invention adopt the illumination-estimation feature as the cluster feature, the cluster-feature extraction unit 701, determining unit 702, recognition-feature extraction unit 703, and computing unit 704 realize the corresponding operations with reference to steps S5, S6, S7, and S8 of Fig. 3, respectively.
The above face identification device further comprises a sample collection unit, a sample cluster-feature extraction unit, a classifying unit, and a best-weight computing unit. The sample collection unit, with reference to the sample collection step shown in Fig. 1, collects facial image samples under multiple cluster-feature conditions to construct the training sample set. The sample cluster-feature extraction unit, with reference to the sample cluster-feature extraction step shown in Fig. 1, extracts cluster features from the preprocessed samples of the training set. The classifying unit, with reference to the classifying step shown in Fig. 1, divides the samples of the training set into K classes by unsupervised clustering according to the cluster features extracted from the training samples, K being a positive integer. The best-weight computing unit, with reference to the best-weight calculation step shown in Fig. 1, extracts the P kinds of recognition features of each sample in the K classes of samples, computes, respectively, the similarities between the P recognition features and their corresponding features in the preset standard face template, evaluates the fused similarity obtained by weighting the individual similarities with different weights, and obtains the best weight combinations of the recognition features. When embodiments of the invention adopt the illumination-estimation feature as the cluster feature, the sample collection unit, sample cluster-feature extraction unit, classifying unit, and best-weight computing unit realize the corresponding operations with reference to steps S1, S2, S3, and S4 of Fig. 3, respectively.
In addition, the above best-weight computing unit further comprises a threshold determination unit, which determines, by the similarity-threshold method described above, the similarity threshold corresponding to each best weight combination.
The invention proposes an adaptive feature-weighting scheme that solves the problem that fixed-weight schemes cause face recognition performance to decline when the recognition environment changes — e.g. when drastic illumination changes or pose changes alter the relative performance of the different features — and keeps the fusion performance at, or close to, its best.
The application of the invention may extend to the case where both the registration set and the test set involve illumination variation. Clustering the template set and the training sample set into K classes each yields K×K illumination combinations, and the best weights and threshold of each illumination combination are obtained separately. In face recognition applications, the cluster classes of the test image and of the registered image are determined respectively, and the corresponding best weights are selected for the fusion of the recognition features, improving recognition performance.
Those of ordinary skill in the art will appreciate that all or part of the flows of the above method embodiments may be implemented by a computer program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the flows of the above method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory, a random access memory, or the like.
The above are only specific embodiments of the invention, but the scope of protection of the invention is not limited thereto; any change or replacement that readily occurs to those familiar with the art within the technical scope disclosed by the invention shall be covered by the scope of protection of the invention. The scope of protection of the invention shall therefore be defined by the claims.

Claims (14)

1. A face identification method, characterized by comprising:
a cluster feature extraction step of performing cluster feature extraction on a preprocessed facial image;
a determining step of determining, according to the cluster feature extracted from the facial image, the cluster feature class, obtained by prior training, that matches the facial image;
a recognition feature extraction step of extracting P kinds of recognition features from the preprocessed facial image, where P is a natural number greater than 1; and
a calculation step of calculating the similarity of each of the P kinds of recognition features to its corresponding feature in a pre-registered face template, and determining, according to the cluster feature class determined in the determining step, the best weight combination for weighted fusion of the P kinds of recognition features, so as to obtain a comprehensive similarity between the facial image and the face template.
2. The face identification method according to claim 1, characterized in that, before the recognition feature extraction step, the method comprises:
a sample collection step of collecting facial image samples under a plurality of cluster feature conditions to construct a training sample set;
a sample cluster feature extraction step of performing cluster feature extraction on the preprocessed samples in the training sample set;
a classification step of dividing the samples in the training sample set into K classes by an unsupervised clustering method according to the cluster features extracted from the training samples, where K is a positive integer; and
a best weight calculation step of extracting the P kinds of recognition features of each sample in the K classes of samples, calculating the similarity of each of the P kinds of recognition features to its corresponding feature in a preset standard face template, calculating comprehensive similarities obtained by weighted fusion of the similarities with different weights, and thereby obtaining the best weight combination of the recognition features.
3. The face identification method according to claim 2, characterized in that the unsupervised clustering method comprises a k-means clustering method.
4. The face identification method according to claim 1 or 2, characterized in that the cluster feature comprises an illumination estimation feature, and the illumination estimation feature comprises the gray-level mean or variance of the facial image sample.
5. The face identification method according to claim 1 or 2, characterized in that the cluster feature comprises a pose feature, and the pose feature comprises the coordinate values of the nose tip and the mouth corners in the facial image.
6. The face identification method according to claim 2, characterized in that, in the best weight calculation step, the best weight combination is obtained by maximizing the recognition rate, minimizing the equal error rate, or maximizing the pass rate of the facial image samples.
7. The face identification method according to claim 2, characterized in that the best weight calculation step further comprises a step of determining the similarity threshold corresponding to the best weight combination.
8. A face identification device, characterized by comprising:
a cluster feature extraction unit for performing cluster feature extraction on a preprocessed facial image;
a determining unit for determining, according to the cluster feature extracted from the facial image, the cluster feature class, obtained by prior training, that matches the facial image;
a recognition feature extraction unit for extracting P kinds of recognition features from the preprocessed facial image, where P is a natural number greater than 1; and
a computing unit for calculating the similarity of each of the P kinds of recognition features to its corresponding feature in a pre-registered face template, and for determining, according to the cluster feature class determined by the determining unit, the best weight combination for weighted fusion of the P kinds of recognition features, so as to obtain a comprehensive similarity between the facial image and the face template.
9. The face identification device according to claim 8, characterized by further comprising:
a sample collection unit for collecting facial image samples under a plurality of cluster feature conditions to construct a training sample set;
a sample cluster feature extraction unit for performing cluster feature extraction on the preprocessed samples in the training sample set;
a classification unit for dividing the samples in the training sample set into K classes by an unsupervised clustering method according to the cluster features extracted from the training samples, where K is a positive integer; and
a best weight computing unit for extracting the P kinds of recognition features of each sample in the K classes of samples, calculating the similarity of each of the P kinds of recognition features to its corresponding feature in a preset standard face template, calculating comprehensive similarities obtained by weighted fusion of the similarities with different weights, and thereby obtaining the best weight combination of the recognition features.
10. The face identification device according to claim 9, characterized in that the unsupervised clustering method comprises a k-means clustering method.
11. The face identification device according to claim 8 or 9, characterized in that the cluster feature comprises an illumination estimation feature, and the illumination estimation feature comprises the gray-level mean or variance of the facial image sample.
12. The face identification device according to claim 8 or 9, characterized in that the cluster feature comprises a pose feature, and the pose feature comprises the coordinate values of the nose tip and the mouth corners in the facial image.
13. The face identification device according to claim 9, characterized in that the best weight computing unit obtains the best weight combination by maximizing the recognition rate, minimizing the equal error rate, or maximizing the pass rate of the facial image samples.
14. The face identification device according to claim 9, characterized in that the best weight computing unit comprises a threshold determination unit for determining the similarity threshold corresponding to the best weight combination.
CN201110385670.6A 2011-11-28 2011-11-28 Face identification method and device Active CN103136504B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110385670.6A CN103136504B (en) 2011-11-28 2011-11-28 Face identification method and device

Publications (2)

Publication Number Publication Date
CN103136504A CN103136504A (en) 2013-06-05
CN103136504B true CN103136504B (en) 2016-04-20

Family

ID=48496314

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110385670.6A Active CN103136504B (en) 2011-11-28 2011-11-28 Face identification method and device

Country Status (1)

Country Link
CN (1) CN103136504B (en)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105005763B (en) * 2015-06-26 2019-04-16 李战斌 A kind of face identification method and system based on local feature information excavating
CN106650558A (en) * 2015-11-04 2017-05-10 上海市公安局刑事侦查总队 Facial recognition method and device
CN106228628B (en) * 2016-07-15 2021-03-26 腾讯科技(深圳)有限公司 Check-in system, method and device based on face recognition
CN106250858B (en) * 2016-08-05 2021-08-13 重庆中科云从科技有限公司 Recognition method and system fusing multiple face recognition algorithms
CN106407916A (en) * 2016-08-31 2017-02-15 北京维盛视通科技有限公司 Distributed face recognition method, apparatus and system
CN106845462A (en) * 2017-03-20 2017-06-13 大连理工大学 The face identification method of feature and cluster is selected while induction based on triple
CN107871345A (en) * 2017-09-18 2018-04-03 深圳市盛路物联通讯技术有限公司 Information processing method and related product
CN108875493B (en) * 2017-10-12 2021-04-27 北京旷视科技有限公司 Method and device for determining similarity threshold in face recognition
CN107748866B (en) * 2017-10-20 2020-02-14 河北机电职业技术学院 Illegal parking automatic identification method and device
CN108921006B (en) * 2018-05-03 2020-08-04 西北大学 Method for establishing handwritten signature image authenticity identification model and authenticity identification method
CN109472240B (en) * 2018-11-12 2020-02-28 北京影谱科技股份有限公司 Face recognition multi-model adaptive feature fusion enhancement method and device
CN109753924A (en) * 2018-12-29 2019-05-14 上海乂学教育科技有限公司 It is a kind of for the face identification system of online education, method and application
CN109800699A (en) * 2019-01-15 2019-05-24 珠海格力电器股份有限公司 Image-recognizing method, system and device
CN109815887B (en) * 2019-01-21 2020-10-16 浙江工业大学 Multi-agent cooperation-based face image classification method under complex illumination
CN109670486A (en) * 2019-01-30 2019-04-23 深圳前海达闼云端智能科技有限公司 A kind of face identification method based on video, device and calculate equipment
CN111860062B (en) * 2019-04-29 2023-11-24 中国移动通信集团河北有限公司 Face recognition sample processing method and device
CN110222789B (en) * 2019-06-14 2023-05-26 腾讯科技(深圳)有限公司 Image recognition method and storage medium
CN110244574A (en) * 2019-06-28 2019-09-17 深圳市美兆环境股份有限公司 Intelligent home furnishing control method, device, system and storage medium
CN110647865B (en) * 2019-09-30 2023-08-08 腾讯科技(深圳)有限公司 Face gesture recognition method, device, equipment and storage medium
CN112766013A (en) * 2019-10-21 2021-05-07 深圳君正时代集成电路有限公司 Recognition method for performing multistage screening in face recognition
CN110991258B (en) * 2019-11-11 2023-05-23 华南理工大学 Face fusion feature extraction method and system
CN111291740B (en) * 2020-05-09 2020-08-18 支付宝(杭州)信息技术有限公司 Training method of face recognition model, face recognition method and hardware
TWI792017B (en) * 2020-07-01 2023-02-11 義隆電子股份有限公司 Biometric identification system and identification method thereof

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1908960A (en) * 2005-08-02 2007-02-07 中国科学院计算技术研究所 Feature classification based multiple classifiers combined people face recognition method
CN101510257A (en) * 2009-03-31 2009-08-19 华为技术有限公司 Human face similarity degree matching method and device
CN101587543A (en) * 2009-06-19 2009-11-25 电子科技大学 Face recognition method
CN101847163A (en) * 2010-05-28 2010-09-29 广东工业大学 Design patent image retrieval method with multi-characteristics fusion

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100745981B1 (en) * 2006-01-13 2007-08-06 삼성전자주식회사 Method and apparatus scalable face recognition based on complementary features

Also Published As

Publication number Publication date
CN103136504A (en) 2013-06-05

Similar Documents

Publication Publication Date Title
CN103136504B (en) Face identification method and device
CN110348319B (en) Face anti-counterfeiting method based on face depth information and edge image fusion
Endres et al. Category-independent object proposals with diverse ranking
CN111832605B (en) Training method and device for unsupervised image classification model and electronic equipment
JP4657934B2 (en) Face detection method, apparatus and program
EP3664019A1 (en) Information processing device, information processing program, and information processing method
CN106407911A (en) Image-based eyeglass recognition method and device
JP6897749B2 (en) Learning methods, learning systems, and learning programs
CN104463128A (en) Glass detection method and system for face recognition
CN104680144A (en) Lip language recognition method and device based on projection extreme learning machine
KR101434170B1 (en) Method for study using extracted characteristic of data and apparatus thereof
CN103218610B (en) The forming method of dog face detector and dog face detecting method
CN109993201A (en) A kind of image processing method, device and readable storage medium storing program for executing
US20110235901A1 (en) Method, apparatus, and program for generating classifiers
CN104395913A (en) Method, apparatus and computer readable recording medium for detecting a location of a face feature point using an ADABOOST learning algorithm
CN103839033A (en) Face identification method based on fuzzy rule
CN110008899B (en) Method for extracting and classifying candidate targets of visible light remote sensing image
CN110688888B (en) Pedestrian attribute identification method and system based on deep learning
CN113870254B (en) Target object detection method and device, electronic equipment and storage medium
CN112488229A (en) Domain self-adaptive unsupervised target detection method based on feature separation and alignment
CN104978569A (en) Sparse representation based incremental face recognition method
CN110458022A (en) It is a kind of based on domain adapt to can autonomous learning object detection method
CN109117810A (en) Fatigue driving behavioral value method, apparatus, computer equipment and storage medium
JP4749884B2 (en) Learning method of face discriminating apparatus, face discriminating method and apparatus, and program
CN106326927B (en) A kind of shoes print new category detection method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant