CN102867173B - Human face recognition method and system thereof - Google Patents

Human face recognition method and system thereof

Info

Publication number
CN102867173B
CN102867173B (application CN201210310643.7A)
Authority
CN
China
Prior art keywords
face
class
average
obtains
recognizer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201210310643.7A
Other languages
Chinese (zh)
Other versions
CN102867173A (en)
Inventor
徐向民
罗梦娜
郭咏诗
尹飞云
张阳东
吴丹丹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201210310643.7A priority Critical patent/CN102867173B/en
Publication of CN102867173A publication Critical patent/CN102867173A/en
Application granted granted Critical
Publication of CN102867173B publication Critical patent/CN102867173B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a human face recognition method with a self-learning function and spatio-temporal awareness. The method realizes face recognition through the steps of face detection, face tracking, data acquisition and analysis, recognition, online learning, and human-machine interaction. The invention further discloses a system implementing the method, comprising a detector, a tracker, a collector, an analyzer, an online learning module, a recognizer, and a human-machine interaction module. Compared with the prior art, the method and system offer strong anti-interference capability and high recognition efficiency.

Description

Face recognition method and system therefor
Technical field
The present invention relates to human-computer interaction technology, and in particular to a face recognition method and a system implementing it.
Background art
Human-computer interaction is currently a research focus in computer science worldwide. Within human-computer interaction, face recognition, as the most convenient way for a computer to identify a user and provide personalized service, is progressively being applied in settings such as the smart home. The core of computer-vision-based face recognition is to process the video sequence collected by an image-capture device using computer vision and image processing techniques, classify the user, and respond accordingly.
Existing face recognition technology separately achieves face detection, tracking of detected faces, and extraction of facial features from a given image or video sequence followed by comparison against a face database to identify the user. Here we call a method that simply combines these three modules a conventional face recognition system. However, conventional face recognition systems cannot effectively fuse these individually effective techniques, and therefore suffer from the following problems:
(1) Poor performance. Because a conventional system cannot fuse its modules effectively, it is prone to tracking failure, tracking drift, failure to recognize, and even erroneous recognition, which severely degrades the recognition result.
(2) Poor interactivity. A conventional system cannot interact well with the user. When recognition fails or is uncertain, the system cannot obtain timely feedback from the user, so its performance cannot improve; worse, when recognition is actually wrong, the system may believe it is correct and modify its original parameters with the wrong result (self-learning), making the system progressively worse in use.
(3) Poor adaptability. A conventional system is affected by external conditions such as illumination, beards, glasses, hairstyle, and expression, which reduce the recognition rate, so the system's usability is limited.
Summary of the invention
To overcome the above shortcomings and deficiencies of the prior art, an object of the present invention is to provide a face recognition method that has a self-learning function, spatio-temporal awareness, and strong anti-interference capability.
Another object of the present invention is to provide a face recognition system implementing the above method.
The objects of the present invention are achieved through the following technical solutions:
A face recognition method, comprising the following steps:
S1: The detector checks whether a face is present in the frame sequence; if so, proceed to step S2; if not, repeat step S1.
S2: The tracker tracks the detected face.
S3: The collector collects face images.
S4: The analyzer determines whether the collected face image is a reliable sample; if so, proceed to step S5; if not, repeat steps S2 to S4.
S5: The analyzer extracts shape parameters and texture parameters from the target face in the reliable sample, models the shape and the texture of the target face separately, and obtains the average face of the target face through model fusion.
S6: The recognizer obtains, by the linear-discriminant eigenface method, the matching degree C between the average face and the closest face class in the face class library.
If C < B, proceed to step S7. If C > A, the user's identity is recognized and recognition ends. If B < C < A, the user is asked to input a name through the human-machine interaction module; if a face class corresponding to the input name already exists in the face class library, proceed to step S8, otherwise proceed to step S7. The values of A and B are set empirically by the user.
S7: The online learning module creates a new face class in the face class library, adds the average face of the target face to the new class, labels it with the user name, and passes the class to the recognizer; proceed to step S9.
S8: The online learning module updates the face class corresponding to the input name and passes the updated class to the recognizer; proceed to step S9.
S9: The recognizer updates the face class library.
In step S5, the analyzer extracts shape and texture features from the target face in the reliable sample, models the shape and the texture separately, and obtains the average face of the target face through model fusion, specifically comprising the following steps:
S5.1: Annotate landmark points on the target face in the reliable sample.
S5.2: Model the shape of the face: first apply pairwise Procrustes alignment to the landmarked face images to obtain the mean shape, then obtain the shape parameters and shape model by principal component analysis (PCA) dimensionality reduction.
S5.3: Model the texture of the face: first perform Delaunay triangulation on the mean shape, then fill in the texture by piecewise affine warping, and finally obtain the mean texture model and texture parameters by PCA dimensionality reduction.
S5.4: Form a weighted combination of the shape parameters and texture parameters, apply PCA dimensionality reduction to obtain the fused parameters, and finally obtain the average face.
The linear-discriminant eigenface method of step S6 comprises the following steps:
S6.1: For the average face passed in by the analyzer and the face classes in the face class library, apply the nearest-sample algorithm between classes and within classes to obtain measures of between-class and within-class difference.
S6.2: From the between-class and within-class differences of each face class, obtain the between-class scatter matrix and the within-class scatter matrix.
S6.3: From the scatter matrices obtained in step S6.2, obtain the optimal discriminant vectors using the Fisher discriminant criterion.
S6.4: Project the average face passed in by the analyzer onto the optimal discriminant vectors to obtain low-dimensional feature data.
S6.5: According to the nearest-neighbor matching principle, obtain the matching degree C between the average face and the closest face class in the face class library.
The landmark annotation of the target face in the reliable sample in step S5.1 is specifically:
Annotate landmark points along the contour, eyebrows, eyes, nose, and lips of the target face in the reliable sample, and write the coordinates of the points as a vector.
The update of the face class corresponding to the input name by the online learning module in step S8 is specifically:
The online learning module computes the difference between the average face passed in by the analyzer and every face sample in the class corresponding to the input name; if the difference exceeds the within-class distance, the class is updated, otherwise it is not.
A face recognition system implementing the above method comprises a detector, a tracker, a collector, an analyzer, an online learning module, a recognizer, and a human-machine interaction module; the detector, tracker, collector, analyzer, recognizer, human-machine interaction module, and online learning module are connected in sequence, and the online learning module is additionally connected to the recognizer.
Compared with the prior art, the present invention has the following advantages and beneficial effects:
(1) The invention has a self-learning function and spatio-temporal awareness; by integrating the six major modules of face detection, face tracking, data collection and analysis, recognition, online learning, and supervision, it recognizes faces with strong anti-interference capability.
(2) The invention has good interactivity: when uncertain, the system queries the user and takes its next action based on the user's feedback. Should system performance begin to decline for any reason, the user can issue a reset command through the human-machine interaction module; the system then deletes the data added to the library during use and returns to its initial state, so that performance degradation does not compound.
Brief description of the drawings
Fig. 1 is a block diagram of the face recognition system of the present invention.
Fig. 2 is a flowchart of the face recognition of the present invention.
Fig. 3 is an example of an unreliable sample.
Fig. 4 is an example of a reliable sample.
Fig. 5 is an example of a face with annotated landmarks.
Detailed description of the embodiments
The present invention is described in further detail below with reference to an embodiment and the accompanying drawings, but embodiments of the present invention are not limited thereto.
Embodiment
As shown in Fig. 1, the face recognition system of the present invention comprises a detector, a tracker, a collector, an analyzer, an online learning module, a recognizer, and a human-machine interaction module; the detector, tracker, collector, analyzer, recognizer, human-machine interaction module, and online learning module are connected in sequence, and the online learning module is additionally connected to the recognizer.
As shown in Fig. 2, the face recognition method of the present invention comprises the following steps:
S1: The detector checks whether a face is present in the frame sequence; if so, proceed to step S2; if not, repeat step S1.
The detection process adopts the face detection algorithm proposed by Viola and Jones. In this algorithm, an AdaBoost-trained classifier is selected and a multi-pose classification approach is used, so faces can be detected quickly with a high detection rate.
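The S1 loop can be sketched as a small control-flow function. This is an illustrative sketch only: the actual detector in the embodiment is a Viola-Jones cascade, which is abstracted here as a pluggable `detect` callable, and the `frames` list with its stub detector is an assumption made for demonstration.

```python
# Minimal sketch of the S1 control flow: poll a frame sequence until the
# detector reports a face, then hand off to tracking (S2). The detector
# (Viola-Jones with an AdaBoost-trained cascade in the patent) is passed
# in as a function; the stub below is purely illustrative.
def first_frame_with_face(frames, detect):
    """Return (index, faces) for the first frame containing a face, else None."""
    for i, frame in enumerate(frames):
        faces = detect(frame)
        if faces:          # S1 -> S2: a face was found, start tracking
            return i, faces
    return None            # no face yet: S1 keeps repeating

# Stub detector: pretend each frame is labeled with its face rectangles.
frames = [[], [], [(40, 30, 64, 64)]]   # only the third frame has a face
print(first_frame_with_face(frames, lambda f: f))
```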
S2: The tracker tracks the detected face.
The tracking process adopts an improved CamShift algorithm combined with a Kalman filter. When large background regions with colors close to the target appear, an ROI (region of interest) frame-difference method is started: frame differencing is applied only to the Kalman prediction region, the moving face is extracted from the edges of the moving object and then ANDed with the target probability distribution map, filtering out static interference patterns. When the target is severely occluded, the CamShift algorithm fails; the Kalman prediction then replaces the optimal location computed by CamShift, and the Kalman prediction is also used as the observation to update the Kalman filter. This effectively overcomes the Kalman filter failure caused by severe occlusion. The algorithm tracks the face detected by the detector in real time, ensuring that the image region always contains the face of the same person.
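The fallback rule described above (trust CamShift normally, substitute the Kalman prediction under severe occlusion and feed it back as the observation) can be sketched as follows. This is a hedged illustration, not the patent's implementation: a 1-D constant-velocity filter stands in for the full 2-D tracker, and the confidence threshold is an assumed stand-in for the patent's occlusion test.

```python
# Sketch of the S2 fusion rule: when CamShift is confident, its result
# drives the output and corrects the Kalman filter; under severe occlusion
# (low confidence), the Kalman prediction replaces the CamShift result and
# is fed back as the filter's own observation. 1-D and the threshold value
# are illustrative assumptions.
import numpy as np

class ConstVelKalman1D:
    def __init__(self, x0, q=1e-3, r=1e-1):
        self.x = np.array([x0, 0.0])              # state: [position, velocity]
        self.P = np.eye(2)                        # state covariance
        self.F = np.array([[1.0, 1.0], [0.0, 1.0]])  # constant-velocity model
        self.H = np.array([[1.0, 0.0]])           # we observe position only
        self.Q, self.R = q * np.eye(2), np.array([[r]])

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[0]

    def correct(self, z):
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.array([z]) - self.H @ self.x)
        self.P = (np.eye(2) - K @ self.H) @ self.P

def fused_track(kf, camshift_pos, camshift_conf, conf_thresh=0.3):
    pred = kf.predict()
    if camshift_conf < conf_thresh:   # severe occlusion: trust the prediction
        kf.correct(pred)              # prediction doubles as the observation
        return pred
    kf.correct(camshift_pos)          # normal case: CamShift drives the output
    return camshift_pos
```

A confident CamShift window is returned unchanged, while a low-confidence frame yields a position extrapolated by the filter instead.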
S3: The collector collects face images.
S4: The analyzer determines whether the collected face image is a reliable sample; if so, proceed to step S5; if not, repeat steps S2 to S4.
Through reliability analysis, unreliable samples in which the head rotation exceeds 90 degrees (such as the sample shown in Fig. 3) are filtered out, and reliable samples usable for face recognition (such as the sample shown in Fig. 4) are retained.
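The reliability test can be sketched as a simple threshold filter. The pose-estimation function itself is assumed to exist and is stubbed here; the 90-degree limit comes from the embodiment, everything else is illustrative.

```python
# Sketch of the S4 reliability filter: samples whose estimated head
# rotation exceeds the threshold (90 degrees in this embodiment) are
# discarded as unreliable. `estimate_rotation` is a stand-in for a real
# pose estimator.
def reliable_samples(samples, estimate_rotation, max_angle=90.0):
    """Keep only samples whose head rotation is within max_angle degrees."""
    return [s for s in samples if abs(estimate_rotation(s)) <= max_angle]

# Toy samples tagged with a fake rotation angle for the stub estimator.
samples = [("a", 10.0), ("b", 120.0), ("c", -95.0)]
kept = reliable_samples(samples, estimate_rotation=lambda s: s[1])
print(kept)   # only sample "a" is within the 90-degree limit
```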
S5: The analyzer extracts shape parameters and texture parameters from the target face in the reliable sample, models the shape and the texture separately, and obtains the average face through model fusion, specifically comprising the following steps:
S5.1: Annotate 68 landmark points (as shown in Fig. 5) along the contour, eyebrows, eyes, nose, and lips of the target face in the reliable sample, and write the coordinates of the points as a vector:
x = {x1, y1, x2, y2, x3, y3, ..., x68, y68}.
S5.2: Model the shape of the face: first apply pairwise Procrustes alignment to the landmarked face images to obtain the mean shape, then obtain the shape parameters and shape model by principal component analysis (PCA) dimensionality reduction.
S5.3: Model the texture of the face: first perform Delaunay triangulation on the mean shape, then fill in the texture by piecewise affine warping, and finally obtain the mean texture model and texture parameters by PCA dimensionality reduction.
S5.4: Form a weighted combination of the shape parameters and texture parameters, apply PCA dimensionality reduction to obtain the fused parameters, and finally obtain the average face.
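The parameter-fusion pipeline of steps S5.2 to S5.4 can be compressed into a numpy sketch. This is an assumption-laden illustration of the AAM-style combined model: Procrustes alignment and the texture warp are skipped, the synthetic data and the weight `w` are made up, and only the PCA-then-weighted-concatenation-then-PCA structure is shown.

```python
# Sketch of S5.2-S5.4: reduce shape and texture vectors with PCA, form a
# weighted combination of the two parameter sets, and apply PCA again to
# obtain the fused parameters. Toy random data stands in for aligned
# landmark and warped-texture vectors.
import numpy as np

def pca_reduce(X, k):
    """Project rows of X onto the top-k principal components."""
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows of Vt = directions
    return Xc @ Vt[:k].T, mean, Vt[:k]

rng = np.random.default_rng(0)
shapes   = rng.normal(size=(10, 136))    # 68 landmarks -> 136-dim shape vectors
textures = rng.normal(size=(10, 300))    # warped-texture vectors (toy size)

b_s, _, _ = pca_reduce(shapes, 5)        # shape parameters (S5.2)
b_t, _, _ = pca_reduce(textures, 5)      # texture parameters (S5.3)

w = 2.0                                  # assumed shape/texture weighting
combined = np.hstack([w * b_s, b_t])     # weighted combination (S5.4)
b_c, mean_c, _ = pca_reduce(combined, 4) # fused parameters after second PCA
print(b_c.shape)
```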
S6: The recognizer obtains, by the linear-discriminant eigenface method, the matching degree C between the average face and the closest face class in the face class library.
The linear-discriminant eigenface method comprises the following steps:
S6.1: For the average face passed in by the analyzer and the face classes in the face class library, apply the nearest-sample algorithm between classes and within classes to obtain measures of between-class and within-class difference.
S6.2: From the between-class and within-class differences of each face class, obtain the between-class scatter matrix and the within-class scatter matrix.
S6.3: From the scatter matrices obtained in step S6.2, obtain the optimal discriminant vectors using the Fisher discriminant criterion.
S6.4: Project the average face passed in by the analyzer onto the optimal discriminant vectors to obtain low-dimensional feature data.
S6.5: According to the nearest-neighbor matching principle, obtain the matching degree C between the average face and the closest face class in the face class library.
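Steps S6.2 to S6.5 can be sketched with the standard Fisher construction. The tiny two-class dataset, the Euclidean nearest-neighbor step, and the use of a pseudoinverse are illustrative assumptions; the patent's own scatter matrices are built from its nearest-sample difference measures rather than raw class samples.

```python
# Sketch of the Fisher criterion: build within-class and between-class
# scatter matrices from labeled feature vectors, take the leading
# eigenvectors of pinv(Sw) @ Sb as discriminant directions, project a
# probe (the average face's parameter vector), and match to the nearest
# class in the low-dimensional space.
import numpy as np

def fisher_directions(X, y, k=1):
    """Top-k eigenvectors of pinv(Sw) @ Sb (Fisher discriminant directions)."""
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)            # within-class scatter
        diff = (mc - mean_all)[:, None]
        Sb += len(Xc) * (diff @ diff.T)          # between-class scatter
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(vals.real)[::-1]
    return vecs.real[:, order[:k]]

X = np.array([[0.0, 0.1], [0.2, 0.0], [4.0, 4.1], [4.2, 3.9]])  # toy gallery
y = np.array([0, 0, 1, 1])
W = fisher_directions(X, y, k=1)
probe = np.array([4.1, 4.0]) @ W                 # project the probe (S6.4)
gallery = {c: X[y == c].mean(axis=0) @ W for c in (0, 1)}
nearest = min(gallery, key=lambda c: abs((probe - gallery[c])[0]))
print(nearest)                                   # closest class (S6.5)
```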
If the matching degree C is less than 50%, proceed to step S7. If C is greater than 95%, the user's identity is recognized and recognition ends. If C is between 50% and 95%, the user is asked to input a name through the human-machine interaction module; if a face class corresponding to the input name already exists in the face class library, proceed to step S8, otherwise proceed to step S7.
S7: The online learning module creates a new face class in the face class library, adds the average face of the target face to the new class, labels it with the user name, and passes the class to the recognizer; proceed to step S9.
S8: The online learning module updates the face class corresponding to the input name and passes the updated class to the recognizer; proceed to step S9.
The update performed by the online learning module is specifically:
The online learning module computes the difference between the average face passed in by the analyzer and every face sample in the class corresponding to the input name; if the difference exceeds the within-class distance, the class is updated, otherwise it is not.
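The S8 update rule can be sketched in a few lines. The Euclidean metric, the use of the minimum sample difference, and the toy vectors are assumptions; the patent does not specify the difference measure.

```python
# Sketch of the S8 online-learning update: the incoming average face is
# compared against every sample of the named class, and the class is
# updated only when the difference exceeds the within-class distance,
# i.e. the new sample adds information not already covered by the class.
import numpy as np

def maybe_update_class(face_class, avg_face, within_class_distance):
    """Append avg_face to face_class if it differs enough from all samples."""
    diffs = [np.linalg.norm(avg_face - s) for s in face_class]
    if min(diffs) > within_class_distance:
        face_class.append(avg_face)
        return True          # class updated; the recognizer refreshes in S9
    return False             # redundant sample: leave the class unchanged

cls = [np.array([0.0, 0.0]), np.array([0.1, 0.1])]
print(maybe_update_class(cls, np.array([5.0, 5.0]), within_class_distance=1.0))
print(maybe_update_class(cls, np.array([0.05, 0.05]), within_class_distance=1.0))
```

The first call adds a genuinely new sample; the second is rejected as already covered by the class.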
S9: The recognizer updates the face class library.
The above embodiment is a preferred embodiment of the present invention, but embodiments of the present invention are not limited thereto; any change, modification, substitution, combination, or simplification made without departing from the spirit and principle of the present invention is an equivalent substitute and falls within the protection scope of the present invention.

Claims (5)

1. A face recognition method, characterized by comprising the following steps:
S1: A detector checks whether a face is present in the frame sequence; if so, proceed to step S2; if not, repeat step S1.
S2: A tracker tracks the detected face.
S3: A collector collects face images.
S4: An analyzer determines whether the collected face image is a reliable sample; if so, proceed to step S5; if not, repeat steps S2 to S4.
S5: The analyzer extracts shape parameters and texture parameters from the target face in the reliable sample, models the shape and the texture of the target face separately, and obtains the average face of the target face through model fusion, specifically comprising the following steps:
S5.1: annotating landmark points on the target face in the reliable sample;
S5.2: modeling the shape of the face: first applying pairwise Procrustes alignment to the landmarked face images to obtain the mean shape, then obtaining the shape parameters and shape model by principal component analysis (PCA) dimensionality reduction;
S5.3: modeling the texture of the face: first performing Delaunay triangulation on the mean shape, then filling in the texture by piecewise affine warping, and finally obtaining the mean texture model and texture parameters by PCA dimensionality reduction;
S5.4: forming a weighted combination of the shape parameters and texture parameters, applying PCA dimensionality reduction to obtain the fused parameters, and finally obtaining the average face.
S6: A recognizer obtains, by a linear-discriminant eigenface method, the matching degree C between the average face and the closest face class in a face class library.
If C < B, proceed to step S7. If C > A, the user's identity is recognized and recognition ends. If B < C < A, the user is asked to input a name through a human-machine interaction module; if a face class corresponding to the input name already exists in the face class library, proceed to step S8, otherwise proceed to step S7. The values of A and B are set empirically by the user.
S7: An online learning module creates a new face class in the face class library, adds the average face of the target face to the new class, labels it with the user name, and passes the class to the recognizer; proceed to step S9.
S8: The online learning module updates the face class corresponding to the input name and passes the updated class to the recognizer; proceed to step S9.
S9: The recognizer updates the face class library.
2. The face recognition method according to claim 1, characterized in that the linear-discriminant eigenface method of step S6 comprises the following steps:
S6.1: for the average face passed in by the analyzer and the face classes in the face class library, applying the nearest-sample algorithm between classes and within classes to obtain measures of between-class and within-class difference;
S6.2: from the between-class and within-class differences of each face class, obtaining the between-class scatter matrix and the within-class scatter matrix;
S6.3: from the scatter matrices obtained in step S6.2, obtaining the optimal discriminant vectors using the Fisher discriminant criterion;
S6.4: projecting the average face passed in by the analyzer onto the optimal discriminant vectors to obtain low-dimensional feature data;
S6.5: according to the nearest-neighbor matching principle, obtaining the matching degree C between the average face and the closest face class in the face class library.
3. The face recognition method according to claim 1, characterized in that the landmark annotation of the target face in the reliable sample in step S5.1 is specifically:
annotating landmark points along the contour, eyebrows, eyes, nose, and lips of the target face in the reliable sample, and writing the coordinates of the points as a vector.
4. The face recognition method according to claim 1, characterized in that the update of the face class corresponding to the input name by the online learning module in step S8 is specifically:
the online learning module computes the difference between the average face passed in by the analyzer and every face sample in the class corresponding to the input name; if the difference exceeds the within-class distance, the class is updated, otherwise it is not.
5. A face recognition system implementing the face recognition method of any one of claims 1 to 4, characterized by comprising a detector, a tracker, a collector, an analyzer, an online learning module, a recognizer, and a human-machine interaction module, wherein the detector, tracker, collector, analyzer, recognizer, human-machine interaction module, and online learning module are connected in sequence, and the online learning module is additionally connected to the recognizer.
CN201210310643.7A 2012-08-28 2012-08-28 Human face recognition method and system thereof Active CN102867173B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210310643.7A CN102867173B (en) 2012-08-28 2012-08-28 Human face recognition method and system thereof

Publications (2)

Publication Number Publication Date
CN102867173A CN102867173A (en) 2013-01-09
CN102867173B true CN102867173B (en) 2015-01-28

Family

ID=47446037

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210310643.7A Active CN102867173B (en) 2012-08-28 2012-08-28 Human face recognition method and system thereof

Country Status (1)

Country Link
CN (1) CN102867173B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006097902A3 (en) * 2005-03-18 2007-03-29 Philips Intellectual Property Method of performing face recognition
CN101377814A (en) * 2007-08-27 2009-03-04 索尼株式会社 Face image processing apparatus, face image processing method, and computer program
CN101587485A (en) * 2009-06-15 2009-11-25 无锡骏聿科技有限公司 Face information automatic login method based on face recognition technology
CN102004899A (en) * 2010-11-03 2011-04-06 无锡中星微电子有限公司 Human face identifying system and method
CN102402691A (en) * 2010-09-08 2012-04-04 中国科学院自动化研究所 Method for tracking gestures and actions of human face


Also Published As

Publication number Publication date
CN102867173A (en) 2013-01-09


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant