CN102867173A - Human face recognition method and system thereof - Google Patents


Info

Publication number
CN102867173A
CN102867173A (application CN201210310643.7)
Authority
CN
China
Prior art keywords
face
people
class
average
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012103106437A
Other languages
Chinese (zh)
Other versions
CN102867173B (en)
Inventor
Xu Xiangmin (徐向民)
Luo Mengna (罗梦娜)
Guo Yongshi (郭咏诗)
Yin Feiyun (尹飞云)
Zhang Yangdong (张阳东)
Wu Dandan (吴丹丹)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201210310643.7A priority Critical patent/CN102867173B/en
Publication of CN102867173A publication Critical patent/CN102867173A/en
Application granted granted Critical
Publication of CN102867173B publication Critical patent/CN102867173B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a face recognition method with a self-learning function and spatio-temporal properties. In this method, face recognition is realized through the steps of face detection, face tracking, data acquisition and analysis, recognition, online learning, and human-machine interaction. The invention further discloses a system for realizing the face recognition method, comprising a detector, a tracker, a collector, an analyzer, an online learning module, a recognizer, and a human-machine interaction module. Compared with the prior art, the face recognition method and system have strong anti-interference capability and high recognition efficiency.

Description

Face recognition method and system
Technical field
The present invention relates to human-computer interaction technology, and in particular to a face recognition method and a system implementing it.
Background technology
Human-computer interaction is currently a focus of computer science research worldwide. Within human-computer interaction, face recognition is progressively being applied in settings such as the smart home, as it is the most convenient way for a computer to identify a user and provide personalized services. The core of computer-vision-based face recognition is to use computer vision and image processing techniques to process the video sequences collected by an image capture device and to classify the user, so that the system can respond accordingly.
Existing face recognition technology has separately realized face detection, tracking of detected faces, and identification of users by extracting facial features from a given image or video sequence and comparing them against a face database. Here we call a system that simply combines these three modules a traditional face recognition system. However, traditional face recognition systems do not effectively fuse these individually effective techniques, which leads to the following problems:
(1) Poor performance. Because traditional face recognition systems cannot fuse their modules effectively, they are prone to tracking failure, tracking drift, and recognition that fails, is uncertain, or is simply wrong, which badly degrades the recognition result.
(2) Poor interactivity. Traditional face recognition systems cannot interact well with the user. When recognition fails, is uncertain, or is wrong, the system cannot obtain timely feedback from the user, so its performance cannot improve; worse, if the system believes an incorrect recognition result and uses it to modify its own parameters (self-learning), recognition becomes progressively worse with use.
(3) Poor adaptability. Traditional face recognition systems are affected by external conditions such as illumination, beards, glasses, hairstyle, and expression, which reduce the recognition rate. The usability of such systems is therefore limited.
Summary of the invention
In order to overcome the above shortcomings and deficiencies of the prior art, an object of the present invention is to provide a face recognition method that has a self-learning function, spatio-temporal properties, and strong anti-interference capability.
Another object of the present invention is to provide a face recognition system realizing the above method.
The objects of the present invention are achieved through the following technical solutions:
A face recognition method comprises the following steps:
S1: the detector detects whether there is a face in the frame sequence; if so, go to step S2; if not, repeat step S1.
S2: the tracker tracks the detected face.
S3: the collector collects face images.
S4: the analyzer analyzes whether a collected face image is a reliable sample; if so, go to step S5; if not, repeat steps S2-S4.
S5: the analyzer extracts shape parameters and texture parameters from the target face in the reliable samples, models the shape and texture of the target face separately, and fuses the models to obtain the mean face of the target face.
S6: the recognizer uses a linear discriminant eigenface method to obtain the matching degree C between the mean face and the closest face class in the face class library.
If C < B, go to step S7; if C > A, the user's identity is recognized and recognition ends; if B < C < A, the user is asked to input a name through the human-machine interaction module, and if a face class corresponding to the input name already exists in the face class library, go to step S8, otherwise go to step S7. The values of A and B are determined empirically by the user.
S7: the online learning module creates a new face class in the face class library, adds the mean face of the target face to the new class, labels it with the user name, and passes the class to the recognizer; go to step S9.
S8: the online learning module updates the face class corresponding to the input name and passes the updated class to the recognizer; go to step S9.
S9: the recognizer updates the face class library.
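The S1-S9 control flow above can be sketched as follows. This is a minimal sketch, not the patent's implementation: every component function (detector, tracker, collector, analyzer, recognizer) is passed in as a hypothetical callable, and the default thresholds A = 0.95 and B = 0.50 are the empirical values given later in the embodiment.

```python
def recognize(frames, detect, track, collect, is_reliable,
              build_mean_face, best_match, ask_user_name,
              face_library, A=0.95, B=0.50):
    """Control flow of steps S1-S9; all callables are hypothetical stubs."""
    # S1: scan frames until the detector finds a face
    face = next((f for f in map(detect, frames) if f is not None), None)
    if face is None:
        return None
    # S2-S4: track, collect and analyze until a reliable sample is found
    while True:
        face = track(face)
        sample = collect(face)
        if is_reliable(sample):
            break
    # S5: fuse shape and texture models into the mean face
    mean = build_mean_face(sample)
    # S6: matching degree C against the closest class in the library
    name, C = best_match(mean, face_library)
    if C > A:                       # confident match: recognized, done
        return name
    user = None
    if B < C < A:                   # uncertain: ask the user for a name
        user = ask_user_name()
        if user in face_library:    # S8: update the existing class
            face_library[user].append(mean)
            return user
    if user is None:                # C < B: still need a name for S7
        user = ask_user_name()
    face_library[user] = [mean]     # S7: create a new labeled class
    return user                     # S9: library update is the mutation above
```

A usage example: `recognize(camera_frames, detector, tracker, collector, analyzer, modeler, matcher, prompt_user, library)` returns the recognized or newly registered name.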
In step S5, the analyzer extracts shape and texture features from the target face in the reliable samples, models the shape and texture of the target face separately, and fuses the models to obtain the mean face of the target face, specifically through the following steps:
S5.1: landmark the target face in the reliable samples.
S5.2: model the face shape: first apply pairwise Procrustes transformations to the landmarked face images to obtain the mean-shape face, then obtain the shape parameters and shape model by principal component analysis (PCA) dimensionality reduction.
S5.3: model the face texture: first apply Delaunay triangulation to the mean-shape face, then fill in the texture by piecewise affine warping, and finally obtain the mean texture model and texture parameters by PCA dimensionality reduction.
S5.4: weight and combine the shape parameters and texture parameters, apply PCA dimensionality reduction to obtain the fusion parameters, and finally obtain the mean face.
The linear discriminant eigenface method of step S6 comprises the following steps:
S6.1: compare the mean face passed by the analyzer with the face classes in the face class library using a nearest-sample algorithm between and within classes, obtaining measures of between-class and within-class difference.
S6.2: from the between-class and within-class differences of each face class, obtain the between-class scatter matrix and the within-class scatter matrix.
S6.3: from the scatter matrices of step S6.2, use the Fisher discriminant criterion to obtain the optimal discriminant vectors.
S6.4: project the mean face passed by the analyzer onto the optimal discriminant vectors, obtaining low-dimensional feature data.
S6.5: according to the nearest-match principle, obtain the matching degree C between the mean face and the closest face class in the face class library.
The landmarking of the target face in the reliable samples in step S5.1 is specifically:
landmark points are placed on the contour, eyebrows, eyes, nose, and lips of the target face in the reliable samples, and the coordinates of the points are written as a vector.
The updating in step S8 by the online learning module of the face class corresponding to the input name is specifically:
the online learning module computes the difference between the mean face passed by the analyzer and all face samples in the face class corresponding to the input name; if the difference exceeds the within-class distance, the face class is updated, otherwise it is not.
A face recognition system realizing the above face recognition method comprises a detector, a tracker, a collector, an analyzer, an online learning module, a recognizer, and a human-machine interaction module; the detector, tracker, collector, analyzer, recognizer, human-machine interaction module, and online learning module are connected in sequence, and the online learning module is also connected to the recognizer.
Compared with the prior art, the present invention has the following advantages and beneficial effects:
(1) The present invention has a self-learning function and spatio-temporal properties; faces are recognized through six integrated modules (face detection, face tracking, data collection and analysis, recognition, online learning, and supervision), giving strong anti-interference capability.
(2) The present invention has good interactivity: when the system is uncertain it queries the user and takes its next action according to the user's feedback, preventing the system's performance from degrading for various reasons. The user can also send a reset command through the human-machine interaction module; the system then deletes the data added to the library during use and returns to its initial state, so that performance degradation does not accumulate.
Description of drawings
Fig. 1 is a block diagram of the face recognition system of the present invention.
Fig. 2 is a flowchart of the face recognition method of the present invention.
Fig. 3 is an example of an unreliable sample.
Fig. 4 is an example of a reliable sample.
Fig. 5 is an example of a landmarked face.
Embodiment
The present invention is described in further detail below with reference to an embodiment and the accompanying drawings, but embodiments of the present invention are not limited thereto.
Embodiment
As shown in Figure 1, the face recognition system of the present invention comprises a detector, a tracker, a collector, an analyzer, an online learning module, a recognizer, and a human-machine interaction module; the detector, tracker, collector, analyzer, recognizer, human-machine interaction module, and online learning module are connected in sequence, and the online learning module is also connected to the recognizer.
As shown in Figure 2, the face recognition method of the present invention comprises the following steps:
S1: the detector detects whether there is a face in the frame sequence; if so, go to step S2; if not, repeat step S1.
The detector uses the face detection algorithm proposed by Viola and Jones: an AdaBoost-trained classifier combined with multi-pose classification, which detects faces quickly with a high detection rate.
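Viola-Jones detection rests on the integral image, which lets rectangular Haar-like features be evaluated in constant time; AdaBoost then selects and weights thousands of such features into a cascade. A NumPy sketch of just the integral-image core (the full cascade and its training are beyond a few lines):

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y, :x].
    A leading row/column of zeros makes box sums one-liners."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, y, x, h, w):
    """Sum of the h-by-w box with top-left corner (y, x), in O(1)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect_vertical(ii, y, x, h, w):
    """A two-rectangle Haar-like feature: top half minus bottom half
    (responds to horizontal intensity edges such as the eye/cheek boundary)."""
    half = h // 2
    return box_sum(ii, y, x, half, w) - box_sum(ii, y + half, x, half, w)
```

In a full detector, thousands of such feature responses, thresholded and weighted by AdaBoost, form the cascade stages that scan the image at multiple scales.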
S2: the tracker tracks the detected face.
Tracking uses an improved Camshift algorithm combined with Kalman filtering. When a large background area with a color close to the target's appears, an ROI (region of interest) frame-difference step is started: the frame difference is computed only within the region predicted by the Kalman filter, the moving face is extracted from the edges of moving objects, and the result is ANDed with the target probability map, filtering out stationary interference patterns. When the target is severely occluded, Camshift fails; the Kalman prediction then replaces the optimal position that Camshift would compute, and the prediction is also fed back as the observation for the Kalman update, which effectively overcomes the failure of Kalman filtering caused by severe occlusion. The algorithm tracks the detected face in real time, guaranteeing that the face in the image region belongs to the same person.
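The Kalman side of the tracker can be sketched as a standard constant-velocity filter over the face center. The noise parameters below are illustrative defaults, not values from the patent; the patent's occlusion trick corresponds to calling `predict()` and feeding the result back into `update()` when Camshift fails.

```python
import numpy as np

class ConstantVelocityKalman:
    """Minimal Kalman filter tracking (x, y) position and velocity.
    Noise magnitudes q and r are illustrative, not from the patent."""
    def __init__(self, x0, y0, q=1e-2, r=1.0):
        self.s = np.array([x0, y0, 0.0, 0.0])   # state [x, y, vx, vy]
        self.P = np.eye(4)                       # state covariance
        self.F = np.eye(4)                       # constant-velocity model
        self.F[0, 2] = self.F[1, 3] = 1.0
        self.H = np.eye(2, 4)                    # we observe position only
        self.Q = q * np.eye(4)                   # process noise
        self.R = r * np.eye(2)                   # measurement noise

    def predict(self):
        self.s = self.F @ self.s
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.s[:2]

    def update(self, z):
        # Under severe occlusion the patent feeds the prediction back in
        # here as the observation instead of a (failed) Camshift position.
        y = np.asarray(z, float) - self.H @ self.s
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.s = self.s + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.s[:2]
```

During normal tracking, each frame does `kf.predict()` then `kf.update(camshift_center)`; during occlusion, `p = kf.predict(); kf.update(p)` coasts the track forward.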
S3: the collector collects face images.
S4: the analyzer analyzes whether a collected face image is a reliable sample; if so, go to step S5; if not, repeat steps S2-S4.
Through reliability analysis, unreliable samples whose head rotation angle exceeds 90 degrees (such as the sample shown in Fig. 3) are filtered out, keeping the reliable samples usable for face recognition (such as the sample shown in Fig. 4).
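The patent does not specify how the head-rotation angle is estimated, so the following sketch uses an assumed heuristic: yaw inferred from how far the nose tip sits from the midpoint of the eye corners. Any real pose estimator could replace `estimate_yaw_deg`.

```python
import math

def estimate_yaw_deg(left_eye, right_eye, nose_tip):
    """Crude yaw estimate from landmark symmetry: nose at the eye
    midpoint gives ~0 degrees, nose at an eye corner gives ~90.
    This heuristic is an assumption, not the patent's method."""
    mid_x = (left_eye[0] + right_eye[0]) / 2.0
    half_span = abs(right_eye[0] - left_eye[0]) / 2.0
    ratio = max(-1.0, min(1.0, (nose_tip[0] - mid_x) / half_span))
    return math.degrees(math.asin(ratio))

def is_reliable(left_eye, right_eye, nose_tip, max_deg=90.0):
    """Step S4 reliability check: keep samples within the rotation limit."""
    return abs(estimate_yaw_deg(left_eye, right_eye, nose_tip)) < max_deg
```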
S5: the analyzer extracts shape parameters and texture parameters from the target face in the reliable samples, models the shape and texture of the target face separately, and fuses the models to obtain the mean face of the target face, specifically through the following steps:
S5.1: mark 68 landmark points on the contour, eyebrows, eyes, nose, and lips of the target face in the reliable samples (as shown in Fig. 5), and write the coordinates of the points as a vector:
x = {x1, y1, x2, y2, x3, y3, ..., x68, y68}.
S5.2: model the face shape: first apply pairwise Procrustes transformations to the landmarked face images to obtain the mean-shape face, then obtain the shape parameters and shape model by principal component analysis (PCA) dimensionality reduction.
S5.3: model the face texture: first apply Delaunay triangulation to the mean-shape face, then fill in the texture by piecewise affine warping, and finally obtain the mean texture model and texture parameters by PCA dimensionality reduction.
S5.4: weight and combine the shape parameters and texture parameters, apply PCA dimensionality reduction to obtain the fusion parameters, and finally obtain the mean face.
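Steps S5.2-S5.4 each rely on PCA dimensionality reduction in the manner of an Active Appearance Model: a mean (shape or texture), a basis of principal components, and per-sample parameters. A NumPy sketch of that shared PCA step and the weighted fusion of S5.4 follows; the weight `w` and the retained-variance fraction are illustrative values, not from the patent.

```python
import numpy as np

def pca_model(X, var_keep=0.95):
    """PCA dimensionality reduction as used in steps S5.2-S5.4.
    X: one sample per row (e.g. flattened 68-point landmark vectors).
    Returns the mean (the 'mean shape'/'mean texture'), the retained
    principal components, and the per-sample low-dimensional parameters."""
    mean = X.mean(axis=0)
    Xc = X - mean
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s**2 / max(len(X) - 1, 1)
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), var_keep)) + 1
    components = Vt[:k]            # basis of the shape/texture model
    params = Xc @ components.T     # parameters of each sample
    return mean, components, params

def fuse(shape_params, texture_params, w=1.0, var_keep=0.95):
    """Step S5.4: weight and concatenate shape and texture parameters,
    then reduce again with PCA to obtain the fusion parameters."""
    combined = np.hstack([w * shape_params, texture_params])
    return pca_model(combined, var_keep)
```

Reconstructing `mean + params @ components` gives each sample's model approximation; the mean face of step S5 corresponds to the fused mean.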
S6: the recognizer uses a linear discriminant eigenface method to obtain the matching degree C between the mean face and the closest face class in the face class library.
The linear discriminant eigenface method comprises the following steps:
S6.1: compare the mean face passed by the analyzer with the face classes in the face class library using a nearest-sample algorithm between and within classes, obtaining measures of between-class and within-class difference.
S6.2: from the between-class and within-class differences of each face class, obtain the between-class scatter matrix and the within-class scatter matrix.
S6.3: from the scatter matrices of step S6.2, use the Fisher discriminant criterion to obtain the optimal discriminant vectors.
S6.4: project the mean face passed by the analyzer onto the optimal discriminant vectors, obtaining low-dimensional feature data.
S6.5: according to the nearest-match principle, obtain the matching degree C between the mean face and the closest face class in the face class library.
If the matching degree C is less than 50%, go to step S7; if C is greater than 95%, the user's identity is recognized and recognition ends; if C is between 50% and 95%, the user is asked to input a name through the human-machine interaction module; if a face class corresponding to the input name already exists in the face class library, go to step S8, otherwise go to step S7.
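Steps S6.2-S6.4 are the classical Fisher discriminant construction. The sketch below builds the two scatter matrices, extracts discriminant vectors from inv(Sw)·Sb, and converts the nearest projected class distance into a score in (0, 1]; the exact score formula is an assumption, since the patent only requires a matching degree comparable with the thresholds A and B.

```python
import numpy as np

def scatter_matrices(classes):
    """Step S6.2: between-class (Sb) and within-class (Sw) scatter
    matrices from a list of per-class sample arrays (rows = samples)."""
    all_x = np.vstack(classes)
    mu = all_x.mean(axis=0)
    d = all_x.shape[1]
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for X in classes:
        mi = X.mean(axis=0)
        Sb += len(X) * np.outer(mi - mu, mi - mu)
        Xc = X - mi
        Sw += Xc.T @ Xc
    return Sb, Sw

def fisher_directions(Sb, Sw, k=1, eps=1e-6):
    """Step S6.3: optimal discriminant vectors = top eigenvectors of
    inv(Sw) @ Sb (Sw is regularized so it is invertible)."""
    M = np.linalg.solve(Sw + eps * np.eye(len(Sw)), Sb)
    vals, vecs = np.linalg.eig(M)
    order = np.argsort(vals.real)[::-1]
    return vecs[:, order[:k]].real

def matching_degree(mean_face, classes, W):
    """Steps S6.4-S6.5: project onto the discriminant vectors, then turn
    the distance to the nearest class mean into a score in (0, 1]."""
    q = mean_face @ W
    dists = [np.linalg.norm(q - X.mean(axis=0) @ W) for X in classes]
    i = int(np.argmin(dists))
    return i, 1.0 / (1.0 + dists[i])
```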
S7: the online learning module creates a new face class in the face class library, adds the mean face of the target face to the new class, labels it with the user name, and passes the class to the recognizer; go to step S9.
S8: the online learning module updates the face class corresponding to the input name and passes the updated class to the recognizer; go to step S9.
The update performed by the online learning module is specifically:
the online learning module computes the difference between the mean face passed by the analyzer and all face samples in the face class corresponding to the input name; if the difference exceeds the within-class distance, the face class is updated, otherwise it is not.
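The step S8 update rule can be sketched as follows. Two details are assumptions, since the patent does not pin them down: the distance metric (Euclidean here) and the reading of "the difference exceeds the within-class distance" as "the new mean face differs from every stored sample", so that near-duplicates are not stored.

```python
import numpy as np

def maybe_update_class(face_class, mean_face, within_class_distance):
    """Step S8 update rule: append the analyzer's mean face to the named
    class only when it differs from every stored sample by more than the
    within-class distance (assumed interpretation; metric assumed Euclidean)."""
    diffs = [np.linalg.norm(np.asarray(mean_face) - np.asarray(s))
             for s in face_class]
    if min(diffs) > within_class_distance:
        face_class.append(list(mean_face))
        return True   # class updated; recognizer re-reads it in step S9
    return False      # redundant sample: class left unchanged
```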
S9: the recognizer updates the face class library.
The above embodiment is a preferred embodiment of the present invention, but embodiments of the present invention are not limited by it; any change, modification, substitution, combination, or simplification made without departing from the spirit and principle of the present invention shall be regarded as an equivalent substitute and is included within the protection scope of the present invention.

Claims (6)

1. A face recognition method, characterized in that it comprises the following steps:
S1: the detector detects whether there is a face in the frame sequence; if so, go to step S2; if not, repeat step S1;
S2: the tracker tracks the detected face;
S3: the collector collects face images;
S4: the analyzer analyzes whether a collected face image is a reliable sample; if so, go to step S5; if not, repeat steps S2-S4;
S5: the analyzer extracts shape parameters and texture parameters from the target face in the reliable samples, models the shape and texture of the target face separately, and fuses the models to obtain the mean face of the target face;
S6: the recognizer uses a linear discriminant eigenface method to obtain the matching degree C between the mean face and the closest face class in the face class library;
if C < B, go to step S7; if C > A, the user's identity is recognized and recognition ends; if B < C < A, the user is asked to input a name through the human-machine interaction module, and if a face class corresponding to the input name already exists in the face class library, go to step S8, otherwise go to step S7; the values of A and B are determined empirically by the user;
S7: the online learning module creates a new face class in the face class library, adds the mean face of the target face to the new class, labels it with the user name, and passes the class to the recognizer; go to step S9;
S8: the online learning module updates the face class corresponding to the input name and passes the updated class to the recognizer; go to step S9;
S9: the recognizer updates the face class library.
2. The face recognition method according to claim 1, characterized in that in step S5 the analyzer extracts shape and texture features from the target face in the reliable samples, models the shape and texture of the target face separately, and fuses the models to obtain the mean face of the target face, specifically through the following steps:
S5.1: landmark the target face in the reliable samples;
S5.2: model the face shape: first apply pairwise Procrustes transformations to the landmarked face images to obtain the mean-shape face, then obtain the shape parameters and shape model by principal component analysis (PCA) dimensionality reduction;
S5.3: model the face texture: first apply Delaunay triangulation to the mean-shape face, then fill in the texture by piecewise affine warping, and finally obtain the mean texture model and texture parameters by PCA dimensionality reduction;
S5.4: weight and combine the shape parameters and texture parameters, apply PCA dimensionality reduction to obtain the fusion parameters, and finally obtain the mean face.
3. The face recognition method according to claim 1, characterized in that the linear discriminant eigenface method of step S6 comprises the following steps:
S6.1: compare the mean face passed by the analyzer with the face classes in the face class library using a nearest-sample algorithm between and within classes, obtaining measures of between-class and within-class difference;
S6.2: from the between-class and within-class differences of each face class, obtain the between-class scatter matrix and the within-class scatter matrix;
S6.3: from the scatter matrices of step S6.2, use the Fisher discriminant criterion to obtain the optimal discriminant vectors;
S6.4: project the mean face passed by the analyzer onto the optimal discriminant vectors, obtaining low-dimensional feature data;
S6.5: according to the nearest-match principle, obtain the matching degree C between the mean face and the closest face class in the face class library.
4. The face recognition method according to claim 2, characterized in that the landmarking of the target face in the reliable samples in step S5.1 is specifically:
landmark points are placed on the contour, eyebrows, eyes, nose, and lips of the target face in the reliable samples, and the coordinates of the points are written as a vector.
5. The face recognition method according to claim 1, characterized in that the updating in step S8 by the online learning module of the face class corresponding to the input name is specifically:
the online learning module computes the difference between the mean face passed by the analyzer and all face samples in the face class corresponding to the input name; if the difference exceeds the within-class distance, the face class is updated, otherwise it is not.
6. A face recognition system realizing the face recognition method of any one of claims 1 to 5, characterized in that it comprises a detector, a tracker, a collector, an analyzer, an online learning module, a recognizer, and a human-machine interaction module; the detector, tracker, collector, analyzer, recognizer, human-machine interaction module, and online learning module are connected in sequence; and the online learning module is also connected to the recognizer.
CN201210310643.7A 2012-08-28 2012-08-28 Human face recognition method and system thereof Active CN102867173B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210310643.7A CN102867173B (en) 2012-08-28 2012-08-28 Human face recognition method and system thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210310643.7A CN102867173B (en) 2012-08-28 2012-08-28 Human face recognition method and system thereof

Publications (2)

Publication Number Publication Date
CN102867173A true CN102867173A (en) 2013-01-09
CN102867173B CN102867173B (en) 2015-01-28

Family

ID=47446037

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210310643.7A Active CN102867173B (en) 2012-08-28 2012-08-28 Human face recognition method and system thereof

Country Status (1)

Country Link
CN (1) CN102867173B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006097902A2 (en) * 2005-03-18 2006-09-21 Philips Intellectual Property & Standards Gmbh Method of performing face recognition
WO2006097902A3 (en) * 2005-03-18 2007-03-29 Philips Intellectual Property Method of performing face recognition
CN101377814A (en) * 2007-08-27 2009-03-04 索尼株式会社 Face image processing apparatus, face image processing method, and computer program
CN101587485A (en) * 2009-06-15 2009-11-25 无锡骏聿科技有限公司 Face information automatic login method based on face recognition technology
CN102402691A (en) * 2010-09-08 2012-04-04 中国科学院自动化研究所 Method for tracking gestures and actions of human face
CN102004899A (en) * 2010-11-03 2011-04-06 无锡中星微电子有限公司 Human face identifying system and method

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103745235B (en) * 2013-12-18 2017-07-04 小米科技有限责任公司 Face identification method, device and terminal device
CN103745235A (en) * 2013-12-18 2014-04-23 小米科技有限责任公司 Human face identification method, device and terminal device
CN104765739A (en) * 2014-01-06 2015-07-08 南京宜开数据分析技术有限公司 Large-scale face database searching method based on shape space
CN104765739B (en) * 2014-01-06 2018-11-02 南京宜开数据分析技术有限公司 Extensive face database search method based on shape space
WO2016015621A1 (en) * 2014-07-28 2016-02-04 北京奇虎科技有限公司 Human face picture name recognition method and system
CN104794468A (en) * 2015-05-20 2015-07-22 成都通甲优博科技有限责任公司 Human face detection and tracking method based on unmanned aerial vehicle mobile platform
CN106326815A (en) * 2015-06-30 2017-01-11 芋头科技(杭州)有限公司 Human face image recognition method
CN106326815B (en) * 2015-06-30 2019-09-13 芋头科技(杭州)有限公司 A kind of facial image recognition method
CN105354543A (en) * 2015-10-29 2016-02-24 小米科技有限责任公司 Video processing method and apparatus
CN106778470A (en) * 2016-11-15 2017-05-31 东软集团股份有限公司 A kind of face identification method and device
CN106778653A (en) * 2016-12-27 2017-05-31 北京光年无限科技有限公司 Towards the exchange method and device based on recognition of face Sample Storehouse of intelligent robot
CN106950844A (en) * 2017-04-01 2017-07-14 东莞市四吉电子设备有限公司 A kind of smart home monitoring method and device
CN107665341A (en) * 2017-09-30 2018-02-06 珠海市魅族科技有限公司 One kind identification control method, electronic equipment and computer product
CN109358649A (en) * 2018-12-14 2019-02-19 电子科技大学 Unmanned aerial vehicle station Control management system for taking photo by plane
CN109903412A (en) * 2019-02-01 2019-06-18 北京清帆科技有限公司 A kind of intelligent check class attendance system based on face recognition technology
CN110271557A (en) * 2019-06-12 2019-09-24 浙江亚太机电股份有限公司 A kind of vehicle user Feature Recognition System
CN110334626A (en) * 2019-06-26 2019-10-15 北京科技大学 A kind of on-line study system based on affective state

Also Published As

Publication number Publication date
CN102867173B (en) 2015-01-28

Similar Documents

Publication Publication Date Title
CN102867173B (en) Human face recognition method and system thereof
Islam et al. Efficient detection and recognition of 3D ears
Lei et al. An efficient 3D face recognition approach using local geometrical signatures
CN102375970B (en) A kind of identity identifying method based on face and authenticate device
CN102385703B (en) A kind of identity identifying method based on face and system
Bilal et al. Vision-based hand posture detection and recognition for Sign Language—A study
Stenger Template-based hand pose recognition using multiple cues
CN102262727A (en) Method for monitoring face image quality at client acquisition terminal in real time
Mavadati et al. Automatic detection of non-posed facial action units
Nishiyama et al. Recognizing faces of moving people by hierarchical image-set matching
CN102831408A (en) Human face recognition method
CN112101208A (en) Feature series fusion gesture recognition method and device for elderly people
Aggarwal et al. Online handwriting recognition using depth sensors
Wang et al. A new hand gesture recognition algorithm based on joint color-depth superpixel earth mover's distance
KR20120089948A (en) Real-time gesture recognition using mhi shape information
Zhang et al. View-invariant action recognition in surveillance videos
Luo et al. Dynamic face recognition system in recognizing facial expressions for service robotics
Riaz et al. A model based approach for expressions invariant face recognition
Xu et al. Vision-based detection of dynamic gesture
Liang et al. Face pose estimation using near-infrared images
Jiang et al. A dynamic gesture recognition method based on computer vision
Sun et al. Method of analyzing and managing volleyball action by using action sensor of mobile device
Liu et al. Gesture recognition based on Kinect
Paul et al. Automatic adaptive facial feature extraction using CDF analysis
Ying et al. An automatic system for multi-view face detection and pose estimation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant