CN109800643B - Identity recognition method for living human face in multiple angles - Google Patents


Info

Publication number
CN109800643B
CN109800643B (application CN201811537149.8A)
Authority
CN
China
Prior art keywords
face
key points
formula
user
recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811537149.8A
Other languages
Chinese (zh)
Other versions
CN109800643A (en
Inventor
褚晶辉
汤文豪
王鹏
李敏
吕卫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yikaxing Science & Technology Co ltd
Tianjin University
Original Assignee
Beijing Yikaxing Science & Technology Co ltd
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yikaxing Science & Technology Co ltd, Tianjin University filed Critical Beijing Yikaxing Science & Technology Co ltd
Priority to CN201811537149.8A priority Critical patent/CN109800643B/en
Publication of CN109800643A publication Critical patent/CN109800643A/en
Application granted granted Critical
Publication of CN109800643B publication Critical patent/CN109800643B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a multi-angle identity recognition method for living human faces, comprising the following steps: if the coordinates of the 5 key points satisfy a first formula, the face is judged to be frontal; if the key point coordinates of the frontal face and the turned face satisfy a second or a third formula, the user's head is judged to have turned left or right, and a left-side or right-side face image of the user is collected; the frontal, left-side, and right-side face images are each fed into a convolutional neural network model, each producing a 2622-dimensional feature vector; each feature vector is compared with the feature vectors in the face library one by one via normalized dot product, yielding a similarity score list; the highest-scoring entry in each similarity list is taken as that angle's face recognition result, the three per-angle results vote on the final identity, and the result with the most votes is the final recognition result. The invention eliminates the influence of head-pose variability; at the same time, the user's left-right head turns while presenting the multi-angle faces serve as the basis for face liveness detection.

Description

Multi-angle identity recognition method for living human face
Technical Field
The invention relates to the fields of face liveness detection and face recognition, and in particular to a multi-angle identity recognition method for living human faces.
Background
Face recognition is a central problem in computer vision and machine learning, playing a key role in application scenarios such as video surveillance, access control, and human-computer interfaces. Face recognition technology identifies a user's identity; compared with other biometric modalities such as fingerprints or genes, it offers recognition that is hard for the user to perceive and that requires no active cooperation. A typical face recognition system comprises four modules: face detection, face key point detection, face characterization, and identity recognition.
Active research on face recognition technology began in the 1970s. Classical face recognition algorithms include the Principal Component Analysis (PCA) method, the Linear Discriminant Analysis (LDA) method, elastic matching techniques, and Bayesian methods. Like many computer vision problems, face recognition approaches can be divided into traditional methods and deep learning methods. Traditional methods include geometric feature methods, subspace analysis methods, statistical feature methods, template matching methods, and the like. Emerging deep learning methods include DeepID [1], DeepFace [2], VGGFace [3], FaceNet [4], and others.
Although face recognition has advanced greatly academically and has achieved very high accuracy on a variety of complex face datasets, in practical applications complex lighting conditions and the variability of head pose still make it challenging. On the other hand, recognition based only on a flat face photo raises many security problems, such as spoofing with a photograph, video, model, or mask of a legitimate user. It is therefore necessary to perform liveness detection during face recognition.
"A multi-angle face recognition method and system" (publication CN102609695A, published 2012-07-25) uses multiple cameras to collect face images, but of the multi-angle images collected it selects only the frontal image for recognition. "A liveness discrimination method and device based on face recognition" (publication CN105389554A, published 2016-03-09) fuses specular reflection features with texture-change features extracted near the face key points, but extracting texture-change features near every key point is computationally expensive.
Disclosure of Invention
The invention provides a multi-angle identity recognition method for living human faces. It performs decision-level fusion of multi-angle face feature information, eliminating the influence of head-pose variability; at the same time, the user's left-right head turns while presenting the multi-angle faces serve as the basis for face liveness detection. Details follow:
A multi-angle identity recognition method for living human faces comprises the following steps:
learning local binary features for each key point with cascaded regression trees, combining the local binary features, detecting the key points with linear regression, and finally obtaining 5 key points;
if the coordinates of the 5 key points satisfy a first formula, judging the face to be frontal;
if the key point coordinates of the frontal face and the turned face satisfy a second or a third formula, judging that the user's head has turned left or right, and collecting a left-side or right-side face image of the user;
feeding the frontal, left-side, and right-side face images of the user into a convolutional neural network model, each producing a 2622-dimensional feature vector; comparing each feature vector with the feature vectors in the face library one by one via normalized dot product to obtain a similarity score list;
and taking the highest-scoring entry in each similarity list as that angle's face recognition result, letting the three per-angle results vote on the final identity, and taking the result with the most votes as the final recognition result.
The 5 key points are: the left and right outer eye corners, the nose tip, and the left and right mouth corners.
Further, the first formula is specifically:
[Formula (1) is reproduced only as an image in the original publication.]
where (ax, ay), (bx, by), (cx, cy), (dx, dy), and (ex, ey) are the horizontal and vertical coordinates of the 5 face key points.
Wherein the second formula specifically is:
[Formula (2) is reproduced only as an image in the original publication.]
where ay1 and by1 are the vertical coordinates of face key points A1 and B1 in the turned face; ax1, bx1, and cx1 are the horizontal coordinates of key points A1, B1, and C1 in the turned face; and ax0, bx0, and cx0 are the horizontal coordinates of key points A0, B0, and C0 in the frontal face.
Further, the third formula is specifically:
[Formula (3) is reproduced only as an image in the original publication.]
in a specific implementation, the method further comprises:
and if the recognition results of the faces at the three angles are different, taking the face recognition result of the front face angle as a final face recognition result.
The technical scheme provided by the invention has the following beneficial effects:
1. the method considers face features from three angles and votes on the recognition result, effectively solving the false recognition caused by head-pose variability and enhancing the stability and accuracy of face recognition;
2. the invention performs liveness detection on the current user while collecting the user's multi-angle face images, enhancing the security of face recognition and effectively resisting photo-based attacks;
3. the method applies to scenarios such as face-based registration and login in mobile apps, driver identification in intelligent in-vehicle systems, identity verification at building entrances, and pedestrian identification by smart cameras.
Drawings
FIG. 1 is a flow chart of the multi-angle identity recognition method for living human faces;
FIG. 2 is a schematic diagram of the 68 key points of a human face;
FIG. 3 is a schematic diagram of the coordinate representation of the 5 face key points;
FIG. 4 is an experimental image of frontal face detection and key point detection;
FIG. 5 is an experimental image of left-side face detection and key point detection;
FIG. 6 is an experimental image of right-side face detection and key point detection;
FIG. 7 is an experimental table of multi-angle face similarity score lists.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention are described in further detail below.
Example 1
A multi-angle identity recognition method for living human faces; referring to FIG. 1, the method includes the following steps:
101: learning local binary features for each key point with cascaded regression trees, combining the local binary features, detecting the key points with linear regression, and finally obtaining 5 key points;
102: if the coordinates of the 5 key points satisfy the first formula, judging the face to be frontal and executing step 103; otherwise, prompting the user to present a frontal face until one is detected, then executing step 103;
103: if the key point coordinates of the frontal face and the turned face satisfy the second or the third formula, judging that the user's head has turned left or right, and collecting a left-side or right-side face image of the user;
104: feeding the user's frontal, left-side, and right-side face images into a convolutional neural network model, each producing a 2622-dimensional feature vector; comparing each feature vector with the feature vectors in the face library one by one via normalized dot product to obtain a similarity score list;
105: taking the highest-scoring entry in each similarity list as that angle's face recognition result, letting the three per-angle results vote on the final identity, and taking the result with the most votes as the final recognition result.
The 5 key points in step 101 are: the left and right outer eye corners, the nose tip, and the left and right mouth corners.
In particular, the method further comprises:
if the recognition results of the three face angles all differ, taking the frontal-face recognition result as the final face recognition result.
In conclusion, the method considers face features from three angles and votes on the recognition result, effectively solving the false recognition caused by head-pose variability and enhancing the stability and accuracy of face recognition.
Example 2
The scheme of Example 1 is further described below with reference to specific examples, FIG. 2, FIG. 3, and the calculation formulas:
the first step is as follows: image pre-processing
Homomorphic filtering processing is carried out on the image to be identified, the gray scale range of the image is adjusted, the problem of illumination imbalance on the image is eliminated, the image details of a dark area are enhanced, and meanwhile, the image details of a bright area are not lost.
The specific operation of this step is well known to those skilled in the art, and details thereof are not described in the embodiments of the present invention.
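Since the patent leaves the filtering details to the skilled reader, here is a minimal sketch of homomorphic filtering with NumPy only. The Gaussian high-emphasis filter and the `gamma_low`/`gamma_high`/`cutoff` values are assumptions chosen for illustration, not the patent's parameters.

```python
import numpy as np

def homomorphic_filter(img, gamma_low=0.5, gamma_high=1.5, cutoff=30.0):
    """Homomorphic filtering: attenuate illumination (low frequencies)
    and boost reflectance (high frequencies) of a grayscale image."""
    img = np.asarray(img, dtype=np.float64)
    log_img = np.log1p(img)                       # log turns I*R into log I + log R
    spec = np.fft.fftshift(np.fft.fft2(log_img))  # spectrum, DC at center

    # Gaussian high-emphasis filter H(u, v): gamma_low at DC, gamma_high far away
    rows, cols = img.shape
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    d2 = u[:, None] ** 2 + v[None, :] ** 2
    H = (gamma_high - gamma_low) * (1 - np.exp(-d2 / (2 * cutoff ** 2))) + gamma_low

    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(spec * H)))
    out = np.expm1(filtered)                      # back out of the log domain
    # rescale to the display range [0, 255]
    out = (out - out.min()) / (out.max() - out.min() + 1e-12) * 255.0
    return out
```

The output keeps the input's shape and is rescaled to [0, 255], so it can be passed directly to the face detector of the next step.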
The second step: face detection
Whether a face is present in the preprocessed image is detected using Haar features and an Adaboost cascade classifier. Haar features are fast to compute, and Adaboost combines many weak classifiers into one strong classifier, so face detection is fast and effective.
The third step: face key point detection
On the image after face detection, local binary features are learned for each key point with cascaded regression trees; the local binary features are combined and the key points are detected with linear regression.
The 68 face key points are calibrated as shown in FIG. 2. For ease of calculation, 5 face key points are derived from the 68: the midpoint of the line connecting the left eye's inner and outer corners (points 39 and 36), denoted A; the midpoint of the line connecting the right eye's inner and outer corners (points 42 and 45), denoted B; the nose tip (point 33), denoted C; and the two mouth corners (points 48 and 54), denoted D and E. The coordinate representation of the 5 face key points is shown in FIG. 3.
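The reduction from 68 landmarks to the 5 points A-E can be sketched as below; the index convention (36/39 left-eye corners, 42/45 right-eye corners, 33 nose tip, 48/54 mouth corners) follows the description above.

```python
import numpy as np

def five_keypoints(landmarks68):
    """Reduce a (68, 2) landmark array to the 5 points A-E used by the method."""
    lm = np.asarray(landmarks68, dtype=np.float64)
    A = (lm[36] + lm[39]) / 2.0   # midpoint of left-eye outer/inner corners
    B = (lm[42] + lm[45]) / 2.0   # midpoint of right-eye inner/outer corners
    C = lm[33]                    # nose tip
    D = lm[48]                    # left mouth corner
    E = lm[54]                    # right mouth corner
    return np.stack([A, B, C, D, E])   # shape (5, 2), rows A..E
```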
The fourth step: liveness detection
Liveness detection is performed according to changes in the face key points, as follows:
1) Judge whether the face is frontal: if the coordinates of the 5 face key points satisfy formula (1), the face is judged frontal; otherwise the user is prompted to present a frontal face until one is detected. Once detected, the frontal face image of the user is collected for processing in the fifth step.
[Formula (1) is reproduced only as an image in the original publication.]
where (ax, ay), (bx, by), (cx, cy), (dx, dy), and (ex, ey) are the horizontal and vertical coordinates of the 5 face key points.
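Formula (1) itself is published only as an image, so its exact conditions cannot be reproduced here. The following is one plausible symmetry test over the same 5 points, labeled as an assumption: the nose tip should sit near the vertical midline of the eyes and of the mouth, relative to the inter-ocular distance, with the tolerance `tol` chosen arbitrarily for illustration.

```python
def is_frontal(A, B, C, D, E, tol=0.12):
    """Heuristic frontal-face test on the 5 keypoints (A, B: eye centers,
    C: nose tip, D, E: mouth corners), each an (x, y) pair.
    NOT the patent's formula (1), which is not disclosed in text form."""
    ax, bx = A[0], B[0]
    eye_span = abs(bx - ax)
    if eye_span == 0:
        return False
    eye_mid_x = (ax + bx) / 2.0
    mouth_mid_x = (D[0] + E[0]) / 2.0
    # nose-tip offsets from the eye and mouth midlines, normalized by eye span
    nose_offset = abs(C[0] - eye_mid_x) / eye_span
    mouth_offset = abs(C[0] - mouth_mid_x) / eye_span
    return nose_offset < tol and mouth_offset < tol
```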
2) Denote the coordinates of the 5 face key points of the frontal face from step 1) as A0(ax0, ay0), B0(bx0, by0), C0(cx0, cy0), D0(dx0, dy0), and E0(ex0, ey0), and those of the turned face as A1(ax1, ay1), B1(bx1, by1), C1(cx1, cy1), D1(dx1, dy1), and E1(ex1, ey1).
The system prompts the user to turn the head left and right. If the frontal and turned key point coordinates satisfy formula (2), the user's head is judged to have turned left, and a right-side face image of the user is collected for processing in the fifth step; if they satisfy formula (3), the head is judged to have turned right, and a left-side face image of the user is collected for processing in the fifth step.
[Formula (2) is reproduced only as an image in the original publication.]
[Formula (3) is reproduced only as an image in the original publication.]
where ay1 and by1 are the vertical coordinates of face key points A1 and B1 in the turned face; ax1, bx1, and cx1 are the horizontal coordinates of key points A1, B1, and C1 in the turned face; and ax0, bx0, and cx0 are the horizontal coordinates of key points A0, B0, and C0 in the frontal face.
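Formulas (2) and (3) also appear only as images, so the decision rule below is an assumption built from the variables named above (the horizontal coordinates of A, B, C in the frontal and turned frames): when the head turns, the nose tip C shifts toward one eye relative to its frontal position between the two eyes. The `ratio` threshold is illustrative.

```python
def detect_turn(frontal_pts, current_pts, ratio=0.6):
    """Classify a head turn by comparing frontal (A0, B0, C0) and current
    (A1, B1, C1) keypoints; each argument is a sequence of (x, y) points
    with indices 0, 1, 2 for A, B, C. NOT the patent's formulas (2)/(3),
    which are not disclosed in text form."""
    ax0, bx0, cx0 = frontal_pts[0][0], frontal_pts[1][0], frontal_pts[2][0]
    ax1, bx1, cx1 = current_pts[0][0], current_pts[1][0], current_pts[2][0]
    span0, span1 = bx0 - ax0, bx1 - ax1
    if span0 <= 0 or span1 <= 0:
        return None
    # normalized horizontal position of the nose tip between the eyes
    p0 = (cx0 - ax0) / span0
    p1 = (cx1 - ax1) / span1
    if p1 < p0 * (1 - ratio):
        return "left"        # nose tip shifted strongly toward the left eye
    if p1 > p0 + (1 - p0) * ratio:
        return "right"       # nose tip shifted strongly toward the right eye
    return None
```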
The fifth step: frontal face identity recognition
The frontal face image collected in the fourth step is fed into the convolutional neural network model VGGFace, which outputs a 2622-dimensional feature vector.
The feature vector is compared with each feature vector in the face library via normalized dot product, yielding a similarity score list; scores range over [0, 1], with higher scores indicating greater similarity.
Take the highest-scoring entry in the similarity list and denote its score Smax. If Smax exceeds a set threshold (empirically 0.8), it is taken as the final face recognition result and the sixth step is skipped; if Smax is below the threshold, multi-angle face identity recognition is performed in the sixth step.
Because the frontal image carries the main identity information, the identity can be determined without side-face recognition when the frontal similarity score is reliable enough; only when it is not are the multi-angle face images considered. This speeds up the algorithm without loss of accuracy.
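The normalized dot product scoring and the 0.8 early-exit threshold described above can be sketched as follows. The vector dimension is arbitrary here, and the claim that scores fall in [0, 1] assumes non-negative features (as with rectified CNN activations), which is an assumption about the VGGFace descriptor rather than something the patent states.

```python
import numpy as np

def similarity_scores(query, gallery):
    """Normalized dot product (cosine similarity) between one feature
    vector and every row of the face-library matrix `gallery`."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return g @ q

def identify(query, gallery, names, threshold=0.8):
    """Return (name, score) of the best match, or (None, score) when the
    top score falls below the empirical 0.8 threshold from the patent."""
    scores = similarity_scores(query, gallery)
    best = int(np.argmax(scores))
    smax = float(scores[best])
    return (names[best], smax) if smax >= threshold else (None, smax)
```

When `identify` returns `None`, the pipeline falls through to the multi-angle recognition of the sixth step.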
The sixth step: multi-angle face identity recognition
The frontal, left-side, and right-side face images collected in the fourth step are each fed into the convolutional neural network model VGGFace, each outputting a 2622-dimensional feature vector.
Each feature vector is compared with the feature vectors in the face library one by one via normalized dot product, yielding a similarity score list; scores range over [0, 1], with higher scores indicating greater similarity.
The highest-scoring entry in each list is taken as that angle's recognition result, giving three results; the three results vote on the final identity, and the result with the most votes wins. In the extreme case where all three results differ (a three-way tie), the frontal-face result is taken as the final face recognition result.
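The voting rule above, including the frontal-face tiebreak, can be sketched as:

```python
from collections import Counter

def vote(frontal_id, left_id, right_id):
    """Majority vote over the three per-angle recognition results; on a
    three-way tie, fall back to the frontal-face result, as the method
    specifies for the case where all three identities differ."""
    results = [frontal_id, left_id, right_id]
    winner, count = Counter(results).most_common(1)[0]
    return winner if count >= 2 else frontal_id
```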
In conclusion, the method considers face features from three angles and votes on the recognition result, effectively solving the false recognition caused by head-pose variability and enhancing the stability and accuracy of face recognition.
Example 3
The embodiments of Examples 1 and 2 are further described below in conjunction with FIGS. 4-7:
The first step: image preprocessing
Homomorphic filtering is applied to the image to be recognized, adjusting its gray-scale range, correcting uneven illumination, and enhancing detail in dark regions without losing detail in bright regions.
The specific operation of this step is well known to those skilled in the art and is not detailed in the embodiments of the present invention.
The second step: face detection
Whether a face is present in the preprocessed image is detected using Haar features and an Adaboost cascade classifier. Detection uses an image pyramid and a sliding window: the pyramid scale factor is set between 1.1 and 1.5, the minimum number of neighboring window matches between 2 and 5, and the minimum match size to (20, 20). The detection results are shown by the rectangles marked in FIGS. 4, 5, and 6.
The third step: face key point detection
Face key points are detected on the image after face detection using the cascaded regression tree method. The detected face rectangle and the original image are sent to a regression detector, which marks the 68 face key points; the 5 representative key points are then selected from the 68. The detection results are shown by the 5 marked key points in FIGS. 4, 5, and 6.
The fourth step: liveness detection
After the face key points are calibrated, liveness detection is performed according to their changes, as follows:
1) The frontal face image of the user is collected for the fifth step, and the coordinates of the 5 face key points of the frontal face are recorded, as shown in FIG. 4.
2) The user turns the head to the right; the right turn is detected from the relative positions of the key points, a left-side face image of the user is collected for the fifth step, and the coordinates of the 5 key points of the left-side face are recorded, as shown in FIG. 5.
3) The user turns the head to the left; the left turn is detected from the relative positions of the key points, a right-side face image of the user is collected for the fifth step, and the coordinates of the 5 key points of the right-side face are recorded, as shown in FIG. 6.
The fifth step: frontal face identity recognition
The frontal face image collected in the fourth step is fed into the convolutional neural network model VGGFace to obtain a similarity score list; scores range over [0, 1], with higher scores indicating greater similarity. The frontal similarity scores are shown in the third column of the table in FIG. 7. Since the top-ranked similarity score is below the empirical threshold of 0.8, multi-angle face identity recognition is performed in the sixth step.
The sixth step: multi-angle face identity recognition
The left-side and right-side face images collected in the fourth step are each fed into the convolutional neural network model VGGFace to obtain similarity score lists; the left-side scores are shown in the second column and the right-side scores in the fourth column of the table in FIG. 7. The final face recognition result is produced by voting among the top-ranked results of the three angles' similarity scores.
Reference documents
[1] Sun Y, Wang X, Tang X. Deep Learning Face Representation from Predicting 10,000 Classes[C]//Computer Vision and Pattern Recognition. IEEE, 2014: 1891-1898.
[2] Taigman Y, Yang M, Ranzato M, et al. DeepFace: Closing the Gap to Human-Level Performance in Face Verification[C]//IEEE Conference on Computer Vision and Pattern Recognition. IEEE Computer Society, 2014: 1701-1708.
[3] Parkhi O M, Vedaldi A, Zisserman A. Deep face recognition[C]//BMVC. 2015, 1(3): 6.
[4] Schroff F, Kalenichenko D, Philbin J. FaceNet: A unified embedding for face recognition and clustering[J]. 2015: 815-823.
Those skilled in the art will appreciate that the drawings are only schematic illustrations of preferred embodiments, and that the serial numbers of the embodiments above are for description only and do not indicate relative merit.
The above describes only preferred embodiments of the present invention and is not intended to limit it; any modifications, equivalents, improvements, and the like that fall within the spirit and principles of the present invention are intended to be included within its scope.

Claims (3)

1. A multi-angle identity recognition method for living human faces, characterized by comprising the following steps:
learning local binary features for each key point with cascaded regression trees, combining the local binary features, detecting the key points with linear regression, and finally obtaining 5 key points;
if the coordinates of the 5 key points satisfy a first formula, judging the face to be frontal;
if the key point coordinates of the frontal face and the turned face satisfy a second formula, judging that the user's head has turned left, and if they satisfy a third formula, judging that it has turned right, and collecting left-side and right-side face images of the user accordingly;
feeding the user's frontal, left-side, and right-side face images into a convolutional neural network model, each producing a 2622-dimensional feature vector; comparing each feature vector with the feature vectors in the face library one by one via normalized dot product to obtain a similarity score list;
taking the highest-scoring entry in each similarity list as that angle's face recognition result, letting the three per-angle results vote on the final identity, and taking the result with the most votes as the final recognition result;
the first formula is specifically:
[Formula (1) is reproduced only as an image in the original publication.]
where (ax, ay), (bx, by), (cx, cy), (dx, dy), and (ex, ey) are the horizontal and vertical coordinates of the 5 face key points;
the second formula is specifically:
[Formula (2) is reproduced only as an image in the original publication.]
where ay1 and by1 are the vertical coordinates of face key points A1 and B1 in the turned face; ax1, bx1, and cx1 are the horizontal coordinates of key points A1, B1, and C1 in the turned face; and ax0, bx0, and cx0 are the horizontal coordinates of key points A0, B0, and C0 in the frontal face;
the third formula is specifically:
[Formula (3) is reproduced only as an image in the original publication.]
2. The multi-angle identity recognition method for living human faces according to claim 1, wherein the 5 key points are: the left and right outer eye corners, the nose tip, and the left and right mouth corners.
3. The multi-angle identity recognition method for living human faces according to claim 1, further comprising:
if the recognition results of the three face angles all differ, taking the frontal-face recognition result as the final face recognition result.
CN201811537149.8A 2018-12-14 2018-12-14 Identity recognition method for living human face in multiple angles Active CN109800643B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811537149.8A CN109800643B (en) 2018-12-14 2018-12-14 Identity recognition method for living human face in multiple angles


Publications (2)

Publication Number Publication Date
CN109800643A CN109800643A (en) 2019-05-24
CN109800643B true CN109800643B (en) 2023-03-31

Family

ID=66556870

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811537149.8A Active CN109800643B (en) 2018-12-14 2018-12-14 Identity recognition method for living human face in multiple angles

Country Status (1)

Country Link
CN (1) CN109800643B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12136210B2 (en) * 2019-05-21 2024-11-05 Huawei Technologies Co., Ltd. Image processing method and apparatus

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110363091B (en) * 2019-06-18 2021-08-10 广州杰赛科技股份有限公司 Face recognition method, device and equipment under side face condition and storage medium
CN112307817B (en) * 2019-07-29 2024-03-19 中国移动通信集团浙江有限公司 Face living body detection method, device, computing equipment and computer storage medium
CN110443213A (en) * 2019-08-12 2019-11-12 北京比特大陆科技有限公司 Type of face detection method, object detection method and device
JP7540442B2 (en) * 2019-09-12 2024-08-27 日本電気株式会社 Image analysis device, control method, and program
CN113553882A (en) * 2020-04-24 2021-10-26 深圳市万普拉斯科技有限公司 Vision detection method and device, computer equipment and storage medium
CN111680608B (en) * 2020-06-03 2023-08-18 长春博立电子科技有限公司 Intelligent sports auxiliary training system and training method based on video analysis
CN111931677A (en) * 2020-08-19 2020-11-13 北京影谱科技股份有限公司 Face detection method and device and face expression detection method and device
CN112149559A (en) * 2020-09-22 2020-12-29 沈澈 Face recognition method and device, readable storage medium and computer equipment
CN112801066B (en) * 2021-04-12 2022-05-17 北京圣点云信息技术有限公司 Identity recognition method and device based on multi-posture facial veins
CN113191322A (en) * 2021-05-24 2021-07-30 口碑(上海)信息技术有限公司 Method and device for detecting skin of human face, storage medium and computer equipment
CN113837009A (en) * 2021-08-26 2021-12-24 张大艳 Internet of things data acquisition and analysis system based on artificial intelligence
CN114120376A (en) * 2021-11-18 2022-03-01 黑龙江大学 Multi-mode image acquisition device and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011215843A (en) * 2010-03-31 2011-10-27 Secom Co Ltd Facial image processing apparatus
CN106096560A (en) * 2016-06-15 2016-11-09 广州尚云在线科技有限公司 A kind of face alignment method
CN106503687A (en) * 2016-11-09 2017-03-15 合肥工业大学 The monitor video system for identifying figures of fusion face multi-angle feature and its method
CN106875191A (en) * 2017-02-27 2017-06-20 努比亚技术有限公司 One kind scanning payment processing method, device and terminal
WO2017106996A1 (en) * 2015-12-21 2017-06-29 厦门中控生物识别信息技术有限公司 Human facial recognition method and human facial recognition device
CN107480658A (en) * 2017-09-19 2017-12-15 苏州大学 Face identification device and method based on multi-angle video


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Investigating the Periocular-Based Face Recognition Across Gender Transformation; Gayathri Mahalingam, et al.; IEEE Transactions on Information Forensics and Security; 2014-10-02; 1-10 *
Multi-view face recognition based on neural network ensembles (基于神经网络集成的多视角人脸识别); Zhou Zhihua, et al.; Journal of Computer Research and Development (《计算机研究与发展》); 2001-10-30; 1205-1210 *


Also Published As

Publication number Publication date
CN109800643A (en) 2019-05-24

Similar Documents

Publication Publication Date Title
CN109800643B (en) Identity recognition method for living human face in multiple angles
Mahmood et al. WHITE STAG model: Wise human interaction tracking and estimation (WHITE) using spatio-temporal and angular-geometric (STAG) descriptors
Singh et al. Face detection and recognition system using digital image processing
Shao et al. Deep convolutional dynamic texture learning with adaptive channel-discriminability for 3D mask face anti-spoofing
Rodriguez et al. Face authentication using adapted local binary pattern histograms
Chintalapati et al. Automated attendance management system based on face recognition algorithms
Marcel et al. On the recent use of local binary patterns for face authentication
Masupha et al. Face recognition techniques, their advantages, disadvantages and performance evaluation
JP2006293644A (en) Information processing device and information processing method
CN105574509B (en) A kind of face identification system replay attack detection method and application based on illumination
CN107480586B (en) Face characteristic point displacement-based biometric photo counterfeit attack detection method
Bashbaghi et al. Watch-list screening using ensembles based on multiple face representations
Galdámez et al. Ear recognition using a hybrid approach based on neural networks
Arya et al. Automatic face recognition and detection using OpenCV, haar cascade and recognizer for frontal face
Bouhabba et al. Support vector machine for face emotion detection on real time basis
Ramsoful et al. Feature extraction techniques for dorsal hand vein pattern
Alsubari et al. Facial expression recognition using wavelet transform and local binary pattern
WO2006057475A1 (en) Face detection and authentication apparatus and method
Jahromi et al. Automatic access control based on face and hand biometrics in a non-cooperative context
Wei Unconstrained face recognition with occlusions
Drosou et al. Event-based unobtrusive authentication using multi-view image sequences
Bukis et al. Survey of face detection and recognition methods
Fratric et al. Real-time model-based hand localization for unsupervised palmar image acquisition
Ekinci et al. Kernel Fisher discriminant analysis of Gabor features for online palmprint verification
Wang et al. Expression robust three-dimensional face recognition based on Gaussian filter and dual-tree complex wavelet transform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant