CN111680639B - Face recognition verification method and device and electronic equipment - Google Patents


Info

Publication number: CN111680639B
Application number: CN202010528279.6A
Authority: CN (China)
Prior art keywords: user, face, data, verification, face recognition
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN111680639A
Inventors: 任陶瑞, 陈秀文, 罗宁
Assignee (original and current): Alipay Hangzhou Information Technology Co Ltd

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174: Facial expression recognition

Abstract

The embodiments of this specification provide a face recognition verification method and apparatus, and an electronic device. The method comprises the following steps: when a first user to be verified initiates a first face recognition verification operation, acquiring face image data of the first user to be verified; extracting face recognition feature data of the first user to be verified from the face image data; extracting face state data of the first user to be verified from the face image data; invoking first verification information; and matching the first verification information against the face recognition feature data and the face state data of the first user to be verified, and determining a verification result of the first face recognition verification operation for the first user to be verified according to the matching result. With the method of the embodiments of this specification, an illegitimate user cannot pass identity verification by forging face recognition feature data after a face image leaks, so that the security of face recognition verification is greatly improved.

Description

Face recognition verification method and device and electronic equipment
Technical Field
The present disclosure relates to the field of intelligent terminal technologies, and in particular, to a face recognition verification method and apparatus, and an electronic device.
Background
In the field of identity authentication technology, a common authentication scheme at present is the face recognition scheme, which confirms the identity of a user by recognizing face recognition features. In a face recognition scheme, the face recognition features are digitized descriptions of a user's facial features extracted from an image of the user's face.
Because facial features are unique for most human beings, the face recognition features extracted from users' facial images are also unique, so different users can be accurately distinguished by their face recognition features and user identities can be accurately recognized. However, in some application scenarios, face recognition carries security risks.
For example, multiple users may look very similar. Subject to the image acquisition accuracy of the facial image acquisition device and the feature extraction mechanism of the face recognition algorithm, users with similar appearances may be identified as the same user.
For another example, an illegitimate user may pass authentication by forging a face. An illegitimate user may spoof the face recognition system using a photograph of a legitimate user's face or a 3D mask of a legitimate user's face, thereby falsifying the identity of the legitimate user.
Disclosure of Invention
In order to solve the security problem existing in the face recognition scheme in the prior art, embodiments of the present specification provide a face recognition verification method, an apparatus, an electronic device, and a computer-readable storage medium.
The embodiment of the specification adopts the following technical scheme:
in a first aspect, an embodiment of the present specification provides a face recognition verification method, including:
when a first user to be verified initiates a first face identification verification operation, acquiring face image data of the first user to be verified;
extracting the face recognition characteristic data of the first user to be verified according to the face image data of the first user to be verified;
extracting facial state data of the first user to be verified according to the facial image data of the first user to be verified, wherein the facial state data comprise expression data used for describing facial expressions of the user and/or action data used for describing facial actions of the user;
calling first verification information, wherein the first verification information is information generated according to face recognition feature data of a first user and face state data corresponding to the first user for the first face recognition verification operation, and the first user is the user identity used by the first user to be verified when initiating the first face recognition verification operation;
and matching the first verification information with the face recognition feature data and the face state data of the first user to be verified, and determining a verification result of the first face recognition verification operation for the first user to be verified according to a matching result.
In an embodiment of the present specification:
the first verification information comprises face recognition feature data of the first user and face state data of the first user aiming at the first face recognition verification operation;
determining a verification result of the first face identification verification operation for the first to-be-verified user according to the matching result, wherein:
and when the face recognition feature data and the face state data of the first user to be verified are respectively matched with the face recognition feature data and the face state data in the first verification information, judging that the first user to be verified passes the verification of the first face recognition verification operation.
In an embodiment of the present specification:
the first verification information includes face recognition feature data of the first user and a plurality of face state data of the first user for the first face recognition verification operation, one of the face state data in the first verification information corresponds to a verification result of the first face recognition verification operation, and one verification result of the first face recognition verification operation corresponds to one or more of the face state data in the first verification information;
the matching the first verification information and the face recognition feature data and the face state data of the first user to be verified, and determining a verification result of the first face recognition verification operation for the first user to be verified according to a matching result include:
matching the face recognition characteristic data of the first user to be verified with the face recognition characteristic data in the first verification information;
when the face recognition feature data of the first user to be verified are matched with the face recognition feature data in the first verification information, matching the face state data of the first user to be verified with a plurality of face state data in the first verification information;
when the face state data of the first user to be verified is matched with first face state data in the plurality of face state data, taking the verification result corresponding to the first face state data as the verification result of the first face recognition verification operation for the first user to be verified.
In an embodiment of the present specification, the facial state data includes facial expression description information and/or facial action description information, or the facial state data includes a facial expression name and/or a facial action name.
In an embodiment of the present specification:
the first verification information comprises fusion feature data, and the fusion feature data in the first verification information is generated after feature fusion operation is carried out on the face recognition feature data of the first user and the face state data corresponding to the first face recognition verification operation of the first user;
the matching of the first verification information, the face recognition feature data and the face state data of the first user to be verified includes:
performing the feature fusion operation on the face recognition feature data and the face state data of the first user to be verified to generate fusion feature data of the first user to be verified;
and matching the fused feature data in the first verification information with the fused feature data of the first user to be verified.
In an embodiment of the present specification:
the first verification information includes a plurality of fused feature data, the plurality of fused feature data in the first verification information are generated after feature fusion operations are respectively performed on face identification feature data of the first user and a plurality of facial state data of the first user for the first face identification verification operation, one fused feature data in the first verification information corresponds to one verification result of the first face identification verification operation, and one verification result of the first face identification verification operation corresponds to one or more fused feature data in the first verification information;
the matching the first verification information and the face recognition feature data and the face state data of the first user to be verified, and determining a verification result of the first face recognition verification operation for the first user to be verified according to the matching result includes:
performing the feature fusion operation on the face recognition feature data and the face state data of the first user to be verified to generate fusion feature data of the first user to be verified;
matching the fused feature data of the first user to be verified with the plurality of fused feature data in the first verification information;
when the fused feature data of the first user to be verified is matched with the first fused feature data in the multiple fused feature data, taking a verification result corresponding to the first fused feature data as a verification result of the first face identification verification operation for the first user to be verified.
In an embodiment of the present specification, the method further includes:
monitoring whether a first image data output request exists or not, wherein the first image data output request is used for requesting to output first image data containing face image data of the first user to be verified;
when the first image data output request exists, adjusting the first image data to generate second image data, wherein the facial expression and/or facial action of the first user to be authenticated in the second image data are different from those of the first image data;
and outputting the second image data.
In an embodiment of the present specification, the adjusting the first image data to generate the second image data includes:
extracting first facial state data of the first user to be authenticated according to the first image data;
adjusting the first facial state data to generate second facial state data, wherein the second facial state data corresponds to a different facial expression and/or facial motion than the first facial state data;
loading the second face state data to the first image data to generate second image data.
In an embodiment of the present specification, the invoking the first verification information includes:
and calling the first verification information corresponding to the first face recognition verification operation from a verification information base, wherein the verification information base comprises one or more pieces of verification information, and one piece of verification information in the verification information base corresponds to one or more face recognition verification operations.
In an embodiment of the present specification, the method further includes:
acquiring face recognition characteristic data of the first user;
acquiring face state data corresponding to the first user aiming at the first face identification verification operation;
generating the first verification information according to the face recognition feature data of the first user and the face state data corresponding to the first user aiming at the first face recognition verification operation;
and saving the first verification information.
In an embodiment of the present specification, the acquiring face recognition feature data of the first user includes:
invoking face recognition feature data of the first user from a verification information repository for performing face recognition operations;
or,
and extracting the face recognition characteristic data of the first user from the face image data of the first user to be used as the face recognition characteristic data of the first user.
In an embodiment of the present specification, before the invoking the first verification information, the method further includes:
confirming the first user, including:
identifying the face recognition feature data of the first user to be verified to confirm the user identity of the first user to be verified, and confirming the first user according to the user identity of the first user to be verified;
or,
and confirming the first user according to the identity information input by the first user to be verified.
In a second aspect, an embodiment of the present specification provides a face recognition verification apparatus, including:
the image acquisition module is used for acquiring the face image data of a first user to be verified when the first user to be verified initiates a first face identification verification operation with the identity of the first user;
the feature extraction module is used for extracting face recognition feature data of the first user to be verified according to the face image data of the first user to be verified;
the state extraction module is used for extracting facial state data of the first user to be verified according to the facial image data of the first user to be verified, wherein the facial state data comprise expression data used for describing facial expressions of the user and/or action data used for describing facial actions of the user;
the verification information calling module is used for calling first verification information, wherein the first verification information is generated according to the face recognition feature data of the first user and the face state data corresponding to the first face recognition verification operation of the first user;
and the verification module is used for matching the first verification information with the face recognition feature data and the face state data of the first user to be verified and determining a verification result of the first face recognition verification operation aiming at the first user to be verified according to a matching result.
In a third aspect, an embodiment of the present specification provides an electronic device comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the electronic device to perform the method steps according to the first aspect.
In a fourth aspect, an embodiment of the present specification provides a computer-readable storage medium, in which a computer program is stored, which, when run on a computer, causes the computer to perform the method according to the first aspect.
According to the technical scheme provided by the embodiment of the specification, at least the following technical effects can be realized:
according to the method of the embodiment of the specification, in the face recognition verification process, the face characteristics of the user are verified, and whether the face state presented by the user is the preset face state is also verified, so that the condition that an illegal user can pass identity verification by forging face recognition characteristic data due to face image leakage is avoided, and the safety of face recognition verification is greatly improved.
Drawings
FIG. 1 is a flow chart illustrating an embodiment of a face recognition verification method according to the present disclosure;
FIG. 2 is a partial flow diagram of an embodiment of a face recognition verification method according to the present disclosure;
FIG. 3 is a partial flow diagram of an embodiment of a face recognition verification method according to the present disclosure;
FIG. 4 is a block diagram illustrating an embodiment of a face recognition verification apparatus according to the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the present disclosure more clear, the technical solutions of the present disclosure will be clearly and completely described below with reference to the specific embodiments of the present disclosure and the accompanying drawings. It is to be understood that the embodiments described are only a few embodiments of the present disclosure, and not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present specification without any creative effort belong to the protection scope of the present specification.
The terminology used in the description of the embodiments section is for the purpose of describing particular embodiments only and is not intended to be limiting of the description.
The present specification provides a face recognition verification method that addresses the security problems of prior-art face recognition schemes. To arrive at the method of the embodiments of this specification, the inventors first analyzed the practical application scenarios of face recognition. In general, face recognition confirms the identity of a user by recognizing face recognition features in an image of the user's face. Because face recognition features are unique, in theory anyone who obtains a complete facial image of a user can obtain that user's face recognition features, and can therefore forge those features to falsify the user's identity and pass face recognition identity verification.
Further, in an actual application scenario, the facial image of a user represents not only the user's face recognition features but also the user's facial expression and/or facial movement. Unlike face recognition features, the differences between individuals in facial expressions and facial movements are relatively small, so expressions and movements cannot by themselves be used for identity recognition. However, facial expressions and/or facial movements are customizable: a user can present whatever facial expression and facial movement the user wants. Therefore, if the facial expression and/or facial movement used for identity verification are preset, then during face recognition verification, facial expression and/or facial movement recognition can be performed in addition to face feature recognition, to judge whether the facial expression and/or facial movement currently presented by the user is the preset one. In this way, even if an illegitimate user forges the face recognition features of a legitimate user, the illegitimate user cannot pass identity verification without knowing the preset facial expression and/or facial movement, which greatly improves the security of face recognition verification.
The technical solutions provided by the embodiments of the present description are described in detail below with reference to the accompanying drawings. In the embodiment shown in the drawings, various steps of the method shown in the embodiment can be realized in various different ways. The specific implementation described in the following embodiments or other implementations may be constructed by the skilled person according to the actual needs in the field.
Fig. 1 is a flowchart illustrating an embodiment of a face recognition verification method according to the present disclosure. As shown in fig. 1, in an embodiment of the present specification, the first face identification verification operation is a verification operation based on a face identification verification method proposed in the present specification, and the following steps are performed to implement the first face identification verification operation:
step 100, judging whether a first user to be verified initiates a first face identification verification operation;
if not, returning to the step 100;
if yes, go to step 110;
step 110, acquiring face image data of a first user to be verified;
step 111, extracting face recognition characteristic data of a first user to be verified according to the face image data of the first user to be verified;
step 112, extracting facial state data of the first user to be verified according to the facial image data of the first user to be verified, wherein the facial state data comprise expression data used for describing facial expressions (such as smiles, laughs, sadness, anger and the like) of the user and/or motion data used for describing facial and head motions (such as nodding, shaking, blinking and the like) of the user;
step 120, calling first verification information, wherein the first verification information is generated according to face recognition feature data of a first user and face state data corresponding to the first user for the first face recognition verification operation, and the first user is the user identity used by the first user to be verified when initiating the first face recognition verification operation;
and step 130, matching the first verification information with the face recognition feature data and the face state data of the first user to be verified, and determining a verification result of the first face recognition verification operation for the first user to be verified according to the matching result.
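As a concrete illustration of steps 100 to 130, the following Python sketch wires the two checks (face recognition features plus face state) together. The function names, the cosine-similarity matching, and the threshold value are illustrative assumptions rather than the implementation described in this specification; the actual feature extraction and state recognition models are passed in as callables.

```python
# Minimal sketch of the verification flow in steps 110-130 (illustrative only).
from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class VerificationInfo:
    face_features: np.ndarray   # enrolled face recognition feature vector
    face_state: str             # preset facial expression/action, e.g. "smile"

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(face_image: np.ndarray,
           extract_face_features: Callable[[np.ndarray], np.ndarray],
           extract_face_state: Callable[[np.ndarray], str],
           verification_info: VerificationInfo,
           feature_threshold: float = 0.8) -> bool:
    """Extract feature data (step 111) and state data (step 112) from the
    captured image, then match both against the first verification information
    invoked in step 120 (the matching of step 130)."""
    features = extract_face_features(face_image)
    state = extract_face_state(face_image)
    feature_ok = cosine_similarity(features, verification_info.face_features) >= feature_threshold
    state_ok = (state == verification_info.face_state)
    return feature_ok and state_ok
```

In this simple form the verification passes only when both the feature match and the state match succeed, which corresponds to the embodiment described below in which the first verification information holds one set of face recognition feature data and one piece of face state data.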
According to the method shown in the embodiment of fig. 1, in the process of face identification verification, not only the facial features of the user are verified, but also whether the facial state presented by the user is the preset facial state is verified, so that the condition that an illegal user can pass identity verification by forging face identification feature data due to face image leakage is avoided, and the safety of face identification verification is greatly improved.
Further, in order to implement the method flow of the embodiment shown in fig. 1, in an embodiment of this specification, a process of generating verification information for performing face recognition verification is also provided.
Specifically, fig. 2 is a partial flowchart of an embodiment of a face recognition verification method according to the present disclosure. As shown in fig. 2, in an embodiment of the present specification, the following steps are performed to obtain the first authentication information used in the embodiment shown in fig. 1:
step 210, obtaining face recognition characteristic data of a first user;
step 220, acquiring face state data corresponding to a first face identification verification operation of a first user;
step 230, generating first verification information according to the face recognition feature data of the first user and the face state data corresponding to the first user for the first face recognition verification operation;
step 240, saving the first verification information.
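A rough sketch of steps 210 to 240 follows, assuming a simple in-memory repository keyed by user and verification operation; the identifiers and the record layout are invented for illustration and are not prescribed by this specification.

```python
# Illustrative enrollment of first verification information (steps 210-240).
import numpy as np

verification_info_repository: dict = {}   # keyed by (user_id, operation_id)

def enroll_verification_info(user_id: str,
                             operation_id: str,
                             face_features: np.ndarray,   # acquired in step 210
                             face_state: str) -> None:    # acquired in step 220
    """Combine the feature data and the preset face state into one record
    (step 230) and persist it for later invocation in step 120 (step 240)."""
    verification_info_repository[(user_id, operation_id)] = {
        "face_features": face_features,
        "face_state": face_state,
    }

# Hypothetical usage: a user registers a smile for a particular operation.
enroll_verification_info("user_a", "operation_1", np.random.rand(128), "smile")
```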
Specifically, in an implementation manner of step 210, facial image data of the first user is obtained, and facial recognition feature data of the first user is extracted from the facial image data of the first user.
Further, considering that the first user is a registered legitimate user whose face recognition features have already been stored for other face recognition verification operations, and that the face recognition features of the first user are unique, in an implementation of step 210 the stored face recognition features of the first user are invoked directly. Specifically, the face recognition feature data of the first user is called from a verification information repository used for performing face recognition operations.
Further, in step 120, the first verification information is essentially the information corresponding to the first user for the first face recognition verification operation, and the first user is the user identity used by the first user to be verified when initiating the first face recognition verification operation. Therefore, before the first verification information is invoked, the first user needs to be confirmed, that is, it needs to be confirmed which user identity the first user to be verified wants to use to initiate the first face recognition verification operation.
Specifically, in an embodiment of the present specification, the identity of the first user is obtained from the first user to be verified. That is, the first user to be verified declares which user identity he or she uses to initiate the first face recognition verification operation.
Specifically, in an embodiment of the present specification, before performing step 120, the method further includes:
and confirming the first user, including confirming the first user according to the identity information input by the first user to be verified.
Further, considering that the user identity of the user to be authenticated is identifiable by the face recognition feature of the user to be authenticated, in an embodiment of the present specification, before performing step 120, the method further includes:
and confirming the first user, including recognizing the face recognition characteristic data of the first user to be verified so as to confirm the user identity of the first user to be verified, and confirming the first user according to the user identity of the first user to be verified.
Further, in an actual application scenario, because face recognition features are unique, verification information generated from face recognition features alone is unique for a given user; when verification relies only on face recognition features, different verification operation flows invoke the same verification information, namely the face recognition features corresponding to the user identity that the user to be verified claims. The facial state, however, is customizable and can be preset to a number of different states, so verification information generated from face state data need not be unique: different verification information may be set for different verification operations. For example, for a user A, the verification information for verification operation A may be set as verification information A, generated from user A's face recognition feature data and a face state data A preset by user A; and the verification information for verification operation B may be set as verification information B, generated from user A's face recognition feature data and a face state data B preset by user A.
Specifically, in an embodiment of the present specification, step 120 includes:
first verification information corresponding to a first face recognition verification operation is called from a verification information base, wherein the verification information base comprises one or more pieces of verification information, and one piece of verification information in the verification information base corresponds to the one or more face recognition verification operations.
Further, in an embodiment of the present specification, in the process of face identification verification, different verification processes are used to verify the face identification feature data and the face state data respectively. Specifically, in an embodiment of the present specification, the first verification information called in step 120 includes face recognition feature data of the first user and face state data of the first user for the first face recognition verification operation. In step 130, when the face recognition feature data and the face state data of the first user to be authenticated respectively match with the face recognition feature data and the face state data in the first authentication information, it is determined that the first user to be authenticated passes the authentication of the first face recognition authentication operation.
Specifically, in an implementation manner of step 130, it is first verified whether the face recognition feature data of the first user to be verified is matched with the face recognition feature data in the first verification information; if not, the verification fails; if so, verifying whether the face state data of the first user to be verified are respectively matched with the face state data in the first verification information; if not, the verification fails; if so, the verification is successful.
Further, in practical application scenarios, because the face recognition features have uniqueness, face recognition verification based on the face recognition features generally has only two verification results, for example, verification passes or verification fails. However, the facial state can be preset into a plurality of different states due to the customization; therefore, the face recognition verification based on the face state can have various different verification results corresponding to various preset face states. Also, a plurality of different face states may be preset to correspond to the same authentication result. For example, it is preset that the facial state a corresponds to the verification result a, and the facial states B and C correspond to the verification result B, so that when the user initiates face recognition verification in the facial state a, the user can be fed back with the verification result a, and when the user initiates face recognition verification in the facial state B or the facial state C, the user can be fed back with the verification result B.
Specifically, in an embodiment of the present specification, the first verification information called in step 120 includes face recognition feature data of the first user and a plurality of pieces of facial state data of the first user for a first face recognition verification operation, one piece of facial state data in the first verification information corresponds to one verification result of the first face recognition verification operation, and one verification result of the first face recognition verification operation corresponds to one or more pieces of facial state data in the first verification information. In step 130:
matching the face recognition characteristic data of the first user to be verified with the face recognition characteristic data in the first verification information;
when the face recognition feature data of the first user to be verified are matched with the face recognition feature data in the first verification information, matching the face state data of the first user to be verified with a plurality of face state data in the first verification information;
when the face state data of the first user to be verified is matched with the first face state data in the plurality of face state data, the verification result corresponding to the first face state data is used as the verification result of the first face identification verification operation for the first user to be verified.
For example, in an application scenario, it is set that, for the first face recognition verification operation, the verification result is "yes" when user A smiles or nods, the verification result is "no" when user A shakes his or her head, and the verification result is "cancel" when matching fails. A user to be verified, operating with the identity of user A, initiates the first face recognition verification operation. The face recognition feature data and the face state data of the user to be verified are acquired. First, it is judged whether the face recognition feature data of the user to be verified matches the face recognition feature data corresponding to user A; if not, the verification fails and the verification result is "cancel". If so, the face state data of the user to be verified is analyzed. If the face state presented by the user to be verified at this time is smiling, the verification result is "yes". If the face state presented at this time is shaking the head, the verification result is "no". If the face state presented by the user to be verified matches none of smiling, nodding, or shaking the head, the verification fails and the verification result is "cancel".
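The example above can be expressed as a small lookup from face state names to verification results, applied only after the face recognition features have matched. The similarity measure, threshold, and state/result labels below are illustrative assumptions.

```python
# Illustrative mapping from preset face states to verification results.
import numpy as np

def match_with_state_results(candidate_features: np.ndarray,
                             candidate_state: str,
                             enrolled_features: np.ndarray,
                             state_to_result: dict,
                             feature_threshold: float = 0.8) -> str:
    sim = float(np.dot(candidate_features, enrolled_features) /
                (np.linalg.norm(candidate_features) * np.linalg.norm(enrolled_features)))
    if sim < feature_threshold:
        return "cancel"                                     # feature mismatch: verification fails
    return state_to_result.get(candidate_state, "cancel")   # unmatched state also fails

enrolled = np.random.rand(128)
print(match_with_state_results(enrolled, "nod", enrolled,
                               {"smile": "yes", "nod": "yes", "shake_head": "no"}))  # -> "yes"
```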
Further, in an embodiment of the present specification, to make the comparison of face state data easier, a facial expression name (e.g., smile, laugh, anger) and/or a facial action name (e.g., nod, shake the head, frown, blink) is used as the face state data. When verification information is registered, facial expression and/or facial action recognition is performed on the facial image the user supplies for the corresponding face state data, and the name of the facial expression and/or facial action presented by the user is extracted and used as the face state data. When face recognition verification is performed, facial expression and/or facial action recognition is performed on the facial image used for verifying the user to be verified, and the name of the facial expression and/or facial action presented by the user to be verified is extracted and then matched against the verification information. That is, the facial state data includes facial expression names and/or facial action names.
Further, in a practical application scenario, facial expressions and/or facial movements also carry personal attributes. That is, for the same facial expression or facial action, the expression details or action details differ from person to person. For example, for the same smile expression, different people's facial details when smiling are different (e.g., the degree to which the mouth corners turn up, the degree of eye openness); for the same nodding action, different people's action details when nodding are different (e.g., head movement amplitude, nodding frequency, nodding speed). Therefore, if, when matching the facial state, a personality judgment (judging whether the facial expression/action details presented by the user to be verified are the details the registered user exhibits when presenting the preset facial expression/action) is added on top of the common judgment (judging whether the facial expression/action presented by the user to be verified is the preset facial expression/action), the security of face recognition verification can be further improved.
Therefore, in an embodiment of the present specification, in step 220, not the facial expression name and/or the facial action name but the facial expression description information and/or the facial action description information is acquired. That is, the facial state data includes facial expression description information and/or facial action description information.
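One way to picture facial state description information with such person-specific details is a small structure holding the expression parameters and a tolerance-based comparison; the field names and tolerance values below are invented for illustration only.

```python
# Sketch of description-level face state data and a "personality judgment".
from dataclasses import dataclass

@dataclass
class SmileDescription:
    mouth_corner_lift: float   # e.g. degrees of mouth-corner upturn
    eye_openness: float        # 0.0 (closed) .. 1.0 (wide open)

def smile_matches(observed: SmileDescription,
                  enrolled: SmileDescription,
                  lift_tolerance: float = 3.0,
                  openness_tolerance: float = 0.15) -> bool:
    """Not only is it a smile (common judgment), but it is this user's smile
    (personality judgment), within the configured tolerances."""
    return (abs(observed.mouth_corner_lift - enrolled.mouth_corner_lift) <= lift_tolerance
            and abs(observed.eye_openness - enrolled.eye_openness) <= openness_tolerance)

print(smile_matches(SmileDescription(12.0, 0.55), SmileDescription(11.0, 0.60)))  # True
```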
Further, in an embodiment of the present specification, in the process of face recognition verification, fusion verification is used to verify the face recognition feature data and the face state data. Specifically, in an embodiment of the present specification, the first verification information called in step 120 includes fusion feature data, and the fusion feature data in the first verification information is generated after a feature fusion operation is performed on the face recognition feature data of the first user and the face state data corresponding to the first face recognition verification operation by the first user. Step 130 comprises:
performing feature fusion operation on the face recognition feature data and the face state data of the first user to be verified to generate fusion feature data of the first user to be verified;
and matching the fused feature data in the first verification information with the fused feature data of the first user to be verified.
Furthermore, in an actual application scenario, because the facial state is customizable, it can be preset to a number of different states; therefore, for the same user, there can be multiple fused feature data generated by fusing different facial state data with the face recognition feature data, and face recognition verification based on the facial state can have multiple different verification results. Each fused feature data corresponds to one verification result, and several different fused feature data may be preset to correspond to the same verification result. For example, fused feature data A is generated by fusing facial state data A with the face recognition feature data, and is preset to correspond to verification result A; fused feature data B is generated by fusing facial state data B with the face recognition feature data, fused feature data C is generated by fusing facial state data C with the face recognition feature data, and fused feature data B and C are preset to correspond to verification result B. Then, when the user initiates face recognition verification in facial state A, verification result A is fed back, and when the user initiates face recognition verification in facial state B or facial state C, verification result B is fed back.
Specifically, in an embodiment of the present specification, the first verification information called in step 120 includes a plurality of fused feature data, the plurality of fused feature data in the first verification information are generated after feature fusion operations are respectively performed on face identification feature data of the first user and a plurality of facial state data of the first user for the first face identification verification operation, one fused feature data in the first verification information corresponds to one verification result of the first face identification verification operation, and one verification result of the first face identification verification operation corresponds to one or more fused feature data in the first verification information. Step 130 comprises:
performing feature fusion operation on the face recognition feature data and the face state data of the first user to be verified to generate fusion feature data of the first user to be verified;
matching the fusion characteristic data of the first user to be verified with the plurality of fusion characteristic data in the first verification information;
and when the fused feature data of the first user to be verified are matched with the first fused feature data in the plurality of fused feature data, taking the verification result corresponding to the first fused feature data as the verification result of the first face identification verification operation for the first user to be verified.
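A possible shape for this multi-result fused matching is sketched below: the candidate's fused feature vector is compared against each enrolled fused feature vector, and the result tied to the best match above a threshold is returned. The similarity measure and threshold are assumptions for illustration.

```python
# Illustrative matching of fused feature data against several enrolled
# fused feature vectors, each tied to a verification result.
from typing import List, Optional, Tuple
import numpy as np

def match_fused(candidate: np.ndarray,
                enrolled: List[Tuple[np.ndarray, str]],
                threshold: float = 0.9) -> Optional[str]:
    best_result: Optional[str] = None
    best_sim = threshold
    for fused_vector, result in enrolled:
        sim = float(np.dot(candidate, fused_vector) /
                    (np.linalg.norm(candidate) * np.linalg.norm(fused_vector)))
        if sim >= best_sim:
            best_sim, best_result = sim, result
    return best_result   # None: no fused feature data matched, verification fails
```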
Further, in an embodiment of the present specification, the face state data and the face recognition feature data may be fused in a variety of different ways to generate the fused feature data.
For example, in an embodiment of the present specification, face state data and face recognition feature data are feature-stitched, and a result of the feature stitching is subjected to dimensionality reduction through Principal Component Analysis (PCA), so as to obtain fused feature data.
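A minimal sketch of this concatenation-plus-PCA fusion, using scikit-learn; the feature dimensions, sample counts, and number of retained components are arbitrary example values.

```python
# Feature stitching followed by PCA dimensionality reduction (illustrative).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
face_features = rng.normal(size=(200, 128))    # face recognition feature vectors
state_features = rng.normal(size=(200, 32))    # facial state feature vectors

concatenated = np.hstack([face_features, state_features])    # feature stitching
pca = PCA(n_components=64).fit(concatenated)                 # learn the projection
fused_feature_data = pca.transform(concatenated)             # fused feature data
print(fused_feature_data.shape)  # (200, 64)
```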
For another example, in an embodiment of the present specification, feature fusion between the face state data and the face recognition feature data is implemented based on a feature concatenation algorithm and a Linear Discriminant Analysis (LDA) algorithm. Specifically, the method comprises the following steps: performing feature concatenation on the face state data and the face recognition feature data to obtain a feature sample set; and, based on the linear discriminant analysis algorithm, projecting the feature sample set to a preset dimension to obtain the fused feature data.
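And a corresponding sketch of the concatenation-plus-LDA variant; because LDA is supervised, the sketch assumes labeled samples from several users, and the dimensions and label layout are again example values.

```python
# Feature concatenation followed by LDA projection (illustrative).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
n_users, samples_per_user = 10, 20
face_features = rng.normal(size=(n_users * samples_per_user, 128))
state_features = rng.normal(size=(n_users * samples_per_user, 32))
labels = np.repeat(np.arange(n_users), samples_per_user)      # one class per user

concatenated = np.hstack([face_features, state_features])     # feature sample set
lda = LinearDiscriminantAnalysis(n_components=n_users - 1)    # preset dimension
fused_feature_data = lda.fit_transform(concatenated, labels)  # fused feature data
print(fused_feature_data.shape)  # (200, 9)
```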
Further, in an actual application scenario, image data containing an image of a user's face may be output. In that case, the user's facial expression and/or facial movement is displayed and exposed, so the facial expression and/or facial movement may leak. This not only adversely affects the user's privacy; if image data of the user is output while face recognition verification based on facial expression and/or facial movement is being performed, the facial expression and/or facial movement the user presents during verification may leak, which would greatly reduce the security of face recognition verification.
Therefore, in an embodiment of the present specification, the user expression and/or user action in the image data output for the user are adjusted and modified, so that the user's actual expression and/or action is prevented from leaking. Fig. 3 is a partial flowchart of an embodiment of a face recognition verification method according to the present disclosure. As shown in fig. 3, in an embodiment of the present specification, the method further includes:
step 310, monitoring whether a first image data output request exists, wherein the first image data output request is used for requesting to output first image data containing face image data of a first user to be verified;
step 320, when there is a first image data output request, adjusting the first image data to generate second image data, wherein the facial expression and/or facial movement of the first user to be authenticated in the second image data are different from those of the first image data;
step 330, outputting the second image data.
In one implementation of step 320:
extracting first face state data of a first user to be authenticated according to the first image data;
adjusting the first face state data to generate second face state data, wherein the facial expressions and/or facial movements corresponding to the second face state data are different from the first face state data;
the second face state data is loaded to the first image data to generate second image data.
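A minimal sketch of this output-adjustment flow follows, assuming the expression/action transfer itself is supplied by an external renderer passed in as a callable; the data layout and function names are illustrative, not the implementation of this specification.

```python
# Sketch of steps 310-330: adjust the face state before outputting image data.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class FaceState:
    expression: str   # e.g. "smile"
    action: str       # e.g. "nod"

NEUTRAL = FaceState(expression="neutral", action="still")

def adjust_outgoing_image(first_image: bytes,
                          extract_state: Callable[[bytes], FaceState],
                          render_with_state: Callable[[bytes, FaceState], bytes]) -> bytes:
    """Extract the first face state from the first image data, build a second
    face state that differs from it, and load it back to obtain the second
    image data that is actually output."""
    first_state = extract_state(first_image)
    second_state = NEUTRAL if first_state != NEUTRAL else FaceState("smile", "still")
    return render_with_state(first_image, second_state)
```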
For example, in an application scenario, when a video image containing a face image of the user to be verified needs to be output, the video image is modified before output so that the facial expression and/or facial action of the user to be verified in the video image changes. For example, if the facial expression and facial action of the user to be verified in the video image to be output are smiling and nodding, then after the video image is modified, the facial expression and facial action of the user to be verified in the output video image are changed to something different (for example, not smiling and not nodding).
Further, in an application scenario, when the user is performing facial expression/action-based face recognition verification while video is being output synchronously (e.g., in a live stream), the facial expression and action of the user shown in the output video are changed in this way, so that the expression and action actually used for verification are not exposed.
It is to be understood that some or all of the steps or operations in the above-described embodiments are merely examples, and other operations or variations of various operations may be performed by the embodiments of the present specification. Further, the various steps may be performed in a different order presented in the above-described embodiments, and it is possible that not all of the operations in the above-described embodiments are performed.
Further, based on the face recognition verification method provided in the embodiments of the present specification, the embodiments of the present specification further provide a face recognition verification apparatus. Fig. 4 is a block diagram illustrating an embodiment of a face recognition verification apparatus according to the present disclosure. As shown in fig. 4, in an embodiment of the present specification, a face recognition verification apparatus 400 includes:
the image acquisition module 410 is configured to acquire face image data of a first user to be authenticated when the first user to be authenticated initiates a first face identification authentication operation with an identity of the first user;
a feature extraction module 420, configured to extract face recognition feature data of the first user to be authenticated according to the face image data of the first user to be authenticated;
a state extraction module 430, configured to extract facial state data of the first user to be authenticated according to the facial image data of the first user to be authenticated, where the facial state data includes expression data describing facial expressions of the user and/or motion data describing facial and head movements of the user;
the verification information invoking module 440 is configured to invoke first verification information, where the first verification information is information generated according to the face recognition feature data of the first user and the face state data corresponding to the first user for the first face recognition verification operation;
a verification module 450, configured to match the first verification information with the face recognition feature data and the face state data of the first user to be verified, and determine, according to a matching result, a verification result of the first face recognition verification operation for the first user to be verified.
All the embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from other embodiments. In particular, as for the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment. It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Further, in the 1990s, an improvement in a technology could clearly be distinguished as an improvement in hardware (e.g., an improvement in a circuit structure such as a diode, a transistor, or a switch) or an improvement in software (an improvement in a method flow). However, as technology develops, many of today's improvements in method flows can be regarded as direct improvements in hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement in a method flow cannot be realized with hardware entity modules. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer programs a digital system "onto" a PLD personally, without asking a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, nowadays, instead of manually making integrated circuit chips, this kind of programming is mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the original code to be compiled must be written in a specific programming language called a Hardware Description Language (HDL), of which there is not just one but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language), among which VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. It will also be apparent to those skilled in the art that a hardware circuit implementing a logical method flow can easily be obtained merely by slightly logically programming the method flow into an integrated circuit using the above hardware description languages.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller; examples of such controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320. A memory controller may also be implemented as part of the control logic of the memory. Those skilled in the art will also appreciate that, in addition to implementing the controller purely as computer-readable program code, the method steps can be logically programmed so that the controller achieves the same functions in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included in it for implementing various functions may also be regarded as structures within the hardware component. Or even the means for implementing various functions may be regarded both as software modules for implementing the method and as structures within the hardware component.
In the description of the embodiments of the present specification, for convenience of description, the device is described as being divided into various modules/units by functions, the division of each module/unit is only a division of logic functions, and the functions of each module/unit can be implemented in one or more pieces of software and/or hardware when the embodiments of the present specification are implemented.
Specifically, the apparatuses proposed in the embodiments of the present disclosure may be wholly or partially integrated into one physical entity or may be physically separated when actually implemented. And these modules can be realized in the form of software called by processing element; or may be implemented entirely in hardware; and part of the modules can be realized in the form of calling by the processing element in software, and part of the modules can be realized in the form of hardware. For example, the detection module may be a separate processing element, or may be integrated into a chip of the electronic device. Other modules are implemented similarly. In addition, all or part of the modules can be integrated together or can be independently realized. In implementation, each step of the above method or each module above may be implemented by an integrated logic circuit of hardware in a processor element or an instruction in the form of software.
For example, the above modules may be one or more integrated circuits configured to implement the above methods, such as: one or more Application Specific Integrated Circuits (ASICs), or one or more Digital Signal Processors (DSPs), or one or more Field Programmable Gate Arrays (FPGAs), etc. For another example, the modules may be integrated together and implemented in the form of a System-On-a-Chip (SOC).
Those of ordinary skill in the art will appreciate that the various elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of electronic hardware and computer software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present specification.
An embodiment of the present specification also proposes an electronic device comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the electronic device to perform the method steps according to an embodiment of the present specification.
Specifically, in an embodiment of the present specification, the one or more computer programs are stored in the memory, and the one or more computer programs include instructions that, when executed by the apparatus, cause the apparatus to perform the method steps described in the embodiment of the present specification.
This description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
Specifically, in an embodiment of the present specification, the processor of the electronic device may be a system-on-chip (SoC); the processor may include a central processing unit (CPU) and may further include other types of processors. Specifically, in an embodiment of the present specification, the processor of the electronic device may be a PWM control chip.
Specifically, in an embodiment of the present specification, the processor may include, for example, a CPU, a DSP, or a microcontroller, and may further include a GPU, an embedded neural-network processing unit (NPU), and an image signal processor (ISP); the processor may further include a necessary hardware accelerator or logic processing hardware circuit, such as an ASIC, or one or more integrated circuits for controlling the execution of programs according to the technical solutions of the present specification. Further, the processor may have the function of operating one or more software programs, which may be stored in a storage medium.
Specifically, in one embodiment of the present specification, the memory of the electronic device may be a read-only memory (ROM) or another type of static storage device capable of storing static information and instructions, a random access memory (RAM) or another type of dynamic storage device capable of storing information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, and the like), a magnetic disk storage medium or another magnetic storage device, or any other computer-readable medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
In particular, in an embodiment of the present specification, the processor and the memory may be combined into a single processing device, although more commonly they are components independent of each other; the processor is configured to execute the program code stored in the memory to implement the methods described in the embodiments of the present specification. In specific implementations, the memory may be integrated within the processor or may be separate from the processor.
Further, the apparatuses, devices, modules, or units illustrated in the embodiments of the present disclosure may be implemented by a computer chip or an entity, or implemented by a product with certain functions.
As will be appreciated by one skilled in the art, the embodiments of the present specification may be provided as a method, an apparatus, or a computer program product. Accordingly, the embodiments of the present specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the embodiments of the present specification may take the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied therein.
In the several embodiments provided in the present specification, any function, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the present specification may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present specification.
Specifically, an embodiment of the present specification further provides a computer-readable storage medium, in which a computer program is stored, and when the computer program runs on a computer, the computer is caused to execute the method provided by the embodiment of the present specification.
An embodiment of the present specification also provides a computer program product, which includes a computer program that, when run on a computer, causes the computer to perform the method provided by the embodiment of the present specification.
The embodiments in this specification are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices), and computer program products according to embodiments of the specification. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In the embodiments of the present specification, the term "at least one" means one or more, and the term "a plurality of" means two or more. "And/or" describes the association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean that A exists alone, that A and B exist simultaneously, or that B exists alone, where A and B may be singular or plural. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. "At least one of the following" and similar expressions refer to any combination of the listed items, including any combination of single items or plural items. For example, "at least one of a, b, and c" may represent: a; b; c; a and b; a and c; b and c; or a, b, and c, where each of a, b, and c may be singular or plural.
In the embodiments of the present specification, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above descriptions are merely embodiments of the present specification. Any change or substitution that can be easily conceived by a person skilled in the art within the technical scope disclosed in the present specification shall fall within the protection scope of the present specification. The protection scope of the present specification shall be subject to the protection scope of the claims.

Claims (14)

1. A face recognition verification method comprises the following steps:
when a first user to be verified initiates a first face recognition verification operation, acquiring face image data of the first user to be verified;
extracting face recognition feature data of the first user to be verified according to the face image data of the first user to be verified;
extracting face state data of the first user to be verified according to the face image data of the first user to be verified, wherein the face state data comprise expression data for describing a facial expression of the user and/or action data for describing a facial action of the user, and the face state data comprise facial expression description information and/or facial action description information, or the face state data comprise a facial expression name and/or a facial action name;
calling first verification information, wherein the first verification information is information generated according to face recognition feature data of a first user and face state data of the first user corresponding to the first face recognition verification operation, and the first user is the user identity used by the first user to be verified when initiating the first face recognition verification operation;
and matching the first verification information with the face recognition feature data and the face state data of the first user to be verified, and determining a verification result of the first face recognition verification operation for the first user to be verified according to the matching result.
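By way of illustration only, the flow of claim 1 can be sketched as follows. This is a minimal sketch and not the claimed implementation; the helper names extract_features, extract_face_state, and call_first_verification_info, the threshold value, and the use of cosine similarity as the matching criterion are all assumptions introduced for the example.

```python
import math

def cosine_similarity(a, b):
    # Plain cosine similarity between two equal-length feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def verify_first_operation(face_image, extract_features, extract_face_state,
                           call_first_verification_info, threshold=0.8):
    # Steps of claim 1: extract recognition features and the face state from the
    # captured image, call the stored first verification information, then require
    # both the identity features and the face state to match.
    features = extract_features(face_image)
    state = extract_face_state(face_image)          # e.g. "smile" or "blink"
    info = call_first_verification_info()           # e.g. {"features": [...], "state": "smile"}
    feature_ok = cosine_similarity(features, info["features"]) >= threshold
    state_ok = (state == info["state"])
    return "pass" if (feature_ok and state_ok) else "reject"
```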
2. The method of claim 1, wherein:
the first verification information comprises face recognition feature data of the first user and face state data of the first user aiming at the first face recognition verification operation;
the determining a verification result of the first face recognition verification operation for the first user to be verified according to the matching result comprises:
when the face recognition feature data and the face state data of the first user to be verified are respectively matched with the face recognition feature data and the face state data in the first verification information, judging that the first user to be verified passes the verification of the first face recognition verification operation.
3. The method of claim 1, wherein:
the first verification information includes face recognition feature data of the first user and a plurality of face state data of the first user for the first face recognition verification operation, one of the face state data in the first verification information corresponds to a verification result of the first face recognition verification operation, and one verification result of the first face recognition verification operation corresponds to one or more of the face state data in the first verification information;
the matching the first verification information with the face recognition feature data and the face state data of the first user to be verified and determining a verification result of the first face recognition verification operation for the first user to be verified according to the matching result comprises:
matching the face recognition feature data of the first user to be verified with the face recognition feature data in the first verification information;
when the face recognition feature data of the first user to be verified match the face recognition feature data in the first verification information, matching the face state data of the first user to be verified with the plurality of face state data in the first verification information;
and when the face state data of the first user to be verified match first face state data among the plurality of face state data, taking the verification result corresponding to the first face state data as the verification result of the first face recognition verification operation for the first user to be verified.
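A hypothetical sketch of the claim-3 correspondence between face states and verification results follows. The dictionary layout, the state names, and the result labels are assumptions; the patent does not prescribe any particular data structure.

```python
def resolve_result_by_state(user_state, first_verification_info):
    # first_verification_info["states"] maps each enrolled face state to one
    # verification result; several states may share the same result, e.g.
    # {"smile": "pass", "blink": "pass", "frown": "reject_and_alert"}.
    return first_verification_info["states"].get(user_state, "reject")

info = {"states": {"smile": "pass", "blink": "pass", "frown": "reject_and_alert"}}
assert resolve_result_by_state("smile", info) == "pass"
assert resolve_result_by_state("frown", info) == "reject_and_alert"
```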
4. The method of claim 1, wherein:
the first verification information comprises fusion feature data, and the fusion feature data in the first verification information is generated after feature fusion operation is carried out on the face recognition feature data of the first user and the face state data corresponding to the first face recognition verification operation of the first user;
the matching the first verification information with the face recognition feature data and the face state data of the first user to be verified comprises:
performing the feature fusion operation on the face recognition feature data and the face state data of the first user to be verified to generate fusion feature data of the first user to be verified;
and matching the fused feature data in the first verification information with the fused feature data of the first user to be verified.
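Since the claim does not fix a particular feature fusion operation, the following sketch merely assumes one possible choice: concatenating the recognition feature vector with a one-hot encoding of the face state and comparing the fused vectors by cosine similarity. STATE_VOCABULARY, fuse, and fused_match are illustrative names, not terms from the patent.

```python
import math

STATE_VOCABULARY = ["neutral", "smile", "blink", "mouth_open"]   # assumed state set

def fuse(features, state):
    # One possible feature fusion operation: concatenate the recognition feature
    # vector with a one-hot encoding of the face state.
    one_hot = [1.0 if s == state else 0.0 for s in STATE_VOCABULARY]
    return list(features) + one_hot

def fused_match(fused_a, fused_b, threshold=0.9):
    # Cosine similarity between two fused vectors of equal length.
    dot = sum(x * y for x, y in zip(fused_a, fused_b))
    norm = math.sqrt(sum(x * x for x in fused_a)) * math.sqrt(sum(y * y for y in fused_b))
    return norm > 0 and dot / norm >= threshold
```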
5. The method of claim 1, wherein:
the first verification information includes a plurality of fusion feature data, the plurality of fusion feature data in the first verification information are generated after feature fusion operations are respectively performed on face recognition feature data of the first user and a plurality of face state data of the first user for the first face recognition verification operation, one fusion feature data in the first verification information corresponds to one verification result of the first face recognition verification operation, and one verification result of the first face recognition verification operation corresponds to one or more fusion feature data in the first verification information;
the matching the first verification information with the face recognition feature data and the face state data of the first user to be verified and determining a verification result of the first face recognition verification operation for the first user to be verified according to the matching result comprises:
performing the feature fusion operation on the face recognition feature data and the face state data of the first user to be verified to generate fusion feature data of the first user to be verified;
matching the fused feature data of the first user to be verified with the plurality of fused feature data in the first verification information;
and when the fused feature data of the first user to be verified match first fused feature data among the plurality of fused feature data, taking the verification result corresponding to the first fused feature data as the verification result of the first face recognition verification operation for the first user to be verified.
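Continuing the assumptions of the claim-4 sketch above, the claim-5 selection among a plurality of fused feature data could look like the following; fused_entries, match_fn (for example the fused_match helper sketched earlier), and the "reject" default are illustrative only.

```python
def resolve_result_by_fusion(user_fused, fused_entries, match_fn):
    # fused_entries: list of (stored_fused_vector, verification_result) pairs
    # registered when the first verification information was generated; the first
    # stored vector that matches the user's fused vector decides the outcome.
    for stored_fused, result in fused_entries:
        if match_fn(user_fused, stored_fused):
            return result
    return "reject"
```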
6. A method according to any one of claims 1 to 5, wherein the method further comprises:
monitoring whether a first image data output request exists or not, wherein the first image data output request is used for requesting to output first image data containing face image data of the first user to be verified;
when the first image data output request exists, adjusting the first image data to generate second image data, wherein the facial expression and/or facial action of the first user to be verified in the second image data are different from those in the first image data;
and outputting the second image data.
7. The method of claim 6, wherein the adjusting the first image data to generate second image data comprises:
extracting first face state data of the first user to be verified according to the first image data;
adjusting the first face state data to generate second face state data, wherein the second face state data correspond to a facial expression and/or facial action different from that of the first face state data;
loading the second face state data onto the first image data to generate the second image data.
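The control flow of claims 6 and 7 may be sketched as follows; extract_face_state and render_with_state are placeholder functions (the patent does not specify how the second face state is rendered onto the image), and CANDIDATE_STATES is an assumed state set.

```python
CANDIDATE_STATES = ["neutral", "smile", "blink", "frown"]   # assumed state set

def choose_different_state(current_state):
    # Pick any face state other than the one present in the original image.
    for candidate in CANDIDATE_STATES:
        if candidate != current_state:
            return candidate
    return current_state

def export_image(first_image, extract_face_state, render_with_state):
    # Claims 6-7 flow: extract the first face state, derive a second, different
    # face state, load it onto the image, and output only the adjusted image.
    first_state = extract_face_state(first_image)        # e.g. "smile"
    second_state = choose_different_state(first_state)   # e.g. "neutral"
    second_image = render_with_state(first_image, second_state)
    return second_image
```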
8. The method of any of claims 1-5, wherein the invoking first authentication information comprises:
and calling the first verification information corresponding to the first face recognition verification operation from a verification information base, wherein the verification information base comprises one or more pieces of verification information, and one piece of verification information in the verification information base corresponds to one or more face recognition verification operations.
9. The method of any of claims 1-5, wherein the method further comprises:
acquiring face recognition feature data of the first user;
acquiring face state data of the first user corresponding to the first face recognition verification operation;
generating the first verification information according to the face recognition feature data of the first user and the face state data corresponding to the first face recognition verification operation of the first user;
and saving the first verification information.
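An enrollment sketch for claim 9 follows, using an in-memory dictionary as a stand-in for the verification information base; the key layout (user_id, operation_id) and the function names are assumptions made for illustration.

```python
verification_info_base = {}   # in-memory stand-in for the verification information base

def generate_first_verification_info(user_id, operation_id, face_features, face_state):
    # Bind the user's recognition features and the face state chosen for this
    # particular verification operation, then save the record.
    verification_info_base[(user_id, operation_id)] = {
        "features": list(face_features),
        "state": face_state,
    }

def call_first_verification_info(user_id, operation_id):
    # Retrieve the saved first verification information for this operation.
    return verification_info_base.get((user_id, operation_id))
```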
10. The method of claim 9, wherein obtaining face recognition feature data of the first user comprises:
retrieving face recognition feature data of the first user from a verification information repository for performing face recognition operations;
or,
and extracting face recognition feature data of the first user from face image data of the first user to be used as the face recognition feature data of the first user.
11. The method of any of claims 1-5, wherein prior to said invoking the first authentication information, the method further comprises:
confirming the first user, including:
identifying the face recognition feature data of the first user to be verified to confirm the user identity of the first user to be verified, and confirming the first user according to the user identity of the first user to be verified;
or,
and confirming the first user according to the identity information input by the first user to be verified.
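A minimal sketch of the two ways of confirming the first user in claim 11; identify_by_features stands for a hypothetical 1:N face recognition lookup and is not defined by the patent.

```python
def confirm_first_user(face_features, entered_identity, identify_by_features):
    # If the person being verified supplied identity information (e.g. an account
    # name), use it directly; otherwise run a 1:N lookup on the face recognition
    # features to decide which enrolled user the attempt targets.
    if entered_identity is not None:
        return entered_identity
    return identify_by_features(face_features)
```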
12. A face recognition verification apparatus, comprising:
the image acquisition module is used for acquiring face image data of a first user to be verified when the first user to be verified initiates a first face recognition verification operation with the identity of a first user;
the feature extraction module is used for extracting face recognition feature data of the first user to be verified according to the face image data of the first user to be verified;
the state extraction module is used for extracting face state data of the first user to be verified according to the face image data of the first user to be verified, wherein the face state data comprise expression data for describing a facial expression of the user and/or action data for describing a facial action of the user, and the face state data comprise facial expression description information and/or facial action description information, or the face state data comprise a facial expression name and/or a facial action name;
the verification information calling module is used for calling first verification information, wherein the first verification information is generated according to the face recognition feature data of the first user and the face state data corresponding to the first face recognition verification operation of the first user;
and the verification module is used for matching the first verification information with the face recognition feature data and the face state data of the first user to be verified, and determining a verification result of the first face recognition verification operation for the first user to be verified according to the matching result.
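Purely as a structural illustration of the claim-12 apparatus, each module can be represented as a method stub on a single class; the class name, method names, and the exact-equality checks (which stand in for real similarity matching) are assumptions, not part of the claimed device.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class VerificationInfo:
    features: List[float]
    state: str

class FaceRecognitionVerifier:
    """Skeleton of the claim-12 apparatus; each method stands for one module."""

    def acquire_image(self, user_id):                  # image acquisition module
        raise NotImplementedError

    def extract_features(self, image) -> List[float]:  # feature extraction module
        raise NotImplementedError

    def extract_state(self, image) -> str:             # state extraction module
        raise NotImplementedError

    def call_verification_info(self, user_id) -> VerificationInfo:  # verification information calling module
        raise NotImplementedError

    def verify(self, user_id) -> bool:                 # verification module
        image = self.acquire_image(user_id)
        info = self.call_verification_info(user_id)
        # Exact equality stands in for a real similarity/threshold comparison.
        features_ok = self.extract_features(image) == info.features
        state_ok = self.extract_state(image) == info.state
        return features_ok and state_ok
```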
13. An electronic device comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the electronic device to perform the method steps of any of claims 1-11.
14. A computer-readable storage medium, in which a computer program is stored which, when run on a computer, causes the computer to carry out the method according to any one of claims 1-11.
CN202010528279.6A 2020-06-11 2020-06-11 Face recognition verification method and device and electronic equipment Active CN111680639B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010528279.6A CN111680639B (en) 2020-06-11 2020-06-11 Face recognition verification method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010528279.6A CN111680639B (en) 2020-06-11 2020-06-11 Face recognition verification method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111680639A CN111680639A (en) 2020-09-18
CN111680639B true CN111680639B (en) 2022-08-30

Family

ID=72454632

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010528279.6A Active CN111680639B (en) 2020-06-11 2020-06-11 Face recognition verification method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111680639B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117034238A (en) * 2023-08-10 2023-11-10 南京云思创智信息科技有限公司 Photoelectric pulse signal enhancement type face recognition method

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2115662B1 (en) * 2007-02-28 2010-06-23 Fotonation Vision Limited Separating directional lighting variability in statistical face modelling based on texture space decomposition
CN105005779A (en) * 2015-08-25 2015-10-28 湖北文理学院 Face verification anti-counterfeit recognition method and system thereof based on interactive action
CN107992739A (en) * 2017-11-30 2018-05-04 北京旷视科技有限公司 User authentication method, apparatus and system
CN108446601B (en) * 2018-02-27 2021-07-13 东南大学 Face recognition method based on dynamic and static feature fusion
CN110197108A (en) * 2018-08-17 2019-09-03 平安科技(深圳)有限公司 Auth method, device, computer equipment and storage medium
CN109034133A (en) * 2018-09-03 2018-12-18 北京诚志重科海图科技有限公司 A kind of face identification method and device
CN109684951A (en) * 2018-12-12 2019-04-26 北京旷视科技有限公司 Face identification method, bottom library input method, device and electronic equipment
CN110738102B (en) * 2019-09-04 2023-05-12 暗物智能科技(广州)有限公司 Facial expression recognition method and system

Also Published As

Publication number Publication date
CN111680639A (en) 2020-09-18

Similar Documents

Publication Publication Date Title
RU2714096C1 (en) Method, equipment and electronic device for detecting a face vitality
CN107018121B (en) User identity authentication method and device
JP6878572B2 (en) Authentication based on face recognition
KR102359558B1 (en) Face verifying method and apparatus
JP6467965B2 (en) Emotion estimation device and emotion estimation method
US9607138B1 (en) User authentication and verification through video analysis
EP3528156A1 (en) Virtual reality environment-based identity authentication method and apparatus
KR20210062381A (en) Liveness test method and liveness test apparatus, biometrics authentication method and biometrics authentication apparatus
KR20210069404A (en) Liveness test method and liveness test apparatus
JP2006235718A (en) Facial authentication device, its facial authentication method, electronic equipment integrated with the facial authentication device and recording medium with the facial authentication program stored thereon
KR20190070179A (en) Device and method to register user
CN111680639B (en) Face recognition verification method and device and electronic equipment
CN111160251B (en) Living body identification method and device
Yin et al. Fusion of face recognition and facial expression detection for authentication: a proposed model
Senarath et al. BehaveFormer: A Framework with Spatio-Temporal Dual Attention Transformers for IMU-enhanced Keystroke Dynamics
CN105897747A (en) Data storage method and device based on digital biological signatures, and intelligent device
Orna et al. A low-cost embedded facial recognition system for door access control using deep learning
de Oliveira et al. A security API for multimodal multi-biometric continuous authentication
Grabovskyi et al. Facial recognition with using of the microsoft face API Service
CN111931148A (en) Image processing method and device and electronic equipment
CN111753656A (en) Feature extraction method, device, equipment and computer-readable storage medium
Nainan et al. Speech controlled automobile with three-level biometric security system
WO2022172430A1 (en) Determination method, determination program, and information processing device
KR20220129169A (en) Device and method for authenticating user based on facial characteristics and mask characteristics of the user
Lindi Development of a face recognition system for use on the NAO robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40037332

Country of ref document: HK

GR01 Patent grant