WO2016034069A1 - Identity authentication method, apparatus, terminal and server - Google Patents

Identity authentication method, apparatus, terminal and server

Info

Publication number
WO2016034069A1
WO2016034069A1 (PCT/CN2015/088215)
Authority
WO
WIPO (PCT)
Prior art keywords
information
user
face
facial
server
Prior art date
Application number
PCT/CN2015/088215
Other languages
English (en)
French (fr)
Inventor
杜志军 (Du Zhijun)
Original Assignee
阿里巴巴集团控股有限公司 (Alibaba Group Holding Limited)
杜志军 (Du Zhijun)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Limited (阿里巴巴集团控股有限公司) and 杜志军 (Du Zhijun)
Priority to JP2017512314A (JP6820062B2)
Priority to SG11201701497SA
Priority to KR1020177005848A (KR101997371B1)
Priority to EP15838136.8A (EP3190534B1)
Priority to EP19172346.9A (EP3540621B1)
Priority to PL19172346T (PL3540621T3)
Publication of WO2016034069A1
Priority to US15/448,534 (US10601821B2)

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/08Network architectures or network communication protocols for network security for authentication of entities
    • H04L63/0861Network architectures or network communication protocols for network security for authentication of entities using biometrical features, e.g. fingerprint, retina-scan
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/305Authentication, i.e. establishing the identity or authorisation of security principals by remotely controlling device operation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/34User authentication involving the use of external additional devices, e.g. dongles or smart cards
    • G06F21/35User authentication involving the use of external additional devices, e.g. dongles or smart cards communicating wirelessly
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/45Structures or tools for the administration of authentication
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/164Detection; Localisation; Normalisation using holistic features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/169Holistic features and representations, i.e. based on the facial image taken as a whole
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • G06V40/176Dynamic expression
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00Speaker identification or verification techniques
    • G10L17/22Interactive procedures; Man-machine interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2221/00Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/21Indexing scheme relating to G06F21/00 and subgroups addressing additional information or applications relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/2101Auditing as a secondary aspect
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2221/00Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/21Indexing scheme relating to G06F21/00 and subgroups addressing additional information or applications relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/2103Challenge-response
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2221/00Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/21Indexing scheme relating to G06F21/00 and subgroups addressing additional information or applications relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/2115Third party
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2221/00Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/21Indexing scheme relating to G06F21/00 and subgroups addressing additional information or applications relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/2117User registration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/142Image acquisition using hand-held instruments; Constructional details of the instruments
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/08Network architectures or network communication protocols for network security for authentication of entities

Definitions

  • the present application relates to the field of communications technologies, and in particular, to an identity authentication method, apparatus, terminal, and server.
  • the server verifies that the entered authentication password is consistent with the authentication password set when the user registered, and confirms that the user passes identity authentication.
  • authentication passwords are often simple combinations of numbers and letters that are easily stolen by malicious third parties. Therefore, the reliability of the existing identity authentication method is poor, and the user information is easily stolen, resulting in low security of the authentication.
  • the present application provides an identity authentication method, device, terminal, and server to solve the problem that the identity authentication method in the prior art is less reliable and less secure.
  • an identity authentication method is provided, where the method includes:
  • an identity authentication method is provided, where the method includes:
  • the gesture recognition information is the gesture recognition information obtained by the terminal by identifying a facial gesture presented by the user according to the facial dynamic authentication prompt information
  • an identity authentication apparatus includes:
  • a receiving unit configured to receive a face dynamic authentication prompt message sent by the server when the user performs identity authentication
  • a recognition unit configured to obtain gesture recognition information of the face dynamic authentication prompt information by identifying a face gesture presented by the user
  • a sending unit configured to send the gesture identification information to the server, so that the server determines that the user passes the identity authentication when verifying that the gesture recognition information is consistent with the face dynamic authentication prompt information.
  • an identity authentication apparatus includes:
  • a sending unit configured to send a face dynamic authentication prompt message to the terminal when the user performs identity authentication
  • a receiving unit configured to receive the gesture identification information sent by the terminal, where the gesture recognition information is the gesture recognition information obtained by the terminal by identifying a facial gesture presented by the user according to the facial dynamic authentication prompt information;
  • a determining unit configured to determine that the user passes the identity authentication when verifying that the gesture identification information is consistent with the face dynamic authentication prompt information.
  • a terminal including:
  • a processor, and a memory for storing processor-executable instructions;
  • wherein the processor is configured to:
  • a server including:
  • a processor, and a memory for storing processor-executable instructions;
  • wherein the processor is configured to:
  • the gesture recognition information is the gesture recognition information obtained by the terminal by identifying a facial gesture presented by the user according to the facial dynamic authentication prompt information
  • when the user performs identity authentication, the server sends the face dynamic authentication prompt information to the terminal; the terminal obtains gesture recognition information for the face dynamic authentication prompt information by recognizing the facial gesture presented by the user, and sends the gesture recognition information to the server;
  • when the server verifies that the gesture recognition information is consistent with the face dynamic authentication prompt information, it determines that the user passes identity authentication.
  • the face dynamic authentication method can perform high-security authentication of the user identity. Compared with existing authentication passwords, the authentication information cannot easily be stolen by a malicious third party, which improves the reliability of authentication; moreover, face dynamic authentication can verify that the user is a live user, further improving the accuracy of identity authentication and reducing security risks in the authentication process.
  • FIG. 1 is a schematic diagram of an identity authentication scenario according to an embodiment of the present application.
  • 2A is a flowchart of an embodiment of an identity authentication method of the present application.
  • 2B is a flowchart of another embodiment of the identity authentication method of the present application.
  • 3A is a flowchart of another embodiment of an identity authentication method of the present application.
  • FIG. 3B is a schematic diagram of a gesture of a human head in a face authentication process according to an embodiment of the present application.
  • FIG. 4A is a flowchart of another embodiment of an identity authentication method according to the present application.
  • FIG. 4B and FIG. 4C are schematic diagrams of key points of a face in an embodiment of the present application.
  • FIG. 5 is a hardware structural diagram of a device where the identity authentication device of the present application is located;
  • FIG. 6 is a block diagram of an embodiment of an identity authentication apparatus of the present application.
  • FIG. 7 is a block diagram of another embodiment of the identity authentication apparatus of the present application.
  • first, second, third, etc. may be used to describe various information in this application, such information should not be limited to these terms. These terms are only used to distinguish the same type of information from each other.
  • first information may also be referred to as the second information without departing from the scope of the present application.
  • second information may also be referred to as the first information.
  • the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
  • FIG. 1 is a schematic diagram of an application scenario for implementing identity authentication according to an embodiment of the present application: a user completes identity authentication by interacting with a terminal and a server, and communication between the terminal and the server may be carried out over a network.
  • the network includes various wireless or wired networks, which is not limited in the embodiments of the present application.
  • the terminal may be specifically a mobile phone, a tablet computer, a personal computer, or the like.
  • two databases may be set on the server, which are a face feature information database and a face dynamic authentication prompt information database.
  • the terminal may obtain the facial feature information of the registered user and send it to the server, and the server saves the facial feature information of the registered user to the face feature information database.
  • face authentication may be performed first.
  • the terminal sends the acquired facial feature information to the server, and the server verifies whether that facial feature information matches the facial feature information of the user saved in the face feature information database.
  • the server can perform face dynamic authentication.
  • the server can return face dynamic authentication prompt information obtained from the face dynamic authentication prompt information database; the terminal recognizes the face gesture presented by the user, thereby obtaining gesture recognition information for the face dynamic authentication prompt information, and sends the gesture recognition information to the server.
  • when the server verifies that the gesture recognition information is consistent with the face dynamic authentication prompt information, it can conclude that the currently authenticated user is a live user, and finally determines that the user passes identity authentication.
  • the facial feature information of the user acquired in the face registration phase may be referred to as the second facial feature information in the embodiments of the present application, and the facial feature information of the user acquired in the face authentication phase is referred to as the first facial feature information.
  • the embodiments of the present application are described in detail below.
  • FIG. 2A it is a flowchart of an embodiment of an identity authentication method according to the present application. The embodiment is described from a terminal side that implements identity authentication:
  • Step 201 Receive a face dynamic authentication prompt message sent by the server when the user performs identity authentication.
  • the server may randomly extract face dynamic authentication prompt information from the face dynamic authentication prompt information database and return it to the terminal, where the face dynamic authentication prompt information may include at least one of the following:
  • expression action prompt information, for example closing the eyes, opening the mouth, or turning the head;
  • voice reading prompt information, for example "pay 20 yuan".
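As a minimal sketch of how a server might draw one prompt at random from such a database — the pool contents, field names, and the `pick_auth_prompt` helper are illustrative assumptions, not specified by the patent:

```python
import random

# Hypothetical prompt pool standing in for the server's face dynamic
# authentication prompt information database described above.
PROMPT_DATABASE = [
    {"type": "expression", "prompt": "close your eyes"},
    {"type": "expression", "prompt": "open your mouth"},
    {"type": "expression", "prompt": "turn your head"},
    {"type": "voice", "prompt": "please read: pay 20 yuan"},
]

def pick_auth_prompt(rng=random):
    """Randomly draw one prompt, as the server does per authentication."""
    return rng.choice(PROMPT_DATABASE)
```

Random selection matters here: an attacker replaying a recording of one gesture cannot predict which prompt the next authentication attempt will demand.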
  • the terminal may first acquire the facial feature information of the user, and send the facial feature information acquired during identity authentication to the server as the first facial feature information of the user.
  • the server sends the facial dynamic authentication prompt information to the terminal when verifying that the first facial feature information matches the saved second facial feature information.
  • the terminal may activate an integrated imaging device, such as a camera, to detect the user's face, and perform face tracking on the user once a face is detected.
  • the server may search the face feature information database according to the user name of the user to obtain the second facial feature information corresponding to that user name, and then compare the first facial feature information with the second facial feature information in a preset comparison manner. If the feature comparison value is within a preset similarity range, the first facial feature information is determined to match the second facial feature information; after the match is determined, the user is determined to pass face authentication, at which point the server sends the face dynamic authentication prompt information to the terminal.
  • Step 202 Obtain gesture recognition information of the face dynamic authentication prompt information by recognizing the face gesture presented by the user.
  • the terminal displays the face dynamic authentication prompt information on the identity authentication interface, and the user can present the corresponding face gesture according to the information, and the terminal recognizes the face gesture.
  • the terminal may perform face tracking on the user to obtain face tracking information, where the face tracking information may include at least one of facial key point position information and head posture information; the terminal then obtains the gesture recognition information of the user by analyzing the face tracking information.
  • the facial key point position information can be used to determine whether the user has closed the eyes or opened the mouth as prompted by the expression action prompt information, or what mouth shape the user makes when reading the voice reading prompt information aloud (the pronunciation of each word corresponds to a mouth shape, so the gesture recognition information of the user can be determined from the mouth shape); the head posture information can be used to determine whether the user turns the head, bows the head, and so on.
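A toy illustration of mapping face tracking output to a coarse gesture label; the thresholds and field names (`eye_openness`, `mouth_openness`, `yaw`) are invented for the sketch, since the patent does not specify how key point positions and head angles are quantified:

```python
def interpret_face_tracking(key_points, head_pose):
    """Map raw tracking data to a coarse gesture label.

    key_points: hypothetical dict of openness ratios derived from
                facial key point positions (1.0 = fully open eye).
    head_pose:  hypothetical dict of angles in degrees
                (yaw = side-to-side head turn).
    """
    if key_points.get("eye_openness", 1.0) < 0.2:
        return "eyes_closed"
    if key_points.get("mouth_openness", 0.0) > 0.5:
        return "mouth_open"
    if abs(head_pose.get("yaw", 0.0)) > 20:
        return "head_turned"
    return "neutral"
```

The label produced here would play the role of the gesture recognition information that the terminal sends back to the server for comparison against the prompt.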
  • Step 203 Send the gesture recognition information to the server, so that the server determines that the user passes identity authentication when it verifies that the gesture recognition information is consistent with the face dynamic authentication prompt information.
  • when sending the face dynamic authentication prompt information to the terminal, the server may record the correspondence between the user name of the user and the face dynamic authentication prompt information; in this step, after the terminal sends the gesture recognition information to the server, the server obtains the corresponding face dynamic authentication prompt information according to the user name of the user, and when it verifies that the gesture recognition information is consistent with the face dynamic authentication prompt information, it determines that the user is a live user and passes identity authentication.
  • for voice reading prompt information, the terminal can capture the user's audio in addition to the user's mouth shape, and obtain the text read by the user through speech recognition of the audio; the server then determines that the user passes identity authentication when the recognized speech information is consistent with the voice reading prompt information.
  • FIG. 2B it is a flowchart of another embodiment of the identity authentication method of the present application, which is described from the server side that implements identity authentication:
  • Step 211 When the user performs identity authentication, send the face dynamic authentication prompt information to the terminal.
  • Step 212 Receive the gesture recognition information sent by the terminal, where the gesture recognition information is obtained by the terminal by recognizing the facial gesture presented by the user according to the face dynamic authentication prompt information.
  • Step 213 When verifying that the gesture recognition information is consistent with the face dynamic authentication prompt information, determine that the user passes identity authentication.
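Steps 211 to 213 can be sketched as a tiny server-side exchange. The in-memory session store is an assumption standing in for however the server records which prompt it sent to which user; the patent only specifies that the sent prompt is recorded and compared against the returned recognition result:

```python
# Hypothetical session store: user name -> prompt sent in step 211.
sessions = {}

def send_prompt(username, prompt):
    """Step 211: send the prompt to the terminal and record it."""
    sessions[username] = prompt
    return prompt

def verify_gesture(username, gesture_recognition_info):
    """Steps 212-213: receive the recognition result and compare it
    with the prompt recorded for this user."""
    expected = sessions.get(username)
    return expected is not None and gesture_recognition_info == expected
```

A real server would also expire sessions and bind them to the transport channel, which this sketch omits.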
  • the identity authentication process of FIG. 2B differs from that of FIG. 2A only in the executing entity: FIG. 2A is described from the terminal side while FIG. 2B is described from the server side. The related implementation details of the embodiment in FIG. 2B can therefore be found in the description of FIG. 2A above and are not repeated here.
  • this embodiment can perform high-security authentication of the user identity through the face dynamic authentication mode. Compared with existing authentication modes, the authentication information cannot easily be stolen by a malicious third party, which improves the reliability of authentication; in addition, face dynamic authentication can verify that the user is a live user, further improving the accuracy of identity authentication and reducing security risks in the authentication process.
  • FIG. 3A another embodiment of the identity authentication method of the present application, which illustrates the process of face registration in detail:
  • Step 301 The user registers with the server through the terminal.
  • Step 302 When the terminal detects the face of the user, the terminal performs face tracking on the user.
  • the terminal is integrated with an imaging device, such as a camera.
  • the terminal can automatically start the imaging device to detect the user's face when the user registers.
  • the user can point the imaging device at his or her frontal face.
  • the terminal can track the face of the user through a face tracking algorithm. It should be noted that any existing face tracking algorithm can be used in the embodiment of the present application, and details are not described herein.
  • Step 303 The terminal acquires a face image according to a preset time interval in the face tracking process.
  • the terminal acquires the face image according to the preset time interval by the camera device, and the time interval is set to avoid extracting substantially the same face image.
  • the preset time interval may be 3 seconds.
  • Step 304 Determine whether the definition of the face image meets the preset definition threshold. If yes, execute step 305; otherwise, end the current process.
  • the sharpness may be first judged to exclude the face image with insufficient clarity.
  • the terminal can retrieve a preset fuzzy judgment function to determine whether the sharpness of the face image satisfies the definition threshold.
  • the fuzzy judgment function can adopt a fuzzy judgment function from existing image recognition processing technology, which is not limited in this embodiment. For a face image satisfying the sharpness threshold, step 305 is performed; a face image that does not satisfy the sharpness threshold is discarded directly, and the process returns to step 303.
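One common stand-in for such a fuzzy (blur) judgment function is the variance of the image Laplacian: blurred images have weak edges and hence a low-variance Laplacian response. The patent does not name a specific function, so this choice, and the threshold value, are purely illustrative:

```python
import numpy as np

def laplacian_variance(gray):
    """Variance of the 4-neighbour Laplacian response over the interior
    pixels; low values suggest a blurred image."""
    g = np.asarray(gray, dtype=np.float64)
    lap = (-4.0 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return float(lap.var())

def is_sharp_enough(gray, threshold=100.0):
    # threshold is an arbitrary placeholder for the preset sharpness
    # threshold mentioned in the text; it must be tuned per camera.
    return laplacian_variance(gray) >= threshold
```

A uniform image scores exactly zero, while any image with texture scores higher, which is what makes the statistic usable as a sharpness gate.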
  • Step 305 The terminal extracts the head posture information from the face image.
  • after determining in step 304 that the acquired face image is a clear face image, the terminal extracts the head posture information from the face image.
  • the head posture information in this embodiment may include at least one of the following angles: a head-lowering angle, a side-face angle, and a head-rotation angle.
  • Step 306 The terminal determines whether each angle included in the gesture information of the human head is within a preset angle range. If yes, step 307 is performed; otherwise, the current flow ends.
  • whether the face image is a frontal face image of the user can be determined from the head posture information: the terminal determines whether each angle included in the head posture information is within a preset angle range, for example, a preset angle range of 0 to 10 degrees.
  • step 307 is executed for face images whose head posture angles are all within the preset range; face images whose head posture angles fall outside the range are discarded, and the process returns to step 303.
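The angle gate of step 306 might look like the following; the angle names and the 0-10 degree range follow the example above, but the dictionary representation is an assumption for the sketch:

```python
def is_frontal(head_pose, max_angle=10.0):
    """Return True if every reported head posture angle is within the
    preset range (0-10 degrees here, as in the example above).

    head_pose: hypothetical dict of angles in degrees; missing angles
    are treated as 0 (i.e. perfectly frontal for that axis)."""
    return all(abs(head_pose.get(k, 0.0)) <= max_angle
               for k in ("pitch", "yaw", "roll"))
```

Images rejected by this gate would be discarded and the capture loop would continue, as the surrounding steps describe.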
  • Step 307 The terminal extracts facial feature information of the user from the face image.
  • an LBP (Local Binary Patterns) feature extraction algorithm may be used to extract a face feature vector value from the face image as the facial feature information of the user.
  • a face feature extraction algorithm used in any existing image processing technology can be applied to the embodiment of the present application, for example, a Gabor feature extraction algorithm based on a windowed Fourier transform.
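For illustration, a basic 3x3 Local Binary Patterns transform and its code histogram can serve as the kind of face feature vector the text describes. Production systems use refined LBP variants (uniform patterns, multi-scale, block-wise histograms), so this is only a sketch of the core idea:

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 LBP: each interior pixel receives an 8-bit code built by
    thresholding its eight neighbours against the centre value."""
    g = np.asarray(gray).astype(np.int32)
    c = g[1:-1, 1:-1]
    # Eight neighbours, one per bit, in a fixed clockwise order.
    shifts = [g[:-2, :-2], g[:-2, 1:-1], g[:-2, 2:], g[1:-1, 2:],
              g[2:, 2:], g[2:, 1:-1], g[2:, :-2], g[1:-1, :-2]]
    code = np.zeros_like(c)
    for bit, n in enumerate(shifts):
        code |= ((n >= c).astype(np.int32) << bit)
    return code

def lbp_histogram(gray, bins=256):
    """Normalized histogram of LBP codes: a simple face feature vector."""
    hist, _ = np.histogram(lbp_image(gray), bins=bins, range=(0, bins))
    return hist / max(hist.sum(), 1)
```

The resulting 256-dimensional histogram is the sort of vector that the comparison steps later in the document (Euclidean or cosine distance) would operate on.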
  • the facial feature information of the user may be extracted from a plurality of face images, and the number of face images may be preset, for example, five; accordingly, steps 303 to 307 may be executed cyclically according to the set number to acquire the preset number of face images and extract facial feature information from them.
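The cyclic execution of steps 303 to 307 can be sketched as a loop that keeps sampling frames until the preset number of usable images has been collected. The capture and check functions are hypothetical stand-ins passed in as parameters, and `max_tries` is an added safety bound the patent does not mention:

```python
def collect_face_features(capture_frame, is_sharp, is_frontal,
                          extract_features, needed=5, max_tries=100):
    """Gather feature vectors from `needed` acceptable face images."""
    features = []
    for _ in range(max_tries):
        frame = capture_frame()                   # step 303: sample a frame
        if not is_sharp(frame):                   # step 304: sharpness gate
            continue
        if not is_frontal(frame):                 # steps 305-306: pose gate
            continue
        features.append(extract_features(frame))  # step 307: extract
        if len(features) >= needed:
            break
    return features
```

Passing the gates in as functions keeps the loop testable with stubs and mirrors the document's separation of the sharpness check, the pose check, and the extraction step.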
  • Step 308 The terminal sends the face feature information to the server.
  • Step 309 The server saves the correspondence between the user name of the registered user and the facial feature information, and ends the current process.
  • after receiving the facial feature information sent by the terminal, the server saves the correspondence between the user name of the registered user and the facial feature information in the face feature information database.
  • if a plurality of pieces of facial feature information are received, the correspondence between the user name and the plurality of pieces of facial feature information is saved.
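A minimal sketch of the face feature information database keyed by user name, allowing several feature vectors per registered user as the text describes; the dictionary storage is an assumption standing in for whatever database the server actually uses:

```python
# Hypothetical in-memory face feature information database:
# user name -> list of feature vectors saved at registration.
face_feature_db = {}

def save_registration(username, feature_vectors):
    """Steps 308-309: persist the user name -> features correspondence."""
    face_feature_db.setdefault(username, []).extend(feature_vectors)

def lookup_features(username):
    """Used later, during authentication, to fetch the second facial
    feature information for comparison."""
    return face_feature_db.get(username, [])
```

Storing several vectors per user lets the later matching step compare the probe against multiple enrollment samples, which tolerates pose and lighting variation.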
  • FIG. 4A shows another embodiment of the identity authentication method of the present application, which builds on the face registration process shown in FIG. 3A and describes the process of authenticating a user in detail:
  • Step 401 Start identity authentication for the user.
  • Step 402 The terminal acquires first facial feature information of the user.
  • the manner in which the terminal obtains the facial feature information of the user is consistent with the manner in which facial feature information is acquired in the face registration process, specifically steps 302 to 307 shown in FIG. 3A, and is not repeated here.
  • the terminal may acquire at least one first facial feature information.
  • Step 403 The terminal sends the first facial feature information of the user to the server.
  • Step 404 The server verifies whether the first facial feature information matches the saved second facial feature information of the user. If yes, step 405 is performed; otherwise, the current process is ended.
  • The server may search the facial feature information database according to the user's user name to obtain the second facial feature information corresponding to that user name, and then compare the first and second facial feature information in a preset comparison manner. If the feature comparison value is within a preset similarity range, the first facial feature information may be determined to match the second facial feature information.
  • In one example, the first and second facial feature information may be compared using the Euclidean distance: the sum of the squares of the differences between the second and first face feature vectors is calculated, and if this sum of squares is less than a preset threshold, it may be determined that the identity authentication is being performed by the user himself;
  • In another example, the first and second facial feature information may be compared using the cosine distance: if the first face feature vector is V1 and the second face feature vector is V2, the value V2*V1/(|V1|*|V2|) is calculated, and if this value is greater than a preset threshold, it may be determined that the identity authentication is being performed by the user himself.
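The two comparison manners can be illustrated with a minimal Python sketch. The function names and thresholds below are illustrative assumptions, not values from the embodiment; a real system would tune the thresholds against its own feature extractor.

```python
import math

def euclidean_match(v1, v2, threshold):
    # Sum of squared differences between the fresh (v1) and saved (v2)
    # feature vectors; authentication passes when the distance is small.
    return sum((b - a) ** 2 for a, b in zip(v1, v2)) < threshold

def cosine_match(v1, v2, threshold):
    # V2*V1 / (|V1|*|V2|): cosine of the angle between the vectors;
    # authentication passes when the similarity exceeds the threshold.
    dot = sum(a * b for a, b in zip(v1, v2))
    norm = math.sqrt(sum(a * a for a in v1)) * math.sqrt(sum(b * b for b in v2))
    return dot / norm > threshold
```

Note the opposite senses of the two tests: Euclidean distance must fall *below* its threshold, while cosine similarity must rise *above* its threshold.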
  • Step 405 The server sends the face dynamic authentication prompt information to the terminal.
  • the server may randomly extract a face dynamic authentication prompt information from the face dynamic authentication prompt information database.
  • The face dynamic authentication prompt information may include expression action prompt information or voice reading prompt information.
  • For the expression action prompt information, the prompted action is usually one that the user can easily present through a facial gesture, for example, opening the mouth, closing the eyes, or turning the head.
  • For the voice reading prompt information, the information is usually short, so that the user can read it conveniently at authentication time and the terminal can easily recognize the user's facial gesture while reading.
  • Step 406 The terminal obtains face tracking information by performing face tracking on the user.
  • After receiving the face dynamic authentication prompt information, the terminal may output it on the authentication interface, and the user may present the corresponding facial gesture accordingly. During this presentation, the terminal acquires the user's face tracking information through a face tracking algorithm; the face tracking information may include at least one of the following: facial key point position information and head posture information.
  • Step 407 The terminal analyzes the face tracking information to obtain the gesture recognition information of the user.
  • In one example, suppose the face dynamic authentication prompt information is "open mouth"; the user then makes a mouth-opening action, and the terminal obtains the facial key point position information, specifically the mouth key point positions, by tracking the user's face. FIG. 4B and FIG. 4C are schematic diagrams of facial key point position information in an embodiment of the present application: FIG. 4B shows the mouth key point positions extracted in the normal state, and FIG. 4C shows the mouth key point positions extracted after the user presents the "open mouth" gesture. By comparing the key point positions extracted in FIG. 4B and FIG. 4C, that is, by comparing the coordinate distance between the key points above and below the mouth, the user's gesture recognition information can be obtained as "open mouth".
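A minimal sketch of that mouth-key-point comparison, assuming the tracker reports one key point above and one below the mouth. The dictionary keys and the `open_ratio` factor are hypothetical, introduced only for illustration.

```python
def recognize_mouth_gesture(neutral_pts, current_pts, open_ratio=1.8):
    # Each argument holds the (x, y) coordinates of the key points above
    # and below the mouth, as extracted by the face tracker.
    def lip_gap(pts):
        (_, y_upper) = pts["upper_lip"]
        (_, y_lower) = pts["lower_lip"]
        return abs(y_lower - y_upper)

    # The mouth counts as "open" when the lip gap has grown well beyond
    # its size in the neutral frame (compare FIG. 4B vs. FIG. 4C).
    if lip_gap(current_pts) > open_ratio * lip_gap(neutral_pts):
        return "open mouth"
    return "neutral"
```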
  • In another example, suppose the prompt information is "turn head"; the user then turns his or her head, and the terminal obtains the head posture information, specifically the three angles shown in FIG. 3B, by tracking the user's face. If the values of the three angles satisfy the angle range defined for "turn head", the user's gesture recognition information can be obtained as "turn head".
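The angle-range check for "turn head" might look like the following sketch. The yaw range is an assumed placeholder, not a value from the embodiment, and only the side-face angle is tested here for simplicity.

```python
TURN_HEAD_RANGE = (20.0, 60.0)  # assumed yaw range, in degrees, that counts as "turn head"

def recognize_head_gesture(pitch, yaw, roll):
    # pitch: raising/lowering-head angle, yaw: side-face angle,
    # roll: head-tilt angle (the three angles of FIG. 3B).
    lo, hi = TURN_HEAD_RANGE
    if lo <= abs(yaw) <= hi:
        return "turn head"
    return "neutral"
```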
  • Step 408 The terminal sends the gesture recognition information to the server.
  • Step 409: The server verifies whether the gesture recognition information is consistent with the face dynamic authentication prompt information. If yes, step 410 is performed; otherwise, the current flow is ended.
  • Step 410 The server determines that the user passes the identity authentication and ends the current process.
  • As can be seen from the above, this embodiment combines face authentication and dynamic authentication to perform high-security authentication of the user's identity. Face authentication can initially verify that the user is indeed the claimed user; compared with the existing authentication-password approach, the authentication information is not easily stolen by a malicious third party, which improves the reliability of authentication. On the basis of confirming that the user is the claimed user, face dynamic authentication can further verify that the user is a live user, thereby improving the accuracy of identity authentication and reducing the security risks in the authentication process.
  • the present application also provides an embodiment of an identity authentication device, a terminal, and a server.
  • Embodiments of the identity authentication apparatus of the present application can be applied to terminals and servers, respectively.
  • The apparatus embodiments may be implemented by software, or by hardware or a combination of hardware and software. Taking software implementation as an example, as a logical apparatus, it is formed by the processor of the device in which it is located reading the corresponding computer program instructions from non-volatile storage into memory and running them. At the hardware level, FIG. 5 shows a hardware structure diagram of the device where the identity authentication apparatus is located; in addition to the processor, memory, network interface, and non-volatile storage shown in FIG. 5, the device may also include other hardware according to its actual function: a terminal may include a camera, a touch screen, communication components, and the like, while a server may include a forwarding chip responsible for processing packets, and so on.
  • the identity authentication apparatus may be applied to a terminal, and the apparatus includes: a receiving unit 610, an identifying unit 620, and a sending unit 630.
  • the receiving unit 610 is configured to receive the facial dynamic authentication prompt information sent by the server when the user performs identity authentication.
  • the identifying unit 620 is configured to obtain the gesture recognition information of the facial dynamic authentication prompt information by identifying a facial gesture presented by the user;
  • the sending unit 630 is configured to send the gesture identification information to the server, so that the server determines that the user passes the identity authentication when verifying that the gesture identification information is consistent with the face dynamic authentication prompt information.
  • the identification unit 620 can include (not shown in FIG. 6):
  • a face information obtaining sub-unit configured to obtain face tracking information by performing face tracking on the user when the user presents a face gesture according to the face dynamic authentication prompt information
  • the face information analysis subunit is configured to analyze the face tracking information to obtain the gesture identification information of the user.
  • The face information analysis sub-unit may be specifically configured to: when the face tracking information is facial key point position information, obtain the user's expression gesture recognition information by analyzing the facial key point position information; and when the face tracking information is head posture information, obtain the user's head rotation recognition information by analyzing the head posture information.
  • the face dynamic authentication prompt information may include at least one of the following information: an expression action prompt information, and a voice read prompt information.
  • the apparatus may also include (not shown in Figure 6):
  • An acquiring unit configured to acquire facial feature information of the user, and use the facial feature information acquired during the identity authentication as the first facial feature information of the user;
  • The sending unit 630 may be further configured to send the user's first facial feature information to the server, so that the server sends the face dynamic authentication prompt information upon verifying that the first facial feature information matches the saved second facial feature information of the user.
  • The acquiring unit may be further configured to: when the user registers, acquire the user's facial feature information and use the facial feature information acquired during registration as the user's second facial feature information.
  • The sending unit 630 is further configured to send the second facial feature information to the server, so that the server saves the correspondence between the user's user name and the second facial feature information.
  • The acquiring unit may include the following sub-units:
  • a face tracking subunit configured to perform face tracking on the user when the face of the user is detected
  • an image acquisition subunit, configured to acquire face images at preset time intervals during the face tracking process;
  • the condition determining subunit is configured to determine whether the face image satisfies a preset feature extraction condition
  • a feature extraction subunit, configured to extract the user's facial feature information from the face image if the feature extraction condition is satisfied.
  • condition determining subunit may further include:
  • a clarity determining module, configured to determine whether the clarity (sharpness) of the face image meets a preset clarity threshold;
  • a posture information extracting module, configured to extract head posture information from the face image if the clarity threshold is met, the head posture information including at least one of the following angles: a head raising/lowering (pitch) angle, a side face (yaw) angle, and a head tilt angle;
  • an angle determining module, configured to determine whether each angle included in the head posture information is within a preset angle range;
  • a determination module, configured to determine that the face image satisfies the feature extraction condition if each angle is within the preset angle range.
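Taken together, the clarity and angle checks of the condition determining subunit can be sketched as follows. The sharpness threshold and the 0-10 degree angle range are illustrative assumptions in the spirit of the embodiment, not mandated values.

```python
def meets_extraction_condition(sharpness, pitch, yaw, roll,
                               sharpness_threshold=0.6, angle_range=(0.0, 10.0)):
    # Step 1: discard blurry frames that fail the clarity threshold.
    if sharpness < sharpness_threshold:
        return False
    # Step 2: accept only near-frontal head poses -- every angle
    # (pitch, yaw, roll) must fall inside the preset range.
    lo, hi = angle_range
    return all(lo <= abs(a) <= hi for a in (pitch, yaw, roll))
```

Frames rejected here are simply dropped, and the capture loop acquires the next face image at the following time interval.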
  • The feature extraction sub-unit may be specifically configured to use a preset feature extraction algorithm to extract a facial feature vector from the face image as the user's facial feature information, where the preset feature extraction algorithm may include an LBP (Local Binary Patterns) feature extraction algorithm, a windowed Fourier transform (Gabor) feature extraction algorithm, and the like.
  • the identity authentication apparatus may be applied to a server, and the apparatus includes: a sending unit 710, a receiving unit 720, and a determining unit 730.
  • the sending unit 710 is configured to send the face dynamic authentication prompt information to the terminal when the user performs identity authentication.
  • The receiving unit 720 is configured to receive the gesture recognition information sent by the terminal, the gesture recognition information being obtained by the terminal by recognizing the facial gesture presented by the user according to the face dynamic authentication prompt information;
  • the determining unit 730 is configured to determine that the user passes the identity authentication when verifying that the gesture identification information is consistent with the facial dynamic authentication prompt information.
  • the receiving unit 720 is further configured to receive the first facial feature information of the user that is sent by the terminal;
  • the device may further include: (not shown in FIG. 7): a verification unit, configured to verify whether the first facial feature information matches the saved second facial feature information of the user;
  • the sending unit 710 may be specifically configured to send the face dynamic authentication prompt information to the terminal when the matching is performed.
  • the receiving unit 720 is further configured to: when the user performs registration, receive the second facial feature information of the user that is sent by the terminal; the device may further include: The saving unit is configured to save a correspondence between the user name of the user and the second facial feature information.
  • The verification unit may include: a feature search subunit, configured to search the correspondence according to the user's user name to obtain the second facial feature information corresponding to that user name; a feature comparison subunit, configured to compare the first and second facial feature information in a preset comparison manner; and a match determining subunit, configured to determine that the first facial feature information matches the second facial feature information if the feature comparison value is within a preset similarity range.
  • The preset comparison manner that the feature comparison subunit can adopt includes the Euclidean distance comparison method or the cosine distance comparison method.
  • Since the apparatus embodiments basically correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant details. The apparatus embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solution of the present application, which those of ordinary skill in the art can understand and implement without creative effort.
  • As can be seen from the above embodiments, the face dynamic authentication method can perform high-security authentication of the user's identity: compared with the existing authentication modes, the authentication information cannot be stolen by a malicious third party, which improves the reliability of authentication, and face dynamic authentication can verify that the user is a live user, thereby further improving the accuracy of identity authentication and reducing the security risks in the authentication process.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Signal Processing (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Acoustics & Sound (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Collating Specific Patterns (AREA)
  • Financial Or Insurance-Related Operations Such As Payment And Settlement (AREA)
  • Image Analysis (AREA)

Abstract

The present application discloses an identity authentication method, apparatus, terminal, and server. The method includes: when a user performs identity authentication, receiving face dynamic authentication prompt information sent by a server; obtaining gesture recognition information for the face dynamic authentication prompt information by recognizing the facial gesture presented by the user; and sending the gesture recognition information to the server, so that the server determines that the user passes identity authentication when it verifies that the gesture recognition information is consistent with the face dynamic authentication prompt information. By applying the embodiments of the present application, the user's identity can be authenticated with high security through face dynamic authentication; compared with the existing approach of authenticating with an authentication password, the authentication information cannot be stolen by a malicious third party, which improves the reliability of authentication, and face dynamic authentication can verify that the user is a live user, thereby further improving the accuracy of identity authentication and reducing the security risks in the authentication process.

Description

身份认证方法、装置、终端及服务器 技术领域
本申请涉及通信技术领域,尤其涉及身份认证方法、装置、终端及服务器。
背景技术
随着智能终端的发展和网络应用的开发,用户通过终端上安装的各种应用客户端可以对各种网络应用进行访问,例如,社交类即时通信应用,购物类应用等。在访问过程中,往往需要对用户进行身份认证,以便在身份认证通过后,允许用户使用各种应用功能。
现有技术中,在进行身份认证时,往往需要用户在认证界面输入认证密码,服务器验证输入的认证密码与用户注册时的认证密码一致时,确认用户通过身份认证。但是,认证密码往往是数字和字母的简单组合,容易被恶意第三方窃取。因此,现有身份认证方式的可靠性较差,容易造成用户信息被盗取,导致认证的安全性不高。
发明内容
本申请提供身份认证方法、装置、终端及服务器,以解决现有技术中身份认证方式可靠性较差且安全性不高的问题。
根据本申请实施例的第一方面,提供一种身份认证方法,所述方法包括:
在用户进行身份认证时,接收服务器发送的人脸动态认证提示信息;
通过识别所述用户呈现的人脸姿态,获得所述人脸动态认证提示信息的姿态识别信息;
将所述姿态识别信息发送至所述服务器,以使所述服务器在验证所述姿态识别信息与所述人脸动态认证提示信息一致时,确定所述用户通过身份认 证。
根据本申请实施例的第二方面,提供一种身份认证方法,所述方法包括:
在用户进行身份认证时,向终端发送人脸动态认证提示信息;
接收所述终端发送的姿态识别信息,所述姿态识别信息为所述终端通过识别所述用户根据所述人脸动态认证提示信息呈现的人脸姿态,获得的姿态识别信息;
当验证所述姿态识别信息与所述人脸动态认证提示信息一致时,确定所述用户通过身份认证。
根据本申请实施例的第三方面,提供一种身份认证装置,所述装置包括:
接收单元,用于在用户进行身份认证时,接收服务器发送的人脸动态认证提示信息;
识别单元,用于通过识别所述用户呈现的人脸姿态,获得所述人脸动态认证提示信息的姿态识别信息;
发送单元,用于将所述姿态识别信息发送至所述服务器,以使所述服务器在验证所述姿态识别信息与所述人脸动态认证提示信息一致时,确定所述用户通过身份认证。
根据本申请实施例的第四方面,提供一种身份认证装置,所述装置包括:
发送单元,用于在用户进行身份认证时,向终端发送人脸动态认证提示信息;
接收单元,用于接收所述终端发送的姿态识别信息,所述姿态识别信息为所述终端通过识别所述用户根据所述人脸动态认证提示信息呈现的人脸姿态,获得的姿态识别信息;
确定单元,用于当验证所述姿态识别信息与所述人脸动态认证提示信息一致时,确定所述用户通过身份认证。
根据本申请实施例的第五方面,提供一种终端,包括:
处理器;用于存储所述处理器可执行指令的存储器;
其中,所述处理器被配置为:
在用户进行身份认证时,接收服务器发送的人脸动态认证提示信息;
通过识别所述用户呈现的人脸姿态,获得所述人脸动态认证提示信息的姿态识别信息;
将所述姿态识别信息发送至所述服务器,以使所述服务器在验证所述姿态识别信息与所述人脸动态认证提示信息一致时,确定所述用户通过身份认证。
根据本申请实施例的第六方面,提供一种服务器,包括:
处理器;用于存储所述处理器可执行指令的存储器;
其中,所述处理器被配置为:
在用户进行身份认证时,向终端发送人脸动态认证提示信息;
接收所述终端发送的姿态识别信息,所述姿态识别信息为所述终端通过识别所述用户根据所述人脸动态认证提示信息呈现的人脸姿态,获得的姿态识别信息;
当验证所述姿态识别信息与所述人脸动态认证提示信息一致时,确定所述用户通过身份认证。
本申请实施例中在对用户进行身份认证时,服务器向终端发送动态认证提示信息,终端通过识别用户呈现的人脸姿态,获得人脸动态认证提示信息的姿态识别信息,并发送至服务器,服务器在验证姿态识别信息与人脸动态认证提示信息一致时,确定用户通过身份认证。应用本申请实施例,通过人脸动态认证方式可以对用户身份进行高安全性认证,相较于现有采用认证密码进行认证的方式,认证信息不会被恶意第三方窃取,提高了认证的可靠性,并且通过人脸动态认证可以识别用户为活体用户,从而进一步提高身份认证的准确性,降低认证过程中存在的安全隐患。
附图说明
图1为本申请实施例的身份认证场景示意图;
图2A为本申请身份认证方法的一个实施例流程图;
图2B为本申请身份认证方法的另一个实施例流程图;
图3A为本申请身份认证方法的另一个实施例流程图;
图3B为本申请实施例中人脸认证过程中的人头姿态示意图;
图4A为本申请身份认证方法的另一个实施例流程图;
图4B和图4C为本申请实施例中面部关键点示意图;
图5为本申请身份认证装置所在设备的一种硬件结构图;
图6为本申请身份认证装置的一个实施例框图;
图7为本申请身份认证装置的另一个实施例框图。
具体实施方式
这里将详细地对示例性实施例进行说明,其示例表示在附图中。下面的描述涉及附图时,除非另有表示,不同附图中的相同数字表示相同或相似的要素。以下示例性实施例中所描述的实施方式并不代表与本申请相一致的所有实施方式。相反,它们仅是与如所附权利要求书中所详述的、本申请的一些方面相一致的装置和方法的例子。
在本申请使用的术语是仅仅出于描述特定实施例的目的,而非旨在限制本申请。在本申请和所附权利要求书中所使用的单数形式的“一种”、“所述”和“该”也旨在包括多数形式,除非上下文清楚地表示其他含义。还应当理解,本文中使用的术语“和/或”是指并包含一个或多个相关联的列出项目的任何或所有可能组合。
应当理解,尽管在本申请可能采用术语第一、第二、第三等来描述各种信息,但这些信息不应限于这些术语。这些术语仅用来将同一类型的信息彼此区分开。例如,在不脱离本申请范围的情况下,第一信息也可以被称为第二信息,类似地,第二信息也可以被称为第一信息。取决于语境,如在此所使用的词语“如果”可以被解释成为“在……时”或“当……时”或“响应于确定”。
在基于互联网通信的场景中,用户可以通过所持终端上安装的各种应用 客户端实现对各种网络应用的访问,在访问过程中,用户往往需要进行身份认证,但是现有技术中通常采用认证密码对用户身份进行认证,认证密码通常是数字和字母的简单组合,容易被恶意第三方窃取,因此现有身份认证方式可靠性较差,安全性不高。基于此,参见图1,为本申请实施例实现身份认证的应用场景示意图:用户通过所持终端与服务器之间交互,完成对用户的身份认证,终端与服务器之间的通信可以基于网络完成,该网络包括各种无线网络或有线网络,对此本申请实施例不进行限制。其中,终端可以具体为手机、平板电脑、个人计算机等。在图1示出的应用场景中,服务器上可以设置两个数据库,分别为人脸特征信息数据库和人脸动态认证提示信息数据库。
在人脸注册阶段,终端可以获取注册用户的人脸特征信息并发送至服务器,由服务器将该注册用户的人脸特征信息保存至人脸特征信息数据库。在身份认证阶段,可以首先进行人脸认证,此时用户将获取的人脸特征信息发送至服务器,服务器验证该人脸特征信息与人脸特征信息数据中保存的该用户的人脸特征信息匹配时,可以初步确定当前进行身份认证的为用户本人;然后进行人脸动态认证,此时服务器可以向用户返回从人脸动态认证提示信息数据库中获取的人脸动态认证提示信息,在终端识别用户呈现的人脸姿态从而获得该人脸动态认证提示信息的姿态识别信息并发送至服务器,服务器验证该姿态识别信息与人脸动态认证提示信息一致时,可知当前认证用户为活体用户,从而最终确定用户通过身份认证。为了描述方便,本申请实施例中可以将人脸注册阶段获取的用户的人脸特征信息称为第二人脸特征信息,将人脸认证阶段获取的用户的人脸特征信息称为第一人脸特征信息。下面对本申请实施例进行详细说明。
参见图2A,为本申请身份认证方法的一个实施例的流程图,该实施例从实现身份认证的终端侧进行描述:
步骤201:在用户进行身份认证时,接收服务器发送的人脸动态认证提示信息。
本申请实施例中,服务器可以从人脸动态认证提示信息数据中随机提取人脸动态认证提示信息返回给终端,该人脸动态认证提示信息可以包括至少一种下述信息:表情动作提示信息,例如,闭眼、张嘴、转头等;语音读取提示信息,例如,支付20元等。
可选的,在接收服务器发送的人脸动态认证提示信息之前,终端可以先获取用户的人脸特征信息,将该身份认证时获取的人脸特征信息作为用户的第一人脸信息,向服务器发送用户的第一人脸特征信息后,服务器在验证该第一人脸特征信息与已保存的所述的第二人脸特征信息匹配时向终端发送该人脸动态认证提示信息。
其中,在获取用户的人脸特征信息时,终端可以启动其上集成的摄像设备,例如摄像头,对用户的人脸进行检测,在检测到人脸时,对用户进行人脸跟踪,在人脸跟踪过程中按照预设时间间隔获取人脸图像,对于获取的每个人脸图像,判断该人脸图像是否满足预设的特征提取条件,若满足特征提取条件,则从该人脸图像中提取该用户的人脸特征信息。
其中,服务器接收到用户的第一人脸特征信息后,可以根据该用户的用户名查找人脸特征信息数据库,获得与该用户名对应的第二人脸特征信息,然后采用预设的比较方式比较第一人脸特征信息和第二人脸特征信息,如果特征比较值在预设的相似度范围内,则可以确定第一人脸特征信息与第二人脸特征信息匹配,在确定第一人脸特征信息与第二人脸特征信息匹配后,可以确定用户通过人脸认证,此时服务器向终端发送人脸动态认证提示信息。
步骤202:通过识别用户呈现的人脸姿态,获得人脸动态认证提示信息的姿态识别信息。
本申请实施例中,当终端接收到人脸动态认证提示信息后,在身份认证界面显示该人脸动态认证提示信息,用户可以据此信息呈现相应的人脸姿态,而终端在识别人脸姿态时,可以对用户进行人脸跟踪,获得人脸跟踪信息,该人脸跟踪信息可以包括面部关键点位置信息和人头姿态信息中的至少一种信息,然后终端通过分析人脸跟踪信息获得用户的姿态识别信息。例如,通 过面部关键点位置信息可以获知用户按照表情动作提示信息是否闭眼、张嘴,或者在读取语音读取提示信息时用户的嘴型(每个词的发音与嘴型有对应关系,通过嘴型可以确定用户的姿态识别信息);通过人头姿态信息可以获知用户是否转头、低头等。
步骤203:将姿态识别信息发送至服务器,以使服务器在验证姿态识别信息与人脸动态认证提示信息一致时,确定用户通过身份认证。
对服务器来说,同一时间可能需要对多个用户进行身份认证,如果对不同用户发送了不同的动态认证提示信息,则步骤201中,服务器可以将人脸动态认证提示信息发送至终端后,记录该用户的用户名与人脸动态认证提示信息的对应关系;本步骤中,终端将姿态识别信息发送至服务器后,服务器根据用户的用户名获取到对应的人脸动态认证提示信息,验证该姿态识别信息与人脸动态认证提示信息一致时,说明用户为活体用户,此时确定用户通过身份认证。
另外,如果步骤201中人脸动态认证提示信息为语音读取提示信息,则终端除了获取用户的嘴型外,也可以获得用户的音频信息,通过语音识别该音频信息获得用户读取的语音信息,以便服务器比对该语音信息与语音读取提示信息是否一致,在一致时确定用户通过身份认证。
参见图2B,为本申请身份认证方法的另一个实施例的流程图,该实施例从实现身份认证的服务器侧进行描述:
步骤211:在用户进行身份认证时,向终端发送人脸动态认证提示信息。
步骤212:接收终端发送的姿态识别信息,该姿态识别信息为终端通过识别用户根据人脸动态认证提示信息呈现的人脸姿态,获得的姿态识别信息。
步骤213:当验证姿态识别信息与人脸动态认证提示信息一致时,确定用户通过身份认证。
需要说明的是,上述图2B示出的身份认证过程与图2A示出的身份认证过程的区别仅在于执行主体的不同,即图2A从终端侧进行描述,而图2B从服务器侧进行描述,因此图2B实施例中的相关实现过程可以参见前述图2A 中的描述,在此不再赘述。
由上述实施例可见,该实施例通过人脸动态认证方式可以对用户身份进行高安全性认证,相较于现有采用认证密码进行认证的方式,认证信息不会被恶意第三方窃取,提高了认证的可靠性,并且通过人脸动态认证可以识别用户为活体用户,从而进一步提高身份认证的准确性,降低认证过程中存在的安全隐患。
参见图3A,为本申请身份认证方法的另一个实施例,该实施例详细示出了人脸注册的过程:
步骤301:用户通过终端向服务器注册。
步骤302:终端检测到用户的人脸时,对用户进行人脸跟踪。
通常终端上都集成有摄像设备,例如摄像头,本实施例可以默认设置在用户注册时,自动启动摄像设备对用户人脸进行检测,通常用户可以手持终端将摄像设备对准自己的正脸。当通过摄像设备检测到人脸时,终端可以通过人脸跟踪算法对用户进行人脸跟踪,需要说明的是,本申请实施例可以采用各种现有的人脸跟踪算法,在此不再赘述。
步骤303:终端在人脸跟踪过程中按照预设时间间隔获取人脸图像。
在人脸跟踪过程中,终端通过摄像设备按照预设时间间隔获取人脸图像,设置时间间隔是为了避免提取到大致相同的人脸图像,例如,预设时间间隔可以为3秒。
步骤304:判断人脸图像的清晰度是否满足预设的清晰度阈值,若是,则执行步骤305;否则,结束当前流程。
对于步骤303获取的人脸图像,可以先对其清晰度进行判断,以便排除清晰度不足的人脸图像。此时终端可以调取预先设置的模糊判断函数,判断该人脸图像的清晰度是否满足清晰度阈值,其中,模糊判断函数可以采用现有图像识别处理技术中的模糊判断函数,对此本申请实施例不进行限制。对于满足清晰度阈值的人脸图像,执行步骤305,对于不满足清晰度阈值的人脸图像,直接丢弃,然后返回步骤303。
步骤305:终端从人脸图像中提取人头姿态信息。
在步骤304中判断出获取的人脸图像为清晰的人脸图像后,终端从人脸图像中提取人头姿态信息。如图3B所示,为本申请实施例中的人头姿态示意图:本实施例中的人头姿态信息可以包括至少一个下述角度:低仰头角度、侧脸角度和偏头角度。
步骤306:终端判断人头姿态信息包含的每个角度是否在预设的角度范围内,若是,则执行步骤307;否则,结束当前流程。
本申请实施例中,通过人头姿态信息可以判断出人脸图像是否为用户的正脸图像,此时终端可以判断人头姿态信息中包含的每个角度是否在预设的角度范围内,例如,该预设的角度范围为0度至10度。对于判断结果为是的人头姿态信息对应的人脸图像,执行步骤307;对于判断结果为否的人头姿态信息对应的人脸图像,直接丢弃,然后返回步骤303。
步骤307:终端从人脸图像中提取用户的人脸特征信息。
本申请实施例可以采用LBP(Linear Back Projection,线性反投影)特征提取算法,从人脸图像中提取人脸特征向量值作为用户的人脸特征信息。当然,本申请实施例不限制进行人脸特征提取的具体算法,任何现有图像处理技术中采用的人脸特征提取算法都可适用于本申请实施例,例如,加窗傅立叶变换gabor特征提取算法等。
为了保证后续身份认证阶段人脸认证的准确度,在人脸注册阶段,对于同一注册用户,可以从多个人脸图像中提取该用户的人脸特征信息,该多个人脸图像的数量可以预先设置,例如5个,相应的,按照设置的人脸图像的数量,可以循环执行前述步骤303至步骤307,以便获取到满足该预设数量的人脸图像,并从中提取出人脸特征信息。
步骤308:终端将人脸特征信息发送至服务器。
步骤309:服务器保存注册用户的用户名与该人脸特征的对应关系,结束当前流程。
本实施例中,服务器接收到终端发送的人脸特征信息后,可以在人脸特 征信息数据库中保存注册用户的用户名与人脸特征的对应关系,当接收到多个人脸特征信息时,则相应保存该用户名与多个人脸特征信息的对应关系。
参见图4A,为本申请身份认证方法的另一个实施例,该实施例基于图3所示人脸注册过程,详细描述了对用户进行身份认证的过程:
步骤401:开始对用户进行身份认证。
步骤402:终端获取用户的第一人脸特征信息。
在身份认证过程中,终端获取用户的人脸特征信息的方式与前述图3示出的人脸注册过程中获取人脸特征信息的方式一致,具体与图3示出的步骤302至步骤307一致,在此不再赘述。
本步骤中,终端可以获取到至少一个第一人脸特征信息。
步骤403:终端向服务器发送用户的第一人脸特征信息。
步骤404:服务器验证第一人脸特征信息与已保存的用户的第二人脸特征信息是否匹配,若是,则执行步骤405;否则结束当前流程。
本申请实施例中,服务器接收到用户的第一人脸特征信息后,可以根据该用户的用户名查找人脸特征信息数据库,获得与该用户名对应的第二人脸特征信息,然后采用预设的比较方式比较第一人脸特征信息和第二人脸特征信息,如果特征比较值在预设的相似度范围内,则可以确定第一人脸特征信息与第二人脸特征信息匹配。
假设本申请实施例中人脸特征信息为通过LBP算法提取出的人脸特征向量:
在一个例子中,可以采用欧式距离比较方式比较第一人脸特征信息和第二人脸特征,此时计算第二人脸特征向量与第一人脸特征向量的差值的平方和,如果该平方和小于预设阈值,则可以确定进行身份认证的为用户本人;
在另一个例子中,可以采用余弦距离比较方式比较第一人脸特征信息和第二人脸特征,假设第一人脸特征向量为V1,第二人脸特征向量为V2,则可以计算如下公式值:V2*V1/(|V1|*|V2|),如果该公式值大于预设阈值,则可以确定进行身份认证的为用户本人。
步骤405:服务器向终端发送人脸动态认证提示信息。
当服务器验证第一人脸特征信息与第二人脸特征信息匹配时,确定进行身份认证的为用户本人,此时开始进行人脸动态认证过程。服务器可以从人脸动态认证提示信息数据库中随机抽取一个人脸动态认证提示信息。
本实施例中人脸动态认证提示信息可以包括表情动作提示信息或者语音读取提示信息。对于表情动作提示信息,其所提示的动作通常是用户便于通过面部姿态呈现的动作,例如,张嘴、闭眼、转头等;对于语音读取提示信息,该信息通常比较短,以便于用户在认证时读取,且便于终端识别用户读取时的面部姿态。
步骤406:终端通过对用户进行人脸跟踪,获得人脸跟踪信息。
终端在接收到人脸动态认证提示信息后,可以在认证界面输出人脸动态认证提示信息,用户可以据此信息呈现相应的人脸姿态,在呈现过程中,由终端通过人脸跟踪算法获取用户的人脸跟踪信息。其中,人脸跟踪信息可以包括至少一种下述信息:面部关键点位置信息、人头姿态信息。
步骤407:终端分析人脸跟踪信息获得用户的姿态识别信息。
在一个例子中,假设人脸动态认证提示信息为“张嘴”,则用户相应做出张嘴的动作,终端通过对用户进行人脸跟踪可以获得面部关键点位置信息,具体为嘴部的关键点位置信息,参见图4B和图4C,为本申请实施例中面部关键点位置信息示意图:其中,图4B为正常状态下提取到的用户嘴部关键点位置信息,图4C为用户呈现“张嘴”姿态后提取到的用户嘴部关键点位置信息,通过比较图4B和图4C提取到的关键点位置信息,即比较嘴部上下两个关键点位置的坐标距离就可以获得用户的姿态识别信息为“张嘴”。
在另一个例子中,假设人脸动态认证提示信息为“转头”,则用户相应做出转头的动作,终端通过对用户进行人脸跟踪可以获得人头姿态信息,具体可以如图3B中示出的三个角,如果三个角的角度值满足“转头”所定义的角度值范围,则可获取用户的姿态识别信息为“转头”。
步骤408:终端将姿态识别信息发送至服务器。
步骤409:服务器验证姿态识别信息与人脸动态认证提示信息是否一致,若是,则执行步骤410;否则,结束当前流程。
步骤410:服务器确定用户通过身份认证,结束当前流程。
由上述实施例可见,该实施例将人脸认证与动态认证相结合对用户身份进行高安全性认证,其中通过人脸认证可以初步验证是否为用户本人,相较于现有采用认证密码进行认证的方式,认证信息不易被恶意第三方窃取,提高了认证的可靠性,并且在确认为用户本人的基础上,通过人脸动态认证可以识别用户为活体用户,从而进一步提高身份认证的准确性,降低认证过程中存在的安全隐患。
与本申请身份认证方法的实施例相对应,本申请还提供了身份认证装置、终端及服务器的实施例。
本申请身份认证装置的实施例可以分别应用在终端和服务器上。装置实施例可以通过软件实现，也可以通过硬件或者软硬件结合的方式实现。以软件实现为例，作为一个逻辑意义上的装置，是通过其所在设备的处理器将非易失性存储器中对应的计算机程序指令读取到内存中运行形成的。从硬件层面而言，如图5所示，为本申请身份认证装置所在设备的一种硬件结构图，除了图5所示的处理器、内存、网络接口、以及非易失性存储器之外，实施例中装置所在的设备通常根据该设备的实际功能，还可以包括其他硬件，如对于终端来说，可能包括摄像头、触摸屏、通信组件等，对于服务器来说，可能包括负责处理报文的转发芯片等等。
参见图6,为本申请身份认证装置的一个实施例框图,该身份认证装置可以应用在终端上,该装置包括:接收单元610、识别单元620和发送单元630。
其中,接收单元610,用于在用户进行身份认证时,接收服务器发送的人脸动态认证提示信息;
识别单元620,用于通过识别所述用户呈现的人脸姿态,获得所述人脸动态认证提示信息的姿态识别信息;
发送单元630,用于将所述姿态识别信息发送至所述服务器,以使所述服务器在验证所述姿态识别信息与所述人脸动态认证提示信息一致时,确定所述用户通过身份认证。
在一个可选的实现方式中:
所述识别单元620可以包括(图6中未示出):
人脸信息获得子单元,用于在所述用户根据所述人脸动态认证提示信息呈现人脸姿态时,通过对所述用户进行人脸跟踪,获得人脸跟踪信息;
人脸信息分析子单元,用于分析所述人脸跟踪信息获得所述用户的姿态识别信息。
其中,所述人脸信息分析子单元,可以具体用于当所述人脸跟踪信息为面部关键点位置信息时,通过分析所述面部关键点位置信息获得所述用户的表情姿态识别信息,当所述人脸跟踪信息为人头姿态信息时,通过分析所述人头姿态信息获得所述用户的头部转动识别信息。
其中,所述人脸动态认证提示信息可以包括至少一种下述信息:表情动作提示信息、语音读取提示信息。
在另一个可选的实现方式中:
所述装置还可以包括(图6中未示出):
获取单元,用于获取所述用户的人脸特征信息,将所述身份认证时获取的人脸特征信息作为所述用户的第一人脸特征信息;
所述发送单元630,还可以用于向服务器发送所述用户的第一人脸特征信息,以使所述服务器在验证所述第一人脸特征信息与已保存的所述用户的第二人脸特征信息匹配时发送所述人脸动态认证提示信息。
可选的,所述获取单元,还可以用于在所述用户进行注册时,获取所述用户的人脸特征信息,将所述注册时获取的人脸特征信息作为所述用户的第二人脸特征信息;所述发送单元630,还可以用于将所述第二人脸特征信息发送至所述服务器,以使所述服务器保存所述用户的用户名与所述第二人脸特征的对应关系。
可选的,所述获取单元可以包括:人脸跟踪子单元,用于在检测到所述用户的人脸时,对所述用户进行人脸跟踪;图像获取子单元,用于在所述人脸跟踪过程中按照预设时间间隔获取人脸图像;条件判断子单元,用于判断所述人脸图像是否满足预设的特征提取条件;特征提取子单元,用于若满足所述特征提取条件,则从所述人脸图像中提取所述用户的人脸特征信息。
其中,所述条件判断子单元可以进一步包括:
清晰度判断模块,用于判断所述人脸图像的清晰度是否满足预设的清晰度阈值;
姿态信息提取模块,用于若满足所述清晰度阈值,则从所述人脸图像中提取人头姿态信息,所述人头姿态信息包括至少一个下述角度:低仰头角度、侧脸角度和偏头角度;
角度判断模块,用于判断所述人头姿态信息包含的每个角度是否在预设的角度范围内;
判断确定模块,用于若在预设的角度范围内,则确定所述人脸图像满足所述特征提取条件。
其中,所述特征提取子单元,可以具体用于采用预设特征提取算法,从所述人脸图像中提取人脸特征向量值作为所述用户的人脸特征信息;其中,所述预设特征提取算法可以包括:线性反投影LBP特征提取算法、或加窗傅立叶变换gabor特征提取算法等。
参见图7,为本申请身份认证装置的另一个实施例框图,该身份认证装置可以应用在服务器上,该装置包括:发送单元710、接收单元720和确定单元730。
其中,发送单元710,用于在用户进行身份认证时,向终端发送人脸动态认证提示信息;
接收单元720,用于接收所述终端发送的姿态识别信息,所述姿态识别信息为所述终端通过识别所述用户根据所述人脸动态认证提示信息呈现的人脸姿态,获得的姿态识别信息;
确定单元730,用于当验证所述姿态识别信息与所述人脸动态认证提示信息一致时,确定所述用户通过身份认证。
在一个可选的实现方式中:
所述接收单元720,还可以用于接收所述终端发送的所述用户的第一人脸特征信息;
所述装置还可以包括(图7中未示出):验证单元,用于验证所述第一人脸特征信息与已保存的所述用户的第二人脸特征信息是否匹配;
所述发送单元710,可以具体用于在匹配时,向终端发送人脸动态认证提示信息。
可选的,所述接收单元720,还可以用于在所述用户进行注册时,接收所述终端发送的所述用户的第二人脸特征信息;所述装置还可以包括(图7中未示出):保存单元,用于保存所述用户的用户名与所述第二人脸特征信息的对应关系。
可选的,所述验证单元可以包括:特征查找子单元,用于根据所述用户的用户名查找所述对应关系,获得与所述用户名对应的第二人脸特征信息;特征比较子单元,用于按照预设的比较方式比较所述第一人脸特征信息和所述第二人脸特征信息;匹配确定子单元,用于如果特征比较值在预设的相似度范围内,则确定所述第一人脸特征信息与所述第二人脸特征信息匹配。其中,所述特征比较子单元可以采用的预设的比较方式包括:欧式距离比较方式、或余弦距离比较方式。
上述装置中各个单元的功能和作用的实现过程具体详见上述方法中对应步骤的实现过程,在此不再赘述。
对于装置实施例而言,由于其基本对应于方法实施例,所以相关之处参见方法实施例的部分说明即可。以上所描述的装置实施例仅仅是示意性的,其中所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或 者全部模块来实现本申请方案的目的。本领域普通技术人员在不付出创造性劳动的情况下,即可以理解并实施。
由上述实施例可见,在对用户进行身份认证时,通过人脸动态认证方式可以对用户身份进行高安全性认证,相较于现有采用认证密码进行认证的方式,认证信息不会被恶意第三方窃取,提高了认证的可靠性,并且通过人脸动态认证可以识别用户为活体用户,从而进一步提高身份认证的准确性,降低认证过程中存在的安全隐患。
本领域技术人员在考虑说明书及实践这里公开的发明后,将容易想到本申请的其它实施方案。本申请旨在涵盖本申请的任何变型、用途或者适应性变化,这些变型、用途或者适应性变化遵循本申请的一般性原理并包括本申请未公开的本技术领域中的公知常识或惯用技术手段。说明书和实施例仅被视为示例性的,本申请的真正范围和精神由下面的权利要求指出。
应当理解的是,本申请并不局限于上面已经描述并在附图中示出的精确结构,并且可以在不脱离其范围进行各种修改和改变。本申请的范围仅由所附的权利要求来限制。

Claims (25)

  1. 一种身份认证方法,其特征在于,所述方法包括:
    在用户进行身份认证时,接收服务器发送的人脸动态认证提示信息;
    通过识别所述用户呈现的人脸姿态,获得所述人脸动态认证提示信息的姿态识别信息;
    将所述姿态识别信息发送至所述服务器,以使所述服务器在验证所述姿态识别信息与所述人脸动态认证提示信息一致时,确定所述用户通过身份认证。
  2. 根据权利要求1所述的方法,其特征在于,所述通过识别所述用户呈现的人脸姿态,获得所述人脸动态认证提示信息的姿态识别信息,包括:
    在所述用户根据所述人脸动态认证提示信息呈现人脸姿态时,通过对所述用户进行人脸跟踪,获得人脸跟踪信息;
    分析所述人脸跟踪信息获得所述用户的姿态识别信息。
  3. 根据权利要求2所述的方法,其特征在于,所述分析所述人脸跟踪信息获得所述用户的姿态识别信息,包括:
    当所述人脸跟踪信息为面部关键点位置信息时,通过分析所述面部关键点位置信息获得所述用户的表情姿态识别信息;
    当所述人脸跟踪信息为人头姿态信息时,通过分析所述人头姿态信息获得所述用户的头部转动识别信息。
  4. 根据权利要求1至3任一所述的方法,其特征在于,所述人脸动态认证提示信息包括至少一种下述信息:表情动作提示信息、语音读取提示信息。
  5. 根据权利要求1所述的方法,其特征在于,所述接收所述服务器发送的人脸动态认证提示信息之前,所述方法还包括:
    获取所述用户的人脸特征信息,将所述身份认证时获取的人脸特征信息作为所述用户的第一人脸特征信息;
    向服务器发送所述用户的第一人脸特征信息,以使所述服务器在验证所述第一人脸特征信息与已保存的所述用户的第二人脸特征信息匹配时发送所 述人脸动态认证提示信息。
  6. 根据权利要求5所述的方法,其特征在于,所述方法还包括:
    在所述用户进行注册时,获取所述用户的人脸特征信息,将所述注册时获取的人脸特征信息作为所述用户的第二人脸特征信息;
    将所述第二人脸特征信息发送至所述服务器,以使所述服务器保存所述用户的用户名与所述第二人脸特征的对应关系。
  7. 根据权利要求5或6所述的方法,其特征在于,所述获取所述用户的人脸特征信息,包括:
    在检测到所述用户的人脸时,对所述用户进行人脸跟踪;
    在所述人脸跟踪过程中按照预设时间间隔获取人脸图像;
    判断所述人脸图像是否满足预设的特征提取条件;
    若满足所述特征提取条件,则从所述人脸图像中提取所述用户的人脸特征信息。
  8. 根据权利要求7所述的方法,其特征在于,所述判断所述人脸图像是否满足预设的特征提取条件,包括:
    判断所述人脸图像的清晰度是否满足预设的清晰度阈值;
    若满足所述清晰度阈值,则从所述人脸图像中提取人头姿态信息,所述人头姿态信息包括至少一个下述角度:低仰头角度、侧脸角度和偏头角度;
    判断所述人头姿态信息包含的每个角度是否在预设的角度范围内;
    若在预设的角度范围内,则确定所述人脸图像满足所述特征提取条件。
  9. 一种身份认证方法,其特征在于,所述方法包括:
    在用户进行身份认证时,向终端发送人脸动态认证提示信息;
    接收所述终端发送的姿态识别信息,所述姿态识别信息为所述终端通过识别所述用户根据所述人脸动态认证提示信息呈现的人脸姿态,获得的姿态识别信息;
    当验证所述姿态识别信息与所述人脸动态认证提示信息一致时,确定所述用户通过身份认证。
  10. 根据权利要求9所述的方法,其特征在于,向终端发送人脸动态认证提示信息之前,所述方法还包括:
    接收所述终端发送的所述用户的第一人脸特征信息;
    验证所述第一人脸特征信息与已保存的所述用户的第二人脸特征信息是否匹配;
    若匹配,则执行所述向终端发送人脸动态认证提示信息。
  11. 根据权利要求10所述的方法,其特征在于,所述方法还包括:
    在所述用户进行注册时,接收所述终端发送的所述用户的第二人脸特征信息;
    保存所述用户的用户名与所述第二人脸特征信息的对应关系。
  12. 根据权利要求11所述的方法,其特征在于,所述验证所述第一人脸特征信息与已保存的所述用户的第二人脸特征信息是否匹配,包括:
    根据所述用户的用户名查找所述对应关系,获得与所述用户名对应的第二人脸特征信息;
    采用预设的比较方式比较所述第一人脸特征信息和所述第二人脸特征信息;
    如果特征比较值在预设的相似度范围内,则确定所述第一人脸特征信息与所述第二人脸特征信息匹配。
  13. 一种身份认证装置,其特征在于,所述装置包括:
    接收单元,用于在用户进行身份认证时,接收服务器发送的人脸动态认证提示信息;
    识别单元,用于通过识别所述用户呈现的人脸姿态,获得所述人脸动态认证提示信息的姿态识别信息;
    发送单元,用于将所述姿态识别信息发送至所述服务器,以使所述服务器在验证所述姿态识别信息与所述人脸动态认证提示信息一致时,确定所述用户通过身份认证。
  14. 根据权利要求13所述的装置,其特征在于,所述识别单元包括:
    人脸信息获得子单元,用于在所述用户根据所述人脸动态认证提示信息呈现人脸姿态时,通过对所述用户进行人脸跟踪,获得人脸跟踪信息;
    人脸信息分析子单元,用于分析所述人脸跟踪信息获得所述用户的姿态识别信息。
  15. 根据权利要求14所述的装置,其特征在于,
    所述人脸信息分析子单元,具体用于当所述人脸跟踪信息为面部关键点位置信息时,通过分析所述面部关键点位置信息获得所述用户的表情姿态识别信息,当所述人脸跟踪信息为人头姿态信息时,通过分析所述人头姿态信息获得所述用户的头部转动识别信息。
  16. 根据权利要求13所述的装置,其特征在于,所述装置还包括:
    获取单元,用于获取所述用户的人脸特征信息,将所述身份认证时获取的人脸特征信息作为所述用户的第一人脸特征信息;
    所述发送单元,还用于向服务器发送所述用户的第一人脸特征信息,以使所述服务器在验证所述第一人脸特征信息与已保存的所述用户的第二人脸特征信息匹配时发送所述人脸动态认证提示信息。
  17. 根据权利要求16所述的装置,其特征在于,
    所述获取单元,还用于在所述用户进行注册时,获取所述用户的人脸特征信息,将所述注册时获取的人脸特征信息作为所述用户的第二人脸特征信息;
    所述发送单元,还用于将所述第二人脸特征信息发送至所述服务器,以使所述服务器保存所述用户的用户名与所述第二人脸特征的对应关系。
  18. 根据权利要求16或17所述的装置,其特征在于,所述获取单元包括:
    人脸跟踪子单元,用于在检测到所述用户的人脸时,对所述用户进行人脸跟踪;
    图像获取子单元,用于在所述人脸跟踪过程中按照预设时间间隔获取人脸图像;
    条件判断子单元,用于判断所述人脸图像是否满足预设的特征提取条件;
    特征提取子单元,用于若满足所述特征提取条件,则从所述人脸图像中提取所述用户的人脸特征信息。
  19. 根据权利要求18所述的装置,其特征在于,所述条件判断子单元包括:
    清晰度判断模块,用于判断所述人脸图像的清晰度是否满足预设的清晰度阈值;
    姿态信息提取模块,用于若满足所述清晰度阈值,则从所述人脸图像中提取人头姿态信息,所述人头姿态信息包括至少一个下述角度:低仰头角度、侧脸角度和偏头角度;
    角度判断模块,用于判断所述人头姿态信息包含的每个角度是否在预设的角度范围内;
    判断确定模块,用于若在预设的角度范围内,则确定所述人脸图像满足所述特征提取条件。
  20. 一种身份认证装置,其特征在于,所述装置包括:
    发送单元,用于在用户进行身份认证时,向终端发送人脸动态认证提示信息;
    接收单元,用于接收所述终端发送的姿态识别信息,所述姿态识别信息为所述终端通过识别所述用户根据所述人脸动态认证提示信息呈现的人脸姿态,获得的姿态识别信息;
    确定单元,用于当验证所述姿态识别信息与所述人脸动态认证提示信息一致时,确定所述用户通过身份认证。
  21. 根据权利要求20所述的装置,其特征在于,
    所述接收单元,还用于接收所述终端发送的所述用户的第一人脸特征信息;
    所述装置还包括:
    验证单元,用于验证所述第一人脸特征信息与已保存的所述用户的第二 人脸特征信息是否匹配;
    所述发送单元,具体用于在匹配时,向终端发送人脸动态认证提示信息。
  22. 根据权利要求21所述的装置,其特征在于,
    所述接收单元,还用于在所述用户进行注册时,接收所述终端发送的所述用户的第二人脸特征信息;
    所述装置还包括:
    保存单元,用于保存所述用户的用户名与所述第二人脸特征信息的对应关系。
  23. The apparatus according to claim 22, wherein the verification unit comprises:
    a feature searching subunit configured to search, according to the user name of the user, the correspondence to obtain the second facial feature information corresponding to the user name;
    a feature comparison subunit configured to compare the first facial feature information with the second facial feature information in a preset comparison manner; and
    a match determination subunit configured to determine that the first facial feature information matches the second facial feature information if the feature comparison value falls within a preset similarity range.
  24. A terminal, comprising:
    a processor; and a memory for storing instructions executable by the processor;
    wherein the processor is configured to:
    receive, when a user performs identity authentication, dynamic face authentication prompt information sent by a server;
    obtain gesture recognition information for the dynamic face authentication prompt information by recognizing a facial gesture presented by the user; and
    send the gesture recognition information to the server, so that the server determines that the user passes identity authentication upon verifying that the gesture recognition information is consistent with the dynamic face authentication prompt information.
  25. A server, comprising:
    a processor; and a memory for storing instructions executable by the processor;
    wherein the processor is configured to:
    send dynamic face authentication prompt information to a terminal when a user performs identity authentication;
    receive gesture recognition information sent by the terminal, the gesture recognition information being obtained by the terminal by recognizing a facial gesture presented by the user according to the dynamic face authentication prompt information; and
    determine that the user passes identity authentication upon verifying that the gesture recognition information is consistent with the dynamic face authentication prompt information.
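Claims 24 and 25 together describe a challenge-response exchange: the server issues a dynamic face authentication prompt, the terminal recognizes the user's gesture, and the server accepts only a gesture matching the outstanding prompt. A minimal server-side sketch (the prompt vocabulary and single-use bookkeeping are illustrative assumptions, not details from the claims):

```python
import random

GESTURE_PROMPTS = ["blink", "open_mouth", "turn_left", "turn_right"]

class AuthServer:
    """Server side of the dynamic face authentication exchange:
    issue a random gesture prompt, then check the gesture the
    terminal recognized against the outstanding prompt."""

    def __init__(self):
        self._pending = {}

    def issue_prompt(self, username):
        # Choose an unpredictable challenge and remember it per user.
        prompt = random.choice(GESTURE_PROMPTS)
        self._pending[username] = prompt
        return prompt

    def verify(self, username, recognized_gesture):
        # Consume the challenge so it can be answered at most once.
        expected = self._pending.pop(username, None)
        return expected is not None and expected == recognized_gesture
```

Because each prompt is randomly chosen and consumed on verification, a replayed recording of an earlier session cannot answer a fresh challenge, which is the liveness property the claimed scheme aims at.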
PCT/CN2015/088215 2014-09-03 2015-08-27 Identity authentication method and apparatus, terminal and server WO2016034069A1 (zh)

Priority Applications (7)

Application Number Priority Date Filing Date Title
JP2017512314A JP6820062B2 (ja) 2014-09-03 2015-08-27 Identity authentication method and apparatus, terminal and server
SG11201701497SA SG11201701497SA (en) 2014-09-03 2015-08-27 Identity authentication method and apparatus, terminal and server
KR1020177005848A KR101997371B1 (ko) 2014-09-03 2015-08-27 Identity authentication method and apparatus, terminal and server
EP15838136.8A EP3190534B1 (en) 2014-09-03 2015-08-27 Identity authentication method and apparatus, terminal and server
EP19172346.9A EP3540621B1 (en) 2014-09-03 2015-08-27 Identity authentication method and apparatus, terminal and server
PL19172346T PL3540621T3 (pl) 2014-09-03 2015-08-27 Identity authentication method and apparatus, terminal and server
US15/448,534 US10601821B2 (en) 2014-09-03 2017-03-02 Identity authentication method and apparatus, terminal and server

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410446657.0 2014-09-03
CN201410446657.0A CN105468950B (zh) 2014-09-03 2014-09-03 Identity authentication method and apparatus, terminal and server

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/448,534 Continuation US10601821B2 (en) 2014-09-03 2017-03-02 Identity authentication method and apparatus, terminal and server

Publications (1)

Publication Number Publication Date
WO2016034069A1 true WO2016034069A1 (zh) 2016-03-10

Family

ID=55439122

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/088215 WO2016034069A1 (zh) 2014-09-03 2015-08-27 Identity authentication method and apparatus, terminal and server

Country Status (10)

Country Link
US (1) US10601821B2 (zh)
EP (2) EP3540621B1 (zh)
JP (1) JP6820062B2 (zh)
KR (1) KR101997371B1 (zh)
CN (2) CN105468950B (zh)
ES (1) ES2810012T3 (zh)
HK (1) HK1221795A1 (zh)
PL (1) PL3540621T3 (zh)
SG (2) SG10201901818UA (zh)
WO (1) WO2016034069A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110460993A (zh) * 2019-08-21 2019-11-15 Guangzhou University Authentication method and system based on gesture verification

Families Citing this family (56)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10929550B2 (en) 2015-04-30 2021-02-23 Masaaki Tokuyama Terminal device and computer program
US10452823B2 (en) * 2015-04-30 2019-10-22 Masaaki Tokuyama Terminal device and computer program
CN106302330B (zh) * 2015-05-21 2021-01-05 Tencent Technology (Shenzhen) Co., Ltd. Identity verification method, apparatus and system
CN105893828A (zh) * 2016-05-05 2016-08-24 Nanjing Zhenshi Intelligent Technology Co., Ltd. Face verification driving-test system and method based on a mobile terminal
CN107644190A (zh) * 2016-07-20 2018-01-30 Beijing Megvii Technology Co., Ltd. Pedestrian monitoring method and apparatus
CN106228133B (zh) * 2016-07-21 2020-04-10 Beijing Megvii Technology Co., Ltd. User verification method and apparatus
CN107819807A (zh) * 2016-09-14 2018-03-20 Tencent Technology (Shenzhen) Co., Ltd. Information verification method, apparatus and device
CN108629260B (zh) * 2017-03-17 2022-02-08 Beijing Megvii Technology Co., Ltd. Liveness verification method and apparatus, and storage medium
CN106971163A (zh) * 2017-03-28 2017-07-21 Shenzhen Xiaolianbao Technology Co., Ltd. Method, apparatus and system for identifying an authorized pick-up person
CN109844747A (zh) * 2017-04-01 2019-06-04 SZ DJI Technology Co., Ltd. Identity authentication server, identity authentication terminal, and identity authentication system and method
CN107045744A (zh) * 2017-04-14 2017-08-15 Terminus (Beijing) Technology Co., Ltd. Intelligent villa access control authentication method and system
CN107066983B (zh) * 2017-04-20 2022-08-09 Tencent Technology (Shanghai) Co., Ltd. Identity verification method and apparatus
US11080316B1 (en) * 2017-05-26 2021-08-03 Amazon Technologies, Inc. Context-inclusive face clustering
CN107707738A (zh) 2017-09-07 2018-02-16 Vivo Mobile Communication Co., Ltd. Face recognition method and mobile terminal
CN108229120B (zh) * 2017-09-07 2020-07-24 Beijing SenseTime Technology Development Co., Ltd. Face unlocking and associated information registration method and apparatus, device, program, and medium
CN107562204B (zh) * 2017-09-14 2021-10-01 Shenzhen TCL New Technology Co., Ltd. Television interaction method, television, and computer-readable storage medium
CN109670386A (zh) * 2017-10-16 2019-04-23 Shenzhen Taishou Intelligent Technology Co., Ltd. Face recognition method and terminal
CN107818253B (zh) * 2017-10-18 2020-07-17 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Face template data entry control method and related products
CN107993174A (zh) * 2017-11-06 2018-05-04 GuoZhengTong Technology Co., Ltd. Multifunctional convenience service system
US10594690B2 (en) * 2017-11-16 2020-03-17 Bank Of America Corporation Authenticating access to a computing resource using facial recognition based on involuntary facial movement
CN107944380B (zh) * 2017-11-20 2022-11-29 Tencent Technology (Shenzhen) Co., Ltd. Identity recognition method, apparatus and storage device
CN107798548A (zh) * 2017-11-27 2018-03-13 Gan Ping'an Purchasing method and purchasing system
US10924476B2 (en) * 2017-11-29 2021-02-16 Ncr Corporation Security gesture authentication
CN108229937A (zh) * 2017-12-20 2018-06-29 Alibaba Group Holding Limited Augmented-reality-based virtual object allocation method and apparatus
CN108171026A (zh) * 2018-01-19 2018-06-15 Baidu Online Network Technology (Beijing) Co., Ltd. Authentication method and apparatus
US11366884B2 (en) 2018-02-14 2022-06-21 American Express Travel Related Services Company, Inc. Authentication challenges based on fraud initiation requests
JP6859970B2 (ja) * 2018-03-09 2021-04-14 Kyocera Document Solutions Inc. Login support system
CN108734084A (zh) * 2018-03-21 2018-11-02 Baidu Online Network Technology (Beijing) Co., Ltd. Face registration method and apparatus
CN108446664A (zh) * 2018-03-30 2018-08-24 Guangdong Huadian Wangwei Information Technology Co., Ltd. Identity confirmation method and apparatus based on face recognition
CN108712381A (zh) * 2018-04-16 2018-10-26 Mobvoi Information Technology Co., Ltd. Identity verification method and apparatus
US10733676B2 (en) * 2018-05-17 2020-08-04 Coupa Software Incorporated Automatic generation of expense data using facial recognition in digitally captured photographic images
CN110555330A (zh) * 2018-05-30 2019-12-10 Baidu Online Network Technology (Beijing) Co., Ltd. Image-based face-signing method and apparatus, computer device, and storage medium
CN109165614A (zh) * 2018-08-31 2019-01-08 Hangzhou Xingkai Technology Co., Ltd. Face recognition system based on a 3D camera
CN109065058B (zh) * 2018-09-30 2024-03-15 Hefei Xinsheng Optoelectronics Technology Co., Ltd. Voice communication method, apparatus and system
US10956548B2 (en) * 2018-10-09 2021-03-23 Lenovo (Singapore) Pte. Ltd. User authentication via emotion detection
CN111104658A (zh) * 2018-10-25 2020-05-05 Beijing Didi Infinity Technology and Development Co., Ltd. Registration method and apparatus, and authentication method and apparatus
CN109376684B (zh) 2018-11-13 2021-04-06 Guangzhou Baiguoyuan Information Technology Co., Ltd. Facial key point detection method and apparatus, computer device, and storage medium
JP7054847B2 (ja) * 2019-03-04 2022-04-15 Panasonic Intellectual Property Management Co., Ltd. Face authentication registration apparatus and face authentication registration method
US10860705B1 (en) 2019-05-16 2020-12-08 Capital One Services, Llc Augmented reality generated human challenge
KR20210009596A (ko) * 2019-07-17 2021-01-27 LG Electronics Inc. Intelligent speech recognition method, speech recognition apparatus, and intelligent computing device
CN110490106B (zh) * 2019-08-06 2022-05-03 Wanyi Technology Co., Ltd. Information management method and related device
CN110611734A (zh) * 2019-08-08 2019-12-24 Shenzhen Transsion Holdings Co., Ltd. Interaction method and terminal
CN110765436A (zh) * 2019-10-26 2020-02-07 Fujian Weizhi Geographic Information Science Research Institute Real estate information analysis and management system and method
CN111144896A (zh) * 2019-12-16 2020-05-12 Bank of China Co., Ltd. Identity verification method and apparatus
US12033428B2 (en) 2020-02-04 2024-07-09 Grabtaxi Holdings Pte. Ltd. Method, server and communication system of verifying user for transportation purposes
KR102531579B1 (ko) * 2020-11-18 2023-05-16 Kovan Co., Ltd. Simple payment method using two-factor authentication
CN112487389A (zh) * 2020-12-16 2021-03-12 ZKTeco Co., Ltd. Identity authentication method, apparatus and device
CN112383571B (zh) * 2021-01-12 2021-06-04 Zhejiang Zhengyuan Zhihui Technology Co., Ltd. Login management system based on face recognition big data
CN115131904A (zh) * 2021-03-25 2022-09-30 China Mobile Group Anhui Co., Ltd. Access control method, apparatus, device and computer storage medium
CN113128969A (zh) * 2021-04-26 2021-07-16 Guangzhou Taiwei Biotechnology Co., Ltd. Daily use management system for chromatographic columns
CN113255529A (zh) * 2021-05-28 2021-08-13 Alipay (Hangzhou) Information Technology Co., Ltd. Biometric identification method, apparatus and device
CN113696851B (zh) * 2021-08-27 2023-04-28 Shanghai Xianta Intelligent Technology Co., Ltd. Vehicle control method, apparatus, device and medium based on gestures outside the vehicle
CN114268453B (zh) * 2021-11-17 2024-07-12 China Southern Power Grid Co., Ltd. Power system unlocking method and apparatus, computer device, and storage medium
WO2023159462A1 (zh) * 2022-02-25 2023-08-31 Baiguoyuan Technology (Singapore) Co., Ltd. Identity authentication method and apparatus, terminal, storage medium, and program product
CN114760074B (zh) * 2022-06-13 2022-09-02 Zhongguang (Shaoxing Shangyu) Cable Information Network Co., Ltd. Identity authentication method and system based on big data security
CN118196876B (zh) * 2024-05-20 2024-08-16 Southeast University Virtual identity authentication apparatus and authentication method thereof

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101075868A (zh) * 2006-05-19 2007-11-21 Huawei Technologies Co., Ltd. Remote identity authentication system, terminal, server and method
CN102385703A (zh) * 2010-08-27 2012-03-21 Beijing Vimicro Electronics Co., Ltd. Face-based identity authentication method and system
CN103259796A (zh) * 2013-05-15 2013-08-21 Jinshuo Macau Offshore Commercial Services Co., Ltd. Authentication system and method
CN104298909A (zh) * 2013-07-19 2015-01-21 Fu Tai Hua Industry (Shenzhen) Co., Ltd. Electronic device, and identity verification system and method
CN104518877A (zh) * 2013-10-08 2015-04-15 Hongfujin Precision Industry (Shenzhen) Co., Ltd. Identity authentication system and method

Family Cites Families (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5572261A (en) * 1995-06-07 1996-11-05 Cooper; J. Carl Automatic audio to video timing measurement device and method
JP2000306090A (ja) * 1999-04-20 2000-11-02 Ntt Data Corp Personal authentication apparatus, method, and recording medium
SG91841A1 (en) * 1999-11-03 2002-10-15 Kent Ridge Digital Labs Face direction estimation using a single gray-level image
JP2003216955A (ja) * 2002-01-23 2003-07-31 Sharp Corp Gesture recognition method, gesture recognition apparatus, dialogue apparatus, and recording medium storing a gesture recognition program
US9412142B2 (en) * 2002-08-23 2016-08-09 Federal Law Enforcement Development Services, Inc. Intelligent observation and identification database system
JP2004110813A (ja) * 2002-08-30 2004-04-08 Victor Co Of Japan Ltd Person authentication apparatus
WO2005093637A1 (de) * 2004-03-29 2005-10-06 Hoffmann Andre Method and system for identification, verification, recognition and re-recognition
US7634106B2 (en) * 2004-09-22 2009-12-15 Fujifilm Corporation Synthesized image generation method, synthesized image generation apparatus, and synthesized image generation program
US8370639B2 (en) * 2005-06-16 2013-02-05 Sensible Vision, Inc. System and method for providing secure access to an electronic device using continuous facial biometrics
KR100725771B1 (ko) * 2005-09-23 2007-06-08 Samsung Electronics Co., Ltd. Face recognition and authentication apparatus and method for a portable terminal
KR100777922B1 (ko) * 2006-02-06 2007-11-21 SK Telecom Co., Ltd. Personal authentication and electronic signature system using image recognition, and method therefor
JP4367424B2 (ja) * 2006-02-21 2009-11-18 Oki Electric Industry Co., Ltd. Personal identification apparatus and personal identification method
WO2007105193A1 (en) * 2006-03-12 2007-09-20 Nice Systems Ltd. Apparatus and method for target oriented law enforcement interception and analysis
JP5219184B2 (ja) * 2007-04-24 2013-06-26 Nintendo Co., Ltd. Training program, training apparatus, training system, and training method
JP4999570B2 (ja) * 2007-06-18 2012-08-15 Canon Inc. Facial expression recognition apparatus and method, and imaging apparatus
KR20100062413A (ko) * 2008-12-02 2010-06-10 Electronics and Telecommunications Research Institute Speech recognition apparatus for a telematics device and method thereof
JP2010231350A (ja) * 2009-03-26 2010-10-14 Toshiba Corp Person identification apparatus, program and method thereof
KR101092820B1 (ko) * 2009-09-22 2011-12-12 Hyundai Motor Company Multimodal interface system integrating lip reading and speech recognition
WO2011065952A1 (en) * 2009-11-30 2011-06-03 Hewlett-Packard Development Company, L.P. Face recognition apparatus and methods
TWI411935B (zh) * 2009-12-25 2013-10-11 Primax Electronics Ltd System and method for recognizing user gestures with an image capture device to generate control signals
US8818025B2 (en) * 2010-08-23 2014-08-26 Nokia Corporation Method and apparatus for recognizing objects in media content
US8897500B2 (en) * 2011-05-05 2014-11-25 At&T Intellectual Property I, L.P. System and method for dynamic facial features for speaker recognition
CN102298443B (zh) * 2011-06-24 2013-09-25 South China University of Technology Smart home voice control system combined with a video channel, and control method thereof
US9082235B2 (en) * 2011-07-12 2015-07-14 Microsoft Technology Licensing, Llc Using facial data for device authentication or subject identification
CN102324035A (zh) * 2011-08-19 2012-01-18 Guangdong Haobangshou Electronic Technology Co., Ltd. Method and system for applying lip-shape-assisted speech recognition to in-vehicle navigation
KR20130022607A (ko) * 2011-08-25 2013-03-07 Samsung Electronics Co., Ltd. Speech recognition apparatus using lip images and speech recognition method thereof
KR101242390B1 (ko) * 2011-12-29 2013-03-12 Intel Corporation Method, apparatus, and computer-readable recording medium for authenticating a user
US9083532B2 (en) * 2012-03-06 2015-07-14 Ebay Inc. Physiological response PIN entry
US8687880B2 (en) * 2012-03-20 2014-04-01 Microsoft Corporation Real time head pose estimation
EP2871640B1 (en) * 2012-07-09 2021-01-06 LG Electronics, Inc. Speech recognition apparatus and method
CN102932212A (zh) * 2012-10-12 2013-02-13 South China University of Technology Smart home control system based on multi-channel interaction
US20140118257A1 (en) * 2012-10-29 2014-05-01 Amazon Technologies, Inc. Gesture detection systems
CN103036680A (zh) * 2012-12-10 2013-04-10 Computer Network Information Center, Chinese Academy of Sciences Domain name authentication system and method based on biometric recognition
US9310977B2 (en) * 2012-12-14 2016-04-12 Biscotti Inc. Mobile presence detection
JP6132232B2 (ja) * 2013-02-01 2017-05-24 Panasonic Intellectual Property Management Co., Ltd. Makeup support apparatus, makeup support system, and makeup support method
US20140341444A1 (en) * 2013-05-14 2014-11-20 Tencent Technology (Shenzhen) Company Limited Systems and Methods for User Login
JP6209067B2 (ja) * 2013-11-21 2017-10-04 NTT Docomo, Inc. Image recognition apparatus and image recognition method
CN103593598B (zh) * 2013-11-25 2016-09-21 Shanghai Junyu Digital Technology Co., Ltd. Online user authentication method and system based on liveness detection and face recognition
CN103634120A (zh) * 2013-12-18 2014-03-12 Shanghai Digital Certificate Certification Center Co., Ltd. Real-name authentication method and system based on face recognition
US9866820B1 (en) * 2014-07-01 2018-01-09 Amazon Technologies, Inc. Online calibration of cameras

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3190534A4 *

Also Published As

Publication number Publication date
SG11201701497SA (en) 2017-03-30
ES2810012T3 (es) 2021-03-08
EP3190534B1 (en) 2019-06-26
CN111898108A (zh) 2020-11-06
EP3540621A1 (en) 2019-09-18
JP6820062B2 (ja) 2021-01-27
CN111898108B (zh) 2024-06-04
EP3190534A4 (en) 2018-03-21
KR20170047255A (ko) 2017-05-04
US10601821B2 (en) 2020-03-24
US20170180362A1 (en) 2017-06-22
CN105468950B (zh) 2020-06-30
HK1221795A1 (zh) 2017-06-09
CN105468950A (zh) 2016-04-06
JP2017530457A (ja) 2017-10-12
SG10201901818UA (en) 2019-03-28
PL3540621T3 (pl) 2021-05-17
EP3190534A1 (en) 2017-07-12
EP3540621B1 (en) 2020-07-29
KR101997371B1 (ko) 2019-07-05

Similar Documents

Publication Publication Date Title
WO2016034069A1 (zh) Identity authentication method and apparatus, terminal and server
CN108804884B (zh) Identity authentication method and apparatus, and computer storage medium
WO2017198014A1 (zh) Identity authentication method and apparatus
US20190012450A1 (en) Biometric-based authentication method, apparatus and system
CN104298909B (zh) Electronic device, and identity verification system and method
US9177131B2 (en) User authentication method and apparatus based on audio and video data
US9122913B2 (en) Method for logging a user in to a mobile device
US20170046508A1 (en) Biometric authentication using gesture
US20160269411A1 (en) System and Method for Anonymous Biometric Access Control
TWI727329B (zh) Anti-spoofing system and method for providing selective access to resources based on a deep learning method
WO2017193826A1 (zh) Cloud desktop login verification method, cloud desktop control system, and client
US20080013794A1 (en) Feature Extraction Algorithm for Automatic Ear Recognition
US8983207B1 (en) Mitigating replay attacks using multiple-image authentication
US9792421B1 (en) Secure storage of fingerprint related elements
KR101724971B1 (ko) Face recognition system using a wide-angle camera and face recognition method using the same
CN105407069B (zh) Liveness authentication method and apparatus, client device, and server
US10547610B1 (en) Age adapted biometric authentication
WO2016062200A1 (zh) Fingerprint authentication method and apparatus, and server
WO2017041358A1 (zh) User identity recognition method and apparatus, and mobile terminal
CN107370769B (zh) User authentication method and system
WO2016058540A1 (zh) Identity verification method and apparatus, and storage medium
KR101718244B1 (ko) Wide-angle image processing apparatus and method for face recognition
CN113344586B (zh) Face recognition payment system for mobile terminals
CN112149085A (zh) Game account login method and apparatus based on user biometric features

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15838136

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20177005848

Country of ref document: KR

Kind code of ref document: A

Ref document number: 2017512314

Country of ref document: JP

Kind code of ref document: A

REEP Request for entry into the european phase

Ref document number: 2015838136

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2015838136

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE