WO2018192406A1 - Identity verification method and apparatus, and storage medium - Google Patents


Publication number
WO2018192406A1
WO2018192406A1 (PCT/CN2018/082803; CN2018082803W)
Authority
WO
WIPO (PCT)
Prior art keywords
face image
target
verified
feature information
preset
Prior art date
Application number
PCT/CN2018/082803
Other languages
English (en)
Chinese (zh)
Inventor
梁晓晴
梁亦聪
丁守鸿
刘畅
陶芝伟
周可菁
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 filed Critical 腾讯科技(深圳)有限公司
Publication of WO2018192406A1 publication Critical patent/WO2018192406A1/fr

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation

Definitions

  • The embodiments of the present invention relate to the field of computer technologies, and in particular to an identity verification method and apparatus, and a storage medium.
  • The invention provides an identity verification method and device, and a storage medium.
  • An identity verification method, the method being performed by a network device, including: providing action prompt information to an object to be verified; acquiring video stream data of the object to be verified, the video stream data being continuous frames of face images collected while the object to be verified performs the prompted action; determining a target face image according to the video stream data; determining, according to the target face image, the credibility that the object to be verified is a living body; and authenticating the object to be verified according to the credibility and the target face image.
  • An identity verification device, comprising: at least one memory and at least one processor, wherein the at least one memory stores at least one instruction, the at least one instruction being executed by the at least one processor to implement a method of: providing action prompt information to an object to be verified; acquiring video stream data of the object to be verified, the video stream data being continuous frames of face images collected while the object to be verified acts on the action prompt information; determining a target face image according to the video stream data; determining, according to the target face image, the credibility that the object to be verified is a living body; and authenticating the object to be verified according to the credibility and the target face image.
  • A computer storage medium having stored therein computer-readable instructions or programs, the computer-readable instructions or programs being executed by a processor to implement the above identity verification method.
  • FIG. 1 is a schematic flowchart of an identity verification method according to an embodiment of the present invention.
  • FIG. 2a is a schematic flowchart of an identity verification method according to an embodiment of the present invention.
  • FIG. 2b is a schematic flowchart of user identity verification in a conference sign-in system according to an embodiment of the present invention.
  • FIG. 3a is a schematic structural diagram of an identity verification apparatus according to an embodiment of the present invention.
  • FIG. 3b is a schematic structural diagram of another identity verification apparatus according to an embodiment of the present disclosure.
  • FIG. 3c is a schematic structural diagram of a verification submodule according to an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of a network device according to an embodiment of the present invention.
  • The embodiments of the invention provide an identity verification method, device, and system, which are described separately below. It should be noted that the numbering of the following embodiments is not intended to limit their preferred order.
  • An identity verification device is provided, which may be implemented as an independent entity or integrated into a network device, such as a terminal or a server.
  • An identity verification method includes: providing action prompt information to an object to be verified; acquiring video stream data of the object to be verified, where the video stream data is continuous frames of face images collected while the object to be verified performs the corresponding action according to the action prompt information; determining a target face image according to the video stream data; determining, according to the target face image, the credibility that the object to be verified is a living body; and then authenticating the object to be verified according to the credibility and the target face image.
  • the specific process of the identity verification method can be as follows:
  • The action prompt information is mainly used to prompt the user to perform certain specified actions, such as shaking the head or blinking, and may be displayed through a prompt box or a prompt interface.
  • For example, when the user clicks a button on the interactive interface, such as "face-scan login", the provision of the action prompt information may be triggered.
  • The video stream data may be a video captured within a predetermined time (for example, one minute); it mainly contains image data of the user's face and may be collected by a video capture device such as a camera.
  • step S103 may specifically include:
  • The key points in the key point set are mainly feature points in the face image, that is, points where the image gray value changes drastically, or points with large curvature on image edges (i.e., intersections of two edges), such as the eyes, eyebrows, nose, mouth, and facial contour.
  • Key points can be extracted with statistical shape and appearance models such as the ASM (Active Shape Model) or the AAM (Active Appearance Model).
  • The location information is mainly two-dimensional coordinates relative to a certain reference coordinate system, such as the face acquisition interface displayed by the terminal.
  • The motion trajectory mainly refers to the route traced by the entire face, or a partial region of it, from the start of the motion to its end while the object to be verified performs the corresponding action according to the action prompt information, such as a blink trajectory or a head-shake trajectory.
  • Specifically, the position change information of some important key points (such as the eyes, mouth corners, cheek edges, and nose) across frames, together with the angles and relative distances between these key points, can be used to determine a three-dimensional face model of the object to be verified and obtain the three-dimensional coordinates of each key point; the motion trajectory is then determined according to the three-dimensional coordinates of any key point.
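  • The trajectory-construction step lends itself to a small sketch. The patent does not give a concrete formula, so the hypothetical snippet below estimates a per-frame yaw (deflection) angle from three 2-D key points and collects the angles into a trajectory; the function names, key-point names, and the simple asin geometry are illustrative assumptions, not the patented method.

```python
import math

def estimate_yaw(left_eye, right_eye, nose_tip):
    """Very rough yaw estimate (degrees) from three 2-D key points.

    When the head turns, the nose tip shifts horizontally relative to the
    midpoint between the eyes; the ratio of that shift to the inter-eye
    distance approximates the sine of the yaw angle.
    """
    mid_x = (left_eye[0] + right_eye[0]) / 2.0
    eye_dist = abs(right_eye[0] - left_eye[0])
    offset = (nose_tip[0] - mid_x) / eye_dist
    offset = max(-1.0, min(1.0, offset))  # clamp so asin is defined
    return math.degrees(math.asin(offset))

def motion_trajectory(frames):
    """Map per-frame key points to a list of yaw angles (the trajectory)."""
    return [estimate_yaw(f["left_eye"], f["right_eye"], f["nose_tip"])
            for f in frames]
```

A frontal face yields an angle near 0°, and a turned head yields a growing angle, so the sequence of per-frame angles plays the role of the motion trajectory discussed above.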
  • steps 1-3 may specifically include:
  • the preset track point is a preset position point in the motion track.
  • the preset condition is mainly determined according to the characteristics of the human body motion.
  • For example, the preset condition may be set as: the motion track includes a plurality of specified track points, such as the 5° deflection angle point, the 15° deflection angle point, and the 30° deflection angle point; or the preset condition may be set as: the number of track points in the motion track reaches a certain value, for example, 10.
  • The preset track point may be determined according to actual needs. For example, considering that the more key points are visible in the face image, the more accurate the subsequent conclusions, the 0° deflection angle point may be selected as the preset track point, that is, the frontal face image serves as the target face image.
  • In practice, the preset track point may be a small interval range containing the 0° deflection angle point, rather than a single point.
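  • A minimal sketch of the preset-condition check, assuming the trajectory is represented as a sequence of deflection angles in degrees; the required angles, the tolerance window, and the function name are illustrative assumptions rather than the patent's exact rule:

```python
def meets_preset_condition(trajectory, required_angles=(5, 15, 30), tol=2.0,
                           min_points=None):
    """True when the trajectory passes near every required deflection angle
    point (within +/- tol degrees); optionally also require a minimum
    number of track points, the alternative condition mentioned above."""
    if min_points is not None and len(trajectory) < min_points:
        return False
    return all(any(abs(a - r) <= tol for a in trajectory)
               for r in required_angles)
```

A trajectory such as [0, 4, 9, 16, 31] passes because it comes within tolerance of 5°, 15°, and 30°, while one that stops at 15° fails, matching the photo-shake rejection logic described below.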
  • the unknown live users mainly refer to live users who are not registered or authenticated on the system platform.
  • The virtual users mainly refer to pseudo-live users forged by attackers using photos, videos, or head models of legitimate users (that is, screen remakes). Specifically, when the motion trajectory meets the specified condition, it indicates that the target face image is not a remake of a single photo or of multiple photos; in this case, it is necessary to further confirm, according to the image texture features, whether the target face image is a video remake or a forged head model.
  • If the motion trajectory does not meet the specified condition, for example it contains only two or three track points, it indicates that the object to be verified is most likely a pseudo-live user forged by shaking a single photo or several photos of the user; in this case, the object can be directly determined to be an illegal user, and the user is prompted to perform detection again.
  • The credibility mainly refers to the degree to which the object to be verified can be believed to be a living body, and may be expressed as a probability value or a score.
  • The texture of an image remade from a screen differs from that of a normal image; therefore, the credibility of the object to be verified may be determined by performing feature analysis on the target face image. That is, step S104 may specifically include:
  • The target key points mainly include feature points whose relative positions are stable and which have distinct distinguishing features, such as the left and right pupils, the left and right mouth corners, and the nose tip; they may be determined according to actual needs.
  • the foregoing step 2-2 may specifically include:
  • The target face image is similarity-transformed according to the Euclidean distance to obtain a normalized image.
  • The preset positions may be obtained from a standard face model; here, the Euclidean distance refers to the distance between each preset position and the position information of the corresponding target key point.
  • The similarity transformation may include operations such as rotation, translation, and scaling. An image before and after a similarity transformation contains the same figures, that is, the shapes of the contained figures do not change.
  • Through the similarity transformation, the distance between each target key point and its preset position can be minimized, that is, the target face image is normalized to the standard face model.
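  • The normalization step above, finding the rotation, translation, and scale that minimize the Euclidean distance between the target key points and their preset positions, is the classical least-squares similarity alignment. A sketch using Umeyama's closed-form solution with NumPy, assuming 2-D points; the function names are ours, not the patent's:

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity transform (scale, rotation, translation)
    mapping the src key points onto the dst preset positions."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))     # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    var_s = (src_c ** 2).sum() / len(src)
    scale = np.trace(np.diag(S) @ D) / var_s
    t = mu_d - scale * R @ mu_s
    return scale, R, t

def warp(points, scale, R, t):
    """Apply the recovered similarity transform to a point set."""
    return (scale * (R @ np.asarray(points, float).T)).T + t
```

Applying `warp` with the recovered parameters moves the target key points as close as possible (in the least-squares sense) to the standard-model positions, which is exactly the normalization described above.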
  • The preset classification model mainly refers to a trained deep neural network, which can be obtained by training deep learning models such as a CNN (Convolutional Neural Network). A CNN is a multi-layer neural network composed of an input layer, convolutional layers, pooling layers, fully connected layers, and an output layer; it supports feeding a multi-dimensional image directly into the network, thereby avoiding data reconstruction during feature extraction and classification and greatly reducing the complexity of image processing.
  • the identity verification method may further include:
  • the convolutional neural network is trained according to the preset image set and category information to obtain a preset classification model.
  • The preset face image set may include screen-remake photo samples (negative samples) and normal photo samples (positive samples); the specific sample size can be determined according to actual needs.
  • This category information is usually manually labeled, which can include both remake photos and normal photos.
  • the training process mainly includes two stages: a forward propagation phase and a backward propagation phase.
  • In the forward propagation phase, each sample X_i (that is, a preset face image) can be input into the n-layer convolutional neural network, giving the actual output
  • O_i = F_n( ... F_2( F_1( X_i W^(1) ) W^(2) ) ... W^(n) )
  • where i is a positive integer, W^(k) is the weight of the k-th layer, and F is an activation function (such as the sigmoid function or the hyperbolic tangent function).
  • By inputting the preset face image set into the convolutional neural network, a weight matrix can be obtained. Then, in the backward propagation phase, the difference between each actual output O_i and the ideal output Y_i can be calculated, and the weight matrix is adjusted by back-propagation according to an error-minimization method, where Y_i is obtained from the category information of the sample X_i. For example, if the sample X_i is a normal photo, Y_i can be set to 1; if the sample X_i is a remake photo, Y_i can be set to 0. Finally, the trained convolutional neural network, which is the preset classification model, is determined according to the adjusted weight matrix.
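  • The two training phases can be sketched end to end. The toy fully connected network below follows the forward formula above with F = sigmoid and adjusts the weight matrices by gradient descent on the squared error between O_i and Y_i, one common choice of "error-minimization method"; the layer sizes, learning rate, and toy data are illustrative assumptions, not the patent's training setup:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, weights):
    """Forward phase: O = F_n(...F_2(F_1(X W^(1)) W^(2))... W^(n))."""
    acts = [X]
    for W in weights:
        acts.append(sigmoid(acts[-1] @ W))
    return acts

def backward(acts, Y, weights, lr=0.5):
    """Backward phase: one gradient step on the squared error ||O - Y||^2."""
    delta = (acts[-1] - Y) * acts[-1] * (1.0 - acts[-1])
    for k in range(len(weights) - 1, -1, -1):
        grad = acts[k].T @ delta
        if k > 0:
            delta = (delta @ weights[k].T) * acts[k] * (1.0 - acts[k])
        weights[k] -= lr * grad

# Toy run: pretend label 1 means "normal photo" and 0 means "screen remake".
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
Y = np.array([[0.], [0.], [0.], [1.]])
weights = [rng.normal(0, 1, (2, 4)), rng.normal(0, 1, (4, 1))]
loss0 = float(((forward(X, weights)[-1] - Y) ** 2).sum())
for _ in range(2000):
    backward(forward(X, weights), Y, weights)
loss1 = float(((forward(X, weights)[-1] - Y) ** 2).sum())
```

The loop alternates the forward and backward phases until the squared error shrinks, mirroring the two-stage training described above; a real implementation would use convolutional layers and a large remake/normal photo set.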
  • S105 Perform identity verification on the object to be verified according to the credibility and the target face image.
  • step S105 may specifically include:
  • When the credibility is greater than a first preset threshold, the object to be verified is authenticated according to the target face image.
  • The first preset threshold may be determined according to the actual application domain. For example, when the identity verification method is mainly applied to a financial domain with high security requirements, the first preset threshold may be set relatively large, such as 0.9; when it is mainly applied to a field with relatively low security requirements, such as a conference sign-in system, the first preset threshold may be set relatively small, such as 0.5.
  • When the calculated credibility is less than or equal to the first preset threshold, it indicates that the object to be verified is likely a virtual user forged by a screen remake; at this time, in order to reduce the false positive rate, the user may be prompted to perform face image acquisition again.
  • When the calculated credibility is greater than the first preset threshold, it indicates that the object to be verified is very likely a living user; in this case, it is necessary to further analyze whether that living user is an unknown living user or a registered or authenticated living user. That is, the above step of "authenticating the object to be verified according to the target face image" may specifically include:
  • The face regions mainly refer to the facial features, such as the eyes, mouth, nose, eyebrows, and cheeks; the target face image is segmented into these regions mainly according to the relative positional relationships between the key points.
  • step 3-2 may specifically include:
  • the plurality of pieces of feature information are reorganized to obtain target feature information.
  • Specifically, feature extraction may be performed on each face region through a deep learning network, and the extracted features recombined to obtain a feature string (that is, the target feature information). Since the geometric models corresponding to different face regions differ, different deep learning networks can be used for different face regions to improve extraction efficiency and accuracy.
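  • The "extract per-region features, then recombine" data flow can be sketched as follows. The real system would run a trained deep network per region, so the coarse intensity histogram here is only a stand-in feature extractor used to show the plumbing; the function names and the 8-bin choice are our assumptions:

```python
import numpy as np

def extract_region_feature(region_pixels, dim=8):
    """Stand-in for a per-region deep network: a normalized 8-bin
    intensity histogram over the region's pixel values (0..255)."""
    hist, _ = np.histogram(region_pixels, bins=dim, range=(0, 256))
    return hist / max(hist.sum(), 1)

def target_feature_info(regions):
    """Extract one feature vector per face region (eyes, mouth, nose, ...)
    and concatenate ("recombine") them into the target feature vector."""
    return np.concatenate([extract_region_feature(r) for r in regions])
```

Each region contributes a fixed-length block, so the concatenated vector has a stable layout that the later similarity comparison can rely on.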
  • the foregoing step 3-3 may specifically include:
  • 3-3-1 Acquire a stored feature information set, and a user identifier corresponding to each stored feature information in the stored feature information set.
  • the user identifier is a unique identifier of the user, which may include a registered account.
  • the stored feature information set includes at least one stored feature information, and the different stored feature information is obtained according to face images of different registered users.
  • the identity verification method may further include:
  • the user registration request carries a user identifier to be registered and a face image to be registered;
  • the to-be-registered feature information is associated with the to-be-registered user identifier, and the to-be-registered feature information is added to the stored feature information set.
  • the to-be-registered face image may be processed by using the methods involved in steps 3-1 and 3-2 to obtain the to-be-registered feature information.
  • The user registration request may be triggered automatically, for example generated after the user's face image is collected, or generated by the user, for example when the user clicks the "Finish" button; the triggering manner may be determined according to actual needs.
  • the face image to be registered may be collected on the spot, or may be uploaded after the user has taken the image in advance.
  • The preset algorithm may include the joint Bayesian algorithm, which is a statistical classification method.
  • Its main idea is to treat a face as composed of two parts: one part is the variation between different persons,
  • and the other part is the individual's own variation (such as changes in expression); the overall similarity is calculated based on these two parts.
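  • A compact sketch of the joint Bayesian idea: given inter-person and intra-person covariances S_mu and S_eps (assumed already learned; the toy inputs below are ours), the similarity of two feature vectors is the log-likelihood ratio of the "same person" hypothesis against the "different persons" hypothesis, with constants that cancel in the ratio dropped:

```python
import numpy as np

def joint_bayesian_score(x1, x2, S_mu, S_eps):
    """log P(x1, x2 | same person) - log P(x1, x2 | different persons)
    under the model x = mu + eps with mu ~ N(0, S_mu) (between-person
    variation) and eps ~ N(0, S_eps) (within-person variation)."""
    d = len(x1)
    T = S_mu + S_eps                      # marginal covariance of one face
    same = np.block([[T, S_mu], [S_mu, T]])
    diff = np.block([[T, np.zeros((d, d))], [np.zeros((d, d)), T]])
    z = np.concatenate([x1, x2])

    def log_gauss(z, C):                  # log-density up to a shared constant
        _, logdet = np.linalg.slogdet(C)
        return -0.5 * (z @ np.linalg.solve(C, z) + logdet)

    return log_gauss(z, same) - log_gauss(z, diff)
```

Under the "same person" hypothesis the shared identity component mu correlates the two observations, so matching pairs score high and mismatched pairs score low; in practice S_mu and S_eps are estimated from training data rather than supplied by hand.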
  • step 3-3-3 may specifically include:
  • The user identifier corresponding to a similarity that is not less than the second preset threshold is used as the target user identifier, and a verification result indicating that the object to be verified corresponds to the target user identifier is generated;
  • The second preset threshold may be determined according to actual needs. For example, face images of a large number of users may be collected in advance, with two face images collected per user; the similarity between each user's two face images is then calculated, and the average of these similarities is used as the second preset threshold. Since the similarity between two face images of the same user is usually slightly less than 1, the second preset threshold can also simply be set slightly below 1, such as 0.8.
  • the living user is a registered or authenticated living user.
  • Afterwards, the user can be logged in directly according to the target user identifier, without manually entering a password and account; the method is simple, convenient, and fast.
  • The identity verification method provides action prompt information to the object to be verified and acquires video stream data of the object to be verified, the video stream data being continuous frames of face images collected while the object to be verified performs the corresponding action according to the action prompt information. A target face image is then determined from the video stream data, the credibility that the object to be verified is a living body is determined according to the target face image, and the object to be verified is finally authenticated according to the credibility and the target face image. This can effectively block attacks such as photos, videos, and head models in the face recognition process; the method is simple and offers high security.
  • In this embodiment, the integration of the identity verification device into a network device will be described in detail as an example.
  • an authentication method can be as follows:
  • the network device acquires a user registration request, where the user registration request carries the to-be-registered user identifier and the to-be-registered face image.
  • When a user first registers with an application system (such as a conference check-in system), the user may be required to provide an account to be registered and a face image to be registered; the image may be collected on site, or taken in advance by the user and uploaded. After the user clicks the "Finish" button, the user registration request can be generated.
  • the network device determines the to-be-registered feature information according to the to-be-registered face image, and then associates the to-be-registered feature information with the to-be-registered user identifier, and adds the to-be-registered feature information to the stored feature information set.
  • For example, key point extraction may be performed on the face image to be registered, the image segmented into multiple regions according to the extracted key points, feature extraction performed on the segmented regions by multiple deep learning networks, and these features recombined to obtain the feature information to be registered.
  • the user identifier and the feature information of each registered user are stored in association, so that in the subsequent login process, the network device can verify the identity of the user according to the stored information.
  • the network device acquires a login request, and provides action prompt information to the object to be verified according to the login request.
  • When the object to be verified clicks the "face login" button on the interactive interface, the login request may be generated.
  • Then, an action prompt box may be displayed on the interaction interface to prompt the object to be verified to perform specific actions, such as shaking the head.
  • the network device acquires video stream data of the object to be verified, where the video stream data is a continuous frame face image collected by the object to be verified according to the action prompt information.
  • the video stream data may be face data collected within a specified time (for example, 1 minute).
  • a detection frame can be displayed on the interactive interface, and the user is prompted to put the face into the detection frame to guide the user to stand in a suitable position for collecting video stream data.
  • the network device acquires a key point set of each frame face image in the video stream data, and location information of each key point in the key point set.
  • the ASM algorithm can be used to extract a set of key points for each frame of the face image, which can include 88 key points such as eyes, eyebrows, nose, mouth, and facial contours.
  • the location information may be display coordinates of each key point in the detection frame. When the user's face is located in the detection frame, the display coordinates of each key point may be automatically located.
  • the network device determines, according to a key point set and location information of each frame of the face image, a motion trajectory of the object to be verified.
  • For example, the position change information of important key points such as the eyes, mouth corners, and nose, together with the angles and relative distances between these key points, can be used to determine the three-dimensional face model of the object to be verified and the three-dimensional coordinates of each key point; the motion trajectory is then determined according to the three-dimensional coordinates of any key point.
  • the network device determines whether the motion track meets the preset condition. If yes, the following step S208 is performed. If not, the verification result indicating that the object to be verified is an illegal user may be generated, and the process returns to step S203.
  • For example, suppose the preset condition is that the trajectory includes a 5° deflection angle point, a 10° deflection angle point, a 15° deflection angle point, and a 30° deflection angle point. If the motion trajectory is formed by the user's head rotating 40° from the front (i.e., 0°), so that the trajectory includes deflection angle points from 0° to 40°, it can be determined that the preset condition is satisfied. If the motion trajectory is formed by the user's head rotating only 15° from the front, so that the trajectory only includes deflection angle points from 0° to 15°, it can be determined that the preset condition is not satisfied.
  • the network device selects, from the video stream data, a face image corresponding to the preset track point as the target face image.
  • the preset track point may be a 0° deflecting corner point.
  • the target face image is also a face image corresponding to a 0° deflecting corner point in the video stream data.
  • the network device determines at least one target key point from the key points of the target face image, and determines a normalized image according to the location information of the target key point and the target face image.
  • step S209 may specifically include:
  • The target face image is similarity-transformed according to the Euclidean distance to obtain a normalized image.
  • For example, the target key points may be five points: the left and right pupils, the left and right mouth corners, and the nose tip,
  • the preset position may be two-dimensional coordinates of the five points in the standard face model for the same reference coordinate system.
  • The network device processes the normalized image using the preset classification model to obtain the credibility that the object to be verified is a living body, and determines whether the credibility is greater than the first preset threshold; if yes, the following step S211 is performed; if not, a verification result indicating that the object to be verified is an illegal user may be generated, and the process returns to step S203.
  • the preset classification model may be obtained by training a CNN with a large number of screen remake photo samples (negative samples) and normal photo samples (positive samples) in advance.
  • The image information is transformed layer by layer from the input layer to the output layer, and what the output layer finally produces is a probability value, that is, the credibility.
  • the first preset threshold may be 0.5. In this case, if the reliability is 0.7, the determination may be YES, and if the reliability is 0.3, the determination may be NO.
  • the network device divides the target face image into a plurality of face regions according to the key point set of the target face image, and determines target feature information according to the plurality of face regions.
  • For example, the target face image can be segmented according to the relative positional relationships between the key points to obtain multiple face regions including the eyes, mouth, nose, eyebrows, and cheeks; features are then extracted from the different face regions through different deep learning networks, and the extracted features are recombined to obtain the target feature information.
  • The network device calculates, by using a preset algorithm, the similarity between each piece of stored feature information in the stored feature information set and the target feature information, and determines whether any calculated similarity is not less than the second preset threshold; if yes, the following step S213 is performed; if not, a verification result indicating that the object to be verified is an illegal user may be generated, and the process returns to step S203.
  • For example, the similarity between each piece of stored feature information and the target feature information can be calculated by the joint Bayesian algorithm to obtain a set of similarities {A1, A2, ..., An}. If there exists some Ai, i ∈ {1, 2, ..., n}, greater than or equal to the second preset threshold, the determination is YES; otherwise it is NO, in which case the user can further be notified that verification failed and informed of the reason, such as the user not being found.
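  • The matching loop can be sketched as follows. Cosine similarity stands in for the joint Bayesian similarity, and the identifiers and the 0.8 threshold are illustrative; the function returns the user identifier with the highest similarity A_i that is not less than the threshold, or None on failure:

```python
import math

def cosine_sim(a, b):
    """Cosine similarity of two feature vectors (stand-in scorer)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def match_user(stored, target, threshold=0.8):
    """Compute A_i against every stored feature; return the identifier of
    the best match whose A_i >= threshold, or None (verification fails)."""
    best_id, best = None, threshold
    for user_id, feat in stored.items():
        s = cosine_sim(feat, target)
        if s >= best:
            best_id, best = user_id, s
    return best_id
```

Returning None corresponds to the failure branch above, where the user is told the account could not be found; a hit corresponds to step S213.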
  • The network device uses the user identifier corresponding to the similarity that is not less than the second preset threshold as the target user identifier, and generates a verification result indicating that the object to be verified corresponds to the target user identifier.
  • For example, the user identifier corresponding to the similarity Ai may be used as the authentication result of the object to be verified, and the result may be displayed to the user in the form of prompt information to inform the user that the login succeeded.
  • In the identity verification method above, the network device obtains a user registration request carrying the user identifier to be registered and the face image to be registered, determines the feature information to be registered according to that face image, associates the feature information with the user identifier, and adds it to the stored feature information set. The network device then obtains a login request and, according to it, provides action prompt information to the object to be verified; it acquires the video stream data of the object to be verified, which consists of continuous frames of face images collected while the object to be verified acts on the motion prompt information, and obtains the key point set and location information of each frame. If the motion trajectory determined from these meets the preset condition, the face image corresponding to the preset track point is selected from the video stream data as the target face image. At least one target key point is then determined from the key points of the target face image, a normalized image is determined according to the location information of the target key points and the target face image, and the normalized image is processed by the preset classification model to obtain the credibility that the object to be verified is a living body. Finally, the user identifier corresponding to the similarity that is not less than the second preset threshold is used as the target user identifier, and a verification result indicating that the object to be verified corresponds to the target user identifier is generated. This can effectively block attacks such as photos, videos, and head models in the face recognition process; the method is simple, highly secure, and achieves identity verification without the user manually entering a password and account, making it convenient and fast.
  • On the basis of Embodiment 1 and Embodiment 2, the present embodiment is further described from the perspective of an identity verification device, which can be integrated in a network device.
  • FIG. 3a specifically describes an identity verification apparatus according to a third embodiment of the present invention, which may include:
  • at least one memory;
  • at least one processor;
  • the at least one memory stores at least one instruction module configured to be executed by the at least one processor; wherein the at least one instruction module comprises:
  • a providing module 10, an obtaining module 20, a first determining module 30, a second determining module 40, and a verifying module 50, wherein:
  • the module 10 is configured to provide action prompt information to the object to be verified.
  • the action prompt information is mainly used to prompt the user to perform some specified actions, such as shaking the head or blinking, etc., and the display may be displayed through a prompt box or a prompt interface.
  • the providing module 10 can be triggered to provide the action prompt information.
• The obtaining module 20 is configured to acquire video stream data of the object to be verified, where the video stream data is continuous frames of face images collected when the object to be verified performs a corresponding action according to the action prompt information.
• The video stream data may be a video captured within a predetermined time (for example, one minute) and mainly contains image data of the user's face; the obtaining module 20 may perform the collection by using a video capturing device such as a camera.
  • the first determining module 30 is configured to determine a target face image according to the video stream data.
  • the first determining module 30 can be specifically configured to:
• The key points in the key point set mainly refer to feature points in the face image, that is, points where the image gray value changes drastically, or points with large curvature on the image edge (that is, the intersection of two edges), such as the eyes, eyebrows, nose, mouth, and facial contour.
  • the first determining module 30 can perform key point extraction operations through some deep learning models, such as an ASM (Active Shape Model) or an AAM (Active Appearance Model).
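The notion of a key point as "a point where the image gray value changes drastically" can be illustrated with a toy scan over a grayscale grid. This is only a sketch of the criterion, not the ASM/AAM-style extraction the module would actually perform; the gradient measure and the threshold are assumptions.

```python
# Toy illustration of the key-point criterion: find pixels whose local
# gray-value change (horizontal + vertical central difference) exceeds a
# threshold. Real key-point extraction would use a trained model such as
# ASM or AAM, not this heuristic.

def high_gradient_points(gray, threshold):
    """gray: 2-D list of pixel intensities. Returns (row, col) positions
    of interior pixels whose combined gradient exceeds the threshold."""
    points = []
    for r in range(1, len(gray) - 1):
        for c in range(1, len(gray[0]) - 1):
            gx = abs(gray[r][c + 1] - gray[r][c - 1])  # horizontal change
            gy = abs(gray[r + 1][c] - gray[r - 1][c])  # vertical change
            if gx + gy > threshold:
                points.append((r, c))
    return points
```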
• The location information mainly refers to two-dimensional coordinates relative to a certain reference coordinate system, such as the face acquisition interface displayed by the terminal.
• The motion trajectory mainly refers to the route formed by the entire face or a partial region from the start of the motion to its end when the object to be verified performs a corresponding action according to the action prompt information, such as a blink trajectory or a head-shake trajectory.
• The first determining module 30 may first determine a three-dimensional face model of the object to be verified according to the position change information of some important key points (such as the eyes, mouth corners, cheek edges, and nose) in each frame of the face image, together with the angles and relative distances between these important key points, obtain the three-dimensional coordinates of each key point, and then determine the motion trajectory according to the three-dimensional coordinates of any key point.
  • steps 1-3 may specifically include:
  • the preset condition is mainly determined according to the characteristics of the human body motion.
• The preset condition may be set as: the motion trajectory includes a plurality of specified track points, such as a 5° deflection angle point, a 15° deflection angle point, and a 30° deflection angle point; alternatively, the preset condition may be set as: the number of track points in the motion trajectory reaches a certain value, for example, 10.
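As a sketch, the preset condition can be checked over the sequence of deflection angles derived from the motion trajectory. The specific angles (5°, 15°, 30°) follow the examples in the text, while the tolerance, the sampling of yaw angles, and the minimum point count are otherwise illustrative assumptions.

```python
# Hypothetical check of the preset condition on a head-pose trajectory.
# REQUIRED_ANGLES mirrors the example deflection-angle points above;
# the tolerance and minimum count are illustrative assumptions.

REQUIRED_ANGLES = (5.0, 15.0, 30.0)  # specified deflection-angle points
MIN_TRACK_POINTS = 10                # example minimum number of track points
ANGLE_TOLERANCE = 2.0                # how close a sample must be to count

def trajectory_meets_condition(yaw_angles):
    """yaw_angles: deflection angles (degrees) sampled along the trajectory.
    True if every required point is present and the track is long enough."""
    if len(yaw_angles) < MIN_TRACK_POINTS:
        return False
    return all(
        any(abs(a - target) <= ANGLE_TOLERANCE for a in yaw_angles)
        for target in REQUIRED_ANGLES
    )
```

A trajectory with only two or three track points fails the length check, matching the "fake living user" case described below.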
• The preset track point may be determined according to actual needs. For example, considering that the more key points visible on the face image, the more accurate the conclusion, the 0° deflection angle point may be selected as the preset track point; that is, the frontal face image is selected as the target face image.
• It should be noted that the preset track point may be a small interval containing the 0° deflection angle point, rather than a single point.
  • the unknown live users mainly refer to live users who are not registered or authenticated on the system platform.
• The virtual users mainly refer to pseudo-live users forged by lawless elements using a set of photos of a legitimate user, a video (that is, a screen remake), or a human head model. Specifically, when the motion trajectory meets the specified condition, it indicates that the target face image is not a remake of a single photo or of multiple photos; in this case, it is necessary to further confirm, according to the image texture features, whether the target face image is a remake of a video or a forged human head model.
• When the motion trajectory does not meet the specified condition, for example, contains only two or three track points, it indicates that the object to be verified is most likely a fake living user forged by shaking a single photo or multiple photos of the user. In this case, the object can be directly determined to be an illegal user, and the user is prompted to perform the detection again.
  • the second determining module 40 is configured to determine, according to the target face image, the credibility of the object to be verified as a living body.
  • the credibility mainly refers to the degree of credibility of the object to be verified as a living body, which may be expressed in the form of a probability value or a fractional value.
• The second determining module 40 can determine the credibility of the object to be verified by performing feature analysis on the target face image, that is, by examining whether the image has the texture of a picture remade from a screen. Referring to FIG. 3b, the second determining module 40 may specifically include a first determining submodule 41, a second determining submodule 42, and a calculating submodule 43, wherein:
  • the first determining sub-module 41 is configured to determine at least one target key point from the key point of the target face image.
• The target key points mainly include feature points that are relatively stable in relative position and have distinct distinguishing features, such as the left and right pupils, the left and right mouth corners, and the nose tip, which may be determined according to actual needs.
  • the second determining sub-module 42 is configured to determine a normalized image according to the location information of the target key point and the target facial image.
  • the second determining sub-module 42 can be specifically used to:
• obtain the preset position corresponding to each target key point, and perform a similarity transformation on the target face image according to the Euclidean distance to obtain a normalized image.
• The preset positions may be obtained according to a standard face model, and the Euclidean distance refers to the distance between the preset position and the location information corresponding to each target key point.
  • the similarity transformation may include operations such as rotation, translation, and scaling.
• The images before and after the similarity transformation are similar figures; that is, the shape of the figures they contain does not change.
• In actual application, the second determining sub-module 42 can minimize the distance between the preset positions of the target key points and the corresponding location information by continuously adjusting the size, rotation angle, and coordinate position of the target face image; that is, the target face image is normalized to the standard face model to obtain the normalized image.
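The normalization described above can be sketched as a similarity transformation (rotation, uniform scaling, and translation) that maps detected key points onto preset template positions. Fitting the transform from only the two pupil centres, and the template coordinates themselves, are simplifying assumptions; an actual implementation would minimize the Euclidean distance over all target key points, for example with a least-squares fit.

```python
import math

# Sketch: compute the similarity transform (scale, rotation, translation)
# that maps a detected eye pair onto hypothetical template positions from
# a "standard face model", then apply it to arbitrary points.

TEMPLATE_LEFT_EYE = (30.0, 40.0)   # assumed preset positions
TEMPLATE_RIGHT_EYE = (70.0, 40.0)

def similarity_transform(left_eye, right_eye):
    """Return (scale, angle, tx, ty) aligning the detected eyes to the template."""
    dx, dy = right_eye[0] - left_eye[0], right_eye[1] - left_eye[1]
    tdx = TEMPLATE_RIGHT_EYE[0] - TEMPLATE_LEFT_EYE[0]
    tdy = TEMPLATE_RIGHT_EYE[1] - TEMPLATE_LEFT_EYE[1]
    scale = math.hypot(tdx, tdy) / math.hypot(dx, dy)
    angle = math.atan2(tdy, tdx) - math.atan2(dy, dx)
    c, s = math.cos(angle) * scale, math.sin(angle) * scale
    # translate so the transformed left eye lands exactly on the template
    tx = TEMPLATE_LEFT_EYE[0] - (c * left_eye[0] - s * left_eye[1])
    ty = TEMPLATE_LEFT_EYE[1] - (s * left_eye[0] + c * left_eye[1])
    return scale, angle, tx, ty

def apply_transform(point, transform):
    scale, angle, tx, ty = transform
    c, s = math.cos(angle) * scale, math.sin(angle) * scale
    return (c * point[0] - s * point[1] + tx,
            s * point[0] + c * point[1] + ty)
```

Applying the returned transform to every pixel coordinate of the target face image would yield the normalized image.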
  • the calculation sub-module 43 is configured to calculate the normalized image by using a preset classification model to obtain the reliability of the object to be verified as a living body.
• The preset classification model mainly refers to a trained deep neural network, which can be obtained by training deep learning models such as a CNN (Convolutional Neural Network). A CNN is a multi-layer neural network composed of an input layer, convolutional layers, pooling layers, a fully connected layer, and an output layer; it supports directly inputting multi-dimensional input vectors (such as images) into the network, thereby avoiding data reconstruction during feature extraction and classification and greatly reducing the complexity of image processing.
  • the identity verification apparatus may further include a training module 60 for:
• Before the calculating sub-module 43 calculates the normalized image by using the preset classification model, acquire a preset face image set and the category information of each preset face image in the preset face image set;
  • the convolutional neural network is trained according to the preset image set and category information to obtain a preset classification model.
• The preset face image set may include screen remake photo samples (negative samples) and normal photo samples (positive samples); the specific sample size can be determined according to actual needs.
• The category information is usually manually labeled and can include two categories: remake photo and normal photo.
  • the training process mainly includes two phases: a forward propagation phase and a backward propagation phase.
• In the forward propagation phase, each sample in the preset face image set is input into the convolutional neural network to calculate its actual output, and a weight matrix can be obtained.
• In the backward propagation phase, the training module 60 can calculate the difference between each actual output O_i and the ideal output Y_i, and back-propagate to adjust the weight matrix according to the method of minimizing the error, where Y_i is determined according to the category information of the sample X_i; for example, if the sample X_i is a normal photo, Y_i may be set to 1, and if X_i is a remake photo, Y_i may be set to 0. Finally, the trained convolutional neural network is determined according to the adjusted weight matrix, and this trained network is the preset classification model.
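The two-phase loop (forward propagation to get the actual output O_i, backward propagation to reduce the gap to the ideal output Y_i) can be sketched with a single logistic unit standing in for the convolutional network. The labelling follows the convention above (Y_i = 1 for a normal photo, Y_i = 0 for a remake); the learning rate and epoch count are arbitrary assumptions.

```python
import math

# Schematic of the training stage: the forward pass computes the actual
# output, and the backward pass adjusts the weights to minimise the error.
# A single logistic unit stands in for the multi-layer CNN in the text.

def train(samples, labels, epochs=200, lr=0.5):
    n_features = len(samples[0])
    w, b = [0.0] * n_features, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # forward propagation: actual output O_i
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            o = 1.0 / (1.0 + math.exp(-z))
            # backward propagation: adjust weights against ideal output Y_i
            err = o - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(x, w, b):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))  # credibility-style score in (0, 1)
```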
  • the verification module 50 is configured to perform identity verification on the object to be verified according to the credibility and the target face image.
  • the verification module 50 may specifically include a determination sub-module 51, a verification sub-module 52, and a generation sub-module 53, wherein:
  • the determining sub-module 51 is configured to determine whether the credibility is greater than a first preset threshold.
• The first preset threshold may be determined according to the actual application domain. For example, when the identity verification method is applied to a financial domain with high security requirements, the first preset threshold may be set relatively large, such as 0.9; when the method is applied to a field with relatively low security requirements, such as a conference sign-in system, the first preset threshold may be set relatively small, such as 0.5.
  • the verification sub-module 52 is configured to perform identity verification on the object to be verified according to the target face image if the reliability is greater than the first preset threshold.
• When the calculated credibility is greater than the first preset threshold, it indicates that the object to be verified is very likely a living user. At this time, the verification sub-module 52 needs to further analyze whether the living user is an unknown living user or a living user who is already registered or authenticated. That is, referring to FIG. 3c, the verification sub-module 52 may specifically include a dividing unit 521, a determining unit 522, and a verifying unit 523, wherein:
  • the dividing unit 521 is configured to divide the target face image into a plurality of face regions according to the key point set of the target face image.
• The face regions mainly refer to the regions of the facial features, such as the eyes, mouth, nose, eyebrows, and cheeks; the target face image is segmented into these regions mainly according to the relative positional relationship between the key points.
  • the determining unit 522 is configured to determine target feature information according to the plurality of face regions.
  • the determining unit 522 can be specifically configured to:
• perform feature extraction on each face region to obtain a plurality of pieces of feature information, and recombine the plurality of pieces of feature information to obtain the target feature information.
• In actual application, the determining unit 522 may perform feature extraction on each face region through a deep learning network and recombine the extracted features into a feature string (that is, the target feature information). Because different face regions correspond to different geometric features, extracting features region by region preserves the distinctive information of each region.
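The recombination described above can be sketched as concatenating per-region descriptors, in a fixed region order, into one flat vector. The region list and the toy descriptor (mean and range of the region's pixel values) are illustrative assumptions; the device would use a deep learning network per region.

```python
# Sketch of "extract per-region features, then recombine": each region
# contributes a small descriptor, and the descriptors are concatenated in
# a fixed order to form the target feature information.

FACE_REGIONS = ("eyebrows", "eyes", "nose", "mouth", "cheeks")

def extract_region_features(pixels):
    """Toy per-region descriptor: (mean intensity, intensity range)."""
    mean = sum(pixels) / len(pixels)
    return (mean, max(pixels) - min(pixels))

def build_target_feature(regions):
    """regions: dict mapping region name to its pixel values.
    Returns the concatenated (recombined) target feature vector."""
    feature = []
    for name in FACE_REGIONS:
        feature.extend(extract_region_features(regions[name]))
    return feature
```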
  • the verification unit 523 is configured to perform identity verification on the object to be verified according to the target feature information.
  • the verification unit 523 can be specifically configured to:
  • 3-3-1 Acquire a stored feature information set, and a user identifier corresponding to each stored feature information in the stored feature information set.
  • the user identifier is a unique identifier of the user, which may include a registered account.
  • the stored feature information set includes at least one stored feature information, and the different stored feature information is obtained according to face images of different registered users.
  • the identity verification device may further include an association module, configured to:
• Before the verification unit 523 acquires the stored feature information set and the user identifier corresponding to each stored feature information in the stored feature information set, acquire a user registration request, the user registration request carrying a to-be-registered user identifier and a to-be-registered face image;
• determine to-be-registered feature information according to the to-be-registered face image, associate the to-be-registered feature information with the to-be-registered user identifier, and add the to-be-registered feature information to the stored feature information set.
  • the association module may process the to-be-registered face image by referring to the method used by the dividing unit 521 and the determining unit 522 to obtain the to-be-registered feature information.
• The user registration request may be automatically triggered; for example, it may be automatically generated after the user's face image is collected. It may also be manually triggered by the user; for example, the request may be generated when the user clicks the "finish" button. The specific triggering manner may be determined according to actual needs.
  • the face image to be registered may be collected on the spot, or may be uploaded after the user has taken the image in advance.
• The preset algorithm may include a joint Bayesian algorithm, which is a statistical classification method. Its main idea is to regard a face as the sum of two parts: one part is the difference between different persons, and the other part is the variation within the same person (such as changes in expression); the overall similarity is calculated based on these two parts.
  • verification unit 523 can be used to:
• if a similarity greater than the second preset threshold exists among the calculated similarities, the user identifier corresponding to that similarity is used as the target user identifier, and a verification result indicating that the object to be verified is the target user identifier is generated;
• The second preset threshold may be determined according to actual needs. For example, face images of a large number of users may be collected in advance, with two face images collected for each user; the similarity between each user's two face images is then calculated, and the average of these similarities is used as the second preset threshold. Because the two face images of the same user generally differ slightly, the similarity is usually slightly less than 1, so the second preset threshold can also be set to a value slightly less than 1, such as 0.8.
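The matching and thresholding logic can be sketched as follows. Cosine similarity is used purely as a stand-in for the joint Bayesian similarity mentioned above, the threshold value 0.8 follows the example in the text, and the feature vectors and identifiers are hypothetical.

```python
import math

# Sketch of the matching step: compare the target feature information with
# each stored feature and return the user identifier whose similarity
# exceeds the second preset threshold, or None for an illegal user.

SECOND_PRESET_THRESHOLD = 0.8  # example value from the text

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def verify(target_feature, stored_features):
    """stored_features: dict mapping user identifiers to stored features.
    Returns the best-matching target user identifier, or None."""
    best_id, best_sim = None, SECOND_PRESET_THRESHOLD
    for user_id, feature in stored_features.items():
        sim = cosine_similarity(target_feature, feature)
        if sim > best_sim:
            best_id, best_sim = user_id, sim
    return best_id
```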
• It should be noted that when the verification unit 523 generates a verification result indicating that the object to be verified is the target user identifier, it indicates that the living user is a registered or authenticated living user. At this time, login can be performed directly according to the target user identifier, without requiring the user to manually enter a password and account; the method is simple, convenient, and fast.
  • the generating sub-module 53 is configured to generate a verification result indicating that the object to be verified is an illegal user, if the credibility is not greater than the first preset threshold.
• When the calculated credibility is less than or equal to the first preset threshold, it indicates that the object to be verified is likely a virtual user remade from a screen. At this time, in order to reduce the false positive rate, the user may be prompted to re-collect the face image.
  • the foregoing units may be implemented as a separate entity, or may be implemented in any combination, and may be implemented as the same or a plurality of entities.
• For details, refer to the foregoing method embodiments; details are not described herein again.
• The identity verification method provided in the foregoing embodiments can be implemented when the processor executes the instructions stored in the memory.
• As can be seen from the above, in the identity verification device provided by this embodiment, the providing module 10 provides action prompt information to the object to be verified; the obtaining module 20 acquires video stream data of the object to be verified, where the video stream data is continuous frames of face images collected when the object to be verified performs a corresponding action according to the action prompt information; the first determining module 30 determines a target face image according to the video stream data; the second determining module 40 determines, according to the target face image, the credibility that the object to be verified is a living body; and the verification module 50 authenticates the object to be verified according to the credibility and the target face image. The device can effectively block various attacks such as photos, videos, and human head models in the face recognition process, and the method is simple and highly secure.
• The embodiment of the present invention further provides an identity verification system, which includes any identity verification device provided by the embodiments of the present invention; the identity verification device may be integrated in a network device. For details of the identity verification device, refer to the third embodiment.
• For example, the network device may provide action prompt information to the object to be verified; acquire video stream data of the object to be verified, where the video stream data is continuous frames of face images collected when the object to be verified performs a corresponding action according to the action prompt information; determine a target face image according to the video stream data; determine, according to the target face image, the credibility that the object to be verified is a living body; and authenticate the object to be verified according to the credibility and the target face image.
• Because the identity verification system may include any identity verification device provided by the embodiments of the present invention, it can achieve the beneficial effects of any identity verification device provided by the embodiments of the present invention. For details, refer to the foregoing embodiments; details are not described herein again.
  • the embodiment of the present invention further provides a network device, as shown in FIG. 4, which shows a schematic structural diagram of a network device according to an embodiment of the present invention, specifically:
• The network device can include a processor 701 with one or more processing cores, a memory 702 with one or more computer-readable storage media, a radio frequency (RF) circuit 703, a power source 704, an input unit 705, a display unit 706, and other components.
• The structure shown in FIG. 4 does not constitute a limitation on the network device; the network device may include more or fewer components than those illustrated, combine some components, or use a different component arrangement, wherein:
• The processor 701 is the control center of the network device, interconnecting various parts of the entire network device using various interfaces and lines. By running or executing software programs and/or modules stored in the memory 702 and invoking data stored in the memory 702, the processor 701 performs various functions of the network device and processes data, thereby monitoring the network device as a whole.
  • the processor 701 may include one or more processing cores; preferably, the processor 701 may integrate an application processor and a modem processor, where the application processor mainly processes an operating system, a user interface, an application, and the like.
  • the modem processor primarily handles wireless communications. It can be understood that the above modem processor may not be integrated into the processor 701.
  • the memory 702 can be used to store software programs and modules, and the processor 701 executes various functional applications and data processing by running software programs and modules stored in the memory 702.
  • the memory 702 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may be stored according to Data created by the use of network devices, etc.
• The memory 702 can include high-speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 702 can also include a memory controller to provide the processor 701 with access to the memory 702.
• The RF circuit 703 can be used for receiving and transmitting signals during information transmission and reception. Specifically, after receiving downlink information from a base station, the RF circuit 703 delivers it to one or more processors 701 for processing; in addition, it sends uplink data to the base station.
  • the RF circuit 703 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, and a Low Noise Amplifier (LNA). , duplexer, etc.
  • the RF circuit 703 can also communicate with the network and other devices through wireless communication.
  • the wireless communication can use any communication standard or protocol, including but not limited to Global System of Mobile communication (GSM), General Packet Radio Service (GPRS), and Code Division Multiple Access (CDMA). Code Division Multiple Access), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
  • the network device also includes a power source 704 (such as a battery) that supplies power to the various components.
• The power source 704 can be logically coupled to the processor 701 through a power management system, thereby managing functions such as charging, discharging, and power consumption through the power management system.
  • the power supply 704 can also include any one or more of a DC or AC power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
  • the network device can also include an input unit 705 that can be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function controls.
• The input unit 705 can include a touch-sensitive surface as well as other input devices. The touch-sensitive surface, also known as a touch screen or trackpad, collects the user's touch operations on or near it (such as operations performed by the user with a finger, a stylus, or any other suitable object or accessory on or near the touch-sensitive surface), and drives the corresponding connection device according to a preset program.
  • the touch sensitive surface may include two parts of a touch detection device and a touch controller.
• The touch detection device detects the user's touch orientation, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and sends the coordinates to the processor 701; it can also receive commands from the processor 701 and execute them.
  • touch-sensitive surfaces can be implemented in a variety of types including resistive, capacitive, infrared, and surface acoustic waves.
  • the input unit 705 can also include other input devices. Specifically, other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, joysticks, and the like.
• The network device can also include a display unit 706, which can be used to display information entered by the user or provided to the user, as well as various graphical user interfaces of the network device; these graphical user interfaces can be composed of graphics, text, icons, video, and any combination thereof.
  • the display unit 706 can include a display panel.
  • the display panel can be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
  • the touch-sensitive surface may cover the display panel, and when the touch-sensitive surface detects a touch operation thereon or nearby, it is transmitted to the processor 701 to determine the type of the touch event, and then the processor 701 displays the type according to the touch event. A corresponding visual output is provided on the panel.
• Although in FIG. 4 the touch-sensitive surface and the display panel are implemented as two separate components to perform input and output functions, in some embodiments the touch-sensitive surface can be integrated with the display panel to implement the input and output functions.
  • the network device may further include a camera, a Bluetooth module, and the like, and details are not described herein again.
• Specifically, in this embodiment, the processor 701 in the network device loads, according to the following instructions, the executable file corresponding to the processes of one or more applications into the memory 702, and the processor 701 runs the applications stored in the memory 702 to implement the following functions:
• provide action prompt information to the object to be verified; acquire video stream data of the object to be verified, where the video stream data is continuous frames of face images collected when the object to be verified performs a corresponding action according to the action prompt information; determine a target face image according to the video stream data; determine, according to the target face image, the credibility that the object to be verified is a living body; and authenticate the object to be verified according to the credibility and the target face image.
• The network device can achieve the beneficial effects of any identity verification device provided by the embodiments of the present invention. For details, refer to the foregoing embodiments; details are not described herein again.
• A person of ordinary skill in the art can understand that all or part of the steps in the foregoing embodiments may be completed by a program instructing relevant hardware. The program may be stored in a computer-readable storage medium, and the storage medium may include: a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
• Embodiments of the present invention further provide a computer storage medium storing computer-readable instructions or a program, where the computer-readable instructions or program can be executed by a processor to implement the identity verification method described above.

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an identity verification method and apparatus, and a storage medium. The identity verification method comprises: providing action prompt information to an object to be verified; acquiring video stream data of the object to be verified, the video stream data being continuous frames of face images collected when the object to be verified performs a corresponding action according to the action prompt information; determining a target face image according to the video stream data; determining, according to the target face image, the credibility that the object to be verified is a living body; and performing identity verification on the object to be verified according to the credibility and the target face image.
PCT/CN2018/082803 2017-04-20 2018-04-12 Procédé et appareil d'authentification d'identité et support d'informations WO2018192406A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710261931.0 2017-04-20
CN201710261931.0A CN107066983B (zh) 2017-04-20 2017-04-20 一种身份验证方法及装置

Publications (1)

Publication Number Publication Date
WO2018192406A1 true WO2018192406A1 (fr) 2018-10-25

Family

ID=59600617

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/082803 WO2018192406A1 (fr) 2017-04-20 2018-04-12 Procédé et appareil d'authentification d'identité et support d'informations

Country Status (2)

Country Link
CN (1) CN107066983B (fr)
WO (1) WO2018192406A1 (fr)

Cited By (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109635021A (zh) * 2018-10-30 2019-04-16 平安科技(深圳)有限公司 一种基于人体检测的数据信息录入方法、装置及设备
CN109670285A (zh) * 2018-11-13 2019-04-23 平安科技(深圳)有限公司 面部识别登陆方法、装置、计算机设备及存储介质
CN109726648A (zh) * 2018-12-14 2019-05-07 深圳壹账通智能科技有限公司 一种基于机器学习的人脸图像识别方法和装置
CN109815658A (zh) * 2018-12-14 2019-05-28 平安科技(深圳)有限公司 一种验证方法和装置、计算机设备以及计算机存储介质
CN109934187A (zh) * 2019-03-19 2019-06-25 西安电子科技大学 基于人脸活性检测-眼睛视线随机挑战响应方法
CN110111129A (zh) * 2019-03-28 2019-08-09 中国科学院深圳先进技术研究院 一种数据分析方法、广告播放设备及存储介质

Families Citing this family (59)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104518877A (zh) * 2013-10-08 2015-04-15 鸿富锦精密工业(深圳)有限公司 Identity authentication system and method
CN107066983B (zh) * 2017-04-20 2022-08-09 腾讯科技(上海)有限公司 Identity verification method and apparatus
GB2567798A (en) * 2017-08-22 2019-05-01 Eyn Ltd Verification method and system
EP3447684A1 (fr) 2017-08-22 2019-02-27 Eyn Limited Verification method and system
CN107590485A (zh) * 2017-09-29 2018-01-16 广州市森锐科技股份有限公司 Identity verification method and apparatus for express delivery lockers, and parcel pickup system
CN107729857B (zh) * 2017-10-26 2021-05-28 Oppo广东移动通信有限公司 Face recognition method, apparatus, storage medium and electronic device
CN107733911A (zh) * 2017-10-30 2018-02-23 郑州云海信息技术有限公司 Client login verification system and method for a power and environment monitoring system
CN108171109A (zh) * 2017-11-28 2018-06-15 苏州市东皓计算机系统工程有限公司 Face recognition system
CN109993024A (zh) * 2017-12-29 2019-07-09 技嘉科技股份有限公司 Identity verification apparatus, identity verification method, and computer-readable storage medium
CN108335394A (zh) * 2018-03-16 2018-07-27 东莞市华睿电子科技有限公司 Remote control method for a smart door lock
CN108494942B (zh) * 2018-03-16 2021-12-10 深圳八爪网络科技有限公司 Unlock control method based on a cloud address book
CN108564673A (zh) * 2018-04-13 2018-09-21 北京师范大学 Classroom attendance method and system based on global face recognition
CN108615007B (zh) * 2018-04-23 2019-07-19 深圳大学 Three-dimensional face recognition method and apparatus based on feature tensors, and storage medium
CN112270299A (zh) 2018-04-25 2021-01-26 北京嘀嘀无限科技发展有限公司 System and method for recognizing head movement
CN108647874B (zh) * 2018-05-04 2020-12-08 科大讯飞股份有限公司 Threshold determination method and apparatus
CN110210276A (zh) * 2018-05-15 2019-09-06 腾讯科技(深圳)有限公司 Movement trajectory acquisition method, device, storage medium and terminal
CN110826045B (zh) * 2018-08-13 2022-04-05 深圳市商汤科技有限公司 Authentication method and apparatus, electronic device and storage medium
CN109190522B (zh) * 2018-08-17 2021-05-07 浙江捷尚视觉科技股份有限公司 Liveness detection method based on an infrared camera
CN110197108A (zh) * 2018-08-17 2019-09-03 平安科技(深圳)有限公司 Identity verification method and apparatus, computer device and storage medium
CN109146879B (zh) * 2018-09-30 2021-05-18 杭州依图医疗技术有限公司 Method and apparatus for detecting bone age
CN109583165A (zh) * 2018-10-12 2019-04-05 阿里巴巴集团控股有限公司 Biometric information processing method, apparatus, device and system
CN109635625B (zh) * 2018-10-16 2023-08-18 平安科技(深圳)有限公司 Intelligent identity verification method, device, storage medium and apparatus
CN111144169A (zh) * 2018-11-02 2020-05-12 深圳比亚迪微电子有限公司 Face recognition method, apparatus and electronic device
CN111209768A (zh) * 2018-11-06 2020-05-29 深圳市商汤科技有限公司 Identity verification system and method, electronic device and storage medium
CN109376684B (zh) 2018-11-13 2021-04-06 广州市百果园信息技术有限公司 Facial keypoint detection method, apparatus, computer device and storage medium
CN109670440B (zh) * 2018-12-14 2023-08-08 央视国际网络无锡有限公司 Giant panda face recognition method and apparatus
CN111372023B (zh) * 2018-12-25 2023-04-07 杭州海康威视数字技术股份有限公司 Code stream encryption and decryption method and apparatus
CN111382624B (zh) * 2018-12-28 2023-08-11 杭州海康威视数字技术股份有限公司 Action recognition method, apparatus, device and readable storage medium
CN109815835A (zh) * 2018-12-29 2019-05-28 联动优势科技有限公司 Interactive liveness detection method
CN109934191A (zh) * 2019-03-20 2019-06-25 北京字节跳动网络技术有限公司 Information processing method and apparatus
CN112507889A (zh) * 2019-04-29 2021-03-16 众安信息技术服务有限公司 Method and system for verifying a certificate against its holder
CN111866589A (zh) * 2019-05-20 2020-10-30 北京嘀嘀无限科技发展有限公司 Video data verification method and apparatus, electronic device and storage medium
CN110443621A (zh) * 2019-08-07 2019-11-12 深圳前海微众银行股份有限公司 Video-based identity verification method, apparatus, device and computer storage medium
CN110705351A (zh) * 2019-08-28 2020-01-17 视联动力信息技术股份有限公司 Check-in method and system for video conferencing
CN110968239B (zh) * 2019-11-28 2022-04-05 北京市商汤科技开发有限公司 Control method, apparatus, device and storage medium for a displayed object
CN111881707B (zh) * 2019-12-04 2021-09-14 马上消费金融股份有限公司 Image recapture detection method, identity verification method, model training method and apparatus
CN113095110B (zh) * 2019-12-23 2024-03-08 浙江宇视科技有限公司 Method, apparatus, medium and electronic device for dynamic enrollment of face data
CN111060507B (zh) * 2019-12-24 2021-05-04 北京嘀嘀无限科技发展有限公司 Vehicle verification method and apparatus
CN111178259A (zh) * 2019-12-30 2020-05-19 八维通科技有限公司 Recognition method and system supporting multi-algorithm fusion
CN111259757B (zh) * 2020-01-13 2023-06-20 支付宝实验室(新加坡)有限公司 Image-based liveness recognition method, apparatus and device
CN111091388B (zh) * 2020-02-18 2024-02-09 支付宝实验室(新加坡)有限公司 Liveness detection method and apparatus, face payment method and apparatus, and electronic device
CN111523408B (zh) * 2020-04-09 2023-09-15 北京百度网讯科技有限公司 Motion capture method and apparatus
CN111932755A (zh) * 2020-07-02 2020-11-13 北京市威富安防科技有限公司 Personnel access verification method, apparatus, computer device and storage medium
CN111985331B (zh) * 2020-07-20 2024-05-10 中电天奥有限公司 Detection method and apparatus for preventing covert photography of trade secrets
CN112084858A (zh) * 2020-08-05 2020-12-15 广州虎牙科技有限公司 Object recognition method and apparatus, electronic device and storage medium
CN112101286A (zh) * 2020-09-25 2020-12-18 北京市商汤科技开发有限公司 Service request method, apparatus, computer device and storage medium
CN112364733B (zh) * 2020-10-30 2022-07-26 重庆电子工程职业学院 Intelligent security face recognition system
CN112700344A (zh) * 2020-12-22 2021-04-23 成都睿畜电子科技有限公司 Farm management method, apparatus, medium and device
CN112287909B (zh) * 2020-12-24 2021-09-07 四川新网银行股份有限公司 Double-random liveness detection method with randomly generated detection points and interaction elements
CN112800885B (zh) * 2021-01-16 2023-09-26 南京众鑫云创软件科技有限公司 Big-data-based data processing system and method
CN113255512B (zh) * 2021-05-21 2023-07-28 北京百度网讯科技有限公司 Method, apparatus, device and storage medium for liveness recognition
CN113255529A (zh) * 2021-05-28 2021-08-13 支付宝(杭州)信息技术有限公司 Biometric feature recognition method, apparatus and device
CN113536270B (zh) * 2021-07-26 2023-08-08 网易(杭州)网络有限公司 Information verification method, apparatus, computer device and storage medium
CN113505756A (zh) * 2021-08-23 2021-10-15 支付宝(杭州)信息技术有限公司 Face liveness detection method and apparatus
CN115514893B (zh) * 2022-09-20 2023-10-27 北京有竹居网络技术有限公司 Image upload method and apparatus, readable storage medium and electronic device
CN115512426B (zh) * 2022-11-04 2023-03-24 安徽五域安全技术有限公司 Intelligent face recognition method and system
CN116152936A (zh) * 2023-02-17 2023-05-23 深圳市永腾翼科技有限公司 Face identity authentication system with interactive liveness detection and method thereof
CN115937961B (zh) * 2023-03-02 2023-07-11 济南丽阳神州智能科技有限公司 Online learning recognition method and device
CN117789272A (zh) * 2023-12-26 2024-03-29 中邮消费金融有限公司 Identity verification method, apparatus, device and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105227316A (zh) * 2015-09-01 2016-01-06 深圳市创想一登科技有限公司 Mobile internet account login system and method based on face image identity verification
CN105468950A (zh) * 2014-09-03 2016-04-06 阿里巴巴集团控股有限公司 Identity authentication method, apparatus, terminal and server
CN105718874A (zh) * 2016-01-18 2016-06-29 北京天诚盛业科技有限公司 Liveness detection and authentication method and apparatus
CN105989264A (zh) * 2015-02-02 2016-10-05 北京中科奥森数据科技有限公司 Biometric liveness detection method and system
CN106302330A (zh) * 2015-05-21 2017-01-04 腾讯科技(深圳)有限公司 Identity verification method, apparatus and system
CN106557723A (zh) * 2015-09-25 2017-04-05 北京市商汤科技开发有限公司 Face identity authentication system with interactive liveness detection and method thereof
CN107066983A (zh) * 2017-04-20 2017-08-18 腾讯科技(上海)有限公司 Identity verification method and apparatus

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000113197A (ja) * 1998-10-02 2000-04-21 Victor Co Of Japan Ltd Personal identification device
CN101162500A (zh) * 2006-10-13 2008-04-16 上海银晨智能识别科技有限公司 Partition-based face recognition method
CN104036276A (zh) * 2014-05-29 2014-09-10 无锡天脉聚源传媒科技有限公司 Face recognition method and apparatus
CN106156578B (zh) * 2015-04-22 2020-02-14 深圳市腾讯计算机系统有限公司 Identity verification method and apparatus
WO2016172872A1 (fr) * 2015-04-29 2016-11-03 北京旷视科技有限公司 Method and device for verifying a real human face, and computer program product
CN105069408B (zh) * 2015-07-24 2018-08-03 上海依图网络科技有限公司 Video portrait tracking method based on face recognition in complex scenes
CN105426827B (zh) * 2015-11-09 2019-03-08 北京市商汤科技开发有限公司 Liveness verification method, apparatus and system
CN105426850B (zh) * 2015-11-23 2021-08-31 深圳市商汤科技有限公司 Associated information push device and method based on face recognition
CN105847735A (zh) * 2016-03-30 2016-08-10 宁波三博电子科技有限公司 Real-time bullet-screen video communication method and system based on face recognition
CN106295574A (zh) * 2016-08-12 2017-01-04 广州视源电子科技股份有限公司 Neural-network-based facial feature extraction modeling and face recognition method and apparatus

Cited By (68)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109635021A (zh) * 2018-10-30 2019-04-16 平安科技(深圳)有限公司 Data information entry method, apparatus and device based on human body detection
CN109670285A (zh) * 2018-11-13 2019-04-23 平安科技(深圳)有限公司 Facial recognition login method, apparatus, computer device and storage medium
CN111241505A (zh) * 2018-11-28 2020-06-05 深圳市帝迈生物技术有限公司 Terminal device, login verification method therefor, and computer storage medium
CN109726648A (zh) * 2018-12-14 2019-05-07 深圳壹账通智能科技有限公司 Face image recognition method and apparatus based on machine learning
CN109815658A (zh) * 2018-12-14 2019-05-28 平安科技(深圳)有限公司 Verification method and apparatus, computer device and computer storage medium
CN111414785A (zh) * 2019-01-07 2020-07-14 财团法人交大思源基金会 Identity recognition system and identity recognition method
CN113316781A (zh) * 2019-01-17 2021-08-27 电装波动株式会社 Authentication system, authentication apparatus and authentication method
CN111461368B (zh) * 2019-01-21 2024-01-09 北京嘀嘀无限科技发展有限公司 Abnormal order processing method, apparatus, device and computer-readable storage medium
CN111461368A (zh) * 2019-01-21 2020-07-28 北京嘀嘀无限科技发展有限公司 Abnormal order processing method, apparatus, device and computer-readable storage medium
CN109934187B (zh) * 2019-03-19 2023-04-07 西安电子科技大学 Random challenge-response method based on face liveness detection and eye gaze
CN109934187A (zh) * 2019-03-19 2019-06-25 西安电子科技大学 Random challenge-response method based on face liveness detection and eye gaze
CN110111129B (zh) * 2019-03-28 2024-01-19 中国科学院深圳先进技术研究院 Data analysis method, advertisement playback device and storage medium
CN110111129A (zh) * 2019-03-28 2019-08-09 中国科学院深圳先进技术研究院 Data analysis method, advertisement playback device and storage medium
CN110288272B (zh) * 2019-04-19 2024-01-30 平安科技(深圳)有限公司 Data processing method, apparatus, electronic device and storage medium
CN110288272A (zh) * 2019-04-19 2019-09-27 平安科技(深圳)有限公司 Data processing method, apparatus, electronic device and storage medium
CN112906741A (zh) * 2019-05-21 2021-06-04 北京嘀嘀无限科技发展有限公司 Image processing method, apparatus, electronic device and storage medium
CN110287971B (zh) * 2019-05-22 2023-11-14 平安银行股份有限公司 Data verification method, apparatus, computer device and storage medium
CN110287971A (zh) * 2019-05-22 2019-09-27 平安银行股份有限公司 Data verification method, apparatus, computer device and storage medium
TWI727337B (zh) * 2019-06-06 2021-05-11 大陸商鴻富錦精密工業(武漢)有限公司 Electronic device and face recognition method
EP3975047B1 (fr) * 2019-06-11 2024-04-10 Honor Device Co., Ltd. Method for determining the validity of a facial feature, and electronic device
CN110399794A (zh) * 2019-06-20 2019-11-01 平安科技(深圳)有限公司 Human-body-based posture recognition method, apparatus, device and storage medium
CN110443137A (zh) * 2019-07-03 2019-11-12 平安科技(深圳)有限公司 Multi-dimensional identity information recognition method, apparatus, computer device and storage medium
CN110443137B (zh) * 2019-07-03 2023-07-25 平安科技(深圳)有限公司 Multi-dimensional identity information recognition method, apparatus, computer device and storage medium
CN112307817B (zh) * 2019-07-29 2024-03-19 中国移动通信集团浙江有限公司 Face liveness detection method, apparatus, computing device and computer storage medium
CN112307817A (zh) * 2019-07-29 2021-02-02 中国移动通信集团浙江有限公司 Face liveness detection method, apparatus, computing device and computer storage medium
CN112434547A (zh) * 2019-08-26 2021-03-02 中国移动通信集团广东有限公司 User identity auditing method and device
CN112434547B (zh) * 2019-08-26 2023-11-14 中国移动通信集团广东有限公司 User identity auditing method and device
CN111898536A (zh) * 2019-08-27 2020-11-06 创新先进技术有限公司 Certificate recognition method and apparatus
CN110688517A (zh) * 2019-09-02 2020-01-14 平安科技(深圳)有限公司 Audio allocation method, apparatus and storage medium
CN110688517B (zh) * 2019-09-02 2023-05-30 平安科技(深圳)有限公司 Audio allocation method, apparatus and storage medium
CN112767436A (zh) * 2019-10-21 2021-05-07 深圳云天励飞技术有限公司 Face detection and tracking method and apparatus
CN111062323A (zh) * 2019-12-16 2020-04-24 腾讯科技(深圳)有限公司 Face image transmission method, numerical value transfer method, apparatus and electronic device
CN111062323B (zh) * 2019-12-16 2023-06-02 腾讯科技(深圳)有限公司 Face image transmission method, numerical value transfer method, apparatus and electronic device
CN111143703A (zh) * 2019-12-19 2020-05-12 上海寒武纪信息科技有限公司 Intelligent route recommendation method and related products
CN111143703B (zh) * 2019-12-19 2023-05-23 上海寒武纪信息科技有限公司 Intelligent route recommendation method and related products
CN111191207A (zh) * 2019-12-23 2020-05-22 深圳壹账通智能科技有限公司 Electronic file control method, apparatus, computer device and storage medium
CN111160243A (zh) * 2019-12-27 2020-05-15 深圳云天励飞技术有限公司 Passenger flow statistics method and related products
CN111178287A (zh) * 2019-12-31 2020-05-19 云知声智能科技股份有限公司 End-to-end identity recognition method and apparatus with audio-visual fusion
WO2021158168A1 (fr) * 2020-02-04 2021-08-12 Grabtaxi Holdings Pte. Ltd. Method, server and communication system for verifying a user for transport purposes
CN111723655A (zh) * 2020-05-12 2020-09-29 五八有限公司 Face image processing method, apparatus, server, terminal, device and medium
CN111723655B (zh) * 2020-05-12 2024-03-08 五八有限公司 Face image processing method, apparatus, server, terminal, device and medium
CN111652086A (zh) * 2020-05-15 2020-09-11 汉王科技股份有限公司 Face liveness detection method, apparatus, electronic device and storage medium
CN111652086B (zh) * 2020-05-15 2022-12-30 汉王科技股份有限公司 Face liveness detection method, apparatus, electronic device and storage medium
CN111985298A (zh) * 2020-06-28 2020-11-24 百度在线网络技术(北京)有限公司 Face recognition sample collection method and apparatus
CN111753271A (zh) * 2020-06-28 2020-10-09 深圳壹账通智能科技有限公司 Account-opening identity verification method, apparatus, device and medium based on AI recognition
CN111985298B (zh) * 2020-06-28 2023-07-25 百度在线网络技术(北京)有限公司 Face recognition sample collection method and apparatus
CN111950401B (zh) * 2020-07-28 2023-12-08 深圳数联天下智能科技有限公司 Method for determining keypoint region positions, image processing system, device and medium
CN111950401A (zh) * 2020-07-28 2020-11-17 深圳数联天下智能科技有限公司 Method for determining keypoint region positions, image processing system, device and medium
CN112818733A (zh) * 2020-08-24 2021-05-18 腾讯科技(深圳)有限公司 Information processing method, apparatus, storage medium and terminal
CN112818733B (zh) * 2020-08-24 2024-01-05 腾讯科技(深圳)有限公司 Information processing method, apparatus, storage medium and terminal
CN112132030B (zh) * 2020-09-23 2024-05-28 湖南快乐阳光互动娱乐传媒有限公司 Video processing method and apparatus, storage medium and electronic device
CN112132030A (zh) * 2020-09-23 2020-12-25 湖南快乐阳光互动娱乐传媒有限公司 Video processing method and apparatus, storage medium and electronic device
CN112383737A (zh) * 2020-11-11 2021-02-19 从法信息科技有限公司 Video processing and verification method, apparatus and electronic device for multi-user online shared-screen content
CN112491840A (zh) * 2020-11-17 2021-03-12 平安养老保险股份有限公司 Information modification method, apparatus, computer device and storage medium
CN112491840B (zh) * 2020-11-17 2023-07-07 平安养老保险股份有限公司 Information modification method, apparatus, computer device and storage medium
CN114626036A (zh) * 2020-12-08 2022-06-14 腾讯科技(深圳)有限公司 Information processing method, apparatus, storage medium and terminal based on face recognition
CN114626036B (zh) * 2020-12-08 2024-05-24 腾讯科技(深圳)有限公司 Information processing method, apparatus, storage medium and terminal based on face recognition
CN112633129A (zh) * 2020-12-18 2021-04-09 深圳追一科技有限公司 Video analysis method, apparatus, electronic device and storage medium
CN112560768A (zh) * 2020-12-25 2021-03-26 深圳市商汤科技有限公司 Gate passage control method, apparatus, computer device and storage medium
CN113128452A (zh) * 2021-04-30 2021-07-16 重庆锐云科技有限公司 Greening satisfaction collection method and system based on image recognition
CN113361366A (zh) * 2021-05-27 2021-09-07 北京百度网讯科技有限公司 Face annotation method, apparatus, electronic device and storage medium
CN113569676A (zh) * 2021-07-16 2021-10-29 北京市商汤科技开发有限公司 Image processing method, apparatus, electronic device and storage medium
CN113742776A (zh) * 2021-09-08 2021-12-03 未鲲(上海)科技服务有限公司 Data verification method, apparatus and computer device based on biometric technology
CN113780212A (zh) * 2021-09-16 2021-12-10 平安科技(深圳)有限公司 User identity verification method, apparatus, device and storage medium
CN114267066A (zh) * 2021-12-24 2022-04-01 北京的卢深视科技有限公司 Face recognition method, electronic device and storage medium
CN114760068A (zh) * 2022-04-08 2022-07-15 中国银行股份有限公司 User identity verification method, system, electronic device and storage medium
CN116469196A (zh) * 2023-03-16 2023-07-21 东莞市恒鑫科技信息有限公司 Digital integrated management system and method
CN116469196B (zh) * 2023-03-16 2024-03-15 南京誉泰瑞思科技有限公司 Digital integrated management system and method

Also Published As

Publication number Publication date
CN107066983B (zh) 2022-08-09
CN107066983A (zh) 2017-08-18

Similar Documents

Publication Publication Date Title
WO2018192406A1 (fr) Identity authentication method and apparatus, and storage medium
US11330012B2 (en) System, method, and device of authenticating a user based on selfie image or selfie video
KR102139548B1 (ko) Decentralized identity verification system and method based on facial recognition technology
US11983964B2 (en) Liveness detection
US10395018B2 (en) System, method, and device of detecting identity of a user and authenticating a user
CN108804884B (zh) Identity authentication method, apparatus and computer storage medium
TWI700612B (zh) Information display method, apparatus and system
US10268910B1 (en) Authentication based on heartbeat detection and facial recognition in video data
EP2869238B1 (fr) Procédés et systèmes pour déterminer l'activité d'un utilisateur
KR101629224B1 (ko) Authentication method, apparatus and system based on biometric features
WO2016169432A1 (fr) Identity authentication method and device, and terminal
CN106778141B (zh) Unlocking method and apparatus based on gesture recognition, and mobile terminal
US11989275B2 (en) Passive identification of a device user
WO2019153504A1 (fr) Group creation method and associated terminal
WO2020135081A1 (fr) Identity recognition method and apparatus based on dynamic rasterization management, and server
TW201512882A (zh) Identity authentication system and method
CN112115455B (zh) Method, apparatus, server and medium for setting association relationships among multiple user accounts
Fenu et al. Controlling user access to cloud-connected mobile applications by means of biometrics
WO2018068664A1 (fr) Network information identification method and device
CN112818733B (zh) Information processing method, apparatus, storage medium and terminal
CN107483423A (zh) User login verification method
Yuan et al. SALM: smartphone-based identity authentication using lip motion characteristics
CN112115454B (zh) Single sign-on method, first server and electronic device
CN112131553B (zh) Single sign-on method, first server and electronic device
CN113537993B (zh) Data detection method and apparatus based on face payment

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 18787038

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 18787038

Country of ref document: EP

Kind code of ref document: A1