WO2018192406A1 - Identity authentication method and apparatus, and storage medium - Google Patents

Identity authentication method and apparatus, and storage medium Download PDF

Info

Publication number
WO2018192406A1
Authority
WO
WIPO (PCT)
Prior art keywords
face image
target
verified
feature information
preset
Prior art date
Application number
PCT/CN2018/082803
Other languages
French (fr)
Chinese (zh)
Inventor
梁晓晴
梁亦聪
丁守鸿
刘畅
陶芝伟
周可菁
Original Assignee
腾讯科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 filed Critical 腾讯科技(深圳)有限公司
Publication of WO2018192406A1 publication Critical patent/WO2018192406A1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40: Spoof detection, e.g. liveness detection
    • G06V40/45: Detection of the body part being alive
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation

Definitions

  • the embodiments of the present invention relate to the field of computer technologies, and in particular, to an identity verification method and apparatus, and a storage medium.
  • the invention provides an identity verification method and device, and a storage medium.
  • An identity verification method, performed by a network device, including: providing action prompt information to an object to be verified; acquiring video stream data of the object to be verified, the video stream data being continuous frames of face images collected while the object to be verified performs the prompted action; determining a target face image according to the video stream data; determining, according to the target face image, the credibility that the object to be verified is a living body; and authenticating the object to be verified according to the credibility and the target face image.
  • An identity verification device, comprising at least one memory and at least one processor, wherein the at least one memory stores at least one instruction that, when executed by the at least one processor, implements the following method: providing action prompt information to the object to be verified; acquiring video stream data of the object to be verified, the video stream data being continuous frames of face images collected by the object to be verified according to the action prompt information; determining a target face image according to the video stream data; determining, according to the target face image, the credibility that the object to be verified is a living body; and authenticating the object to be verified according to the credibility and the target face image.
  • A computer storage medium having computer readable instructions or programs stored therein, the instructions or programs, when executed by a processor, implementing the above method.
  • FIG. 1 is a schematic flowchart of an identity verification method according to an embodiment of the present invention.
  • FIG. 2a is a schematic flowchart of an identity verification method according to an embodiment of the present invention.
  • FIG. 2b is a schematic flowchart of user identity verification in a conference sign-in system according to an embodiment of the present invention.
  • FIG. 3a is a schematic structural diagram of an identity verification apparatus according to an embodiment of the present invention.
  • FIG. 3b is a schematic structural diagram of another identity verification apparatus according to an embodiment of the present invention.
  • FIG. 3c is a schematic structural diagram of a verification submodule according to an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of a network device according to an embodiment of the present invention.
  • The embodiment of the invention provides an identity verification method, apparatus, and storage medium, each described in detail below. It should be noted that the numbering of the following embodiments is not intended to limit their preferred order.
  • an identity verification device which may be implemented as an independent entity or integrated into a network device, such as a terminal or a server.
  • An identity verification method includes: providing action prompt information to an object to be verified; acquiring video stream data of the object to be verified, the video stream data being continuous frames of face images collected while the object to be verified performs the corresponding action according to the action prompt information; determining a target face image according to the video stream data; determining, according to the target face image, the credibility that the object to be verified is a living body; and then authenticating the object to be verified according to the credibility and the target face image.
  • the specific process of the identity verification method can be as follows:
  • The action prompt information is mainly used to prompt the user to perform some specified actions, such as shaking the head or blinking, and may be displayed through a prompt box or a prompt interface.
  • When the user clicks a button on the interactive interface, such as "face login", the provision of the action prompt information may be triggered.
  • The video stream data may be a video captured within a predetermined time (for example, one minute) and mainly contains image data of the user's face; it may be collected by a video capture device such as a camera.
  • step S103 may specifically include:
  • The key points in the key point set are mainly feature points in the face image, that is, points where the image gray value changes drastically, or points with large curvature on image edges (i.e., intersections of two edges), such as the eyes, eyebrows, nose, mouth, and facial contour.
  • Specifically, models such as ASM (Active Shape Model) or AAM (Active Appearance Model) can be used to extract the key points.
  • the location information is mainly a two-dimensional coordinate for a certain reference coordinate system, such as a face acquisition interface displayed by the terminal.
  • the motion trajectory mainly refers to a route formed by the entire face or a partial region from the start motion to the end motion when the object to be verified performs corresponding action according to the motion prompt information, such as a blink track, a shake track, and the like.
  • Based on the position change information of some important key points (such as the eyes, mouth corners, cheek edges, and nose) in each frame of the face image, together with the angles and relative distances between these key points, the three-dimensional face model of the object to be verified can be determined, yielding the three-dimensional coordinates of each key point; the motion trajectory is then determined according to the three-dimensional coordinates of any key point.
  • steps 1-3 may specifically include:
  • the preset track point is a preset position point in the motion track.
  • the preset condition is mainly determined according to the characteristics of the human body motion.
  • For example, the preset condition may be set as: the motion track includes a plurality of specified track points, such as a 5° deflection-angle point, a 15° deflection-angle point, and a 30° deflection-angle point; alternatively, the preset condition may be set as: the number of track points in the motion track reaches a certain value, for example, 10.
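The preset-condition check described above can be sketched as a small function; the required angle points, tolerance, and minimum point count below are illustrative assumptions, not values fixed by the method.

```python
def trajectory_meets_condition(angles, required_points=(5, 15, 30),
                               tolerance=2.0, min_points=10):
    """Check whether a motion trajectory (a sequence of head deflection
    angles in degrees, one per frame) satisfies a preset condition.

    Two example conditions from the text are combined here: the trajectory
    must contain at least `min_points` track points, and it must pass
    close to every required deflection-angle point.
    """
    if len(angles) < min_points:
        return False
    # Each required angle must be approximated by some observed angle.
    return all(any(abs(a - r) <= tolerance for a in angles)
               for r in required_points)
```

For instance, a head turn from 0° to 40° sampled every 2° satisfies the condition, while a short turn with only a few track points does not.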
  • The preset track point may be determined according to actual needs. For example, considering that a frontal face image exposes the most key points and therefore yields the most accurate conclusion, the 0° deflection-angle point may be selected as the preset track point; that is, the frontal face image serves as the target face image.
  • In practice, the preset track point may be a small interval around the 0° deflection-angle point rather than a single point.
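Selecting the target face image from an interval around the 0° point, rather than demanding an exact 0° frame, can be sketched as follows; the `window` tolerance and the frame representation are assumptions for illustration.

```python
def select_target_frame(frames, angles, window=3.0):
    """Pick the frame whose deflection angle is closest to 0 degrees
    (the frontal face), accepting any frame inside a small interval
    around 0 degrees. `window` (in degrees) is an assumed tolerance.
    Returns None when no frame falls inside the interval.
    """
    candidates = [(abs(a), f) for f, a in zip(frames, angles)
                  if abs(a) <= window]
    if not candidates:
        return None
    return min(candidates)[1]   # frame with the smallest |angle|
```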
  • the unknown live users mainly refer to live users who are not registered or authenticated on the system platform.
  • The virtual users mainly refer to pseudo-live users forged by lawless elements using photos, videos, or head models of legitimate users (for example, remakes from a screen). Specifically, when the motion trajectory meets the specified condition, it indicates that the target face image is not a remake of a single photo or of multiple photos; in this case, it is necessary to further confirm from the image texture features whether the target face image is a remake from video or a forged head model.
  • If the motion trajectory does not meet the specified condition, for example it contains only two or three track points, it indicates that the object to be verified is most likely a fake live user forged by shaking a single photo or multiple photos of a legitimate user. In this case, the object can be directly determined to be an illegal user and prompted to perform detection again.
  • the credibility mainly refers to the degree of credibility of the object to be verified as a living body, which may be expressed in the form of a probability value or a fractional value.
  • The texture of an image remade from a screen differs from the texture of a normal image. Therefore, the credibility of the object to be verified may be determined by performing feature analysis on the target face image; that is, step S104 may specifically include:
  • The target key points mainly include feature points whose relative positions are comparatively stable and which have distinct distinguishing features, such as the left and right pupils, the left and right mouth corners, and the nose tip; they may be determined according to actual needs.
  • the foregoing step 2-2 may specifically include:
  • the target face image is similarly transformed according to the Euclidean distance to obtain a normalized image.
  • the preset position may be obtained according to a standard face model, where the Euclidean distance refers to a distance between a preset position and position information corresponding to each target key point.
  • the similarity transformation may include operations such as rotation, translation, and scaling.
  • An image before the similarity transformation and the transformed image are geometrically similar; that is, the shapes of the figures they contain do not change. Through the similarity transformation, the distance between each target key point and its corresponding preset position is minimized; that is, the target face image is normalized to the standard face model.
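A minimal sketch of this normalization step, assuming the classic Umeyama least-squares similarity alignment (rotation, translation, and uniform scaling) and caller-supplied standard-model coordinates:

```python
import numpy as np

def similarity_align(points, targets):
    """Estimate the similarity transform (uniform scale s, rotation R,
    translation t) that maps `points` (N x 2 key-point coordinates) onto
    `targets` (the preset positions from a standard face model) with
    minimum total squared Euclidean distance, then apply it.
    """
    p, q = np.asarray(points, float), np.asarray(targets, float)
    mp, mq = p.mean(axis=0), q.mean(axis=0)
    pc, qc = p - mp, q - mq
    # Cross-covariance between the centred point sets.
    cov = qc.T @ pc / len(p)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(2)
    if np.linalg.det(U @ Vt) < 0:        # avoid a reflection
        S[1, 1] = -1
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / pc.var(axis=0).sum()
    t = mq - s * R @ mp
    return (s * (R @ p.T)).T + t         # transformed points
```

If the key points really are a rotated, scaled, translated copy of the standard positions, the alignment recovers them exactly; otherwise it gives the least-squares best fit.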
  • The preset classification model mainly refers to a trained deep neural network, which can be obtained by training models such as a CNN (Convolutional Neural Network). A CNN is a multi-layer neural network composed of an input layer, convolutional layers, pooling layers, fully connected layers, and an output layer; it supports inputting a multi-dimensional image directly into the network, thereby avoiding data reconstruction during feature extraction and classification and greatly reducing the complexity of image processing.
  • the identity verification method may further include:
  • the convolutional neural network is trained according to the preset image set and category information to obtain a preset classification model.
  • The preset face image set may include screen remake photo samples (negative samples) and normal photo samples (positive samples); the specific sample sizes can be determined according to actual needs.
  • The category information is usually manually labeled and may include two classes: remake photos and normal photos.
  • the training process mainly includes two stages: a forward propagation phase and a backward propagation phase.
  • In the forward propagation phase, each sample X_i (that is, a preset face image) is input into the n-layer convolutional neural network, and the actual output is
  • O_i = F_n( ... F_2( F_1( X_i W^(1) ) W^(2) ) ... W^(n) ), where i is a positive integer, W^(k) is the weight matrix of the k-th layer, and F is an activation function (such as a sigmoid function or a hyperbolic tangent function).
  • By inputting the preset face image set into the convolutional neural network, a weight matrix can be obtained. In the backward propagation phase, the difference between each actual output O_i and the ideal output Y_i is calculated, and the weight matrix is adjusted by back-propagation so as to minimize the error, where Y_i is obtained according to the category information of sample X_i. For example, if sample X_i is a normal photo, Y_i can be set to 1; if sample X_i is a remake photo, Y_i can be set to 0. Finally, the trained convolutional neural network, namely the preset classification model, is determined according to the adjusted weight matrix.
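The two-phase training described above can be sketched with a tiny fully connected network standing in for the CNN (a real implementation would use convolutional and pooling layers); the synthetic data, layer sizes, learning rate, and iteration count are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy stand-in for the preset face image set: 16-dimensional "image"
# vectors. Normal-photo samples (Y = 1) and screen-remake samples
# (Y = 0) are drawn from two shifted Gaussians, purely for illustration.
X = np.vstack([rng.normal(+1.0, 1.0, size=(200, 16)),
               rng.normal(-1.0, 1.0, size=(200, 16))])
Y = np.concatenate([np.ones(200), np.zeros(200)])

# Two-layer network: O_i = F2(F1(X_i W1) W2), i.e. the patent's
# O_i = F_n(... F_1(X_i W^(1)) ... W^(n)) with n = 2 and F = sigmoid.
W1 = rng.normal(0.0, 0.1, size=(16, 8))
W2 = rng.normal(0.0, 0.1, size=(8, 1))

lr = 0.5
for _ in range(500):
    # Forward propagation phase.
    H = sigmoid(X @ W1)            # F1(X W1)
    O = sigmoid(H @ W2)[:, 0]      # F2(H W2): actual outputs O_i
    # Backward propagation phase: adjust the weight matrices to reduce
    # the difference between actual output O_i and ideal output Y_i
    # (cross-entropy error, whose output delta is simply O - Y).
    dO = O - Y
    dW2 = H.T @ dO[:, None] / len(X)
    dH = dO[:, None] @ W2.T * H * (1.0 - H)
    dW1 = X.T @ dH / len(X)
    W2 -= lr * dW2
    W1 -= lr * dW1

accuracy = np.mean((O > 0.5) == (Y == 1))
```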
  • S105 Perform identity verification on the object to be verified according to the credibility and the target face image.
  • step S105 may specifically include:
  • the object to be verified is authenticated according to the target face image
  • The first preset threshold may be determined according to the actual application domain. For example, when the identity verification method is applied to a financial domain with high security requirements, the first preset threshold may be set relatively large, such as 0.9; when the method is applied to a field with relatively low security requirements, such as a conference sign-in system, the first preset threshold may be set relatively small, such as 0.5.
  • When the calculated credibility is less than or equal to the first preset threshold, it indicates that the object to be verified is likely a virtual user forged by screen remake. At this time, to reduce the false positive rate, the user may be prompted to perform face image acquisition again.
  • When the calculated credibility is greater than the first preset threshold, it indicates that the object to be verified is very likely a living user. In this case, it is necessary to further analyze whether the living user is an unknown living user or a registered or authenticated one; that is, the above step "authenticating the object to be verified according to the target face image" may specifically include:
  • The face regions mainly refer to the facial features, such as the eyes, mouth, nose, eyebrows, and cheeks; the target face image is segmented into these regions mainly according to the relative positional relationship between the key points.
  • step 3-2 may specifically include:
  • the plurality of pieces of feature information are reorganized to obtain target feature information.
  • Feature extraction may be performed on each face region through a deep learning network, and the extracted features recombined to obtain a feature string (that is, the target feature information). Since the geometric models corresponding to different face regions differ, different deep learning networks can be used for different face regions to improve extraction efficiency and accuracy.
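A schematic of the per-region extraction and recombination, where each "network" is replaced by a fixed random projection purely to show the data flow; the region names, vector sizes, and concatenation step are assumptions:

```python
import numpy as np

# Hypothetical stand-ins for the per-region deep learning networks: in
# the method each face region (eyes, mouth, nose, ...) has its own
# extractor; here each "network" is just a fixed random projection so
# the recombination step can be shown end to end.
rng = np.random.default_rng(42)
REGION_NETS = {
    "eyes":  rng.normal(size=(64, 32)),
    "mouth": rng.normal(size=(64, 32)),
    "nose":  rng.normal(size=(64, 32)),
}

def extract_target_features(regions):
    """`regions` maps region name -> flattened pixel vector (length 64,
    an arbitrary toy size). Each region is run through its own
    extractor, and the per-region features are recombined
    (concatenated) into a single target feature vector."""
    parts = [regions[name] @ net for name, net in REGION_NETS.items()]
    return np.concatenate(parts)
```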
  • the foregoing step 3-3 may specifically include:
  • 3-3-1 Acquire a stored feature information set, and a user identifier corresponding to each stored feature information in the stored feature information set.
  • the user identifier is a unique identifier of the user, which may include a registered account.
  • the stored feature information set includes at least one stored feature information, and the different stored feature information is obtained according to face images of different registered users.
  • the identity verification method may further include:
  • the user registration request carries a user identifier to be registered and a face image to be registered;
  • the to-be-registered feature information is associated with the to-be-registered user identifier, and the to-be-registered feature information is added to the stored feature information set.
  • the to-be-registered face image may be processed by using the methods involved in steps 3-1 and 3-2 to obtain the to-be-registered feature information.
  • The user registration request may be triggered automatically, for example generated after the user's face image is collected, or generated by a user action, for example when the user clicks the "Finish" button; the specific trigger may be determined according to actual needs.
  • the face image to be registered may be collected on the spot, or may be uploaded after the user has taken the image in advance.
  • The preset algorithm may include the joint Bayesian algorithm, a statistical classification method. Its main idea is to treat a face as the composition of two parts: one part is the difference between different persons, and the other part is the individual's own variation (such as changes in expression); the overall similarity is calculated based on these two parts.
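A simplified sketch of this idea, using per-dimension (diagonal) covariances in place of the full learned covariance matrices of the joint Bayesian algorithm; the variance values are assumed, not trained:

```python
import numpy as np

def joint_bayes_similarity(x1, x2, var_mu=1.0, var_eps=0.5):
    """Simplified joint-Bayesian-style log-likelihood-ratio similarity.

    Model (per feature dimension): x = mu + eps, where mu ~ N(0, var_mu)
    is the between-person component and eps ~ N(0, var_eps) is the
    within-person variation (e.g. expression changes). Returns
        r = log P(x1, x2 | same person) - log P(x1, x2 | different),
    so larger r means more similar.
    """
    a = var_mu + var_eps                       # Var(x)
    b = var_mu                                 # Cov(x1, x2 | same person)
    S_same = np.array([[a, b], [b, a]])
    S_diff = np.array([[a, 0.0], [0.0, a]])
    A = np.linalg.inv(S_same) - np.linalg.inv(S_diff)
    pairs = np.stack([np.asarray(x1, float), np.asarray(x2, float)])  # 2 x d
    quad = np.einsum('id,ij,jd->d', pairs, A, pairs).sum()
    d = len(np.atleast_1d(np.asarray(x1)))
    logdet = np.log(np.linalg.det(S_same) / np.linalg.det(S_diff))
    return -0.5 * (quad + d * logdet)
```

Identical feature vectors score higher than dissimilar ones, which is the property the matching step below relies on.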
  • step 3-3-3 may specifically include:
  • The user identifier corresponding to the similarity that is not less than the second preset threshold is used as the target user identifier, and a verification result indicating that the object to be verified is the target user is generated;
  • The second preset threshold may be determined according to actual needs. For example, face images of a large number of users may be collected in advance, two face images per user; the similarity corresponding to each user's two face images is then calculated, and the average of these similarities is used as the second preset threshold. In practice, the similarity between two face images of the same user is usually slightly less than 1, so the second preset threshold can also be set slightly less than 1, such as 0.8.
  • When such a similarity exists, it indicates that the living user is a registered or authenticated living user. In this case, the user can be logged in directly according to the target user identifier, without manually entering a password and account; the method is simple, convenient, and fast.
  • In summary, the identity verification method provides action prompt information to the object to be verified and acquires video stream data of the object to be verified, the video stream data being continuous frames of face images collected while the object to be verified performs the corresponding action according to the action prompt information; it then determines a target face image according to the video stream data, determines from the target face image the credibility that the object to be verified is a living body, and authenticates the object to be verified according to the credibility and the target face image. This can effectively block various attacks such as photos, videos, and human head models in the face recognition process; the method is simple and highly secure.
  • In this embodiment, integration of the identity verification device into a network device is described in detail as an example.
  • an authentication method can be as follows:
  • the network device acquires a user registration request, where the user registration request carries the to-be-registered user identifier and the to-be-registered face image.
  • When a user first registers with an application system (such as a conference sign-in system), the user may be required to provide an account to be registered and a face image to be registered; the face image may be collected on site or taken by the user in advance and uploaded. After the user clicks the "Finish" button, the user registration request can be generated.
  • the network device determines the to-be-registered feature information according to the to-be-registered face image, and then associates the to-be-registered feature information with the to-be-registered user identifier, and adds the to-be-registered feature information to the stored feature information set.
  • Specifically, key point extraction may be performed on the face image to be registered, the face image may be segmented into multiple regions according to the extracted key points, feature extraction may then be performed on the segmented regions by using multiple deep learning networks, and the extracted features may be recombined to obtain the feature information to be registered.
  • the user identifier and the feature information of each registered user are stored in association, so that in the subsequent login process, the network device can verify the identity of the user according to the stored information.
  • the network device acquires a login request, and provides action prompt information to the object to be verified according to the login request.
  • When the object to be verified clicks the "face login" button on the interactive interface, the login request may be generated. According to the login request, an action prompt box may be displayed on the interactive interface to prompt the object to be verified to make a specific action, such as shaking the head.
  • the network device acquires video stream data of the object to be verified, where the video stream data is a continuous frame face image collected by the object to be verified according to the action prompt information.
  • the video stream data may be face data collected within a specified time (for example, 1 minute).
  • a detection frame can be displayed on the interactive interface, and the user is prompted to put the face into the detection frame to guide the user to stand in a suitable position for collecting video stream data.
  • the network device acquires a key point set of each frame face image in the video stream data, and location information of each key point in the key point set.
  • For example, the ASM algorithm can be used to extract a key point set from each frame of the face image; the set can include 88 key points covering the eyes, eyebrows, nose, mouth, and facial contour.
  • the location information may be display coordinates of each key point in the detection frame. When the user's face is located in the detection frame, the display coordinates of each key point may be automatically located.
  • the network device determines, according to a key point set and location information of each frame of the face image, a motion trajectory of the object to be verified.
  • For example, the position change information of important key points such as the eyes, mouth corners, and nose in each frame, together with the angles and relative distances between these key points, can be used to determine the three-dimensional face model of the object to be verified and the three-dimensional coordinates of each key point; the motion trajectory is then determined according to the three-dimensional coordinates of any key point.
  • The network device determines whether the motion track meets the preset condition. If yes, the following step S208 is performed; if not, a verification result indicating that the object to be verified is an illegal user may be generated, and the process returns to step S203.
  • For example, suppose the preset condition is that the motion track includes a 5° deflection-angle point, a 10° deflection-angle point, a 15° deflection-angle point, and a 30° deflection-angle point. If the motion trajectory is formed by the user's head rotating 40° from the front (i.e., 0°), that is, the trajectory includes deflection-angle points from 0° to 40°, the preset condition is satisfied. If the motion trajectory is formed by the user's head rotating only 15° from the front (i.e., 0°), the trajectory includes deflection-angle points only from 0° to 15°, and it can be determined that the preset condition is not satisfied.
  • the network device selects, from the video stream data, a face image corresponding to the preset track point as the target face image.
  • For example, the preset track point may be the 0° deflection-angle point; the target face image is then the face image corresponding to the 0° deflection-angle point in the video stream data.
  • the network device determines at least one target key point from the key points of the target face image, and determines a normalized image according to the location information of the target key point and the target face image.
  • step S209 may specifically include:
  • the target face image is similarly transformed according to the Euclidean distance to obtain a normalized image.
  • For example, the target key points may be five points: the left and right pupils, the left and right mouth corners, and the nose tip; the preset positions may be the two-dimensional coordinates of these five points in the standard face model with respect to the same reference coordinate system.
  • The network device calculates on the normalized image by using the preset classification model to obtain the credibility that the object to be verified is a living body, and determines whether the credibility is greater than the first preset threshold. If yes, the following step S211 is performed; if not, a verification result indicating that the object to be verified is an illegal user may be generated, and the process returns to step S203.
  • the preset classification model may be obtained by training a CNN with a large number of screen remake photo samples (negative samples) and normal photo samples (positive samples) in advance.
  • In the preset classification model, the image information is transformed layer by layer from the input layer to the output layer, and the final output of the output layer is a probability value, that is, the credibility.
  • For example, the first preset threshold may be 0.5; in this case, if the credibility is 0.7 the determination is YES, and if the credibility is 0.3 the determination is NO.
  • the network device divides the target face image into a plurality of face regions according to the key point set of the target face image, and determines target feature information according to the plurality of face regions.
  • Specifically, the target face image can be segmented according to the relative positional relationship between the key points to obtain multiple face regions including the eyes, mouth, nose, eyebrows, and cheeks; features are then extracted from the different face regions through different deep learning networks, and the extracted features are recombined to obtain the target feature information.
  • The network device calculates, by using a preset algorithm, the similarity between each stored feature information in the stored feature information set and the target feature information, and determines whether any calculated similarity is not less than the second preset threshold. If yes, the following step S213 is performed; if not, a verification result indicating that the object to be verified is an illegal user can be generated, and the process returns to step S203. For example, the similarity between each stored feature information and the target feature information can be calculated by the joint Bayesian algorithm to obtain a set of similarities {A1, A2, ..., An}. If some Ai, i ∈ {1, 2, ..., n}, is greater than or equal to the second preset threshold, the determination is YES; if no such Ai exists, the determination is NO, and the user can further be prompted that verification failed along with the reason for the failure, such as the user not being found.
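The matching step can be sketched as follows; the function name and the 0.8 default threshold (taken from the example above) are illustrative:

```python
def match_user(similarities, user_ids, threshold=0.8):
    """Given the similarities {A1, ..., An} between the target feature
    information and each stored feature information, plus the user
    identifier for each stored entry, return the identifier with the
    highest similarity that is not less than the second preset
    threshold, or None when verification fails (user not found).
    """
    best = max(zip(similarities, user_ids), default=(None, None))
    sim, uid = best
    if sim is not None and sim >= threshold:
        return uid
    return None
```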
  • The network device uses the user identifier corresponding to the similarity that is not less than the second preset threshold as the target user identifier, and generates a verification result indicating that the object to be verified is the target user.
  • the user identifier corresponding to the similarity Ai may be used as the authentication result of the object to be verified, and the result may be displayed to the user in the form of prompt information to inform the user that the login is successful.
  • In summary, in the identity verification method of this embodiment, the network device obtains a user registration request carrying the user identifier to be registered and the face image to be registered, determines the feature information to be registered according to that face image, associates the feature information with the user identifier, and adds it to the stored feature information set. The network device then obtains a login request, provides action prompt information to the object to be verified according to the login request, and acquires video stream data of the object to be verified, the video stream data being continuous frames of face images collected while the object to be verified acts according to the action prompt information. From each frame the network device extracts key points and determines the motion trajectory, and judges whether the trajectory meets the preset condition. If yes, it selects the face image corresponding to the preset track point from the video stream data as the target face image, determines at least one target key point from the key points of the target face image, determines a normalized image according to the position information of the target key points and the target face image, calculates on the normalized image with the preset classification model to obtain the credibility that the object to be verified is a living body, and, when the credibility is greater than the first preset threshold, determines the target feature information and compares it against the stored feature information set. The user identifier corresponding to a similarity not less than the second preset threshold is used as the target user identifier, and a verification result indicating that the object to be verified is the target user is generated. This can effectively block various attacks such as photos, videos, and human head models in the face recognition process; the method is simple and highly secure, and identity verification can be achieved without the user manually entering a password and account, which is convenient and fast.
  • On the basis of Embodiment 1 and Embodiment 2, this embodiment is further described from the perspective of an identity verification device, which can be integrated in a network device.
  • FIG. 3a specifically describes an identity verification apparatus according to a third embodiment of the present invention, which may include:
  • At least one memory;
  • At least one processor;
  • the at least one memory stores at least one instruction module configured to be executed by the at least one processor; wherein the at least one instruction module comprises:
  • a providing module 10, an obtaining module 20, a first determining module 30, a second determining module 40, and a verifying module 50, wherein:
  • the module 10 is configured to provide action prompt information to the object to be verified.
  • The action prompt information is mainly used to prompt the user to perform some specified actions, such as shaking the head or blinking, and may be displayed through a prompt box or a prompt interface.
  • the providing module 10 can be triggered to provide the action prompt information.
  • The obtaining module 20 is configured to acquire video stream data of the object to be verified, where the video stream data is the continuous frame face images collected when the object to be verified performs the corresponding action according to the action prompt information.
  • The video stream data may be a video captured within a predetermined time (for example, one minute) and is mainly used to record image data of the user's face; the obtaining module 20 may perform the collection by using a video capture device such as a camera.
  • the first determining module 30 is configured to determine a target face image according to the video stream data.
  • The first determining module 30 can be specifically configured to: extract a key point set from each frame of face image in the video stream data; acquire location information of each key point in the key point set; determine a motion trajectory of the object to be verified according to the location information; and determine the target face image according to the motion trajectory.
  • The key points in the key point set are mainly feature points in the face image, that is, points where the image gray value changes drastically, or points with large curvature on the image edges (that is, the intersection of two edges), such as the eyes, eyebrows, nose, mouth, and facial contour.
  • The first determining module 30 can perform the key point extraction operation through models such as an ASM (Active Shape Model) or an AAM (Active Appearance Model).
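As an intuition aid for "points where the gray value changes drastically", the following minimal NumPy sketch computes a Harris corner response and picks the strongest candidate point. It is a toy illustration of corner-like key points only, not an ASM/AAM implementation, and all function names are hypothetical.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Corner response: large where gray values change sharply in two directions."""
    Iy, Ix = np.gradient(img.astype(float))          # image gradients
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box3(a):                                     # 3x3 box filter (windowed sums)
        p = np.pad(a, 1, mode="edge")
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))

    Sxx, Syy, Sxy = box3(Ixx), box3(Iyy), box3(Ixy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace * trace                   # high at corner-like key points

def strongest_keypoint(img):
    """Return the (row, col) of the best corner candidate."""
    r = harris_response(img)
    return np.unravel_index(np.argmax(r), r.shape)
```

On a synthetic image of a bright square, the strongest response lands near one of the square's corners, which mirrors the "intersection of two edges" description above.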
  • The location information is mainly two-dimensional coordinates relative to a certain reference coordinate system, such as the face acquisition interface displayed by the terminal.
  • The motion trajectory mainly refers to the route formed by the entire face, or a partial region of it, from the start of the motion to the end of the motion when the object to be verified performs the corresponding action according to the action prompt information, such as a blink trajectory or a head-shake trajectory.
  • The first determining module 30 may first determine a three-dimensional face model of the object to be verified according to the position change information of some important key points (such as the eyes, mouth corners, cheek edges, and nose) in each frame of face image, together with the angles and relative distances between these important key points, obtain the three-dimensional coordinates of each key point, and then determine the motion trajectory according to the three-dimensional coordinates of any key point.
  • The step of determining the target face image according to the motion trajectory may specifically include: determining whether the motion trajectory meets a preset condition; if yes, selecting the face image corresponding to the preset track point from the video stream data as the target face image; and if no, generating a verification result indicating that the object to be verified is an illegal user.
  • the preset condition is mainly determined according to the characteristics of the human body motion.
  • For example, the preset condition may be set as: the motion trajectory includes a plurality of specified track points, such as a 5° deflection angle point, a 15° deflection angle point, and a 30° deflection angle point; alternatively, the preset condition can be set as: the number of track points in the motion trajectory reaches a certain value, for example, 10.
  • The preset track point may be determined according to actual needs. For example, considering that the more key points visible on the face image, the more accurate the conclusion, the 0° deflection angle point may be selected as the preset track point, that is, the frontal face image is selected as the target face image.
  • The preset track point may be a small interval that includes the 0° deflection angle point, rather than a single point.
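The trajectory checks described above can be sketched as follows. The specific angle values, the tolerance, and the minimum point count are taken from the examples in the text and are otherwise assumptions, as are the function names.

```python
def trajectory_meets_preset_condition(deflection_angles,
                                      required_points=(5.0, 15.0, 30.0),
                                      tolerance=2.0,
                                      min_track_points=10):
    """Check the two preset conditions described above:
    (a) the number of track points reaches a minimum value;
    (b) the trajectory contains all specified track points (within a tolerance)."""
    if len(deflection_angles) < min_track_points:
        return False
    return all(any(abs(a - r) <= tolerance for a in deflection_angles)
               for r in required_points)

def pick_target_frame(frames, deflection_angles, preset_point=0.0, tolerance=2.0):
    """Select the frame whose deflection angle is closest to the preset track
    point (the frontal, ~0 degree frame), if it falls within the interval."""
    best = min(range(len(frames)),
               key=lambda i: abs(deflection_angles[i] - preset_point))
    if abs(deflection_angles[best] - preset_point) > tolerance:
        return None
    return frames[best]
```

A trajectory with only two or three track points fails the check, which matches the photo-shake rejection logic below.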
  • the unknown live users mainly refer to live users who are not registered or authenticated on the system platform.
  • The pseudo-live users mainly refer to fake live users forged by lawless elements using a legitimate user's photos, videos, or a human head model (that is, by screen remake). Specifically, when the motion trajectory meets the preset condition, it indicates that the target face image is not forged from a single photo or a remake of multiple photos; in this case, it is necessary to further confirm, according to the image texture features, whether the target face image is forged from a video remake or a human head model.
  • If the motion trajectory does not meet the preset condition, for example, it contains only two or three track points, it indicates that the object to be verified is most likely a fake live user forged by using a single photo or multiple photos of a legitimate user.
  • In this case, the object can be directly determined to be an illegal user, and the user is prompted to perform the detection again.
  • the second determining module 40 is configured to determine, according to the target face image, the credibility of the object to be verified as a living body.
  • the credibility mainly refers to the degree of credibility of the object to be verified as a living body, which may be expressed in the form of a probability value or a fractional value.
  • The second determining module 40 can determine the credibility of the object to be verified by performing feature analysis on the target face image, for example, by examining the texture characteristics of a picture remade from a screen. Referring to FIG. 3b, the second determining module 40 may specifically include a first determining submodule 41, a second determining submodule 42, and a calculating submodule 43, wherein:
  • the first determining sub-module 41 is configured to determine at least one target key point from the key point of the target face image.
  • The target key points mainly include feature points whose relative positions are stable and that have distinct distinguishing features, such as the left and right pupils, the left and right mouth corners, and the nose tip; they may be determined according to actual needs.
  • the second determining sub-module 42 is configured to determine a normalized image according to the location information of the target key point and the target facial image.
  • The second determining sub-module 42 can be specifically used to: acquire a preset position corresponding to each target key point; calculate the Euclidean distance between each preset position and the location information of the corresponding target key point; and
  • perform a similarity transformation on the target face image according to the Euclidean distances to obtain the normalized image.
  • the preset position may be obtained according to a standard face model, where the Euclidean distance refers to a distance between a preset position and position information corresponding to each target key point.
  • the similarity transformation may include operations such as rotation, translation, and scaling.
  • The image before the similarity transformation and the transformed image are similar figures, that is, the shapes of the contained graphics do not change.
  • The second determining sub-module 42 can minimize the Euclidean distance between the preset position of each target key point and the corresponding location information by continuously adjusting the size, rotation angle, and coordinate position of the target face image, that is, align the target face image to the standard face model to obtain the normalized image.
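The alignment described here (adjusting scale, rotation, and translation to minimize the Euclidean distances to the preset key-point positions) has a closed-form least-squares solution. The sketch below uses the Umeyama method as one concrete, assumed realization; the function names are illustrative.

```python
import numpy as np

def estimate_similarity(src, dst):
    """Closed-form s, R, t minimizing sum ||s * R @ x_i + t - y_i||^2 (Umeyama)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_x, mu_y = src.mean(axis=0), dst.mean(axis=0)
    xc, yc = src - mu_x, dst - mu_y
    cov = yc.T @ xc / len(src)                    # cross-covariance matrix
    U, S, Vt = np.linalg.svd(cov)
    d = np.eye(2)
    d[1, 1] = np.sign(np.linalg.det(U) * np.linalg.det(Vt))  # enforce proper rotation
    R = U @ d @ Vt
    var_x = (xc ** 2).sum() / len(src)
    s = np.trace(np.diag(S) @ d) / var_x          # isotropic scale
    t = mu_y - s * R @ mu_x
    return s, R, t

def align_points(pts, s, R, t):
    """Apply the similarity transformation (rotation, scaling, translation)."""
    return (s * (R @ np.asarray(pts, float).T)).T + t
```

Applying the estimated transformation to the target key points maps them onto the standard face model's preset positions, which is exactly the normalization step described above.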
  • the calculation sub-module 43 is configured to calculate the normalized image by using a preset classification model to obtain the reliability of the object to be verified as a living body.
  • The preset classification model mainly refers to a trained deep neural network, which can be trained with deep learning models such as a CNN (Convolutional Neural Network). A CNN is a multi-layer neural network composed of an input layer, convolution layers, pooling layers, fully connected layers, and an output layer; it supports inputting a multi-dimensional image vector directly into the network, thereby avoiding the data reconstruction involved in separate feature extraction and classification and greatly reducing the complexity of image processing.
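To illustrate how such a network consumes the normalized image directly, here is a toy forward pass with one convolution layer, ReLU, max pooling, a fully connected layer, and a sigmoid output interpreted as the liveness credibility. The weights are random stand-ins for a trained model; this is a sketch of the layer structure, not a working liveness detector.

```python
import numpy as np

def conv2d(x, k):
    """Valid convolution, single channel, single kernel."""
    kh, kw = k.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = (x[i:i + kh, j:j + kw] * k).sum()
    return out

def max_pool(x, s=2):
    """Non-overlapping s x s max pooling."""
    h, w = (x.shape[0] // s) * s, (x.shape[1] // s) * s
    return x[:h, :w].reshape(h // s, s, w // s, s).max(axis=(1, 3))

def credibility(img, kernel, w, b):
    feat = np.maximum(conv2d(img, kernel), 0.0)   # convolution layer + ReLU
    pooled = max_pool(feat).ravel()               # pooling layer
    z = pooled @ w + b                            # fully connected layer
    return 1.0 / (1.0 + np.exp(-z))               # output layer: liveness score
```

The output always lies in (0, 1), matching the description of credibility as a probability-like value.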
  • the identity verification apparatus may further include a training module 60 for:
  • before the calculating sub-module 43 calculates the normalized image by using the preset classification model, acquire a preset face image set and the category information of each preset face image in the preset face image set; and
  • train a convolutional neural network according to the preset face image set and the category information to obtain the preset classification model.
  • The preset face image set may include screen remake photo samples (negative samples) and normal photo samples (positive samples); the specific sample sizes can be determined according to actual needs.
  • The category information is usually manually labeled and may include two categories: remake photos and normal photos.
  • the training process mainly includes two phases: a forward propagation phase and a backward propagation phase.
  • In the forward propagation phase, each sample X_i is input into the network to compute an actual output O_i under the current weight matrix.
  • In the backward propagation phase, the training module 60 can calculate the difference between each actual output O_i and the ideal output Y_i, and back-propagate to adjust the weight matrix according to the method of minimizing the error, wherein Y_i is determined according to the category information of the sample X_i; for example, if the sample X_i is a normal photo, Y_i is set to 1, and if X_i is a remake photo, Y_i may be set to 0. Finally, the trained convolutional neural network is determined according to the adjusted weight matrix; this trained network is the preset classification model.
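The two training phases can be illustrated with a drastically simplified stand-in: a single sigmoid unit trained by gradient descent to minimize the squared error between the actual outputs O_i and the ideal labels Y_i (1 for a normal photo, 0 for a remake). The synthetic features and labels are assumptions for demonstration only; a real CNN back-propagates through all of its layers.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 8))          # stand-in features for the photo samples X_i
Y = (X[:, 0] > 0).astype(float)       # ideal outputs Y_i: 1 = normal, 0 = remake

w, b, lr = np.zeros(8), 0.0, 0.5

def forward(X, w, b):                 # forward propagation phase
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

losses = []
for _ in range(200):
    O = forward(X, w, b)              # actual outputs O_i
    err = O - Y                       # difference from the ideal outputs Y_i
    losses.append(float((err ** 2).mean()))
    grad = err * O * (1.0 - O)        # backward propagation: minimize the error
    w -= lr * X.T @ grad / len(X)     # adjust the weight matrix
    b -= lr * float(grad.mean())
```

After enough iterations the error between O_i and Y_i shrinks, which is the "method of minimizing the error" the text refers to.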
  • the verification module 50 is configured to perform identity verification on the object to be verified according to the credibility and the target face image.
  • the verification module 50 may specifically include a determination sub-module 51, a verification sub-module 52, and a generation sub-module 53, wherein:
  • the determining sub-module 51 is configured to determine whether the credibility is greater than a first preset threshold.
  • The first preset threshold may be determined according to the actual application domain. For example, when the identity verification method is mainly applied to a financial domain with high security requirements, the first preset threshold may be set relatively large, such as 0.9; when the method is mainly applied to a field with relatively low security requirements, such as a conference sign-in system, the first preset threshold may be set relatively small, such as 0.5.
  • the verification sub-module 52 is configured to perform identity verification on the object to be verified according to the target face image if the reliability is greater than the first preset threshold.
  • When the calculated credibility is greater than the first preset threshold, it indicates that the object to be verified is very likely a live user. At this time, the verification sub-module 52 needs to further analyze whether the live user is an unknown live user
  • or a live user who is already registered or authenticated. Referring to FIG. 3c, the verification sub-module 52 may specifically include a dividing unit 521, a determining unit 522, and a verifying unit 523, wherein:
  • the dividing unit 521 is configured to divide the target face image into a plurality of face regions according to the key point set of the target face image.
  • The face regions mainly refer to the facial features, such as the eyes, mouth, nose, eyebrows, and cheeks, which are segmented from the target face image mainly based on the relative positional relationships between the key points.
  • the determining unit 522 is configured to determine target feature information according to the plurality of face regions.
  • The determining unit 522 can be specifically configured to: perform feature extraction on each of the plurality of face regions to obtain a plurality of pieces of feature information; and
  • reorganize the plurality of pieces of feature information to obtain the target feature information.
  • For example, the determining unit 522 may perform feature extraction on each face region through a deep learning network, and recombine the extracted features to obtain a feature string (that is, the target feature information), since different face regions correspond to different geometric features.
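As a concrete (assumed) illustration of per-region feature extraction and recombination, the sketch below computes a small gray-level histogram for each face region and concatenates them into one feature string. The region bounding boxes are hypothetical placeholders for boxes derived from the key points, and a real system would use a deep network rather than histograms.

```python
import numpy as np

# Hypothetical region boxes (top, bottom, left, right), normally derived from key points.
REGIONS = {"left_eye": (2, 6, 2, 10), "right_eye": (2, 6, 12, 20),
           "nose": (6, 12, 8, 14), "mouth": (12, 16, 6, 16)}

def target_feature_info(face, bins=8):
    """Extract one feature vector per face region and recombine them."""
    feats = []
    for name in sorted(REGIONS):
        t, b, l, r = REGIONS[name]
        patch = face[t:b, l:r]
        hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
        feats.append(hist / max(hist.sum(), 1))   # per-region feature information
    return np.concatenate(feats)                  # the recombined "feature string"
```

The resulting vector has one fixed-length segment per region, so features extracted from different faces stay directly comparable.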
  • the verification unit 523 is configured to perform identity verification on the object to be verified according to the target feature information.
  • the verification unit 523 can be specifically configured to:
  • 3-3-1 Acquire a stored feature information set, and a user identifier corresponding to each stored feature information in the stored feature information set.
  • the user identifier is a unique identifier of the user, which may include a registered account.
  • the stored feature information set includes at least one stored feature information, and the different stored feature information is obtained according to face images of different registered users.
  • the identity verification device may further include an association module, configured to:
  • before the verification unit 523 acquires the stored feature information set and the user identifier corresponding to each piece of stored feature information in the stored feature information set, acquire a user registration request, the user registration request carrying a to-be-registered user identifier and a to-be-registered face image; and
  • determine to-be-registered feature information according to the to-be-registered face image, associate the to-be-registered feature information with the to-be-registered user identifier, and add the to-be-registered feature information to the stored feature information set.
  • the association module may process the to-be-registered face image by referring to the method used by the dividing unit 521 and the determining unit 522 to obtain the to-be-registered feature information.
  • The user registration request may be triggered automatically. For example, after the user's face image is collected, the user registration request may be generated automatically; alternatively, it may be triggered manually by the user, for example, generated when the user clicks a "finish" button.
  • The triggering manner of the user registration request may be determined according to actual needs.
  • the face image to be registered may be collected on the spot, or may be uploaded after the user has taken the image in advance.
  • the preset algorithm may include a joint Bayesian algorithm, which is a classification method of statistics.
  • Its main idea is to regard a face as a composition of two parts: one part is the variation between different individuals,
  • and the other part is the variation within the same individual (such as changes in expression); the overall similarity is calculated based on the differences between these two parts.
  • Further, the verification unit 523 can be used to: calculate, by using a preset algorithm, the similarity between the target feature information and each piece of stored feature information in the stored feature information set; determine whether there is a similarity greater than a second preset threshold; and if so,
  • use the user identifier corresponding to the similarity greater than the second preset threshold as the target user identifier, and generate a verification result indicating that the object to be verified is the target user identifier;
  • The second preset threshold may be determined according to actual needs. For example, face images of a large number of users may be collected in advance, with two face images collected per user; the corresponding similarity is then calculated from the two face images collected for each user, and the average value is taken as the second preset threshold.
  • In general, the similarity between two face images of the same user is slightly less than 1, so the second preset threshold can also simply be set to a value slightly less than 1, such as 0.8.
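A full joint Bayesian model is beyond a short sketch, so the following uses cosine similarity as a simplified stand-in for the preset algorithm, together with the 0.8 threshold mentioned above. The stored-feature layout and function names are assumptions.

```python
import numpy as np

def cosine_similarity(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_user(target_feat, stored, threshold=0.8):
    """Return the user id whose stored feature is most similar to the target
    feature information, provided the similarity exceeds the second preset
    threshold; otherwise return None (unknown live user)."""
    best_id, best_sim = None, -1.0
    for user_id, feat in stored.items():
        sim = cosine_similarity(target_feat, feat)
        if sim > best_sim:
            best_id, best_sim = user_id, sim
    return best_id if best_sim > threshold else None
```

A match yields the target user identifier for direct login; no match means the live user is treated as unregistered.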
  • When the verification unit 523 generates a verification result indicating that the object to be verified is the target user identifier, it indicates that the live user is a registered or authenticated live user; at this time, login can be performed directly according to the target user identifier, without the user manually entering a password and an account. The method is simple, convenient, and fast.
  • the generating sub-module 53 is configured to generate a verification result indicating that the object to be verified is an illegal user, if the credibility is not greater than the first preset threshold.
  • When the calculated credibility is less than or equal to the first preset threshold, it indicates that the object to be verified is likely a fake user forged by a screen remake. At this time, in order to reduce the misjudgment rate, the user may also be prompted to re-collect the face image.
  • In specific implementation, each of the foregoing units may be implemented as a separate entity, or combined arbitrarily and implemented as one or more entities.
  • For the specific implementation of the foregoing units, reference may be made to the foregoing method embodiments; details are not described herein again.
  • identity verification method provided in the above embodiment can be implemented when the processor executes the instructions stored in the memory.
  • As can be seen from the above, in the identity verification apparatus provided by this embodiment, the providing module 10 provides action prompt information to the object to be verified, and the obtaining module 20 acquires the video stream data of the object to be verified, where the video stream data is the continuous frame face images collected when the object to be verified performs the corresponding action according to the action prompt information. The first determining module 30 then determines the target face image according to the video stream data, the second determining module 40 determines, according to the target face image, the credibility of the object to be verified as a living body, and the verification module 50 authenticates the object to be verified according to the credibility and the target face image. The apparatus can effectively block various types of attacks such as photos, videos, and human head models in the face recognition process, and is simple and highly secure.
  • the embodiment of the present invention further provides an identity verification system, which includes any of the identity verification devices provided by the embodiments of the present invention.
  • For details of the identity verification device, refer to the third embodiment.
  • The network device may provide action prompt information to the object to be verified; acquire video stream data of the object to be verified, where the video stream data is the continuous frame face images collected when the object to be verified performs the corresponding action according to the action prompt information; determine a target face image according to the video stream data; determine, according to the target face image, the credibility of the object to be verified as a living body; and authenticate the object to be verified according to the credibility and the target face image.
  • Since the identity verification system may include any of the identity verification devices provided by the embodiments of the present invention, it can achieve the beneficial effects of any of those devices. For details, refer to the foregoing embodiments; they are not described here again.
  • the embodiment of the present invention further provides a network device, as shown in FIG. 4, which shows a schematic structural diagram of a network device according to an embodiment of the present invention, specifically:
  • The network device can include a processor 701 with one or more processing cores, a memory 702 with one or more computer-readable storage media, a radio frequency (RF) circuit 703, a power source 704, an input unit 705, a display unit 706, and other components.
  • It will be understood by those skilled in the art that the structure shown in FIG. 4 does not constitute a limitation on the network device, which may include more or fewer components than illustrated, combine some components, or use a different component arrangement, wherein:
  • The processor 701 is the control center of the network device; it interconnects various parts of the entire network device using various interfaces and lines, and performs the various functions of the network device and processes data by running or executing the software programs and/or modules stored in the memory 702 and invoking the data stored in the memory 702, thereby monitoring the network device as a whole.
  • the processor 701 may include one or more processing cores; preferably, the processor 701 may integrate an application processor and a modem processor, where the application processor mainly processes an operating system, a user interface, an application, and the like.
  • the modem processor primarily handles wireless communications. It can be understood that the above modem processor may not be integrated into the processor 701.
  • the memory 702 can be used to store software programs and modules, and the processor 701 executes various functional applications and data processing by running software programs and modules stored in the memory 702.
  • the memory 702 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may be stored according to Data created by the use of network devices, etc.
  • memory 702 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, memory 702 can also include a memory controller to provide processor 701 access to memory 702.
  • The RF circuit 703 can be used for receiving and transmitting signals during the sending and receiving of information. Specifically, after receiving downlink information from a base station, the RF circuit 703 delivers it to the one or more processors 701 for processing; in addition, it sends uplink data to the base station.
  • the RF circuit 703 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, and a Low Noise Amplifier (LNA). , duplexer, etc.
  • the RF circuit 703 can also communicate with the network and other devices through wireless communication.
  • The wireless communication can use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
  • the network device also includes a power source 704 (such as a battery) that supplies power to the various components.
  • the power source 704 can be logically coupled to the processor 701 through the power management system to manage functions such as charging, discharging, and power management through the power management system.
  • the power supply 704 can also include any one or more of a DC or AC power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
  • the network device can also include an input unit 705 that can be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function controls.
  • The input unit 705 can include a touch-sensitive surface as well as other input devices. The touch-sensitive surface, also known as a touch screen or trackpad, collects touch operations performed by the user on or near it (such as operations performed by the user on or near the touch-sensitive surface using a finger, a stylus, or any other suitable object or accessory) and drives the corresponding connection device according to a preset program.
  • the touch sensitive surface may include two parts of a touch detection device and a touch controller.
  • The touch detection device detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and sends the coordinates
  • to the processor 701; it can also receive commands from the processor 701 and execute them.
  • touch-sensitive surfaces can be implemented in a variety of types including resistive, capacitive, infrared, and surface acoustic waves.
  • the input unit 705 can also include other input devices. Specifically, other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, joysticks, and the like.
  • the network device can also include a display unit 706 that can be used to display information entered by the user or information provided to the user and various graphical user interfaces of the network device, the graphical user interface can be represented by graphics, text, icons, Video and any combination of them.
  • the display unit 706 can include a display panel.
  • the display panel can be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
  • the touch-sensitive surface may cover the display panel, and when the touch-sensitive surface detects a touch operation thereon or nearby, it is transmitted to the processor 701 to determine the type of the touch event, and then the processor 701 displays the type according to the touch event. A corresponding visual output is provided on the panel.
  • Although in FIG. 4 the touch-sensitive surface and the display panel are implemented as two separate components to perform input and output functions, in some embodiments the touch-sensitive surface can be integrated with the display panel to implement both input and output functions.
  • the network device may further include a camera, a Bluetooth module, and the like, and details are not described herein again.
  • In specific implementation, the processor 701 in the network device loads the executable files corresponding to the processes of one or more applications into the memory 702 according to the following instructions, and the processor 701 runs the applications stored in the memory 702 to implement various functions, as follows:
  • providing action prompt information to the object to be verified; acquiring video stream data of the object to be verified, where the video stream data is the continuous frame face images collected when the object to be verified performs the corresponding action according to the action prompt information; determining a target face image according to the video stream data; determining, according to the target face image, the credibility of the object to be verified as a living body; and
  • authenticating the object to be verified according to the credibility and the target face image.
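Putting the instructions together, the overall flow run by processor 701 can be sketched as below. Every helper is a placeholder for the corresponding module described above (trajectory check, CNN liveness score, stored-feature matching); the names, thresholds, and return values are illustrative assumptions only.

```python
def determine_target_face(frames, min_track_points=10):
    """Placeholder: pick the frame at the preset track point if the trajectory passes."""
    if len(frames) < min_track_points:
        return None                    # trajectory fails the preset condition
    return frames[len(frames) // 2]    # stand-in for the frontal (0 degree) frame

def liveness_credibility(face):
    return 0.95                        # placeholder for the CNN classifier score

def match_identity(face):
    return "user_42"                   # placeholder for stored-feature matching

def authenticate(frames, first_threshold=0.9):
    target = determine_target_face(frames)
    if target is None:
        return "illegal user"          # blocks photo-shake attacks early
    if liveness_credibility(target) <= first_threshold:
        return "illegal user"          # blocks screen remakes / head models
    user = match_identity(target)
    return user if user else "unknown live user"
```

The staged early exits mirror the description: trajectory filtering first, then liveness scoring, then identity matching.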
  • The network device can achieve the beneficial effects of any of the identity verification devices provided by the embodiments of the present invention. For details, refer to the foregoing embodiments; details are not described herein again.
  • The program may be stored in a computer-readable storage medium, and the storage medium may include a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disc.
  • Embodiments of the present invention also provide a computer storage medium having stored therein computer-readable instructions or a program, the computer-readable instructions or program being executed by a processor to perform the above method.

Abstract

Disclosed are an identity authentication method and apparatus, and a storage medium. The identity authentication method comprises: providing action prompt information for an object to be verified; acquiring video stream data of the object to be verified, wherein the video stream data is continuous frames of a human face image collected when the object to be verified executes a corresponding action according to the action prompt information; determining a target human face image according to the video stream data; determining the reliability that the object to be verified is a living body according to the target human face image; and carrying out, according to the reliability and the target human face image, identity authentication on the object to be verified.

Description

Identity verification method and apparatus, and storage medium
This application claims priority to Chinese Patent Application No. 201710261931.0, entitled "Identity verification method and apparatus", filed with the Chinese Patent Office on April 20, 2017, which is incorporated herein by reference in its entirety.
Technical Field

Embodiments of the present invention relate to the field of computer technologies, and in particular, to an identity verification method and apparatus, and a storage medium.
Background

With the continuous development and popularization of terminal technologies, applications in terminals are also increasing. Because some applications involve a large amount of user privacy, to ensure information security, a login operation is required during use to confirm the user's identity.
Technical Content

The present invention provides an identity verification method and apparatus, and a storage medium.

Embodiments of the present invention provide the following technical solutions:

An identity verification method, performed by a network device, including: providing action prompt information to an object to be verified; acquiring video stream data of the object to be verified, the video stream data being continuous frame face images collected when the object to be verified performs a corresponding action according to the action prompt information; determining a target face image according to the video stream data; determining, according to the target face image, the credibility of the object to be verified as a living body; and authenticating the object to be verified according to the credibility and the target face image.
Embodiments of the present invention further provide the following technical solutions:

An identity verification apparatus, including: at least one memory; and at least one processor; wherein the at least one memory stores at least one instruction, and the at least one instruction is executed by the at least one processor to implement the following method: providing action prompt information to an object to be verified; acquiring video stream data of the object to be verified, the video stream data being continuous frame face images collected when the object to be verified performs a corresponding action according to the action prompt information; determining a target face image according to the video stream data; determining, according to the target face image, the credibility of the object to be verified as a living body; and authenticating the object to be verified according to the credibility and the target face image.

Embodiments of the present invention further provide the following technical solutions:

A computer storage medium storing computer-readable instructions or a program, the computer-readable instructions or program being executed by a processor to perform the above method.
BRIEF DESCRIPTION OF THE DRAWINGS

The technical solutions and other beneficial effects of the present invention will become apparent from the following detailed description of specific embodiments of the present invention with reference to the accompanying drawings.
FIG. 1 is a schematic flowchart of an identity verification method according to an embodiment of the present invention;

FIG. 2a is a schematic flowchart of an identity verification method according to an embodiment of the present invention;

FIG. 2b is a schematic flowchart of user identity verification in a conference sign-in system according to an embodiment of the present invention;

FIG. 3a is a schematic structural diagram of an identity verification apparatus according to an embodiment of the present invention;

FIG. 3b is a schematic structural diagram of another identity verification apparatus according to an embodiment of the present invention;

FIG. 3c is a schematic structural diagram of a verification submodule according to an embodiment of the present invention;

FIG. 4 is a schematic structural diagram of a network device according to an embodiment of the present invention.
DESCRIPTION OF EMBODIMENTS
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Evidently, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The embodiments of the present invention provide an identity verification method, apparatus, and system, which are described in detail below. It should be noted that the numbering of the following embodiments is not intended to limit the preferred order of the embodiments.
First Embodiment
This embodiment is described from the perspective of an identity verification apparatus, which may be implemented as an independent entity or integrated into a network device such as a terminal or a server.
An identity verification method includes: providing action prompt information to an object to be verified, and acquiring video stream data of the object to be verified, the video stream data being consecutive frames of face images captured while the object to be verified performs a corresponding action according to the action prompt information; determining a target face image according to the video stream data, and determining, according to the target face image, the credibility that the object to be verified is a living body; and then authenticating the object to be verified according to the credibility and the target face image.
As shown in FIG. 1, the specific procedure of the identity verification method may be as follows:
S101. Provide action prompt information to the object to be verified.
In this embodiment, the action prompt information is mainly used to prompt the user to perform specified actions, such as shaking the head or blinking, and may be displayed in the form of a prompt box or a prompt interface. When the user clicks a button on the interactive interface, for example a "face-scan login" button, the operation of providing the action prompt information may be triggered.
S102. Acquire video stream data of the object to be verified, the video stream data being consecutive frames of face images captured while the object to be verified performs a corresponding action according to the action prompt information.
In this embodiment, the video stream data may be a video segment captured within a specified time (for example, one minute). It mainly targets image data of the user's face, and may be captured by a video capture device such as a camera.
S103. Determine a target face image according to the video stream data.
For example, step S103 may specifically include the following steps.
1-1. Acquire a key point set of each frame of face image in the video stream data, and position information of each key point in the key point set.
In this embodiment, the key points in the key point set mainly refer to feature points in the face image, that is, points where the image gray value changes sharply, or points of large curvature on image edges (that is, intersections of two edges), such as the eyes, eyebrows, nose, mouth, and the outer contour of the face. The key points may be extracted by models such as the ASM (Active Shape Model) or the AAM (Active Appearance Model). The position information is mainly two-dimensional coordinates with respect to a reference coordinate system (for example, the face capture interface displayed by the terminal).
1-2. Determine a motion trajectory of the object to be verified according to the key point set and the position information of each frame of face image.
In this embodiment, the motion trajectory mainly refers to the path formed by the whole face or a local region from the start to the end of the action performed by the object to be verified according to the action prompt information, such as a blinking trajectory or a head-shaking trajectory. Specifically, a three-dimensional face model of the object to be verified may first be determined according to the position changes of some important key points in each frame of face image (such as the eyes, mouth corners, cheek edges, and nose), together with the angles and relative distances between these key points, so as to obtain the three-dimensional coordinates of each key point; the motion trajectory is then determined according to the three-dimensional coordinates of any key point.
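As an illustration of turning per-frame key points into a one-dimensional trajectory, the following sketch estimates a head-yaw angle per frame from the horizontal asymmetry of the nose tip between the two eye corners. This is a deliberately simplified geometric heuristic, not the three-dimensional model reconstruction described above; the landmark names and the linear angle mapping are illustrative assumptions.

```python
import numpy as np

def estimate_yaw(left_eye, right_eye, nose):
    """Rough yaw estimate in degrees from 2D landmarks.

    A nose tip centered between the eye corners gives 0 degrees (frontal);
    horizontal deviation is mapped linearly to an approximate angle.
    Toy heuristic only -- stands in for the 3D face model described above.
    """
    left_eye, right_eye, nose = map(np.asarray, (left_eye, right_eye, nose))
    eye_span = right_eye[0] - left_eye[0]
    ratio = (nose[0] - left_eye[0]) / eye_span  # 0.5 means centered
    return float((ratio - 0.5) * 90.0)

def motion_trajectory(frames):
    """frames: list of dicts with 'left_eye', 'right_eye', 'nose' 2D points.
    Returns the per-frame yaw angles, i.e. a 1D motion trajectory."""
    return [estimate_yaw(f["left_eye"], f["right_eye"], f["nose"])
            for f in frames]
```

In a real system the angles would come from fitting the full key point set to a 3D face model, but the resulting per-frame angle sequence plays the same role as the trajectory here.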
1-3. Determine the target face image from the video stream data according to the motion trajectory.
For example, step 1-3 may specifically include:
determining whether the motion trajectory satisfies a preset condition;
if so, selecting, from the video stream data, the face image corresponding to a preset trajectory point as the target face image; and
if not, generating a verification result indicating that the object to be verified is an illegitimate user.
It can be understood that the preset trajectory point is a preset position point on the motion trajectory.
In this embodiment, the preset condition mainly depends on the characteristics of human motion. Considering that human motion is continuous, the preset condition may be set so that the motion trajectory contains multiple specified trajectory points, such as a 5° deflection angle point, a 15° deflection angle point, and a 30° deflection angle point; alternatively, the preset condition may be set so that the number of trajectory points in the motion trajectory reaches a certain value, for example, ten. The preset trajectory point may be chosen according to actual needs. For example, since a conclusion drawn from more key points on a face image is more accurate, the 0° deflection angle point may be selected as the preset trajectory point, that is, the frontal face image is selected as the target face image. Of course, because capture may not start exactly at the 0° deflection angle point, the preset trajectory point may be a small interval around the 0° deflection angle point rather than a single point.
There are mainly two kinds of illegitimate users: unknown living users and virtual users. An unknown living user mainly refers to a living user who has not registered or been authenticated on the system platform. A virtual user mainly refers to a fake living user forged by an attacker from a single photo, a video, or a head model of a legitimate user (that is, produced by re-shooting a screen). Specifically, when the motion trajectory satisfies the specified condition, the target face image was not forged by re-shooting one or more photos; in this case, whether the target face image was forged from a re-shot video or from a head model needs to be further confirmed according to the texture characteristics of the image. When the motion trajectory does not satisfy the specified condition, for example, when there are only two or three trajectory points, the object to be verified is very likely a fake living user forged by re-shooting one or more photos of a user; in this case, the object may be directly judged an illegitimate user, and the user is prompted to repeat the detection.
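The trajectory check and target-frame selection described above can be sketched as follows. The required angles, the tolerance, and the choice of the frontal (0°) frame as the preset trajectory point are illustrative assumptions taken from the examples in the text.

```python
def select_target_frame(yaw_trajectory, required_angles=(5, 15, 30),
                        target_angle=0.0, tolerance=2.5):
    """Check the preset condition and pick the target frame.

    The condition (an assumption matching the example above) is that the
    trajectory passes near each required deflection angle. If it holds,
    return the index of the frame closest to the preset trajectory point
    (here the frontal, 0-degree pose); otherwise return None, meaning the
    object is judged an illegitimate user.
    """
    for angle in required_angles:
        if not any(abs(y - angle) <= tolerance for y in yaw_trajectory):
            return None  # a required checkpoint is missing
    # preset trajectory point: the frame nearest the frontal pose
    return min(range(len(yaw_trajectory)),
               key=lambda i: abs(yaw_trajectory[i] - target_angle))
```

A trajectory covering 0° to 40° would pass the check and yield the most frontal frame, while a two- or three-point trajectory would fail it.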
S104. Determine, according to the target face image, the credibility that the object to be verified is a living body.
In this embodiment, the credibility mainly refers to the degree to which the object to be verified can be trusted to be a living body, and may take the form of a probability value or a score. Because the texture of an image produced by re-shooting a screen differs from the texture of a normal image, the credibility of the user to be verified may be determined by performing feature analysis on the target face image. That is, step S104 may specifically include the following steps.
2-1. Determine at least one target key point from the key point set of the target face image.
In this embodiment, the target key points mainly include feature points whose relative positions are stable and which have clearly distinguishing characteristics, such as the left and right pupils, the left and right mouth corners, and the nose tip, and may be chosen according to actual needs.
2-2. Determine a normalized image according to the position information of the target key points and the target face image.
For example, step 2-2 may specifically include:
acquiring a preset position of each target key point;
calculating the Euclidean distance between each preset position and the corresponding position information; and
performing a similarity transformation on the target face image according to the Euclidean distances to obtain a normalized image.
In this embodiment, the preset positions may be obtained from a standard face model, and the Euclidean distance refers to the distance between the preset position of each target key point and the corresponding position information. The similarity transformation may include operations such as rotation, translation, and scaling; in general, the images before and after a similarity transformation contain the same shapes, that is, the shapes of the contained figures are unchanged. Specifically, by continually adjusting the size, rotation angle, and coordinate position of the target face image, the distances between the preset positions of the target key points and the corresponding position information can be minimized; that is, the target face image is normalized to the standard face model to obtain the normalized image.
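The distance-minimizing similarity transformation above can be computed in closed form with the Umeyama least-squares alignment, a standard choice for face normalization. The sketch below finds the scale, rotation, and translation mapping the detected key points onto the preset positions; a production system would then warp the image with the returned transform, which is omitted here.

```python
import numpy as np

def similarity_align(src, dst):
    """Least-squares similarity transform (uniform scale s, rotation R,
    translation t) minimizing sum ||dst_i - (s * R @ src_i + t)||^2,
    i.e. the Umeyama alignment. src are the detected key points, dst the
    preset positions from the standard face model."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)          # cross-covariance
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))        # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / src_c.var(0).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t
```

Applying `s * R @ p + t` to each detected key point then reproduces the preset positions with the smallest total squared Euclidean distance.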
2-3. Compute on the normalized image with a preset classification model to obtain the credibility that the object to be verified is a living body.
In this embodiment, the preset classification model mainly refers to a trained deep neural network, which may be obtained with deep-learning architectures such as a CNN (Convolutional Neural Network). A CNN is a multi-layer neural network composed of an input layer, convolutional layers, pooling layers, fully connected layers, and an output layer. It supports feeding an image as a multi-dimensional input vector directly into the network, which avoids reconstructing the data during feature extraction and classification and greatly reduces the complexity of image processing. When the normalized image is fed into the CNN, information is transformed stage by stage from the input layer to the output layer; the computation performed by the CNN is essentially the process of multiplying the input (the normalized image) by the weight matrix of each layer to obtain the final output (that is, the credibility of the object to be verified).
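The layer-by-layer computation can be sketched in miniature as below: one convolution, one pooling step, and one fully connected layer ending in a sigmoid, whose output in [0, 1] plays the role of the credibility. The layer sizes, kernel, and weights are illustrative assumptions, not the patent's actual network.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def liveness_score(img, conv_k, w_fc):
    """Minimal CNN-style forward pass: 3x3 convolution, ReLU, 2x2 average
    pooling, one fully connected layer, sigmoid output interpreted as the
    liveness credibility. Illustrative only."""
    H, W = img.shape
    conv = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            conv[i, j] = np.sum(img[i:i + 3, j:j + 3] * conv_k)
    act = np.maximum(conv, 0.0)                      # ReLU activation
    h, w = act.shape[0] // 2 * 2, act.shape[1] // 2 * 2
    pooled = act[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return float(sigmoid(pooled.ravel() @ w_fc))     # fully connected + sigmoid
```

With zero fully connected weights the score is exactly 0.5, i.e. maximal uncertainty; trained weights shift it toward 0 (re-shot) or 1 (living).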
It is easy to understand that the preset classification model needs to be trained in advance on samples and their classification information. That is, before the normalized image is computed on with the preset classification model, the identity verification method may further include:
acquiring a preset face image set and category information of each preset face image in the preset face image set; and
training a convolutional neural network according to the preset face image set and the category information to obtain the preset classification model.
In this embodiment, since the preset classification model is mainly used to determine whether the user to be verified is a virtual user forged by re-shooting a screen, the preset face image set may include screen-reshot photo samples (negative samples) and normal photo samples (positive samples); the number of samples may be chosen according to actual needs. The category information is usually labeled manually and may include the two categories of reshot photos and normal photos.
The training process mainly includes two stages: a forward-propagation stage and a backward-propagation stage. In the forward-propagation stage, each sample X_i (that is, each preset face image) may be fed into an n-layer convolutional neural network to obtain an actual output O_i, where O_i = F_n(…(F_2(F_1(X_i·W^(1))·W^(2))…)·W^(n)), i is a positive integer, W^(n) is the weight of the n-th layer, and F is an activation function (such as the sigmoid function or the hyperbolic tangent function). By feeding the preset face image set into the convolutional neural network, a weight matrix can be obtained. Then, in the backward-propagation stage, the difference between each actual output O_i and the ideal output Y_i may be computed, and the weight matrix adjusted by back-propagation so as to minimize the error, where Y_i is obtained from the category information of sample X_i: for example, if sample X_i is a normal photo, Y_i may be set to 1, and if sample X_i is a reshot photo, Y_i may be set to 0. Finally, the trained convolutional neural network, that is, the preset classification model, is determined according to the adjusted weight matrix.
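The forward/backward-propagation loop can be illustrated on the smallest possible model: logistic regression trained by gradient descent on cross-entropy, with label 1 for normal photos and 0 for reshot photos as stated above. This is a simplified stand-in; a real implementation would back-propagate through the full CNN, and the learning rate and epoch count are arbitrary.

```python
import numpy as np

def train_liveness_classifier(X, y, lr=0.5, epochs=1000):
    """Gradient-descent training sketch. X: (n_samples, n_features) flattened
    images; y: labels, 1 = normal photo (positive), 0 = reshot (negative).
    Returns the learned weights and bias."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # forward propagation
        grad = p - y                            # backward propagation: dL/dz
        w -= lr * X.T @ grad / len(y)           # adjust the weight matrix
        b -= lr * grad.mean()
    return w, b
```

On linearly separable toy data the learned weights classify both categories correctly, mirroring how the CNN's weight matrices are adjusted until the error is minimized.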
S105. Authenticate the object to be verified according to the credibility and the target face image.
For example, step S105 may specifically include:
determining whether the credibility is greater than a first preset threshold;
if so, authenticating the object to be verified according to the target face image; and
if not, generating a verification result indicating that the object to be verified is an illegitimate user.
In this embodiment, the first preset threshold may be set according to the actual application field. For example, when the identity verification method is mainly applied in a field with high security requirements, such as finance, the first preset threshold may be set relatively high, for example, 0.9; when the method is mainly applied in a field with relatively low security requirements, such as a conference sign-in system, the first preset threshold may be set relatively low, for example, 0.5.
Specifically, when the computed credibility is less than or equal to the first preset threshold, the object to be verified is very likely a virtual user produced by re-shooting a screen; in this case, to reduce the misjudgment rate, the user may be prompted to repeat the face image capture. When the computed credibility is greater than the first preset threshold, the object to be verified is very likely a living user; in this case, whether the living user is an unknown living user or a registered or authenticated living user needs to be further analyzed. That is, the step of "authenticating the object to be verified according to the target face image" may specifically include the following steps.
3-1. Divide the target face image into a plurality of face regions according to the key point set of the target face image.
In this embodiment, the face regions mainly refer to the regions of the facial features, such as the eyes, mouth, nose, eyebrows, and cheeks, and the target face image is segmented mainly based on the relative positional relationships among the key points.
3-2. Determine target feature information according to the plurality of face regions.
For example, step 3-2 may specifically include:
performing a feature extraction operation on the face regions to obtain a plurality of pieces of feature information, each face region corresponding to one piece of feature information; and
recombining the plurality of pieces of feature information to obtain the target feature information.
In this embodiment, features may be extracted from the face regions by deep learning networks, and the extracted features recombined into a feature string (that is, the target feature information). Because different face regions correspond to different geometric models, different deep learning networks may be used to extract features from different face regions so as to improve extraction efficiency and accuracy.
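The per-region extraction and recombination can be sketched as below. The patent extracts each region's features with a dedicated deep network; here a gray-level histogram stands in for the per-region network, and the bounding-box region format is an assumption.

```python
import numpy as np

def region_features(face_img, regions, n_bins=16):
    """Extract one feature vector per face region and concatenate them into
    the target feature information. regions: (top, left, bottom, right)
    boxes derived from the key point set. The histogram descriptor is a
    stand-in for the per-region deep networks described above."""
    feats = []
    for (top, left, bottom, right) in regions:
        patch = face_img[top:bottom, left:right]
        hist, _ = np.histogram(patch, bins=n_bins, range=(0, 256))
        hist = hist / max(hist.sum(), 1)   # normalize each region's feature
        feats.append(hist)
    return np.concatenate(feats)           # recombined feature string
```

Swapping the histogram for a learned embedding per region preserves the structure: one descriptor per region, concatenated into a single comparable vector.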
3-3. Authenticate the object to be verified according to the target feature information.
For example, step 3-3 may specifically include the following steps.
3-3-1. Acquire a stored feature information set, and a user identifier corresponding to each piece of stored feature information in the stored feature information set.
In this embodiment, the user identifier uniquely identifies a user and may include a registered account. The stored feature information set includes at least one piece of stored feature information, and different pieces of stored feature information are obtained from the face images of different registered users. In practice, the user identifier of each registered user needs to be associated with the corresponding stored feature information in advance. That is, before step 3-3-1, the identity verification method may further include:
acquiring a user registration request, the user registration request carrying a to-be-registered user identifier and a to-be-registered face image;
determining to-be-registered feature information according to the to-be-registered face image; and
associating the to-be-registered feature information with the to-be-registered user identifier, and adding the to-be-registered feature information to the stored feature information set.
In this embodiment, the to-be-registered face image may be processed with the methods of steps 3-1 and 3-2 to obtain the to-be-registered feature information. The user registration request may be generated by an automatic trigger, for example, automatically after the user's face image is captured, or by a user trigger, for example, when the user clicks a "Finish" button, depending on actual needs. The to-be-registered face image may be captured on site, or taken in advance by the user and then uploaded.
3-3-2. Calculate the similarity between each piece of stored feature information and the target feature information using a preset algorithm.
In this embodiment, the preset algorithm may include the joint Bayesian algorithm, a statistical classification method whose main idea is to regard a face as consisting of two parts: one part is the variation between persons, and the other is the variation within an individual (such as changes in expression). The overall similarity is computed from the difference between these two parts.
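For illustration of the similarity step, the sketch below scores a stored feature vector against the target feature vector with cosine similarity. This is a simplified stand-in for the joint Bayesian algorithm named above, which additionally learns the inter-person and intra-person variation from training data rather than comparing vectors directly.

```python
import numpy as np

def feature_similarity(stored, target):
    """Cosine similarity in [-1, 1] between a stored feature vector and the
    target feature vector. Simplified stand-in for joint Bayesian scoring."""
    stored, target = np.asarray(stored, float), np.asarray(target, float)
    denom = np.linalg.norm(stored) * np.linalg.norm(target)
    return float(stored @ target / denom) if denom else 0.0
```

The verification logic in step 3-3-3 only requires that the score be comparable against a threshold, so the scoring function can be swapped for joint Bayesian without changing the surrounding flow.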
3-3-3. Authenticate the object to be verified according to the similarities and the corresponding user identifiers.
For example, step 3-3-3 may specifically include:
determining whether any of the computed similarities is not less than a second preset threshold;
if such a similarity exists, taking the user identifier corresponding to the similarity not less than the second preset threshold as a target user identifier, and generating a verification result indicating that the object to be verified corresponds to the target user identifier; and
if no such similarity exists, generating a verification result indicating that the object to be verified is an illegitimate user.
In this embodiment, the second preset threshold may be set according to actual needs. For example, face images of a large number of users may be collected in advance, with two face images captured per user; the similarity of the two face images of each user is then computed, and the average of these similarities is taken as the second preset threshold. In general, because of intra-person variation, the similarity of two face images of the same user taken at different times is usually slightly less than 1, so the second preset threshold may also be set slightly less than 1, for example, 0.8.
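The threshold calibration and the identification decision above can be sketched directly. The dictionary-of-scores interface is an assumption for illustration.

```python
import numpy as np

def calibrate_threshold(genuine_pair_sims):
    """Second preset threshold as described above: the mean similarity over
    pairs of face images of the same user captured at different times."""
    return float(np.mean(genuine_pair_sims))

def identify(similarities_by_user, threshold):
    """Return the user identifier with the highest similarity not less than
    the threshold, or None when every score falls below it (the object is
    then judged an illegitimate user)."""
    best = max(similarities_by_user, key=similarities_by_user.get,
               default=None)
    if best is not None and similarities_by_user[best] >= threshold:
        return best
    return None
```

Returning the matched identifier directly enables the password-free login described next: the target user identifier is the login credential.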
It should be noted that, when a verification result indicating that the object to be verified corresponds to the target user identifier is generated, the living user is a registered or authenticated living user. In this case, login may be performed directly according to the target user identifier, without the user manually entering a password and account, which is simple, convenient, and fast.
It can be seen from the above that, in the identity verification method provided by this embodiment, action prompt information is provided to the object to be verified, and video stream data of the object to be verified is acquired, the video stream data being consecutive frames of face images captured while the object to be verified performs a corresponding action according to the action prompt information; a target face image is then determined according to the video stream data, and the credibility that the object to be verified is a living body is determined according to the target face image; and the object to be verified is then authenticated according to the credibility and the target face image. The method can effectively block various types of attacks during face recognition, such as photos, videos, and head models, and is simple and highly secure.
Second Embodiment
The method described in the first embodiment is further illustrated in detail below by way of example.
In this embodiment, the identity verification apparatus integrated into a network device is taken as an example for detailed description.
As shown in FIG. 2a, the specific procedure of an identity verification method may be as follows:
S201. The network device acquires a user registration request, the user registration request carrying a to-be-registered user identifier and a to-be-registered face image.
For example, when a user registers with an application system (for example, a conference sign-in system) for the first time, the user may be required to provide a to-be-registered account and a to-be-registered face image. The to-be-registered face image may be captured on site, or taken in advance by the user and then uploaded. Afterwards, when the user clicks a "Finish" button, the user registration request may be generated.
S202. The network device determines to-be-registered feature information according to the to-be-registered face image, then associates the to-be-registered feature information with the to-be-registered user identifier, and adds the to-be-registered feature information to a stored feature information set.
For example, key points may be extracted from the to-be-registered face image, and the to-be-registered face image segmented into a plurality of regions according to the extracted key points; features are then extracted from the segmented regions by a plurality of deep learning networks and recombined to obtain the to-be-registered feature information. By storing the user identifier of each registered user in association with the feature information, the network device can verify the user's identity against the stored information during subsequent logins.
S203. The network device acquires a login request, and provides action prompt information to the object to be verified according to the login request.
For example, referring to FIG. 2b, when the object to be verified clicks a "face-scan login" button on the interactive interface, the login request may be generated. An action prompt box may then be displayed on the interactive interface to prompt the object to be verified to perform a specified action, such as shaking the head.
S204. The network device acquires video stream data of the object to be verified, the video stream data being consecutive frames of face images captured while the object to be verified performs a corresponding action according to the action prompt information.
For example, the video stream data may be face data captured within a specified time (for example, one minute). During capture, a detection frame may be displayed on the interactive interface, and the user prompted to place the face within the detection frame, so as to guide the user to stand in a suitable position for the capture of the video stream data.
S205、网络设备获取该视频流数据中每一帧人脸图像的关键点集、以及该关键点集中每一关键点的位置信息。S205. The network device acquires a key point set of each frame face image in the video stream data, and location information of each key point in the key point set.
譬如,可以通过ASM算法来提取每一帧人脸图像的关键点集,该关键点集中可以包括如眼睛、眉毛、鼻子、嘴巴和脸部外轮廓等在内的88个关键点。该位置信息可以是每一关键点在检测框中的显示坐标,当用户的人脸位于该检测框中时,可以自动定位出每一关键点的显示坐标。For example, the ASM algorithm can be used to extract a set of key points for each frame of the face image, which can include 88 key points such as eyes, eyebrows, nose, mouth, and facial contours. The location information may be display coordinates of each key point in the detection frame. When the user's face is located in the detection frame, the display coordinates of each key point may be automatically located.
S206. The network device determines a motion trajectory of the object to be verified according to the key point set and position information of each face-image frame.
For example, a three-dimensional face model of the object to be verified may first be determined from the position-change information of important key points such as the eyes, mouth corners, and nose, together with the angles and relative distances between these important key points, yielding three-dimensional coordinates for each key point; the motion trajectory is then determined from the three-dimensional coordinates of any key point.
S207. The network device determines whether the motion trajectory satisfies a preset condition. If yes, the following step S208 is performed; if not, a verification result indicating that the object to be verified is an illegal user may be generated, and the process returns to step S203.
For example, suppose the preset condition is that the trajectory contains the 5°, 10°, 15°, and 30° deflection-angle points. When the motion trajectory is formed by the user's head rotating 40° from the frontal position (that is, 0°), the trajectory contains the deflection-angle points from 0° to 40°, so the preset condition can be determined to be satisfied. When the motion trajectory is formed by the user's head rotating only 15° from the frontal position (that is, 0°), the trajectory contains only the deflection-angle points from 0° to 15°, so the preset condition can be determined not to be satisfied; in that case, the user may further be notified that verification failed, along with the reason for the failure (for example, that the current photo does not meet the requirements), so that the user can capture the image again.
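The check in step S207 can be sketched as follows. This is a minimal illustration, assuming the trajectory is represented as a list of per-frame head deflection angles in degrees and that a required angle counts as "contained" when it lies inside the swept range; the angle values and function names are illustrative, not from the patent text.

```python
# Assumed preset condition: the trajectory must cover these deflection-angle points.
REQUIRED_ANGLES = [5, 10, 15, 30]

def trajectory_satisfies(trajectory_angles, required=REQUIRED_ANGLES):
    """Return True if every required deflection-angle point lies within the
    range of angles swept by the motion trajectory."""
    lo, hi = min(trajectory_angles), max(trajectory_angles)
    return all(lo <= a <= hi for a in required)

# Head rotated from 0° to 40°: covers the 5/10/15/30° points -> satisfied.
print(trajectory_satisfies([0, 8, 17, 25, 40]))  # True
# Head rotated only from 0° to 15°: misses the 30° point -> not satisfied.
print(trajectory_satisfies([0, 6, 11, 15]))      # False
```

A production check would also verify continuity of the sweep (as the text notes, human motion is coherent), rather than only its range.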
S208. The network device selects, from the video stream data, the face image corresponding to a preset trajectory point as the target face image.
For example, the preset trajectory point may be the 0° deflection-angle point; in that case, the target face image is the face image corresponding to the 0° deflection-angle point in the video stream data.
S209. The network device determines at least one target key point from the key point set of the target face image, and determines a normalized image according to the position information of the target key points and the target face image.
For example, the foregoing step S209 may specifically include:
acquiring a preset position of each target key point;
calculating the Euclidean distance between each preset position and the corresponding position information; and
performing a similarity transform on the target face image according to the Euclidean distances to obtain the normalized image.
For example, the target key points may be five points: the left and right pupils, the left and right mouth corners, and the tip of the nose, and the preset positions may be the two-dimensional coordinates of these five points in a standard face model with respect to the same reference coordinate system. By placing the target face image and the standard face model in the same reference coordinate system, and adjusting the target face image through similarity transforms such as rotation, translation, and scaling so that the five points in the target face image are brought as close as possible to the corresponding points in the standard face model, the normalization of the target face image is achieved and the normalized image is obtained.
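The alignment described above can be illustrated with the closed-form least-squares similarity estimate (often attributed to Umeyama), which finds the scale, rotation, and translation minimizing the distances between the captured key points and their preset standard-model positions. This is a sketch under assumptions: the five point coordinates below are invented for the example, and a real system would then resample the whole target face image with the recovered transform rather than only mapping the points.

```python
import numpy as np

def estimate_similarity(src, dst):
    """Least-squares similarity transform: find scale s, rotation R,
    translation t such that dst ~= s * R @ src + t (Umeyama-style)."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    cov = dst_c.T @ src_c / len(src)                 # cross-covariance
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.diag([1.0, d])                            # guard against reflections
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = dst_mean - scale * (R @ src_mean)
    return scale, R, t

# Five assumed key points (pupils, mouth corners, nose tip) in the captured
# image, and their preset positions in the standard face model (here built
# by applying a known similarity so the fit is exact).
captured = np.array([[30., 30.], [70., 32.], [38., 80.], [66., 82.], [50., 55.]])
theta = np.deg2rad(12.0)
rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
preset = (1.5 * (rot @ captured.T)).T + np.array([10.0, -5.0])

s, R, t = estimate_similarity(captured, preset)
aligned = (s * (R @ captured.T)).T + t   # key points after normalization
print(np.allclose(aligned, preset, atol=1e-6))  # True: Euclidean distances driven to ~0
```

Minimizing the summed Euclidean distances in this way is exactly the "rotation, translation, and scaling" adjustment the text describes.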
S210. The network device computes on the normalized image using a preset classification model to obtain the credibility that the object to be verified is a living body, and determines whether the credibility is greater than a first preset threshold. If yes, the following step S211 is performed; if not, a verification result indicating that the object to be verified is an illegal user may be generated, and the process returns to step S203.
For example, the preset classification model may be obtained in advance by training a CNN on a large number of screen-remake photo samples (negative samples) and normal photo samples (positive samples). When the normalized image is input into the trained CNN, the image information is transformed stage by stage from the input layer to the output layer, and the output layer finally outputs a probability value, namely the credibility. The first preset threshold may be 0.5; in that case, a credibility of 0.7 yields a determination of yes, while a credibility of 0.3 yields a determination of no.
S211. The network device divides the target face image into a plurality of face regions according to the key point set of the target face image, and determines target feature information according to the plurality of face regions.
For example, the target face image may be segmented based on the relative positional relationships between the key points to obtain a plurality of face regions including the eyes, mouth, nose, eyebrows, and cheeks; features are then extracted from the different face regions by different deep learning networks, and the extracted features are recombined to obtain the target feature information.
S212. The network device calculates, using a preset algorithm, the similarity between each stored feature information item in the stored feature information set and the target feature information, and determines whether any of the calculated similarities is not less than a second preset threshold. If yes, the following step S213 is performed; if not, a verification result indicating that the object to be verified is an illegal user may be generated, and the process returns to step S203.
For example, the similarity between each stored feature information item and the target feature information may be calculated by a joint Bayesian algorithm, yielding a plurality of similarities {A1, A2, ..., An}. If some Ai in {A1, A2, ..., An}, where i ∈ {1, 2, ..., n}, is greater than or equal to the second preset threshold, the determination is yes; if no such Ai exists, the determination is no. When the determination is no, the user may further be notified that verification failed, along with the reason for the failure (for example, that the user cannot be found).
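The decision in steps S212–S213 can be sketched as follows. The similarity function itself (a joint Bayesian model in the text) is abstracted away here; the threshold value and identifiers are illustrative assumptions.

```python
SECOND_THRESHOLD = 0.8  # assumed value of the second preset threshold

def match_user(similarities, threshold=SECOND_THRESHOLD):
    """similarities: dict mapping user identifier -> similarity Ai between
    that user's stored feature information and the target feature
    information. Returns the target user identifier, or None when no
    similarity reaches the threshold (illegal user)."""
    user_id, best = max(similarities.items(), key=lambda kv: kv[1])
    return user_id if best >= threshold else None

print(match_user({"user_a": 0.91, "user_b": 0.40}))  # user_a (login succeeds)
print(match_user({"user_a": 0.55, "user_b": 0.40}))  # None (verification fails)
```

Taking the maximum first ensures that when several stored users exceed the threshold, the closest match becomes the target user identifier.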
S213. The network device takes the user identifier corresponding to the similarity that is not less than the second preset threshold as the target user identifier, and generates a verification result indicating that the object to be verified corresponds to the target user identifier.
For example, the user identifier corresponding to the similarity Ai (namely the target user identifier) may be taken as the identity verification result for the object to be verified; this result may be displayed to the user in the form of prompt information to inform the user that login succeeded.
As can be seen from the above, in the identity verification method provided by this embodiment, the network device acquires a user registration request carrying a to-be-registered user identifier and a to-be-registered face image, determines to-be-registered feature information from the to-be-registered face image, associates the to-be-registered feature information with the to-be-registered user identifier, and adds the to-be-registered feature information to the stored feature information set. It then acquires a login request and provides action prompt information to the object to be verified according to the login request; acquires video stream data of the object to be verified, the video stream data being the consecutive face-image frames captured while the object to be verified performs the corresponding action according to the action prompt information; and acquires the key point set of each face-image frame in the video stream data and the position information of each key point in the key point set. It determines the motion trajectory of the object to be verified from the key point set and position information of each face-image frame and determines whether the trajectory satisfies a preset condition; if not, it may generate a verification result indicating that the object to be verified is an illegal user. If so, it selects the face image corresponding to a preset trajectory point from the video stream data as the target face image, determines at least one target key point from the key point set of the target face image, and determines a normalized image from the position information of the target key points and the target face image. It then computes on the normalized image with a preset classification model to obtain the credibility that the object to be verified is a living body, and determines whether the credibility is greater than a first preset threshold. If so, it divides the target face image into a plurality of face regions according to the key point set of the target face image, determines target feature information from the plurality of face regions, calculates with a preset algorithm the similarity between each stored feature information item in the stored feature information set and the target feature information, and determines whether any calculated similarity is not less than a second preset threshold. If so, it takes the user identifier corresponding to the similarity that is not less than the second preset threshold as the target user identifier and generates a verification result indicating that the object to be verified corresponds to the target user identifier. The method can thus effectively block various types of attacks, such as photos, videos, and head models, during face recognition; it is simple and highly secure, and it achieves identity verification without requiring the user to manually enter a password and account, which is convenient and fast.
Third Embodiment
Based on the methods described in the first embodiment and the second embodiment, this embodiment provides a further description from the perspective of an identity verification apparatus, which may be integrated in a network device.
Referring to FIG. 3a, FIG. 3a specifically describes an identity verification apparatus provided by the third embodiment of the present invention, which may include:
at least one memory; and
at least one processor;
wherein the at least one memory stores at least one instruction module configured to be executed by the at least one processor, and the at least one instruction module includes:
a providing module 10, an acquiring module 20, a first determining module 30, a second determining module 40, and a verification module 50, wherein:
(1) Providing module 10
The providing module 10 is configured to provide action prompt information to the object to be verified.
In this embodiment, the action prompt information is mainly used to prompt the user to perform certain specified actions, such as shaking the head or blinking, and may be displayed in the form of a prompt box or a prompt interface. When the user taps a button on the interactive interface, such as "face login", the providing module 10 may be triggered to provide the action prompt information.
(2) Acquiring module 20
The acquiring module 20 is configured to acquire video stream data of the object to be verified, where the video stream data is the consecutive face-image frames captured while the object to be verified performs the corresponding action according to the action prompt information.
In this embodiment, the video stream data may be a segment of video captured within a specified period (for example, one minute), mainly covering image data of the user's face; the acquiring module 20 may capture it through a video capture device such as a camera.
(3) First determining module 30
The first determining module 30 is configured to determine a target face image according to the video stream data.
For example, the first determining module 30 may specifically be configured to:
1-1. Acquire the key point set of each face-image frame in the video stream data, and the position information of each key point in the key point set.
In this embodiment, the key points in the key point set mainly refer to feature points in the face image, that is, points where the image gray value changes sharply, or points of large curvature on image edges (that is, the intersection of two edges), such as the eyes, eyebrows, nose, mouth, and facial contour. The first determining module 30 may perform the key point extraction through deep learning models such as an ASM (Active Shape Model) or an AAM (Active Appearance Model). The position information is mainly two-dimensional coordinates with respect to a certain reference coordinate system (for example, the face capture interface displayed by the terminal).
1-2. Determine the motion trajectory of the object to be verified according to the key point set and position information of each face-image frame.
In this embodiment, the motion trajectory mainly refers to the route formed by the whole face or a local region from the start of the action to its end when the object to be verified performs the corresponding action according to the action prompt information, for example a blink trajectory or a head-shake trajectory. Specifically, the first determining module 30 may first determine a three-dimensional face model of the object to be verified from the position-change information of certain important key points in each face-image frame (such as the eyes, mouth corners, cheek edges, and nose), together with the angles and relative distances between these key points, obtaining three-dimensional coordinates for each key point, and then determine the motion trajectory from the three-dimensional coordinates of any key point.
1-3. Determine the target face image from the video stream data according to the motion trajectory.
For example, the foregoing step 1-3 may specifically include:
determining whether the motion trajectory satisfies a preset condition;
if yes, selecting the face image corresponding to a preset trajectory point from the video stream data as the target face image; and
if not, generating a verification result indicating that the object to be verified is an illegal user.
In this embodiment, the preset condition mainly depends on the characteristics of human motion. Considering that human motion is continuous, the preset condition may be set so that the motion trajectory contains a plurality of specified trajectory points, such as the 5°, 15°, and 30° deflection-angle points; alternatively, the preset condition may be set so that the number of trajectory points in the motion trajectory reaches a certain value, for example 10. The preset trajectory point may be chosen according to actual needs; for example, considering that the more key points there are on a face image, the more accurate the conclusion, the 0° deflection-angle point may be selected as the preset trajectory point, that is, the frontal face image is selected as the target face image. Of course, considering that the user may not begin capture exactly at the 0° deflection-angle point, the preset trajectory point may be a small interval of points including the 0° deflection-angle point, rather than a single point.
There are mainly two kinds of illegal users: unknown living users and virtual users. An unknown living user mainly refers to a living user who has not registered or been authenticated on the system platform; a virtual user mainly refers to a pseudo-living user forged by attackers from a single photo, a video, or a head model of a legitimate user (that is, remade from a screen). Specifically, when the motion trajectory satisfies the specified condition, the target face image was not remade from one or more photos; in that case, whether the target face image was remade from a video or forged with a head model needs to be further confirmed from the texture characteristics of the image. When the motion trajectory does not satisfy the specified condition, for example when there are only two or three trajectory points, the object to be verified is very likely a pseudo-living user forged by remaking one or more photos of a user; in that case, it may be directly determined to be an illegal user, and the user is prompted to perform the detection again.
(4) Second determining module 40
The second determining module 40 is configured to determine, according to the target face image, the credibility that the object to be verified is a living body.
In this embodiment, the credibility mainly refers to the degree of confidence that the object to be verified is a living body, and may take the form of a probability value or a score. Since the texture of a picture remade from a screen differs from that of a normal picture, the second determining module 40 may determine the credibility of the user to be verified by performing feature analysis on the target face image. That is, referring to FIG. 3b, the second determining module 40 may specifically include a first determining submodule 41, a second determining submodule 42, and a calculating submodule 43, wherein:
The first determining submodule 41 is configured to determine at least one target key point from the key point set of the target face image.
In this embodiment, the target key points mainly include feature points whose relative positions are comparatively stable and which have clearly distinguishing characteristics, such as the left and right pupils, the left and right mouth corners, and the tip of the nose, and may be chosen according to actual needs.
The second determining submodule 42 is configured to determine a normalized image according to the position information of the target key points and the target face image.
For example, the second determining submodule 42 may specifically be configured to:
acquire a preset position of each target key point;
calculate the Euclidean distance between each preset position and the corresponding position information; and
perform a similarity transform on the target face image according to the Euclidean distances to obtain the normalized image.
In this embodiment, the preset positions may be obtained from a standard face model, and the Euclidean distance refers to the distance between the preset position and the position information corresponding to each target key point. The similarity transform may include operations such as rotation, translation, and scaling; generally, the images before and after a similarity transform contain the same figures, that is, the shapes of the contained figures are unchanged. Specifically, by continuously adjusting the size, rotation angle, and coordinate position of the target face image, the second determining submodule 42 can minimize the distances between the preset positions of the target key points and the corresponding position information, that is, normalize the target face image to the standard face model and obtain the normalized image.
The calculating submodule 43 is configured to compute on the normalized image using a preset classification model to obtain the credibility that the object to be verified is a living body.
In this embodiment, the preset classification model mainly refers to a trained deep neural network, which may be obtained by training deep models such as a CNN (Convolutional Neural Network). A CNN is a multi-layer neural network composed of an input layer, convolutional layers, pooling layers, fully connected layers, and an output layer; it supports feeding images as multi-dimensional input vectors directly into the network, avoiding data reconstruction during feature extraction and classification and greatly reducing the complexity of image processing. When the normalized image is input into the CNN, information is transformed stage by stage from the input layer to the output layer; the computation performed by the CNN is, in effect, the process of multiplying the input (the normalized image) with the weight matrix of each layer to obtain the final output (namely the credibility of the object to be verified).
It is easy to understand that the preset classification model needs to be trained in advance from samples and classification information; that is, the identity verification apparatus may further include a training module 60, configured to:
before the calculating submodule 43 computes on the normalized image using the preset classification model, acquire a preset face image set and category information of each preset face image in the preset face image set; and
train a convolutional neural network according to the preset face image set and the category information to obtain the preset classification model.
In this embodiment, since the preset classification model is mainly used to discern whether the user to be verified is a virtual user forged by a screen remake, the preset face image set may include screen-remake photo samples (negative samples) and normal photo samples (positive samples); the specific number of samples may be chosen according to actual needs. The category information is usually manually annotated and may include two categories: remake photos and normal photos.
The training process mainly includes two stages: a forward propagation stage and a backward propagation stage. In the forward propagation stage, the training module 60 may input each sample X_i (namely a preset face image) into an n-layer convolutional neural network to obtain an actual output O_i, where O_i = F_n(…(F_2(F_1(X_i·W^(1))·W^(2))…)·W^(n)), i is a positive integer, W^(n) is the weight of the n-th layer, and F is an activation function (such as the sigmoid function or the hyperbolic tangent function); by feeding the preset face image set into the convolutional neural network, the weight matrices are obtained. Then, in the backward propagation stage, the training module 60 may calculate the difference between each actual output O_i and the ideal output Y_i, and back-propagate to adjust the weight matrices by minimizing the error, where Y_i is obtained from the category information of the sample X_i; for example, if the sample X_i is a normal photo, Y_i may be set to 1, and if the sample X_i is a remake photo, Y_i may be set to 0. Finally, the trained convolutional neural network, namely the preset classification model, is determined from the adjusted weight matrices.
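The nested composition O_i = F_n(…F_2(F_1(X_i·W^(1))·W^(2))…·W^(n)) can be illustrated with a toy scalar version, in which each "layer" is a single multiplicative weight followed by a sigmoid activation. This is a drastic simplification of the matrix form above (real CNN layers use matrix products, convolutions, and pooling), intended only to show the shape of the forward pass and of the error that backward propagation minimizes.

```python
import math

def sigmoid(z):
    """F: the activation function mentioned in the text."""
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, weights, activation=sigmoid):
    """Toy forward pass: O = F_n(...F_2(F_1(x*W1)*W2)...*Wn),
    with scalar stand-ins W1..Wn for the per-layer weight matrices."""
    out = x
    for w in weights:
        out = activation(out * w)
    return out

o = forward(1.0, [0.8, 1.2, -0.5])  # a 3-layer toy network
# Squared error against the ideal output Y=1 (a normal-photo sample),
# as minimized during the backward propagation stage.
loss = (o - 1.0) ** 2
print(o, loss)
```

Backward propagation would differentiate this loss with respect to each weight and adjust the weights against the gradient; that step is omitted here.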
(5) Verification module 50
The verification module 50 is configured to perform identity verification on the object to be verified according to the credibility and the target face image.
For example, the verification module 50 may specifically include a judging submodule 51, a verification submodule 52, and a generating submodule 53, wherein:
The judging submodule 51 is configured to determine whether the credibility is greater than a first preset threshold.
In this embodiment, the first preset threshold may depend on the actual application domain. For example, when the identity verification method is mainly applied in financial fields with high security requirements, the first preset threshold may be set comparatively high, for example 0.9; when the method is mainly applied in fields with relatively low security requirements, such as a conference check-in system, the first preset threshold may be set comparatively low, for example 0.5.
The verification submodule 52 is configured to, if the credibility is greater than the first preset threshold, perform identity verification on the object to be verified according to the target face image.
In this embodiment, when the calculated credibility is greater than the first preset threshold, the object to be verified is very likely a living user; the verification submodule 52 then needs to further analyze whether the living user is an unknown living user or a registered or authenticated living user. That is, referring to FIG. 3c, the verification submodule 52 may specifically include a dividing unit 521, a determining unit 522, and a verification unit 523, wherein:
The dividing unit 521 is configured to divide the target face image into a plurality of face regions according to the key point set of the target face image.
In this embodiment, the face regions mainly refer to the regions of the facial features, such as the eyes, mouth, nose, eyebrows, and cheeks; the target face image is segmented mainly based on the relative positional relationships between the key points.
The determining unit 522 is configured to determine target feature information according to the plurality of face regions.
For example, the determining unit 522 may specifically be configured to:
perform a feature extraction operation on the face regions to obtain a plurality of feature information items, each face region corresponding to one feature information item; and
recombine the plurality of feature information items to obtain the target feature information.
In this embodiment, the determining unit 522 may extract features from the face regions through deep learning networks and recombine the extracted features into a feature string (namely the target feature information); since different face regions correspond to different geometric models, different deep learning networks may be used to extract features from different face regions, in order to improve extraction efficiency and accuracy.
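The extract-then-recombine structure described above can be sketched as follows. The region names and the per-region extractors are illustrative stubs standing in for the per-region deep learning networks; a fixed iteration order keeps the recombined feature string stable across runs.

```python
def extract_target_features(regions, extractors):
    """regions: dict mapping region name -> that region's pixel data;
    extractors: dict mapping region name -> feature-extraction function.
    Returns the per-region features concatenated into one feature vector
    (the 'target feature information')."""
    feature = []
    for name in sorted(regions):            # fixed order: stable recombination
        feature.extend(extractors[name](regions[name]))
    return feature

# Toy stand-in extractor: mean and max of the region's pixel values.
toy_extractor = lambda pixels: [sum(pixels) / len(pixels), max(pixels)]
regions = {"eyes": [1, 3], "mouth": [2, 4], "nose": [0, 6]}
extractors = {name: toy_extractor for name in regions}
print(extract_target_features(regions, extractors))  # [2.0, 3, 3.0, 4, 3.0, 6]
```

Using a different extractor per region (as the text suggests) only requires placing different functions in the `extractors` dict.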
验证单元523,用于根据该目标特征信息对该待验证对象进行身份验证。The verification unit 523 is configured to perform identity verification on the object to be verified according to the target feature information.
例如,该验证单元523具体可以用于:For example, the verification unit 523 can be specifically configured to:
3-3-1、获取已存储特征信息集、以及该已存储特征信息集中每一已存储特征信息对应的用户标识。3-3-1. Acquire a stored feature information set, and a user identifier corresponding to each stored feature information in the stored feature information set.
本实施例中,该用户标识是用户的唯一识别标志,其可以包括注册账号。该已存储特征信息集包括至少一条已存储特征信息,不同的已存储特征信息是根据不同注册用户的人脸图像得到的。In this embodiment, the user identifier is a unique identifier of the user, which may include a registered account. The stored feature information set includes at least one stored feature information, and the different stored feature information is obtained according to face images of different registered users.
实际应用过程中,需要预先将每一注册用户的用户标识和已存储特征信息进行关联,也即,该身份验证装置还可以包括关联模块,用于:In the actual application process, the user identifier of each registered user and the stored feature information are associated in advance, that is, the identity verification device may further include an association module, configured to:
在该验证单元523获取已存储特征信息集、以及该已存储特征信息集中每一已存储特征信息对应的用户标识之前,获取用户注册请求,该用户注册请求携带待注册用户标识和待注册人脸图像;Before the verification unit 523 acquires the stored feature information set and the user identifier corresponding to each stored feature information in the stored feature information set, acquiring a user registration request, the user registration request carrying the to-be-registered user identifier and the to-be-registered face image;
根据该待注册人脸图像确定待注册特征信息;Determining feature information to be registered according to the image of the face to be registered;
将该待注册特征信息和待注册用户标识进行关联,并将该待注册特征信息添入已存储特征信息集。The to-be-registered feature information is associated with the to-be-registered user identifier, and the to-be-registered feature information is added to the stored feature information set.
In this embodiment, the association module may process the to-be-registered face image with reference to the methods used by the dividing unit 521 and the determining unit 522, to obtain the to-be-registered feature information. The user registration request may be triggered automatically, for example generated as soon as the user's face image has been collected, or triggered by the user, for example generated when the user clicks a "Finish" button; the exact trigger may be chosen according to actual needs. The to-be-registered face image may be captured on the spot, or taken by the user in advance and uploaded.
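The association step, linking a to-be-registered user identifier with its feature information and adding it to the stored feature information set, can be sketched as a small store. The class and method names are illustrative, and the face image is assumed to have already been reduced to a feature vector upstream.

```python
class FeatureStore:
    """Holds the stored feature information set, keyed by user identifier."""

    def __init__(self):
        self._by_user = {}  # user_id -> stored feature information

    def register(self, user_id, feature):
        """Associate the to-be-registered feature with the user identifier."""
        if user_id in self._by_user:
            raise ValueError("user already registered: %s" % user_id)
        self._by_user[user_id] = feature

    def items(self):
        """Yield (user_id, stored_feature) pairs for later matching."""
        return self._by_user.items()
```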
3-3-2、利用预设算法计算每一已存储特征信息和目标特征信息之间的相似度。3-3-2. Calculate the similarity between each stored feature information and the target feature information by using a preset algorithm.
In this embodiment, the preset algorithm may include the joint Bayesian algorithm, a statistical classification method whose main idea is to regard a face as being composed of two parts: inter-person differences, and intra-person variation (such as changes in expression). The overall similarity is computed from the difference between these two parts.
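A full joint Bayesian model requires trained inter-person and intra-person covariance matrices, so the sketch below substitutes cosine similarity as a simpler stand-in (a deliberate simplification, not the algorithm named in the text) for scoring a stored feature vector against the target feature information.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors, in [-1, 1];
    1.0 means identical direction (maximally similar)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```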
3-3-3、根据该相似度和对应的用户标识对该待验证对象进行身份验证。3-3-3. Perform identity verification on the object to be verified according to the similarity and the corresponding user identifier.
进一步的,该验证单元523可以用于:Further, the verification unit 523 can be used to:
判断计算出的所有相似度中是否存在不小于第二预设阈值的相似度;Determining whether there is a similarity that is not less than a second preset threshold among all calculated similarities;
If such a similarity exists, the user identifier corresponding to a similarity that is not less than the second preset threshold is used as the target user identifier, and a verification result indicating that the object to be verified is the user identified by the target user identifier is generated;
若不存在,则生成指示该待验证对象为非法用户的验证结果。If not, generate a verification result indicating that the object to be verified is an illegal user.
In this embodiment, the second preset threshold may be set according to actual needs. For example, face images of a large number of users may be collected in advance, two per user; the similarity between the two face images of each user is then computed, and the average of these similarities is taken as the second preset threshold. In general, because of intra-person variation, the similarity between two face images of the same user taken at different times is usually slightly less than 1, so the second preset threshold may also be set slightly below 1, for example 0.8.
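The threshold calibration just described can be sketched directly: compute each user's same-person similarity from their two enrollment images and take the mean as the second preset threshold. The similarity function is supplied by the caller, since the text leaves the preset algorithm open.

```python
def calibrate_second_threshold(pairs, similarity):
    """pairs: list of (feature_a, feature_b), each pair taken from two face
    images of the SAME user. Returns the mean same-person similarity,
    used as the second preset threshold."""
    scores = [similarity(a, b) for a, b in pairs]
    return sum(scores) / len(scores)
```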
It should be noted that when the verification unit 523 generates a verification result indicating that the object to be verified is the target user identifier, the live user is an already registered or authenticated user. In this case, login can be performed directly according to the target user identifier, without the user manually entering an account and password; the method is simple, convenient, and fast.
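The decision logic of verification unit 523 can be sketched as follows: score the target feature against every stored feature, and accept the best-scoring user only if that score reaches the second preset threshold. The 0.8 default follows the example value in the text; the similarity function and store layout are assumptions of this sketch.

```python
def verify_identity(target_feature, store_items, similarity, threshold=0.8):
    """store_items: iterable of (user_id, stored_feature).
    Returns (user_id, score) on success, or (None, best_score) when the
    object to be verified must be treated as an illegal user."""
    best_user, best_score = None, float("-inf")
    for user_id, stored in store_items:
        score = similarity(stored, target_feature)
        if score > best_score:
            best_user, best_score = user_id, score
    if best_score >= threshold:
        return best_user, best_score  # registered/authenticated live user
    return None, best_score          # no similarity reached the threshold
```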
生成子模块53,用于若该可信度不大于第一预设阈值,则生成指示该待验证对象为非法用户的验证结果。The generating sub-module 53 is configured to generate a verification result indicating that the object to be verified is an illegal user, if the credibility is not greater than the first preset threshold.
In this embodiment, when the calculated credibility is less than or equal to the first preset threshold, the object to be verified is very likely a virtual user produced by re-shooting a screen. In this case, to reduce the misjudgment rate, the user may be prompted to capture the face image again.
具体实施时,以上各个单元可以作为独立的实体来实现,也可以进行任意组合,作为同一或若干个实体来实现,以上各个单元的具体实施可参见前面的方法实施例,在此不再赘述。In the specific implementation, the foregoing units may be implemented as a separate entity, or may be implemented in any combination, and may be implemented as the same or a plurality of entities. For the specific implementation of the foregoing, refer to the foregoing method embodiments, and details are not described herein.
可见,当处理器执行存储器中存储的指令时可实现以上实施例中提供的身份验证方法。It can be seen that the identity verification method provided in the above embodiment can be implemented when the processor executes the instructions stored in the memory.
As can be seen from the above, in the identity verification apparatus provided by this embodiment, the providing module 10 provides action prompt information to the object to be verified, and the obtaining module 20 obtains video stream data of that object, the video stream data being consecutive frames of face images collected while the object performs the corresponding actions according to the prompt. The first determining module 30 then determines a target face image from the video stream data, the second determining module 40 determines, from the target face image, the credibility that the object to be verified is a living body, and the verification module 50 authenticates the object according to that credibility and the target face image. Attacks of various types, such as photos, videos, and head models, can thus be effectively blocked during face recognition; the method is simple and highly secure.
第四实施例Fourth embodiment
相应的,本发明实施例还提供一种身份验证系统,包括本发明实施例所提供的任一种身份验证装置,该身份验证装置具体可参见实施例三。Correspondingly, the embodiment of the present invention further provides an identity verification system, which includes any of the identity verification devices provided by the embodiments of the present invention. For details, refer to the third embodiment.
The network device may provide action prompt information to the object to be verified; obtain video stream data of the object to be verified, the video stream data being consecutive frames of face images collected while the object performs the corresponding actions according to the prompt; determine a target face image from the video stream data; determine, from the target face image, the credibility that the object to be verified is a living body; and authenticate the object to be verified according to the credibility and the target face image.
以上各个设备的具体实施可参见前面的实施例,在此不再赘述。For the specific implementation of the foregoing devices, refer to the foregoing embodiments, and details are not described herein again.
Because the identity verification system may include any identity verification apparatus provided by the embodiments of the present invention, it can achieve the beneficial effects achievable by any such apparatus; for details, refer to the foregoing embodiments, which are not repeated here.
第五实施例Fifth embodiment
相应的,本发明实施例还提供一种网络设备,如图4所示,其示出了本发明实施例所涉及的网络设备的结构示意图,具体来讲:Correspondingly, the embodiment of the present invention further provides a network device, as shown in FIG. 4, which shows a schematic structural diagram of a network device according to an embodiment of the present invention, specifically:
The network device may include a processor 701 with one or more processing cores, a memory 702 with one or more computer-readable storage media, a radio frequency (RF) circuit 703, a power supply 704, an input unit 705, a display unit 706, and other components. Those skilled in the art will understand that the network device structure shown in FIG. 4 does not limit the network device, which may include more or fewer components than shown, combine certain components, or use a different arrangement of components. Specifically:
The processor 701 is the control center of the network device; it connects the various parts of the entire network device using various interfaces and lines, and performs the various functions of the network device and processes data by running or executing the software programs and/or modules stored in the memory 702 and invoking the data stored in the memory 702, thereby monitoring the network device as a whole. Optionally, the processor 701 may include one or more processing cores; preferably, the processor 701 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, applications, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 701.
The memory 702 may be used to store software programs and modules; the processor 701 executes various functional applications and performs data processing by running the software programs and modules stored in the memory 702. The memory 702 may mainly include a program storage area and a data storage area: the program storage area may store the operating system and the applications required by at least one function (such as a sound-playing function or an image-playing function), and the data storage area may store data created through the use of the network device. In addition, the memory 702 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device or flash memory device, or other solid-state storage devices. Accordingly, the memory 702 may further include a memory controller to provide the processor 701 with access to the memory 702.
The RF circuit 703 may be used to receive and send signals in the course of receiving and sending information; in particular, after receiving downlink information from a base station, it hands the information to the one or more processors 701 for processing, and it also sends uplink data to the base station. Generally, the RF circuit 703 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a subscriber identity module (SIM) card, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 703 may also communicate with networks and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
The network device further includes a power supply 704 (such as a battery) that supplies power to the components. Preferably, the power supply 704 may be logically connected to the processor 701 through a power management system, so that charging, discharging, power-consumption management, and similar functions are managed through the power management system. The power supply 704 may further include any components such as one or more DC or AC power sources, a recharging system, a power-failure detection circuit, a power converter or inverter, and a power status indicator.
The network device may further include an input unit 705, which may be used to receive input digital or character information and to generate keyboard, mouse, joystick, optical, or trackball signal input related to user settings and function control. Specifically, in one particular embodiment, the input unit 705 may include a touch-sensitive surface and other input devices. The touch-sensitive surface, also called a touch display screen or touchpad, can collect touch operations performed by the user on or near it (for example, operations performed on or near the touch-sensitive surface by the user with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection apparatus according to a preset program. Optionally, the touch-sensitive surface may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the position of the user's touch and the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends the coordinates to the processor 701, and can receive and execute commands sent by the processor 701. In addition, the touch-sensitive surface may be implemented in multiple types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch-sensitive surface, the input unit 705 may also include other input devices, which may specifically include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a switch key), a trackball, a mouse, and a joystick.
The network device may further include a display unit 706, which may be used to display information entered by the user or provided to the user, as well as the various graphical user interfaces of the network device; these graphical user interfaces may be composed of graphics, text, icons, video, and any combination thereof. The display unit 706 may include a display panel, which may optionally be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like. Further, the touch-sensitive surface may cover the display panel; when the touch-sensitive surface detects a touch operation on or near it, the operation is transmitted to the processor 701 to determine the type of the touch event, and the processor 701 then provides a corresponding visual output on the display panel according to the type of the touch event. Although in FIG. 4 the touch-sensitive surface and the display panel implement the input and output functions as two separate components, in some embodiments the touch-sensitive surface and the display panel may be integrated to implement the input and output functions.
Although not shown, the network device may further include a camera, a Bluetooth module, and the like, which are not described here. Specifically, in this embodiment, the processor 701 in the network device loads, according to the following instructions, the executable files corresponding to the processes of one or more applications into the memory 702, and runs the applications stored in the memory 702, thereby implementing various functions, as follows:
向待验证对象提供动作提示信息;Providing action prompt information to the object to be verified;
获取该待验证对象的视频流数据,该视频流数据为该待验证对象根据该动作提示信息做出相应动作时采集的连续帧人脸图像;Obtaining video stream data of the object to be verified, where the video stream data is a continuous frame face image collected when the object to be verified performs corresponding action according to the action prompt information;
根据该视频流数据确定目标人脸图像;Determining a target face image according to the video stream data;
根据该目标人脸图像确定该待验证对象为活体的可信度;Determining, according to the target face image, the credibility of the object to be verified as a living body;
根据该可信度和目标人脸图像对该待验证对象进行身份验证。The object to be verified is authenticated according to the credibility and the target face image.
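The five operations above can be sketched as one pipeline. Every helper here (prompt delivery, frame capture, target-frame selection, liveness scoring, face matching) is a placeholder supplied by the caller, since the disclosure does not fix concrete implementations, and the threshold default is illustrative.

```python
def authenticate(prompt, capture_stream, pick_target, liveness_score,
                 match_identity, first_threshold=0.5):
    prompt()                          # 1. issue the action prompt
    frames = capture_stream()         # 2. consecutive face-image frames
    target = pick_target(frames)      # 3. choose the target face image
    if target is None:
        return "illegal"              # motion-trajectory check failed
    score = liveness_score(target)    # 4. credibility of being a living body
    if score <= first_threshold:
        return "illegal"              # likely a re-shot screen
    return match_identity(target)     # 5. identify the live user
```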
以上各操作的实现方法具体可参见上述实施例,此处不再赘述。For the implementation of the foregoing operations, refer to the foregoing embodiments, and details are not described herein again.
The network device can achieve the beneficial effects achievable by any identity verification apparatus provided by the embodiments of the present invention; for details, refer to the foregoing embodiments, which are not repeated here.
本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过程序来指令相关的硬件来完成,该程序可以存储于一计算机可读存储介质中,存储介质可以包括:只读存储器(ROM,Read Only Memory)、随机存取记忆体(RAM,Random Access Memory)、磁盘或光盘等。A person skilled in the art may understand that all or part of the various steps of the foregoing embodiments may be performed by a program to instruct related hardware. The program may be stored in a computer readable storage medium, and the storage medium may include: Read Only Memory (ROM), Random Access Memory (RAM), disk or optical disk.
An embodiment of the present invention further provides a computer storage medium storing computer-readable instructions or a program which, when executed by a processor, perform the above method.
The identity verification method, apparatus, and system provided by the embodiments of the present invention have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is intended only to help understand the method of the present invention and its core idea. Meanwhile, those skilled in the art may, following the idea of the present invention, make changes to the specific implementations and the application scope. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (25)

  1. 一种身份验证方法,由网络设备执行,该方法包括:An authentication method is performed by a network device, and the method includes:
    向待验证对象提供动作提示信息;Providing action prompt information to the object to be verified;
    获取所述待验证对象的视频流数据,所述视频流数据为所述待验证对象根据所述动作提示信息做出相应动作时采集的连续帧人脸图像;Acquiring the video stream data of the object to be verified, where the video stream data is a continuous frame face image collected when the object to be verified performs corresponding action according to the action prompt information;
    根据所述视频流数据确定目标人脸图像;Determining a target face image according to the video stream data;
    根据所述目标人脸图像确定所述待验证对象为活体的可信度;Determining, according to the target face image, the credibility of the object to be verified as a living body;
    根据所述可信度和所述目标人脸图像对所述待验证对象进行身份验证。And authenticating the object to be verified according to the credibility and the target face image.
  2. 根据权利要求1所述的身份验证方法,其中,所述根据所述视频流数据确定目标人脸图像,包括:The identity verification method according to claim 1, wherein the determining the target face image according to the video stream data comprises:
    获取所述视频流数据中每一帧人脸图像的关键点集、以及所述关键点集中每一关键点的位置信息;Obtaining a key point set of each frame face image in the video stream data, and location information of each key point in the key point set;
    根据每一帧人脸图像的关键点集和位置信息确定所述待验证对象的运动轨迹;Determining a motion trajectory of the object to be verified according to a key point set and position information of each frame of the face image;
    根据所述运动轨迹从所述视频流数据中确定所述目标人脸图像。Determining the target face image from the video stream data according to the motion trajectory.
  3. 根据权利要求2所述的身份验证方法,其中,所述根据所述运动轨迹从所述视频流数据中确定目标人脸图像,包括:The identity verification method according to claim 2, wherein the determining the target face image from the video stream data according to the motion trajectory comprises:
    判断所述运动轨迹是否满足预设条件;Determining whether the motion trajectory meets a preset condition;
If yes, selecting, from the video stream data, a face image corresponding to a preset trajectory point, and using the selected face image as the target face image, the preset trajectory point being a preset position point in the motion trajectory;
    若否,则生成指示所述待验证对象为非法用户的验证结果。If not, generating a verification result indicating that the object to be verified is an illegal user.
  4. 根据权利要求2所述的身份验证方法,其中,所述根据所述目标人脸图像确定所述待验证对象为活体的可信度,包括:The identity verification method according to claim 2, wherein the determining, according to the target face image, the credibility of the object to be verified as a living body comprises:
    从所述目标人脸图像的关键点集中确定至少一个目标关键点;Determining at least one target key point from a key point of the target face image;
    根据所述目标关键点的位置信息和所述目标人脸图像确定归一化图像;Determining a normalized image according to the location information of the target key point and the target face image;
    利用预设分类模型对所述归一化图像进行计算,得到所述待验证对象为活体的可信度。The normalized image is calculated by using a preset classification model to obtain the reliability of the object to be verified as a living body.
  5. 根据权利要求4所述的身份验证方法,其中,所述根据所述目标关键点的位置信息和所述目标人脸图像确定归一化图像,包括:The identity verification method according to claim 4, wherein the determining the normalized image according to the location information of the target keypoint and the target facial image comprises:
    获取每一目标关键点的预设位置;Get the preset position of each target key point;
    计算每一目标关键点的预设位置和在所述目标人脸图像中相应的位置信息之间的欧氏距离;Calculating an Euclidean distance between a preset position of each target key point and corresponding position information in the target face image;
    根据所述欧氏距离对所述目标人脸图像进行相似变换,得到所述归一化图像。Performing a similar transformation on the target face image according to the Euclidean distance to obtain the normalized image.
  6. 根据权利要求4所述的身份验证方法,其中,在利用预设分类模型对所述归一化图像进行计算之前,还包括:The identity verification method according to claim 4, further comprising: before calculating the normalized image by using a preset classification model, further comprising:
    获取预设人脸图像集、以及所述预设人脸图集中每一预设人脸图像的类别信息;Obtaining a preset face image set, and category information of each preset face image in the preset face map set;
    根据所述预设图像集和所述类别信息对卷积神经网络进行训练,得到所述预设分类模型。The convolutional neural network is trained according to the preset image set and the category information to obtain the preset classification model.
  7. 根据权利要求2-6中任意一项所述的身份验证方法,其中,所述根据所述可信度和所述目标人脸图像对所述待验证对象进行身份验证,包括:The identity verification method according to any one of claims 2-6, wherein the authenticating the object to be verified according to the credibility and the target face image comprises:
    判断所述可信度是否大于第一预设阈值;Determining whether the credibility is greater than a first preset threshold;
    若是,则根据所述目标人脸图像对所述待验证对象进行身份验证;If yes, performing identity verification on the object to be verified according to the target face image;
    若否,则生成指示所述待验证对象为非法用户的验证结果。If not, generating a verification result indicating that the object to be verified is an illegal user.
  8. 根据权利要求7所述的身份验证方法,其中,所述根据目标人脸图像对所述待验证对象进行身份验证,包括:The identity verification method according to claim 7, wherein the authenticating the object to be verified according to the target face image comprises:
    根据所述目标人脸图像的关键点集将所述目标人脸图像划分成多个人脸区域;Dividing the target face image into a plurality of face regions according to a key point set of the target face image;
    根据所述多个人脸区域确定目标特征信息;Determining target feature information according to the plurality of face regions;
    根据所述目标特征信息对所述待验证对象进行身份验证。And authenticating the object to be verified according to the target feature information.
  9. 根据权利要求8所述的身份验证方法,其中,所述根据所述多个人脸区域确定目标特征信息,包括:The identity verification method according to claim 8, wherein the determining the target feature information according to the plurality of face regions comprises:
    对所述多个人脸区域进行特征提取操作,得到多条特征信息;每一人脸区域对应一条特征信息;Performing a feature extraction operation on the plurality of face regions to obtain a plurality of pieces of feature information; each face region corresponding to one piece of feature information;
    对所述多条特征信息进行重组,得到所述目标特征信息。Recombining the plurality of pieces of feature information to obtain the target feature information.
  10. 根据权利要求8所述的身份验证方法,其中,所述根据所述目标特征信息对所述待验证对象进行身份验证,包括:The identity verification method according to claim 8, wherein the authenticating the object to be verified according to the target feature information comprises:
    获取已存储特征信息集、以及所述已存储特征信息集中每一已存储特征信息对应的用户标识;Obtaining a stored feature information set, and a user identifier corresponding to each stored feature information in the stored feature information set;
    利用预设算法计算每一已存储特征信息和所述目标特征信息之间的相似度;Calculating a similarity between each stored feature information and the target feature information by using a preset algorithm;
    根据所述相似度和对应的用户标识对所述待验证对象进行身份验证。And authenticating the object to be verified according to the similarity and the corresponding user identifier.
  11. 根据权利要求10所述的身份验证方法,其中,所述根据所述相似度和对应的用户标识对所述待验证对象进行身份验证,包括:The identity verification method according to claim 10, wherein the authenticating the object to be verified according to the similarity and the corresponding user identifier comprises:
    判断计算出的所有相似度中是否存在不小于第二预设阈值的相似度;Determining whether there is a similarity that is not less than a second preset threshold among all calculated similarities;
If such a similarity exists, using the user identifier corresponding to the similarity that is not less than the second preset threshold as the target user identifier, and generating a verification result indicating that the object to be verified is the user identified by the target user identifier;
    若不存在,则生成指示所述待验证对象为非法用户的验证结果。If not, generate a verification result indicating that the object to be verified is an illegal user.
  12. 根据权利要求10所述的身份验证方法,其中,在获取已存储特征信息集、以及所述已存储特征信息集中每一已存储特征信息对应的用户标识之前,还包括:The identity verification method according to claim 10, further comprising: before acquiring the stored feature information set and the user identifier corresponding to each stored feature information in the stored feature information set, further comprising:
    获取用户注册请求,所述用户注册请求携带待注册用户标识和待注册人脸图像;Obtaining a user registration request, where the user registration request carries a user identifier to be registered and a face image to be registered;
    根据所述待注册人脸图像确定待注册特征信息;Determining feature information to be registered according to the face image to be registered;
    将所述待注册特征信息和待注册用户标识进行关联,并将所述待注册特征信息添入已存储特征信息集。Associating the to-be-registered feature information with the to-be-registered user identifier, and adding the to-be-registered feature information to the stored feature information set.
  13. An identity verification apparatus, comprising:
    at least one memory; and
    at least one processor;
    wherein the at least one memory stores at least one instruction, and the at least one instruction is executed by the at least one processor to implement the following method:
    providing action prompt information to an object to be verified;
    acquiring video stream data of the object to be verified, the video stream data being consecutive frames of face images collected while the object to be verified performs a corresponding action according to the action prompt information;
    determining a target face image according to the video stream data;
    determining, according to the target face image, a credibility that the object to be verified is a living body; and
    authenticating the object to be verified according to the credibility and the target face image.
  14. The identity verification apparatus according to claim 13, wherein the determining a target face image according to the video stream data comprises: acquiring a key point set of each frame of face image in the video stream data and position information of each key point in the key point set; determining a motion trajectory of the object to be verified according to the key point set and the position information of each frame of face image; and determining the target face image from the video stream data according to the motion trajectory.
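The trajectory step of claim 14 can be illustrated by tracking a summary point of each frame's key points across the video stream. Using the per-frame centroid is an assumption made for this sketch; the claim does not fix how the trajectory is derived from the key point positions.

```python
def motion_trajectory(frames_keypoints):
    """frames_keypoints: list of frames, each a list of (x, y) key points.

    Returns one (x, y) point per frame -- here the centroid of that frame's
    key point set -- as the motion trajectory of the object to be verified.
    """
    trajectory = []
    for points in frames_keypoints:
        xs = [p[0] for p in points]
        ys = [p[1] for p in points]
        trajectory.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return trajectory
```

A frame whose trajectory point coincides with a preset trajectory point would then be selected as the target face image, as in claim 16.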
  15. The identity verification apparatus according to claim 14, wherein the determining, according to the target face image, a credibility that the object to be verified is a living body comprises: determining at least one target key point from the key point set of the target face image; determining a normalized image according to position information of the target key point and the target face image; and performing computation on the normalized image by using a preset classification model to obtain the credibility that the object to be verified is a living body.
  16. The identity verification apparatus according to claim 14, wherein the determining the target face image from the video stream data according to the motion trajectory comprises: determining whether the motion trajectory satisfies a preset condition; if so, selecting, from the video stream data, a face image corresponding to a preset trajectory point, and using the selected face image as the target face image, the preset trajectory point being a preset position point in the motion trajectory; and if not, generating a verification result indicating that the object to be verified is an illegal user.
  17. The identity verification apparatus according to claim 15, wherein the determining a normalized image according to the position information of the target key point and the target face image comprises: acquiring a preset position of each target key point; calculating a Euclidean distance between the preset position of each target key point and the corresponding position information in the target face image; and performing a similarity transformation on the target face image according to the Euclidean distances to obtain the normalized image.
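A rough sketch of the alignment step in claim 17: measure the Euclidean distances between the observed target key points and their preset (canonical) positions, then apply a similarity transform that moves the observed layout onto the preset one. For brevity this sketch estimates only a uniform scale and a translation (rotation is omitted), which is a simplification of a full similarity transform; all names are illustrative.

```python
import math

def euclidean(p, q):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def normalize_points(observed, preset):
    """observed/preset: equal-length lists of (x, y) key points.

    Returns the observed points mapped toward the preset layout by a
    scale-and-translate transform (rotation omitted for brevity)."""
    # Scale: ratio of the spread between the first and last key points.
    scale = euclidean(preset[0], preset[-1]) / euclidean(observed[0], observed[-1])
    # Translation: align the scaled first observed point to its preset position.
    dx = preset[0][0] - observed[0][0] * scale
    dy = preset[0][1] - observed[0][1] * scale
    return [(x * scale + dx, y * scale + dy) for x, y in observed]
```

In practice the same transform would be applied to the whole target face image, producing the normalized image fed to the preset classification model.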
  18. The identity verification apparatus according to claim 15, wherein before the performing computation on the normalized image by using a preset classification model, the method further comprises: acquiring a preset face image set and category information of each preset face image in the preset face image set; and training a convolutional neural network according to the preset face image set and the category information to obtain the preset classification model.
  19. The identity verification apparatus according to any one of claims 14 to 18, wherein
    the authenticating the object to be verified according to the credibility and the target face image comprises: determining whether the credibility is greater than a first preset threshold; if so, authenticating the object to be verified according to the target face image; and if not, generating a verification result indicating that the object to be verified is an illegal user.
  20. The identity verification apparatus according to claim 19, wherein the authenticating the object to be verified according to the target face image comprises:
    dividing the target face image into a plurality of face regions according to the key point set of the target face image; determining target feature information according to the plurality of face regions; and authenticating the object to be verified according to the target feature information.
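The region-wise extraction of claims 20–21 can be sketched as: split the image into regions, extract one feature per region, and concatenate ("recombine") them into the target feature information. The fixed rectangular regions and the mean-intensity feature are assumptions made purely for this example; the claims leave both the region geometry and the extraction operation open.

```python
def region_features(image, regions):
    """image: 2-D list of pixel intensities.
    regions: list of (row0, row1, col0, col1) half-open boxes, e.g. derived
    from the key point set.

    Returns one concatenated feature list (one feature per face region).
    """
    target_features = []
    for r0, r1, c0, c1 in regions:
        pixels = [image[r][c] for r in range(r0, r1) for c in range(c0, c1)]
        # One scalar feature per region (mean intensity) for illustration.
        target_features.append(sum(pixels) / len(pixels))
    return target_features
```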
  21. The identity verification apparatus according to claim 20, wherein the determining target feature information according to the plurality of face regions comprises: performing a feature extraction operation on the plurality of face regions to obtain a plurality of pieces of feature information, each face region corresponding to one piece of feature information; and recombining the plurality of pieces of feature information to obtain the target feature information.
  22. The identity verification apparatus according to claim 20, wherein the authenticating the object to be verified according to the target feature information comprises: acquiring a stored feature information set and a user identifier corresponding to each piece of stored feature information in the stored feature information set; calculating a similarity between each piece of stored feature information and the target feature information by using a preset algorithm; and authenticating the object to be verified according to the similarities and the corresponding user identifiers.
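Claim 22 leaves the "preset algorithm" open; cosine similarity is a common choice for comparing face feature vectors and is used here purely as an example of scoring the target feature information against every stored entry. All names are illustrative.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def score_against_store(stored_features, target):
    """stored_features: dict user_id -> feature vector.

    Returns one similarity per stored user identifier, ready for the
    threshold decision of claim 23."""
    return {uid: cosine_similarity(vec, target)
            for uid, vec in stored_features.items()}
```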
  23. The identity verification apparatus according to claim 22, wherein the authenticating the object to be verified according to the similarities and the corresponding user identifiers comprises: determining whether a similarity that is not less than a second preset threshold exists among all the calculated similarities; if so, using the user identifier corresponding to the similarity that is not less than the second preset threshold as a target user identifier, and generating a verification result indicating that the object to be verified is the target user identifier; and
    if not, generating a verification result indicating that the object to be verified is an illegal user.
  24. The identity verification apparatus according to claim 22, wherein before the acquiring a stored feature information set and a user identifier corresponding to each piece of stored feature information in the stored feature information set, the method further comprises: acquiring a user registration request, the user registration request carrying a user identifier to be registered and a face image to be registered; determining feature information to be registered according to the face image to be registered; and associating the feature information to be registered with the user identifier to be registered, and adding the feature information to be registered to the stored feature information set.
  25. A computer storage medium storing computer-readable instructions or a program, the computer-readable instructions or program, when executed by a processor, performing the method according to any one of claims 1 to 12.
PCT/CN2018/082803 2017-04-20 2018-04-12 Identity authentication method and apparatus, and storage medium WO2018192406A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710261931.0A CN107066983B (en) 2017-04-20 2017-04-20 Identity verification method and device
CN201710261931.0 2017-04-20

Publications (1)

Publication Number Publication Date
WO2018192406A1 true WO2018192406A1 (en) 2018-10-25

Family

ID=59600617

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/082803 WO2018192406A1 (en) 2017-04-20 2018-04-12 Identity authentication method and apparatus, and storage medium

Country Status (2)

Country Link
CN (1) CN107066983B (en)
WO (1) WO2018192406A1 (en)

Cited By (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109635021A (en) * 2018-10-30 2019-04-16 平安科技(深圳)有限公司 A kind of data information input method, device and equipment based on human testing
CN109670285A (en) * 2018-11-13 2019-04-23 平安科技(深圳)有限公司 Face recognition login method, device, computer equipment and storage medium
CN109726648A (en) * 2018-12-14 2019-05-07 深圳壹账通智能科技有限公司 A kind of facial image recognition method and device based on machine learning
CN109815658A (en) * 2018-12-14 2019-05-28 平安科技(深圳)有限公司 A kind of verification method and device, computer equipment and computer storage medium
CN109934187A (en) * 2019-03-19 2019-06-25 西安电子科技大学 Based on face Activity determination-eye sight line random challenge response method
CN110111129A (en) * 2019-03-28 2019-08-09 中国科学院深圳先进技术研究院 A kind of data analysing method, advertisement playing device and storage medium
CN110287971A (en) * 2019-05-22 2019-09-27 平安银行股份有限公司 Data verification method, device, computer equipment and storage medium
CN110288272A (en) * 2019-04-19 2019-09-27 平安科技(深圳)有限公司 Data processing method, device, electronic equipment and storage medium
CN110399794A (en) * 2019-06-20 2019-11-01 平安科技(深圳)有限公司 Gesture recognition method, device, equipment and storage medium based on human body
CN110443137A (en) * 2019-07-03 2019-11-12 平安科技(深圳)有限公司 The recognition methods of various dimensions identity information, device, computer equipment and storage medium
CN110688517A (en) * 2019-09-02 2020-01-14 平安科技(深圳)有限公司 Audio distribution method, device and storage medium
CN111062323A (en) * 2019-12-16 2020-04-24 腾讯科技(深圳)有限公司 Face image transmission method, numerical value transfer method, device and electronic equipment
CN111143703A (en) * 2019-12-19 2020-05-12 上海寒武纪信息科技有限公司 Intelligent line recommendation method and related product
CN111160243A (en) * 2019-12-27 2020-05-15 深圳云天励飞技术有限公司 Passenger flow volume statistical method and related product
CN111178287A (en) * 2019-12-31 2020-05-19 云知声智能科技股份有限公司 Audio-video fusion end-to-end identity recognition method and device
CN111191207A (en) * 2019-12-23 2020-05-22 深圳壹账通智能科技有限公司 Electronic file control method and device, computer equipment and storage medium
CN111241505A (en) * 2018-11-28 2020-06-05 深圳市帝迈生物技术有限公司 Terminal device, login verification method thereof and computer storage medium
CN111414785A (en) * 2019-01-07 2020-07-14 财团法人交大思源基金会 Identification system and identification method
CN111461368A (en) * 2019-01-21 2020-07-28 北京嘀嘀无限科技发展有限公司 Abnormal order processing method, device, equipment and computer readable storage medium
CN111652086A (en) * 2020-05-15 2020-09-11 汉王科技股份有限公司 Face living body detection method and device, electronic equipment and storage medium
CN111723655A (en) * 2020-05-12 2020-09-29 五八有限公司 Face image processing method, device, server, terminal, equipment and medium
CN111753271A (en) * 2020-06-28 2020-10-09 深圳壹账通智能科技有限公司 Account opening identity verification method, account opening identity verification device, account opening identity verification equipment and account opening identity verification medium based on AI identification
CN111898536A (en) * 2019-08-27 2020-11-06 创新先进技术有限公司 Certificate identification method and device
CN111950401A (en) * 2020-07-28 2020-11-17 深圳数联天下智能科技有限公司 Method, image processing system, device, and medium for determining key point region position
CN111985298A (en) * 2020-06-28 2020-11-24 百度在线网络技术(北京)有限公司 Face recognition sample collection method and device
CN112307817A (en) * 2019-07-29 2021-02-02 中国移动通信集团浙江有限公司 Face living body detection method and device, computing equipment and computer storage medium
CN112383737A (en) * 2020-11-11 2021-02-19 从法信息科技有限公司 Multi-user online content same-screen video processing verification method and device and electronic equipment
CN112434547A (en) * 2019-08-26 2021-03-02 中国移动通信集团广东有限公司 User identity auditing method and device
CN112491840A (en) * 2020-11-17 2021-03-12 平安养老保险股份有限公司 Information modification method and device, computer equipment and storage medium
CN112633129A (en) * 2020-12-18 2021-04-09 深圳追一科技有限公司 Video analysis method and device, electronic equipment and storage medium
CN112767436A (en) * 2019-10-21 2021-05-07 深圳云天励飞技术有限公司 Face detection tracking method and device
TWI727337B (en) * 2019-06-06 2021-05-11 大陸商鴻富錦精密工業(武漢)有限公司 Electronic device and face recognition method
CN112818733A (en) * 2020-08-24 2021-05-18 腾讯科技(深圳)有限公司 Information processing method, device, storage medium and terminal
CN112906741A (en) * 2019-05-21 2021-06-04 北京嘀嘀无限科技发展有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113128452A (en) * 2021-04-30 2021-07-16 重庆锐云科技有限公司 Greening satisfaction acquisition method and system based on image recognition
WO2021158168A1 (en) * 2020-02-04 2021-08-12 Grabtaxi Holdings Pte. Ltd. Method, server and communication system of verifying user for transportation purposes
CN113316781A (en) * 2019-01-17 2021-08-27 电装波动株式会社 Authentication system, authentication device, and authentication method
CN113361366A (en) * 2021-05-27 2021-09-07 北京百度网讯科技有限公司 Face labeling method and device, electronic equipment and storage medium
CN113569676A (en) * 2021-07-16 2021-10-29 北京市商汤科技开发有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113742776A (en) * 2021-09-08 2021-12-03 未鲲(上海)科技服务有限公司 Data verification method and device based on biological recognition technology and computer equipment
CN113780212A (en) * 2021-09-16 2021-12-10 平安科技(深圳)有限公司 User identity verification method, device, equipment and storage medium
CN114267066A (en) * 2021-12-24 2022-04-01 北京的卢深视科技有限公司 Face recognition method, electronic device and storage medium
CN114626036A (en) * 2020-12-08 2022-06-14 腾讯科技(深圳)有限公司 Information processing method and device based on face recognition, storage medium and terminal
CN114760068A (en) * 2022-04-08 2022-07-15 中国银行股份有限公司 User identity authentication method, system, electronic device and storage medium
CN116469196A (en) * 2023-03-16 2023-07-21 东莞市恒鑫科技信息有限公司 Digital integrated management system and method
EP3975047B1 (en) * 2019-06-11 2024-04-10 Honor Device Co., Ltd. Method for determining validness of facial feature, and electronic device

Families Citing this family (57)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104518877A (en) * 2013-10-08 2015-04-15 鸿富锦精密工业(深圳)有限公司 Identity authentication system and method
CN107066983B (en) * 2017-04-20 2022-08-09 腾讯科技(上海)有限公司 Identity verification method and device
GB2567798A (en) * 2017-08-22 2019-05-01 Eyn Ltd Verification method and system
EP3447684A1 (en) 2017-08-22 2019-02-27 Eyn Limited Verification method and system
CN107590485A (en) * 2017-09-29 2018-01-16 广州市森锐科技股份有限公司 It is a kind of for the auth method of express delivery cabinet, device and to take express system
CN107729857B (en) * 2017-10-26 2021-05-28 Oppo广东移动通信有限公司 Face recognition method and device, storage medium and electronic equipment
CN107733911A (en) * 2017-10-30 2018-02-23 郑州云海信息技术有限公司 A kind of power and environmental monitoring system client login authentication system and method
CN108171109A (en) * 2017-11-28 2018-06-15 苏州市东皓计算机系统工程有限公司 A kind of face identification system
CN109993024A (en) * 2017-12-29 2019-07-09 技嘉科技股份有限公司 Authentication means, auth method and computer-readable storage medium
CN108335394A (en) * 2018-03-16 2018-07-27 东莞市华睿电子科技有限公司 A kind of long-range control method of intelligent door lock
CN108494942B (en) * 2018-03-16 2021-12-10 深圳八爪网络科技有限公司 Unlocking control method based on cloud address book
CN108564673A (en) * 2018-04-13 2018-09-21 北京师范大学 A kind of check class attendance method and system based on Global Face identification
CN108615007B (en) * 2018-04-23 2019-07-19 深圳大学 Three-dimensional face identification method, device and storage medium based on characteristic tensor
WO2019205009A1 (en) 2018-04-25 2019-10-31 Beijing Didi Infinity Technology And Development Co., Ltd. Systems and methods for identifying a body motion
CN108647874B (en) * 2018-05-04 2020-12-08 科大讯飞股份有限公司 Threshold value determining method and device
CN110210276A (en) * 2018-05-15 2019-09-06 腾讯科技(深圳)有限公司 A kind of motion track acquisition methods and its equipment, storage medium, terminal
CN110826045B (en) * 2018-08-13 2022-04-05 深圳市商汤科技有限公司 Authentication method and device, electronic equipment and storage medium
CN110197108A (en) * 2018-08-17 2019-09-03 平安科技(深圳)有限公司 Auth method, device, computer equipment and storage medium
CN109190522B (en) * 2018-08-17 2021-05-07 浙江捷尚视觉科技股份有限公司 Living body detection method based on infrared camera
CN109146879B (en) * 2018-09-30 2021-05-18 杭州依图医疗技术有限公司 Method and device for detecting bone age
CN109583165A (en) * 2018-10-12 2019-04-05 阿里巴巴集团控股有限公司 A kind of biological information processing method, device, equipment and system
CN109635625B (en) * 2018-10-16 2023-08-18 平安科技(深圳)有限公司 Intelligent identity verification method, equipment, storage medium and device
CN111144169A (en) * 2018-11-02 2020-05-12 深圳比亚迪微电子有限公司 Face recognition method and device and electronic equipment
CN111209768A (en) * 2018-11-06 2020-05-29 深圳市商汤科技有限公司 Identity authentication system and method, electronic device, and storage medium
CN109376684B (en) 2018-11-13 2021-04-06 广州市百果园信息技术有限公司 Face key point detection method and device, computer equipment and storage medium
CN109670440B (en) * 2018-12-14 2023-08-08 央视国际网络无锡有限公司 Identification method and device for big bear cat face
CN111372023B (en) * 2018-12-25 2023-04-07 杭州海康威视数字技术股份有限公司 Code stream encryption and decryption method and device
CN111382624B (en) * 2018-12-28 2023-08-11 杭州海康威视数字技术股份有限公司 Action recognition method, device, equipment and readable storage medium
CN109815835A (en) * 2018-12-29 2019-05-28 联动优势科技有限公司 A kind of interactive mode biopsy method
CN109934191A (en) * 2019-03-20 2019-06-25 北京字节跳动网络技术有限公司 Information processing method and device
CN110210312A (en) * 2019-04-29 2019-09-06 众安信息技术服务有限公司 A kind of method and system verifying certificate and holder
CN111866589A (en) * 2019-05-20 2020-10-30 北京嘀嘀无限科技发展有限公司 Video data verification method and device, electronic equipment and storage medium
CN110443621A (en) * 2019-08-07 2019-11-12 深圳前海微众银行股份有限公司 Video core body method, apparatus, equipment and computer storage medium
CN115311706A (en) * 2019-08-28 2022-11-08 视联动力信息技术股份有限公司 Personnel identification method, device, terminal equipment and storage medium
CN110968239B (en) * 2019-11-28 2022-04-05 北京市商汤科技开发有限公司 Control method, device and equipment for display object and storage medium
CN111881707B (en) * 2019-12-04 2021-09-14 马上消费金融股份有限公司 Image reproduction detection method, identity verification method, model training method and device
CN113095110B (en) * 2019-12-23 2024-03-08 浙江宇视科技有限公司 Method, device, medium and electronic equipment for dynamically warehousing face data
CN111060507B (en) * 2019-12-24 2021-05-04 北京嘀嘀无限科技发展有限公司 Vehicle verification method and device
CN111178259A (en) * 2019-12-30 2020-05-19 八维通科技有限公司 Recognition method and system supporting multi-algorithm fusion
CN111259757B (en) * 2020-01-13 2023-06-20 支付宝实验室(新加坡)有限公司 Living body identification method, device and equipment based on image
CN111091388B (en) * 2020-02-18 2024-02-09 支付宝实验室(新加坡)有限公司 Living body detection method and device, face payment method and device and electronic equipment
CN111523408B (en) * 2020-04-09 2023-09-15 北京百度网讯科技有限公司 Motion capturing method and device
CN111932755A (en) * 2020-07-02 2020-11-13 北京市威富安防科技有限公司 Personnel passage verification method and device, computer equipment and storage medium
CN112084858A (en) * 2020-08-05 2020-12-15 广州虎牙科技有限公司 Object recognition method and device, electronic equipment and storage medium
CN112101286A (en) * 2020-09-25 2020-12-18 北京市商汤科技开发有限公司 Service request method, device, computer equipment and storage medium
CN112364733B (en) * 2020-10-30 2022-07-26 重庆电子工程职业学院 Intelligent security face recognition system
CN112700344A (en) * 2020-12-22 2021-04-23 成都睿畜电子科技有限公司 Farm management method, farm management device, farm management medium and farm management equipment
CN112287909B (en) * 2020-12-24 2021-09-07 四川新网银行股份有限公司 Double-random in-vivo detection method for randomly generating detection points and interactive elements
CN112800885B (en) * 2021-01-16 2023-09-26 南京众鑫云创软件科技有限公司 Data processing system and method based on big data
CN113255512B (en) * 2021-05-21 2023-07-28 北京百度网讯科技有限公司 Method, apparatus, device and storage medium for living body identification
CN113255529A (en) * 2021-05-28 2021-08-13 支付宝(杭州)信息技术有限公司 Biological feature identification method, device and equipment
CN113536270B (en) * 2021-07-26 2023-08-08 网易(杭州)网络有限公司 Information verification method, device, computer equipment and storage medium
CN113505756A (en) * 2021-08-23 2021-10-15 支付宝(杭州)信息技术有限公司 Face living body detection method and device
CN115514893B (en) * 2022-09-20 2023-10-27 北京有竹居网络技术有限公司 Image uploading method, image uploading device, readable storage medium and electronic equipment
CN115512426B (en) * 2022-11-04 2023-03-24 安徽五域安全技术有限公司 Intelligent face recognition method and system
CN116152936A (en) * 2023-02-17 2023-05-23 深圳市永腾翼科技有限公司 Face identity authentication system with interactive living body detection and method thereof
CN115937961B (en) * 2023-03-02 2023-07-11 济南丽阳神州智能科技有限公司 Online learning identification method and equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105227316A (en) * 2015-09-01 2016-01-06 深圳市创想一登科技有限公司 Based on mobile Internet account login system and the method for facial image authentication
CN105468950A (en) * 2014-09-03 2016-04-06 阿里巴巴集团控股有限公司 Identity authentication method and apparatus, terminal and server
CN105718874A (en) * 2016-01-18 2016-06-29 北京天诚盛业科技有限公司 Method and device of in-vivo detection and authentication
CN105989264A (en) * 2015-02-02 2016-10-05 北京中科奥森数据科技有限公司 Bioassay method and bioassay system for biological characteristics
CN106302330A (en) * 2015-05-21 2017-01-04 腾讯科技(深圳)有限公司 Auth method, device and system
CN106557723A (en) * 2015-09-25 2017-04-05 北京市商汤科技开发有限公司 A kind of system for face identity authentication with interactive In vivo detection and its method
CN107066983A (en) * 2017-04-20 2017-08-18 腾讯科技(上海)有限公司 A kind of auth method and device

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000113197A (en) * 1998-10-02 2000-04-21 Victor Co Of Japan Ltd Individual identifying device
CN101162500A (en) * 2006-10-13 2008-04-16 上海银晨智能识别科技有限公司 Sectorization type human face recognition method
CN104036276A (en) * 2014-05-29 2014-09-10 无锡天脉聚源传媒科技有限公司 Face recognition method and device
CN106156578B (en) * 2015-04-22 2020-02-14 深圳市腾讯计算机系统有限公司 Identity verification method and device
US10275672B2 (en) * 2015-04-29 2019-04-30 Beijing Kuangshi Technology Co., Ltd. Method and apparatus for authenticating liveness face, and computer program product thereof
CN105069408B (en) * 2015-07-24 2018-08-03 上海依图网络科技有限公司 Video portrait tracking based on recognition of face under a kind of complex scene
CN105426827B (en) * 2015-11-09 2019-03-08 北京市商汤科技开发有限公司 Living body verification method, device and system
CN105426850B (en) * 2015-11-23 2021-08-31 深圳市商汤科技有限公司 Associated information pushing device and method based on face recognition
CN105847735A (en) * 2016-03-30 2016-08-10 宁波三博电子科技有限公司 Face recognition-based instant pop-up screen video communication method and system
CN106295574A (en) * 2016-08-12 2017-01-04 广州视源电子科技股份有限公司 Face characteristic based on neutral net extracts modeling, face identification method and device

Cited By (64)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109635021A (en) * 2018-10-30 2019-04-16 平安科技(深圳)有限公司 A kind of data information input method, device and equipment based on human testing
CN109670285A (en) * 2018-11-13 2019-04-23 平安科技(深圳)有限公司 Face recognition login method, device, computer equipment and storage medium
CN111241505A (en) * 2018-11-28 2020-06-05 深圳市帝迈生物技术有限公司 Terminal device, login verification method thereof and computer storage medium
CN109726648A (en) * 2018-12-14 2019-05-07 深圳壹账通智能科技有限公司 A kind of facial image recognition method and device based on machine learning
CN109815658A (en) * 2018-12-14 2019-05-28 平安科技(深圳)有限公司 A kind of verification method and device, computer equipment and computer storage medium
CN111414785A (en) * 2019-01-07 2020-07-14 财团法人交大思源基金会 Identification system and identification method
CN113316781A (en) * 2019-01-17 2021-08-27 电装波动株式会社 Authentication system, authentication device, and authentication method
CN111461368B (en) * 2019-01-21 2024-01-09 北京嘀嘀无限科技发展有限公司 Abnormal order processing method, device, equipment and computer readable storage medium
CN111461368A (en) * 2019-01-21 2020-07-28 北京嘀嘀无限科技发展有限公司 Abnormal order processing method, device, equipment and computer readable storage medium
CN109934187B (en) * 2019-03-19 2023-04-07 西安电子科技大学 Random challenge response method based on face activity detection-eye sight
CN109934187A (en) * 2019-03-19 2019-06-25 西安电子科技大学 Based on face Activity determination-eye sight line random challenge response method
CN110111129B (en) * 2019-03-28 2024-01-19 中国科学院深圳先进技术研究院 Data analysis method, advertisement playing device and storage medium
CN110111129A (en) * 2019-03-28 2019-08-09 中国科学院深圳先进技术研究院 A kind of data analysing method, advertisement playing device and storage medium
CN110288272B (en) * 2019-04-19 2024-01-30 平安科技(深圳)有限公司 Data processing method, device, electronic equipment and storage medium
CN110288272A (en) * 2019-04-19 2019-09-27 平安科技(深圳)有限公司 Data processing method, device, electronic equipment and storage medium
CN112906741A (en) * 2019-05-21 2021-06-04 北京嘀嘀无限科技发展有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN110287971B (en) * 2019-05-22 2023-11-14 平安银行股份有限公司 Data verification method, device, computer equipment and storage medium
CN110287971A (en) * 2019-05-22 2019-09-27 平安银行股份有限公司 Data verification method, device, computer equipment and storage medium
TWI727337B (en) * 2019-06-06 2021-05-11 大陸商鴻富錦精密工業(武漢)有限公司 Electronic device and face recognition method
EP3975047B1 (en) * 2019-06-11 2024-04-10 Honor Device Co., Ltd. Method for determining validness of facial feature, and electronic device
CN110399794A (en) * 2019-06-20 2019-11-01 平安科技(深圳)有限公司 Gesture recognition method, device, equipment and storage medium based on human body
CN110443137A (en) * 2019-07-03 2019-11-12 平安科技(深圳)有限公司 The recognition methods of various dimensions identity information, device, computer equipment and storage medium
CN110443137B (en) * 2019-07-03 2023-07-25 平安科技(深圳)有限公司 Multi-dimensional identity information identification method and device, computer equipment and storage medium
CN112307817A (en) * 2019-07-29 2021-02-02 中国移动通信集团浙江有限公司 Face living body detection method and device, computing equipment and computer storage medium
CN112307817B (en) * 2019-07-29 2024-03-19 中国移动通信集团浙江有限公司 Face living body detection method, device, computing equipment and computer storage medium
CN112434547A (en) * 2019-08-26 2021-03-02 中国移动通信集团广东有限公司 User identity auditing method and device
CN112434547B (en) * 2019-08-26 2023-11-14 中国移动通信集团广东有限公司 User identity auditing method and device
CN111898536A (en) * 2019-08-27 2020-11-06 创新先进技术有限公司 Certificate identification method and device
CN110688517A (en) * 2019-09-02 2020-01-14 平安科技(深圳)有限公司 Audio distribution method, device and storage medium
CN110688517B (en) * 2019-09-02 2023-05-30 平安科技(深圳)有限公司 Audio distribution method, device and storage medium
CN112767436A (en) * 2019-10-21 2021-05-07 深圳云天励飞技术有限公司 Face detection tracking method and device
CN111062323A (en) * 2019-12-16 2020-04-24 腾讯科技(深圳)有限公司 Face image transmission method, numerical value transfer method, device and electronic equipment
CN111062323B (en) * 2019-12-16 2023-06-02 腾讯科技(深圳)有限公司 Face image transmission method, numerical value transfer method, device and electronic equipment
CN111143703B (en) * 2019-12-19 2023-05-23 上海寒武纪信息科技有限公司 Intelligent line recommendation method and related products
CN111143703A (en) * 2019-12-19 2020-05-12 上海寒武纪信息科技有限公司 Intelligent line recommendation method and related product
CN111191207A (en) * 2019-12-23 2020-05-22 深圳壹账通智能科技有限公司 Electronic file control method and device, computer equipment and storage medium
CN111160243A (en) * 2019-12-27 2020-05-15 深圳云天励飞技术有限公司 Passenger flow volume statistical method and related product
CN111178287A (en) * 2019-12-31 2020-05-19 云知声智能科技股份有限公司 Audio-video fusion end-to-end identity recognition method and device
WO2021158168A1 (en) * 2020-02-04 2021-08-12 Grabtaxi Holdings Pte. Ltd. Method, server and communication system of verifying user for transportation purposes
CN111723655B (en) * 2020-05-12 2024-03-08 五八有限公司 Face image processing method, device, server, terminal, equipment and medium
CN111723655A (en) * 2020-05-12 2020-09-29 五八有限公司 Face image processing method, device, server, terminal, equipment and medium
CN111652086A (en) * 2020-05-15 2020-09-11 汉王科技股份有限公司 Face living body detection method and device, electronic equipment and storage medium
CN111652086B (en) * 2020-05-15 2022-12-30 汉王科技股份有限公司 Face living body detection method and device, electronic equipment and storage medium
CN111985298B (en) * 2020-06-28 2023-07-25 百度在线网络技术(北京)有限公司 Face recognition sample collection method and device
CN111753271A (en) * 2020-06-28 2020-10-09 深圳壹账通智能科技有限公司 Account opening identity verification method, account opening identity verification device, account opening identity verification equipment and account opening identity verification medium based on AI identification
CN111985298A (en) * 2020-06-28 2020-11-24 百度在线网络技术(北京)有限公司 Face recognition sample collection method and device
CN111950401A (en) * 2020-07-28 2020-11-17 深圳数联天下智能科技有限公司 Method, image processing system, device, and medium for determining key point region position
CN111950401B (en) * 2020-07-28 2023-12-08 深圳数联天下智能科技有限公司 Method, image processing system, device and medium for determining position of key point area
CN112818733A (en) * 2020-08-24 2021-05-18 腾讯科技(深圳)有限公司 Information processing method, device, storage medium and terminal
CN112818733B (en) * 2020-08-24 2024-01-05 腾讯科技(深圳)有限公司 Information processing method, device, storage medium and terminal
CN112383737A (en) * 2020-11-11 2021-02-19 从法信息科技有限公司 Multi-user online content same-screen video processing verification method and device and electronic equipment
CN112491840B (en) * 2020-11-17 2023-07-07 平安养老保险股份有限公司 Information modification method, device, computer equipment and storage medium
CN112491840A (en) * 2020-11-17 2021-03-12 平安养老保险股份有限公司 Information modification method and device, computer equipment and storage medium
CN114626036A (en) * 2020-12-08 2022-06-14 腾讯科技(深圳)有限公司 Information processing method and device based on face recognition, storage medium and terminal
CN112633129A (en) * 2020-12-18 2021-04-09 深圳追一科技有限公司 Video analysis method and device, electronic equipment and storage medium
CN113128452A (en) * 2021-04-30 2021-07-16 重庆锐云科技有限公司 Greening satisfaction acquisition method and system based on image recognition
CN113361366A (en) * 2021-05-27 2021-09-07 北京百度网讯科技有限公司 Face labeling method and device, electronic equipment and storage medium
CN113569676A (en) * 2021-07-16 2021-10-29 北京市商汤科技开发有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113742776A (en) * 2021-09-08 2021-12-03 未鲲(上海)科技服务有限公司 Data verification method and device based on biological recognition technology and computer equipment
CN113780212A (en) * 2021-09-16 2021-12-10 平安科技(深圳)有限公司 User identity verification method, device, equipment and storage medium
CN114267066A (en) * 2021-12-24 2022-04-01 北京的卢深视科技有限公司 Face recognition method, electronic device and storage medium
CN114760068A (en) * 2022-04-08 2022-07-15 中国银行股份有限公司 User identity authentication method, system, electronic device and storage medium
CN116469196B (en) * 2023-03-16 2024-03-15 南京誉泰瑞思科技有限公司 Digital integrated management system and method
CN116469196A (en) * 2023-03-16 2023-07-21 东莞市恒鑫科技信息有限公司 Digital integrated management system and method

Also Published As

Publication number Publication date
CN107066983A (en) 2017-08-18
CN107066983B (en) 2022-08-09

Similar Documents

Publication Publication Date Title
WO2018192406A1 (en) Identity authentication method and apparatus, and storage medium
US11330012B2 (en) System, method, and device of authenticating a user based on selfie image or selfie video
KR102139548B1 (en) System and method for decentralized identifier based on face recognition
US11704939B2 (en) Liveness detection
US10395018B2 (en) System, method, and device of detecting identity of a user and authenticating a user
CN108804884B (en) Identity authentication method, identity authentication device and computer storage medium
TWI700612B (en) Information display method, device and system
US10268910B1 (en) Authentication based on heartbeat detection and facial recognition in video data
EP2869238B1 (en) Methods and systems for determining user liveness
KR101629224B1 (en) Authentication method, device and system based on biological characteristics
WO2016169432A1 (en) Identity authentication method and device, and terminal
CN106778141B (en) Unlocking method and device based on gesture recognition and mobile terminal
CN107992728B (en) Face verification method and device
US20230273986A1 (en) Passive Identification of a Device User
WO2019153504A1 (en) Group creation method and terminal thereof
WO2020135081A1 (en) Identity recognition method and apparatus based on dynamic rasterization management, and server
TW201512882A (en) Identity authentication system and method thereof
CN112115455B (en) Method, device, server and medium for setting association relation of multiple user accounts
WO2018068664A1 (en) Network information identification method and device
CN112818733B (en) Information processing method, device, storage medium and terminal
CN107483423A (en) * User login verification method
Yuan et al. SALM: smartphone-based identity authentication using lip motion characteristics
CN112115454B (en) Single sign-on method, first server and electronic equipment
CN112131553B (en) Single sign-on method, first server and electronic equipment
US11983964B2 (en) Liveness detection

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18787038

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18787038

Country of ref document: EP

Kind code of ref document: A1