CN111291668A - Living body detection method, living body detection device, electronic equipment and readable storage medium

Info

Publication number
CN111291668A
Authority
CN
China
Prior art keywords
detection
living body
behavior data
target user
target
Prior art date
Legal status
Withdrawn
Application number
CN202010075822.1A
Other languages
Chinese (zh)
Inventor
吴明治
Current Assignee
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd
Priority to CN202010075822.1A
Publication of CN111291668A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172: Classification, e.g. identification
    • G06V 40/40: Spoof detection, e.g. liveness detection
    • G06V 40/45: Detection of the body part being alive

Abstract

The embodiment of the disclosure provides a living body detection method and apparatus, an electronic device, and a readable storage medium. The method includes: acquiring behavior data of the target user's living body detection; inputting the behavior data into a risk prediction model, and outputting a risk value corresponding to the behavior data through the risk prediction model, where the risk prediction model is trained on collected historical behavior data of historical living body detections and the risk value labeling information corresponding to that data; and adjusting the detection parameters of the target user's living body detection according to the risk value so as to raise or lower the difficulty of the detection. The embodiment of the disclosure can raise the living body detection difficulty for abnormal users and improve the security of the face recognition process while preserving the experience of normal users.

Description

Living body detection method, living body detection device, electronic equipment and readable storage medium
Technical Field
Embodiments of the present disclosure relate to the field of computers, and in particular, to a method and an apparatus for detecting a living body, an electronic device, and a readable storage medium.
Background
Face recognition is an effective technology for identifying natural persons: it is interaction-friendly, efficient and convenient, and it is widely applied in scenarios such as electronic payment and attendance recording.
However, face recognition currently faces a variety of attack means, including print-photo attacks, video replay attacks, 3D (three-dimensional) model attacks, and so on. To defend against such attacks, the user is generally guided, at random, to perform actions such as "blink", "open mouth", "shake head", "nod" or "smile" during face recognition. The face recognition system detects in real time whether the user performs the corresponding actions as guided; if so, the subject of face recognition is considered to be a real person rather than an attack. This technique is called living body detection (liveness detection).
However, because a fixed set of guiding actions is usually adopted during living body detection, an attacker can easily prepare a video or photo matching the guiding actions and use it to attack the face recognition process, so the face recognition process still has potential safety hazards.
Disclosure of Invention
Embodiments of the present disclosure provide a living body detection method and apparatus, an electronic device, and a readable storage medium, so as to improve the security of face recognition.
According to a first aspect of embodiments of the present disclosure, there is provided a method of living body detection, the method comprising:
acquiring behavior data of living body detection of a target user;
inputting the behavior data into a risk prediction model, and outputting a risk value corresponding to the behavior data through the risk prediction model; the risk prediction model is obtained by training according to collected historical behavior data of historical living body detection and risk value marking information corresponding to the historical behavior data;
and adjusting the detection parameters of the living body detection of the target user according to the risk value so as to improve or reduce the difficulty of the living body detection of the target user.
According to a second aspect of embodiments of the present disclosure, there is provided a living body detection apparatus, the apparatus comprising:
the behavior data acquisition module is used for acquiring the behavior data of the living body detection of the target user;
the behavior risk prediction module is used for inputting the behavior data into a risk prediction model and outputting a risk value corresponding to the behavior data through the risk prediction model; the risk prediction model is obtained by training according to collected historical behavior data of historical living body detection and risk value marking information corresponding to the historical behavior data;
and the detection parameter adjusting module is used for adjusting the detection parameters of the living body detection of the target user according to the risk value so as to improve or reduce the difficulty of the living body detection of the target user.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor, a memory and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the aforementioned liveness detection method when executing the program.
According to a fourth aspect of embodiments of the present disclosure, there is provided a readable storage medium having instructions that, when executed by a processor of an electronic device, enable the electronic device to perform the aforementioned living body detection method.
The embodiment of the disclosure provides a method, a device, an electronic device and a readable storage medium for detecting a living body, wherein the method comprises the following steps:
acquiring behavior data of the target user's living body detection; inputting the behavior data into a risk prediction model, and outputting a risk value corresponding to the behavior data through the risk prediction model; and adjusting the detection parameters of the target user's living body detection according to the risk value so as to raise or lower the difficulty of the detection. The risk prediction model is trained on collected historical behavior data of historical living body detections and the risk value labeling information corresponding to that data. Through the embodiment of the disclosure, the security strength of the entire detection can be adjusted dynamically based on user behavior, so that the experience of normal users is preserved, the living body detection difficulty for abnormal users is raised, and the security of the face recognition process is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings needed to be used in the description of the embodiments of the present disclosure will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present disclosure, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
FIG. 1 illustrates a flow chart of steps of a liveness detection method in one embodiment of the present disclosure;
FIG. 2 shows a block diagram of a living body detecting apparatus in one embodiment of the present disclosure;
fig. 3 shows a block diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
Technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are some, but not all, of the embodiments of the present disclosure. All other embodiments, which can be obtained by a person skilled in the art without making creative efforts based on the embodiments of the present disclosure, belong to the protection scope of the embodiments of the present disclosure.
Example one
Referring to FIG. 1, a flow chart illustrating the steps of a liveness detection method in one embodiment of the present disclosure, the method comprising:
step 101, acquiring behavior data of living body detection of a target user;
step 102, inputting the behavior data into a risk prediction model, and outputting a risk value corresponding to the behavior data through the risk prediction model; the risk prediction model is obtained by training according to collected historical behavior data of historical living body detection and risk value marking information corresponding to the historical behavior data;
and 103, adjusting detection parameters of the living body detection of the target user according to the risk value so as to improve or reduce the difficulty of the living body detection of the target user.
The living body detection method of the present disclosure may be applied to electronic devices, including but not limited to: smart phones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers, in-vehicle computers, desktop computers, set-top boxes, smart televisions, wearable devices, and the like.
The electronic device can be configured with a camera that captures a face image of the target user for face recognition, and the living body detection method of the present disclosure is used during the face recognition process to perform living body detection on the target user.
Specifically, the embodiment of the disclosure may acquire behavior data of the target user's living body detection, input the behavior data into a risk prediction model, output a risk value corresponding to the behavior data through the model, and adjust the detection parameters of the target user's living body detection according to the risk value, thereby raising or lowering the difficulty of the detection. For example, if the risk value output by the risk prediction model is large, the probability that the behavior data represents an attack is high; the detection parameters can then be adjusted to raise the difficulty of the target user's living body detection, prevent the target user from continuing the attack, and improve the security of the face recognition process. Conversely, if the risk value is small, the probability that the target user's behavior data represents an attack is low; the detection parameters can then be adjusted to lower the difficulty of the living body detection and improve the efficiency and experience of the face recognition process.
The risk prediction model is obtained by training according to collected historical behavior data of historical living body detection and risk value labeling information corresponding to the historical behavior data. For example, historical behavior data of historical living body detection of a large number of users can be collected, wherein the historical behavior data can comprise negative sample data with attack behaviors and positive sample data without attack behaviors, and the collected large number of historical behavior data are labeled to obtain risk value labeling information corresponding to each historical behavior data. For example, whether each historical behavior data has an attack risk or not can be marked, or the probability value of the attack risk of each historical behavior data can be marked. And then training to obtain a risk prediction model according to the collected historical behavior data and the corresponding risk value marking information, wherein the risk prediction model can be used for predicting the risk value of the behavior data of the in-vivo detection, and the higher the risk value is, the higher the probability that the behavior data is an attack behavior is.
In practical applications, the living body detection process generally checks whether a series of actions input by the user meets preset conditions. In the embodiment of the present disclosure, adjusting the detection parameters of the target user's living body detection may mean adjusting the parameters of the current living body detection while its series of action detections has not yet finished (taking effect immediately); alternatively, after the series of action detections of the current living body detection has finished, the detection parameters of the target user's next living body detection may be adjusted (taking effect at the next detection).
For example, suppose a user needs to be guided through 3 target actions in one living body detection. If, while the target user is entering the first target action, the risk prediction model indicates that the user's behavior data has a high risk value, the detection parameters of the current living body detection can be adjusted in real time, for example by adding 2 target actions to the original 3 to raise the difficulty; the current living body detection passes only if the user completes the original 3 target actions plus the added 2 as required.
Of course, in practical applications, if the behavior data of the target user is found to have a higher risk value, the detection parameters of the target user's next living body detection may be adjusted instead of those of the current detection, with the adjusted parameters taking effect only at the next living body detection.
The disclosed embodiments may train a risk prediction model before performing living body detection on a user. Specifically, the step of training the risk prediction model comprises:
step S11, collecting historical behavior data of historical living body detection;
step S12, labeling the historical behavior data to obtain risk value labeling information corresponding to the historical behavior data;
step S13, taking the historical behavior data and the risk value labeling information as a training set, and inputting an initial risk prediction model;
and step S14, adjusting model parameters of the initial risk prediction model according to the difference between the prediction result output by the initial risk prediction model and the risk value marking information until the initial risk prediction model meets a preset convergence condition, and obtaining a trained risk prediction model.
First, historical behavior data of historical living body detections of a large number of users may be collected; it may include negative sample data with attack behavior and positive sample data without attack behavior. The collected historical behavior data is labeled to obtain risk value labeling information for each item. For example, each item can be labeled with whether it carries an attack risk, or with a probability value of the attack risk. For instance, historical behavior data f can be labeled with r, where r = 0 indicates that f is normal behavior (no risk) and r = 1 indicates that f is attack behavior (risk).
In an optional embodiment of the present disclosure, the behavior data may specifically include: pre-detection behavior data and/or in-detection behavior data; the pre-detection behavior data is generated within a preset time before living body detection starts, and the in-detection behavior data is generated during the detection process of living body detection.
In a specific application, the behavior data of the user is of many types, and the embodiment of the present disclosure screens out the following two types of historical behavior data related to risk detection: pre-detection behavior data and in-detection behavior data. The pre-detection behavior data is generated within a preset time before living body detection starts; the preset time can be set according to actual conditions, for example within 10 minutes before living body detection starts.
Optionally, the pre-detection behavior data includes at least any one of the following items: whether the password was modified, whether the user logged in again, whether the login device was switched, whether a coordinate jump occurred, and whether multiple accounts logged in. The in-detection behavior data is generated during the detection process of living body detection and optionally includes at least any one of the following items: the number of consecutive detection failures, the number of occurrences of a first face, the number of occurrences of a second face, the probability that preset features are present, and the number of rapid face movements.
In the embodiment of the present disclosure, the pre-detection behavior data are denoted f1 to f6. f1 indicates whether the password was modified within the preset time before living body detection started (1 if so, 0 if not); f2 indicates whether the user logged in again within that time (1 if so, 0 if not); f3 indicates whether the user switched devices to log in within that time (1 if so, 0 if not); f4 indicates whether a coordinate jump occurred within that time (1 if so, 0 if not). A coordinate jump is a large change in the target user's position within a short time; for example, the target user's position coordinates were in Beijing 10 minutes ago, but the user is currently in Shanghai.
Multi-account login may include multiple accounts logging in on the same device or from the same IP (Internet Protocol) address. f5 indicates whether multiple accounts logged in on the same device within the preset time before living body detection started (1 if so, 0 if not); f6 indicates whether multiple accounts logged in from the same IP address within that time (1 if so, 0 if not).
It is to be understood that the above six types of pre-detection behavior data are only an application example of the present disclosure, and the embodiments of the present disclosure do not limit the specific types of pre-detection behavior data.
In the embodiment of the present disclosure, the in-detection behavior data are denoted t1 to t7. t1 is the number of consecutive detection failures within a preset time; t2 is the number of occurrences of a first face during the detection process; t3 is the number of occurrences of a second face during the detection process. t1, t2 and t3 are non-negative integers.
The first face is a face whose similarity to the target face image is below a first threshold; the second face is a face whose similarity to the target face image is above a second threshold. The target face image is used to judge whether the face image of the target user currently performing living body detection is the real target user's face image. It can be a pre-enrolled face image that serves as the user's identity credential, for example the face image registered with the public security department when the identity card was issued, or the face image enrolled when the user opened an account on an electronic payment platform. t2 counts how many times a face with extremely low similarity to the target face image appears during detection; if this count exceeds a preset value, the current user is likely not the real target user. t3 counts how many times a face with extremely high similarity to the target face image appears during detection; if this count exceeds a preset value, there is a high probability that the current user is attacking with a video or photo of the target user. This is because the face images extracted while a normal user performs the actions usually differ slightly from one another, whereas an attacker replaying a small set of videos or photos produces face images that are essentially the same (or highly similar).
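To make the counting of t2 and t3 concrete, the sketch below accumulates both counts from per-frame similarity scores; the similarity source and both threshold values are illustrative assumptions, not values fixed by the present disclosure:

```python
def count_face_occurrences(frame_similarities: list[float],
                           low_thresh: float = 0.3,
                           high_thresh: float = 0.95) -> tuple[int, int]:
    """Accumulate t2 (first-face) and t3 (second-face) occurrence counts.

    frame_similarities: similarity of each detected face frame to the
    pre-enrolled target face image. Both thresholds are assumptions for
    illustration; the disclosure fixes only their roles, not their values.
    """
    t2 = sum(1 for s in frame_similarities if s < low_thresh)   # very unlike the target
    t3 = sum(1 for s in frame_similarities if s > high_thresh)  # suspiciously identical
    return t2, t3
```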
In practical applications, an attacker may use another electronic device to play a pre-prepared video or photo in front of the camera of the electronic device performing living body detection, so as to impersonate the target user. In that case the detected face of the target user is a face image from the video or photo, whose image characteristics differ from those of a live user's face image captured by the camera in real time.
Specifically, the in-detection behavior data may include the probability that preset features are present. Optionally, the preset features include at least the following three types: t4 is the probability that a square frame is present around the face image, t5 is the probability that the face image contains moire patterns, and t6 is the probability that the face image contains reflections. t4, t5 and t6 are floating-point numbers in [0, 1].
t7 is the number of rapid face movements during the detection process and is a non-negative integer. A rapid face movement means that the target user's face image moves from outside the screen to inside the screen within a short time, such as 2 seconds. In practice, a face image moving rapidly from outside the screen to inside the screen may indicate that an attacker is playing a video or photo of the target user on another electronic device and attempting to impersonate the target user for living body detection. By collecting the number of rapid face movements during living body detection, the trained risk prediction model can recognize this masquerading behavior and improve the security of living body detection.
It is to be understood that the above seven types of in-detection behavior data are only an application example of the present disclosure, and the embodiments of the present disclosure do not limit the specific types of in-detection behavior data.
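To fix the feature layout used below, the following sketch collects the six pre-detection flags and seven in-detection measurements into one vector. All class, field and method names are illustrative assumptions, since the disclosure specifies only the semantics of f1 to f6 and t1 to t7:

```python
from dataclasses import dataclass

@dataclass
class BehaviorFeatures:
    """One living-body-detection session in the f1..f6 / t1..t7 scheme above."""
    modified_password: bool      # f1: password changed before detection
    relogged_in: bool            # f2: logged in again before detection
    switched_device: bool        # f3: switched login device
    coordinate_jump: bool        # f4: large position change in a short time
    multi_account_device: bool   # f5: multiple accounts on the same device
    multi_account_ip: bool       # f6: multiple accounts from the same IP
    consecutive_failures: int    # t1
    first_face_count: int        # t2: low-similarity face occurrences
    second_face_count: int       # t3: high-similarity face occurrences
    frame_border_prob: float     # t4, in [0, 1]
    moire_prob: float            # t5, in [0, 1]
    reflection_prob: float       # t6, in [0, 1]
    rapid_move_count: int        # t7

    def to_vector(self) -> list[float]:
        # Order matches the training-sample layout [f1..f6, t1..t7].
        return [float(x) for x in (
            self.modified_password, self.relogged_in, self.switched_device,
            self.coordinate_jump, self.multi_account_device,
            self.multi_account_ip, self.consecutive_failures,
            self.first_face_count, self.second_face_count,
            self.frame_border_prob, self.moire_prob, self.reflection_prob,
            self.rapid_move_count)]
```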
After collecting the historical behavior data, feature extraction and labeling can be performed on it to obtain a training set for training the risk prediction model, each sample having the following form:
[f1, f2, f3, …, fi-1, fi, t1, t2, t3, …, tj-1, tj, r]
where r is the risk value labeling information: r = 0 indicates that the behavior is normal (no risk), and r = 1 indicates that it is attack behavior (risk).
And then, inputting the training set into an initial risk prediction model, and adjusting model parameters of the initial risk prediction model according to the difference between the prediction result output by the initial risk prediction model and the risk value marking information until the initial risk prediction model meets a preset convergence condition to obtain a trained risk prediction model.
Optionally, in the embodiment of the present disclosure, the training set may be input into an SVM (Support Vector Machine) for model training, so as to obtain a risk value prediction model.
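As a minimal sketch of steps S11 to S14 with the SVM option just mentioned, assuming the training samples are stored as [f1, …, f6, t1, …, t7, r] rows in a CSV file (the file name, split and hyperparameters are illustrative assumptions, not part of the disclosure):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Each row: [f1..f6, t1..t7, r] with r = 0 (normal) or 1 (attack).
# "history_behavior.csv" is an illustrative path.
data = np.loadtxt("history_behavior.csv", delimiter=",")
X, y = data[:, :-1], data[:, -1]

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# probability=True enables predict_proba, so the model can emit a
# continuous risk value in [0, 1] rather than only a 0/1 class.
model = make_pipeline(StandardScaler(), SVC(probability=True))
model.fit(X_train, y_train)
print("validation accuracy:", model.score(X_val, y_val))
```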
It is to be understood that the embodiments of the present disclosure do not limit the specific type of the risk prediction model, which may include a DNN (Deep Neural Network). The deep neural network may combine multiple kinds of neural networks, including but not limited to at least one of, or a combination, superposition or nesting of at least two of, the following: a CNN (Convolutional Neural Network), an LSTM (Long Short-Term Memory) network, an RNN (Recurrent Neural Network), an attention neural network, and so on.
After the risk prediction model is trained, it can be used to estimate a risk value for the user's behavior before and during living body detection.
Specifically, pre-detection behavior data f1', …, fi' of the target user within the preset time before living body detection are acquired, together with in-detection behavior data t1', …, tj' generated during the detection process. The acquired pre-detection and in-detection behavior data are input into the risk prediction model, which outputs a prediction result r', as follows:
[f1', f2', f3', …, fi-1', fi', t1', t2', t3', …, tj-1', tj', r']
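Continuing the illustrative names from the sketches above, scoring one session then reduces to a probability lookup:

```python
# Example session; all values are made up for illustration.
features = BehaviorFeatures(
    modified_password=True, relogged_in=False, switched_device=True,
    coordinate_jump=False, multi_account_device=False, multi_account_ip=True,
    consecutive_failures=2, first_face_count=0, second_face_count=5,
    frame_border_prob=0.7, moire_prob=0.4, reflection_prob=0.3,
    rapid_move_count=1)

# predict_proba returns [P(normal), P(attack)] per sample; the attack
# probability serves as the prediction result r'.
risk_value = model.predict_proba([features.to_vector()])[0, 1]
print("r' =", risk_value)
```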
further, according to the number distribution of the predicted results r', the embodiment of the present disclosure may classify the risk values into 5 levels, as shown in table 1.
TABLE 1
Risk level            Risk value      Proportion
Extremely low risk    [0, 0.31)       2.30%
Low risk              [0.31, 0.42)    13.60%
Moderate risk         [0.42, 0.63)    68.20%
High risk             [0.63, 0.75)    13.60%
Extremely high risk   [0.75, 1]       2.30%
Adjusting the detection parameters of the target user's living body detection according to the risk value may specifically include: adjusting the detection parameters of the target user's living body detection according to the risk level corresponding to the risk value.
In one application example, if the risk level is moderate, the current detection parameters may not be adjusted. If the risk level is high or extremely high, the current detection parameters can be adjusted to improve the difficulty of in-vivo detection, and the higher the risk level is, the greater the difficulty after adjustment is. If the risk level is low or extremely low, the current detection parameters can be adjusted to reduce the difficulty of in-vivo detection, and the lower the risk level is, the smaller the difficulty after adjustment is.
The manner of dividing the risk level is only an application example of the present disclosure, and the embodiment of the present disclosure does not limit the specific manner of dividing the risk level.
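A direct encoding of the Table 1 bins (the function name is an assumption; the thresholds are the Table 1 values):

```python
def risk_level(risk_value: float) -> str:
    """Map a model risk value r' in [0, 1] to one of the five Table 1 levels."""
    if risk_value < 0.31:
        return "extremely low"
    if risk_value < 0.42:
        return "low"
    if risk_value < 0.63:
        return "moderate"
    if risk_value < 0.75:
        return "high"
    return "extremely high"
```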
In an optional embodiment of the present disclosure, the detection parameter may include at least any one of: the number of target actions, the target action completion time, the maximum live body detection times, an action matching threshold value and a face matching threshold value.
In a specific application, one living body detection usually requires the user to perform n target actions consecutively, each of which must be completed within a specified time (e.g. m seconds). If no user action matching the target action is detected within the specified time, the living body detection fails; and if the number of consecutive failures within a preset time exceeds a preset upper limit, the current target user is prohibited from continuing the face recognition operation. Therefore, after determining the risk value corresponding to the target user's behavior data, the embodiments of the present disclosure may adjust the detection parameters of the living body detection according to the risk value, for example at least one of the number of target actions, the target action completion time, the maximum number of living body detections, the action matching threshold, and the face matching threshold.
The number of target actions is the number of target actions that must be completed consecutively in one living body detection, e.g. n target actions. The target action completion time defines the maximum time allowed to complete each target action. The maximum number of living body detections is the maximum number of consecutive detection failures allowed; for example, after 3 consecutive failures, face recognition is blocked for 10 minutes. The action matching threshold defines the minimum matching degree between a captured user action and the target action; an action whose matching degree exceeds this threshold passes detection. The face matching threshold defines the minimum matching degree between the captured face image and the real face image; a face image whose matching degree exceeds this threshold passes detection. The action matching threshold and the face matching threshold may be floating-point numbers.
Adjusting the detection parameters of the target user's living body detection according to the risk value may specifically include: as the risk value increases, increasing at least one of the following detection parameters of the target user: the number of target actions, the action matching threshold, and the face matching threshold; and decreasing at least one of the following detection parameters: the target action completion time and the maximum number of living body detections.
Among the detection parameters of living body detection, the larger the number n of target actions that must be completed consecutively, the greater the attack difficulty; the shorter the time m allowed for completing each target action, the greater the attack difficulty; the smaller the allowed maximum number k of living body detections, the greater the attack difficulty; and the larger the action matching threshold f1 and the face matching threshold f2, the greater the attack difficulty.
Thus, as the risk value increases, at least one of the following detection parameters of the target user may be increased: the number n of target actions, the action matching threshold f1, and the face matching threshold f2; and at least one of the following may be decreased: the target action completion time m and the maximum number k of living body detections, so as to raise the difficulty of the target user's living body detection.
Increasing n, f1 and f2 and decreasing m and k raises the difficulty of living body detection: the user must complete more actions in a shorter time with fewer errors, which may degrade the user experience. Decreasing n, f1 and f2 and increasing m and k lowers the difficulty and improves the user experience, but may increase the risk of attack.
In specific applications, security and user experience can be balanced according to actual requirements by setting appropriate detection parameters. In one application example of the present disclosure, the living body detection parameters shown in Table 2 may be set for different risk levels to balance security and experience.
TABLE 2
Risk level            n    m    k    f1      f2
Extremely low risk    2    5    4    0.55    0.61
Low risk              3    5    4    0.56    0.63
Moderate risk         4    4    3    0.57    0.66
High risk             5    3    2    0.60    0.71
Extremely high risk   6    2    1    0.61    0.82
It can be understood that, in practical applications, the types of detection parameters can be set according to actual needs, and each detection parameter can be adjusted according to the risk value. For example, the detection parameters may further include the complexity of the target actions: the difficulty of living body detection can be raised by increasing the complexity of the target actions, lowered by decreasing it, and so on.
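The Table 2 policy can likewise be expressed as a simple lookup. The sketch below transcribes the table rows, reusing the risk_level helper from the earlier sketch; all names are assumed for illustration:

```python
from typing import NamedTuple

class DetectionParams(NamedTuple):
    n: int      # number of target actions
    m: int      # seconds allowed per target action
    k: int      # maximum number of living body detections (consecutive failures allowed)
    f1: float   # action matching threshold
    f2: float   # face matching threshold

# Direct transcription of Table 2.
PARAMS_BY_LEVEL = {
    "extremely low":  DetectionParams(2, 5, 4, 0.55, 0.61),
    "low":            DetectionParams(3, 5, 4, 0.56, 0.63),
    "moderate":       DetectionParams(4, 4, 3, 0.57, 0.66),
    "high":           DetectionParams(5, 3, 2, 0.60, 0.71),
    "extremely high": DetectionParams(6, 2, 1, 0.61, 0.82),
}

def detection_params(risk_value: float) -> DetectionParams:
    return PARAMS_BY_LEVEL[risk_level(risk_value)]
```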
After the detection parameters of the living body detection of the target user are adjusted according to the risk values in step 103, the living body detection of the target user can be performed according to the adjusted detection parameters.
In an optional embodiment of the disclosure, after the adjusting the detection parameter of the living body detection of the target user, the method may further include:
step S21, judging whether the number of continuous detection failures in preset time is smaller than the adjusted maximum living body detection number, if so, acquiring action data and face data input by the target user;
step S22, calculating a first matching degree of each action and the target action in the action data according to the adjusted target action number and/or the adjusted target action completion time;
step S23, calculating a second matching degree of the face data and the target face image according to a pre-recorded target face image of the target user;
step S24, if the first matching degree exceeds the adjusted action matching threshold and the second matching degree exceeds the adjusted face matching threshold, determining that the detection result of the target user is successful.
First, it is judged whether the number of consecutive detection failures within the preset time is smaller than the adjusted maximum number of living body detections. If so, subsequent operations continue and the action data and face data input by the target user are received; if not, subsequent detection operations are prohibited, and a prompt such as "try again after 10 minutes" can be issued. The received action data and face data may specifically be video images, captured by the camera of the electronic device, that contain the target user's actions and face.
A first matching degree between each action in the action data and the corresponding target action is then calculated according to the adjusted number of target actions and/or the adjusted target action completion time, in order to judge whether the target user completed the specified number of target actions within the specified time and whether each action matches its target action. If the completion time of each action in the action data is less than the adjusted target action completion time, the first matching degree of each action exceeds the adjusted action matching threshold, and the number of actions contained in the action data matches the adjusted number of target actions, the action data input by the user is determined to pass detection.
While the user's actions are being detected, a face image can be extracted from the received face data, and a second matching degree between the extracted face image and the pre-enrolled target face image of the target user is calculated to judge whether the currently received face data comes from the real target user. If the second matching degree exceeds the adjusted face matching threshold, the identity check of the current target user is determined to pass.
The target face image is used to judge whether the face image of the target user currently performing living body detection is the target user's real face image. It can be a pre-enrolled face image that serves as the user's identity credential, for example the face image registered with the public security department when the identity card was issued, or the face image enrolled when the user opened an account on an electronic payment platform.
And if the first matching degree exceeds the adjusted action matching threshold value and the second matching degree exceeds the adjusted face matching threshold value, determining that the detection result of the target user is successful.
In an optional embodiment of the present disclosure, after the obtaining the motion data and the face data input by the target user, the method may further include: if the action data and/or the face data input by the target user are determined to meet any one of the following conditions, determining that the detection result of the target user is detection failure:
the number of target actions contained in the action data is smaller than the adjusted number of target actions; or
The completion time of any action contained in the action data is longer than the adjusted target action completion time; or
A first matching degree of any one action contained in the action data and the corresponding target action is smaller than the adjusted action matching threshold; or
The second matching degree between a preset number of image frames in the face data and the target face image is smaller than the adjusted face matching threshold.
In the live body detection process of the above steps S21 to S24, if it is determined that the motion data and/or the face data input by the target user satisfy any of the above conditions, it may be determined that the live body detection of this time has failed.
When the living body detection fails, the number of consecutive detection failures is recorded. At the next living body detection, it is judged whether the recorded number of consecutive failures has reached the adjusted maximum number of living body detections; if so, the face recognition operation is prohibited for a preset time.
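A compact sketch of the pass/fail flow of steps S21 to S24 and the failure conditions above, reusing the illustrative DetectionParams from the Table 2 sketch. The action and face matchers are assumed to be external components, which the disclosure does not specify:

```python
def run_liveness_check(params: DetectionParams,
                       consecutive_failures: int,
                       actions: list[tuple[float, float]],
                       face_matches: list[float],
                       max_bad_frames: int = 3) -> bool:
    """Steps S21 to S24: returns True on detection success, False on failure.

    actions: (completion_time_seconds, action_matching_degree) per target
    action; face_matches: per-frame matching degree against the enrolled
    target face image. max_bad_frames stands in for the "preset number of
    image frames" and is an illustrative assumption.
    """
    # S21: refuse if the allowed number of consecutive failures k is used up
    if consecutive_failures >= params.k:
        raise PermissionError("detection locked; try again after the preset time")

    # S22: the right number of actions, each on time and above threshold f1
    if len(actions) < params.n:
        return False
    for completion_time, action_match in actions:
        if completion_time > params.m or action_match < params.f1:
            return False

    # S23/S24: identity check against the face matching threshold f2;
    # too many low-similarity frames means the face data fails
    bad_frames = sum(1 for match in face_matches if match < params.f2)
    return bad_frames < max_bad_frames
```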
In summary, the embodiment of the present disclosure obtains the behavior data of the living body detection of the target user; inputting the behavior data into a risk prediction model, and outputting a risk value corresponding to the behavior data through the risk prediction model; and then, according to the risk value, adjusting detection parameters of the living body detection of the target user so as to improve or reduce the difficulty of the living body detection of the target user. The risk prediction model is obtained by training according to collected historical behavior data of historical living body detection and risk value marking information corresponding to the historical behavior data. Therefore, through the embodiment of the disclosure, the security intensity of the whole detection can be dynamically adjusted based on the user behavior, so that the experience of a normal user can be ensured, the living body detection difficulty of an abnormal user can be improved, and the security of the face recognition process can be improved.
It is noted that, for simplicity of description, the method embodiments are described as a series or combination of actions, but those skilled in the art will recognize that the disclosed embodiments are not limited by the described order of actions, as some steps may occur in other orders or concurrently with other steps. Further, those skilled in the art will appreciate that the embodiments described in the specification are preferred embodiments, and the actions involved are not necessarily required by the disclosed embodiments.
Example two
Referring to fig. 2, a block diagram of a living body detecting apparatus in one embodiment of the present disclosure is shown, specifically as follows.
A behavior data acquiring module 201, configured to acquire behavior data of living body detection of a target user;
the behavior risk prediction module 202 is configured to input the behavior data into a risk prediction model, and output a risk value corresponding to the behavior data through the risk prediction model; the risk prediction model is obtained by training according to collected historical behavior data of historical living body detection and risk value marking information corresponding to the historical behavior data;
and the detection parameter adjusting module 203 is configured to adjust a detection parameter of the living body detection of the target user according to the risk value, so as to improve or reduce the difficulty of the living body detection of the target user.
Specifically, if the risk value output by the risk prediction model is large, the probability that the current behavior data represents an attack is high; the detection parameters of the target user's living body detection can then be adjusted to raise the difficulty, prevent the target user from continuing the attack, and improve the security of the face recognition process. Conversely, if the risk value is small, the probability that the target user's behavior data represents an attack is low; the detection parameters can then be adjusted to lower the difficulty of the living body detection and improve the efficiency and experience of the face recognition process.
Optionally, the detection parameter adjusting module 203 includes:
the first adjusting submodule is used for adjusting the detection parameters of the current living body detection of the target user according to the risk value; or
And the second adjusting submodule is used for adjusting the detection parameters of the next living body detection of the target user according to the risk value.
In the embodiment of the present disclosure, the detection parameter adjusting module 203 may adjust the detection parameters of the current living body detection while its series of action detections has not yet finished (taking effect immediately); alternatively, after the series of action detections of the current living body detection has finished, it may adjust the detection parameters of the target user's next living body detection (taking effect at the next detection).
Optionally, the detection parameter at least includes any one of the following items: the number of target actions, the target action completion time, the maximum living body detection times, an action matching threshold value and a human face matching threshold value;
the detection parameter adjusting module is specifically configured to increase, with an increase in the risk value, at least one of the following detection parameters of the target user: the method comprises the following steps of (1) reducing at least one of the following detection parameters of a target user by the aid of a target action number, an action matching threshold and a face matching threshold: target action completion time and maximum biopsy times.
Specifically, increasing the number n of target actions, the action matching threshold f1 and the face matching threshold f2 and decreasing the target action completion time m and the maximum number k of living body detections raises the difficulty of living body detection: the user must complete more actions in a shorter time with fewer errors, which may degrade the user experience. Conversely, decreasing n, f1 and f2 and increasing m and k lowers the difficulty and improves the user experience, but may increase the risk of attack. In practical applications, appropriate detection parameters can be set according to actual requirements to balance security and experience.
Optionally, the apparatus further comprises:
the judgment acquisition module is used for judging whether the number of continuous detection failures in preset time is less than the maximum living body detection number, and if so, acquiring action data and face data input by the target user;
the first calculation module is used for calculating a first matching degree of each action in the action data and the target action according to the adjusted target action number and/or the adjusted target action completion time;
the second calculation module is used for calculating a second matching degree of the face data and the target face image according to a pre-input target face image of the target user;
and the first determining module is used for determining that the detection result of the target user is successful if the first matching degree exceeds the adjusted action matching threshold and the second matching degree exceeds the adjusted face matching threshold.
After the detection parameter adjusting module 203 adjusts the detection parameters of the living body detection of the target user according to the risk values, the living body detection device can perform the living body detection on the target user according to the adjusted detection parameters, so as to realize the adaptive adjustment of the detection parameters.
Optionally, the apparatus further comprises:
a second determining module, configured to determine that the detection result of the target user is a detection failure if it is determined that the motion data and/or the face data input by the target user satisfy any one of the following conditions:
the number of target actions contained in the action data is smaller than the adjusted number of target actions; or
The completion time of any action contained in the action data is longer than the adjusted target action completion time; or
A first matching degree of any one action contained in the action data and the corresponding target action is smaller than the adjusted action matching threshold; or
The second matching degree between a preset number of image frames in the face data and the target face image is smaller than the adjusted face matching threshold.
In the process of the living body detection by the living body detection device, if the action data and/or the human face data input by the target user are determined to meet any one of the above conditions, the living body detection at this time can be considered to be failed. And under the condition that the living body detection fails, the frequency of continuous detection failure can be recorded, whether the recorded frequency of continuous detection failure reaches the adjusted maximum living body detection frequency is judged during the next living body detection, and if so, the operation of face recognition is prohibited to be executed within the preset time, so that further attack behaviors are prevented.
Optionally, the apparatus further comprises:
the data collection module is used for collecting historical behavior data of historical in-vivo detection;
the data marking module is used for marking the historical behavior data to obtain risk value marking information corresponding to the historical behavior data;
the data input module is used for inputting the historical behavior data and the risk value marking information into an initial risk prediction model by taking the historical behavior data and the risk value marking information as a training set;
and the model training module is used for adjusting model parameters of the initial risk prediction model according to the difference between the prediction result output by the initial risk prediction model and the risk value marking information until the initial risk prediction model meets a preset convergence condition, so as to obtain a trained risk prediction model.
It is to be understood that the specific type of the risk value prediction model is not limited by the embodiments of the present disclosure, and the risk value prediction model may include DNN. The deep neural network may fuse a variety of neural networks including, but not limited to, at least one or a combination, superposition, nesting of at least two of the following: CNN, LSTM network, RNN, attention neural network, etc.
After the risk prediction model training is completed, the trained risk prediction model can be used for estimating the risk value of the behavior of the user before the in vivo detection and the behavior in the in vivo detection.
Optionally, the behavior data includes: pre-detection behavior data and/or in-detection behavior data; the pre-detection behavior data is generated within a preset time before living body detection starts, and the in-detection behavior data is generated during the detection process of living body detection.
In a specific application, the behavior data of the user is of many types, and the embodiment of the present disclosure screens out the following two types of historical behavior data related to risk detection: pre-detection behavior data and in-detection behavior data. Based on the pre-detection and in-detection behavior data, various attack behaviors can be identified, including an attacker using videos or photos to impersonate the target user during living body detection, further improving the security of living body detection.
Optionally, the pre-detection behavior data includes at least any one of the following items: whether the password was modified, whether the user logged in again, whether the login device was switched, whether a coordinate jump occurred, and whether multiple accounts logged in. The in-detection behavior data includes at least any one of the following items: the number of consecutive detection failures, the number of occurrences of a first face, the number of occurrences of a second face, the probability that preset features are present, and the number of rapid face movements.
It is to be understood that the above-mentioned pre-detection behavior data and in-detection behavior data are only an application example of the present disclosure, and the embodiments of the present disclosure do not limit their specific types.
In summary, in the living body detection apparatus according to the embodiment of the present disclosure, behavior data of living body detection of a target user is first acquired by a behavior data acquisition module; then inputting the behavior data into a risk prediction model through a behavior risk prediction module, and further outputting a risk value corresponding to the behavior data through the risk prediction model; and finally, adjusting the detection parameters of the living body detection of the target user through a detection parameter adjusting module according to the risk value so as to improve or reduce the difficulty of the living body detection of the target user. The risk prediction model is obtained by training according to collected historical behavior data of historical living body detection and risk value marking information corresponding to the historical behavior data. Therefore, through the embodiment of the disclosure, the security intensity of the whole detection can be dynamically adjusted based on the user behavior, so that the experience of a normal user can be ensured, the living body detection difficulty of an abnormal user can be improved, and the security of the face recognition process can be improved.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
An embodiment of the present disclosure also provides an electronic device, referring to fig. 3, including: a processor 301, a memory 302, and a computer program 3021 stored on the memory and executable on the processor; the processor implements the living body detection method of the foregoing embodiments when executing the program.
Embodiments of the present disclosure also provide a readable storage medium in which instructions, when executed by a processor of an electronic device, enable the electronic device to perform the living body detection method of the foregoing embodiments.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. In addition, embodiments of the present disclosure are not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the embodiments of the present disclosure as described herein, and any descriptions of specific languages are provided above to disclose the best modes of the embodiments of the present disclosure.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the present disclosure may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that, in the foregoing description of exemplary embodiments of the disclosure, various features of the embodiments are sometimes grouped together in a single embodiment, figure, or description thereof in order to streamline the disclosure and to aid the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in fewer than all features of a single foregoing disclosed embodiment. The claims following the detailed description are therefore expressly incorporated into the detailed description, with each claim standing on its own as a separate embodiment of the disclosure.
Those skilled in the art will appreciate that the modules in a device of an embodiment may be adaptively changed and arranged in one or more devices different from that embodiment. The modules, units, or components of the embodiments may be combined into one module, unit, or component, and they may furthermore be divided into a plurality of sub-modules, sub-units, or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract, and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations in which at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, equivalent, or similar purpose, unless expressly stated otherwise.
The various component embodiments of the disclosure may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. It will be appreciated by those skilled in the art that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of a living body detection device according to embodiments of the present disclosure. Embodiments of the present disclosure may also be implemented as an apparatus or device program for performing part or all of the methods described herein. Such a program implementing the embodiments of the present disclosure may be stored on a computer-readable medium or take the form of one or more signals; such a signal may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-described embodiments illustrate rather than limit the embodiments of the disclosure, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. Embodiments of the disclosure may be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any ordering; these words may be interpreted as names.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The above description is only for the purpose of illustrating the preferred embodiments of the present disclosure and is not to be construed as limiting the embodiments of the present disclosure, and any modifications, equivalents, improvements and the like that are made within the spirit and principle of the embodiments of the present disclosure are intended to be included within the scope of the embodiments of the present disclosure.
The above description is only a specific implementation of the embodiments of the present disclosure, but the scope of the embodiments of the present disclosure is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the embodiments of the present disclosure, and all the changes or substitutions should be covered by the scope of the embodiments of the present disclosure. Therefore, the protection scope of the embodiments of the present disclosure shall be subject to the protection scope of the claims.

Claims (11)

1. A living body detection method, the method comprising:
acquiring behavior data of living body detection of a target user;
inputting the behavior data into a risk prediction model, and outputting a risk value corresponding to the behavior data through the risk prediction model; the risk prediction model is obtained by training according to collected historical behavior data of historical living body detection and risk value marking information corresponding to the historical behavior data;
and adjusting the detection parameters of the living body detection of the target user according to the risk value, so as to raise or lower the difficulty of the living body detection of the target user.
2. The method of claim 1, wherein the adjusting of the detection parameters of the living body detection of the target user according to the risk value comprises:
adjusting the detection parameters of the current living body detection of the target user according to the risk value; or
adjusting the detection parameters of the next living body detection of the target user according to the risk value.
3. The method according to claim 1, wherein the detection parameters comprise at least any one of: the number of target actions, the target action completion time, the maximum number of living body detections, an action matching threshold, and a face matching threshold;
the adjusting of the detection parameters of the living body detection of the target user according to the risk value comprises:
as the risk value increases, increasing at least one of the following detection parameters of the target user: the number of target actions, the action matching threshold, and the face matching threshold; and/or reducing at least one of the following detection parameters of the target user: the target action completion time and the maximum number of living body detections.
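To illustrate the monotone adjustment this claim describes, one simple realization interpolates each parameter between a lenient and a strict setting as the risk value grows. The endpoint values below are illustrative assumptions, not values fixed by the disclosure.

    def interpolate_params(risk: float) -> dict:
        # Assumes the risk value is normalized to [0, 1].
        lenient = {"num_target_actions": 1, "action_match_threshold": 0.60,
                   "face_match_threshold": 0.70, "action_timeout_s": 12.0,
                   "max_attempts": 5}
        strict = {"num_target_actions": 4, "action_match_threshold": 0.85,
                  "face_match_threshold": 0.92, "action_timeout_s": 4.0,
                  "max_attempts": 1}
        t = max(0.0, min(1.0, risk))
        # Action count and both matching thresholds rise with risk; completion
        # time and the maximum number of detections fall with risk.
        params = {k: lenient[k] + t * (strict[k] - lenient[k]) for k in lenient}
        params["num_target_actions"] = round(params["num_target_actions"])
        params["max_attempts"] = round(params["max_attempts"])
        return params

    print(interpolate_params(0.2))  # close to the lenient setting
    print(interpolate_params(0.9))  # close to the strict setting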
4. The method of claim 1, wherein, after the adjusting of the detection parameters of the living body detection of the target user, the method further comprises:
judging whether the number of continuous detection failures within a preset time is less than the maximum number of living body detections, and if so, acquiring the action data and face data input by the target user;
calculating a first matching degree between each action in the action data and the corresponding target action according to the adjusted number of target actions and/or the adjusted target action completion time;
calculating a second matching degree between the face data and a pre-entered target face image of the target user;
and if the first matching degree exceeds the adjusted action matching threshold and the second matching degree exceeds the adjusted face matching threshold, determining that the detection result of the target user is detection success.
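A compact sketch of this post-adjustment flow is given below. The two toy similarity functions are placeholders for real action and face matchers, which the disclosure does not specify; all field names and values are assumptions.

    import random

    def action_similarity(performed: str, target: str) -> float:
        # Toy action matcher; a real system would score pose or landmark sequences.
        return 1.0 if performed == target else random.uniform(0.0, 0.4)

    def face_similarity(frame: list, target_face: list) -> float:
        # Cosine similarity between two equal-length embedding vectors.
        num = sum(a * b for a, b in zip(frame, target_face))
        den = (sum(a * a for a in frame) ** 0.5
               * sum(b * b for b in target_face) ** 0.5)
        return num / den if den else 0.0

    def run_detection(failures, performed, targets, frames, target_face, p):
        if failures >= p["max_attempts"]:
            return "failure"  # attempt budget already exhausted
        # First matching degree: weakest action match; second matching degree:
        # weakest face match across the captured frames.
        first = min(action_similarity(a, t) for a, t in zip(performed, targets))
        second = min(face_similarity(f, target_face) for f in frames)
        ok = (first > p["action_match_threshold"]
              and second > p["face_match_threshold"])
        return "success" if ok else "failure"

    p = {"max_attempts": 3, "action_match_threshold": 0.6,
         "face_match_threshold": 0.7}
    print(run_detection(1, ["blink", "nod"], ["blink", "nod"],
                        [[0.1, 0.9], [0.2, 0.8]], [0.1, 0.9], p))  # success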
5. The method of claim 4, wherein, after the acquiring of the action data and the face data input by the target user, the method further comprises:
if the action data and/or the face data input by the target user are determined to meet any one of the following conditions, determining that the detection result of the target user is detection failure:
the number of actions contained in the action data is smaller than the adjusted number of target actions; or
the completion time of any action contained in the action data is longer than the adjusted target action completion time; or
the first matching degree between any action contained in the action data and the corresponding target action is smaller than the adjusted action matching threshold; or
the second matching degree between a preset number of image frames in the face data and the target face image is smaller than the adjusted face matching threshold.
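The four early-failure conditions above translate into plain guard checks. In the sketch below the dictionary field names and the weak-frame limit are assumptions made for illustration.

    def fails_fast(action_data, face_scores, p) -> bool:
        # Condition 1: fewer actions than the adjusted number of target actions.
        if len(action_data) < p["num_target_actions"]:
            return True
        # Condition 2: some action exceeded the adjusted completion time.
        if any(a["elapsed_s"] > p["action_timeout_s"] for a in action_data):
            return True
        # Condition 3: some action matched its target action too poorly.
        if any(a["match"] < p["action_match_threshold"] for a in action_data):
            return True
        # Condition 4: a preset number of frames fell below the face threshold.
        weak = sum(1 for s in face_scores if s < p["face_match_threshold"])
        return weak >= p["weak_frame_limit"]

    p = {"num_target_actions": 2, "action_timeout_s": 8.0,
         "action_match_threshold": 0.6, "face_match_threshold": 0.7,
         "weak_frame_limit": 3}
    actions = [{"elapsed_s": 3.1, "match": 0.82},
               {"elapsed_s": 4.0, "match": 0.75}]
    print(fails_fast(actions, [0.90, 0.88, 0.65, 0.91], p))  # False: continue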
6. The method of claim 1, wherein, before the inputting of the behavior data into the risk prediction model, the method further comprises:
collecting historical behavior data of historical living body detections;
marking the historical behavior data to obtain risk value marking information corresponding to the historical behavior data;
taking the historical behavior data and the corresponding risk value marking information as a training set, and inputting the training set into an initial risk prediction model;
and adjusting model parameters of the initial risk prediction model according to the difference between the prediction result output by the initial risk prediction model and the risk value marking information until the initial risk prediction model meets a preset convergence condition, so as to obtain a trained risk prediction model.
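Claim 6 outlines ordinary supervised training. A minimal NumPy sketch with a logistic model, a squared-error gap to the labeled risk values, and an explicit convergence test is shown below; the toy features, labels, learning rate, and tolerance are all placeholders.

    import numpy as np

    # Toy training set: rows are historical behavior feature vectors, and y
    # holds the annotated risk values (the risk value marking information).
    X = np.array([[1, 0, 0], [1, 1, 2], [0, 0, 0], [0, 1, 4]], dtype=float)
    y = np.array([0.6, 0.9, 0.1, 0.8])

    w, b = np.zeros(X.shape[1]), 0.0
    lr, prev_loss = 0.5, np.inf
    for step in range(10000):
        pred = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted risk values
        loss = np.mean((pred - y) ** 2)            # gap to the labeled values
        if abs(prev_loss - loss) < 1e-9:           # preset convergence condition
            break
        prev_loss = loss
        grad = 2.0 * (pred - y) * pred * (1.0 - pred) / len(y)
        w -= lr * (X.T @ grad)                     # adjust model parameters
        b -= lr * grad.sum()

    print(step, round(float(loss), 4), np.round(pred, 2))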
7. The method of claim 1, wherein the behavior data comprises pre-detection behavior data and/or in-detection behavior data; wherein the pre-detection behavior data is generated within a preset time before the start of the living body detection, and the in-detection behavior data is generated during the detection process of the living body detection.
8. The method of claim 7, wherein the pre-detection behavior data comprises at least any one of: whether the password was modified, whether the user logged in again, whether the user switched devices to log in, whether the location coordinates jumped, and whether multiple accounts were used to log in; and the in-detection behavior data comprises at least any one of: the number of continuous detection failures, the number of occurrences of a first face, the number of occurrences of a second face, the probability that a preset feature is present, and the number of rapid face movements.
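Before either kind of behavior data reaches the risk prediction model, it has to be encoded as one numeric feature vector. A plausible encoding, with every field name assumed for illustration, is:

    def to_feature_vector(pre: dict, during: dict) -> list:
        # Pre-detection signals of claim 8 encoded as 0/1 flags.
        flags = [int(bool(pre.get(k, False))) for k in
                 ("password_modified", "re_login", "device_switched",
                  "coordinate_jump", "multi_account_login")]
        # In-detection signals of claim 8 as counts and a probability.
        stats = [during.get("consecutive_failures", 0),
                 during.get("first_face_count", 0),
                 during.get("second_face_count", 0),
                 during.get("preset_feature_prob", 0.0),
                 during.get("rapid_face_moves", 0)]
        return flags + stats

    print(to_feature_vector({"re_login": True},
                            {"consecutive_failures": 2,
                             "preset_feature_prob": 0.15}))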
9. A living body detection apparatus, the apparatus comprising:
the behavior data acquisition module is used for acquiring the behavior data of the living body detection of the target user;
the behavior risk prediction module is used for inputting the behavior data into a risk prediction model and outputting a risk value corresponding to the behavior data through the risk prediction model; the risk prediction model is obtained by training according to collected historical behavior data of historical living body detection and risk value marking information corresponding to the historical behavior data;
and the detection parameter adjusting module is used for adjusting the detection parameters of the living body detection of the target user according to the risk value, so as to raise or lower the difficulty of the living body detection of the target user.
10. An electronic device, comprising:
processor, memory and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the liveness detection method according to one or more of claims 1-8 when executing the program.
11. A readable storage medium, characterized in that, when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the living body detection method according to any one of claims 1-8.
CN202010075822.1A 2020-01-22 2020-01-22 Living body detection method, living body detection device, electronic equipment and readable storage medium Withdrawn CN111291668A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010075822.1A CN111291668A (en) 2020-01-22 2020-01-22 Living body detection method, living body detection device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN111291668A 2020-06-16

Family

ID=71021294

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010075822.1A Withdrawn CN111291668A (en) 2020-01-22 2020-01-22 Living body detection method, living body detection device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111291668A (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007280367A (en) * 2006-03-14 2007-10-25 Omron Corp Face collation device
JP2011059791A (en) * 2009-09-07 2011-03-24 Hitachi Solutions Ltd Bioinformation authentication device and bioinformation authentication program
US20170109509A1 (en) * 2014-07-31 2017-04-20 Nok Nok Labs, Inc. System and method for performing authentication using data analytics
US20180034852A1 (en) * 2014-11-26 2018-02-01 Isityou Ltd. Anti-spoofing system and methods useful in conjunction therewith
WO2018139847A1 (en) * 2017-01-25 2018-08-02 국립과학수사연구원 Personal identification method through facial comparison
US20180232904A1 (en) * 2017-02-10 2018-08-16 Seecure Systems, Inc. Detection of Risky Objects in Image Frames
WO2019184124A1 (en) * 2018-03-30 2019-10-03 平安科技(深圳)有限公司 Risk-control model training method, risk identification method and apparatus, and device and medium
CN108875688A (en) * 2018-06-28 2018-11-23 北京旷视科技有限公司 A kind of biopsy method, device, system and storage medium
CN109269556A (en) * 2018-09-06 2019-01-25 深圳市中电数通智慧安全科技股份有限公司 A kind of equipment Risk method for early warning, device, terminal device and storage medium
CN109919754A (en) * 2019-01-24 2019-06-21 北京迈格威科技有限公司 A kind of data capture method, device, terminal and storage medium
CN110110592A (en) * 2019-03-26 2019-08-09 中国人民财产保险股份有限公司 Method for processing business, model training method, equipment and storage medium
CN110276313A (en) * 2019-06-25 2019-09-24 网易(杭州)网络有限公司 Identity identifying method, identification authentication system, medium and calculating equipment
CN110633659A (en) * 2019-08-30 2019-12-31 北京旷视科技有限公司 Living body detection method, living body detection device, computer equipment and storage medium

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111914626A (en) * 2020-06-18 2020-11-10 北京迈格威科技有限公司 Living body identification/threshold value adjustment method, living body identification/threshold value adjustment device, electronic device, and storage medium
CN111861701A (en) * 2020-07-09 2020-10-30 深圳市富之富信息技术有限公司 Wind control model optimization method and device, computer equipment and storage medium
CN112836627A (en) * 2021-01-29 2021-05-25 支付宝(杭州)信息技术有限公司 Living body detection method and apparatus
CN113705428A (en) * 2021-08-26 2021-11-26 北京市商汤科技开发有限公司 Living body detection method and apparatus, electronic device, and computer-readable storage medium
WO2023024473A1 (en) * 2021-08-26 2023-03-02 上海商汤智能科技有限公司 Living body detection method and apparatus, and electronic device, computer-readable storage medium and computer program product
CN115459952A (en) * 2022-08-09 2022-12-09 北京旷视科技有限公司 Attack detection method, electronic device, and computer-readable storage medium

Similar Documents

Publication Publication Date Title
CN111291668A (en) Living body detection method, living body detection device, electronic equipment and readable storage medium
KR102486699B1 (en) Method and apparatus for recognizing and verifying image, and method and apparatus for learning image recognizing and verifying
KR102415503B1 (en) Method for training classifier and detecting object
US20160217198A1 (en) User management method and apparatus
CN107368827B (en) Character recognition method and device, user equipment and server
CN110612530B (en) Method for selecting frames for use in face processing
CN108124486A (en) Face living body detection method based on cloud, electronic device and program product
US11645546B2 (en) System and method for predicting fine-grained adversarial multi-agent motion
CN109902475B (en) Verification code image generation method and device and electronic equipment
CN109413023A (en) The training of machine recognition model and machine identification method, device, electronic equipment
JP2019057815A (en) Monitoring system
KR102552968B1 (en) Method of tracking multiple objects and apparatus for the same
CN113505682B (en) Living body detection method and living body detection device
US11908191B2 (en) System and method for merging asynchronous data sources
WO2023173686A1 (en) Detection method and apparatus, electronic device, and storage medium
CN111160251B (en) Living body identification method and device
CN106250755B (en) Method and device for generating verification code
CN110895602B (en) Identity authentication method and device, electronic equipment and storage medium
CN110288668B (en) Image generation method, device, computer equipment and storage medium
CN110895691B (en) Image processing method and device and electronic equipment
CN109598201B (en) Action detection method and device, electronic equipment and readable storage medium
CN113326829B (en) Method and device for recognizing gesture in video, readable storage medium and electronic equipment
US20220122341A1 (en) Target detection method and apparatus, electronic device, and computer storage medium
CN109376585B (en) Face recognition auxiliary method, face recognition method and terminal equipment
US12100244B2 (en) Semi-supervised action-actor detection from tracking data in sport

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20200616)