CN116486494A - Living body detection method, training method and device of living body detection model - Google Patents
- Publication number
- CN116486494A (application CN202310534152.9A)
- Authority
- CN
- China
- Prior art keywords
- user
- equipment
- attack
- target
- living body
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/94—Hardware or software architectures specially adapted for image or video understanding
- G06V10/95—Hardware or software architectures specially adapted for image or video understanding structured as a network, e.g. client-server architectures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
The disclosure provides a living body detection method, a training method for a living body detection model, and a corresponding device, comprising: in response to a living body detection task being triggered, acquiring user attack risk information of the target user that triggered the task and device attack risk information of the target device that triggered the task; and determining the living body category of the task, which is either living body or attack, according to the user attack risk information and the device attack risk information. This avoids the defect of high hardware cost of living body detection methods based on multi-modal face images and the defect of low detection efficiency of multi-frame-image methods based on multi-action interaction, thereby saving hardware cost, improving detection efficiency, and improving the accuracy and reliability of living body detection.
Description
Technical Field
This specification relates to the technical field of artificial intelligence and can be applied to image recognition; in particular, it relates to a living body detection method and to a training method and device for a living body detection model.
Background
Face recognition technology has become one of the main methods of identity authentication owing to its convenience. However, as an emerging technology, face recognition also faces many new security threats. Among them, the living body attack is a common and high-risk security threat, and it can be largely mitigated by means of living body detection.
In the related art, living body detection methods mainly fall into two categories: living body detection based on multi-modal face images, and living body detection based on multi-frame images of multi-action interaction.
However, the method based on multi-modal face images depends on a multi-modal camera module and therefore has the defect of high hardware cost, while the multi-frame-image method based on multi-action interaction depends on multiple user interactions and therefore has the defect of low efficiency.
It should be noted that the content in the related art is only information known to the inventor, and does not represent that the information has entered the public domain before the filing date of the present disclosure, or that it may be the prior art of the present disclosure.
Disclosure of Invention
The disclosure provides a living body detection method, a training method for a living body detection model, and a training device for the living body detection model, in order to overcome at least one of the above defects.
In a first aspect, the present disclosure provides a method of in vivo detection, the method comprising:
acquiring, in response to a living body detection task being triggered, user attack risk information of the target user that triggered the task and device attack risk information of the target device that triggered the task;
and determining the living body category of the task according to the user attack risk information and the equipment attack risk information, wherein the living body category is living body or attack.
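The two-signal decision of this first aspect can be sketched as follows. This is a minimal illustration only: the function names, the static risk tables, and the noisy-OR fusion rule are all assumptions, since the disclosure leaves the combination rule unspecified and derives the risk values from trained models rather than lookups.

```python
from dataclasses import dataclass

@dataclass
class Task:
    target_user: str
    target_device: str

# Hypothetical risk lookups; in the disclosure these values come from
# trained user/device risk perception models, not static tables.
USER_RISK = {"u1": 0.9, "u2": 0.1}
DEVICE_RISK = {"d1": 0.8, "d2": 0.05}

def detect_liveness(task: Task, threshold: float = 0.5) -> str:
    """Determine the living body category from user- and device-side risk."""
    user_risk = USER_RISK.get(task.target_user, 0.0)
    device_risk = DEVICE_RISK.get(task.target_device, 0.0)
    # Noisy-OR fusion is one plausible combination rule (an assumption).
    combined = 1 - (1 - user_risk) * (1 - device_risk)
    return "attack" if combined >= threshold else "living body"

print(detect_liveness(Task("u1", "d1")))  # → attack
print(detect_liveness(Task("u2", "d2")))  # → living body
```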
In some embodiments, the target user has a target user characteristic; the user attack risk information is determined based on the target user characteristics;
the target device has a target device feature; the device attack risk information is determined based on the target device characteristics.
In some embodiments, the target user characteristic and the target device characteristic are determined from a pre-constructed user device relationship graph;
the user equipment relationship diagram comprises user nodes and equipment nodes, wherein the user nodes comprise user characteristics, and the equipment nodes comprise equipment characteristics; the user node comprises a node of the target user, and the equipment node comprises a node of the target equipment; the user characteristics include the target user characteristics and the device characteristics include the target device characteristics.
In some embodiments, the user device relationship graph further comprises edges between the user node and the device node; the user equipment relationship graph is constructed based on the acquired sample data; the sample data comprises user characteristics of a user, equipment characteristics of equipment and using behavior information of the user using the equipment;
wherein the user node is built based on the user, the device node is built based on the device, and the edge is built based on the usage behavior information.
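The graph construction above can be sketched as follows. The flat record layout `(user_id, user_feature, device_id, device_feature, used)` is a hypothetical flattening of the sample data; the disclosure only states that nodes are built from users and devices and edges from usage-behavior information.

```python
from collections import defaultdict

def build_user_device_graph(samples):
    """Build a bipartite user-device relationship graph from sample data.

    User and device nodes carry their features; an edge is added only
    when usage behavior links the user-device pair.
    """
    graph = {"user_nodes": {}, "device_nodes": {}, "edges": defaultdict(set)}
    for user_id, user_feat, device_id, device_feat, used in samples:
        graph["user_nodes"][user_id] = user_feat
        graph["device_nodes"][device_id] = device_feat
        if used:
            graph["edges"][user_id].add(device_id)
    return graph

g = build_user_device_graph([
    ("u1", [0.2, 0.8], "d1", [0.5, 0.1], True),
    ("u1", [0.2, 0.8], "d2", [0.9, 0.3], False),
])
print(sorted(g["edges"]["u1"]))  # → ['d1']  (no edge to the unused d2)
```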
In some embodiments, the task has a type attribute, which is a login type or a verification type; the user equipment relationship graph comprises a user equipment login relationship graph and a user equipment verification relationship graph;
if the type attribute is the login type, determining the target user characteristic and the target equipment characteristic from the user equipment login relation diagram;
and if the type attribute is the verification type, determining the target user characteristic and the target equipment characteristic from the user equipment verification relation diagram.
In some embodiments, the user attack risk information is obtained by performing attack risk prediction processing on the target user features based on a pre-trained user risk perception model;
The equipment attack risk information is obtained by carrying out attack risk prediction processing on the target equipment characteristics based on a pre-trained equipment risk perception model.
In some embodiments, the target user feature has a target user category attribute, and the target device feature has a target device category attribute; the user risk perception model comprises a risk perception model for each user category attribute, the user category attributes including the target user category attribute; and the device risk perception model comprises a risk perception model for each device category attribute, the device category attributes including the target device category attribute;
the user attack risk information is obtained by performing attack risk prediction processing on the target user feature with the risk perception model corresponding to the target user category attribute;
the device attack risk information is obtained by performing attack risk prediction processing on the target device feature with the risk perception model corresponding to the target device category attribute.
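The per-category model selection can be sketched as a dictionary dispatch. The category names and the toy linear models below are hypothetical stand-ins for the trained risk perception models described above.

```python
# One risk perception model per user category attribute; each maps a
# feature vector to an attack-risk score (toy stand-ins, not real models).
user_models = {
    "frequent": lambda feat: 0.1 * sum(feat),
    "dormant": lambda feat: 0.5 * sum(feat),
}

def predict_user_risk(category, feature):
    """Route the target user's feature to the model for its category."""
    return user_models[category](feature)

print(predict_user_risk("dormant", [0.4, 0.6]))  # → 0.5
```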
In some embodiments, the user risk perception model is obtained by performing clustering processing on the user features to obtain user attributes, and training the user features corresponding to the user attributes;
The equipment risk perception model is obtained by clustering the equipment characteristics to obtain equipment attributes and training the equipment characteristics corresponding to the equipment attributes.
In some embodiments, the user features include the user features in the user device login relationship graph and the user features in the user device verification relationship graph; the device features include the device features in the user device login relationship graph and the device features in the user device verification relationship graph;
the user category attributes are obtained by, for each user, fusing that user's features in the login relationship graph with the user's features in the verification relationship graph to obtain a user fusion feature, and clustering the user fusion features of all users;
the device category attributes are obtained by, for each device, fusing that device's features in the login relationship graph with the device's features in the verification relationship graph to obtain a device fusion feature, and clustering the device fusion features of all devices.
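A minimal sketch of this fuse-then-cluster step follows. Concatenation as the fusion operation and k-means as the clustering method are assumptions; the disclosure names neither.

```python
import numpy as np

def fuse(login_feat, verify_feat):
    """Fuse one node's features from the login and verification graphs."""
    return np.concatenate([login_feat, verify_feat])

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: split fused features into k category attributes."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute centers.
        dists = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        centers = np.array([points[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels

fused = np.array([fuse([0.0, 0.1], [0.1, 0.0]),
                  fuse([0.1, 0.0], [0.0, 0.1]),
                  fuse([5.0, 5.1], [5.1, 5.0])])
labels = kmeans(fused, k=2)
print(labels[0] == labels[1] and labels[0] != labels[2])  # → True
```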
In some embodiments, the method further comprises:
carrying out attack risk prediction processing on the obtained target image triggering the task to obtain image attack risk information;
and determining a living body category of the task according to the user attack risk information and the equipment attack risk information, including: and determining the living body category according to the image attack risk information, the user attack risk information and the equipment attack risk information.
In some embodiments, the image attack risk information includes an image attack probability, the user attack risk information includes a number of user attacks, and the device attack risk information includes a number of device attacks; determining the living body category according to the image attack risk information, the user attack risk information and the equipment attack risk information, including:
calculating the total attack times according to the user attack times and the equipment attack times;
and determining the living body category according to the total attack times and the image attack probability.
In some embodiments, determining the living body category from the total number of attacks and the image attack probability includes:
determining a predicted attack probability corresponding to the total number of attacks;
calculating a total attack probability from the predicted attack probability and the image attack probability;
and determining the living body category to be attack if the total attack probability reaches a preset threshold, or to be living body if it does not.
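The count-to-probability mapping and the final thresholding can be sketched as below. The saturating mapping and the noisy-OR combination are assumptions; the disclosure only states that such a mapping and combination exist.

```python
def attack_probability_from_count(total_attacks):
    """Map a historical attack count to a predicted attack probability.

    A saturating curve is a hypothetical stand-in for the mapping the
    disclosure refers to.
    """
    return total_attacks / (total_attacks + 5.0)

def classify(user_attacks, device_attacks, image_attack_prob, threshold=0.5):
    total = user_attacks + device_attacks         # total number of attacks
    prior = attack_probability_from_count(total)  # predicted attack probability
    # Probability that at least one signal indicates an attack (assumption).
    total_prob = 1 - (1 - prior) * (1 - image_attack_prob)
    return "attack" if total_prob >= threshold else "living body"

print(classify(user_attacks=8, device_attacks=7, image_attack_prob=0.2))  # → attack
print(classify(user_attacks=0, device_attacks=0, image_attack_prob=0.1))  # → living body
```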
In a second aspect, the present disclosure provides a method of training a living body detection model, the method comprising:
acquiring user characteristics and equipment characteristics from a preset user equipment relationship diagram;
training according to the user characteristics to obtain a user risk perception model, wherein the user risk perception model is used for predicting user attack risk information of a target user in a living body detection task;
training according to the equipment characteristics to obtain an equipment risk perception model, wherein the equipment risk perception model is used for predicting equipment attack risk information of target equipment in the task;
the user attack risk information and the equipment attack risk information are used for determining living body categories of the tasks; the living detection model includes the user risk perception model and the device risk perception model.
In some embodiments, the user equipment relationship graph includes a user node, a device node, and an edge between the user node and the device node, the user node includes the user feature, the device node includes the device feature, and the edge is used for characterizing usage behavior information of a user using a device;
the user characteristics and the equipment characteristics are respectively obtained by optimizing adjacent nodes in the user equipment relation diagram, wherein the adjacent nodes are user nodes and equipment nodes which are connected through the edges.
In some embodiments, the user features and the device features are determined based on a multivariate graph feature prediction model, which is obtained by predicting the feature corresponding to each node based on the user device relationship graph and training according to feature difference information between the adjacent nodes.
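A toy version of training on feature-difference information between adjacent nodes is sketched below. The squared-difference loss per edge and plain gradient descent are assumptions; the disclosure does not specify the model or optimizer.

```python
import numpy as np

def train_graph_features(n_users, n_devices, edges, dim=2,
                         lr=0.1, steps=200, seed=0):
    """Learn node features so that connected user/device nodes agree.

    The loss per edge is the squared feature difference between its two
    endpoints, minimized by gradient descent on both node features.
    """
    rng = np.random.default_rng(seed)
    users = rng.normal(size=(n_users, dim))
    devices = rng.normal(size=(n_devices, dim))
    for _ in range(steps):
        for u, d in edges:
            diff = users[u] - devices[d]    # feature difference on this edge
            users[u] = users[u] - lr * diff     # pull the endpoints together
            devices[d] = devices[d] + lr * diff
    return users, devices

users, devices = train_graph_features(1, 1, edges=[(0, 0)])
print(float(np.linalg.norm(users[0] - devices[0])) < 1e-3)  # → True
```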
In some embodiments, training to obtain a user risk perception model according to the user features includes:
clustering the user characteristics to obtain user attributes;
aiming at each user attribute, training a risk perception model corresponding to the user attribute according to the user characteristics of the user attribute;
The user risk perception model comprises risk perception models corresponding to all user attributes.
In some embodiments, the user device relationship graph includes a user device login relationship graph and a user device authentication relationship graph; the user characteristics are obtained by fusing the user characteristics in the user equipment login relation diagram and the user characteristics in the user equipment verification relation diagram.
In some embodiments, training to obtain a device risk perception model according to the device features includes:
clustering the equipment characteristics to obtain equipment attributes;
aiming at each equipment attribute, training a risk perception model corresponding to the equipment attribute according to the equipment characteristics of the equipment attribute;
the equipment risk perception model comprises risk perception models corresponding to equipment attributes.
In some embodiments, the user device relationship graph includes a user device login relationship graph and a user device authentication relationship graph; the device features are the features obtained by fusing the device features in the user equipment login relation diagram and the device features in the user equipment verification relation diagram.
In some embodiments, training to obtain a user risk perception model according to the user features includes:
inputting the user characteristics into a first network model, and outputting predicted user attack risk information;
and calculating a first prediction loss between the predicted user attack risk information and a preset user attack true value, and generating the user risk perception model according to the first prediction loss.
In some embodiments, training to obtain a device risk perception model according to the device features includes:
inputting the equipment characteristics into a second network model, and outputting predicted equipment attack risk information;
and calculating a second prediction loss between the predicted equipment attack risk information and a preset equipment attack true value, and generating the equipment risk perception model according to the second prediction loss.
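A self-contained stand-in for either perception model's training loop is shown below, using logistic regression with a binary cross-entropy prediction loss. The architecture and loss are assumptions; the disclosure only requires a prediction loss between the predicted risk and a preset attack ground truth.

```python
import math

def train_risk_model(features, labels, lr=0.5, epochs=300):
    """Fit a tiny logistic model to predict attack risk from features.

    `labels` play the role of the preset attack ground-truth values;
    the per-sample gradient (p - y) is that of binary cross-entropy.
    """
    dim = len(features[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # predicted attack risk
            g = p - y                       # BCE gradient w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    def model(x):
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        return 1.0 / (1.0 + math.exp(-z))
    return model

model = train_risk_model([[0.0], [1.0]], [0.0, 1.0])
print(model([1.0]) > 0.9, model([0.0]) < 0.1)  # → True True
```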
In a third aspect, the present disclosure provides a living body detection apparatus, the apparatus comprising:
the acquisition unit is used for acquiring, in response to a living body detection task being triggered, user attack risk information of the target user that triggered the task and device attack risk information of the target device that triggered the task;
and the determining unit is used for determining the living body category of the task according to the user attack risk information and the equipment attack risk information, wherein the living body category is living body or attack.
In some embodiments, the target user has a target user characteristic; the user attack risk information is determined based on the target user characteristics;
the target device has a target device feature; the device attack risk information is determined based on the target device characteristics.
In some embodiments, the target user characteristic and the target device characteristic are determined from a pre-constructed user device relationship graph;
the user equipment relationship diagram comprises user nodes and equipment nodes, wherein the user nodes comprise user characteristics, and the equipment nodes comprise equipment characteristics; the user node comprises a node of the target user, and the equipment node comprises a node of the target equipment; the user characteristics include the target user characteristics and the device characteristics include the target device characteristics.
In some embodiments, the user device relationship graph further comprises edges between the user node and the device node; the user equipment relationship graph is constructed based on the acquired sample data; the sample data comprises user characteristics of a user, equipment characteristics of equipment and using behavior information of the user using the equipment;
Wherein the user node is built based on the user, the device node is built based on the device, and the edge is built based on the usage behavior information.
In some embodiments, the task has a type attribute, which is a login type or a verification type; the user equipment relationship graph comprises a user equipment login relationship graph and a user equipment verification relationship graph;
if the type attribute is the login type, determining the target user characteristic and the target equipment characteristic from the user equipment login relation diagram;
and if the type attribute is the verification type, determining the target user characteristic and the target equipment characteristic from the user equipment verification relation diagram.
In some embodiments, the user attack risk information is obtained by performing attack risk prediction processing on the target user features based on a pre-trained user risk perception model;
the equipment attack risk information is obtained by carrying out attack risk prediction processing on the target equipment characteristics based on a pre-trained equipment risk perception model.
In some embodiments, the target user feature has a target user class attribute, and the target device feature has a target device class attribute; the user risk perception model comprises risk perception models corresponding to user class attributes respectively, and the user class attributes comprise the target user class attributes; the equipment risk perception model comprises risk perception models corresponding to equipment attributes respectively, and the equipment attributes comprise the target equipment attributes;
The user attack risk information is obtained by carrying out attack risk prediction processing on the target user characteristics based on a risk perception model corresponding to the target user attribute in the user risk perception model;
the equipment attack risk information is obtained by carrying out attack risk prediction processing on the target equipment characteristics based on a risk perception model corresponding to the target equipment attribute in the equipment risk perception model.
In some embodiments, the user risk perception model is obtained by performing clustering processing on the user features to obtain user attributes, and training the user features corresponding to the user attributes;
the equipment risk perception model is obtained by clustering the equipment characteristics to obtain equipment attributes and training the equipment characteristics corresponding to the equipment attributes.
In some embodiments, the user features include user features in the user device login relationship diagram and user features in the user device authentication relationship diagram; the device features comprise device features in the user equipment login relation diagram and device features in the user equipment verification relation diagram;
The user type attributes are obtained by carrying out fusion processing on user characteristics of the user in the user equipment login relation diagram and user characteristics of the user in the user equipment verification relation diagram aiming at each user to obtain user fusion characteristics, and carrying out clustering processing on the user fusion characteristics corresponding to each user;
the device type attribute is obtained by carrying out fusion processing on the device characteristics of the device in the user device login relation diagram and the device characteristics of the device in the user device verification relation diagram aiming at each device to obtain device fusion characteristics, and carrying out clustering processing on the device fusion characteristics corresponding to each device.
In some embodiments, the apparatus further comprises:
the prediction unit is used for performing attack risk prediction processing on the acquired target image triggering the task to obtain image attack risk information;
and the determining unit is used for determining the living body category according to the image attack risk information, the user attack risk information and the equipment attack risk information.
In some embodiments, the image attack risk information includes an image attack probability, the user attack risk information includes a number of user attacks, and the device attack risk information includes a number of device attacks; the determination unit includes:
The calculating subunit is used for calculating the total attack times according to the user attack times and the equipment attack times;
and the determining subunit is used for determining the living body category according to the total attack times and the image attack probability.
In some embodiments, the determining subunit comprises:
the first determining module is used for determining a predicted attack probability corresponding to the total attack times;
the calculation module is used for calculating the total attack probability according to the predicted attack probability and the image attack probability;
and the second determining module is used for determining the living body type as the attack if the total probability of the attack reaches a preset threshold value, and determining the living body type as the living body if the total probability of the attack does not reach the preset threshold value.
In a fourth aspect, the present disclosure provides a training apparatus for a living body detection model, the apparatus comprising:
the acquisition unit is used for acquiring user characteristics and equipment characteristics from a preset user equipment relationship diagram;
the first training unit is used for training to obtain a user risk perception model according to the user characteristics, wherein the user risk perception model is used for predicting user attack risk information of a target user in a living body detection task;
The second training unit is used for training to obtain a device risk perception model according to the device characteristics, wherein the device risk perception model is used for predicting device attack risk information of target devices in the task;
the user attack risk information and the equipment attack risk information are used for determining living body categories of the tasks; the living detection model includes the user risk perception model and the device risk perception model.
In some embodiments, the user equipment relationship graph includes a user node, a device node, and an edge between the user node and the device node, the user node includes the user feature, the device node includes the device feature, and the edge is used for characterizing usage behavior information of a user using a device;
the user characteristics and the equipment characteristics are respectively obtained by optimizing adjacent nodes in the user equipment relation diagram, wherein the adjacent nodes are user nodes and equipment nodes which are connected through the edges.
In some embodiments, the user features and the device features are determined based on a multivariate graph feature prediction model, which is obtained by predicting the feature corresponding to each node based on the user device relationship graph and training according to feature difference information between the adjacent nodes.
In some embodiments, the first training unit comprises:
the first clustering subunit is used for carrying out clustering processing on the user characteristics to obtain various user attributes;
the first training subunit is used for training a risk perception model corresponding to each user attribute according to the user characteristics of the user attribute aiming at each user attribute;
the user risk perception model comprises risk perception models corresponding to all user attributes.
In some embodiments, the user device relationship graph includes a user device login relationship graph and a user device authentication relationship graph; the user characteristics are obtained by fusing the user characteristics in the user equipment login relation diagram and the user characteristics in the user equipment verification relation diagram.
In some embodiments, the second training unit comprises:
the second clustering subunit is used for clustering the equipment characteristics to obtain equipment attributes;
the second training subunit is used for training a risk perception model corresponding to each equipment attribute according to the equipment characteristics of the equipment attribute;
the equipment risk perception model comprises risk perception models corresponding to equipment attributes.
In some embodiments, the user device relationship graph includes a user device login relationship graph and a user device authentication relationship graph; the device features are the features obtained by fusing the device features in the user equipment login relation diagram and the device features in the user equipment verification relation diagram.
In some embodiments, the first training unit comprises:
the first input subunit is used for inputting the user characteristics into a first network model and outputting predicted user attack risk information;
the first calculating subunit is used for calculating a first prediction loss between the predicted user attack risk information and a preset user attack true value;
and the first generation subunit is used for generating the user risk perception model according to the first prediction loss.
In some embodiments, the second training unit comprises:
the second input subunit is used for inputting the equipment characteristics into a second network model and outputting predicted equipment attack risk information;
the second calculating subunit is used for calculating a second prediction loss between the predicted equipment attack risk information and a preset equipment attack true value;
and the second generation subunit is used for generating the equipment risk perception model according to the second prediction loss.
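The first and second training units above follow the same pattern: feed features into a network model, compute a prediction loss against a preset attack ground truth, and derive the risk perception model from that loss. The sketch below illustrates this pattern with a one-layer logistic model and binary cross-entropy loss, both of which are assumptions (the disclosure does not specify the network architecture or the loss function).

```python
import numpy as np

# Assumed sample data: 8 users/devices with 4-dim features; the attack
# ground truth is synthetic (positive first feature = attack).
rng = np.random.default_rng(1)
features = rng.normal(size=(8, 4))
truth = (features[:, 0] > 0).astype(float)

# One-layer logistic "network model" trained by gradient descent on the
# prediction loss between predicted risk and the attack ground truth.
w = np.zeros(4)
for _ in range(500):
    pred = 1.0 / (1.0 + np.exp(-features @ w))       # predicted attack risk
    grad = features.T @ (pred - truth) / len(truth)  # BCE gradient
    w -= 0.5 * grad

# Final prediction loss, as computed by the calculating subunit.
final_pred = 1.0 / (1.0 + np.exp(-features @ w))
final_loss = -np.mean(truth * np.log(final_pred + 1e-9)
                      + (1 - truth) * np.log(1 - final_pred + 1e-9))
```

The trained weights `w` play the role of the generated risk perception model; `final_loss` falls well below the untrained loss of ln 2, showing the model has fit the ground truth.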
In a fifth aspect, the present disclosure provides a processor-readable storage medium storing a computer program for causing the processor to perform the method of any one of the first aspects; alternatively, the computer program is for causing the processor to perform the method of any of the second aspects.
In a sixth aspect, the present disclosure provides an electronic device, comprising: a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executing computer-executable instructions stored in the memory to implement the method of any one of the first aspects; alternatively, the method of any of the second aspects is implemented.
In a seventh aspect, the present disclosure provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the method of any of the first or second aspects.
In an eighth aspect, the present disclosure provides a living body detection system including:
at least one memory including at least one set of instructions to push information;
at least one processor in communication with the at least one memory;
Wherein the method of any of the first aspects above is implemented when the at least one processor executes the at least one set of instructions.
In a ninth aspect, the present disclosure provides a training system for a living body detection model, comprising:
at least one memory including at least one set of instructions to push information;
at least one processor in communication with the at least one memory;
wherein the method of any of the above second aspects is implemented when the at least one processor executes the at least one set of instructions.
The present disclosure provides a living body detection method, a training method of a living body detection model, and corresponding apparatuses. The method includes: in response to the triggering of a living body detection task, acquiring user attack risk information of the target user triggering the task and device attack risk information of the target device triggering the task, and determining the living body category of the task according to the user attack risk information and the device attack risk information, where the living body category is living body or attack. In this embodiment, the user attack risk information and the device attack risk information are respectively acquired when the task is triggered, so that the living body detection result is determined by combining the two kinds of information. On one hand, no multi-modal camera module needs to be deployed, which avoids the defect of higher hardware cost caused by the living body detection method based on multi-modal face images in the related art and is beneficial to saving hardware cost; on the other hand, no multi-action interaction needs to be executed, which avoids the defect of low detection efficiency caused by the multi-frame image living body detection method based on multi-action interaction in the related art and improves detection efficiency; in yet another aspect, by combining attack risk information of two different dimensions (i.e., the target user dimension and the target device dimension) to determine the living body detection result, the accuracy and reliability of living body detection can be improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present description, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present description, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is an application scenario schematic diagram of a living body detection method according to an embodiment of the disclosure;
FIG. 2 is a schematic diagram of a living body detection method according to one embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a living body detection method according to another embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a training method of a living body detection model according to one embodiment of the present disclosure;
FIG. 5 is a schematic view of a living body detection apparatus according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a training apparatus for a living body detection model according to one embodiment of the present disclosure;
fig. 7 is a hardware configuration diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
It should be understood that the terms "comprises" and "comprising," and any variations thereof, in the embodiments of the disclosure are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to those elements expressly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
The term "and/or" in the embodiments of the present disclosure describes an association relationship of association objects, which indicates that three relationships may exist, for example, a and/or B may indicate: a exists alone, A and B exist together, and B exists alone. The character "/" generally indicates that the context-dependent object is an "or" relationship.
The term "plurality" in the embodiments of the present disclosure means two or more, and other adjectives are similar thereto.
The terms "first," "second," "third," and the like in this disclosure are used for distinguishing between similar objects or entities and are not necessarily for limiting a particular order or sequence, unless otherwise indicated. It is to be understood that the terms so used are interchangeable under appropriate circumstances, such that the embodiments of the disclosure are, for example, capable of operation in sequences other than those illustrated or otherwise described herein.
The term "unit/module" as used in this disclosure refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware or/and software code that is capable of performing the function associated with that element.
To facilitate the reader's understanding of this disclosure, at least some of the terms of this disclosure are now explained as follows:
The face recognition system is a system constructed based on face recognition technology (Face Recognition Technology). Face recognition technology refers to the technology of recognizing a face by using computer technology to analyze and compare facial features. It belongs to biometric recognition technology, which distinguishes individual organisms (generally humans) by their biological features.
Living body attack means bypassing the face recognition system by means of a screen, paper, a mask, or the like.
Living body detection, also called living body anti-attack, refers to detecting whether the operating user is a real living body and intercepting living body attacks by using technologies such as artificial intelligence models.
Clustering (Clustering) is to divide a data set into different classes or clusters according to a preset standard (such as distance), so that the similarity of data objects in the same cluster is as large as possible, and the variability of data objects not in the same cluster is also as large as possible. That is, the data of the same class after clustering are gathered together as much as possible, and the data of different classes are separated as much as possible.
There are multiple cluster analysis methods, and analyzing the same object with different methods may yield different results. Cluster analysis methods include: hierarchical clustering, K-means clustering (K-Means), two-step clustering (TwoStep), and the like. Hierarchical clustering is also called agglomerative clustering (Hierarchical Cluster), and K-Means clustering is also known as fast clustering (K-Means Cluster).
The face-brushing payment is a novel payment mode realized based on artificial intelligence, machine vision, three-dimensional (3D) sensing, big data and other technologies, and has the advantages of being more convenient, safer, good in experience and the like.
In recent years, with the rapid development of internet technology, face recognition technology is applied to various scenes of identity recognition and identity authentication, such as face-brushing payment, face-brushing attendance, face-brushing arrival and the like.
Face recognition technology is fast becoming one of the main methods of identity authentication due to its convenience. However, as an emerging technology, it also faces many new security threats. Among them, living body attacks are relatively common and carry higher risk.
Accordingly, the living body type can be determined by using a living body detection method to avoid living body attack as much as possible.
In the related art, the living body detection method mainly comprises two methods, namely a living body detection method based on multi-mode face images and a living body detection method based on multi-frame images of multi-action interaction.
The implementation principle of the living body detection method based on the multi-mode face image is as follows: the method comprises the steps that a multi-mode camera module is mounted in the terminal equipment, the terminal equipment collects multi-mode face images of a user through the multi-mode camera module, the multi-mode face images are input into a pre-trained multi-mode living body detection model, and a living body detection result is output.
The multi-modal face images include three-primary-color (RGB) images, Near Infrared (NIR) images, Depth images, thermal imaging images, and the like.
Although introducing multi-modal face images to determine the living body detection result can improve living body detection performance, the multi-modal camera module is costly; therefore, performing living body detection with the multi-modal face image method has the defect of higher hardware cost.
The multi-frame image living body detection method based on multi-action interaction is realized according to the following principle: the face recognition system collects face images corresponding to the interaction actions executed by the user during interaction, namely, multiple frames of face images are collected, so that living body detection is completed based on the multiple frames of face images.
Wherein the interaction includes blinking, shaking, etc.
However, in this living body detection method, the face recognition system needs to capture each interaction action, so it must collect face images for a long time and requires long-time cooperation from the user; that is, it has the defects of low living body detection efficiency and poor user experience.
It should be noted that the content in the related art is only information known to the inventor, and does not represent that the information has entered the public domain before the filing date of the present disclosure, or that it may be the prior art of the present disclosure.
In order to avoid at least one of the above problems, the present disclosure proposes the following technical idea: when a living body detection task is triggered, the living body detection apparatus acquires attack risk information corresponding to each of the user dimension and the device dimension, and determines the living body category (i.e., the living body detection result) by combining the attack risk information of the two different dimensions, thereby completing living body detection.
Before explaining the implementation principle of the living body detection method of the present disclosure, an application scenario of the living body detection method of the present disclosure is exemplarily described to deepen the reader's understanding of the living body detection method of the present disclosure.
In connection with the above analysis, the in vivo detection method of the present disclosure can be applied to different scenarios. For example, in a face-brushing payment scenario, the collected face image of the user to be paid may be subjected to living detection by the living detection method of the present disclosure; in another example, in a face-brushing attendance scene, the collected face image of the user may be subjected to in-vivo detection by the in-vivo detection method of the present disclosure, and so on, which will not be described in detail herein.
Fig. 1 is a schematic view of an application scenario of a living body detection method according to an embodiment of the present disclosure, wherein the living body detection method of the present disclosure may be applied to a living body detection system 100 as shown in fig. 1. As shown in fig. 1, the living body detection system 100 may include a target user 101, a client 102, a server 103, and a network 104.
The target user 101 may be a user who triggers the living body detection of the target face image (i.e., the face image of the target user 101), and the target user 101 may perform the operation of living body detection at the client 102.
The client 102 may be a device that performs living body detection on the target face image in response to a living body detection operation of the target user 101. That is, the living body detection method may be performed on the client 102. In this case, the client 102 may store data or instructions for performing the living body detection method described in this specification, and may execute or be used to execute the data or instructions.
In some embodiments, the client 102 may include a hardware device having a data information processing function and a program necessary to drive the hardware device to operate. As shown in fig. 1, a client 102 may be communicatively connected to a server 103. The server 103 may be in communication with one client 102 or may be in communication with a plurality of clients 102.
In some embodiments, client 102 may interact with server 103 over network 104 to receive or send messages, etc.
In some embodiments, the client 102 may include a mobile device, a tablet, a laptop, a built-in device of a motor vehicle, or the like, or any combination thereof.
In some embodiments, the mobile device may include a smart home device, a smart mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof.
In some embodiments, the smart home device may include a smart television, a desktop computer, or the like, or any combination thereof.
In some embodiments, the smart mobile device may include a smart phone, personal digital assistant, gaming device, navigation device, etc., or any combination thereof.
In some embodiments, the virtual reality device or augmented reality device may include a virtual reality helmet, virtual reality glasses, a virtual reality patch, an augmented reality helmet, augmented reality glasses, an augmented reality patch, or the like, or any combination thereof. For example, the virtual reality device or the augmented reality device may include Google Glass, a head-mounted display, a virtual reality (VR) headset, and the like.
In some embodiments, built-in devices in a motor vehicle may include an on-board computer, an on-board television, and the like.
In some embodiments, the client 102 may include an image capture device and an audio capture device for capturing user data of an account number.
In some embodiments, the image capture device may be a two-dimensional image capture device (e.g., an RGB camera) alone, or a combination of a two-dimensional image capture device (e.g., an RGB camera) and a depth image capture device (e.g., a 3D structured light camera, a laser detector, etc.).
In some embodiments, the client 102 may be a device with positioning technology for locating the position of the client 102.
In some embodiments, the client 102 may be installed with one or more Applications (APP). The APP can provide the target user 101 with the ability to interact with the outside world through the network 104. APP includes, but is not limited to: web browser-like APP programs, search-like APP programs, chat-like APP programs, shopping-like APP programs, video-like APP programs, financial-like APP programs, instant messaging tools, mailbox clients, social platform software, and the like.
In some embodiments, the client 102 may have a target APP installed thereon. The target APP can collect facial images, corresponding audio information, and the like of the user corresponding to the multiple accounts for the client 102, thereby obtaining a user data set.
In some embodiments, the target user 101 may also trigger a liveness detection request (i.e., a task of triggering liveness detection) through the target APP. The target APP may perform the living body detection method described in the present specification in response to the living body detection request. The living body detection method will be described in detail later.
The server 103 may be a server that provides various services, for example, a background server that provides support for the user data sets and account login information corresponding to a plurality of accounts collected on the client 102, and for living body detection of the plurality of accounts.
In some embodiments, the in-vivo detection method may be performed on the server 103. At this time, the server 103 may store data or instructions to perform the living body detection method described in the present specification, and may execute or be used to execute the data or instructions.
In some embodiments, the server 103 may include a hardware device having a data information processing function and a program necessary to drive the hardware device to operate. Similarly, the server 103 may be communicatively connected to one client 102 and receive data transmitted from that client 102, or may be communicatively connected to a plurality of clients 102 and receive data transmitted from each client 102.
Network 104 is a medium used to provide communication connections between clients 102 and servers 103. The network 104 may facilitate the exchange of information or data. As shown in fig. 1, a client 102 and a server 103 may be connected to a network 104, respectively, and mutually transmit information or data through the network 104.
In some embodiments, the network 104 may be any type of wired or wireless network, or a combination thereof. For example, the network 104 may include a cable network, a wired network, a fiber optic network, a telecommunications network, an intranet, the Internet, a local area network (Local Area Network, LAN), a wide area network (Wide Area Network, WAN), a wireless local area network (Wireless Local Area Network, WLAN), a metropolitan area network (Metropolitan Area Network, MAN), a public switched telephone network (Public Switched Telephone Network, PSTN), a Bluetooth network, a ZigBee network, a Near Field Communication (NFC) network, or the like.
In some embodiments, network 104 may include one or more network access points. For example, the network 104 may include a wired or wireless network access point, such as a base station or an internet switching point, through which one or more components of the client 102 and server 103 may connect to the network 104 to exchange data or information.
It should be understood that the number of clients 102, servers 103, and networks 104 in fig. 1 is merely illustrative. There may be any number of clients 102, servers 103, and networks 104, as desired for implementation.
It should be noted that, the living body detection method provided in the present disclosure may be performed entirely on the client 102, may be performed entirely on the server 103, may be performed partially on the client 102, and may be performed partially on the server 103.
That is, fig. 1 and the above description with respect to fig. 1 are merely for exemplarily illustrating application scenarios to which the living body detection method of the present disclosure may be applied, and are not to be construed as limiting the application scenarios.
Referring to fig. 2, fig. 2 is a schematic diagram of a living body detection method according to an embodiment of the disclosure. As shown in fig. 2, the method includes:
s201: and acquiring user attack risk information of a target user triggering the task and equipment attack risk information of target equipment triggering the task in response to the triggering of the task detected by the living body.
The execution body of the present embodiment may be a living body detection device, which may be a server, a terminal device, a processor, a chip, or the like, which are not listed here.
If the living body detection device is a server, the living body detection device can be an independent server or a cluster server; the cloud server may be a cloud server or a local server, which is not limited in this embodiment.
The living body detection device may be a client, a server, or both a client and a server, for example, in conjunction with the application scenario shown in fig. 1.
The target user may be understood as the user at whom the living body detection method is aimed. The target device may be understood as the device used by the target user when living body detection is performed. The user attack risk information may be understood as information, determined from the dimension of the target user, about whether the target user is an attacking user; for example, it may characterize the likelihood or confidence that the target user is an attacking user. The device attack risk information may be understood as information, determined from the dimension of the target device, about whether the target device is an attacking device; for example, it may characterize the likelihood or confidence that the target device is an attacking device.
The method for acquiring the user attack risk information and the device attack risk information by the living body detection device according to the embodiment is not limited, and may be acquired by a prediction method, or may be acquired by a model method, or the like.
The prediction manner may be understood as the living body detection apparatus determining the user attack risk information from the historical attack situation of the target user, and determining the device attack risk information from the historical attack situation of the target device. The model manner may be understood as the living body detection apparatus training, based on sample data, a network model capable of determining the user attack risk information, so as to obtain the user attack risk information based on the trained network model; similarly, it may train, based on sample data, a network model capable of determining the device attack risk information, so as to obtain the device attack risk information based on the trained network model.
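The "prediction manner" above can be sketched as deriving a risk score from historical attack records of the target user or target device; the Laplace-smoothed rate below is an assumed formula for illustration, not one fixed by the disclosure.

```python
# Hypothetical sketch: attack risk as a smoothed historical attack rate.
def historical_risk(attack_count, total_count, prior=0.01, strength=10):
    """Laplace-smoothed historical attack rate as a risk score in (0, 1)."""
    return (attack_count + prior * strength) / (total_count + strength)

# Assumed historical records for a target user and a target device.
user_risk = historical_risk(attack_count=3, total_count=40)
device_risk = historical_risk(attack_count=0, total_count=25)
```

The smoothing prior keeps a device with no history from receiving a risk of exactly zero, which is one common design choice for such scores.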
As can be seen from the above analysis, the living body detection method of the embodiments of the present disclosure may be applied to different application scenarios; accordingly, the manner in which the task is triggered may differ according to the application scenario, and this embodiment does not limit how the task is triggered.
For example, if the living body detection method of the embodiment of the present disclosure is applied to the application scenario of face payment, when the target user starts the function of face payment of the payment APP of the target device (such as the terminal device), the task is triggered. For another example, if the living body detection method of the embodiment of the present disclosure is applied to an application scenario of face-brushing attendance, when a target user starts a function of face-brushing attendance of an attendance APP of a target device (such as a terminal device), a task is triggered.
S202: and determining the living body category of the task according to the user attack risk information and the equipment attack risk information, wherein the living body category is living body or attack.
By way of example, a living class may be understood as a living detection result, which may be living or may be an attack.
Based on the above analysis, the present disclosure provides a living body detection method, which includes: in response to the triggering of a living body detection task, acquiring user attack risk information of the target user triggering the task and device attack risk information of the target device triggering the task, and determining the living body category of the task according to the user attack risk information and the device attack risk information, where the living body category is living body or attack. In this embodiment, the living body detection apparatus respectively acquires the user attack risk information and the device attack risk information when the task is triggered, and determines the living body detection result by combining the two kinds of information. On one hand, no multi-modal camera module needs to be deployed, which avoids the defect of higher hardware cost caused by the living body detection method based on multi-modal face images in the related art and is beneficial to saving hardware cost; on the other hand, no multi-action interaction needs to be executed, which avoids the defect of low detection efficiency caused by the multi-frame image living body detection method based on multi-action interaction in the related art and improves detection efficiency; in yet another aspect, by combining attack risk information of two different dimensions (i.e., the target user dimension and the target device dimension) to determine the living body detection result, the accuracy and reliability of living body detection can be improved.
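Determining the living body category from the two risk dimensions can be sketched as a thresholded combination; the equal weights and the 0.5 threshold below are assumptions, since the disclosure only requires that both dimensions be combined.

```python
# Hypothetical combination of user-dimension and device-dimension risk.
def living_category(user_risk, device_risk, w_user=0.5, threshold=0.5):
    """Weighted combination of the two risk scores, thresholded to a category."""
    score = w_user * user_risk + (1 - w_user) * device_risk
    return "attack" if score >= threshold else "living"
```

For example, a high-risk user on a high-risk device would be classified as an attack, while low scores on both dimensions yield a living body.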
In order to make the reader more deeply understand the implementation principle of the present disclosure, the living body detection method of the present disclosure will now be explained in detail with reference to fig. 3. Wherein, fig. 3 is a schematic diagram of a living body detection method according to another embodiment of the disclosure, as shown in fig. 3, the method includes:
s301: and responding to the triggered task of the living body detection, and acquiring a target image, a target user characteristic and a target device characteristic of the triggered task.
For example, if the target user triggers a task through the target device, the living body detection apparatus acquires a face image of the target user, which may be referred to as a target image, and acquires a feature of the target user, which may be referred to as a target user feature, and acquires a feature of the target device, which may be referred to as a target device feature.
The target user features are used for describing characteristics of the target user, such as age, gender, number of task triggering times, account number of task triggering and the like of the target user. The characteristics of the target device include characteristics of the target device itself and characteristics of the target user using the target device, such as characteristics of the target device itself including identification, model number, location, etc., characteristics of the target user using the target device include frequency, time distribution, etc. of the target user using the target device.
Taking living body detection devices as cloud servers deployed on a cloud platform and application scenes as face-brushing payment as examples, the steps can be understood as follows: the target user can open the APP of face-brushing payment of the target device (such as a mobile phone) in a touch mode so as to trigger a task of living body detection through the target device, an image acquisition device (such as a camera) is arranged on the target device, the target device acquires a face image (namely a target image) of the target user through the image acquisition device and sends the target image to a cloud server, and accordingly, the cloud server acquires the target image and acquires characteristics (namely characteristics of the target user) and characteristics (namely characteristics of the target device) of the target user.
The method for the cloud server to acquire the target user characteristics and the target device characteristics is not limited, for example, the cloud server may acquire the target user characteristics and the target device characteristics in an online manner or may acquire the target user characteristics and the target device characteristics in an offline manner.
The online acquisition mode can be understood as that the target user characteristics and the target device characteristics are not stored in the cloud server in advance, and the cloud server acquires the target user characteristics and the target device characteristics in a network communication mode under the condition that the task is triggered.
The offline acquisition can be understood as pre-storing the target user characteristics and the target device characteristics in the cloud server, and the cloud server acquires the pre-stored target user characteristics and target device characteristics under the condition that the task is triggered.
If the acquisition is performed offline, the acquisition may be performed in a tabular manner, or may be performed in a relational graph (e.g., node graph) manner, which is not limited in this embodiment.
Illustratively, in some embodiments, the target user characteristics and the target device characteristics are determined by the cloud server from a pre-built user device relationship graph. The user equipment relationship diagram comprises user nodes and equipment nodes, wherein the user nodes comprise user characteristics, and the equipment nodes comprise equipment characteristics; the user nodes comprise nodes of target users, and the equipment nodes comprise nodes of target equipment; the user features include target user features and the device features include target device features.
For example, in conjunction with the above analysis, the cloud server may pre-construct and store a user device relationship graph to obtain target user features and target device features from the user device relationship graph.
The user equipment relationship graph can represent the relationship between the user and the equipment, and specifically can be the use relationship between the user and the equipment. One user may use one device, or may use a plurality of devices. One device may be used by one user or may be used by a plurality of users.
The user equipment relationship graph may be a node graph, where the user equipment relationship graph includes a plurality of nodes, and one node corresponds to one user or one device. For example, suppose the user equipment relationship diagram includes K nodes (K is a positive integer greater than 1), namely node 1, node 2, and so on up to node K. If node 1 is a node of user 1, the characteristics of user 1 are stored in node 1; if node 2 is a node of device 2, the characteristics of device 2 are stored in node 2, and so on, which are not listed here.
Correspondingly, under the condition that a target user triggers a task through target equipment, the cloud server can determine a node of the target user from the user equipment relationship diagram and acquire the characteristics of the target user from the node; the cloud server can also determine a node of the target device from the user device relationship diagram, and acquire the target device characteristics from the node.
In this embodiment, by constructing the user equipment relationship diagram in advance to acquire the target user feature and the target equipment feature from the user equipment relationship diagram, the efficiency of acquiring the target user feature and the target equipment feature can be improved, thereby improving the efficiency of living body detection.
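A minimal sketch of such a node lookup is given below. The class name, node-id scheme, and feature values are hypothetical illustrations; the patent does not prescribe a concrete storage structure.

```python
# A minimal sketch of feature lookup in a user-device relationship graph.
# The class name, node-id scheme, and feature values are hypothetical; the
# patent does not prescribe a concrete storage structure.
class RelationshipGraph:
    def __init__(self):
        self.nodes = {}  # node id -> feature vector stored in that node

    def add_node(self, node_id, features):
        self.nodes[node_id] = features

    def get_features(self, node_id):
        # Returns the features stored in the node (user or device features).
        return self.nodes.get(node_id)

graph = RelationshipGraph()
graph.add_node("user:1", [0.2, 0.7])    # user node storing user 1's features
graph.add_node("device:2", [0.9, 0.1])  # device node storing device 2's features

# When a target user triggers a task through a target device, the server
# simply looks up the corresponding nodes:
target_user_features = graph.get_features("user:1")
target_device_features = graph.get_features("device:2")
```

Because the features are pre-stored in the nodes, acquisition reduces to a dictionary lookup, which is the efficiency gain described above.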
In some embodiments, the user device relationship graph further comprises edges between the user nodes and the device nodes; the user equipment relationship graph is constructed based on the acquired sample data; the sample data comprises user characteristics of a user, device characteristics of a device and using behavior information of the user using the device.
Wherein the user nodes are built based on users, the device nodes are built based on devices, and the edges are built based on usage behavior information.
Illustratively, in connection with the above analysis, the sample data includes data for A users and data for B devices, where A+B=K. Nodes are built according to the users and the devices: the nodes corresponding to the users may be called user nodes, and the nodes corresponding to the devices may be called device nodes. For each node, the cloud server stores the data corresponding to that node. For example, if the node is user node 1 corresponding to user 1, the cloud server stores the user data of user 1 to user node 1; if the node is device node 2 corresponding to device 2, the cloud server stores the device data of device 2 to device node 2.
The usage behavior information is used for representing the behavior of the user using the device, and the cloud server constructs edges according to whether the user uses the device or not. For example, if the usage behavior information characterizes that the device 2 is used by the user 1, the cloud server builds an edge for connecting the user node 1 and the device node 2.
In some embodiments, the usage behavior information has a usage time, and if the cloud server determines that the user uses the device within a preset duration according to the usage time, the cloud server constructs an edge based on the usage behavior information.
Similarly, the preset duration may be determined based on a requirement, a history, a test, and the like, which is not limited in this embodiment.
In this embodiment, the cloud server constructs the user equipment relationship graph by combining sample data including user characteristics, device characteristics and usage behavior information, so that the reliability and accuracy of the constructed user equipment relationship graph can be improved.
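The edge-construction rule above (an edge is built only if the usage time falls within a preset duration) can be sketched as follows. The record layout and the 30-day window are assumed examples, not values from the patent.

```python
from datetime import datetime, timedelta

# Sketch of edge construction from usage-behavior records: an edge between a
# user node and a device node is built only if the usage time falls within a
# preset duration. Record layout and the 30-day window are assumed examples.
PRESET_DURATION = timedelta(days=30)

def build_edges(usage_records, now):
    edges = set()
    for user_id, device_id, used_at in usage_records:
        if now - used_at <= PRESET_DURATION:  # user used the device recently
            edges.add((user_id, device_id))
    return edges

now = datetime(2023, 5, 1)
records = [
    ("user:1", "device:2", datetime(2023, 4, 20)),  # within the window -> edge
    ("user:3", "device:2", datetime(2022, 1, 1)),   # stale record -> no edge
]
edges = build_edges(records, now)
```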
In some embodiments, the task has a type attribute, which is a login type or a verification type; the user equipment relationship graph comprises a user equipment login relationship graph and a user equipment verification relationship graph.
If the type attribute is a login type, the target user characteristic and the target device characteristic are determined from the user device login relation diagram.
If the type attribute is a verification type, the target user characteristic and the target device characteristic are determined from the user device verification relationship diagram.
For example, if the type attribute is a login type, the task is a login task; if the type attribute is a verification type, the task is a verification task. For tasks with different types of attributes, the cloud server can construct different user equipment relationship diagrams, for example, the cloud server can refer to a user equipment relationship diagram corresponding to a login task as a user equipment login relationship diagram, and can refer to a user equipment relationship diagram corresponding to a verification task as a user equipment verification relationship diagram.
Correspondingly, aiming at a login task, the cloud server acquires target user characteristics and target device characteristics from a user device login relation diagram. Aiming at the verification task, the cloud server acquires the target user characteristics and the target device characteristics from the user device verification relation diagram.
It should be understood that the above description is merely exemplary of the type attribute given by taking the login type and the authentication type as examples, and is not to be construed as limiting the type attribute.
In this embodiment, the cloud server constructs user equipment relationship diagrams corresponding to different type attributes, so as to determine, from these user equipment relationship diagrams, the one corresponding to the type attribute of the task, and acquire the target user characteristics and the target device characteristics from the determined user equipment relationship diagram. This improves the flexibility and diversity of living body detection, can meet different scene requirements, and can improve the accuracy and reliability of living body detection through the targeted feature acquisition.
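The selection by type attribute amounts to a simple dispatch, sketched below with placeholder graph contents (hypothetical names, not from the patent).

```python
# Sketch of selecting the relationship graph by the task's type attribute.
# Graph contents are placeholders; real graphs would hold nodes and edges.
login_graph = {"kind": "login"}
verify_graph = {"kind": "verification"}

graphs = {"login": login_graph, "verification": verify_graph}

def select_graph(type_attribute):
    # login task -> login relationship graph; verification task -> verification graph
    return graphs[type_attribute]

selected = select_graph("login")
```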
S302: inputting the target user characteristics into a pre-trained user risk perception model, outputting user attack risk information, inputting the target device characteristics into a pre-trained device risk perception model, and outputting device attack risk information.
The type, architecture, parameters and the like of the user risk perception model adopted by the cloud server are not limited, and can be determined based on requirements, historical records and experiments. Similarly, the training manner and execution subject of the user risk perception model are not limited. For example, the cloud server may collect sample data, where the sample data is a data set including user features, and train a preset network model based on the sample data, so that the preset network model learns to predict the user attack risk corresponding to the user features in the sample data, such as the probability or level that the user corresponding to the user features is an attacking user, thereby obtaining the user risk perception model.
According to the analysis, the method for acquiring the user attack risk information by the cloud server comprises a prediction method and a model method, and in the step, the cloud server acquires the user attack risk information by the model method, so that the efficiency and the reliability for acquiring the user attack risk information can be relatively improved.
Similarly, the type, architecture, parameters and the like of the equipment risk perception model adopted by the cloud server are not limited, and can be determined based on requirements, historical records and experiments. Likewise, the training manner and execution subject of the device risk perception model are not limited. For example, the cloud server may collect sample data, where the sample data is a data set including device features, and train a preset network model based on the sample data, so that the preset network model learns to predict the device attack risk corresponding to the device features in the sample data, such as the probability or level that the device corresponding to the device features is an attacking device, thereby obtaining the device risk perception model.
According to the analysis, the method for acquiring the equipment attack risk information by the cloud server comprises a prediction method and a model method, and in the step, the cloud server acquires the equipment attack risk information by the model method, so that the efficiency and the reliability for acquiring the equipment attack risk information can be relatively improved.
In some embodiments, the target user feature has a target user class attribute and the target device feature has a target device class attribute; the user risk perception model comprises risk perception models corresponding to user class attributes respectively, and the user class attributes comprise target user class attributes; the equipment risk perception model comprises risk perception models corresponding to equipment class attributes respectively, and the equipment class attributes comprise target equipment class attributes.
The user attack risk information is obtained by performing attack risk prediction processing on the target user characteristics based on the risk perception model corresponding to the target user class attribute in the user risk perception model.

The device attack risk information is obtained by performing attack risk prediction processing on the target device characteristics based on the risk perception model corresponding to the target device class attribute in the device risk perception model.
By way of example, a user class attribute may be understood as a class to which a user feature corresponds, different users may have the same user feature, may have different user features, and may correspond to the same class, or may correspond to different classes. The device class attribute may be understood as a class to which the device feature corresponds, and different devices may have the same device feature, may have different device features, and may correspond to the same class or may correspond to different classes.
Accordingly, for different classes (including a class of a user and a class of a device), risk perception models (including a risk perception model corresponding to the class of the user and a risk perception model corresponding to the class of the device) each corresponding to the different classes may be trained in advance.
For example, suppose the user features fall into N classes (N is a positive integer greater than or equal to 1), namely the first class to the N-th class. The cloud server pre-constructs, for the user features of each class, a user risk perception model for that class, so as to obtain N user risk perception models. Correspondingly, if the target user class attribute of the target user characteristic indicates that the target user characteristic belongs to the first class, the cloud server inputs the target user characteristic into the risk perception model corresponding to the first class among the N user risk perception models, and outputs the user risk perception information.

Similarly, suppose the device features fall into M classes (M is a positive integer greater than or equal to 1), namely the first class to the M-th class. The cloud server pre-constructs, for the device features of each class, a device risk perception model for that class, so as to obtain M device risk perception models. Correspondingly, if the target device class attribute of the target device characteristic indicates that the target device characteristic belongs to the first class, the cloud server inputs the target device characteristic into the risk perception model corresponding to the first class among the M device risk perception models, and outputs the device risk perception information.
In this embodiment, the cloud server determines the user risk perception information corresponding to the target user characteristics by combining the target user class attribute, which makes the user risk perception targeted and thereby improves the accuracy and reliability of the user risk perception information. Similarly, the cloud server determines the device risk perception information corresponding to the target device characteristics by combining the target device class attribute, which makes the device risk perception targeted and thereby improves the accuracy and reliability of the device risk perception information.
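The per-class routing described above can be sketched as follows. The stand-in "models" are simple callables with purely illustrative scale values; real models would be trained networks.

```python
# Sketch of routing a feature to the risk perception model matching its class
# attribute. The stand-in "models" are simple callables; real models would be
# trained networks, and the scale values are purely illustrative.
def make_model(scale):
    return lambda features: min(1.0, scale * sum(features))

# N = 3 user classes, one risk perception model per class
user_models = {cls: make_model(0.1 * (cls + 1)) for cls in range(3)}

def perceive_user_risk(user_features, user_class):
    model = user_models[user_class]  # pick the class-specific model
    return model(user_features)

# A feature with class attribute 0 is routed to the class-0 model
risk = perceive_user_risk([1.0, 2.0], user_class=0)
```

The device-side routing would follow the same pattern with M device-class models.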
In some embodiments, the user risk perception model is obtained by performing clustering processing on user features to obtain user class attributes, and training on the user features corresponding to each user class attribute.

The device risk perception model is obtained by performing clustering processing on device features to obtain device class attributes, and training on the device features corresponding to each device class attribute.
In this embodiment, the clustering analysis method adopted in the cloud server clustering process is not limited, for example, the cloud server may adopt a systematic clustering method, a K-means clustering method, or a second-order clustering method.
The cloud server trains the user risk perception model and the equipment risk perception model through combining clustering, and can construct risk perception models (comprising a user risk perception model corresponding to the user attribute and an equipment risk perception model corresponding to the equipment attribute) corresponding to different types of attributes (comprising the user attribute and the equipment attribute) respectively, so that the pertinence of risk perception (comprising the user risk perception and the equipment risk perception) is improved through the corresponding relation between the user characteristics and the user risk perception model and the corresponding relation between the equipment characteristics and the equipment risk perception model, and the accuracy and the effectiveness of the risk perception are improved.
In some embodiments, the user features include user features in a user device login relationship diagram and user features in a user device authentication relationship diagram; the device characteristics include device characteristics in a user device login relationship diagram and device characteristics in a user device authentication relationship diagram.
The user class attributes are obtained by, for each user, performing fusion processing on the user characteristics of the user in the user equipment login relation diagram and the user characteristics of the user in the user equipment verification relation diagram to obtain user fusion characteristics, and performing clustering processing on the user fusion characteristics corresponding to each user.

The device class attributes are obtained by, for each device, performing fusion processing on the device characteristics of the device in the user equipment login relation diagram and the device characteristics of the device in the user equipment verification relation diagram to obtain device fusion characteristics, and performing clustering processing on the device fusion characteristics corresponding to each device.
In combination with the above analysis, the user equipment relationship graph may include a user equipment login relationship graph and a user equipment verification relationship graph, so when the cloud server performs clustering, the features in the two relationship graphs may be combined. Taking the clustering processing by which the cloud server obtains the user class attributes as an example:

There are multiple users. For each user, the user equipment login relation diagram may include the user characteristics of the user, and the user equipment verification relation diagram may also include the user characteristics of the user. The cloud server then performs fusion processing on the user characteristics of the user in the user equipment login relation diagram and the user characteristics in the user equipment verification relation diagram to obtain the user fusion characteristics of the user, and so on, to obtain the user fusion characteristics corresponding to each user. The cloud server performs clustering processing on the user fusion characteristics, such as K-means clustering, to obtain the user class attributes.
Taking the clustering processing by which the cloud server obtains the device class attributes as an example:

There are multiple devices. For each device, the user equipment login relation diagram may include the device characteristics of the device, and the user equipment verification relation diagram may also include the device characteristics of the device. The cloud server then performs fusion processing on the device characteristics of the device in the user equipment login relation diagram and the device characteristics in the user equipment verification relation diagram to obtain the device fusion characteristics of the device, and so on, to obtain the device fusion characteristics corresponding to each device. The cloud server performs clustering processing on the device fusion characteristics, such as K-means clustering, to obtain the device class attributes.
It should be noted that, for any user, it is possible that the user equipment login relationship diagram includes the user characteristics of the user while the user equipment verification relationship diagram does not; in that case, the user fusion characteristics of the user are the user characteristics of the user in the user equipment login relationship diagram. Conversely, if the user equipment login relation diagram does not include the user characteristics of the user and the user equipment verification relation diagram does, the user fusion characteristics of the user are the user characteristics of the user in the user equipment verification relationship diagram.

Similarly, for any device, it is possible that the user equipment login relation diagram includes the device characteristics of the device while the user equipment verification relation diagram does not; in that case, the device fusion characteristics of the device are the device characteristics of the device in the user equipment login relation diagram. Conversely, if the user equipment login relation diagram does not include the device characteristics of the device and the user equipment verification relation diagram does, the device fusion characteristics of the device are the device characteristics of the device in the user equipment verification relation diagram.
That is, in this embodiment, for the case that different relationship graphs include user features of the same user and device features of the same device, before performing clustering processing, the cloud server performs fusion processing on the user features of the same user and performs fusion processing on the device features of the same device, so that the user fusion features and the device fusion features have higher comprehensiveness, thereby improving the effectiveness and accuracy of clustering processing.
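The fusion-with-fallback behavior described above can be sketched as follows. Element-wise averaging is an assumed fusion operator; the patent does not fix one.

```python
# Sketch of fusing a user's features from the login and verification graphs
# before clustering. Element-wise averaging is an assumed fusion operator
# (the patent does not fix one); when a user appears in only one graph, the
# fusion falls back to that graph's features, as described above.
def fuse(login_feat, verify_feat):
    if login_feat is None:
        return verify_feat
    if verify_feat is None:
        return login_feat
    return [(a + b) / 2 for a, b in zip(login_feat, verify_feat)]

login_graph = {"user:1": [0.2, 0.4], "user:2": [0.8, 0.6]}
verify_graph = {"user:1": [0.4, 0.6]}  # user:2 only appears in the login graph

fused = {
    uid: fuse(login_graph.get(uid), verify_graph.get(uid))
    for uid in set(login_graph) | set(verify_graph)
}
# `fused` would then be clustered (e.g. K-means) to obtain class attributes
```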
The above examples illustrate how a user device relationship graph may be constructed, and in the case where the user device relationship graph includes a user device login relationship graph and a user device authentication relationship graph, the cloud server may construct the user device login relationship graph and the user device authentication relationship graph from sample data.
For example, the usage behavior information may include usage categories including login usage categories and verification usage categories. The login use category may be understood as that the user uses the device based on the login requirement, and the authentication use category may be understood as that the user uses the device based on the authentication requirement.
Correspondingly, the cloud server can construct a user equipment login relation diagram based on the login use category, such as the use behavior information of the edge characterization login use category used for connecting the user node and the device node in the user equipment login relation diagram. The cloud server may construct a user device verification relationship graph based on the verification usage categories, such as usage behavior information of the edge characterization verification usage categories in the user device verification relationship graph for connecting the user node and the device node.
In combination with the above analysis, the usage behavior information has a usage time. Taking the login usage category as an example, if the usage time of device 2 by user 1 indicates that user 1 performed a login operation on device 2 within a preset duration (for example, Q days, where Q is a positive integer greater than or equal to 1), then the cloud server constructs an edge for connecting user node 1 and device node 2.

Taking the verification usage category as an example, if the usage time of device 2 by user 1 indicates that user 1 performed a verification operation on device 2 within a preset duration (for example, Q days, where Q is a positive integer greater than or equal to 1), the cloud server constructs an edge for connecting user node 1 and device node 2.
S303: and carrying out attack risk prediction processing on the target image to obtain image attack risk information.
The image attack risk information may be understood as related information that the cloud server determines that the target user is an aggressive user from the dimension of the target image, for example, the image attack risk information may represent a likelihood or a confidence that the target user is for attack.
As can be seen from the above description of the related art, the attack risk of a user may also be determined in an image-based manner in the related art. Therefore, this embodiment does not limit the implementation of obtaining the image attack risk information; for example, a living body detection method based on multi-mode face images in the related art may be used, or a living body detection method based on single-mode face images in the related art may be used (the present method is a further improvement upon such methods, so they are not described in detail in this disclosure; reference may be made to the prior art).
S304: and determining a living body category according to the image attack risk information, the user attack risk information and the equipment attack risk information, wherein the living body category is living body or attack.
Compared with the embodiment shown in fig. 2, in this embodiment, the cloud server may further introduce image attack risk information to combine the image attack risk information on the basis of the user attack risk information and the device attack risk information, so as to determine the living body category based on the risk information of three dimensions, thereby improving the effectiveness and accuracy of the living body category.
In some embodiments, the image attack risk information includes an image attack probability, the user attack risk information includes a number of user attacks, and the device attack risk information includes a number of device attacks; s304 may include the steps of:
a first step of: and calculating the total attack times according to the user attack times and the equipment attack times.
For example, the image attack risk information may be represented by an image attack probability, and the user attack risk information and the device attack risk information may be represented by attack times, where the total attack times is a sum of the user attack times and the device attack times.
And a second step of: and determining the living body category according to the total attack times and the image attack probability.
The cloud server can convert the total attack times into attack probabilities so as to determine living body categories by combining the image attack probabilities and the converted attack probabilities. For example, the sum of the image attack probability and the converted attack probability may be calculated to obtain a total attack probability, so that the living body category is determined to be an attack in the case of a large total attack probability, and the living body category is determined to be a living body in the case of a small total attack probability.
Illustratively, the second step may include the sub-steps of:
a first substep: and determining the predicted attack probability corresponding to the total attack times.
Regarding the conversion between the total number of attacks and the predicted attack probability, it can be determined based on a formula. For example, the cloud server may divide the total number of attacks by a preset conversion value, thereby obtaining the predicted attack probability.
Similarly, the preset conversion value can be determined based on requirements, history records, tests and the like. For example, if the total number of attacks is n and the preset conversion value is 10, the predicted attack probability = n/10.
A second substep: and calculating the total attack probability according to the predicted attack probability and the image attack probability.
Illustratively, if the image attack probability is p, in combination with the above example, the attack total probability s=p+n/10.
A third substep: if the total probability of attack reaches a preset threshold, the living body category is determined as attack, and if the total probability of attack does not reach the preset threshold, the living body category is determined as living body.
Similarly, the preset threshold may be determined based on demand, history, and experimentation. For example, if the preset threshold is T, if s reaches T (i.e., s is greater than or equal to T), the living body type is an attack; on the other hand, if s does not reach T (i.e., s is less than T), the living organism type is living organism.
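The three substeps above can be sketched as a single decision function. The conversion value 10 matches the example in the text; the threshold T = 0.8 is an assumed example value.

```python
# Sketch of the decision rule described above: convert the total attack count
# to a probability via a preset conversion value, add the image attack
# probability, and threshold the sum. The conversion value 10 matches the
# example in the text; the threshold T = 0.8 is an assumed example.
CONVERSION_VALUE = 10
THRESHOLD_T = 0.8

def decide_liveness(image_attack_prob, user_attacks, device_attacks):
    n = user_attacks + device_attacks       # first step: total attack count
    predicted_prob = n / CONVERSION_VALUE   # first substep: n -> probability
    s = image_attack_prob + predicted_prob  # second substep: total probability
    # third substep: compare against the preset threshold
    return "attack" if s >= THRESHOLD_T else "living"

result_low = decide_liveness(0.3, 2, 1)   # s = 0.3 + 3/10 = 0.6 < 0.8
result_high = decide_liveness(0.5, 3, 2)  # s = 0.5 + 5/10 = 1.0 >= 0.8
```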
It should be noted that the image attack probability, the user attack times and the device attack times in this embodiment may also be other types of parameters, for example, their respective corresponding risk levels or confidence degrees.

Taking risk levels as an example of such parameters, the cloud server may determine the living body category based on each risk level, for instance by adding the risk levels to obtain an overall risk level and determining the living body category based on the overall risk level. For example, if the overall risk level is less than a preset level threshold, the living body category is living; conversely, if the overall risk level is greater than or equal to the preset level threshold, the living body category is attack.
In combination with the above analysis, before performing living body detection, the cloud server may be trained in advance to obtain the user risk perception model and the device risk perception model, so as to complete living body detection based on them. Therefore, the user risk perception model and the device risk perception model may be referred to collectively as a living body detection model; that is, the living body detection model includes both the user risk perception model and the device risk perception model.
In other embodiments, the living body detection model may further include other models, such as a multi-mode face image perception model, so as to further determine the living body category based on the image attack risk information determined by the multi-mode face image perception model during living body detection.
Correspondingly, based on the above technical conception, the present disclosure further provides a method for training a living body detection model. Since the multi-mode face image perception model adopts a training manner from the related art, this embodiment mainly takes a living body detection model including a user risk perception model and a device risk perception model as an example to exemplarily describe the training of the living body detection model.
Referring to fig. 4, fig. 4 is a schematic diagram of a training method of a living body detection model according to an embodiment of the disclosure. As shown in fig. 4, the method includes:
s401: and acquiring the user characteristics and the device characteristics from a preset user equipment relation diagram.
The execution subject of the present embodiment may be a training device of the living body detection model (hereinafter simply referred to as training device), and the training device may be the same device as the living body detection device or may be a device different from the living body detection device.
If the training device is different from the living body detection device, the training device and the living body detection device can be connected through a communication link, after the training device trains to obtain the living body detection model, the living body detection model can be transmitted to the living body detection device through the communication link, and accordingly, the living body detection device can realize living body detection based on the living body detection model.
It should be understood that the present embodiment is not limited to the same or similar technical features as those of the above embodiment. For example, for descriptions of the user equipment relationship diagram, the user features, and the device features, reference may be made to the above examples, which are not repeated herein.
In some embodiments, the user device relationship graph includes user nodes, device nodes, and edges between the user nodes and the device nodes, the user nodes include the user features, the device nodes include the device features, and the edges are used for characterizing usage behavior information of the user using the device.
The user characteristics and the equipment characteristics are respectively obtained by optimizing the adjacent nodes in the user equipment relation diagram, wherein the adjacent nodes are user nodes and equipment nodes which are connected through edges.
Because there is a use relationship between two adjacent nodes, the features corresponding to the two adjacent nodes have a certain correlation. In this embodiment, the user features and the device features are obtained by the training device through optimization processing of the adjacent nodes, which is equivalent to considering the correlation between the two adjacent nodes when obtaining the user features and the device features, so that the user features and the device features have higher accuracy and reliability.
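As an illustrative sketch (not the patent's implementation), the bipartite user equipment relationship graph described above can be represented with simple Python structures; all class, method, and field names here are hypothetical:

```python
# Minimal sketch of the bipartite user-device relationship graph described
# above, using plain dict-based storage. The class and method names are
# hypothetical, not taken from the patent.
class UserDeviceGraph:
    def __init__(self):
        self.user_features = {}    # user node -> user feature vector
        self.device_features = {}  # device node -> device feature vector
        self.edges = []            # (user, device, usage behavior info)

    def add_user(self, user_id, features):
        self.user_features[user_id] = features

    def add_device(self, device_id, features):
        self.device_features[device_id] = features

    def add_usage(self, user_id, device_id, usage_info):
        # An edge characterizes the usage behavior of a user using a device.
        self.edges.append((user_id, device_id, usage_info))

    def adjacent_devices(self, user_id):
        # Adjacent nodes: user and device nodes connected by an edge.
        return [d for u, d, _ in self.edges if u == user_id]


graph = UserDeviceGraph()
graph.add_user("u1", [0.1, 0.2])
graph.add_device("d1", [0.3, 0.4])
graph.add_usage("u1", "d1", {"logins": 5})
```

With this representation, the "adjacent nodes" used in the optimization processing are exactly the user and device nodes joined by an edge.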
In some embodiments, the user features and the device features are determined based on a multivariate map feature prediction model, which is obtained by predicting respective corresponding features of each node based on a user device relationship map and training according to feature difference information between adjacent nodes.
For example, the training device may train a multivariate map feature prediction model in advance based on the user equipment relationship graph; based on the difference information between neighboring nodes, this model can learn the ability to optimize the user features and the equipment features.
According to the analysis, the user equipment relationship graph comprises a user equipment login relationship graph and a user equipment verification relationship graph, and for each relationship graph, the training device can respectively construct a multivariate graph feature prediction model corresponding to the relationship graph.
Taking the training device training the multivariate map feature prediction model corresponding to the user equipment login relationship graph as an example:
the training device extracts, from the user equipment login relationship graph, the user characteristics included in each user node (called original user characteristics for convenience of distinction) and the equipment characteristics included in each equipment node (called original equipment characteristics for convenience of distinction). The training device inputs each original user characteristic and each original equipment characteristic into a preset network model, and outputs the predicted user characteristic corresponding to each original user characteristic and the predicted equipment characteristic corresponding to each original equipment characteristic. The training device then determines a loss function according to characteristic difference information (namely consistency) between the predicted characteristics (including the predicted user characteristics and the predicted equipment characteristics) of adjacent nodes, and adjusts the preset network model based on the loss function, so as to obtain the multivariate map feature prediction model corresponding to the user equipment login relationship graph.
Similarly, the type, architecture, parameters, and the like of the preset network model are not limited in this embodiment, and may be determined by the training device based on the requirements, the history, the test, and the like. For example, the predetermined network model may be a multi-layer perceptron (Multilayer Perceptron, MLP), and in particular may be a 3-layer MLP.
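The quantities involved in this training step can be sketched as follows. The 3-layer MLP, the shapes, the initialization, and the squared-difference form of the adjacent-node consistency loss are illustrative assumptions, not the patent's exact formulation:

```python
import numpy as np

# Hedged sketch: a 3-layer MLP forward pass over original node features,
# and a loss built from feature difference information between adjacent
# nodes (a user node and a device node joined by an edge).
rng = np.random.default_rng(0)

def init_params(d_in, d_hid, d_out):
    return {"W1": rng.normal(0, 0.1, (d_in, d_hid)),
            "W2": rng.normal(0, 0.1, (d_hid, d_hid)),
            "W3": rng.normal(0, 0.1, (d_hid, d_out))}

def mlp_forward(x, params):
    # 3-layer MLP with ReLU activations, per the example in the text.
    h = np.maximum(0.0, x @ params["W1"])
    h = np.maximum(0.0, h @ params["W2"])
    return h @ params["W3"]

def neighbor_consistency_loss(user_pred, device_pred, edges):
    # Predicted features of adjacent nodes should be consistent;
    # penalize their squared difference.
    diffs = [user_pred[u] - device_pred[d] for u, d in edges]
    return float(np.mean([np.sum(v ** 2) for v in diffs]))

params = init_params(d_in=8, d_hid=16, d_out=4)
user_feats = {"u1": rng.normal(size=8)}      # original user features
device_feats = {"d1": rng.normal(size=8)}    # original device features
user_pred = {u: mlp_forward(f, params) for u, f in user_feats.items()}
device_pred = {d: mlp_forward(f, params) for d, f in device_feats.items()}
loss = neighbor_consistency_loss(user_pred, device_pred, [("u1", "d1")])
```

In an actual implementation this loss would be minimized by backpropagation over the preset network model's parameters.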
Regarding the implementation principle of the training device training the multivariate map feature prediction model corresponding to the user equipment verification relationship graph, reference may be made to the implementation principle of training the multivariate map feature prediction model corresponding to the user equipment login relationship graph, which is not described herein again.
In this embodiment, the training device obtains the optimized user features and the device features by training the multivariate map feature prediction model, so that the optimization efficiency can be improved.
S402: and training according to the user characteristics to obtain a user risk perception model, wherein the user risk perception model is used for predicting user attack risk information of the target user in the living body detection task.
In some embodiments, S402 may include the steps of:
a first step of: and clustering the user characteristics to obtain the attributes of each user class.
For example, in combination with the above analysis example, if the sample data includes user features corresponding to a users, the training device may perform clustering processing on the user features corresponding to a users to obtain N (N is a positive integer greater than or equal to 1) classes, and obtain user class attributes corresponding to the user features of each user.
According to the analysis, the user equipment relation diagram comprises a user equipment login relation diagram and a user equipment verification relation diagram, and the user characteristics are obtained by the training device through fusion processing of the user characteristics in the user equipment login relation diagram and the user characteristics in the user equipment verification relation diagram.
That is, the clustering process takes the same user as its basis: the training device performs fusion processing on the user features of the same user in the user equipment login relationship diagram and the user features of that user in the user equipment verification relationship diagram, so as to obtain the fused feature corresponding to each user, and then performs the clustering process.
In this embodiment, the training device performs fusion processing on the user features in different relationship graphs by taking the same user as the basis of clustering processing, so that the features of the fusion processing have higher comprehensiveness, thereby improving the effectiveness and reliability of the clustering processing.
For example, taking a user equipment relationship diagram including a user equipment login relationship diagram and a user equipment verification relationship diagram, a clustering method of clustering is a K-means clustering method as an example, and the following is explained:
the user features include user features predicted based on a multivariate map feature prediction model corresponding to a user device login relationship map (for convenience of distinction, the user features may be referred to as login user features), and user features predicted based on a multivariate map feature prediction model corresponding to a user device verification relationship map (for convenience of distinction, the user features may be referred to as verification user features).
For each user among the A users, the training device determines the user characteristic corresponding to the user from the login user characteristics, and determines the user characteristic corresponding to the user from the verification user characteristics. The training device then performs fusion processing on these two user characteristics, where the fusion processing can be understood as series (concatenation) processing followed by normalization processing, so as to obtain the user fusion characteristic corresponding to the user. In this way, the user fusion characteristics corresponding to each of the A users, namely A user fusion characteristics, are obtained.
The training device performs clustering processing on the A user fusion features based on a K-means clustering method to obtain N classes, so that the user class attribute corresponding to each user fusion feature is obtained.
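The fusion-then-cluster step above can be sketched as follows. A plain K-means loop is written out for self-containment; in practice a library implementation (e.g. scikit-learn's KMeans) would typically be used, and the toy dimensions are assumptions:

```python
import numpy as np

# Hedged sketch: concatenate the login-graph and verification-graph
# features of the same user, normalize, and cluster with K-means.
def fuse(login_feat, verify_feat):
    # Fusion = series (concatenation) processing then normalization.
    v = np.concatenate([login_feat, verify_feat])
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

def kmeans(features, n_clusters, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), n_clusters, replace=False)]
    for _ in range(n_iter):
        # Assign each fused feature to its nearest cluster center.
        dists = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = np.argmin(dists, axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = features[labels == k].mean(axis=0)
    return labels

rng = np.random.default_rng(1)
login_feats = rng.normal(size=(10, 4))    # A = 10 users, toy dimensions
verify_feats = rng.normal(size=(10, 4))
fused = np.stack([fuse(l, v) for l, v in zip(login_feats, verify_feats)])
user_class_labels = kmeans(fused, n_clusters=2)  # N = 2 user class attributes
```

Each label then serves as the user class attribute of the corresponding user fusion feature.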
And a second step of: and training a risk perception model corresponding to the user class attribute according to the user characteristics of the user class attribute aiming at each user class attribute.
The user risk perception model comprises risk perception models corresponding to all user attributes.
In this embodiment, the training device performs clustering processing to obtain each user attribute, and then trains the risk perception model corresponding to each user attribute, so that the risk perception model is trained with high pertinence, and the effectiveness of the risk perception model is improved.
In some embodiments, S402 may include the steps of:
a first step of: and inputting the user characteristics into the first network model, and outputting predicted user attack risk information.
Similarly, the type, architecture, parameters, and the like of the first network model are not limited in this embodiment, and may be determined by the training device based on requirements, history, tests, and the like. For example, the first network model may be an MLP, and in particular may be a 3-layer MLP.
And a second step of: and calculating a first prediction loss between the predicted user attack risk information and a preset user attack true value, and generating a user risk perception model according to the first prediction loss.
The preset user attack truth value is labeled in advance.
As can be seen from the above examples, the predicted user attack risk information may be the predicted number of times of user attack, the predicted user attack probability, the predicted user attack level, and so on.
Taking predicted user attack risk information as an example of predicted user attack times, presetting a user attack truth value as a user attack time truth value, and enabling a first prediction loss to be difference information between the predicted user attack times and the user attack time truth value, wherein the training device carries out iterative optimization on the first network model based on the first prediction loss until the first network model converges, so that a user risk perception model is obtained.
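The first prediction loss and the iterative optimization can be sketched as follows. A linear model stands in for the first network model, and mean squared error stands in for the "difference information" between predicted and true attack counts; both are illustrative choices, not the patent's exact design:

```python
import numpy as np

# Hedged sketch of the first prediction loss and iterative optimization.
def first_prediction_loss(predicted_counts, true_counts):
    # Difference information between predicted attack counts and the
    # pre-labeled attack-count truth values, taken here as MSE.
    pred = np.asarray(predicted_counts, dtype=float)
    true = np.asarray(true_counts, dtype=float)
    return float(np.mean((pred - true) ** 2))

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 4))                 # user features (toy)
w_true = np.array([1.0, -2.0, 0.5, 0.0])
y = X @ w_true                               # labeled attack-count truth values
w = np.zeros(4)                              # linear stand-in for the model
for _ in range(500):                         # iterate until converged
    grad = 2.0 * X.T @ (X @ w - y) / len(X)  # gradient of the MSE loss
    w -= 0.05 * grad
final_loss = first_prediction_loss(X @ w, y)
```

The converged model plays the role of the user risk perception model for this variant.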
S403: and training according to the equipment characteristics to obtain an equipment risk perception model, wherein the equipment risk perception model is used for predicting equipment attack risk information of target equipment in the task.
The user attack risk information and the equipment attack risk information are used for determining the living body category of the task; the living body detection model comprises a user risk perception model and a device risk perception model.
In some embodiments, S403 may include the steps of:
a first step of: and clustering the equipment characteristics to obtain the equipment attributes.
For example, in combination with the above analysis example, if the sample data includes the device features corresponding to the B devices, the training apparatus may perform clustering processing on the device features corresponding to the B devices to obtain M (M is a positive integer greater than or equal to 1) classes, and obtain device class attributes corresponding to the device features of each device.
According to the analysis, the user equipment relation diagram comprises a user equipment login relation diagram and a user equipment verification relation diagram, and the equipment characteristics are obtained by fusion processing of the equipment characteristics in the user equipment login relation diagram and the equipment characteristics in the user equipment verification relation diagram.
That is, the clustering process takes the same device as its basis: the training device performs fusion processing on the device features of the same device in the user device login relationship diagram and the device features of that device in the user device verification relationship diagram, so as to obtain the fused feature corresponding to each device, and then performs the clustering process.
In this embodiment, the training device performs fusion processing on the device features in different relationship graphs by taking the same device as the basis of clustering processing, so that the features of the fusion processing have higher comprehensiveness, thereby improving the effectiveness and reliability of the clustering processing.
For example, taking a user equipment relationship diagram including a user equipment login relationship diagram and a user equipment verification relationship diagram, a clustering method of clustering is a K-means clustering method as an example, and the following is explained:
the device features include device features predicted based on a multivariate map feature prediction model corresponding to a user device login relationship map (for convenience of distinction, the device features may be referred to as login device features), and device features predicted based on a multivariate map feature prediction model corresponding to a user device verification relationship map (for convenience of distinction, the device features may be referred to as verification device features).
For each device among the B devices, the training device determines the device characteristic corresponding to the device from the login device characteristics, and determines the device characteristic corresponding to the device from the verification device characteristics. The training device then performs fusion processing on these two device characteristics, where the fusion processing can be understood as series (concatenation) processing followed by normalization processing, so as to obtain the device fusion characteristic corresponding to the device. In this way, the device fusion characteristics corresponding to each of the B devices, namely B device fusion characteristics, are obtained.
The training device performs clustering processing on the B device fusion features based on a K-means clustering method to obtain M classes, so that the device class attribute corresponding to each device fusion feature is obtained.
And a second step of: and training a risk perception model corresponding to the equipment attribute according to the equipment characteristics of the equipment attribute aiming at each equipment attribute.
The equipment risk perception model comprises risk perception models corresponding to equipment attributes.
In this embodiment, the training device performs clustering processing to obtain each equipment attribute, and then trains the risk perception model corresponding to each equipment attribute, so that the risk perception model is trained with stronger pertinence, and the effectiveness of the risk perception model is improved.
In some embodiments, S403 may include the steps of:
a first step of: and inputting the device characteristics into a second network model, and outputting predicted device attack risk information.
Similarly, the type, architecture, parameters, and the like of the second network model are not limited in this embodiment, and may be determined by the training device based on requirements, history, tests, and the like. For example, the second network model may be an MLP, and in particular may be a 3-layer MLP.
And a second step of: and calculating a second prediction loss between the predicted equipment attack risk information and the preset equipment attack true value, and generating an equipment risk perception model according to the second prediction loss.
The preset equipment attack true value is marked in advance.
As can be seen from the above examples, the predicted device attack risk information may be the predicted device attack number, the predicted device attack probability, the predicted device attack level, or the like.
Taking predicted equipment attack risk information as an example of predicted equipment attack times, the preset equipment attack truth value is an equipment attack time truth value, and the second prediction loss is difference information between the predicted equipment attack times and the equipment attack time truth value. The training device performs iterative optimization on the second network model based on the second prediction loss until the second network model converges, so that a device risk perception model is obtained.
According to the technical concept, the present disclosure also provides a living body detection apparatus.
Referring to fig. 5, fig. 5 is a schematic diagram of a living body detection apparatus according to an embodiment of the disclosure, and as shown in fig. 5, a living body detection apparatus 500 includes:
an obtaining unit 501, configured to obtain user attack risk information of a target user that triggers a task and device attack risk information of a target device that triggers the task in response to the task being triggered.
And the determining unit 502 is configured to determine a living body category of the task according to the user attack risk information and the device attack risk information, where the living body category is a living body or an attack.
In some embodiments, the target user has a target user characteristic; the user attack risk information is determined based on the target user characteristics.
The target device has a target device feature; the device attack risk information is determined based on the target device characteristics.
In some embodiments, the target user characteristic and the target device characteristic are determined from a pre-constructed user device relationship graph.
The user equipment relationship diagram comprises user nodes and equipment nodes, wherein the user nodes comprise user characteristics, and the equipment nodes comprise equipment characteristics; the user node comprises a node of the target user, and the equipment node comprises a node of the target equipment; the user characteristics include the target user characteristics and the device characteristics include the target device characteristics.
In some embodiments, the user device relationship graph further comprises edges between the user node and the device node; the user equipment relationship graph is constructed based on the acquired sample data; the sample data comprises user characteristics of a user, device characteristics of a device and using behavior information of the user using the device.
Wherein the user node is built based on the user, the device node is built based on the device, and the edge is built based on the usage behavior information.
In some embodiments, the task has a type attribute, which is a login type or a verification type; the user equipment relationship graph comprises a user equipment login relationship graph and a user equipment verification relationship graph.
And if the type attribute is the login type, determining the target user characteristic and the target equipment characteristic from the user equipment login relation diagram.
And if the type attribute is the verification type, determining the target user characteristic and the target equipment characteristic from the user equipment verification relation diagram.
In some embodiments, the user attack risk information is obtained by performing attack risk prediction processing on the target user feature based on a pre-trained user risk perception model.
The equipment attack risk information is obtained by carrying out attack risk prediction processing on the target equipment characteristics based on a pre-trained equipment risk perception model.
In some embodiments, the target user feature has a target user class attribute, and the target device feature has a target device class attribute; the user risk perception model comprises risk perception models corresponding to user class attributes respectively, and the user class attributes comprise the target user class attributes; the equipment risk perception model comprises risk perception models corresponding to equipment attributes respectively, and the equipment attributes comprise the target equipment attributes.
The user attack risk information is obtained by carrying out attack risk prediction processing on the target user characteristics based on a risk perception model corresponding to the target user attribute in the user risk perception model.
The equipment attack risk information is obtained by carrying out attack risk prediction processing on the target equipment characteristics based on a risk perception model corresponding to the target equipment attribute in the equipment risk perception model.
In some embodiments, the user risk perception model is obtained by performing clustering processing on the user features to obtain user attributes, and training the user features corresponding to the user attributes.
The equipment risk perception model is obtained by clustering the equipment characteristics to obtain equipment attributes and training the equipment characteristics corresponding to the equipment attributes.
In some embodiments, the user features include user features in the user device login relationship diagram and user features in the user device authentication relationship diagram; the device features include device features in the user device login relationship diagram and device features in the user device verification relationship diagram.
The user class attributes are obtained by performing, for each user, fusion processing on the user characteristics of the user in the user equipment login relation diagram and the user characteristics of the user in the user equipment verification relation diagram to obtain user fusion characteristics, and performing clustering processing on the user fusion characteristics corresponding to each user.
The device class attributes are obtained by performing, for each device, fusion processing on the device characteristics of the device in the user device login relation diagram and the device characteristics of the device in the user device verification relation diagram to obtain device fusion characteristics, and performing clustering processing on the device fusion characteristics corresponding to each device.
In some embodiments, as shown in fig. 5, the apparatus 500 further comprises:
and the prediction unit 503 is configured to perform attack risk prediction processing on the obtained target image triggering the task, so as to obtain image attack risk information.
And the determining unit 502 is configured to determine the living body category according to the image attack risk information, the user attack risk information, and the device attack risk information.
In some embodiments, the image attack risk information includes an image attack probability, the user attack risk information includes a number of user attacks, and the device attack risk information includes a number of device attacks; the determining unit 502 includes:
And the calculating subunit 5021 is configured to calculate the total attack frequency according to the user attack frequency and the device attack frequency.
A determining subunit 5022, configured to determine the living body category according to the total attack number and the image attack probability.
In some embodiments, the determining subunit 5022 comprises:
and the first determining module is used for determining the predicted attack probability corresponding to the total attack times.
And the calculation module is used for calculating the total attack probability according to the predicted attack probability and the image attack probability.
And the second determining module is used for determining the living body type as the attack if the total probability of the attack reaches a preset threshold value, and determining the living body type as the living body if the total probability of the attack does not reach the preset threshold value.
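The decision flow performed by these modules can be sketched as follows. The mapping from total attack count to a predicted attack probability and the equal-weight combination with the image attack probability are assumptions; the text fixes neither:

```python
# Hypothetical sketch of the decision flow: total attack count ->
# predicted attack probability, combined with the image attack
# probability and compared against a preset threshold.
def determine_living_category(user_attacks, device_attacks,
                              image_attack_prob, threshold=0.5):
    total_attacks = user_attacks + device_attacks
    # Saturating mapping from attack count to probability (illustrative).
    predicted_prob = total_attacks / (total_attacks + 1.0)
    # Equal-weight combination of the two probabilities (illustrative).
    total_prob = 0.5 * predicted_prob + 0.5 * image_attack_prob
    return "attack" if total_prob >= threshold else "living"

low_risk = determine_living_category(0, 0, 0.1)
high_risk = determine_living_category(5, 3, 0.9)
```

If the total attack probability reaches the threshold the category is attack; otherwise it is living.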
According to the technical conception, the present disclosure further provides a training device for the living body detection model.
Referring to fig. 6, fig. 6 is a schematic diagram of a training apparatus for a living body detection model according to an embodiment of the disclosure, and as shown in fig. 6, a training apparatus 600 for a living body detection model includes:
an obtaining unit 601, configured to obtain a user feature and a device feature from a preset user device relationship diagram.
The first training unit 602 is configured to train to obtain a user risk perception model according to the user characteristics, where the user risk perception model is used to predict user attack risk information of a target user in a living body detection task.
And the second training unit 603 is configured to train to obtain a device risk perception model according to the device feature, where the device risk perception model is used to predict device attack risk information of the target device in the task.
The user attack risk information and the equipment attack risk information are used for determining the living body category of the task; the living body detection model includes the user risk perception model and the device risk perception model.
In some embodiments, the user equipment relationship graph includes a user node, a device node, and an edge between the user node and the device node, the user node includes the user feature, the device node includes the device feature, and the edge is used for characterizing usage behavior information of a user using a device.
The user characteristics and the equipment characteristics are respectively obtained by optimizing adjacent nodes in the user equipment relation diagram, wherein the adjacent nodes are user nodes and equipment nodes which are connected through the edges.
In some embodiments, the user features and the device features are determined based on the multivariate map feature prediction model, which is obtained by predicting features corresponding to each node based on the user device relationship map and training according to feature difference information between the neighboring nodes.
In some embodiments, the first training unit 602 includes:
and the first clustering subunit 6021 is configured to perform clustering processing on the user features to obtain each user attribute.
The first training subunit 6022 is configured to train, for each user class attribute, a risk perception model corresponding to the user class attribute according to the user characteristics of the user class attribute.
The user risk perception model comprises risk perception models corresponding to all user attributes.
In some embodiments, the user device relationship graph includes a user device login relationship graph and a user device authentication relationship graph; the user characteristics are obtained by fusing the user characteristics in the user equipment login relation diagram and the user characteristics in the user equipment verification relation diagram.
In some embodiments, the second training unit 603 includes:
And a second clustering subunit 6031, configured to perform clustering processing on the device features, to obtain each device attribute.
The second training subunit 6032 is configured to train, for each device class attribute, a risk perception model corresponding to the device class attribute according to the device feature of the device class attribute.
The equipment risk perception model comprises risk perception models corresponding to equipment attributes.
In some embodiments, the user device relationship graph includes a user device login relationship graph and a user device authentication relationship graph; the device features are the features obtained by fusing the device features in the user equipment login relation diagram and the device features in the user equipment verification relation diagram.
In some embodiments, the first training unit 602 includes:
a first input subunit 6023, configured to input the user characteristic to a first network model, and output predicted user attack risk information.
A first calculating subunit 6024 is configured to calculate a first predicted loss between the predicted user attack risk information and a preset user attack truth value.
A first generation subunit 6025 is configured to generate the user risk perception model according to the first prediction loss.
In some embodiments, the second training unit 603 includes:
and a second input subunit 6033, configured to input the device feature to a second network model, and output predicted device attack risk information.
A second calculating subunit 6034 is configured to calculate a second prediction loss between the predicted equipment attack risk information and a preset equipment attack truth value.
A second generation subunit 6035 is configured to generate the device risk perception model according to the second prediction loss.
According to the technical idea described above, the present disclosure further provides a processor-readable storage medium storing a computer program for causing the processor to execute the living body detection method according to any one of the embodiments described above; alternatively, the computer program is configured to cause the processor to perform the training method of the living body detection model according to any one of the embodiments described above.
According to the technical idea described above, the present disclosure further provides a computer program product comprising a computer program which, when executed by a processor, implements the living body detection method as described in any of the embodiments above; alternatively, a training method of the living body detection model as described in any one of the embodiments above is implemented.
According to the technical concept described above, the present disclosure further provides a living body detection system including:
at least one memory including at least one instruction set;
at least one processor in communication with the at least one memory;
wherein the living body detection method as described in any one of the embodiments above is implemented when the at least one processor executes the at least one set of instructions.
According to the technical concept, the present disclosure further provides a training system of a living body detection model, including:
at least one memory including at least one instruction set;
at least one processor in communication with the at least one memory;
wherein the at least one processor, when executing the at least one set of instructions, implements the training method of the living body detection model as described in any of the embodiments above.
According to the technical concept, the present disclosure further provides an electronic device, including: a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executes the computer-executable instructions stored by the memory to implement the living body detection method as described in any of the embodiments above; alternatively, to implement the training method of the living body detection model as described in any of the embodiments above.
Fig. 7 is a hardware configuration diagram of an electronic device 700 according to an embodiment of the disclosure. The electronic device 700 may perform the living body detection method or the training method of the living body detection model as described in any of the above embodiments.
Taking the case where the living body detection of the embodiment of the present disclosure is applied to the application scenario shown in fig. 1 as an example, the electronic device 700 is described as follows:
in the case where the living body detection method as described in any of the above embodiments is performed on the client 102, the electronic device 700 may be the client 102. In the case where the living body detection method described in any of the above embodiments is executed on the server 103, the electronic device 700 may be the server 103. In the case where part of the method described in any of the above embodiments is executed on the client 102 and part is executed on the server 103, the electronic device 700 may include the client 102 and the server 103.
As shown in fig. 7, an electronic device 700 may include at least one storage medium 701 and at least one processor 702. In some embodiments, the electronic device 700 may also include a communication port 703 and an internal communication bus 704. Meanwhile, the electronic device 700 may further include an Input/Output (I/O) component 705.
Internal communication bus 704 may connect the different system components including storage medium 701, processor 702, and communication ports 703.
The I/O component 705 supports input/output between the electronic device 700 and other components.
The communication port 703 is used for data communication between the electronic device 700 and the outside world, for example, the communication port 703 may be used for data communication between the electronic device 700 and the network 104. The communication port 703 may be a wired communication port or a wireless communication port.
The storage medium 701 may include a data storage device. The data storage device may be a non-transitory storage medium or a transitory storage medium. For example, the data storage device may include one or more of a magnetic disk 7011, a read-only memory (Read-Only Memory, ROM) 7012, or a random access memory (Random Access Memory, RAM) 7013. The storage medium 701 further includes at least one set of instructions stored in the data storage device. The instructions are computer program code that may include programs, routines, objects, components, data structures, procedures, modules, etc. that perform the living body detection methods provided herein.
The at least one processor 702 may be communicatively coupled with the at least one storage medium 701 and the communication port 703 via the internal communication bus 704. The at least one processor 702 is configured to execute the at least one instruction set described above. When the electronic device 700 is running, the at least one processor 702 reads the at least one instruction set and performs the living body detection method provided herein according to the indication of the at least one instruction set. The processor 702 may perform all the steps involved in the living body detection method. The processor 702 may be in the form of one or more processors; in some embodiments, the processor 702 may include one or more hardware processors, such as microcontrollers, microprocessors, reduced instruction set computers (Reduced Instruction Set Computer, RISC), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), application specific instruction set processors (Application Specific Instruction Set Processor, ASIP), central processing units (Central Processing Unit, CPU), graphics processing units (Graphics Processing Unit, GPU), physics processing units (Physics Processing Unit, PPU), microcontroller units, digital signal processors (Digital Signal Processor, DSP), field programmable gate arrays (Field Programmable Gate Array, FPGA), advanced RISC machines (Advanced RISC Machine, ARM), programmable logic devices (Programmable Logic Device, PLD), any circuit or processor capable of performing one or more functions, or the like, or any combination thereof. For illustrative purposes only, only one processor 702 is depicted in the electronic device 700 in this specification.
However, it should be noted that the electronic device 700 may also include multiple processors in this specification, and thus, the operations and/or method steps disclosed in this specification may be performed by one processor as described in this specification, or may be performed jointly by multiple processors. For example, if the processor 702 of the electronic device 700 performs steps a and B in this specification, it should be understood that steps a and B may also be performed by two different processors 702 in combination or separately (e.g., a first processor performs step a, a second processor performs step B, or the first and second processors together perform steps a and B).
Note that the multimodal living body detection model in this embodiment is not directed at the face image of any specific user and cannot reflect the personal information of any specific user. It should be noted that the face images in this embodiment come from a public data set.
In the technical scheme of the disclosure, the related processes of collecting, storing, using, processing, transmitting, providing, disclosing and the like of the personal information of the user comply with the provisions of related laws and regulations, and do not violate public order and good customs.
It will be apparent to those skilled in the art that embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, magnetic disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-executable instructions. These computer-executable instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These processor-executable instructions may also be stored in a processor-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the processor-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These processor-executable instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present disclosure without departing from the spirit or scope of the disclosure. Thus, the present disclosure is intended to include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.
Claims (25)
1. A living body detection method, the method comprising:
in response to a living body detection task being triggered, acquiring user attack risk information of a target user triggering the task and equipment attack risk information of target equipment triggering the task;
and determining the living body category of the task according to the user attack risk information and the equipment attack risk information, wherein the living body category is living body or attack.
2. The method of claim 1, wherein the target user has a target user characteristic; the user attack risk information is determined based on the target user characteristics;
the target device has a target device feature; the device attack risk information is determined based on the target device characteristics.
3. The method of claim 2, wherein the target user characteristic and the target device characteristic are determined from a pre-constructed user device relationship graph;
the user equipment relationship diagram comprises user nodes and equipment nodes, wherein the user nodes comprise user characteristics, and the equipment nodes comprise equipment characteristics; the user node comprises a node of the target user, and the equipment node comprises a node of the target equipment; the user characteristics include the target user characteristics and the device characteristics include the target device characteristics.
4. A method according to claim 3, wherein the user device relationship graph further comprises edges between the user node and the device node; the user equipment relationship graph is constructed based on the acquired sample data; the sample data comprises user characteristics of a user, equipment characteristics of equipment and using behavior information of the user using the equipment;
wherein the user node is built based on the user, the device node is built based on the device, and the edge is built based on the usage behavior information.
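The bipartite user-device relationship graph of claims 3-4 can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the patent's implementation: the dict-based storage and the record field names (`user_feature`, `usage_behavior`, etc.) are hypothetical.

```python
# Hypothetical sketch of claim 4: build a user-device relationship graph
# from sample data. User nodes carry user features, device nodes carry
# device features, and edges record usage behavior information.
def build_relationship_graph(sample_data):
    graph = {"user_nodes": {}, "device_nodes": {}, "edges": []}
    for record in sample_data:
        user_id, device_id = record["user_id"], record["device_id"]
        # Node is built based on the user/device; feature stored on the node.
        graph["user_nodes"].setdefault(user_id, record["user_feature"])
        graph["device_nodes"].setdefault(device_id, record["device_feature"])
        # Edge is built based on the usage behavior information.
        graph["edges"].append((user_id, device_id, record["usage_behavior"]))
    return graph

samples = [
    {"user_id": "u1", "device_id": "d1",
     "user_feature": [0.2, 0.8], "device_feature": [0.5, 0.1],
     "usage_behavior": {"logins": 3}},
    {"user_id": "u1", "device_id": "d2",
     "user_feature": [0.2, 0.8], "device_feature": [0.9, 0.4],
     "usage_behavior": {"logins": 1}},
]
g = build_relationship_graph(samples)
```

One user using two devices yields one user node, two device nodes, and two edges, matching the claim's bipartite structure.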
5. A method according to claim 3, wherein the task has a type attribute, which is a login type or a verification type; the user equipment relationship graph comprises a user equipment login relationship graph and a user equipment verification relationship graph;
if the type attribute is the login type, determining the target user characteristic and the target equipment characteristic from the user equipment login relation diagram;
and if the type attribute is the verification type, determining the target user characteristic and the target equipment characteristic from the user equipment verification relation diagram.
6. The method according to claim 5, wherein the user attack risk information is obtained by performing attack risk prediction processing on the target user feature based on a pre-trained user risk perception model;
the equipment attack risk information is obtained by carrying out attack risk prediction processing on the target equipment characteristics based on a pre-trained equipment risk perception model.
7. The method of claim 6, wherein the target user feature has a target user class attribute and the target equipment feature has a target equipment class attribute; the user risk perception model comprises risk perception models respectively corresponding to user class attributes, and the user class attributes comprise the target user class attribute; the equipment risk perception model comprises risk perception models respectively corresponding to equipment class attributes, and the equipment class attributes comprise the target equipment class attribute;
the user attack risk information is obtained by performing attack risk prediction processing on the target user feature based on the risk perception model corresponding to the target user class attribute in the user risk perception model;
the equipment attack risk information is obtained by performing attack risk prediction processing on the target equipment feature based on the risk perception model corresponding to the target equipment class attribute in the equipment risk perception model.
8. The method according to claim 7, wherein the user risk perception model is obtained by clustering the user features to obtain user attributes and training the user features corresponding to the user attributes;
the equipment risk perception model is obtained by clustering the equipment characteristics to obtain equipment attributes and training the equipment characteristics corresponding to the equipment attributes.
9. The method of claim 8, wherein the user characteristics comprise user characteristics in the user device login relationship diagram and user characteristics in the user device authentication relationship diagram; the device features comprise device features in the user equipment login relation diagram and device features in the user equipment verification relation diagram;
the user type attributes are obtained by carrying out fusion processing on user characteristics of the user in the user equipment login relation diagram and user characteristics of the user in the user equipment verification relation diagram aiming at each user to obtain user fusion characteristics, and carrying out clustering processing on the user fusion characteristics corresponding to each user;
The device type attribute is obtained by carrying out fusion processing on the device characteristics of the device in the user device login relation diagram and the device characteristics of the device in the user device verification relation diagram aiming at each device to obtain device fusion characteristics, and carrying out clustering processing on the device fusion characteristics corresponding to each device.
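The fusion-then-clustering step of claims 8-9 can be sketched as follows. This is a minimal illustration under stated assumptions: the elementwise-mean fusion and the nearest-centroid clustering are stand-ins chosen for brevity; the patent does not fix the fusion or clustering algorithm.

```python
# Hypothetical sketch of claims 8-9: fuse each user's feature from the
# login relationship graph with its feature from the verification
# relationship graph, then cluster the fused features into class attributes.
def fuse(login_feat, verify_feat):
    # Assumed fusion: elementwise mean of the two per-graph features.
    return [(a + b) / 2 for a, b in zip(login_feat, verify_feat)]

def nearest_centroid(feat, centroids):
    # Toy clustering: the class attribute is the index of the closest centroid.
    def dist(c):
        return sum((x - y) ** 2 for x, y in zip(feat, c))
    return min(range(len(centroids)), key=lambda i: dist(centroids[i]))

users = {
    "u1": {"login": [0.1, 0.2], "verify": [0.3, 0.2]},
    "u2": {"login": [0.9, 0.8], "verify": [0.7, 0.8]},
}
centroids = [[0.2, 0.2], [0.8, 0.8]]
classes = {u: nearest_centroid(fuse(f["login"], f["verify"]), centroids)
           for u, f in users.items()}
```

The same pattern applies to device features per claim 9's second half; in practice a learned clustering (e.g. k-means) would replace the fixed centroids.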
10. The method according to any one of claims 1-9, further comprising:
carrying out attack risk prediction processing on the obtained target image triggering the task to obtain image attack risk information;
and determining a living body category of the task according to the user attack risk information and the equipment attack risk information, including: and determining the living body category according to the image attack risk information, the user attack risk information and the equipment attack risk information.
11. The method of claim 10, wherein the image attack risk information comprises an image attack probability, the user attack risk information comprises a number of user attacks, and the device attack risk information comprises a number of device attacks; determining the living body category according to the image attack risk information, the user attack risk information and the equipment attack risk information, including:
calculating the total attack times according to the user attack times and the equipment attack times;
and determining the living body category according to the total attack times and the image attack probability.
12. The method of claim 11, wherein determining the living organism category from the total number of attacks and the image attack probability comprises:
determining a predicted attack probability corresponding to the total number of attacks;
calculating to obtain the total attack probability according to the predicted attack probability and the image attack probability;
and if the total probability of the attack reaches a preset threshold, determining the living body category as the attack, and if the total probability of the attack does not reach the preset threshold, determining the living body category as the living body.
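The decision flow of claims 11-12 can be sketched end to end. The monotone count-to-probability mapping and the averaging combination below are assumptions made for illustration; the claims require only that a predicted probability is derived from the total attack count, combined with the image attack probability, and thresholded.

```python
# Hypothetical sketch of claims 11-12: combine user and equipment attack
# counts into a total, map the total to a predicted attack probability,
# combine with the image attack probability, and compare to a threshold.
def predicted_attack_probability(total_attacks, scale=5.0):
    # Assumed monotone mapping from attack count into [0, 1).
    return total_attacks / (total_attacks + scale)

def classify(user_attacks, device_attacks, image_attack_prob, threshold=0.5):
    total = user_attacks + device_attacks              # total attack times
    hist_prob = predicted_attack_probability(total)    # history-based prob
    # Assumed combination: average of historical and image probabilities.
    total_prob = (hist_prob + image_attack_prob) / 2
    return "attack" if total_prob >= threshold else "living body"

clean = classify(user_attacks=0, device_attacks=0, image_attack_prob=0.1)
risky = classify(user_attacks=8, device_attacks=7, image_attack_prob=0.9)
```

A clean history with a low image score stays "living body", while a heavy attack history plus a high image score crosses the threshold.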
13. A method of training a living body detection model, the method comprising:
acquiring user characteristics and equipment characteristics from a preset user equipment relationship diagram;
training according to the user characteristics to obtain a user risk perception model, wherein the user risk perception model is used for predicting user attack risk information of a target user in a living body detection task;
training according to the equipment characteristics to obtain an equipment risk perception model, wherein the equipment risk perception model is used for predicting equipment attack risk information of target equipment in the task;
The user attack risk information and the equipment attack risk information are used for determining the living body category of the task; the living body detection model includes the user risk perception model and the device risk perception model.
14. The method of claim 13, wherein the user device relationship graph includes user nodes, device nodes, and edges between the user nodes and the device nodes, wherein the user nodes include the user features, wherein the device nodes include the device features, and wherein the edges are used for characterizing usage behavior information of user usage devices;
the user characteristics and the equipment characteristics are respectively obtained by optimizing adjacent nodes in the user equipment relation diagram, wherein the adjacent nodes are user nodes and equipment nodes which are connected through the edges.
15. The method of claim 14, wherein the user features and the device features are determined based on a graph feature prediction model, and the graph feature prediction model is obtained by predicting the feature corresponding to each node based on the user device relationship graph and training based on feature difference information between the adjacent nodes.
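The adjacent-node optimization of claims 14-15 can be sketched with a plain squared-difference objective: node features joined by an edge are pulled toward each other. The loss form and gradient step are assumptions for illustration; the patent specifies only that feature difference information between adjacent nodes drives the optimization.

```python
# Hypothetical sketch of claims 14-15: optimize node features so that user
# and device nodes connected by an edge have similar features.
def optimize_adjacent(features, edges, lr=0.1, steps=50):
    feats = {k: list(v) for k, v in features.items()}
    for _ in range(steps):
        for u, d in edges:
            for i in range(len(feats[u])):
                diff = feats[u][i] - feats[d][i]
                # Gradient step on 0.5 * diff**2: pull both endpoints together.
                feats[u][i] -= lr * diff
                feats[d][i] += lr * diff
    return feats

features = {"u1": [1.0], "d1": [0.0]}
out = optimize_adjacent(features, edges=[("u1", "d1")])
```

With a single edge, both endpoint features converge to their midpoint; with many edges this becomes a simple graph-smoothing embedding.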
16. The method according to any one of claims 13-15, wherein training to obtain a user risk perception model from the user characteristics comprises:
clustering the user characteristics to obtain user attributes;
aiming at each user attribute, training a risk perception model corresponding to the user attribute according to the user characteristics of the user attribute;
the user risk perception model comprises risk perception models corresponding to all user attributes.
17. The method according to any of claims 13-16, wherein the user device relationship graph comprises a user device login relationship graph and a user device authentication relationship graph; the user characteristics are obtained by fusing the user characteristics in the user equipment login relation diagram and the user characteristics in the user equipment verification relation diagram.
18. The method according to any one of claims 13-17, wherein training a device risk perception model from the device features comprises:
clustering the equipment characteristics to obtain equipment attributes;
aiming at each equipment attribute, training a risk perception model corresponding to the equipment attribute according to the equipment characteristics of the equipment attribute;
The equipment risk perception model comprises risk perception models corresponding to equipment attributes.
19. The method according to any of claims 13-18, wherein the user device relationship graph comprises a user device login relationship graph and a user device authentication relationship graph; the device features are the features obtained by fusing the device features in the user equipment login relation diagram and the device features in the user equipment verification relation diagram.
20. The method according to any one of claims 13-19, wherein training to obtain a user risk perception model from the user characteristics comprises:
inputting the user characteristics into a first network model, and outputting predicted user attack risk information;
and calculating a first prediction loss between the predicted user attack risk information and a preset user attack true value, and generating the user risk perception model according to the first prediction loss.
21. The method according to any one of claims 13-20, wherein training a device risk perception model from the device features comprises:
inputting the equipment characteristics into a second network model, and outputting predicted equipment attack risk information;
and calculating a second prediction loss between the predicted equipment attack risk information and a preset equipment attack true value, and generating the equipment risk perception model according to the second prediction loss.
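The loss-driven training of claims 20-21 can be sketched with a tiny model. The logistic regression "network model" and per-sample gradient update below are illustrative assumptions; the claims only require feeding features into a network model, computing a prediction loss against preset attack ground-truth values, and generating the risk perception model from that loss.

```python
# Hypothetical sketch of claims 20-21: train a small risk perception model
# by predicting attack risk from features and updating from the loss
# against preset attack ground-truth labels.
import math

def train_risk_model(features, labels, lr=0.5, epochs=200):
    w, b = [0.0] * len(features[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1.0 / (1.0 + math.exp(-z))   # predicted attack risk info
            err = pred - y                       # gradient of the loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(model, x):
    w, b = model
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy data: higher feature values correspond to attack (label 1).
feats = [[0.1], [0.2], [0.8], [0.9]]
labels = [0, 0, 1, 1]
model = train_risk_model(feats, labels)
```

The same recipe applies to the user risk perception model of claim 20, with user features and a "first network model" in place of the device-side inputs.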
22. A living body detection apparatus, characterized in that the apparatus comprises:
the acquisition unit is used for responding to the triggered task detected by the living body and acquiring user attack risk information of a target user triggering the task and equipment attack risk information of target equipment triggering the task;
and the determining unit is used for determining the living body category of the task according to the user attack risk information and the equipment attack risk information, wherein the living body category is living body or attack.
23. A training device for a living body detection model, the device comprising:
the acquisition unit is used for acquiring user characteristics and equipment characteristics from a preset user equipment relationship diagram;
the first training unit is used for training to obtain a user risk perception model according to the user characteristics, wherein the user risk perception model is used for predicting user attack risk information of a target user in a living body detection task;
the second training unit is used for training to obtain a device risk perception model according to the device characteristics, wherein the device risk perception model is used for predicting device attack risk information of target devices in the task;
The user attack risk information and the equipment attack risk information are used for determining the living body category of the task; the living body detection model includes the user risk perception model and the device risk perception model.
24. A living body detection system, characterized by comprising:
at least one memory including at least one set of instructions to push information;
at least one processor in communication with the at least one memory;
wherein the method of any of claims 1 to 12 is implemented when the at least one processor executes the at least one set of instructions.
25. A training system for a living body detection model, comprising:
at least one memory including at least one set of instructions to push information;
at least one processor in communication with the at least one memory;
wherein the method of any of claims 13 to 21 is implemented when the at least one processor executes the at least one set of instructions.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310534152.9A CN116486494A (en) | 2023-05-09 | 2023-05-09 | Living body detection method, training method and device of living body detection model |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310534152.9A CN116486494A (en) | 2023-05-09 | 2023-05-09 | Living body detection method, training method and device of living body detection model |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116486494A true CN116486494A (en) | 2023-07-25 |
Family
ID=87219582
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310534152.9A Pending CN116486494A (en) | 2023-05-09 | 2023-05-09 | Living body detection method, training method and device of living body detection model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116486494A (en) |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110765923B (en) | Face living body detection method, device, equipment and storage medium | |
JP2022504704A (en) | Target detection methods, model training methods, equipment, equipment and computer programs | |
WO2021232985A1 (en) | Facial recognition method and apparatus, computer device, and storage medium | |
CN111401344A (en) | Face recognition method and device and training method and device of face recognition system | |
EP3780000A1 (en) | Beauty counseling information providing device and beauty counseling information providing method | |
KR102455966B1 (en) | Mediating Apparatus, Method and Computer Readable Recording Medium Thereof | |
CN109543633A (en) | A kind of face identification method, device, robot and storage medium | |
CN108596110A (en) | Image-recognizing method and device, electronic equipment, storage medium | |
CN113128481A (en) | Face living body detection method, device, equipment and storage medium | |
CN111597944B (en) | Living body detection method, living body detection device, computer equipment and storage medium | |
CN110288668B (en) | Image generation method, device, computer equipment and storage medium | |
CN115497176A (en) | Living body detection model training method, living body detection method and system | |
CN115984977A (en) | Living body detection method and system | |
CN116486494A (en) | Living body detection method, training method and device of living body detection model | |
CN115376198A (en) | Gaze direction estimation method, gaze direction estimation device, electronic apparatus, medium, and program product | |
CN114495188B (en) | Image data processing method and device and related equipment | |
CN116110135A (en) | Living body detection method and system | |
CN116453204B (en) | Action recognition method and device, storage medium and electronic equipment | |
CN114429669B (en) | Identity recognition method, identity recognition device, computer equipment and storage medium | |
CN117576245B (en) | Method and device for converting style of image, electronic equipment and storage medium | |
CN116343348A (en) | Living body detection method and system | |
US20230305097A1 (en) | Systems and methods for associating rf signals with an individual | |
CN115761907A (en) | Living body detection method and system | |
Bojkovic et al. | Internet of Biometric Things: Standardization Activities and Frameworks | |
CN114581978A (en) | Face recognition method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||