CN116189316A - Living body detection method and system


Info

Publication number
CN116189316A
Authority
CN
China
Prior art keywords
living body, living, target, detection, body detection
Legal status
Pending
Application number
CN202211731962.5A
Other languages
Chinese (zh)
Inventor
曹佳炯
丁菁汀
Current Assignee
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Application filed by Alipay Hangzhou Information Technology Co Ltd
Priority to CN202211731962.5A
Publication of CN116189316A

Classifications

    • G06V 40/45: Detection of the body part being alive (under G06V 40/40, Spoof detection, e.g. liveness detection)
    • G06F 21/32: User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G06F 21/46: Administration of authentication by designing passwords or checking the strength of passwords
    • G06V 10/42: Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis
    • G06V 10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; coarse-fine approaches, e.g. multi-scale approaches; use of context analysis; selection of dictionaries
    • G06V 10/764: Image or video recognition or understanding using classification, e.g. of video objects
    • G06V 10/778: Active pattern-learning, e.g. online learning of image or video features
    • G06V 10/82: Image or video recognition or understanding using neural networks

Abstract

This specification provides a living body detection method and system. The method comprises: acquiring a biometric image containing a biometric feature of a target user; performing preliminary living body detection on the target user based on the biometric image to obtain a preliminary living body detection result of the target user; then instructing the target user to input identity verification information of a target length, and performing living body detection on the target user again based on the target user's actual input information to obtain a target living body detection result of the target user, wherein the target length is related to the preliminary living body detection result. This scheme can improve the accuracy of the living body detection result.

Description

Living body detection method and system
Technical Field
The present disclosure relates to the field of artificial intelligence, and in particular, to a living body detection method and system.
Background
Biometric recognition (e.g., face recognition) is currently widely used in a variety of contexts, such as face-recognition-based payment and face-recognition-based access control systems. Biometric recognition brings convenience to people's lives, but it also introduces security risks. Living body attacks are one of the security risks faced in the biometric recognition field: for example, an attacker may mount a living body attack using a mobile phone screen, a photo, printed paper, a high-precision mask, or the like.
For this reason, after acquiring a biometric image, a biometric recognition system needs to detect whether the target user is a living body based on that image, and execute the subsequent biometric recognition process only when the target user is determined to be a living body. In practice, however, living body detection based on a biometric image alone is not very accurate, which lowers security in the biometric recognition field.
Disclosure of Invention
This specification provides a living body detection method and a living body detection system that can improve the accuracy of living body detection results, thereby improving security in the field of biometric recognition.
In a first aspect, the present specification provides a living body detection method, comprising: acquiring a biological characteristic image, wherein the biological characteristic image comprises biological characteristics of a target user; performing preliminary living body detection on the target user based on the biological characteristic image to obtain a preliminary living body detection result of the target user; and indicating the target user to input identity verification information of a target length, and performing secondary living detection on the target user based on the actual input information of the target user to obtain a target living detection result of the target user, wherein the target length is related to the preliminary living detection result.
In some embodiments, the preliminary living body detection result comprises a first probability that the target user is a living body, and the target length is inversely related to the first probability.
In some embodiments, before the target user is instructed to enter the authentication information of the target length, further comprising: determining a target identity verification type; and determining the target length based on the target authentication type and the first probability.
In some embodiments, the determining of the target length based on the target identity verification type and the first probability comprises: determining a weight coefficient based on the type of the current application scene, wherein the weight coefficient is inversely related to how strongly the current application scene demands biometric recognition security; weighting the first probability with the weight coefficient to obtain a scene-adapted first probability; and determining the target length based on the target identity verification type and the scene-adapted first probability.
In some embodiments, the determining the target length based on the target authentication type and the first probability comprises: inputting the target identity verification type and the first probability into a pre-trained length mapping model to obtain the target length, wherein the length mapping model is obtained by training a plurality of groups of training samples, and each group of training samples comprises: sample authentication type, sample probability, and sample length.
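To illustrate how these embodiments could fit together, the following is a minimal Python sketch combining the scene weight, the scene-adapted first probability, and a per-type base length. Every name, weight, and length below is a hypothetical stand-in for illustration, not a value from this specification:

```python
# Illustrative sketch only: the scene weights, base lengths, and the
# length rule below are assumptions, not values from this specification.

def scene_weight(scene_type: str) -> float:
    # Inversely related to the scene's demand for biometric security:
    # the more security-critical the scene, the smaller the weight,
    # hence the smaller the adapted probability and the longer the input.
    weights = {"payment": 0.6, "access_control": 0.8, "attendance": 1.0}
    return weights.get(scene_type, 0.8)

def target_length(first_probability: float, auth_type: str, scene_type: str) -> int:
    adapted = first_probability * scene_weight(scene_type)  # scene-adapted first probability
    base = {"user_id": 6, "terminal_id": 6, "password": 8, "account_id": 8}[auth_type]
    # Target length is inversely related to the adapted probability:
    # a lower living-body probability demands longer verification input.
    return base + round((1.0 - adapted) * base)
```

Under these assumed numbers, a high living-body probability in a low-security scene yields a short required input (e.g., target_length(0.9, "password", "attendance") returns 9), while a low probability in a payment scene demands a much longer one (target_length(0.3, "password", "payment") returns 15).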
In some embodiments, the determining of the target identity verification type includes: randomly selecting the target identity verification type from a plurality of candidate identity verification types; or determining the type of the current application scene and selecting, from the plurality of candidate identity verification types, the identity verification type that matches the scene type as the target identity verification type; or taking the identity verification type designated by the target user among the plurality of candidate identity verification types as the target identity verification type.
In some embodiments, the plurality of candidate identity verification types includes at least two of: identity verification through the user identification; identity verification through the identification of the user terminal; identity verification through a registration password; and identity verification through the registered-account identification.
In some embodiments, the performing of living body detection again on the target user based on the actual input information of the target user to obtain a target living body detection result of the target user includes: determining identity information of a first user stored in a database as target identity information, wherein the first user is the logged-in user who triggered living body detection, or a user obtained by performing biometric recognition on the biometric image; and matching the actual input information against the target identity information and selectively executing a first operation or a second operation based on the matching result, wherein the first operation comprises determining that the target user is a living body if the matching succeeds, and the second operation comprises determining that the target user is a non-living body if the matching fails.
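A minimal sketch of this matching step, assuming a plain dictionary stands in for the database and exact string equality stands in for the matching logic (both are simplifications for illustration):

```python
from typing import Optional

def redetect(actual_input: str, database: dict,
             login_user: Optional[str] = None,
             recognized_user: Optional[str] = None) -> bool:
    # The first user is the logged-in user who triggered living body detection,
    # or, failing that, the user recognized from the biometric image.
    first_user = login_user if login_user is not None else recognized_user
    target_identity_info = database[first_user]
    # First operation: match succeeds -> target user judged a living body (True).
    # Second operation: match fails -> judged a non-living body (False).
    return actual_input == target_identity_info
```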
In some embodiments, the preliminary living body detection result includes a first probability that the target user is a living body, and performing preliminary living body detection on the target user based on the biometric image to obtain the preliminary living body detection result comprises: inputting the biometric image into a trained living body detection model and performing living body detection processing on the image through the model to obtain a second probability that the target user is a living body and a confidence corresponding to the second probability; and determining the first probability based on the second probability and its corresponding confidence.
In some embodiments, determining the first probability based on the second probability and a confidence level corresponding to the second probability comprises: and determining the product of the second probability and the confidence corresponding to the second probability as the first probability.
In some embodiments, the living body detection model comprises: a feature extraction network, a local living body detection network, a global living body detection network, and a confidence detection network; and performing living body detection processing on the biometric image through the living body detection model to obtain the second probability that the target user is a living body and the confidence corresponding to the second probability comprises: performing feature extraction processing on the biometric image through the feature extraction network to obtain a feature map; performing living body detection processing separately on a plurality of local areas of the feature map through the local living body detection network to obtain living body detection results for the plurality of local areas; performing living body detection processing on the feature map and the living body detection results of the plurality of local areas through the global living body detection network to obtain the second probability that the target user is a living body; and determining, through the confidence detection network, the confidence corresponding to the second probability based on the living body detection results of the plurality of local areas.
In some embodiments, the training process of the living detection model includes a first training phase configured to train the feature extraction network, the local living detection network, and the global living detection network, and a second training phase configured to train the confidence detection network.
In some embodiments, the first training phase comprises: acquiring a sample biological feature image and labeling information corresponding to the sample biological feature image, wherein the labeling information comprises: a global living body labeling result and living body labeling results of a plurality of local areas; inputting the sample biological feature image into the feature extraction network to obtain a sample feature map, inputting the sample feature map into the local living body detection network to obtain first living body detection results of a plurality of local areas, and inputting the sample feature map and the first living body detection results of the plurality of local areas into the global living body detection network to obtain first global living body detection results; and determining a first target loss based on the first global living detection result, the first living detection result of the plurality of local areas, the global living labeling result, and the living labeling result of the plurality of local areas, and training the feature extraction network, the local living detection network, and the global living detection network with the aim of minimizing the first target loss.
In some embodiments, the first training phase further comprises: performing perturbation processing on the sample biometric image to obtain a perturbed biometric image, inputting the perturbed biometric image into the feature extraction network to obtain a perturbed feature map, and inputting the perturbed feature map into the local living body detection network to obtain second living body detection results for the plurality of local areas; and the determining of the first target loss then comprises: determining the first target loss based on the first global living body detection result, the first living body detection results of the plurality of local areas, the second living body detection results of the plurality of local areas, the global living body labeling result, and the living body labeling results of the plurality of local areas.
In some embodiments, determining the first target loss from these quantities comprises: determining a first loss based on the difference between the first global living body detection result and the global living body labeling result; determining a second loss based on the differences between the first living body detection results of the plurality of local areas and the living body labeling results of those areas; determining a third loss based on the differences between the first and second living body detection results of the plurality of local areas; and determining the first target loss based on the first loss, the second loss, and the third loss.
In some embodiments, the second training phase comprises: acquiring a sample biometric image and its labeling information, the labeling information comprising a global living body labeling result and living body labeling results for a plurality of local areas; inputting the sample biometric image into the feature extraction network to obtain a sample feature map, inputting the sample feature map into the local living body detection network to obtain first living body detection results for the plurality of local areas, inputting those first results into the confidence detection network to obtain detection confidences for the plurality of local areas, updating the corresponding local areas of the sample feature map based on those detection confidences to obtain an updated feature map, inputting the updated feature map into the local living body detection network to obtain third living body detection results for the plurality of local areas, and inputting the updated feature map and the third results into the global living body detection network to obtain a second global living body detection result; and determining a second target loss based on the second global living body detection result, the third living body detection results of the plurality of local areas, the global living body labeling result, and the living body labeling results of the plurality of local areas, and training the confidence detection network with minimizing the second target loss as the training target.
In some embodiments, performing living body detection again on the target user based on the actual input information of the target user to obtain the target living body detection result includes: performing living body detection again on the target user based on the actual input information to obtain a re-detection result; and determining the target living body detection result based on the preliminary living body detection result and the re-detection result.
In some embodiments, the biometric feature includes at least one of: a face, an iris, a fingerprint, and a palm print.
In a second aspect, this specification also provides a living body detection system comprising: at least one storage medium storing at least one instruction set for living body detection, and at least one processor communicatively connected to the at least one storage medium, wherein, when the living body detection system runs, the at least one processor reads the at least one instruction set and, as it directs, performs the living body detection method of any one of the first aspects.
According to the above technical solution, the living body detection method and system provided by this specification acquire a biometric image containing a biometric feature of the target user, perform preliminary living body detection on the target user based on that image to obtain a preliminary living body detection result, then instruct the target user to input identity verification information of a target length and perform living body detection again based on the target user's actual input information to obtain a target living body detection result, the target length being related to the preliminary living body detection result. On top of the image-based preliminary living body detection, the scheme introduces an identity verification process: the target user is instructed to input identity verification information of the target length, and living body detection is performed on the target user again based on the actual input, so the two-stage detection process improves the accuracy of the living body detection result as a whole. Furthermore, because the target length of the identity verification information is tied to the preliminary detection result, it varies from case to case; compared with identity verification information of a fixed length, this further improves the accuracy of the living body detection result.
Additional functionality of the living body detection method and system provided in this specification will be set forth in part in the description that follows. The inventive aspects of the living body detection method and system provided herein may be fully understood by practicing or using the methods, devices, and combinations described in the detailed examples below.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present description, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present description, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 shows an application scenario schematic of a living body detection system provided according to an embodiment of the present specification;
FIG. 2 illustrates a hardware architecture diagram of a computing device provided in accordance with an embodiment of the present description;
FIG. 3 shows a flowchart of a living body detection method provided according to an embodiment of the present description;
FIG. 4 shows a schematic structural view of a living body detection model provided according to an embodiment of the present specification;
FIG. 5 shows a schematic diagram of a training process of a first training phase of a living body detection model provided according to an embodiment of the present disclosure;
FIG. 6 shows a schematic diagram of a training process of a second training phase of a living body detection model provided according to an embodiment of the present disclosure; and
fig. 7 shows an interactive schematic diagram of a living body detection process provided according to an embodiment of the present specification.
Detailed Description
The following description is presented to enable one of ordinary skill in the art to make and use the invention, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the disclosure. Thus, the present description is not limited to the embodiments shown, but is to be accorded the widest scope consistent with the claims.
The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. For example, as used herein, the singular forms "a", "an" and "the" include plural referents unless the context clearly dictates otherwise. The terms "comprises," "comprising," "includes," and/or "including," when used in this specification, are taken to specify the presence of stated integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
These and other features of this specification, as well as the operation and function of the related structural elements and the combination of parts and economies of manufacture, may be better understood from the following description with reference to the accompanying drawings, all of which form a part of this specification. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended to limit the scope of this specification. It should also be understood that the drawings are not drawn to scale.
The flowcharts used in this specification illustrate operations implemented by systems according to some embodiments in this specification. It should be clearly understood that the operations of the flow diagrams may be implemented out of order. Rather, operations may be performed in reverse order or concurrently. Further, one or more other operations may be added to the flowchart. One or more operations may be removed from the flowchart.
For convenience of description, the present specification will explain terms appearing in the following description as follows:
living body attack: means for attacking face recognition systems or other biometric systems may, for example, be a living body attack by means of a mobile phone screen, printed photo, high-precision mask, etc.
Living body detection: the method refers to algorithms and techniques for preventing living body attack in face recognition or other biological recognition systems, and can be used for recognizing whether a target user is a living body or not.
Before describing the specific embodiments of the present specification, the application scenario of the present specification will be described as follows:
The living body detection method provided by this specification can be applied to scenarios in which identity verification is performed based on biometric features. The method may be regarded as a user classification method that classifies the target user as living or non-living. For example, after a face recognition system (e.g., face payment, face access control, face attendance) acquires a face image, the living body detection method provided in this specification may be used to perform living body detection on the target user based on the face image to determine whether the target user is a living body. If the target user is a living body, the subsequent face recognition flow continues to be executed; if not, the subsequent face recognition flow need not be executed. This improves the security of face recognition.
It should be noted that the above face recognition scenario is only one of several usage scenarios covered by this specification. The living body detection method provided herein can be applied not only to face recognition scenarios but to all scenarios in which identity verification is performed based on other biometric features, such as fingerprints, palm prints, and irises. Those skilled in the art will understand that applying the living body detection method described in this specification to other usage scenarios is also within its protection scope.
Fig. 1 shows a schematic diagram of an application scenario of a living body detection system provided according to an embodiment of this specification. The living body detection system 001 (hereinafter referred to as system 001) can be applied to living body detection in any scenario, such as a face-recognition-based payment scenario, access control scenario, or attendance scenario. As shown in fig. 1, system 001 may include a target user 100, a client 200, a server 300, and a network 400.
The target user 100 may be a user that triggers the living body detection, and the target user 100 may perform a preset operation at the client 200 to trigger the living body detection.
The client 200 may be a device that responds to a living body detection operation of the target user 100. In some embodiments, the living body detection method described herein may be performed on the client 200. In that case, the client 200 may store data or instructions for performing the living body detection method described in this specification and may execute or be used to execute those data or instructions. In some embodiments, the client 200 may include a hardware device having data-processing capability and the programs necessary to drive that hardware. As shown in fig. 1, the client 200 may be communicatively connected to the server 300. In some embodiments, the server 300 may be communicatively connected to a plurality of clients 200. In some embodiments, the client 200 may interact with the server 300 over the network 400 to receive or send messages, such as biometric images. In some embodiments, the client 200 may include a mobile device, a tablet, a laptop, a built-in device of a motor vehicle, or the like, or any combination thereof. In some embodiments, the mobile device may include a smart home device, a smart mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof. In some embodiments, the smart home device may include a smart television, a desktop computer, or the like, or any combination thereof. In some embodiments, the smart mobile device may include a smartphone, a personal digital assistant, a gaming device, a navigation device, or the like, or any combination thereof. In some embodiments, the virtual reality or augmented reality device may include a virtual reality helmet, virtual reality glasses, a virtual reality patch, an augmented reality helmet, augmented reality glasses, an augmented reality patch, or the like, or any combination thereof; for example, Google Glass, a head-mounted display, VR devices, and the like. In some embodiments, the built-in devices in the motor vehicle may include an on-board computer, an on-board television, and the like. In some embodiments, the client 200 may include an image acquisition device for acquiring video or image information of the target user, thereby obtaining a biometric image. In some embodiments, the image acquisition device may be a two-dimensional image acquisition device (such as an RGB camera), or a combination of a two-dimensional image acquisition device and a depth image acquisition device (such as a 3D structured-light camera or a laser detector). In some embodiments, the client 200 may be a device with positioning technology for locating its own position.
In some embodiments, one or more applications (APPs) may be installed on the client 200. An APP can provide the target user 100 with the ability and an interface to interact with the outside world via the network 400. APPs include, but are not limited to: web browsers, search applications, chat applications, shopping applications, video applications, financial applications, instant messaging tools, mailbox clients, social platform software, and the like. In some embodiments, a target APP may be installed on the client 200. The target APP can collect video or image information of the target user for the client 200, thereby obtaining a biometric image. In some embodiments, the target user 100 may also trigger a living body detection request through the target APP, and the target APP may respond to that request by performing the living body detection method described in this specification. The living body detection method will be described in detail later.
The server 300 may be a server providing various services, such as a background server providing support for biometric images acquired on the client 200. In some embodiments, the in-vivo detection method described herein may be performed on the server 300. At this time, the server 300 may store data or instructions to perform the living body detection method described in the present specification, and may execute or be used to execute the data or instructions. In some embodiments, the server 300 may include a hardware device having a data information processing function and a program necessary to drive the hardware device to operate. The server 300 may be communicatively connected to a plurality of clients 200 and receive data transmitted from the clients 200.
The network 400 is the medium used to provide communication connections between the client 200 and the server 300; it may facilitate the exchange of information or data. As shown in fig. 1, the client 200 and the server 300 may connect to the network 400 and transmit information or data to each other through it. In some embodiments, the network 400 may be any type of wired or wireless network, or a combination thereof. For example, the network 400 may include a cable network, a wired network, a fiber-optic network, a telecommunications network, an intranet, the internet, a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), a metropolitan area network (MAN), a public switched telephone network (PSTN), a Bluetooth network, a ZigBee network, a near-field communication (NFC) network, or the like. In some embodiments, the network 400 may include one or more network access points, for example, wired or wireless access points such as base stations or internet exchange points, through which one or more components of the client 200 and the server 300 may connect to the network 400 to exchange data or information.
It should be understood that the numbers of clients 200, servers 300, and networks 400 in fig. 1 are merely illustrative; there may be any number of each, as the implementation requires.
It should be noted that the living body detection method described in this specification may be executed entirely on the client 200, entirely on the server 300, or partly on the client 200 and partly on the server 300.
Fig. 2 illustrates a hardware architecture diagram of a computing device 600 provided according to an embodiment of this specification. The computing device 600 may perform the living body detection method described herein, which is described elsewhere in this specification. When the method is performed on the client 200, the computing device 600 may be the client 200. When it is performed on the server 300, the computing device 600 may be the server 300. When the method is performed partly on the client 200 and partly on the server 300, the computing device 600 may include both the client 200 and the server 300.
As shown in fig. 2, computing device 600 may include at least one storage medium 630 and at least one processor 620. In some embodiments, computing device 600 may also include a communication port 650 and an internal communication bus 610. Meanwhile, computing device 600 may also include I/O component 660.
Internal communication bus 610 may connect the various system components including storage medium 630, processor 620, and communication ports 650.
I/O component 660 supports input/output between computing device 600 and other components.
The communication port 650 is used for data communication between the computing device 600 and the outside world, for example, the communication port 650 may be used for data communication between the computing device 600 and the network 400. The communication port 650 may be a wired communication port or a wireless communication port.
The storage medium 630 may include a data storage device, which may be a non-transitory or a transitory storage medium. For example, the data storage device may include one or more of a magnetic disk 632, a read-only memory (ROM) 634, or a random access memory (RAM) 636. The storage medium 630 further includes at least one instruction set stored in the data storage device. The instructions are computer program code, which may include programs, routines, objects, components, data structures, procedures, modules, and the like that perform the living body detection methods provided in this specification.
The at least one processor 620 may be communicatively connected with the at least one storage medium 630 and the communication port 650 via the internal communication bus 610. The at least one processor 620 is configured to execute the at least one instruction set. When the computing device 600 runs, the at least one processor 620 reads the at least one instruction set and, as it directs, performs the living body detection method provided herein. The processor 620 may perform all or some of the steps of the living body detection method. The processor 620 may be in the form of one or more processors; in some embodiments it may include one or more hardware processors, such as microcontrollers, microprocessors, reduced instruction set computers (RISC), application-specific integrated circuits (ASIC), application-specific instruction-set processors (ASIP), central processing units (CPU), graphics processing units (GPU), physics processing units (PPU), microcontroller units, digital signal processors (DSP), field-programmable gate arrays (FPGA), advanced RISC machines (ARM), programmable logic devices (PLD), any circuit or processor capable of executing one or more functions, or the like, or any combination thereof. For illustrative purposes only, a single processor 620 is depicted in the computing device 600 in this specification. It should be noted, however, that the computing device 600 may also include multiple processors, so operations and/or method steps disclosed in this specification may be performed by one processor, as described herein, or jointly by several processors. For example, if the processor 620 of the computing device 600 performs steps A and B in this specification, it should be understood that steps A and B may also be performed by two different processors 620 jointly or separately (e.g., a first processor performs step A and a second processor performs step B, or the two perform steps A and B together).
Fig. 3 shows a flowchart of a living body detection method P100 provided according to an embodiment of this specification. As described above, the computing device 600 may perform the living body detection method P100 described in this specification. Specifically, the processor 620 may read an instruction set stored in its local storage medium and then, as that instruction set directs, execute the method P100. As shown in fig. 3, the method P100 may include:
s110: a biometric image is acquired, the biometric image including a biometric of a target user.
The biometric image may refer to an image containing a biometric feature of the target user 100. The biometric image may be used to identify or verify the identity information of the target user 100 by exploiting the uniqueness of the biometric feature. The biometric features involved in biometric recognition in the embodiments of this application may include, for example, eye prints, voice prints, fingerprints, palm prints, heartbeats, pulses, chromosomes, DNA, human tooth bite marks, and the like. In some embodiments, the biometric feature may include at least one of a face, an eye print, a fingerprint, a palm print, or an iris. The eye print may include biometric features such as the iris and sclera. In other words, the biometric image may be at least one of a face image, an eye print image, a fingerprint image, a palm print image, or an iris image.
In some embodiments, when the computing device 600 is the client 200, the client 200 may use its image acquisition module to capture the biometric image of the target user 100. In some embodiments, when the computing device 600 is the server 300, the server 300 may receive the biometric image from the client 200, the biometric image having been captured by the client 200 with its image acquisition module.
S120: and performing preliminary living body detection on the target user based on the biological characteristic image to obtain a preliminary living body detection result of the target user.
The preliminary living body detection result preliminarily indicates whether the target user 100 is a living body. For example, the preliminary living body detection result may include a first probability that the target user 100 is a living body.
In some embodiments, the processor 620 may input the biometric image into a pre-trained living body detection model that has the ability to identify whether the target user 100 is a living body. Preliminary living body detection of the target user 100 is then performed by the model, yielding the preliminary living body detection result of the target user 100, i.e., the first probability that the target user 100 is a living body. The living body detection model may be any machine learning model with living body detection capability.
In some embodiments, besides identifying whether the target user 100 is a living body, the living body detection model may also be able to estimate the confidence of its own output. Specifically, the processor 620 inputs the biometric image into a trained living body detection model, which performs living body detection processing on the image to obtain a second probability that the target user is a living body together with a confidence corresponding to that second probability. In this case, the processor 620 may determine the preliminary living body detection result of the target user 100, i.e., the first probability that the target user 100 is a living body, based on the second probability and its corresponding confidence. In some embodiments, denote the second probability output by the living body detection model as P and its corresponding confidence as C, where both P and C take values in [0, 1]. The product of the second probability P and the confidence C may be determined as the first probability S, namely:
S = C × P        Formula (1)
As can be seen from formula (1), the first probability S is positively correlated with both the second probability P and the confidence C: the higher the second probability P output by the living body detection model and the higher the confidence C, the higher the first probability S. The first probability S can be regarded as the second probability P corrected by the confidence C, and is therefore more accurate than the second probability P alone.
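In code, formula (1) is a one-line computation; the following Python snippet illustrates it (the range guard is merely an added sanity check, not part of the formula):

```python
def first_probability(second_probability: float, confidence: float) -> float:
    # Formula (1): S = C * P, with both P and C in [0, 1].
    assert 0.0 <= second_probability <= 1.0 and 0.0 <= confidence <= 1.0
    return confidence * second_probability  # S, the confidence-corrected probability
```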
Fig. 4 shows a schematic structural diagram of a living body detection model provided according to an embodiment of this specification. As shown in fig. 4, the living body detection model 400 may include: a feature extraction network 401, a local living body detection network 402, a global living body detection network 403, and a confidence detection network 404. Referring to fig. 4, the living body detection process that the model 400 applies to the target user 100 may include:
(1) The biometric image is input into the feature extraction network 401, which performs feature extraction on the image to obtain a feature map.
(2) The feature map is input into the local living body detection network 402, which performs living body detection separately on a plurality of local areas of the feature map to obtain living body detection results for those areas.
The plurality of local areas are the regions obtained by dividing the feature map according to some region-division scheme. For example, the feature map may be divided into 3 × 3 = 9 local areas by splitting it into three equal parts along its height and three equal parts along its width. The embodiments of this disclosure do not limit the division scheme; for instance, the local areas may also be obtained by random division. The living body detection result of each local area is the detection result obtained by performing living body detection on the target user based on the image content of that area. Taking the 9-area division as an example, inputting the feature map into the local living body detection network yields the living body probabilities corresponding to the 9 local areas.
(3) The feature map and the living body detection results of the plurality of local areas are input into the global living body detection network 403, which performs living body detection processing on them to obtain a global living body detection result.
The global living body detection result may include the second probability that the target user is a living body (i.e., the probability P described above). It should be understood that the global living body detection network 403 bases its decision not only on the feature map but also on the living body detection results of the local areas, which makes the global result more accurate.
(4) The living body detection results of the plurality of local areas are input into the confidence detection network 404, and the confidence corresponding to the global living body detection result is obtained.
Specifically, the confidence detection network 404 may determine confidences corresponding to the plurality of local areas based on their living body detection results, where the confidence of each local area indicates how credible that area's living body detection result is. The confidence detection network 404 may then determine the confidence corresponding to the global living body detection result (e.g., the confidence C described above) from the per-area confidences, for example by taking their weighted average.
Note that the specific network structures of the feature extraction network 401, the local living body detection network 402, the global living body detection network 403, and the confidence detection network 404 are not limited by the above embodiments; for example, a CNN (convolutional neural network), a DNN (deep neural network), an RNN (recurrent neural network), or any other feasible network structure may be used.
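To make the four-network structure of fig. 4 concrete, here is a structural sketch in PyTorch. The layer sizes, the 3 × 3 grid, and the plain-average aggregation of the per-area confidences are illustrative assumptions; the specification does not fix any of them.

```python
import torch
import torch.nn as nn

class LivenessModel(nn.Module):
    """Structural sketch of the model in fig. 4; all layer sizes are assumptions."""

    def __init__(self, grid: int = 3):
        super().__init__()
        self.grid = grid
        # Feature extraction network 401.
        self.feature_extractor = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Local living body detection network 402: one liveness probability per grid cell.
        self.local_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(grid), nn.Conv2d(64, 1, 1), nn.Sigmoid(),
        )
        # Global living body detection network 403: consumes the pooled feature map
        # plus the local results and outputs the second probability P.
        self.global_head = nn.Sequential(nn.Linear(64 + grid * grid, 1), nn.Sigmoid())
        # Confidence detection network 404: per-area confidences from the local results.
        self.conf_head = nn.Sequential(nn.Linear(grid * grid, grid * grid), nn.Sigmoid())

    def forward(self, image: torch.Tensor):
        fmap = self.feature_extractor(image)                     # (B, 64, H, W)
        local = self.local_head(fmap).flatten(1)                 # (B, grid*grid) local results
        pooled = fmap.mean(dim=(2, 3))                           # (B, 64) global summary
        p = self.global_head(torch.cat([pooled, local], dim=1))  # second probability P
        local_conf = self.conf_head(local)                       # per-area confidences
        c = local_conf.mean(dim=1, keepdim=True)                 # aggregated confidence C
        return p, c, local
```

Given p and c from this sketch, the first probability of step S120 follows from formula (1) as c * p.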
The following describes a training process of the living body detection model shown in fig. 4.
The living body detection model can be trained in stages. In some embodiments, the training process includes a first training phase and a second training phase. The first training phase is configured to train the feature extraction network 401, the local living body detection network 402, and the global living body detection network 403; the second training phase is configured to train the confidence detection network 404. That is, the feature extraction network 401, the local living body detection network 402, and the global living body detection network 403 are trained first, and the confidence detection network 404 is trained after those three networks converge. It should be appreciated that training in stages reduces the amount of computation in each stage, which helps improve overall training efficiency.
Fig. 5 shows a schematic diagram of a training process of a first training phase of a living body detection model provided according to an embodiment of the present specification. As shown in fig. 5, in some embodiments, the training process of the first training phase may include:
(1) Acquiring a sample biological feature image and labeling information corresponding to the sample biological feature image, wherein the labeling information comprises: global living labeling results and living labeling results of a plurality of local areas.
The labeling information corresponding to the sample biometric image indicates whether the sample biometric image was acquired from a living body or through a living body attack.
(2) Referring to fig. 5, the sample biometric image is input into a feature extraction network 401 to obtain a sample feature map, the sample feature map is input into a local living body detection network 402 to obtain first living body detection results of a plurality of local areas, and the sample feature map and the first living body detection results of the plurality of local areas are input into a global living body detection network 403 to obtain first global living body detection results.
(3) With continued reference to fig. 5, a first target loss is determined based on the first global living body detection result, the first living body detection results of the plurality of local areas, the global living body labeling result, and the living body labeling results of the plurality of local areas, and the feature extraction network 401, the local living body detection network 402, and the global living body detection network 403 are trained with minimizing the first target loss as the objective.
In some embodiments, the first target loss may be calculated as follows: a global loss is determined based on the difference between the first global living body detection result and the global living body labeling result; a local loss is determined based on the differences between the first living body detection results of the plurality of local areas and the living body labeling results of those areas; and the first target loss is determined from the global loss and the local loss. With continued reference to fig. 5, after the first target loss is determined, the network parameters of the feature extraction network 401, the local living body detection network 402, and the global living body detection network 403 may be adjusted with minimizing the first target loss as the training target, until the three networks reach a convergence condition or a preset number of iterations is reached.
In the first training phase, both the global loss and the local loss are considered when determining the first target loss, which makes the determined first target loss more accurate. After the first training phase, the living body detection model has the ability to perform living body detection based on a biometric image.
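A hedged sketch of this basic first target loss, assuming binary cross-entropy for both terms and equal weighting (the specification names neither the loss functions nor the weights):

```python
import torch.nn.functional as F

def first_target_loss(global_pred, global_label, local_preds, local_labels):
    # Global loss: first global detection result vs. global labeling result.
    global_loss = F.binary_cross_entropy(global_pred, global_label)
    # Local loss: first local detection results vs. local labeling results.
    local_loss = F.binary_cross_entropy(local_preds, local_labels)
    return global_loss + local_loss  # equal weighting is an assumption
```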
In some embodiments, the first training phase may further include: performing perturbation processing on the sample biometric image to obtain a perturbed biometric image. The perturbation may be global or local; for example, noise may be added to all or part of the sample biometric image to form the perturbed biometric image. The perturbed biometric image is input into the feature extraction network to obtain a perturbed feature map, and the perturbed feature map is input into the local living body detection network to obtain second living body detection results for the plurality of local areas.
On this basis, the first target loss may be determined based on the first global living body detection result, the first living body detection results of the plurality of local areas, the second living body detection results of the plurality of local areas, the global living body labeling result, and the living body labeling results of the plurality of local areas. In some embodiments, the first target loss may be calculated as follows: a first loss is determined based on the difference between the first global living body detection result and the global living body labeling result; a second loss is determined based on the differences between the first living body detection results of the plurality of local areas and the living body labeling results of those areas; a third loss is determined based on the differences between the first and second living body detection results of the plurality of local areas; and the first target loss is determined from the first, second, and third losses. The first loss represents the global loss, the second loss the local loss, and the third loss the perturbation-consistency loss, i.e., the difference between the local living body detection results of the image before and after perturbation.
In this variant of the first training phase, not only the original sample biometric image but also the perturbed biometric image is used, and the first target loss considers not only the global loss (the first loss) and the local loss (the second loss) but also the perturbation-consistency loss (the third loss). As a result, the trained living body detection model is insensitive to perturbation: its living body detection results on perturbed images are consistent with those on unperturbed images, so the trained model is accurate on both.
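Extending the loss sketch above, the perturbation variant adds the consistency term; the choice of MSE for the third loss and the unit weights are again assumptions:

```python
import torch.nn.functional as F

def first_target_loss_perturbed(global_pred, global_label, local_preds,
                                local_preds_perturbed, local_labels):
    l1 = F.binary_cross_entropy(global_pred, global_label)  # first loss (global)
    l2 = F.binary_cross_entropy(local_preds, local_labels)  # second loss (local)
    l3 = F.mse_loss(local_preds, local_preds_perturbed)     # third loss (perturbation consistency)
    return l1 + l2 + l3
```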
Fig. 6 shows a schematic diagram of a training process of a second training phase of a living body detection model provided according to an embodiment of the present specification. As shown in fig. 6, in some embodiments, the training process of the second training phase may include:
(1) Acquiring a sample biological feature image and labeling information corresponding to the sample biological feature image, wherein the labeling information comprises: global living labeling results and living labeling results of a plurality of local areas.
(2) Referring to fig. 6, the sample biometric image is input into the feature extraction network 401 to obtain a sample feature map. The sample feature map is input into the local living body detection network 402 to obtain first living body detection results of a plurality of local areas. The first living body detection results of the plurality of local areas are input into the confidence detection network 404 to obtain detection confidences of the plurality of local areas. The corresponding local areas of the sample feature map are updated based on the detection confidences of the plurality of local areas to obtain an updated sample feature map. The updated sample feature map is input into the local living body detection network 402 to obtain third living body detection results of the plurality of local areas, and the updated sample feature map together with the third living body detection results of the plurality of local areas is input into the global living body detection network 403 to obtain a second global living body detection result.
The second training stage differs from the first training stage in that, after the first living body detection results of the plurality of local areas are obtained through the local living body detection network 402, the detection confidences of the plurality of local areas are further obtained through the confidence detection network 404, and these detection confidences are then used as weight coefficients to update the corresponding local areas of the feature map, yielding the updated feature map. The updated feature map then undergoes living body detection through the local living body detection network 402 and the global living body detection network 403. It should be appreciated that the living body detection processing of these two networks on the updated feature map is similar to that of the first training phase and is not repeated here.
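A minimal sketch of the confidence-weighted feature update is given below; the assumption that the local areas tile the spatial dimensions of the feature map in a regular grid is illustrative, as the text does not fix the area layout.

```python
import torch

def update_feature_map(feature_map: torch.Tensor,
                       confidences: torch.Tensor,
                       grid: int = 4) -> torch.Tensor:
    """Weight each local area of a (B, C, H, W) feature map by its detection
    confidence, assuming R = grid * grid local areas tile the spatial
    dimensions (the area layout is an assumption)."""
    b, c, h, w = feature_map.shape
    ph, pw = h // grid, w // grid
    updated = feature_map.clone()
    for i in range(grid):
        for j in range(grid):
            # Confidence of area (i, j), broadcast over channels and pixels.
            conf = confidences[:, i * grid + j].view(b, 1, 1, 1)
            updated[:, :, i * ph:(i + 1) * ph, j * pw:(j + 1) * pw] *= conf
    return updated
```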
(3) With continued reference to fig. 6, a second target loss is determined based on the second global living body detection result, the third living body detection results of the plurality of local areas, the global living body labeling result, and the living body labeling results of the plurality of local areas, and the confidence detection network 404 is trained with minimizing the second target loss as the training target.
It should be understood that the second target loss in the second training phase is calculated in a manner similar to the first target loss in the first training phase and is not repeated here. After the second target loss is calculated, the network parameters of the confidence detection network 404 are adjusted with minimizing the second target loss as the training target, until the confidence detection network 404 reaches a convergence condition or a preset number of iterations is reached.
After the first training stage and the second training stage, the living body detection model not only has living body detection capability but can also estimate the confidence of its own output, i.e., the living body detection model outputs both the second probability P and the confidence C.
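As a minimal illustration of how the two outputs are combined downstream (the claims below describe the first probability as the product of the second probability and its confidence):

```python
def first_probability(second_probability: float, confidence: float) -> float:
    """First probability S as the product of the second probability P and its
    confidence C, per the combination described in the claims."""
    return second_probability * confidence

# Example: P = 0.9 with confidence C = 0.8 gives S = 0.72.
```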
S130: instructing the target user to input identity verification information of a target length, and performing secondary living body detection on the target user based on the actual input information of the target user to obtain a target living body detection result of the target user, wherein the target length is related to the preliminary living body detection result.
The authentication information is information for authenticating the identity of the target user. For example, the authentication information may include at least one of: user identification (e.g., a certificate number), identification of a user terminal (e.g., a cell phone number), registration account identification (e.g., an account number registered by a user in an app), registration password, etc.
In the embodiments of the present specification, the preliminary living detection result may also be referred to as an intermediate living detection result, and the target living detection result may also be referred to as a final living detection result.
After the preliminary living body detection result of the target user is obtained, the embodiments of the present specification further introduce an identity verification flow, i.e., the target user is instructed to input identity verification information of a target length. Further, the processor 620 acquires the actual input information entered by the target user according to the instruction, and performs living body detection on the target user again based on the actual input information, thereby obtaining the target living body detection result of the target user. It can be understood that, through the two-stage detection of preliminary living body detection followed by secondary living body detection, the above scheme can improve the accuracy of the living body detection result as a whole. Furthermore, since the target length is related to the preliminary living body detection result, the target length is variable; compared with identity verification information of a fixed length, this further improves the accuracy of the living body detection result.
In some embodiments, the preliminary living body detection result includes a first probability that the target user is a living body, and the target length is inversely related to the first probability. That is, the larger the first probability, the shorter the target length; the smaller the first probability, the longer the target length. It should be understood that a larger first probability indicates the target user is more likely to be a living body, i.e., the current biometric recognition is more secure, so the target user can be instructed to input shorter identity verification information, which reduces the interaction time of the target user and improves living body detection efficiency. A smaller first probability indicates the target user is more likely to be a non-living body, i.e., the current biometric recognition is less secure, so the target user can be instructed to input longer identity verification information, which improves the security of the current biometric recognition. The inverse relation between the target length and the first probability therefore reduces the interaction time of the target user as much as possible while guaranteeing the security of biometric recognition, improving living body detection efficiency.
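A minimal sketch of such an inverse mapping follows; the thresholds and lengths are illustrative assumptions, the only requirement being that the target length decreases as the first probability grows.

```python
def target_length_from_probability(first_probability: float) -> int:
    """Map the first probability to a target length; the inverse relation is
    required by the text, while the thresholds and lengths are assumptions."""
    if first_probability >= 0.9:
        return 4   # very likely living: short input, less interaction time
    if first_probability >= 0.6:
        return 6
    return 8       # likely non-living: longer input, higher security
```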
In some embodiments, before S130, the method further includes: determining a target identity verification type, and determining the target length based on the target identity verification type and the first probability. The target identity verification type may be one of the following candidate identity verification types: verification by user identifier, verification by identifier of a user terminal, verification by registration account identifier, or verification by registration password. Different identity verification types provide different security guarantees, and one of them can be determined as the target identity verification type according to requirements.
In some embodiments, the target identity verification type may be determined among the plurality of candidate identity verification types in one of the following ways (a selection sketch follows the list):
Mode 1: the target identity verification type is randomly determined among the plurality of candidate types, e.g., by randomly selecting one of them. This randomness ensures that no single identity verification type is fixedly used across different biometric recognition flows, further improving the security of biometric recognition.
Mode 2: based on the type of the current application scene, the identity verification type matching that scene type is selected from the plurality of candidate types as the target identity verification type. Different candidate types correspond to different security levels; for example, the security level of "verification by registration password" is higher than that of "verification by identifier of a user terminal", so the former can be used in scenes with higher security requirements and the latter in scenes with lower security requirements. This mode meets the security requirements of different application scenes.
Mode 3: the identity verification type designated by the target user among the plurality of candidate types is used as the target identity verification type. For example, the target user may set the target identity verification type during registration or at any other stage to specify which type is to be used. This mode increases the target user's verification flexibility and meets the personalized verification requirements of different users.
In some embodiments, the target length may be determined as follows: the target identity verification type and the first probability are input into a pre-trained length mapping model to obtain the target length. The length mapping model is trained with a plurality of groups of training samples, each group comprising a sample identity verification type, a sample probability, and a sample length. The training samples are obtained by statistics over historical data whose biometric recognition security meets preset requirements. It should be noted that the embodiments of the present disclosure do not limit the network structure of the length mapping model; for example, a neural network such as an MLP (Multi-Layer Perceptron) or any other feasible structure may be used. The trained length mapping model has the ability to map the target identity verification type and the first probability to the target length. Determining the target length with a pre-trained length mapping model is both efficient and accurate.
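A minimal sketch of such a length mapping model follows, assuming a small MLP over a one-hot encoding of the identity verification type concatenated with the first probability; the layer sizes, encoding, and output scaling are assumptions. Training would regress the model output toward the sample lengths of the training triples described above (e.g., with a mean-squared-error loss).

```python
import torch
import torch.nn as nn

class LengthMapper(nn.Module):
    """A minimal MLP sketch of the length mapping model; the one-hot type
    encoding, layer sizes, and output scaling are assumptions."""

    def __init__(self, num_types: int = 4, max_len: int = 12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_types + 1, 32),  # one-hot type + first probability
            nn.ReLU(),
            nn.Linear(32, 1),
            nn.Sigmoid())
        self.max_len = max_len

    def forward(self, type_onehot: torch.Tensor, prob: torch.Tensor) -> torch.Tensor:
        x = torch.cat([type_onehot, prob.unsqueeze(-1)], dim=-1)
        # Scale the (0, 1) output to an integer length in [1, max_len].
        return (self.net(x).squeeze(-1) * self.max_len).round().clamp(min=1)
```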
In some embodiments, the target length may also be determined as follows: a weight coefficient is determined based on the type of the current application scene, the weight coefficient being inversely related to the degree of security the current application scene requires of biometric recognition; the first probability is weighted by the weight coefficient to obtain a scene-adapted first probability; and the target length is determined based on the target identity verification type and the scene-adapted first probability.
Specifically, the higher the security requirement of the current application scene on biometric recognition, the smaller the weight coefficient; the lower the requirement, the larger the weight coefficient. For example, scenes may be classified into high-security, medium-security, and low-security scenes based on the level of security demand: if the current application scene is a high-security scene, the weight coefficient is determined as 0.8; if a medium-security scene, as 1; and if a low-security scene, as 1.25.
Denoting the determined weight coefficient as α and the first probability as S, the scene-adapted first probability S′ can be obtained by the following formula:
S′ = α · S
After the scene-adapted first probability S′ is determined, the target identity verification type and S′ may be input into the trained length mapping model to obtain the target length.
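Putting the scene adaptation together with the length mapping model, a minimal sketch (reusing the hypothetical LengthMapper above; the coefficient table uses the example values from the text, and clipping S′ to [0, 1] is an added assumption for the α = 1.25 case):

```python
import torch

# Example coefficients from the text: 0.8 / 1.0 / 1.25 for high / medium /
# low security scenes; LengthMapper is the hypothetical sketch above.
SCENE_ALPHA = {"high_security": 0.8, "medium_security": 1.0, "low_security": 1.25}

def scene_adapted_length(mapper: "LengthMapper", type_onehot: torch.Tensor,
                         s: float, scene: str) -> int:
    alpha = SCENE_ALPHA[scene]
    s_prime = min(alpha * s, 1.0)        # S' = alpha * S, clipped to [0, 1]
    prob = torch.tensor([s_prime])
    return int(mapper(type_onehot.unsqueeze(0), prob).item())
```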
After the target length is determined, the target user may be instructed to enter identity verification information of that length. For example, assuming the target length is 4 and the target identity verification type is "verification by identifier of a user terminal", the target user may be instructed to enter the last four characters of the terminal identifier (e.g., a mobile phone number). For another example, assuming the target length is 6 and the target identity verification type is "verification by registration password", the target user may be instructed to input the 2nd to 7th characters of the registration password. It should be noted that the manner of instructing the target user is not limited in the embodiments of the present disclosure: the target user may be instructed by text in the interactive interface, by voice, or in any other manner.
After instructing the target user to input the identity verification information of the target length, the processor 620 may acquire the actual input information entered by the target user based on the instruction. The input manner adopted by the target user is likewise not limited: the target user may input by text, by voice, or in any other feasible manner. Further, the processor 620 may perform secondary living body detection on the target user based on the actual input information, thereby obtaining the target living body detection result of the target user.
In some embodiments, the target living body detection result of the target user may be determined as follows. First, the identity information of a first user stored in a database is determined as target identity information, where the first user is either the login user who triggered living body detection, or the user obtained by performing biometric recognition on the biometric image. For example, the current login user may be taken as the first user and that user's identity information queried in the database as the target identity information; alternatively, biometric recognition may be performed on the biometric image, the recognized user taken as the first user, and that user's identity information queried in the database as the target identity information. The actual input information of the target user is then matched against the target identity information, and one of a first operation and a second operation is executed based on the matching result. The first operation includes: determining that the target user is a living body if the matching succeeds; the second operation includes: determining that the target user is a non-living body if the matching fails.
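A minimal sketch of the matching step follows; how the expected substring is derived from the stored identity information (the last target-length characters, or target-length characters from a given offset) mirrors the examples above but is otherwise an assumption.

```python
from typing import Optional

def secondary_liveness_check(actual_input: str, stored_identity: str,
                             target_length: int,
                             offset: Optional[int] = None) -> bool:
    """Match the user's actual input against the stored identity information.

    Deriving the expected substring from the stored value (last digits, or a
    slice at a given offset) is an assumption mirroring the text's examples.
    """
    if offset is None:
        expected = stored_identity[-target_length:]          # e.g., last 4 digits
    else:
        expected = stored_identity[offset:offset + target_length]
    return actual_input == expected                          # True => living body
```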
Fig. 7 shows an interaction diagram of a living body detection process provided according to an embodiment of the present specification. As shown in fig. 7, taking the face payment scenario as an example, the client 200 presents an interface 701; after the target user clicks "start face payment" in the interface 701, the client 200 presents an interface 702. As shown in interface 702, the client 200 starts an image acquisition module to acquire a face image of the target user. After the face image is acquired, the client 200 or the server 300 (e.g., the client 200 sends the biometric image to the server 300) inputs the face image into the living body detection model shown in fig. 4 to obtain the second probability P and the confidence C, and further calculates the first probability S that the target user is a living body. Based on the target identity verification type and the first probability S, the target length of the identity verification information to be input by the target user in the verification phase can then be determined. Assuming the determined target length is 4, the client 200 may present an interface 703 instructing the target user to enter 4-digit identity verification information (e.g., the last 4 digits of the mobile phone number). Referring to interface 704, after the target user inputs the 4-digit information and clicks confirm, the client 200 or the server 300 obtains the actual input information of the target user (e.g., "1234") and matches it against the identity information of the first user stored in the database; if the matching succeeds, the target user is determined to be a living body, and if it fails, a non-living body. If the target user is determined to be a living body, the client 200 or the server 300 may perform face recognition on the target user based on the face image to verify the identity of the target user, make the payment if the verification passes, and display interface 705 after the payment succeeds. If the target user is determined to be a non-living body, the client 200 or the server 300 no longer executes the subsequent face recognition process, the face payment fails, and the target user may be prompted in the display interface that the verification failed.
It should be noted that fig. 7 illustrates the interaction process with the face payment scenario as an example; the interaction processes of the living body detection scheme provided in the present specification in other scenarios are similar and are not illustrated one by one here.
In some embodiments, the preliminary living body detection result includes the first probability that the target user is a living body, and S130 is executed if the first probability is determined to be greater than or equal to a preset probability. For example, the preset probability may be 30%, i.e., S130 is executed when the first probability is greater than or equal to 30%. In this way, cases where the target user is obviously a non-living body can be filtered out, i.e., the identity verification link need not be executed for an obvious non-living body, which improves living body detection efficiency.
In some embodiments, after the actual input information of the target user is acquired, secondary living body detection may be performed on the target user based on the actual input information to obtain a secondary living body detection result, and the target living body detection result of the target user is then determined based on both the preliminary living body detection result and the secondary living body detection result. For example, assuming the preliminary result indicates a 55% probability that the target user is a living body and the secondary result indicates 90%, the average (or a weighted average) of the two probabilities may be taken as the final probability; the target user is determined to be a living body if the final probability is greater than or equal to a preset threshold, and a non-living body if the final probability is less than the preset threshold. Comprehensively analyzing the preliminary and secondary detection results in this way further improves the accuracy of the target living body detection result.
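A minimal sketch of this fusion follows; the equal weighting and the threshold value are illustrative assumptions (with the text's example values of 55% and 90%, equal weights give a final probability of 72.5%).

```python
def fuse_liveness(p_preliminary: float, p_secondary: float,
                  threshold: float = 0.7, w: float = 0.5) -> bool:
    """Fuse the preliminary and secondary liveness probabilities by a weighted
    average and threshold; the weight and threshold values are assumptions."""
    final_p = w * p_preliminary + (1.0 - w) * p_secondary
    return final_p >= threshold          # True => target user judged living

# With the text's example values (0.55 and 0.90) and equal weights,
# final_p = 0.725, which exceeds the example threshold of 0.7.
```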
In summary, the living body detection method P100 and system 001 provided in the present specification acquire a biometric image containing a biometric feature of a target user, perform preliminary living body detection on the target user based on the biometric image to obtain a preliminary living body detection result, then instruct the target user to input identity verification information of a target length and perform secondary living body detection on the target user based on the actual input information, thereby obtaining the target living body detection result of the target user, where the target length is related to the preliminary living body detection result. On top of the image-based preliminary living body detection, the scheme introduces an identity verification flow, i.e., instructing the target user to input identity verification information of a target length and detecting liveness again based on the actual input; through these two detection stages, the accuracy of the living body detection result is improved as a whole. Furthermore, since the target length is related to the preliminary living body detection result, it is variable, which further improves accuracy compared with fixed-length identity verification information. In addition, because the variable-length identity verification flow is driven by the preliminary living body detection result, the scheme is flexible in application. For example, when the preliminary result indicates a high probability that the target user is a living body, shorter identity verification information can be requested, saving interaction time and improving living body detection efficiency; when the preliminary result indicates a low probability, longer identity verification information can be requested, preserving the accuracy of the living body detection result as much as possible.
In the technical scheme provided by the specification, the related processes of collecting, storing, using, processing, transmitting, providing, disclosing and the like of the personal information of the user accord with the regulations of related laws and regulations, and the public order is not violated.
Another aspect of the present specification provides a non-transitory storage medium storing at least one set of executable instructions for living body detection. When executed by a processor, the executable instructions direct the processor to implement the steps of the living body detection method P100 described herein. In some possible implementations, aspects of the present specification may also be implemented in the form of a program product including program code. When the program product runs on a computing device 600, the program code causes the computing device 600 to perform the steps of the living body detection method P100 described in the present specification. The program product for implementing the above method may employ a portable compact disc read-only memory (CD-ROM) including program code, and may run on the computing device 600. However, the program product of the present specification is not limited thereto; in this specification, a readable storage medium may be any tangible medium that contains or stores a program that can be used by or in connection with an instruction execution system. The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. A readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, which carries readable program code. Such a propagated data signal may take any of a variety of forms, including but not limited to electromagnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium other than a readable storage medium that can send, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing. Program code for carrying out operations of the present specification may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++ and conventional procedural programming languages such as the "C" language or similar languages. The program code may execute entirely on the computing device 600, partly on the computing device 600, as a stand-alone software package, partly on the computing device 600 and partly on a remote computing device, or entirely on a remote computing device.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In view of the foregoing, it will be evident to a person skilled in the art that the foregoing detailed disclosure is presented by way of example only and is not limiting. Although not explicitly stated here, those skilled in the art will appreciate that the present specification is intended to encompass various reasonable adaptations, improvements, and modifications of the embodiments. Such alterations, improvements, and modifications are intended to be suggested by this specification and are within the spirit and scope of the exemplary embodiments of this specification.
Furthermore, certain terms in the present description have been used to describe embodiments of the present description. For example, "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present description. Thus, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various portions of this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined as suitable in one or more embodiments of the present specification.
It should be appreciated that, in the foregoing description of embodiments of the present specification, various features are sometimes combined in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding the understanding of one feature. However, this does not mean that the combination of these features is necessary; upon reading this description, a person skilled in the art may well extract some of these features and treat them as separate embodiments. That is, embodiments in this specification may also be understood as an integration of multiple secondary embodiments, with each secondary embodiment containing less than all features of a single foregoing disclosed embodiment.
Each patent, patent application, publication of a patent application, and other material, such as articles, books, specifications, publications, documents, and the like, cited herein is hereby incorporated by reference, except for any prosecution file history associated with the same, any of the same that is inconsistent with or in conflict with this document, and any of the same that may have a limiting effect on the broadest scope of the claims now or later associated with this document. For example, if there is any inconsistency or conflict between the description, definition, and/or use of a term associated with any of the incorporated material and that associated with this document, the description, definition, and/or use of the term in this document shall prevail.
Finally, it is to be understood that the embodiments disclosed herein are illustrative of the principles of the embodiments of the present specification. Other modified embodiments are also within the scope of the present specification. Accordingly, the embodiments disclosed in the present specification are by way of example only and not limitation. Those skilled in the art can implement the application in the present specification in alternative configurations based on the embodiments herein. Therefore, the embodiments of the present specification are not limited to the embodiments precisely described in the application.

Claims (19)

1. A living body detection method, comprising:
acquiring a biological characteristic image, wherein the biological characteristic image comprises biological characteristics of a target user;
performing preliminary living body detection on the target user based on the biological characteristic image to obtain a preliminary living body detection result of the target user; and
instructing the target user to input identity verification information of a target length, and performing secondary living body detection on the target user based on actual input information of the target user to obtain a target living body detection result of the target user, wherein the target length is related to the preliminary living body detection result.
2. The method of claim 1, wherein the preliminary living body detection result comprises: a first probability that the target user is a living body; and
the target length is inversely related to the first probability.
3. The method of claim 2, wherein prior to instructing the target user to enter authentication information of a target length, further comprising:
determining a target identity verification type; and
the target length is determined based on the target identity verification type and the first probability.
4. The method of claim 3, wherein the determining the target length based on the target identity verification type and the first probability comprises:
determining a weight coefficient based on a type of a current application scene, wherein the weight coefficient is inversely related to a degree of security the current application scene requires of biometric recognition;
weighting the first probability based on the weight coefficient to obtain a first probability after scene adaptation; and
and determining the target length based on the target identity verification type and the first probability after scene adaptation.
5. The method of claim 3, wherein the determining the target length based on the target identity verification type and the first probability comprises:
inputting the target identity verification type and the first probability into a pre-trained length mapping model to obtain the target length,
wherein the length mapping model is trained with a plurality of groups of training samples, each group of training samples comprising: a sample identity verification type, a sample probability, and a sample length.
6. The method of claim 3, wherein the determining a target identity verification type comprises:
randomly determining the target identity verification type among a plurality of candidate identity verification types; or
determining a type of a current application scene, and selecting, from the plurality of candidate identity verification types, an identity verification type matching the type of the current application scene as the target identity verification type; or
taking an identity verification type designated by the target user among the plurality of candidate identity verification types as the target identity verification type.
7. The method of claim 6, wherein the plurality of candidate identity verification types comprises at least two of:
identity verification through a user identifier;
identity verification through an identifier of a user terminal;
identity verification through a registration password; and
identity verification through a registration account identifier.
8. The method of claim 1, wherein the performing secondary living body detection on the target user based on the actual input information of the target user to obtain the target living body detection result of the target user comprises:
determining identity information of a first user stored in a database as target identity information, wherein the first user is a login user who triggers living body detection, or the first user is a user obtained by performing biometric recognition on the biological characteristic image; and
matching the actual input information with the target identity information, and executing one of a first operation and a second operation based on a matching result, wherein
the first operation comprises: determining that the target user is a living body if the matching result is that the matching is successful, and
the second operation comprises: determining that the target user is a non-living body if the matching result is that the matching fails.
9. The method of claim 1, wherein the preliminary living body detection result comprises: a first probability that the target user is a living body; and
the performing preliminary living body detection on the target user based on the biological characteristic image to obtain the preliminary living body detection result of the target user comprises:
inputting the biological characteristic image into a trained living body detection model, and performing living body detection processing on the biological characteristic image through the living body detection model to obtain a second probability that the target user is a living body and a confidence corresponding to the second probability; and
determining the first probability based on the second probability and the confidence corresponding to the second probability.
10. The method of claim 9, wherein determining the first probability based on the second probability and a confidence level corresponding to the second probability comprises:
and determining the product of the second probability and the confidence corresponding to the second probability as the first probability.
11. The method of claim 9, wherein the living body detection model comprises: a feature extraction network, a local living body detection network, a global living body detection network, and a confidence detection network; and
the performing living body detection processing on the biological characteristic image through the living body detection model to obtain the second probability that the target user is a living body and the confidence corresponding to the second probability comprises:
performing feature extraction processing on the biological feature image through the feature extraction network to obtain a feature map;
respectively performing living body detection processing on a plurality of local areas of the feature map through the local living body detection network to obtain living body detection results of the plurality of local areas;
performing living body detection processing on the feature map and the living body detection results of the plurality of local areas through the global living body detection network to obtain the second probability that the target user is a living body; and
determining the confidence corresponding to the second probability based on the living body detection results of the plurality of local areas through the confidence detection network.
12. The method of claim 11, wherein the training process of the living body detection model includes a first training phase and a second training phase, wherein,
the first training stage is configured to train the feature extraction network, the local living body detection network, and the global living body detection network, and
the second training stage is configured to train the confidence detection network.
13. The method of claim 12, wherein the first training phase comprises:
acquiring a sample biological feature image and labeling information corresponding to the sample biological feature image, wherein the labeling information comprises: a global living body labeling result and living body labeling results of a plurality of local areas;
inputting the sample biological feature image into the feature extraction network to obtain a sample feature map, inputting the sample feature map into the local living body detection network to obtain first living body detection results of a plurality of local areas, and inputting the sample feature map and the first living body detection results of the plurality of local areas into the global living body detection network to obtain a first global living body detection result; and
determining a first target loss based on the first global living body detection result, the first living body detection results of the plurality of local areas, the global living body labeling result, and the living body labeling results of the plurality of local areas, and training the feature extraction network, the local living body detection network, and the global living body detection network with the aim of minimizing the first target loss.
14. The method of claim 13, wherein,
the first training phase further comprises: performing disturbance processing on the sample biological feature image to obtain a disturbed biological feature image, inputting the disturbed biological feature image into the feature extraction network to obtain a disturbed feature map, and inputting the disturbed feature map into the local living body detection network to obtain second living body detection results of the plurality of local areas; and
the determining a first target loss based on the first global living body detection result, the first living body detection results of the plurality of local areas, the global living body labeling result, and the living body labeling results of the plurality of local areas comprises:
determining the first target loss based on the first global living body detection result, the first living body detection results of the plurality of local areas, the second living body detection results of the plurality of local areas, the global living body labeling result, and the living body labeling results of the plurality of local areas.
15. The method of claim 14, wherein the determining the first target loss based on the first global living body detection result, the first living body detection results of the plurality of local areas, the second living body detection results of the plurality of local areas, the global living body labeling result, and the living body labeling results of the plurality of local areas comprises:
determining a first loss based on a difference between the first global living body detection result and the global living body labeling result;
determining a second loss based on differences between the first living body detection results of the plurality of local areas and the living body labeling results of the plurality of local areas;
determining a third loss based on differences between the first living body detection results of the plurality of local areas and the second living body detection results of the plurality of local areas; and
determining the first target loss based on the first loss, the second loss, and the third loss.
16. The method of claim 12, wherein the second training phase comprises:
acquiring a sample biological feature image and labeling information corresponding to the sample biological feature image, wherein the labeling information comprises: a global living body labeling result and living body labeling results of a plurality of local areas;
inputting the sample biological feature image into the feature extraction network to obtain a sample feature map, inputting the sample feature map into the local living body detection network to obtain first living body detection results of a plurality of local areas, inputting the first living body detection results of the plurality of local areas into the confidence detection network to obtain detection confidences of the plurality of local areas, updating the corresponding local areas of the sample feature map based on the detection confidences of the plurality of local areas to obtain an updated feature map, inputting the updated feature map into the local living body detection network to obtain third living body detection results of the plurality of local areas, and inputting the updated feature map and the third living body detection results of the plurality of local areas into the global living body detection network to obtain a second global living body detection result; and
determining a second target loss based on the second global living body detection result, the third living body detection results of the plurality of local areas, the global living body labeling result, and the living body labeling results of the plurality of local areas, and training the confidence detection network with the aim of minimizing the second target loss.
17. The method of claim 1, wherein the performing secondary living body detection on the target user based on the actual input information of the target user to obtain the target living body detection result of the target user comprises:
performing secondary living body detection on the target user based on the actual input information to obtain a secondary living body detection result; and
determining the target living body detection result based on the preliminary living body detection result and the secondary living body detection result.
18. The method of claim 1, wherein the biometric feature comprises at least one of: a face, an iris, a fingerprint, and a palm print.
19. A living body detection system, characterized by comprising:
at least one storage medium storing at least one instruction set for living body detection; and
at least one processor communicatively coupled to the at least one storage medium,
wherein, when the living body detection system runs, the at least one processor reads the at least one instruction set and, as directed by the at least one instruction set, performs the living body detection method of any one of claims 1 to 18.