CN111046845A - Living body detection method, device and system - Google Patents


Info

Publication number
CN111046845A
CN111046845A
Authority
CN
China
Prior art keywords
data
attack
living body
training
sample data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911385846.0A
Other languages
Chinese (zh)
Inventor
马玉
张珅哲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Junyu Digital Technology Co ltd
Original Assignee
Shanghai Junyu Digital Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Junyu Digital Technology Co ltd filed Critical Shanghai Junyu Digital Technology Co ltd
Priority to CN201911385846.0A priority Critical patent/CN111046845A/en
Publication of CN111046845A publication Critical patent/CN111046845A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/169 Holistic features and representations, i.e. based on the facial image taken as a whole

Abstract

The invention provides a method, a device and a system for detecting a living body, which relate to the technical field of image detection and comprise the following steps: acquiring living body sample data labeled with a living body label and attack sample data labeled with an attack label; training a detection model to be trained based on the living body sample data and the attack sample data to obtain a trained detection model; and when target face data to be detected is received, performing living body detection on the target face data through the trained detection model to obtain a living body detection result. The invention can improve the efficiency of living body detection and reduce its difficulty.

Description

Living body detection method, device and system
Technical Field
The invention relates to the technical field of image detection, in particular to a method, a device and a system for detecting a living body.
Background
With the wide application of face recognition technology in daily life (finance, access control, mobile devices and the like), face anti-spoofing/living body detection (Face Anti-Spoofing) technology has attracted more and more attention in recent years. The core of face recognition is face verification, but before verification it must first be judged whether the input is the live face of a real person; when the algorithm is attacked with a synthesized image or a photo of another person, the input should be rejected. The most common attack modalities at present are photo attacks and video playback attacks. A photo attack attacks the recognition algorithm directly with a photo of the target object; a video playback attack first collects video segments of the target object and then uses them to attack the algorithm. However, a sufficiently effective living body detection method against these attack modes is still lacking.
Disclosure of Invention
The invention aims to provide a method, a device and a system for living body detection, so as to improve the efficiency of living body detection and reduce its difficulty.
The invention provides a living body detection method, which comprises the following steps: acquiring living body sample data labeled with a living body label and attack sample data labeled with an attack label; training a detection model to be trained based on the living body sample data and the attack sample data to obtain a trained detection model; when target face data to be detected is received, performing depth detection on the target face data through the trained detection model to obtain the proportion of the plane area in the target face data, and determining a living body detection result corresponding to the target face data based on the obtained proportion of the plane area; the plane area is the image area in which the depth information of the feature points in the target face data is smaller than a preset depth threshold.
Further, the step of obtaining the attack sample data labeled with the attack tag includes: acquiring an attack sample pair; wherein, the attack sample pair comprises two training face images with different face deflection angles; detecting global feature points of the training face image, and determining depth information of the feature points based on two-dimensional coordinates of the detected feature points; generating human face three-dimensional point cloud data corresponding to the training human face image based on the two-dimensional coordinates and the depth information of the feature points; and labeling an attack label on the human face three-dimensional point cloud data, and taking the human face three-dimensional point cloud data labeled with the attack label as attack sample data.
Further, the step of determining depth information of the feature point based on the two-dimensional coordinates of the detected feature point includes: calculating a distance value between every two feature points in the two training face images, and determining a feature point pair based on the calculated distance value; determining a relative rotation angle and a relative translation variable between the two training face images according to the two-dimensional coordinates of the feature points in the feature point pairs and an epipolar geometric constraint algorithm; and determining the depth information of the feature points in the feature point pairs according to a triangulation algorithm and the relative rotation angle and translation variable between the two training face images.
Further, the step of calculating the distance value between the feature points in the two training face images includes: calculating the distance value between the feature points in the two training face images according to the following formula:
$$\operatorname{dist}(H_1, H_2) = \sqrt{\sum_{i=1}^{128} \left(h_{1,i} - h_{2,i}\right)^2}$$
where $h_{1,i}$ and $h_{2,i}$ are the $i$-th dimensions of the descriptors $H_1$ and $H_2$ of the feature points in the first and second training face images respectively, $\operatorname{dist}(H_1, H_2)$ is the distance value between the two feature points, and 128 is the dimension of the SIFT descriptor.
Further, the step of determining pairs of feature points based on the calculated distance values includes: determining two characteristic points of which the calculated distance value is smaller than a distance threshold value as candidate characteristic point pairs; and optimizing the candidate characteristic point pairs according to the RANSAC algorithm to obtain final characteristic point pairs.
Further, the step of training the detection model to be trained based on the living sample data and the attack sample data includes: inputting the living sample data and the attack sample data into a detection model to be trained; performing living body detection on the living body sample data through the detection model to be trained to obtain a first probability value that the living body sample data belongs to the living body data; performing living body detection on the attack sample data through the detection model to be trained to obtain a second probability value of the attack sample data belonging to the attack data; and adjusting parameters of the detection model to be trained according to the living body label and the attack label until the first probability value reaches a preset first probability threshold and the second probability value reaches a preset second probability threshold, and determining that the training is finished to obtain the detection model which finishes the training.
Further, the step of determining the living body detection result corresponding to the target face data based on the obtained proportion of the plane area includes: comparing the obtained proportion of the plane area with a preset proportion threshold, and when the comparison result is that the proportion of the plane area is greater than the proportion threshold, determining that the living body detection result corresponding to the target face data is that the target face data is attack data.
The invention provides a living body detection device, comprising: a sample data acquisition module, configured to acquire living body sample data labeled with a living body label and attack sample data labeled with an attack label; a model training module, configured to train a detection model to be trained based on the living body sample data and the attack sample data to obtain a trained detection model; and a living body detection module, configured to, when target face data to be detected is received, perform depth detection on the target face data through the trained detection model to obtain the proportion of the plane area in the target face data, and determine a living body detection result corresponding to the target face data based on the obtained proportion of the plane area; the plane area is the image area in which the depth information of the feature points in the target face data is smaller than a preset depth threshold.
The invention provides a living body detection system, which comprises: a processor and a storage device; the storage means has stored thereon a computer program which, when executed by the processor, performs the method described above.
The present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the above-described method.
The embodiments of the invention provide a method, a device and a system for living body detection: first, living body sample data labeled with a living body label and attack sample data labeled with an attack label are acquired; the detection model to be trained is then trained based on the living body sample data and the attack sample data to obtain a trained detection model; finally, when target face data to be detected is received, depth detection is performed on the target face data through the trained detection model to obtain the proportion of the plane area in the target face data, and the living body detection result corresponding to the target face data is determined based on the obtained proportion of the plane area; the plane area is the image area in which the depth information of the feature points in the target face data is smaller than a preset depth threshold. In the living body detection method provided by these embodiments, a detection model capable of detecting living body data and attack data is trained first, and in actual production the trained detection model is then used directly to perform depth detection on the target face data. Since depth information reflects the three-dimensional spatial characteristics of the detected object well, performing living body detection with the plane area determined from the depth information distinguishes a real object in three-dimensional space from a planar object in a two-dimensional image more accurately, which effectively improves living body detection efficiency and reduces the difficulty of living body detection.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flowchart of a living body detection method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of feature point pairs according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a triangulation algorithm according to an embodiment of the present invention;
FIG. 4 is a block diagram of a living body detection apparatus according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a living body detection system according to an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the following embodiments, and it should be understood that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
At present, in living body detection of the human face, a sufficiently effective living body detection method is still lacking against attack modes such as photo attacks and video playback attacks. On this basis, the living body detection method, device and system provided by the embodiments of the invention can improve the efficiency of living body detection and reduce its difficulty.
To facilitate understanding of the present embodiment, a detailed description will be given of a method for detecting a living body disclosed in the present embodiment.
Embodiment one:
Referring to the flowchart of the living body detection method shown in fig. 1, the method mainly includes the following steps S102 to S106:
step S102, obtaining living sample data marked with a living label and attack sample data marked with an attack label. In order to improve the reliability of the sample data, the living sample data can adopt the face of a real user; the attack sample data can be a pre-acquired image containing a human face or a three-dimensional model of the human face.
And step S104, training the detection model to be trained based on the living sample data and the attack sample data to obtain the detection model with the training completed.
Step S106, when target face data to be detected are received, performing depth detection on the target face data through the trained detection model to obtain the proportion of a plane area in the target face data, and determining a living body detection result corresponding to the target face data based on the obtained proportion of the plane area; the plane area is an image area of which the depth information of the feature points in the target face data is smaller than a preset depth threshold.
In this embodiment, the target face data to be detected may be a real face in three-dimensional space (i.e. living body data), or a non-real face (i.e. attack data) such as a reproduced photograph, a played-back video, or a manufactured mask. It can be understood that the depth information of the feature points differs greatly between real and non-real faces: in a real face the depth information matches the facial features and presents depth values of different magnitudes, while in a non-real face the depth information is biased toward a two-dimensional plane and the presented depth values differ little. In particular, a photograph containing a target face inevitably also includes the foreground and background around the face, which lie in the same plane as the face. Based on this, when performing living body detection on the target face data, the plane area in the target face data can be determined using the depth information and the preset depth threshold, and the proportion of the obtained plane area is then compared with a preset proportion threshold. When the comparison result is that the proportion of the plane area is greater than the proportion threshold, the depth information of the feature points of the target face differs little and cannot present a good three-dimensional character, so the living body detection result corresponding to the target face data is determined to be that the target face data is attack data; correspondingly, when the comparison result is that the proportion of the plane area is smaller than or equal to the proportion threshold, the target face data presents a good three-dimensional character, so the living body detection result is determined to be that the target face data is living body data.
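For illustration, the decision rule just described might look as follows in Python; this is a minimal sketch assuming a per-feature-point depth map has already been estimated, and the two threshold values are placeholders rather than values fixed by this embodiment.

```python
import numpy as np

def is_attack(depth_map: np.ndarray,
              depth_threshold: float = 0.05,
              ratio_threshold: float = 0.6) -> bool:
    # Plane area: locations whose feature-point depth information falls below
    # the preset depth threshold (little depth variation, as on a photo or screen).
    plane_mask = depth_map < depth_threshold
    plane_ratio = plane_mask.mean()  # proportion of the plane area
    # Proportion greater than the proportion threshold -> judged to be attack data.
    return bool(plane_ratio > ratio_threshold)
```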
In the living body detection method provided by this embodiment, a detection model capable of detecting living body data and attack data is trained first, and in actual production the trained detection model is then used directly to perform depth detection on the received target face data to be detected. Since depth information reflects the three-dimensional spatial characteristics of the detected object well, performing living body detection with the plane area determined from the depth information distinguishes a real object in three-dimensional space from a planar object in a two-dimensional image more accurately, which effectively improves living body detection efficiency and reduces the difficulty of living body detection.
For the detection model to be applied directly to living body detection, it must be trained in advance: the parameters of the detection model are obtained through training, and the purpose of training is to finally determine parameters that meet the requirements, with which the detection model can achieve the expected living body detection effect. Before training the detection model, the sample data used for model training, that is, the living body sample data and attack sample data in step S102, must first be acquired. This embodiment provides a method for acquiring attack sample data; refer to the following steps (1) to (4):
(1) Acquiring an attack sample pair, where the attack sample pair comprises two training face images with different face deflection angles. The training face images can be acquired in various ways, for example: capturing images of the target face at different deflection angles with a fixed camera, or capturing images of the target face with cameras arranged at different angles. The foregoing is by way of example only and is not to be construed as limiting.
(2) Detecting global feature points of the training face image, and determining depth information of the feature points based on two-dimensional coordinates of the detected feature points.
In this embodiment, a Scale-Invariant Feature Transform (SIFT) algorithm may be adopted to extract the feature points and descriptors of each training face image. The two-dimensional coordinates of the feature points can be obtained directly from the training face image, and on that basis the depth information of the feature points can be determined by combining an epipolar geometric constraint algorithm and a triangulation algorithm.
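As a sketch of this extraction step, OpenCV's SIFT implementation (available as cv2.SIFT_create in OpenCV 4.4 and later) can supply both the feature points, with their two-dimensional coordinates, and the 128-dimensional descriptors:

```python
import cv2

def extract_features(image_path: str):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    # keypoints carry the 2D coordinates (kp.pt); descriptors is an N x 128 array
    keypoints, descriptors = sift.detectAndCompute(img, None)
    return keypoints, descriptors
```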
(3) And generating human face three-dimensional point cloud data corresponding to the training human face image based on the two-dimensional coordinates and the depth information of the feature points. It can be understood that the two-dimensional coordinates of the feature points and the depth information are combined to form three-dimensional coordinates of the feature points, and the three-dimensional point cloud data of the face corresponding to the training face image is generated based on the three-dimensional coordinates of the feature points.
(4) And marking an attack label on the human face three-dimensional point cloud data, and taking the human face three-dimensional point cloud data marked with the attack label as attack sample data. During specific implementation, the three-dimensional point cloud data of the face can be spliced and subjected to noise reduction, and the processed data is rendered to obtain a three-dimensional face model corresponding to a training face image; and then adding an attack tag to the face three-dimensional model, thereby obtaining attack sample data.
In order to facilitate understanding of the depth information of the feature points in the step (2), the present embodiment provides a method for determining depth information, which may include the following steps (a) to (c):
(a) and calculating the distance value between every two characteristic points in the two training face images, and determining a characteristic point pair based on the calculated distance value.
Assuming the SIFT descriptor is 128-dimensional, feature points can be matched by calculating the Euclidean distance between their 128-dimensional descriptors. The descriptors of a feature point in the two training face images can be written as the vectors $H_1 = (h_{1,1}, h_{1,2}, \ldots, h_{1,i}, \ldots, h_{1,128})$ and $H_2 = (h_{2,1}, h_{2,2}, \ldots, h_{2,i}, \ldots, h_{2,128})$, where $H_1$ and $H_2$ come from the first and second training face images respectively, and $h_{1,i}$ and $h_{2,i}$ denote the $i$-th dimension of each descriptor. The distance between $H_1$ and $H_2$ can then be calculated by the following similarity calculation formula (1):
$$\operatorname{dist}(H_1, H_2) = \sqrt{\sum_{i=1}^{128} \left(h_{1,i} - h_{2,i}\right)^2} \qquad (1)$$
where $\operatorname{dist}(H_1, H_2)$ is the distance value between the two feature points and 128 is the dimension of the SIFT descriptor.
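For concreteness, formula (1) written out directly in NumPy; a sketch assuming h1 and h2 are 128-dimensional SIFT descriptors of a candidate feature-point pair:

```python
import numpy as np

def descriptor_distance(h1: np.ndarray, h2: np.ndarray) -> float:
    # Euclidean distance over the 128 descriptor dimensions, as in formula (1)
    return float(np.sqrt(np.sum((h1 - h2) ** 2)))
```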
The smaller the calculated Euclidean distance value, the higher the similarity between the two feature points; when the distance value is smaller than a set threshold, the match can be judged successful. Thus, two feature points whose calculated distance value is smaller than the distance threshold may be determined as a candidate feature point pair. Although the feature points found by the SIFT algorithm are quite prominent points that are stable under factors such as illumination, affine transformation and noise (for example corner points, edge points, bright points in dark areas and dark points in bright areas), the matching result can still be affected, i.e. the candidate feature point pairs may contain mismatched pairs. Based on this, the candidate feature point pairs may be optimized according to the Random Sample Consensus (RANSAC) algorithm to obtain the final feature point pairs. The RANSAC algorithm estimates and optimizes over random samples of the candidate feature point pairs, so adopting it improves the quality of the feature point pairs and retains the pairs with the higher degree of matching.
The feature point pairs having matching relationships may refer to a schematic diagram of feature point pairs as shown in fig. 2, which schematically shows several feature point pairs, such as a feature point pair composed of a feature point at the nose tip position in the first training face image and a feature point at the nose tip position in the second training face image.
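Under the same assumptions, the matching and pruning of step (a) might be sketched as follows; the distance threshold is an illustrative placeholder, and RANSAC is applied here through OpenCV's fundamental-matrix estimator, one common way of discarding mismatched pairs.

```python
import cv2
import numpy as np

def match_and_prune(kp1, des1, kp2, des2, dist_threshold: float = 200.0):
    # Brute-force matching under the Euclidean distance of formula (1)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    candidates = [m for m in matcher.match(des1, des2)
                  if m.distance < dist_threshold]  # candidate feature point pairs
    pts1 = np.float32([kp1[m.queryIdx].pt for m in candidates])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in candidates])
    # RANSAC (via fundamental-matrix estimation) keeps only consistent pairs
    _, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    keep = inlier_mask.ravel().astype(bool)
    return pts1[keep], pts2[keep]
```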
(b) Determining the relative rotation angle and translation variable between the two training face images according to the two-dimensional coordinates of the feature points in the feature point pairs and an epipolar geometric constraint algorithm.
In a concrete implementation of this step, the essential matrix E or the fundamental matrix F can be solved from the two-dimensional coordinates of the feature points in the feature point pairs; the rotation angle R and translation variable t of the second training face image relative to the first training face image are then calculated from the essential matrix E or the fundamental matrix F.
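A minimal sketch of step (b) using OpenCV is given below; it recovers R and t via the essential matrix. The intrinsic matrix K is an illustrative placeholder, since the patent does not specify camera parameters.

```python
import cv2
import numpy as np

# Placeholder camera intrinsics; in practice K comes from calibration.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def relative_pose(pts1, pts2):
    # Solve the essential matrix E from the matched 2D coordinates ...
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    # ... then decompose it into the rotation R and translation t between views.
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```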
(c) And determining the depth information of the feature points in the feature point pairs according to a triangulation algorithm and the relative rotation angle and translation variable between the two training face images.
As shown in fig. 3, let $O_1$ be the optical center of the camera used to capture the first training face image $H_1$, and $O_2$ the optical center of the camera used to capture the second training face image $H_2$. Take a feature point $p_1$ in $H_1$ and the corresponding feature point $p_2$ in $H_2$. In theory, the rays $O_1 p_1$ and $O_2 p_2$ intersect in space at a point $P$, which is the position in the three-dimensional scene corresponding to the two feature points. In practice, however, the two rays often fail to intersect exactly because of noise, so the point can be solved by least squares. Following the definitions of epipolar geometry, let $x_1$ and $x_2$ be the normalized coordinates of the feature points $p_1$ and $p_2$ respectively; they satisfy the following formula (2):
$$s_1 x_1 = s_2 R x_2 + t \qquad (2)$$
where $s_1$ and $s_2$ are the depths of the feature points $p_1$ and $p_2$ to be determined, and $R$ and $t$ are respectively the relative rotation angle and translation variable between the two training face images. The two depths can be solved separately; taking $s_2$ as an example, left-multiplying both sides of the above formula by $x_1^{\wedge}$ (the antisymmetric matrix of $x_1$) gives the following formula (3):
$$s_1 x_1^{\wedge} x_1 = 0 = s_2 x_1^{\wedge} R x_2 + x_1^{\wedge} t \qquad (3)$$
the left side of the formula is zero, and the right side can be regarded as s2From which s can be directly obtained2. In determining s2Then, s can be easily obtained1
In this way the depth information of each feature point in the first and second training face images can be obtained, and, combined with the two-dimensional coordinates, the three-dimensional space coordinates of each feature point can be determined.
Considering that formula (3) is not necessarily exactly zero because of noise and because $R$ and $t$ are themselves estimates, in practical applications the depth information of the feature points can be calculated by solving for a least-squares solution instead of a zero solution.
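As an illustration of step (c), the following Python sketch triangulates the matched points with OpenCV's DLT-based cv2.triangulatePoints, which solves the same least-squares problem numerically; the intrinsic matrix K is assumed known, and normalization conventions are simplified.

```python
import cv2
import numpy as np

def triangulate(pts1, pts2, K, R, t):
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # first camera: [I | 0]
    P2 = K @ np.hstack([R, t.reshape(3, 1)])           # second camera: [R | t]
    # DLT triangulation solves the least-squares problem behind formula (3)
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    pts3d = (pts4d[:3] / pts4d[3]).T                   # homogeneous -> Euclidean
    depths = pts3d[:, 2]                               # z components = depth information
    return depths, pts3d                               # depths and 3D point cloud
```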
Next, this embodiment describes the training method of the detection model in step S104, which mainly includes the following four steps (a runnable sketch is given after the list):
step 1, inputting living sample data and attack sample data into a detection model to be trained.
And 2, performing living body detection on living body sample data through the detection model to be trained to obtain a first probability value that the living body sample data belongs to the living body data.
And 3, performing living body detection on the attack sample data through the detection model to be trained to obtain a second probability value of the attack sample data belonging to the attack data.
And 4, adjusting parameters of the detection model to be trained according to the living body label and the attack label until the first probability value reaches a preset first probability threshold and the second probability value reaches a preset second probability threshold, and determining that the training is finished to obtain the detection model which finishes the training.
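The patent fixes neither a network architecture nor concrete probability thresholds, so the following PyTorch sketch merely phrases the four steps as binary classification; `model` is assumed to end in a sigmoid, and `first_thr`/`second_thr` are placeholder thresholds.

```python
import torch
import torch.nn as nn

def train(model: nn.Module, loader, first_thr: float = 0.95,
          second_thr: float = 0.95, max_epochs: int = 100) -> nn.Module:
    """Train until P(live) for live samples and P(attack) for attack samples
    both reach their preset thresholds (steps 1-4 above)."""
    criterion = nn.BCELoss()  # binary labels: 1 = living body, 0 = attack
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    for _ in range(max_epochs):
        live_hits, attack_hits = [], []
        for x, label in loader:                # step 1: feed both kinds of samples
            prob_live = model(x).squeeze(1)    # steps 2-3: model outputs P(living body)
            loss = criterion(prob_live, label.float())
            optimizer.zero_grad()
            loss.backward()                    # step 4: adjust model parameters
            optimizer.step()
            p = prob_live.detach()
            live_hits.append(p[label == 1] >= first_thr)           # first probability value
            attack_hits.append((1 - p[label == 0]) >= second_thr)  # second probability value
        if torch.cat(live_hits).all() and torch.cat(attack_hits).all():
            break                              # both thresholds reached: training finished
    return model
```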
In summary, in the living body detection method provided by the above embodiment, a detection model capable of detecting living body data and attack data is trained first, and in actual production the trained detection model is then used directly to perform depth detection on the received target face data to be detected. Since depth information reflects the three-dimensional spatial characteristics of the detected object well, performing living body detection with the plane area determined from the depth information distinguishes a real object in three-dimensional space from a planar object in a two-dimensional image more accurately, which effectively improves living body detection efficiency and reduces the difficulty of living body detection.
Embodiment two:
based on the living body detecting method provided by the above-mentioned embodiment, the present embodiment provides a living body detecting apparatus, referring to a block diagram of the structure of the living body detecting apparatus shown in fig. 4, the apparatus including:
a sample data obtaining module 402, configured to obtain living sample data labeled with a living tag and attack sample data labeled with an attack tag;
a model training module 404, configured to train a detection model to be trained based on living sample data and attack sample data to obtain a detection model for which training is completed;
and the living body detection module 406 is configured to perform living body detection on the target face data through the trained detection model when receiving the target face data to be detected, so as to obtain a living body detection result.
In the living body detection apparatus provided by this embodiment, a detection model capable of detecting living body data and attack data is trained first, and in actual production the trained detection model is then used directly to perform depth detection on the received target face data to be detected, to obtain the proportion of the plane area in the target face data; the living body detection result corresponding to the target face data is determined based on the obtained proportion of the plane area, where the plane area is the image area in which the depth information of the feature points in the target face data is smaller than a preset depth threshold.
In an embodiment, the sample data obtaining module 402 is configured to: acquiring an attack sample pair; wherein, the attack sample pair comprises two training face images with different face deflection angles; detecting global feature points of a training face image, and determining depth information of the feature points based on two-dimensional coordinates of the detected feature points; generating human face three-dimensional point cloud data corresponding to a training human face image based on the two-dimensional coordinates and the depth information of the feature points; and marking an attack label on the human face three-dimensional point cloud data, and taking the human face three-dimensional point cloud data marked with the attack label as attack sample data.
In an embodiment, the sample data obtaining module 402 is configured to: calculating a distance value between every two characteristic points in the two training face images, and determining a characteristic point pair based on the calculated distance value; determining a relative rotation angle and a relative translation variable between two training face images according to two-dimensional coordinates of feature points in the feature point pairs and an epipolar geometric constraint algorithm; and determining the depth information of the feature points in the feature point pairs according to a triangulation algorithm and the relative rotation angle and translation variable between the two training face images.
In an embodiment, the sample data obtaining module 402 is configured to: determining two characteristic points of which the calculated distance value is smaller than a distance threshold value as candidate characteristic point pairs; and optimizing the candidate characteristic point pairs according to the RANSAC algorithm to obtain final characteristic point pairs.
In one embodiment, the model training module 404 is configured to: inputting living sample data and attack sample data to a detection model to be trained; performing living body detection on living body sample data through a detection model to be trained to obtain a first probability value that the living body sample data belongs to the living body data; performing living body detection on the attack sample data through a detection model to be trained to obtain a second probability value of the attack sample data belonging to the attack data; and adjusting parameters of the detection model to be trained according to the living body label and the attack label until the first probability value reaches a preset first probability threshold and the second probability value reaches a preset second probability threshold, and determining that the training is finished to obtain the detection model which finishes the training.
In one embodiment, the living body detection module 406 is configured to: compare the obtained proportion of the plane area with a preset proportion threshold, and when the comparison result is that the proportion of the plane area is greater than the proportion threshold, determine that the living body detection result corresponding to the target face data is that the target face data is attack data.
The device provided by the embodiment of the present invention has the same implementation principle and technical effects as the foregoing method embodiments; for the sake of brevity, where the device embodiment does not mention something, reference may be made to the corresponding content in the method embodiments.
The embodiment of the invention also provides a living body detection system, which comprises a memory and a processor, wherein the memory stores a computer program capable of running on the processor, and the living body detection method provided by the embodiment is realized when the processor executes the computer program.
Specifically, referring to the structural schematic diagram of the living body detecting system shown in fig. 5, the system further includes a bus 503 and a communication interface 504, and the processor 502, the communication interface 504 and the memory 501 are connected through the bus 503.
The memory 501 may include a high-speed Random Access Memory (RAM) and may also include a non-volatile memory, such as at least one disk memory. The communication connection between the network element of the system and at least one other network element is realized through at least one communication interface 504 (which may be wired or wireless), and the Internet, a wide area network, a local area network, a metropolitan area network, and the like can be used. The bus 503 may be an ISA bus, a PCI bus, an EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 5, but this does not indicate only one bus or one type of bus.
The processor 502 may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 502 or by instructions in the form of software. The processor 502 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or executed thereby. A general-purpose processor may be a microprocessor, or any conventional processor. The steps of the method disclosed in connection with the embodiments of the present invention may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 501, and the processor 502 reads the information in the memory 501 and completes the steps of the method of the foregoing embodiments in combination with its hardware.
An embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the steps of the living body detecting method of the above embodiment are performed.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A method of in vivo detection, comprising:
acquiring living sample data marked with a living label and attack sample data marked with an attack label;
training a detection model to be trained based on the living sample data and the attack sample data to obtain a trained detection model;
when target face data to be detected is received, performing depth detection on the target face data through the trained detection model to obtain the proportion of the plane area in the target face data, and determining a living body detection result corresponding to the target face data based on the obtained proportion of the plane area; the plane area is the image area in which the depth information of the feature points in the target face data is smaller than a preset depth threshold.
2. The method of claim 1, wherein the step of obtaining attack sample data labeled with an attack tag comprises:
acquiring an attack sample pair; wherein, the attack sample pair comprises two training face images with different face deflection angles;
detecting global feature points of the training face image, and determining depth information of the feature points based on two-dimensional coordinates of the detected feature points;
generating human face three-dimensional point cloud data corresponding to the training human face image based on the two-dimensional coordinates and the depth information of the feature points;
and labeling an attack label on the human face three-dimensional point cloud data, and taking the human face three-dimensional point cloud data labeled with the attack label as attack sample data.
3. The method of claim 2, wherein the step of determining depth information of the feature points based on the two-dimensional coordinates of the detected feature points comprises:
calculating a distance value between every two feature points in the two training face images, and determining a feature point pair based on the calculated distance value;
determining a relative rotation angle and a relative translation variable between the two training face images according to the two-dimensional coordinates of the feature points in the feature point pairs and an epipolar geometric constraint algorithm;
and determining the depth information of the feature points in the feature point pairs according to a triangulation algorithm and the relative rotation angle and translation variable between the two training face images.
4. The method according to claim 3, wherein the step of calculating the distance value between the feature points in the two training face images comprises:
calculating the distance value between the feature points in the two training face images according to the following formula:
$$\operatorname{dist}(H_1, H_2) = \sqrt{\sum_{i=1}^{128} \left(h_{1,i} - h_{2,i}\right)^2}$$
where $h_{1,i}$ and $h_{2,i}$ are the $i$-th dimensions of the descriptors $H_1$ and $H_2$ of the feature points in the first and second training face images respectively, $\operatorname{dist}(H_1, H_2)$ is the distance value between the two feature points, and 128 is the dimension of the SIFT descriptor.
5. The method of claim 3, wherein the step of determining pairs of characteristic points based on the calculated distance values comprises:
determining two characteristic points of which the calculated distance value is smaller than a distance threshold value as candidate characteristic point pairs;
and optimizing the candidate characteristic point pairs according to the RANSAC algorithm to obtain final characteristic point pairs.
6. The method of claim 1, wherein the step of training a detection model to be trained based on the live sample data and the attack sample data comprises:
inputting the living sample data and the attack sample data into a detection model to be trained;
performing living body detection on the living body sample data through the detection model to be trained to obtain a first probability value that the living body sample data belongs to the living body data;
performing living body detection on the attack sample data through the detection model to be trained to obtain a second probability value of the attack sample data belonging to the attack data;
and adjusting parameters of the detection model to be trained according to the living body label and the attack label until the first probability value reaches a preset first probability threshold and the second probability value reaches a preset second probability threshold, and determining that the training is finished to obtain the detection model which finishes the training.
7. The method according to claim 1, wherein the step of determining the living body detection result corresponding to the target face data based on the obtained proportion of the plane area comprises:
and comparing the obtained proportion of the plane area with a preset proportion threshold, and determining that the living body detection result corresponding to the target face data is the target face data as attack data when the comparison result is that the proportion of the plane area is greater than the proportion threshold.
8. A living body detection device, comprising:
the sample data acquisition module is used for acquiring living sample data marked with a living label and attack sample data marked with an attack label;
the model training module is used for training a detection model to be trained based on the living sample data and the attack sample data to obtain a detection model which is trained;
the living body detection module is used for performing depth detection on target face data through the trained detection model when the target face data to be detected is received, to obtain the proportion of the plane area in the target face data, and determining a living body detection result corresponding to the target face data based on the obtained proportion of the plane area; the plane area is the image area in which the depth information of the feature points in the target face data is smaller than a preset depth threshold.
9. A living body detection system, the system comprising: a processor and a storage device;
the storage device has stored thereon a computer program which, when executed by the processor, performs the method of any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of the claims 1 to 7.
CN201911385846.0A 2019-12-25 2019-12-25 Living body detection method, device and system Pending CN111046845A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911385846.0A CN111046845A (en) 2019-12-25 2019-12-25 Living body detection method, device and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911385846.0A CN111046845A (en) 2019-12-25 2019-12-25 Living body detection method, device and system

Publications (1)

Publication Number Publication Date
CN111046845A true CN111046845A (en) 2020-04-21

Family

ID=70241291

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911385846.0A Pending CN111046845A (en) 2019-12-25 2019-12-25 Living body detection method, device and system

Country Status (1)

Country Link
CN (1) CN111046845A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103530599A (en) * 2013-04-17 2014-01-22 Tcl集团股份有限公司 Method and system for distinguishing real face and picture face
CN105023010A (en) * 2015-08-17 2015-11-04 中国科学院半导体研究所 Face living body detection method and system
CN107451540A (en) * 2017-07-14 2017-12-08 南京维睛视空信息科技有限公司 A kind of compressible 3D recognition methods
CN108319901A (en) * 2018-01-17 2018-07-24 百度在线网络技术(北京)有限公司 Biopsy method, device, computer equipment and the readable medium of face
CN109086718A (en) * 2018-08-02 2018-12-25 深圳市华付信息技术有限公司 Biopsy method, device, computer equipment and storage medium
CN109840467A (en) * 2018-12-13 2019-06-04 北京飞搜科技有限公司 A kind of in-vivo detection method and system
CN109635770A (en) * 2018-12-20 2019-04-16 上海瑾盛通信科技有限公司 Biopsy method, device, storage medium and electronic equipment

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111680577A (en) * 2020-05-20 2020-09-18 北京的卢深视科技有限公司 Face detection method and device
CN112200057A (en) * 2020-09-30 2021-01-08 汉王科技股份有限公司 Face living body detection method and device, electronic equipment and storage medium
CN112200057B (en) * 2020-09-30 2023-10-31 汉王科技股份有限公司 Face living body detection method and device, electronic equipment and storage medium
CN113158838A (en) * 2021-03-29 2021-07-23 华南理工大学 Face representation attack detection method based on full-size depth map supervision
CN113158838B (en) * 2021-03-29 2023-06-20 华南理工大学 Full-size depth map supervision-based face representation attack detection method

Similar Documents

Publication Publication Date Title
CN109086691B (en) Three-dimensional face living body detection method, face authentication and identification method and device
US10198623B2 (en) Three-dimensional facial recognition method and system
JP5384746B2 (en) Improving the performance of image recognition algorithms with pruning, image scaling, and spatially constrained feature matching
CN111160232B (en) Front face reconstruction method, device and system
CN110675487B (en) Three-dimensional face modeling and recognition method and device based on multi-angle two-dimensional face
US10650260B2 (en) Perspective distortion characteristic based facial image authentication method and storage and processing device thereof
CN111046845A (en) Living body detection method, device and system
CN110728196B (en) Face recognition method and device and terminal equipment
US10853631B2 (en) Face verification method and apparatus, server and readable storage medium
EP3309743B1 (en) Registration of multiple laser scans
CN110110793B (en) Binocular image rapid target detection method based on double-current convolutional neural network
CN110852310A (en) Three-dimensional face recognition method and device, terminal equipment and computer readable medium
US20160335523A1 (en) Method and apparatus for detecting incorrect associations between keypoints of a first image and keypoints of a second image
WO2021008205A1 (en) Image processing
WO2018058573A1 (en) Object detection method, object detection apparatus and electronic device
CN111160233B (en) Human face in-vivo detection method, medium and system based on three-dimensional imaging assistance
WO2019019160A1 (en) Method for acquiring image information, image processing device, and computer storage medium
CN113228105A (en) Image processing method and device and electronic equipment
CN113807451B (en) Panoramic image feature point matching model training method and device and server
CN115493612A (en) Vehicle positioning method and device based on visual SLAM
Sacht et al. Face and straight line detection in equirectangular images
Lin et al. Liveness detection using texture and 3d structure analysis
CN110674817A (en) License plate anti-counterfeiting method and device based on binocular camera
Ikehata et al. Confidence-based refinement of corrupted depth maps
CN115619809A (en) Image matching method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20200421