CN111340014B - Living body detection method, living body detection device, living body detection apparatus, and storage medium - Google Patents


Info

Publication number
CN111340014B
CN111340014B
Authority
CN
China
Prior art keywords
image
living body
user
face
sample image
Prior art date
Legal status
Active
Application number
CN202010441322.5A
Other languages
Chinese (zh)
Other versions
CN111340014A (en)
Inventor
曹佳炯
李亮
Current Assignee
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd
Priority to CN202010441322.5A
Priority to CN202011377618.1A
Publication of CN111340014A
Application granted
Publication of CN111340014B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

One embodiment of the present specification provides a living body detection method, apparatus, device, and storage medium. The method comprises: acquiring a face image to be detected, wherein an occlusion is present within a specified face range in the face image to be detected; performing living body detection on the user in the face image to be detected according to the face image to be detected and a pre-trained first living body detection model to obtain a first detection result, wherein the first living body detection model is trained based on first sample images in which the specified face range has been subjected to occlusion processing; and determining, based on the first detection result, whether the user in the face image to be detected is a living body.

Description

Living body detection method, living body detection device, living body detection apparatus, and storage medium
Technical Field
The present invention relates to the field of face recognition, and in particular, to a method, an apparatus, a device, and a storage medium for detecting a living body.
Background
Living body detection (liveness detection) is a branch of face recognition technology that judges, based on a user's face image, whether the user is a living body, thereby preventing illegal users from attacking face recognition with images, videos, 3D masks, and the like. At present, living body detection requires acquiring a face image of the user; however, when an occlusion is present on the user's face, for example when the user wears a mask, existing living body detection algorithms easily identify a living user as a non-living user. A technical solution is therefore needed to improve the accuracy of living body detection when an occlusion is present on the user's face.
Disclosure of Invention
An object of one embodiment of the present specification is to provide a living body detection method, apparatus, device, and storage medium that improve the accuracy of living body detection when an occlusion is present on the face of a user.
To solve the above technical problem, one embodiment of the present specification is implemented as follows:
one embodiment of the present specification provides a living body detection method, comprising: acquiring a face image to be detected, wherein an occlusion is present within a specified face range in the face image to be detected; performing living body detection on the user in the face image to be detected according to the face image to be detected and a pre-trained first living body detection model to obtain a first detection result, wherein the first living body detection model is trained based on first sample images in which the specified face range has been subjected to occlusion processing; and determining, based on the first detection result, whether the user in the face image to be detected is a living body.
One embodiment of the present specification provides a living body detection method, comprising: acquiring a face image to be detected, wherein the user in the face image to be detected wears a mask; performing living body detection on the user in the face image to be detected according to the face image to be detected and a pre-trained first living body detection model to obtain a first detection result, wherein the first living body detection model is trained based on first sample images in which the face range below the eyes has been subjected to occlusion processing; and determining, based on the first detection result, whether the user in the face image to be detected is a living body.
One embodiment of the present specification provides a living body detection apparatus, comprising: a first acquisition module that acquires a face image to be detected, wherein an occlusion is present within a specified face range in the face image to be detected; a first detection module that performs living body detection on the user in the face image to be detected according to the face image to be detected and a pre-trained first living body detection model to obtain a first detection result, wherein the first living body detection model is trained based on first sample images in which the specified face range has been subjected to occlusion processing; and a first determination module that determines, based on the first detection result, whether the user in the face image to be detected is a living body.
One embodiment of the present specification provides a living body detection apparatus, comprising: a second acquisition module that acquires a face image to be detected, wherein the user in the face image to be detected wears a mask; a second detection module that performs living body detection on the user in the face image to be detected according to the face image to be detected and a pre-trained first living body detection model to obtain a first detection result, wherein the first living body detection model is trained based on first sample images in which the face range below the eyes has been subjected to occlusion processing; and a second determination module that determines, based on the first detection result, whether the user in the face image to be detected is a living body.
One embodiment of the present specification provides a living body detection device, comprising a processor and a memory arranged to store computer-executable instructions which, when executed, cause the processor to perform the steps of the above living body detection method.
One embodiment of the present specification provides a storage medium for storing computer-executable instructions that, when executed, implement the steps of the above living body detection method.
Drawings
To more clearly illustrate the technical solutions in one or more embodiments of the present specification, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present specification, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flow chart of a living body detection method provided in an embodiment of the present specification;
FIG. 2 is a schematic flow chart of a living body detection method provided in another embodiment of the present specification;
FIG. 3 is a schematic flow chart of a living body detection method provided in another embodiment of the present specification;
FIG. 4 is a schematic structural diagram of a living body detection apparatus provided in an embodiment of the present specification;
FIG. 5 is a schematic structural diagram of a living body detection apparatus provided in another embodiment of the present specification;
FIG. 6 is a schematic structural diagram of a living body detection device provided in an embodiment of the present specification.
Detailed Description
To enable those skilled in the art to better understand the technical solutions in one or more embodiments of the present specification, the technical solutions in one or more embodiments of the present specification will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only a part of the embodiments of the present specification, not all of them. All other embodiments obtained by a person skilled in the art from one or more embodiments of the present specification without creative effort shall fall within the scope of protection of this document.
An object of one embodiment of the present specification is to provide a living body detection method, apparatus, device, and storage medium that improve the accuracy of living body detection when an occlusion is present on the face of a user. The living body detection method in each embodiment of the present specification may be applied to a face recognition device, such as a face-scanning device, or to a server, which is not limited here.
Fig. 1 is a schematic flow chart of a living body detection method provided in an embodiment of the present specification, and as shown in fig. 1, the flow chart includes the following steps:
step S102, acquiring a face image to be detected, wherein an occlusion exists within a specified face range in the face image to be detected;
step S104, performing living body detection on the user in the face image to be detected according to the face image to be detected and a pre-trained first living body detection model to obtain a first detection result, wherein the first living body detection model is trained based on first sample images in which the specified face range has been subjected to occlusion processing;
step S106, determining whether the user in the face image to be detected is a living body based on the first detection result.
In an embodiment of the present specification, a face image to be detected is first acquired, in which an occlusion is present within a specified face range; living body detection is then performed through the first living body detection model to obtain a first detection result; and based on the first detection result, it is determined whether the user in the face image to be detected is a living body. Because the specified face range in the first sample images used to train the first living body detection model was subjected to occlusion processing, performing living body detection with the first living body detection model improves the accuracy of living body detection when an occlusion is present on the user's face.
In step S102, a face image to be detected is acquired. In one case, step S102 is executed by a face recognition device: the device captures a user image through a camera, recognizes the face region in the user image, extracts it, and scales it to a preset size, such as 128 × 128, to obtain the face image to be detected. In another case, step S102 is executed by a server that communicates with a face recognition device: the device obtains the face image to be detected through the above process and then sends it to the server. In a third case, the face recognition device captures a user image through a camera and sends it to the server, and the server recognizes the face region in the user image, extracts it, and scales it to the preset size, such as 128 × 128, to obtain the face image to be detected.
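The extract-and-scale step above can be sketched without an imaging library. The bounding box, the nearest-neighbor interpolation, and all names below are illustrative assumptions, not from the patent; a production system would typically use a face detector plus a library resize:

```python
import numpy as np

def crop_and_resize(user_image: np.ndarray, box: tuple, size: int = 128) -> np.ndarray:
    """Extract the face region given by box = (top, left, bottom, right)
    and scale it to size x size with nearest-neighbor sampling."""
    top, left, bottom, right = box
    face = user_image[top:bottom, left:right]
    h, w = face.shape[:2]
    # Map each target row/column back to its nearest source row/column.
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return face[rows[:, None], cols]

# Dummy 240x320 grayscale "user image" with a hypothetical face box.
img = (np.arange(240 * 320) % 256).astype(np.uint8).reshape(240, 320)
face_128 = crop_and_resize(img, (40, 60, 200, 220))
print(face_128.shape)  # (128, 128)
```

The same indexing works for color images of shape (H, W, 3), since the fancy indexing only touches the first two axes.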
In another embodiment, the face region extraction and scaling described above may instead be implemented inside the first or second living body detection model described below. In that case, the face recognition device only needs to capture the user image and input it to the first or second living body detection model; alternatively, the face recognition device captures the user image and sends it to the server, and the server inputs it to the first or second living body detection model.
An occlusion is present within the specified face range in the face image to be detected. For example, if the specified face range includes the face range below the eyes and the occlusion includes a mask, the user in the face image to be detected is wearing a mask, and with this embodiment living body detection can be performed on the user while the mask is worn. As another example, if the specified face range includes the eye range and the occlusion includes glasses, the user in the face image to be detected is wearing glasses (e.g., sunglasses), and with this embodiment living body detection can be performed on the user while the glasses are worn. Of course, the specified face range may also be another face range, such as the forehead or the chin, which is not limited here.
In step S104, living body detection is performed on the user in the face image to be detected according to the face image to be detected and the pre-trained first living body detection model, to obtain the first detection result. In one case, step S104 is executed by a face recognition device in which the trained first living body detection model is stored, and the device thereby performs living body detection on the user. In another case, step S104 is executed by a server in which the trained first living body detection model is stored, and the server thereby performs living body detection on the user.
In step S104, the face image to be detected is input to the first living body detection model for processing, and the first living body detection model can output a first detection result. The first liveness detection model may be a neural network model, such as a CNN or RNN model.
The first living body detection model is trained based on first sample images, and the specified face range in the first sample images is subjected to occlusion processing. Specifically, the first living body detection model is trained through the following acts:
(a1) acquiring a plurality of first face images and acquiring an image marker of each first face image, the image marker indicating that the user in the image is a living body or that the user in the image is a non-living body;
(a2) performing size normalization on the first face images, performing occlusion processing on the specified face range in each size-normalized image, and taking the occlusion-processed images as first sample images;
(a3) dividing the first sample images into first positive sample images and first negative sample images according to the image markers;
(a4) training the first living body detection model using the first positive sample images and the first negative sample images.
First, in act (a1), a plurality of first face images are acquired. The first face images may be images in which the user's face is not occluded, and each first face image has an image marker indicating whether the user in the image is a living body. The first face images may be collected experimentally or crawled from a network.
Then, in act (a2), the first face images are size-normalized so that they all have the same size. Because the user's face in a first face image is not occluded, occlusion processing must be performed on the specified face range in each size-normalized first face image, and the occlusion-processed images are taken as the first sample images.
In a specific embodiment, a plurality of user images are acquired, and face detection is performed on them using methods such as MTCNN or Faster R-CNN to obtain a plurality of first face images. The first face images are then size-normalized to 128 × 128, and the specified face range in each size-normalized image is subjected to occlusion processing to obtain the first sample images.
In a specific embodiment, the occlusion processing on the specified face range in the size-normalized image may be: covering the specified face range with preset pixel points. For example, the face range below the eyes in the size-normalized image is covered with black pixel points, with white pixel points, or with preset pixel points of another color or pattern. The occlusion-processed first face image is then the first sample image.
Because the first face images have been size-normalized in advance, the pixel range of the specified face range to be occluded can be set in advance for every first face image. For example, the pixel range to be occluded in each first face image is the rectangular region whose vertices are the pixel points (50, 0), (50, 128), (128, 0), and (128, 128). This ensures that the occluded specified face range is consistent across all first sample images, which improves the accuracy of model training.
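The covering operation described above amounts to overwriting a fixed rectangle of the size-normalized image with a constant pixel value. A minimal sketch, in which the default rectangle follows the (50, 0) to (128, 128) example in the text and the function name is an assumption:

```python
import numpy as np

def occlude(face: np.ndarray, top_left=(50, 0), bottom_right=(128, 128), value=0) -> np.ndarray:
    """Cover the specified face range with a constant pixel value.
    The default rectangle matches the text's example: the region below
    the eyes in a size-normalized 128x128 face image."""
    out = face.copy()
    (r0, c0), (r1, c1) = top_left, bottom_right
    out[r0:r1, c0:c1] = value  # black pixels by default; use value=255 for white
    return out

face = np.full((128, 128), 200, dtype=np.uint8)  # dummy size-normalized face image
sample = occlude(face)                           # occlusion-processed first sample image
print(sample[49, 0], sample[50, 0])  # 200 0
```

Because the images are size-normalized first, the same rectangle can be reused for every sample, which is exactly what makes the uniform occlusion rule possible.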
In the above-described operation (a 3), the first sample image is divided into a first positive sample image and a first negative sample image according to the image marker, and for example, the first sample image with the image marker "the user in the image is a living body" is regarded as a positive sample image, and the first sample image with the image marker "the user in the image is a non-living body" is regarded as a negative sample image.
In the above act (a4), the first living body detection model is trained using the first positive sample images and the first negative sample images. Specifically, through acts (a1) to (a3), the first positive and negative sample images are obtained, and both are images in which the specified face range has been subjected to occlusion processing; therefore, using the first living body detection model trained on them improves the accuracy of living body detection when an occlusion is present on the user's face. In act (a4), the first living body detection model may be trained with an ordinary neural network training method, which is not limited here.
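Act (a4) trains a neural network model (e.g., a CNN, per the description above). As a simplified, hypothetical stand-in, the sketch below trains a tiny logistic-regression classifier on flattened occlusion-processed samples, to illustrate only the positive/negative supervised setup; the synthetic data and every name here are illustrative assumptions, not the patent's model:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_classifier(pos, neg, epochs=200, lr=0.1):
    """Toy stand-in for act (a4): logistic regression on flattened
    128x128 samples, trained by gradient descent on cross-entropy."""
    n = len(pos) + len(neg)
    X = np.vstack([pos, neg]).reshape(n, -1) / 255.0
    mu = X.mean(axis=0)
    X = X - mu                                   # center features for stable training
    y = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid scores
        grad = p - y                             # cross-entropy gradient
        w -= lr * X.T @ grad / n
        b -= lr * grad.mean()
    return w, b, mu

# Synthetic data: "living" samples brighter than "non-living" samples on average.
pos = rng.integers(120, 256, size=(20, 128, 128)).astype(float)
neg = rng.integers(0, 120, size=(20, 128, 128)).astype(float)
w, b, mu = train_classifier(pos, neg)
probe = 1.0 / (1.0 + np.exp(-((pos[0].ravel() / 255.0 - mu) @ w + b)))
print(probe > 0.5)
```

The real model is of course a deep network trained on face images, but the sample construction (occluded positives and negatives, balanced classes) carries over unchanged.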
In the above step S106, it is determined whether the user in the face image to be detected is a living body based on the first detection result. A first probability value that the user is not a living body may be determined based on the first detection result, and if the first probability value is greater than or equal to a first probability threshold, the user is determined not to be a living body, otherwise, the user is determined to be a living body.
Specifically, the first detection result may be a probability value that the user is not a living body, or a probability value that the user is a living body. A first probability value that the user is not a living body is determined from the first detection result, and whether it is greater than or equal to the first probability threshold is judged; if so, the user is determined not to be a living body, and otherwise the user is determined to be a living body.
In other embodiments, a probability value that the user is a living body may also be determined according to the first detection result. And if the probability value of the user as the living body is more than or equal to a certain probability value, determining that the user is the living body, and otherwise, determining that the user is not the living body.
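The threshold decision in step S106 can be written down directly. The 0.5 default threshold, the function name, and the flag indicating which probability the model outputs are all illustrative assumptions:

```python
def decide_from_first_result(first_result: float, result_is_live_prob: bool,
                             first_threshold: float = 0.5) -> bool:
    """Step S106 sketch: derive the probability that the user is NOT a
    living body from the first detection result, then compare it against
    the first probability threshold. Returns True if judged a living body.
    The 0.5 default threshold is an assumption, not from the patent."""
    p_not_live = 1.0 - first_result if result_is_live_prob else first_result
    return p_not_live < first_threshold

print(decide_from_first_result(0.9, result_is_live_prob=True))   # judged a living body
print(decide_from_first_result(0.8, result_is_live_prob=False))  # judged not a living body
```

Both conventions in the text (the result as a live probability or as a non-live probability) reduce to the same comparison once converted.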
As can be seen, with the above embodiment, the first living body detection model can be trained using the first sample image in which the designated face area is subjected to the occlusion processing, and the living body detection is performed on the user using the first living body detection model, so that the accuracy of the living body detection is improved when an occlusion object exists on the face of the user.
In addition, when the first living body detection model is trained, there is no need to separately collect images in which the user's face is occluded as sample images; occlusion processing can be applied to images in which the user's face is not occluded to obtain the first sample images, which reduces the difficulty of collecting first sample images and improves the training efficiency of the model.
Moreover, when the first living body detection model is trained, the first sample images can be obtained by applying a uniform occlusion rule to the unoccluded images, so that the occluded face range is the same in every first sample image, which improves the training accuracy of the first living body detection model.
It can be understood that in other scenarios, a face image of a user's face with an obstruction may also be acquired as the first sample image, so that the first live detection model is trained using the first sample image.
To further improve the accuracy of living body detection, fig. 2 is a schematic flowchart of a living body detection method provided in another embodiment of the present specification. As shown in fig. 2, compared with fig. 1, the flow further includes:
and step S105, performing living body detection on the user in the face image to be detected according to the face image to be detected and a pre-trained second living body detection model to obtain a second detection result. And the second living body detection model is obtained by training based on a second sample image, and the range of the specified face in the second sample image is not subjected to shielding processing.
Correspondingly, the step S106 specifically includes:
step S1061, determining a first probability value that the user is not a living body according to the first detection result, and determining a second probability value that the user is not a living body according to the second detection result;
step S1062, carrying out weighted summation on the first probability value and the second probability value to obtain a third probability value that the user is not a living body;
in step S1063, if the third probability value is greater than or equal to the second probability threshold, it is determined that the user is not a living body, otherwise, it is determined that the user is a living body.
The second living body detection model is trained based on second sample images, and the specified face range in the second sample images is not subjected to occlusion processing. The second living body detection model may also be a neural network model and is trained through the following acts:
(b1) acquiring a plurality of second face images and acquiring an image marker of each second face image, the image marker indicating that the user in the image is a living body or that the user in the image is a non-living body;
(b2) performing size normalization on the second face images and taking the size-normalized images as second sample images;
(b3) dividing the second sample images into second positive sample images and second negative sample images according to the image markers;
(b4) training the second living body detection model using the second positive sample images and the second negative sample images.
Specifically, similar to act (a 1), in act (b 1), a plurality of second face images may be obtained experimentally or crawled from a network, and the second face images may be images of the face of the user without occlusion, each second face image having an image flag indicating whether the user in the image is a living body or not.
Similarly to act (a2), in act (b2) the second face images are size-normalized so that they all have the same size. In a specific embodiment, a plurality of user images are acquired, and face detection is performed on them using methods such as MTCNN or Faster R-CNN to obtain a plurality of second face images, whose size is then normalized to 128 × 128.
Similarly to the action (a 3), in the action (b 3), the second sample image is divided into a second positive sample image and a second negative sample image according to the image markers, for example, the second sample image with the image marker "the user in the image is a living body" is taken as the positive sample image, and the second sample image with the image marker "the user in the image is a non-living body" is taken as the negative sample image.
Similar to act (a 4), in act (b 4), a second liveness detection model is trained using the second positive sample image and the second negative sample image. Specifically, through the above-mentioned actions (b 1) to (b 3), the second positive sample image and the second negative sample image can be obtained, and the second living body detection model can be trained by using a training method of a general neural network model, which is not limited herein.
In fig. 2, in step S105, the face image to be detected is input to the second living body detection model, and living body detection is performed by the second living body detection model, so as to obtain a second detection result. The second detection result may be a probability that the user is a living body or a probability that the user is a non-living body, similar to the first detection result.
In step S1061, a first probability value that the user is not a living body is determined according to the first detection result, and a second probability value that the user is not a living body is determined according to the second detection result.
In step S1062, the first probability value and the second probability value are weighted and summed to obtain a third probability value that the user is not a living body. For example, the first probability value and the second probability value are averaged to obtain a third probability value.
In step S1063, it is determined whether the third probability value is greater than or equal to the second probability threshold, and if so, it is determined that the user is not a living body, otherwise, it is determined that the user is a living body.
In other embodiments, the probability value that the user is a living body may be determined according to the first detection result, and the probability value that the user is a living body may be determined according to the second detection result. And carrying out weighted summation on the two probability values to obtain a comprehensive probability value of the user as a living body. And judging whether the comprehensive probability value is greater than or equal to a certain probability threshold value, if so, determining that the user is a living body, otherwise, determining that the user is not the living body.
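The fusion in steps S1061 to S1063 is a weighted sum followed by a threshold comparison. A minimal sketch, assuming both detection results are expressed as probabilities that the user is not a living body, and taking equal weights (the averaging example above) and a 0.5 threshold as illustrative values:

```python
def fuse_and_decide(first_result: float, second_result: float,
                    w1: float = 0.5, w2: float = 0.5,
                    second_threshold: float = 0.5) -> bool:
    """Steps S1061-S1063 sketch: weighted-sum the two not-a-living-body
    probabilities (S1062) and compare against the second probability
    threshold (S1063). Weights and threshold are assumptions.
    Returns True if the user is judged to be a living body."""
    third = w1 * first_result + w2 * second_result  # third probability value
    return third < second_threshold

print(fuse_and_decide(0.2, 0.3))  # fused 0.25, judged a living body
print(fuse_and_decide(0.7, 0.6))  # fused 0.65, judged not a living body
```

With equal weights this reduces exactly to the averaging example given in the text; unequal weights would let one model dominate, e.g., trusting the occlusion-trained model more when a mask is detected.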
In this embodiment, through the flow shown in fig. 2, two different biopsy models can be combined to perform biopsy on a user, so as to improve the accuracy of a detection result.
In a specific embodiment, after the face image of the user is acquired, it may first be determined whether the specified face range of the user's face is occluded. If it is, living body detection is performed using the first living body detection model, or using the first and second living body detection models in combination; if it is not, living body detection is performed using the second living body detection model. Of course, in other scenarios, living body detection may be performed with the first living body detection model alone, or with the two models in combination, without making this determination.
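The dispatch logic of this embodiment can be sketched as follows; `model1`, `model2`, the occlusion flag, and the equal combination weights are hypothetical stand-ins for the trained first and second models and an upstream occlusion detector:

```python
def detect(face_image, occluded: bool, model1, model2, combine: bool = False):
    """Hypothetical dispatch sketch: use the first (occlusion-trained)
    model when the specified face range is occluded, the second model
    otherwise, or combine both. model1/model2 are callables assumed to
    return the probability that the user is not a living body."""
    if occluded:
        if combine:
            return 0.5 * model1(face_image) + 0.5 * model2(face_image)
        return model1(face_image)
    return model2(face_image)

# Toy callables standing in for the trained networks.
p = detect(None, occluded=True, model1=lambda x: 0.2, model2=lambda x: 0.8)
print(p)  # 0.2 from the first (occlusion-trained) model
```

The same threshold comparison as in step S106 would then turn the returned probability into a living/non-living decision.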
The first living body detection model described above may be referred to as a living body detection model based on a local attention mechanism. Because the specified face range in the sample images was subjected to occlusion processing during training, the trained first living body detection model places more attention on the face regions that are not occluded, thereby forming a local attention mechanism. With a living body detection model that possesses this attention mechanism, more attention is placed on the unoccluded face regions during detection, so that when an occlusion is present on the user's face, the false-interception rate is reduced, the accuracy of living body detection is improved, and user experience is improved.
A specific training procedure for the first living body detection model and the second living body detection model is given below.
1. Data acquisition
A plurality of user images are acquired, and each user image is marked as to whether the user in it is a living user. During image acquisition, it is ensured that images are collected under different illumination, posture, and environmental conditions.
2. Data pre-processing
The acquired images are split 50/50: half are used as the training set and the other half as the test set.
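This 50/50 split can be sketched as follows (the shuffling seed is an assumption added for reproducibility; it is not part of this description):

```python
import random

def split_dataset(images, seed=0):
    """Shuffle the acquired images and cut them 50/50 into a
    training set and a test set, as described in step 2."""
    items = list(images)
    random.Random(seed).shuffle(items)
    half = len(items) // 2
    return items[:half], items[half:]
```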
3. Model training
3.1 Face detection is performed on the images in the training set to obtain face regions, which are scaled to a predetermined size (e.g., 128 x 128) and extracted.
3.2 For half of the extracted images, occlusion processing is performed on the designated face range to obtain positive sample images (living body images) and negative sample images (non-living body images) for training the first living body detection model. The other half of the extracted images are used, without occlusion processing, as positive sample images (living body images) and negative sample images (non-living body images) for training the second living body detection model.
3.3 During the above image acquisition, it is ensured as far as possible that the numbers of positive and negative sample images of the first living body detection model are equal, and that the numbers of positive and negative sample images of the second living body detection model are equal. For example, suppose 80 images are acquired, 40 of which are living body images and 40 non-living body images; 20 living body images and 20 non-living body images are taken as the training set, and the other 40 images as the test set. In the training set, 10 living body images are subjected to occlusion processing and used as positive sample images of the first living body detection model, and 10 non-living body images are subjected to occlusion processing and used as negative sample images of the first living body detection model; the remaining 10 living body images are used, without occlusion processing, as positive sample images of the second living body detection model, and the remaining 10 non-living body images are used, without occlusion processing, as negative sample images of the second living body detection model.
3.4 The first living body detection model is trained using its positive and negative sample images together with a softmax classification function, and the second living body detection model is trained using its positive and negative sample images together with a softmax classification function.
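Steps 3.1 to 3.4 can be sketched as follows. This is a deliberately simplified stand-in: a linear classifier trained with the softmax classification function and cross-entropy loss on flattened face crops, whereas an actual embodiment would use a deep network; the learning rate and epoch count are assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_softmax_classifier(X, y, lr=0.1, epochs=2000, n_classes=2):
    """Train a linear model with a softmax output by gradient descent
    on the cross-entropy loss (label 1 = living body, 0 = non-living)."""
    n, d = X.shape
    W = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[y]
    for _ in range(epochs):
        p = softmax(X @ W + b)
        grad = p - onehot               # dLoss/dlogits for softmax cross-entropy
        W -= lr * (X.T @ grad) / n
        b -= lr * grad.mean(axis=0)
    return W, b

def predict(W, b, X):
    """Class with the highest softmax probability for each row of X."""
    return softmax(X @ W + b).argmax(axis=1)
```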
4. Model testing
The first living body detection model and the second living body detection model are tested using the test set. If the accuracy reaches a preset threshold T, training ends; otherwise, hyperparameters such as the learning rate and batch size are adjusted and model training is performed again.
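The test step can be sketched as follows (the value of the threshold T shown is an assumption; this description only requires that accuracy reach a preset threshold):

```python
def evaluate_and_decide(model_predict, test_images, test_labels, T=0.95):
    """Compute test-set accuracy and report whether training may end
    (accuracy >= T) or hyperparameters should be adjusted and training rerun."""
    correct = sum(1 for x, label in zip(test_images, test_labels)
                  if model_predict(x) == label)
    accuracy = correct / len(test_labels)
    return accuracy, accuracy >= T
```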
Fig. 3 is a schematic flowchart of a living body detecting method according to another embodiment of the present disclosure, and as shown in fig. 3, the flowchart includes:
step S302, acquiring a face image to be detected; the user in the facial image to be detected wears a mask;
step S304, according to the facial image to be detected and a pre-trained first living body detection model, carrying out living body detection on the user in the facial image to be detected to obtain a first detection result; the first living body detection model is obtained based on first sample image training, and the range of the face part below the eyes in the first sample image is subjected to shielding treatment;
step S306, determining whether the user in the face image to be detected is a living body based on the first detection result.
The first in-vivo detection model is trained by the following steps:
(c1) acquiring a plurality of first face images and acquiring image marks of the first face images; the image mark indicates whether the user in the image is a living body or a non-living body;
(c2) performing size normalization processing on the first face image, performing shielding processing on a face range below an eye part in the image after the size normalization processing, and taking the image after the shielding processing as a first sample image;
(c3) dividing the first sample image into a first positive sample image and a first negative sample image according to the image marks;
(c4) a first in vivo detection model is trained using the first positive sample image and the first negative sample image.
In the above operation (c2), the occlusion processing performed on the face range below the eyes in the size-normalized image is specifically: covering the face range below the eyes in the size-normalized image with black pixels.
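A minimal sketch of this black-pixel covering, assuming the row index of the eye line has already been located by a face landmark detector (the `eye_row` parameter is a hypothetical input supplied by the caller, not an API from this description):

```python
import numpy as np

def occlude_below_eyes(image, eye_row):
    """Return a copy of a size-normalized face image in which every row
    below the eye line is covered with black pixels (value 0)."""
    occluded = image.copy()
    occluded[eye_row:, ...] = 0   # black out the face range below the eyes
    return occluded
```

The same function works for grayscale (H x W) and color (H x W x C) arrays, since the trailing ellipsis covers any remaining axes.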
The specific process of fig. 3 may refer to the description above with respect to fig. 1 and 2 and will not be repeated here.
In an embodiment of the present specification, a face image to be detected is first obtained, a user in the face image to be detected wears a mask, then living body detection is performed through a first living body detection model to obtain a first detection result, and then based on the first detection result, whether the user in the face image to be detected is a living body is determined. Since the range of the face below the eyes in the first sample image used in the training of the first living body detection model is subjected to the occlusion processing, the living body detection is performed by the first living body detection model, and the accuracy of the living body detection can be improved when the user wears the mask.
Fig. 4 is a schematic block diagram illustrating a living body detecting apparatus according to an embodiment of the present disclosure, and as shown in fig. 4, the apparatus includes:
a first obtaining module 41 for obtaining a face image to be detected; a covering exists in a designated face range in the face image to be detected;
the first detection module 42 is configured to perform living body detection on the user in the facial image to be detected according to the facial image to be detected and a pre-trained first living body detection model to obtain a first detection result; the first living body detection model is obtained through training based on a first sample image, and the range of a specified face in the first sample image is subjected to shielding processing;
a first determination module 43 that determines whether the user in the face image to be detected is a living body based on the first detection result.
Optionally, the apparatus further comprises a first training module; the first training module trains the first liveness detection model by: acquiring a plurality of first face images and acquiring image marks of the first face images; the image marker comprises that a user in the image is a living body or a user in the image is a non-living body; carrying out size normalization processing on the first face image, carrying out shielding processing on a specified face range in the image after size normalization processing, and taking the image after shielding processing as a first sample image; dividing the first sample image into a first positive sample image and a first negative sample image according to the image markers; training the first in-vivo detection model using the first positive sample image and the first negative sample image.
Optionally, the first training module covers a designated face range in the image after the size normalization processing by using a preset pixel point.
Optionally, the first determining module determines a first probability value that the user is not a living body according to the first detection result; and if the first probability value is larger than or equal to a first probability threshold value, determining that the user is not a living body, otherwise, determining that the user is the living body.
Optionally, the apparatus further comprises a second detection module; the second detection module performs living body detection on the user in the facial image to be detected according to the facial image to be detected and a second living body detection model trained in advance to obtain a second detection result; wherein the second living body detection model is trained based on a second sample image, and the designated face range in the second sample image is not subjected to occlusion processing.
Accordingly, the first determination module: determines a first probability value that the user is not a living body according to the first detection result, and determines a second probability value that the user is not a living body according to the second detection result; performs weighted summation on the first probability value and the second probability value to obtain a third probability value that the user is not a living body; and if the third probability value is greater than or equal to a second probability threshold, determines that the user is not a living body, and otherwise determines that the user is a living body.
Optionally, an additional training module is further included, the additional training module training the second in-vivo detection model by: acquiring a plurality of second face images and acquiring image marks of the second face images; the image marker comprises that a user in the image is a living body or a user in the image is a non-living body; carrying out size normalization processing on the second face image, and taking the image with the normalized size as a second sample image; dividing the second sample image into a second positive sample image and a second negative sample image according to the image markers; training the second in-vivo detection model using the second positive sample image and the second negative sample image.
Optionally, the specified face range comprises a face range below the eyes; the covering comprises a mask.
The living body detecting apparatus in the present embodiment corresponds to the living body detecting method in fig. 1 to 2 described above, and can realize the respective processes in fig. 1 to 2 described above and achieve the same effects and functions, which are not repeated here.
Fig. 5 is a schematic block diagram of a living body detection apparatus according to another embodiment of the present disclosure, and as shown in fig. 5, the apparatus includes:
a second obtaining module 51, for obtaining a face image to be detected; the user in the facial image to be detected wears a mask;
the second detection module 52 is configured to perform living body detection on the user in the facial image to be detected according to the facial image to be detected and a pre-trained first living body detection model, so as to obtain a first detection result; the first living body detection model is obtained through training based on a first sample image, and the range of the face part below the eyes in the first sample image is subjected to shielding processing;
and a second determination module 53 that determines whether the user in the face image to be detected is a living body based on the first detection result.
Optionally, the apparatus further comprises a second training module; the second training module trains the first liveness detection model by: acquiring a plurality of first face images and acquiring image marks of the first face images; the image marker comprises that a user in the image is a living body or a user in the image is a non-living body; performing size normalization processing on the first face image, performing shielding processing on a face range below an eye part in the image after the size normalization processing, and taking the image after the shielding processing as a first sample image; dividing the first sample image into a first positive sample image and a first negative sample image according to the image markers; training the first in-vivo detection model using the first positive sample image and the first negative sample image.
Optionally, the second training module covers, by using a black pixel, a face area below the eye in the image after the size normalization processing.
The living body detecting apparatus in the present embodiment corresponds to the living body detecting method in fig. 3 described above, and can realize the respective processes in fig. 3 described above and achieve the same effects and functions, which are not repeated here.
Further, another embodiment of the present specification provides a living body detecting apparatus. Fig. 6 is a schematic structural diagram of the living body detecting apparatus provided in an embodiment of the present specification. As shown in fig. 6, living body detecting apparatuses may differ considerably depending on their configuration or performance, and may include one or more processors 901 and a memory 902, in which one or more applications or data may be stored. The memory 902 may be transient storage or persistent storage. The application program stored in the memory 902 may include one or more modules (not shown), and each module may include a series of computer-executable instructions for the living body detection apparatus. Still further, the processor 901 may be configured to communicate with the memory 902 to execute a series of computer-executable instructions in the memory 902 on the living body detection apparatus. The living body detection apparatus may also include one or more power supplies 903, one or more wired or wireless network interfaces 904, one or more input-output interfaces 905, one or more keyboards 906, and the like.
In a particular embodiment, a liveness detection device includes a memory, and one or more programs, wherein the one or more programs are stored in the memory, and the one or more programs may include one or more modules, and each module may include a series of computer-executable instructions for the liveness detection device, and the one or more programs configured to be executed by the one or more processors include computer-executable instructions for:
acquiring a facial image to be detected; a covering exists in a designated face range in the face image to be detected;
according to the facial image to be detected and a pre-trained first living body detection model, carrying out living body detection on a user in the facial image to be detected to obtain a first detection result; the first living body detection model is obtained through training based on a first sample image, and the range of a specified face in the first sample image is subjected to shielding processing;
determining whether the user in the face image to be detected is a living body based on the first detection result.
Optionally, the computer executable instructions, when executed, the first liveness detection model is trained by: acquiring a plurality of first face images and acquiring image marks of the first face images; the image marker comprises that a user in the image is a living body or a user in the image is a non-living body; carrying out size normalization processing on the first face image, carrying out shielding processing on a specified face range in the image after size normalization processing, and taking the image after shielding processing as a first sample image; dividing the first sample image into a first positive sample image and a first negative sample image according to the image markers; training the first in-vivo detection model using the first positive sample image and the first negative sample image.
Optionally, the computer executable instructions, when executed, perform occlusion processing on a specified face range in the size-normalized image, including: and covering the designated face range in the image after size normalization processing by using a preset pixel point.
Optionally, the computer executable instructions, when executed, determine whether a user in the facial image to be detected is a living body, comprising: determining a first probability value that the user is not a living body according to the first detection result; and if the first probability value is larger than or equal to a first probability threshold value, determining that the user is not a living body, otherwise, determining that the user is the living body.
Optionally, the computer executable instructions, when executed, further comprise: according to the facial image to be detected and a pre-trained second living body detection model, carrying out living body detection on the user in the facial image to be detected to obtain a second detection result; wherein the second living body detection model is trained based on a second sample image, and the designated face range in the second sample image is not subjected to occlusion processing;
accordingly, determining whether the user in the facial image to be detected is a living body includes: determining a first probability value that the user is not a living body according to the first detection result, and determining a second probability value that the user is not a living body according to the second detection result; performing weighted summation on the first probability value and the second probability value to obtain a third probability value that the user is not a living body; and if the third probability value is larger than or equal to a second probability threshold value, determining that the user is not a living body, otherwise, determining that the user is the living body.
Optionally, the computer executable instructions, when executed, the second liveness detection model is trained by: acquiring a plurality of second face images and acquiring image marks of the second face images; the image marker comprises that a user in the image is a living body or a user in the image is a non-living body; carrying out size normalization processing on the second face image, and taking the image with the normalized size as a second sample image; dividing the second sample image into a second positive sample image and a second negative sample image according to the image markers; training the second in-vivo detection model using the second positive sample image and the second negative sample image.
Optionally, the computer executable instructions, when executed, the specified face range comprise a face range below the eyes; the covering comprises a mask.
The living body detecting apparatus in the present embodiment corresponds to the living body detecting method in fig. 1 to 2 described above, and can realize the respective processes in fig. 1 to 2 described above and achieve the same effects and functions, which are not repeated here.
In another particular embodiment, a liveness detection device includes a memory, and one or more programs, wherein the one or more programs are stored in the memory, and the one or more programs may include one or more modules, and each module may include a series of computer-executable instructions for the liveness detection device, and the one or more programs configured to be executed by the one or more processors include computer-executable instructions for:
acquiring a facial image to be detected; the user in the facial image to be detected wears a mask;
according to the facial image to be detected and a pre-trained first living body detection model, carrying out living body detection on a user in the facial image to be detected to obtain a first detection result; the first living body detection model is obtained through training based on a first sample image, and the range of the face part below the eyes in the first sample image is subjected to shielding processing;
determining whether the user in the face image to be detected is a living body based on the first detection result.
Optionally, the computer executable instructions, when executed, the first liveness detection model is trained by: acquiring a plurality of first face images and acquiring image marks of the first face images; the image marker comprises that a user in the image is a living body or a user in the image is a non-living body; performing size normalization processing on the first face image, performing shielding processing on a face range below an eye part in the image after the size normalization processing, and taking the image after the shielding processing as a first sample image; dividing the first sample image into a first positive sample image and a first negative sample image according to the image markers; training the first in-vivo detection model using the first positive sample image and the first negative sample image.
Optionally, the computer executable instructions, when executed, perform occlusion processing on a face area below an eye in the size-normalized image, including: and covering the face range below the eyes in the image after the size normalization processing by using black pixel points.
The living body detecting apparatus in the present embodiment corresponds to the living body detecting method in fig. 3 described above, and can realize the respective processes in fig. 3 described above and achieve the same effects and functions, which are not repeated here.
Further, another embodiment of the present specification provides a storage medium for storing computer-executable instructions. In a specific embodiment, the storage medium may be a USB flash drive, an optical disc, a hard disk, or the like, and the storage medium stores computer-executable instructions that, when executed by a processor, implement the following processes:
acquiring a facial image to be detected; a covering exists in a designated face range in the face image to be detected;
according to the facial image to be detected and a pre-trained first living body detection model, carrying out living body detection on a user in the facial image to be detected to obtain a first detection result; the first living body detection model is obtained through training based on a first sample image, and the range of a specified face in the first sample image is subjected to shielding processing;
determining whether the user in the face image to be detected is a living body based on the first detection result.
Optionally, the storage medium stores computer-executable instructions that, when executed by the processor, the first liveness detection model is trained by: acquiring a plurality of first face images and acquiring image marks of the first face images; the image marker comprises that a user in the image is a living body or a user in the image is a non-living body; carrying out size normalization processing on the first face image, carrying out shielding processing on a specified face range in the image after size normalization processing, and taking the image after shielding processing as a first sample image; dividing the first sample image into a first positive sample image and a first negative sample image according to the image markers; training the first in-vivo detection model using the first positive sample image and the first negative sample image.
Optionally, the storage medium stores computer-executable instructions that, when executed by the processor, perform occlusion processing on a specified face area in the size-normalized image, including: and covering the designated face range in the image after size normalization processing by using a preset pixel point.
Optionally, the storage medium stores computer-executable instructions that, when executed by the processor, determine whether a user in the facial image to be detected is a living body, comprising: determining a first probability value that the user is not a living body according to the first detection result; and if the first probability value is larger than or equal to a first probability threshold value, determining that the user is not a living body, otherwise, determining that the user is the living body.
Optionally, the storage medium stores computer executable instructions that, when executed by the processor, further comprise: according to the facial image to be detected and a pre-trained second living body detection model, carrying out living body detection on the user in the facial image to be detected to obtain a second detection result; wherein the second living body detection model is trained based on a second sample image, and the designated face range in the second sample image is not subjected to occlusion processing;
accordingly, determining whether the user in the facial image to be detected is a living body includes: determining a first probability value that the user is not a living body according to the first detection result, and determining a second probability value that the user is not a living body according to the second detection result; performing weighted summation on the first probability value and the second probability value to obtain a third probability value that the user is not a living body; and if the third probability value is larger than or equal to a second probability threshold value, determining that the user is not a living body, otherwise, determining that the user is the living body.
Optionally, the storage medium stores computer-executable instructions that, when executed by the processor, the second liveness detection model is trained by: acquiring a plurality of second face images and acquiring image marks of the second face images; the image marker comprises that a user in the image is a living body or a user in the image is a non-living body; carrying out size normalization processing on the second face image, and taking the image with the normalized size as a second sample image; dividing the second sample image into a second positive sample image and a second negative sample image according to the image markers; training the second in-vivo detection model using the second positive sample image and the second negative sample image.
Optionally, the storage medium stores computer executable instructions that, when executed by the processor, the specified face range comprises a face range below the eyes; the covering comprises a mask.
The storage medium in the present embodiment corresponds to the above-described living body detection method in fig. 1 to 2, and can realize the respective processes in fig. 1 to 2 described above and achieve the same effects and functions, which are not repeated here.
Another embodiment of the present disclosure provides a storage medium for storing computer-executable instructions. In a specific embodiment, the storage medium may be a USB flash drive, an optical disc, a hard disk, or the like, and the storage medium stores computer-executable instructions that, when executed by a processor, implement the following processes:
acquiring a facial image to be detected; the user in the facial image to be detected wears a mask;
according to the facial image to be detected and a pre-trained first living body detection model, carrying out living body detection on a user in the facial image to be detected to obtain a first detection result; the first living body detection model is obtained through training based on a first sample image, and the range of the face part below the eyes in the first sample image is subjected to shielding processing;
determining whether the user in the face image to be detected is a living body based on the first detection result.
Optionally, the storage medium stores computer-executable instructions that, when executed by the processor, the first liveness detection model is trained by: acquiring a plurality of first face images and acquiring image marks of the first face images; the image marker comprises that a user in the image is a living body or a user in the image is a non-living body; performing size normalization processing on the first face image, performing shielding processing on a face range below an eye part in the image after the size normalization processing, and taking the image after the shielding processing as a first sample image; dividing the first sample image into a first positive sample image and a first negative sample image according to the image markers; training the first in-vivo detection model using the first positive sample image and the first negative sample image.
Optionally, the storage medium stores computer-executable instructions that, when executed by the processor, perform occlusion processing on a face area below an eye in the size-normalized image, including: and covering the face range below the eyes in the image after the size normalization processing by using black pixel points.
The storage medium in the present embodiment corresponds to the above-described living body detection method in fig. 3, and can realize the respective processes in fig. 3 described above and achieve the same effects and functions, which are not repeated here.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
In the 1990s, an improvement in a technology could be clearly distinguished as an improvement in hardware (e.g., an improvement in a circuit structure such as a diode, a transistor, or a switch) or an improvement in software (an improvement in a method flow). However, as technology advances, many of today's improvements in method flows can be regarded as direct improvements in hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Thus, it cannot be said that an improvement in a method flow cannot be realized by a hardware entity module. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by a user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming it, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, nowadays, instead of manually fabricating integrated circuit chips, such programming is mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development, while the original code to be compiled must be written in a specific programming language called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language), of which VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used.
It will also be apparent to those skilled in the art that a hardware circuit implementing a given logical method flow can be readily obtained simply by programming the method flow in one of the above hardware description languages and programming it into an integrated circuit.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an Application-Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of such controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320. A memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art will also appreciate that, in addition to implementing the controller purely as computer-readable program code, the same functionality can be implemented entirely by logically programming the method steps so that the controller takes the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may thus be regarded as a hardware component, and the means included therein for performing the various functions may also be regarded as structures within the hardware component. Or even the means for performing the functions may be regarded both as software modules for performing the method and as structures within the hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, each described separately. Of course, when implementing one or more embodiments of the present description, the functions of the units may be implemented in one or more pieces of software and/or hardware.
As will be appreciated by one skilled in the art, embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
Embodiments of the present description are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the description. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," and any other variations thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises that element.
One or more embodiments of the present description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. One or more embodiments of the specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner; for identical or similar parts, the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the system embodiment is described relatively briefly because it is substantially similar to the method embodiment; for relevant details, reference may be made to the corresponding description of the method embodiment.
The above description is merely an example of the present specification and is not intended to limit this document. Various modifications and changes to the embodiments described herein will be apparent to those skilled in the art. Any modifications, equivalent substitutions, improvements, and the like made within the spirit and principles of this disclosure are intended to fall within the scope of the claims of this document.

Claims (16)

1. A living body detection method, comprising:
acquiring a facial image to be detected, wherein an occlusion is present within a designated face range of the facial image to be detected;
performing living body detection on the user in the facial image to be detected according to the facial image to be detected and a pre-trained first living body detection model to obtain a first detection result, wherein the first living body detection model is trained based on a first sample image, the first sample image is obtained by occluding the designated face range in a first face image, and the first face image is an image in which the user's face is not occluded;
performing living body detection on the user in the facial image to be detected according to the facial image to be detected and a pre-trained second living body detection model to obtain a second detection result, wherein the second living body detection model is trained based on a second sample image, and the second sample image is an image in which the user's face is not occluded;
determining a first probability value that the user is not a living body according to the first detection result, and determining a second probability value that the user is not a living body according to the second detection result;
performing weighted summation on the first probability value and the second probability value to obtain a third probability value that the user is not a living body;
and if the third probability value is greater than or equal to a second probability threshold, determining that the user is not a living body; otherwise, determining that the user is a living body.
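The decision steps of claim 1 (weighted summation of the two "not a living body" probabilities, then a threshold test) can be sketched as a small fusion function. The weights and the second probability threshold below are illustrative assumptions; the claim does not fix their values:

```python
def fuse_liveness_scores(first_prob: float, second_prob: float,
                         w1: float = 0.6, w2: float = 0.4,
                         threshold: float = 0.5) -> bool:
    """Weighted fusion of two 'not a living body' probability values.

    first_prob:  probability from the model trained on occluded faces
                 (first detection result)
    second_prob: probability from the model trained on unoccluded faces
                 (second detection result)
    Returns True if the user is judged to be a living body.
    w1, w2 and threshold are hypothetical values, not specified by the claim.
    """
    third_prob = w1 * first_prob + w2 * second_prob  # third probability value
    # Greater than or equal to the threshold means "not a living body".
    return third_prob < threshold


print(fuse_liveness_scores(0.8, 0.3))  # → False (fused 0.60 >= 0.5, not a living body)
print(fuse_liveness_scores(0.1, 0.2))  # → True  (fused 0.14 <  0.5, a living body)
```

In practice the weights would be tuned on validation data, since only the first model sees occluded faces at training time.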
2. The method of claim 1, wherein the first living body detection model is trained by:
acquiring a plurality of first face images and image markers of the first face images, each image marker indicating that the user in the image is a living body or is not a living body;
performing size normalization on the first face images, performing occlusion processing on the designated face range in each size-normalized image, and taking the occluded images as first sample images;
dividing the first sample images into first positive sample images and first negative sample images according to the image markers;
and training the first living body detection model using the first positive sample images and the first negative sample images.
3. The method according to claim 2, wherein performing occlusion processing on the designated face range in the size-normalized image comprises:
covering the designated face range in the size-normalized image with preset pixel points.
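Claims 2 and 3 build the first sample image by size-normalizing a face image and then covering the designated face range with preset pixel points. Below is a minimal NumPy sketch; the 112×112 target size, the "lower half of the image" as the designated face range, and the zero fill value are all illustrative assumptions, not values fixed by the claims:

```python
import numpy as np


def make_first_sample(face_image: np.ndarray, size: int = 112,
                      fill_value: int = 0) -> np.ndarray:
    """Size-normalize a face image, then occlude the designated face range
    (assumed here to be the lower half, roughly 'below the eyes')
    with a preset pixel value."""
    h, w = face_image.shape[:2]
    # Nearest-neighbour resize to size x size (stand-in for any resampler).
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    normalized = face_image[rows][:, cols].copy()
    # Cover the designated face range with the preset pixel points.
    normalized[size // 2:, :] = fill_value
    return normalized


face = np.full((200, 180, 3), 128, dtype=np.uint8)  # dummy unoccluded face
sample = make_first_sample(face)
print(sample.shape)            # → (112, 112, 3)
print(int(sample[111, 0, 0]))  # → 0 (occluded lower region)
print(int(sample[0, 0, 0]))    # → 128 (upper region untouched)
```

Training on such occluded samples means that at inference time a mask-wearing face matches the distribution the first model was trained on.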
4. The method of claim 1, wherein the second living body detection model is trained by:
acquiring a plurality of second face images and image markers of the second face images, each image marker indicating that the user in the image is a living body or is not a living body;
performing size normalization on the second face images and taking the size-normalized images as second sample images;
dividing the second sample images into second positive sample images and second negative sample images according to the image markers;
and training the second living body detection model using the second positive sample images and the second negative sample images.
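Claims 2 and 4 both divide labeled sample images into positive and negative sets according to their image markers before training. A trivial sketch of this split; the `(image, is_living_body)` tuple layout is a hypothetical encoding of the image marker, not one specified by the claims:

```python
from typing import List, Tuple


def split_by_marker(samples: List[Tuple[str, bool]]) -> Tuple[List[str], List[str]]:
    """Divide sample images into positive (user is a living body) and
    negative (user is not a living body) sets by their image markers."""
    positives = [img for img, is_live in samples if is_live]
    negatives = [img for img, is_live in samples if not is_live]
    return positives, negatives


pos, neg = split_by_marker([("a.png", True), ("b.png", False), ("c.png", True)])
print(pos)  # → ['a.png', 'c.png']
print(neg)  # → ['b.png']
```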
5. The method of any one of claims 1 to 4, wherein the designated face range comprises the face range below the eyes, and the occlusion comprises a mask.
6. A living body detection method, comprising:
acquiring a facial image to be detected, wherein the user in the facial image to be detected wears a mask;
performing living body detection on the user in the facial image to be detected according to the facial image to be detected and a pre-trained first living body detection model to obtain a first detection result, wherein the first living body detection model is trained based on a first sample image, the first sample image is obtained by occluding the face range below the eyes in a first face image, and the first face image is an image in which the user's face is not occluded;
performing living body detection on the user in the facial image to be detected according to the facial image to be detected and a pre-trained second living body detection model to obtain a second detection result, wherein the second living body detection model is trained based on a second sample image, and the second sample image is an image in which the user's face is not occluded;
determining a first probability value that the user is not a living body according to the first detection result, and determining a second probability value that the user is not a living body according to the second detection result;
performing weighted summation on the first probability value and the second probability value to obtain a third probability value that the user is not a living body;
and if the third probability value is greater than or equal to a second probability threshold, determining that the user is not a living body; otherwise, determining that the user is a living body.
7. The method of claim 6, wherein the first living body detection model is trained by:
acquiring a plurality of first face images and image markers of the first face images, each image marker indicating that the user in the image is a living body or is not a living body;
performing size normalization on the first face images, performing occlusion processing on the face range below the eyes in each size-normalized image, and taking the occluded images as first sample images;
dividing the first sample images into first positive sample images and first negative sample images according to the image markers;
and training the first living body detection model using the first positive sample images and the first negative sample images.
8. The method according to claim 7, wherein performing occlusion processing on the face range below the eyes in the size-normalized image comprises:
covering the face range below the eyes in the size-normalized image with black pixel points.
9. A living body detection apparatus comprising:
a first acquisition module, configured to acquire a facial image to be detected, wherein an occlusion is present within a designated face range of the facial image to be detected;
a first detection module, configured to perform living body detection on the user in the facial image to be detected according to the facial image to be detected and a pre-trained first living body detection model to obtain a first detection result, wherein the first living body detection model is trained based on a first sample image, the first sample image is obtained by occluding the designated face range in a first face image, and the first face image is an image in which the user's face is not occluded;
a second detection module, configured to perform living body detection on the user in the facial image to be detected according to the facial image to be detected and a pre-trained second living body detection model to obtain a second detection result, wherein the second living body detection model is trained based on a second sample image, and the second sample image is an image in which the user's face is not occluded;
and a first determining module, configured to determine a first probability value that the user is not a living body according to the first detection result and a second probability value that the user is not a living body according to the second detection result; perform weighted summation on the first probability value and the second probability value to obtain a third probability value that the user is not a living body; and, if the third probability value is greater than or equal to a second probability threshold, determine that the user is not a living body; otherwise, determine that the user is a living body.
10. The apparatus of claim 9, further comprising a first training module, wherein the first training module trains the first living body detection model by:
acquiring a plurality of first face images and image markers of the first face images, each image marker indicating that the user in the image is a living body or is not a living body;
performing size normalization on the first face images, performing occlusion processing on the designated face range in each size-normalized image, and taking the occluded images as first sample images;
dividing the first sample images into first positive sample images and first negative sample images according to the image markers;
and training the first living body detection model using the first positive sample images and the first negative sample images.
11. The apparatus of claim 10, wherein the first training module covers the designated face range in the size-normalized image with preset pixel points.
12. A living body detection apparatus comprising:
a second acquisition module, configured to acquire a facial image to be detected, wherein the user in the facial image to be detected wears a mask;
a second detection module, configured to perform living body detection on the user in the facial image to be detected according to the facial image to be detected and a pre-trained first living body detection model to obtain a first detection result, wherein the first living body detection model is trained based on a first sample image, the first sample image is obtained by occluding the face range below the eyes in a first face image, and the first face image is an image in which the user's face is not occluded;
a third detection module, configured to perform living body detection on the user in the facial image to be detected according to the facial image to be detected and a pre-trained second living body detection model to obtain a second detection result, wherein the second living body detection model is trained based on a second sample image, and the second sample image is an image in which the user's face is not occluded;
and a second determining module, configured to determine a first probability value that the user is not a living body according to the first detection result and a second probability value that the user is not a living body according to the second detection result; perform weighted summation on the first probability value and the second probability value to obtain a third probability value that the user is not a living body; and, if the third probability value is greater than or equal to a second probability threshold, determine that the user is not a living body; otherwise, determine that the user is a living body.
13. The apparatus of claim 12, further comprising a second training module, wherein the second training module trains the first living body detection model by:
acquiring a plurality of first face images and image markers of the first face images, each image marker indicating that the user in the image is a living body or is not a living body;
performing size normalization on the first face images, performing occlusion processing on the face range below the eyes in each size-normalized image, and taking the occluded images as first sample images;
dividing the first sample images into first positive sample images and first negative sample images according to the image markers;
and training the first living body detection model using the first positive sample images and the first negative sample images.
14. The apparatus of claim 13, wherein the second training module covers the face range below the eyes in the size-normalized image with black pixel points.
15. A living body detection device, comprising: a processor; and a memory arranged to store computer-executable instructions which, when executed, cause the processor to carry out the steps of the living body detection method of any one of claims 1 to 5, or the steps of the living body detection method of any one of claims 6 to 8.
16. A storage medium storing computer-executable instructions which, when executed, implement the steps of the living body detection method of any one of claims 1 to 5, or the steps of the living body detection method of any one of claims 6 to 8.
CN202010441322.5A 2020-05-22 2020-05-22 Living body detection method, living body detection device, living body detection apparatus, and storage medium Active CN111340014B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010441322.5A CN111340014B (en) 2020-05-22 2020-05-22 Living body detection method, living body detection device, living body detection apparatus, and storage medium
CN202011377618.1A CN112507831B (en) 2020-05-22 2020-05-22 Living body detection method, living body detection device, living body detection apparatus, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010441322.5A CN111340014B (en) 2020-05-22 2020-05-22 Living body detection method, living body detection device, living body detection apparatus, and storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202011377618.1A Division CN112507831B (en) 2020-05-22 2020-05-22 Living body detection method, living body detection device, living body detection apparatus, and storage medium

Publications (2)

Publication Number Publication Date
CN111340014A CN111340014A (en) 2020-06-26
CN111340014B true CN111340014B (en) 2020-11-17

Family

ID=71186446

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202011377618.1A Active CN112507831B (en) 2020-05-22 2020-05-22 Living body detection method, living body detection device, living body detection apparatus, and storage medium
CN202010441322.5A Active CN111340014B (en) 2020-05-22 2020-05-22 Living body detection method, living body detection device, living body detection apparatus, and storage medium

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202011377618.1A Active CN112507831B (en) 2020-05-22 2020-05-22 Living body detection method, living body detection device, living body detection apparatus, and storage medium

Country Status (1)

Country Link
CN (2) CN112507831B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111401348B (en) * 2020-06-05 2020-09-04 支付宝(杭州)信息技术有限公司 Living body detection method and system for target object
CN111985340A (en) * 2020-07-22 2020-11-24 深圳市威富视界有限公司 Face recognition method and device based on neural network model and computer equipment
CN111680675B (en) * 2020-08-14 2020-11-17 腾讯科技(深圳)有限公司 Face living body detection method, system, device, computer equipment and storage medium
CN112800847B (en) * 2020-12-30 2023-03-24 广州广电卓识智能科技有限公司 Face acquisition source detection method, device, equipment and medium
CN114973347B (en) * 2021-04-22 2023-07-21 中移互联网有限公司 Living body detection method, device and equipment

Citations (3)

Publication number Priority date Publication date Assignee Title
CN106997452A (en) * 2016-01-26 2017-08-01 北京市商汤科技开发有限公司 Live body verification method and device
CN208351494U (en) * 2018-05-23 2019-01-08 国政通科技股份有限公司 Face identification system
CN110188715A (en) * 2019-06-03 2019-08-30 广州二元科技有限公司 A kind of video human face biopsy method of multi frame detection ballot

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
KR102387571B1 (en) * 2017-03-27 2022-04-18 삼성전자주식회사 Liveness test method and apparatus for
CN107358157B (en) * 2017-06-07 2020-10-02 创新先进技术有限公司 Face living body detection method and device and electronic equipment
CN107463920A (en) * 2017-08-21 2017-12-12 吉林大学 A kind of face identification method for eliminating partial occlusion thing and influenceing
CN107862247B (en) * 2017-10-13 2018-09-11 平安科技(深圳)有限公司 A kind of human face in-vivo detection method and terminal device
CN110287767A (en) * 2019-05-06 2019-09-27 深圳市华付信息技术有限公司 Can attack protection biopsy method, device, computer equipment and storage medium
US10977355B2 (en) * 2019-09-11 2021-04-13 Lg Electronics Inc. Authentication method and device through face recognition
CN110728330A (en) * 2019-10-23 2020-01-24 腾讯科技(深圳)有限公司 Object identification method, device, equipment and storage medium based on artificial intelligence
CN111178341B (en) * 2020-04-10 2021-01-26 支付宝(杭州)信息技术有限公司 Living body detection method, device and equipment

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN106997452A (en) * 2016-01-26 2017-08-01 北京市商汤科技开发有限公司 Live body verification method and device
CN208351494U (en) * 2018-05-23 2019-01-08 国政通科技股份有限公司 Face identification system
CN110188715A (en) * 2019-06-03 2019-08-30 广州二元科技有限公司 A kind of video human face biopsy method of multi frame detection ballot

Also Published As

Publication number Publication date
CN112507831A (en) 2021-03-16
CN112507831B (en) 2022-09-23
CN111340014A (en) 2020-06-26

Similar Documents

Publication Publication Date Title
CN111340014B (en) Living body detection method, living body detection device, living body detection apparatus, and storage medium
CN107358157B (en) Face living body detection method and device and electronic equipment
CN111553333B (en) Face image recognition model training method, recognition method, device and electronic equipment
CN112800468B (en) Data processing method, device and equipment based on privacy protection
KR102316230B1 (en) Image processing method and device
CN111538968A (en) Identity verification method, device and equipment based on privacy protection
CN111523431B (en) Face recognition method, device and equipment
KR20110024169A (en) Eye state detection method
CN113223101B (en) Image processing method, device and equipment based on privacy protection
CN111091112B (en) Living body detection method and device
CN112308113A (en) Target identification method, device and medium based on semi-supervision
CN111753275A (en) Image-based user privacy protection method, device, equipment and storage medium
CN111291797A (en) Anti-counterfeiting identification method and device and electronic equipment
CN111652286A (en) Object identification method, device and medium based on graph embedding
CN110059569B (en) Living body detection method and device, and model evaluation method and device
CN111160251A (en) Living body identification method and device
CN117036829A (en) Method and system for achieving label enhancement based on prototype learning for identifying fine granularity of blade
CN112825116B (en) Method, device, medium and equipment for detecting and tracking human face of monitoring video image
CN115830633B (en) Pedestrian re-recognition method and system based on multi-task learning residual neural network
CN115546908A (en) Living body detection method, device and equipment
CN111753583A (en) Identification method and device
CN114998962A (en) Living body detection and model training method and device
CN114511911A (en) Face recognition method, device and equipment
Naika et al. Asymmetric region local binary pattern operator for person-dependent facial expression recognition
CN112927219B (en) Image detection method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40030939

Country of ref document: HK