CN112613471B - Face living body detection method, device and computer readable storage medium

Face living body detection method, device and computer readable storage medium

Info

Publication number
CN112613471B
CN112613471B (application CN202011628054.4A)
Authority
CN
China
Prior art keywords
face
living body
image
body detection
positive
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011628054.4A
Other languages
Chinese (zh)
Other versions
CN112613471A (en)
Inventor
余述超
浦贵阳
程耀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Hangzhou Information Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, China Mobile Hangzhou Information Technology Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN202011628054.4A priority Critical patent/CN112613471B/en
Publication of CN112613471A publication Critical patent/CN112613471A/en
Application granted granted Critical
Publication of CN112613471B publication Critical patent/CN112613471B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection
    • G06V40/45Detection of the body part being alive
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The embodiment of the invention relates to the fields of image processing and computer vision, and discloses a face living body detection method, a device and a computer readable storage medium. The face living body detection method comprises the following steps: acquiring a near-infrared image and a visible-light image containing a face to be detected; cropping the near-infrared image and the visible-light image respectively to obtain a first face image block of the near-infrared image and a second face image block of the visible-light image, wherein the first face image block and the second face image block each contain at least one facial feature; and inputting the first face image block and the second face image block into a preset neural network model for living body detection to determine whether the face to be detected is a living body. The method, the device and the computer readable storage medium can improve the accuracy of face living body detection and ensure a good user experience.

Description

Face living body detection method, device and computer readable storage medium
Technical Field
The embodiments of the invention relate to the fields of image processing and computer vision, and in particular to a face living body detection method, a face living body detection device and a computer readable storage medium.
Background
The face image is the most accessible biometric for high-accuracy face recognition systems, but it is also vulnerable to many types of counterfeit faces. Accordingly, living body detection techniques, which aim to determine whether the face image captured by a camera is genuine, have been developed. Current face living body detection methods include interactive living body detection based on the RGB face image of a monocular camera, and binocular living body detection based on the NIR image and the RGB image of a binocular camera. Both rely on hand-crafted features (such as LBP, HoG, SIFT, SURF and DoG) to model the feature distributions of living and non-living faces and thereby distinguish whether a face image is a living body.
The inventors have found at least the following problems in the prior art. Living body detection based on monocular RGB face images is mainly interactive: the user is required to cooperate with actions such as nodding, opening the mouth and blinking, so the cooperation time is long and the user experience is poor. Binocular living body detection based on the NIR image and the RGB image of a binocular camera mainly relies on hand-crafted features that are then classified with an SVM (support vector machine). Hand-crafted features are sensitive to illumination, face pose and faces with unusual appearance, and the SVM model cannot capture ambiguous information, so the generalization ability is poor and the accuracy of face living body detection is low.
Disclosure of Invention
The embodiments of the invention aim to provide a face living body detection method, a face living body detection device and a computer readable storage medium, which can improve the accuracy of face living body detection and ensure a good user experience.
In order to solve the above technical problems, an embodiment of the present invention provides a face living body detection method, comprising:
acquiring a near-infrared image and a visible-light image containing a face to be detected; cropping the near-infrared image and the visible-light image respectively to obtain a first face image block of the near-infrared image and a second face image block of the visible-light image, wherein the first face image block and the second face image block each contain at least one facial feature; and inputting the first face image block and the second face image block into a preset neural network model for living body detection to determine whether the face to be detected is a living body.
The embodiment of the invention also provides a face living body detection device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the face living body detection method described above.
The embodiment of the invention also provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the face living body detection method described above.
Compared with the prior art, the embodiment of the invention performs living body detection directly on the acquired near-infrared image and visible-light image containing the face to be detected, so the user does not need to cooperate with actions such as nodding, opening the mouth or blinking, which improves the user experience. By cropping the near-infrared image and the visible-light image respectively into a first face image block and a second face image block that each contain at least one facial feature, the drawback of "detecting on the whole face picture and failing to make full use of local face information" is avoided; performing face living body detection on face image blocks containing local features improves the accuracy of face living body detection.
In addition, the preset neural network model is obtained through training in the following way. S1: collecting positive and negative sample images, wherein the positive sample images comprise real living body near-infrared images and real living body visible-light images, and the negative sample images comprise forged living body near-infrared images and forged living body visible-light images. S2: inputting the positive and negative sample images into a convolutional neural network model to obtain a living body detection result. S3: calculating a loss function of the convolutional neural network model according to the living body detection result, and adjusting the learning rate of the convolutional neural network model according to the loss function. S4: repeating steps S1 to S3 until the loss function meets a preset requirement, taking the learning rate corresponding to the loss function that meets the preset requirement as the final learning rate of the convolutional neural network model, and taking the convolutional neural network model with the final learning rate as the preset neural network model.
In addition, before the positive and negative sample images are input into the convolutional neural network model, the method further comprises: adjusting the image size of the positive and negative sample images so that the image size equals a preset threshold; randomly flipping and rotating the resized positive and negative sample images; and then cropping the randomly flipped and rotated positive and negative sample images based on face bounding boxes to obtain a plurality of face image blocks, wherein each face image block contains at least one facial feature. Inputting the positive and negative sample images into the convolutional neural network model to obtain the living body detection result then comprises: inputting the plurality of face image blocks into the convolutional neural network model to obtain the living body detection result.
In addition, inputting the plurality of face image blocks into the convolutional neural network model to obtain the living body detection result includes: extracting features from each face image block to obtain a plurality of face features; concatenating the plurality of face features and randomly dropping part of them to obtain a feature matrix; and extracting features from the feature matrix to obtain the living body detection result.
In addition, the feature extraction for each face image block includes: performing deep feature extraction on each face image block with a preset residual network, wherein the preset residual network is divided into a first part, a second part, a third part, a fourth part and a fifth part in order from lower to higher network layers, and the features extracted by the third part are taken as the plurality of face features. The feature extraction on the feature matrix to obtain the living body detection result includes: performing feature extraction on the feature matrix through convolution, and sequentially feeding the extraction result into the fourth part, the fifth part and the fully connected layer of the preset neural network model to obtain the living body detection result.
In addition, after the features extracted by the third part are taken as the plurality of face features, the method further includes: inputting the face features into an SENet module to obtain new face features. The concatenating and random dropping of the face features to obtain the feature matrix then comprises: concatenating and randomly dropping the new face features to obtain the feature matrix.
In addition, after the positive and negative sample images are collected, the method further comprises: labeling the real living body near-infrared images, the real living body visible-light images, the forged living body near-infrared images and the forged living body visible-light images respectively;
the calculating the loss function of the convolutional neural network model according to the living body detection result comprises the following steps: the loss function is calculated according to the following formula:wherein L is the loss function, N is the total number of the positive and negative sample images, y i Labeling the ith positive and negative sample image, p i Predicting correct probability for the ith positive and negative sample images, and when the positive and negative sample images are real living body near infrared images or real living body visible light images, y i Equal to 1; when the positive and negative sample images are forged living body near infrared images or forged living body visible light images, y i Equal to 0.
Drawings
One or more embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which like reference numerals indicate similar elements; the figures are not to be taken as limiting unless otherwise indicated.
Fig. 1 is a flowchart of a face living body detection method provided according to a first embodiment of the present invention;
fig. 2 is a flowchart of a face living body detection method according to a second embodiment of the present invention;
FIG. 3 is a flowchart of a preset neural network model training method according to a second embodiment of the present invention;
fig. 4 is a schematic structural view of a face living body detection apparatus according to a third embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention clearer, the embodiments of the present invention are described in detail below with reference to the accompanying drawings. However, those of ordinary skill in the art will understand that numerous technical details are set forth in the embodiments in order to provide a better understanding of the present invention, and that the claimed invention may still be practiced without these technical details and with various changes and modifications based on the following embodiments.
The first embodiment of the invention relates to a face living body detection method. The specific flow, shown in fig. 1, comprises the following steps:
s101: and acquiring a near infrared image and a visible light image containing the face to be detected.
Specifically, the near-infrared image and the visible-light image containing the face to be detected may be images acquired by a binocular camera, and they may be images of a real face or of an electronic-screen face, a paper face picture or a face mask.
S102: cropping the near-infrared image and the visible-light image respectively to obtain a first face image block of the near-infrared image and a second face image block of the visible-light image.
Specifically, the first face image block and the second face image block each include at least one face feature.
S103: inputting the first face image block and the second face image block into a preset neural network model for living body detection to determine whether the face to be detected is a living body.
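As a minimal sketch (not prescribed by the patent) of steps S101 to S103 at inference time, the snippet below assumes PyTorch, a hypothetical trained model `liveness_model`, a hypothetical helper `crop_face_patches`, and a 0.5 decision threshold.

```python
# Minimal sketch of steps S101-S103 (inference only). `liveness_model` and
# `crop_face_patches` are hypothetical names; the patent does not prescribe
# these interfaces or the 0.5 decision threshold.
import torch

def detect_liveness(nir_img: torch.Tensor, rgb_img: torch.Tensor,
                    liveness_model: torch.nn.Module, crop_face_patches) -> bool:
    """nir_img: 1xHxW near-infrared image, rgb_img: 3xHxW visible-light image."""
    # S102: crop each image into face image blocks that each contain at least
    # one facial feature (eye, nose, mouth, ...).
    nir_patches = crop_face_patches(nir_img)   # list of patch tensors
    rgb_patches = crop_face_patches(rgb_img)   # list of patch tensors
    # S103: feed both groups of patches into the preset neural network model.
    with torch.no_grad():
        prob_live = liveness_model(nir_patches + rgb_patches)  # scalar in [0, 1]
    return prob_live.item() > 0.5  # True -> living face, False -> non-living
```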
Compared with the prior art, the embodiment of the invention performs living body detection directly on the acquired near-infrared image and visible-light image containing the face to be detected, so the user does not need to cooperate with actions such as nodding, opening the mouth or blinking, which improves the user experience. By cropping the near-infrared image and the visible-light image respectively into a first face image block and a second face image block that each contain at least one facial feature, the drawback of "detecting on the whole face picture and failing to make full use of local face information" is avoided; performing face living body detection on face image blocks containing local features improves the accuracy of face living body detection.
The second embodiment of the present invention relates to a face living body detection method that further improves on the first embodiment. The specific improvement is that the second embodiment describes in detail how the preset neural network model is trained.
The specific flow of this embodiment is shown in fig. 2, and includes:
s201: and acquiring a near infrared image and a visible light image containing the face to be detected.
S202: and cutting the near infrared image and the visible light image respectively to obtain a first face image block of the near infrared image and a second face image block of the visible light image.
S203: and inputting the first face image block and the second face image block into a convolutional neural network model with the learning rate adjusted to carry out living body detection, and judging whether the face to be detected is a living body or not.
Specifically, referring to fig. 3, the present embodiment trains the preset neural network model as follows:
s1: positive and negative sample images are acquired.
Specifically, the positive sample images include real living body near-infrared images and real living body visible-light images, and the negative sample images include forged living body near-infrared images and forged living body visible-light images. Further, the positive sample images are RGB and NIR images of company staff acquired by the binocular camera under different lighting conditions, angles and distances, and the negative sample images are RGB and NIR images of electronic-screen faces, paper face pictures and face masks under different lighting conditions, angles and distances.
It should be noted that, after the positive and negative sample images are collected, this embodiment further labels the real living body near-infrared images, the real living body visible-light images, the forged living body near-infrared images and the forged living body visible-light images respectively, so that subsequent steps can determine whether the living body detection result output by the model is accurate, which reduces the difficulty of model training.
S2: inputting the positive and negative sample images into a convolutional neural network model to obtain a living body detection result.
Specifically, before the positive and negative sample images are input into the convolutional neural network model, the method further comprises: adjusting the image size of the positive and negative sample images so that the image size equals a preset threshold; randomly flipping and rotating the resized positive and negative sample images; and then cropping the randomly flipped and rotated positive and negative sample images based on face bounding boxes to obtain a plurality of face image blocks, wherein each face image block contains at least one facial feature. This improves the accuracy and generalization performance of the neural network model trained subsequently.
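A possible preprocessing sketch is shown below, assuming PIL images, a 224x224 preset size, 96x96 patches and a hypothetical `detect_face` callable that returns a face box and facial landmarks on the augmented image; none of these values are fixed by the patent.

```python
# Sketch of the resize / random flip-rotate / face-box cropping pipeline.
# PRESET_SIZE, HALF_PATCH and detect_face are illustrative assumptions.
import random
from PIL import Image, ImageOps

PRESET_SIZE = (224, 224)   # assumed value of the "preset threshold"
HALF_PATCH = 48            # assumed half side length of each face image block

def preprocess(sample: Image.Image, detect_face):
    # 1) adjust the image size so that it equals the preset threshold
    img = sample.resize(PRESET_SIZE)
    # 2) random horizontal flip and a small random rotation
    if random.random() < 0.5:
        img = ImageOps.mirror(img)
    img = img.rotate(random.uniform(-10, 10))
    # 3) crop patches around the face box / landmarks so that every face
    #    image block contains at least one facial feature
    face_box, landmarks = detect_face(img)   # hypothetical detector
    patches = [img.crop((x - HALF_PATCH, y - HALF_PATCH,
                         x + HALF_PATCH, y + HALF_PATCH)) for (x, y) in landmarks]
    patches.append(img.crop(face_box))       # the whole face region as one block
    return patches
```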
More specifically, inputting the plurality of face image blocks into the convolutional neural network model to obtain the living body detection result comprises: extracting features from each face image block to obtain a plurality of face features; concatenating the plurality of face features and randomly dropping part of them to obtain a feature matrix; and extracting features from the feature matrix to obtain the living body detection result. It can be understood that concatenating and randomly dropping features of the NIR face image and the RGB face image in this way combines the face features more tightly and makes the model more robust against non-living attacks.
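A minimal sketch of this fusion step follows, under the assumption that "concatenating and randomly dropping" maps to channel-wise concatenation followed by dropout; the drop probability of 0.3 is an assumption.

```python
# Feature fusion sketch: splice patch features along channels, then randomly
# drop part of them. Dropout2d and p=0.3 are assumptions, not patent text.
import torch
import torch.nn as nn

class FeatureFusion(nn.Module):
    def __init__(self, drop_prob: float = 0.3):
        super().__init__()
        self.dropout = nn.Dropout2d(p=drop_prob)  # randomly zeroes whole feature channels

    def forward(self, patch_features):
        # patch_features: list of N x C x H x W feature maps, one per NIR/RGB face patch
        fused = torch.cat(patch_features, dim=1)  # splice along the channel axis
        return self.dropout(fused)                # randomly remove part of the features
```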
Further, the feature extraction for each face image block includes: performing deep feature extraction on each face image block with a preset residual network, wherein the preset residual network is divided into a first part, a second part, a third part, a fourth part and a fifth part in order from lower to higher network layers, and the features extracted by the third part are taken as the plurality of face features. The feature extraction on the feature matrix to obtain the living body detection result includes: performing feature extraction on the feature matrix through convolution, and sequentially feeding the extraction result into the fourth part, the fifth part and the fully connected layer of the preset neural network model to obtain the living body detection result. This improves the accuracy of the living body detection result while reducing the difficulty of model training. It is understood that the preset residual network in this embodiment may be ResNet18.
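One possible reading of the five-part split is sketched below with torchvision's ResNet18; the exact split points, the 1x1 fusion convolution and the single-logit head are assumptions, and single-channel NIR patches are assumed to be repeated to 3 channels before entering the network.

```python
# Sketch of a five-part ResNet18 backbone: per-patch features from part 3,
# concatenation + dropout into a feature matrix, a fusion convolution, then
# parts 4, 5 and a fully connected layer.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class PatchLivenessNet(nn.Module):
    def __init__(self, num_patches: int, drop_prob: float = 0.3):
        super().__init__()
        r = resnet18()
        self.part1 = nn.Sequential(r.conv1, r.bn1, r.relu, r.maxpool)  # first part
        self.part2, self.part3 = r.layer1, r.layer2                    # second, third part
        self.part4, self.part5 = r.layer3, r.layer4                    # fourth, fifth part
        # fuse the concatenated patch features (layer2 outputs 128 channels each)
        self.fuse_conv = nn.Conv2d(128 * num_patches, 128, kernel_size=1)
        self.dropout = nn.Dropout2d(p=drop_prob)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(512, 1)   # logit for the "living body" class

    def forward(self, patches):
        # patches: list of N x 3 x H x W face image blocks (NIR and RGB)
        feats = [self.part3(self.part2(self.part1(p))) for p in patches]  # third-part features
        x = self.dropout(torch.cat(feats, dim=1))      # feature matrix (splice + random drop)
        x = self.fuse_conv(x)                          # feature extraction by convolution
        x = self.part5(self.part4(x))                  # fourth and fifth parts
        x = self.pool(x).flatten(1)
        return self.fc(x)                              # fully connected layer -> detection result
```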
It is noted that, after the features extracted by the third part are taken as the plurality of face features, the method further includes: inputting the face features into an SENet module to obtain new face features. The concatenating and random dropping of the face features to obtain the feature matrix then comprises: concatenating and randomly dropping the new face features to obtain the feature matrix. Adding the SENet module to the preset residual network makes the trained neural network model more sensitive to channel information, so the model performs better.
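A standard squeeze-and-excitation block is one way to realize the SENet module mentioned above; the reduction ratio of 16 is an assumption.

```python
# Squeeze-and-excitation block: global pooling ("squeeze") followed by two
# fully connected layers ("excitation") that re-weight each channel.
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)        # squeeze: global average pooling
        self.excite = nn.Sequential(               # excitation: channel re-weighting
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        n, c, _, _ = x.shape
        w = self.excite(self.pool(x).view(n, c)).view(n, c, 1, 1)
        return x * w   # channel-weighted "new face features"
```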
S3: and calculating a loss function of the convolutional neural network model according to the living body detection result, and adjusting the learning rate of the convolutional neural network model according to the loss function.
Specifically, the present embodiment calculates the loss function according to the following formula:

L = -\frac{1}{N}\sum_{i=1}^{N}\left[ y_i \log p_i + (1 - y_i)\log(1 - p_i) \right]

wherein L is the loss function, N is the total number of positive and negative sample images, y_i is the label of the i-th sample image, and p_i is the probability predicted for the i-th sample image of being a living body. When the sample image is a real living body near-infrared image or a real living body visible-light image, y_i equals 1; when the sample image is a forged living body near-infrared image or a forged living body visible-light image, y_i equals 0.
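The formula above is the standard binary cross-entropy; a short PyTorch sketch follows, assuming the network outputs one logit per sample as in the backbone sketch above.

```python
# Binary cross-entropy over live (1.0) vs. forged (0.0) labels.
import torch
import torch.nn.functional as F

def liveness_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # labels: 1.0 for real living body samples, 0.0 for forged samples
    return F.binary_cross_entropy_with_logits(logits.squeeze(1), labels.float())
```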
S4: judging whether the loss function meets the preset requirement; if so, executing step S5; if not, returning to step S1.
S5: taking the learning rate corresponding to the loss function as the final learning rate of the convolutional neural network model, and taking the convolutional neural network model with the final learning rate as the preset neural network model.
In detail, with respect to steps S4 and S5, the present embodiment may calculate the learning rate of the convolutional neural network model according to the following formula:

\eta_t = \frac{1}{2}\,\eta_{max}\left(1 + \cos\!\left(\frac{T_{cur}}{T_{max}}\pi\right)\right)

wherein \eta_{max} is the initial learning rate, \eta_t is the learning rate at the t-th training iteration, T_{cur} is the number of training iterations completed so far, and T_{max} is the total number of training iterations.
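This matches PyTorch's cosine annealing schedule with a minimum learning rate of 0; a wiring sketch follows, where the SGD optimizer, eta_max = 0.01 and T_max = 100 are assumptions.

```python
# Cosine annealing of the learning rate; call scheduler.step() once per epoch.
import torch
import torch.nn as nn

def make_optimizer_and_scheduler(model: nn.Module, eta_max: float = 0.01,
                                 t_max: int = 100):
    optimizer = torch.optim.SGD(model.parameters(), lr=eta_max)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
        optimizer, T_max=t_max, eta_min=0.0)
    return optimizer, scheduler
```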
It should be noted that judging whether the loss function meets the preset requirement means judging, after the value of the loss function has been obtained, whether the error between that value and a preset value falls within a preset range. The preset value and the preset range can be set according to actual requirements; for example, the preset value may be 0.1 and the preset range between 0.02 and 0.05. This embodiment does not specifically limit the preset value or the preset range.
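A sketch of the S1 to S5 loop with the example stopping rule above (preset value 0.1, preset range 0.02 to 0.05) is given below; the data loader, model and epoch count are illustrative assumptions.

```python
# Training loop sketch: forward pass, loss, learning-rate adjustment, and a
# stop condition based on the preset value / range from the example above.
import torch
import torch.nn.functional as F

PRESET_VALUE, RANGE_LO, RANGE_HI = 0.1, 0.02, 0.05

def train(model, data_loader, optimizer, scheduler, max_epochs: int = 100):
    for epoch in range(max_epochs):
        loss = None
        for patches, labels in data_loader:               # S1/S2: samples -> model
            logits = model(patches)
            loss = F.binary_cross_entropy_with_logits(    # S3: loss function
                logits.squeeze(1), labels.float())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        scheduler.step()                                  # S3: adjust the learning rate
        # S4/S5: stop once the loss error lies within the preset range
        if loss is not None and RANGE_LO <= abs(loss.item() - PRESET_VALUE) <= RANGE_HI:
            break
    return model   # convolutional neural network model with the final learning rate
```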
Compared with the prior art, the embodiment of the invention performs living body detection directly on the acquired near-infrared image and visible-light image containing the face to be detected, so the user does not need to cooperate with actions such as nodding, opening the mouth or blinking, which improves the user experience. By cropping the near-infrared image and the visible-light image respectively into a first face image block and a second face image block that each contain at least one facial feature, the drawback of "detecting on the whole face picture and failing to make full use of local face information" is avoided; performing face living body detection on face image blocks containing local features improves the accuracy of face living body detection.
A third embodiment of the present invention relates to a face living body detection apparatus, as shown in fig. 4, including:
at least one processor 401; and,
a memory 402 communicatively coupled to the at least one processor 401; wherein,
the memory 402 stores instructions executable by the at least one processor 401, and the instructions are executed by the at least one processor 401 to enable the at least one processor 401 to perform the face living body detection method described above.
The memory 402 and the processor 401 are connected by a bus. The bus may comprise any number of interconnected buses and bridges, which link the various circuits of the one or more processors 401 and the memory 402 together. The bus may also connect various other circuits such as peripherals, voltage regulators and power management circuits, which are well known in the art and therefore are not described further herein. A bus interface provides an interface between the bus and a transceiver. The transceiver may be one element or a plurality of elements, such as a plurality of receivers and transmitters, providing a unit for communicating with various other apparatus over a transmission medium. Data processed by the processor 401 is transmitted over a wireless medium through an antenna, and the antenna also receives data and transfers it to the processor 401.
The processor 401 is responsible for managing the bus and general processing, and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management and other control functions. The memory 402 may be used to store data used by the processor 401 in performing operations.
A fourth embodiment of the present invention relates to a computer-readable storage medium storing a computer program. The computer program implements the above-described method embodiments when executed by a processor.
That is, it will be understood by those skilled in the art that all or part of the steps of the methods in the embodiments described above may be implemented by a program stored in a storage medium, where the program includes several instructions for causing a device (which may be a single-chip microcomputer, a chip or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples of carrying out the invention and that various changes in form and details may be made therein without departing from the spirit and scope of the invention.

Claims (7)

1. A face living body detection method, characterized by comprising:
acquiring a near infrared image and a visible light image containing a face to be detected;
cutting the near infrared image and the visible light image respectively to obtain a first face image block of the near infrared image and a second face image block of the visible light image, wherein the first face image block and the second face image block at least comprise one face feature;
inputting the first face image block and the second face image block into a preset neural network model to carry out living body detection, and judging whether the face to be detected is a living body or not;
the preset neural network model is obtained through training in the following mode:
s1: collecting positive and negative sample images, wherein the positive sample image comprises a real living body near infrared image and a real living body visible light image, and the negative sample image comprises a fake living body near infrared image and a fake living body visible light image;
s2: inputting the positive and negative sample images into a convolutional neural network model to obtain a living body detection result;
s3: calculating a loss function of the convolutional neural network model according to the living body detection result, and adjusting the learning rate of the convolutional neural network model according to the loss function;
s4: repeating the steps S1 to S3 until the loss function meets the preset requirement, taking the learning rate corresponding to the loss function meeting the preset requirement as the final learning rate of the convolutional neural network model, and taking the convolutional neural network model with the final learning rate as the preset neural network model;
the step of inputting the positive and negative sample images into a convolutional neural network model to obtain a living body detection result comprises the following steps:
extracting features of each face image block to obtain a plurality of face features;
and splicing and randomly removing the plurality of face features to obtain a feature matrix, and extracting features of the feature matrix to obtain the living body detection result.
2. The face living body detection method according to claim 1, characterized by further comprising, before inputting the positive and negative sample images into a convolutional neural network model:
adjusting the image sizes of the positive and negative sample images so that the image sizes are equal to a preset threshold value;
and randomly overturning and rotating the positive and negative sample images after the size of the image is regulated, and then cutting the positive and negative sample images after the random overturning and rotating based on human face frames to obtain a plurality of human face image blocks, wherein each human face image block at least comprises one human face feature.
3. The face living body detection method according to claim 1, wherein the feature extraction is performed on each of the face image blocks, comprising:
carrying out depth feature extraction on each face image block by adopting a preset residual error network, wherein the preset residual error network is divided into a first part, a second part, a third part, a fourth part and a fifth part according to the sequence from low to high of a network layer;
taking the features extracted in the third part as the plurality of face features;
the feature extraction is performed on the feature matrix to obtain the living body detection result, which comprises the following steps:
and carrying out feature extraction on the feature matrix through convolution, and sequentially inputting the feature extraction result into the fourth part, the fifth part and the full-connection layer of the preset neural network model to obtain the living body detection result.
4. A face living body detection method according to claim 3, characterized by further comprising, after taking the features extracted by the third section as the plurality of face features:
inputting the face features into a SEnet module to obtain new face features;
the step of splicing and randomly removing the face features to obtain a feature matrix comprises the following steps:
and splicing and randomly removing the new face features to obtain the feature matrix.
5. The face living body detection method according to claim 1, further comprising, after the positive and negative sample images are acquired: marking the real living body near-infrared image, the real living body visible light image, the fake living body near-infrared image and the fake living body visible light image respectively;
the calculating the loss function of the convolutional neural network model according to the living body detection result comprises: calculating the loss function according to the following formula:

L = -\frac{1}{N}\sum_{i=1}^{N}\left[ y_i \log p_i + (1 - y_i)\log(1 - p_i) \right]

wherein L is the loss function, N is the total number of positive and negative sample images, y_i is the label of the i-th sample image, and p_i is the probability predicted for the i-th sample image of being a living body; when the sample image is a real living body near-infrared image or a real living body visible-light image, y_i equals 1; when the sample image is a forged living body near-infrared image or a forged living body visible-light image, y_i equals 0.
6. A face living body detection device, characterized by comprising: at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the face living body detection method according to any one of claims 1 to 5.
7. A computer readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the face living body detection method according to any one of claims 1 to 5.
CN202011628054.4A 2020-12-31 2020-12-31 Face living body detection method, device and computer readable storage medium Active CN112613471B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011628054.4A CN112613471B (en) 2020-12-31 2020-12-31 Face living body detection method, device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011628054.4A CN112613471B (en) 2020-12-31 2020-12-31 Face living body detection method, device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN112613471A CN112613471A (en) 2021-04-06
CN112613471B true CN112613471B (en) 2023-08-01

Family

ID=75253213

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011628054.4A Active CN112613471B (en) 2020-12-31 2020-12-31 Face living body detection method, device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112613471B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113989903B (en) * 2021-11-15 2023-08-29 北京百度网讯科技有限公司 Face living body detection method and device, electronic equipment and storage medium
CN115205939B (en) * 2022-07-14 2023-07-25 北京百度网讯科技有限公司 Training method and device for human face living body detection model, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107358157A (en) * 2017-06-07 2017-11-17 阿里巴巴集团控股有限公司 A kind of human face in-vivo detection method, device and electronic equipment
CN108664843A (en) * 2017-03-27 2018-10-16 北京三星通信技术研究有限公司 Live subject recognition methods, equipment and computer readable storage medium
CN110674800A (en) * 2019-12-04 2020-01-10 图谱未来(南京)人工智能研究院有限公司 Face living body detection method and device, electronic equipment and storage medium
CN111222380A (en) * 2018-11-27 2020-06-02 杭州海康威视数字技术股份有限公司 Living body detection method and device and recognition model training method thereof

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107862299B (en) * 2017-11-28 2021-08-06 电子科技大学 Living body face detection method based on near-infrared and visible light binocular cameras
CN109064397B (en) * 2018-07-04 2023-08-01 广州希脉创新科技有限公司 Image stitching method and system based on camera earphone
CN111639522B (en) * 2020-04-17 2023-10-31 北京迈格威科技有限公司 Living body detection method, living body detection device, computer equipment and storage medium
CN111680698A (en) * 2020-04-21 2020-09-18 北京三快在线科技有限公司 Image recognition method and device and training method and device of image recognition model

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108664843A (en) * 2017-03-27 2018-10-16 北京三星通信技术研究有限公司 Live subject recognition methods, equipment and computer readable storage medium
CN107358157A (en) * 2017-06-07 2017-11-17 阿里巴巴集团控股有限公司 A kind of human face in-vivo detection method, device and electronic equipment
CN111222380A (en) * 2018-11-27 2020-06-02 杭州海康威视数字技术股份有限公司 Living body detection method and device and recognition model training method thereof
CN110674800A (en) * 2019-12-04 2020-01-10 图谱未来(南京)人工智能研究院有限公司 Face living body detection method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN112613471A (en) 2021-04-06

Similar Documents

Publication Publication Date Title
US11842487B2 (en) Detection model training method and apparatus, computer device and storage medium
CN108229509B (en) Method and device for identifying object class and electronic equipment
EP3916627A1 (en) Living body detection method based on facial recognition, and electronic device and storage medium
CN104732601B (en) Automatic high-recognition-rate attendance checking device and method based on face recognition technology
CN112633144A (en) Face occlusion detection method, system, device and storage medium
WO2021051611A1 (en) Face visibility-based face recognition method, system, device, and storage medium
CN108124486A (en) Face living body detection method based on cloud, electronic device and program product
CN112613471B (en) Face living body detection method, device and computer readable storage medium
CN111639629B (en) Pig weight measurement method and device based on image processing and storage medium
US11935213B2 (en) Laparoscopic image smoke removal method based on generative adversarial network
WO2022213396A1 (en) Cat face recognition apparatus and method, computer device, and storage medium
CN113449704B (en) Face recognition model training method and device, electronic equipment and storage medium
CN112836625A (en) Face living body detection method and device and electronic equipment
CN107844742A (en) Facial image glasses minimizing technology, device and storage medium
US20240087368A1 (en) Companion animal life management system and method therefor
CN109993021A (en) The positive face detecting method of face, device and electronic equipment
CN111862040B (en) Portrait picture quality evaluation method, device, equipment and storage medium
CN111222380A (en) Living body detection method and device and recognition model training method thereof
CN113240655A (en) Method, storage medium and device for automatically detecting type of fundus image
CN112541394A (en) Black eye and rhinitis identification method, system and computer medium
CN111259763A (en) Target detection method and device, electronic equipment and readable storage medium
WO2015131710A1 (en) Method and device for positioning human eyes
CN113052236A (en) Pneumonia image classification method based on NASN
CN110675312A (en) Image data processing method, image data processing device, computer equipment and storage medium
CN113033305B (en) Living body detection method, living body detection device, terminal equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant