CN106203305B - Face living body detection method and device

Info

Publication number: CN106203305B
Authority: CN (China)
Prior art keywords: image, light, highlight, images, face
Legal status: Active
Application number: CN201610511947.8A
Other languages: Chinese (zh)
Other versions: CN106203305A
Inventors: 范浩强 (Fan Haoqiang), 印奇 (Yin Qi)
Current assignee: Beijing Megvii Technology Co Ltd; Beijing Maigewei Technology Co Ltd
Original assignee: Beijing Megvii Technology Co Ltd; Beijing Maigewei Technology Co Ltd
Application filed by Beijing Megvii Technology Co Ltd and Beijing Maigewei Technology Co Ltd
Priority to CN201610511947.8A
Publication of CN106203305A
Application granted
Publication of CN106203305B
Status: Active

Classifications

    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive

Abstract

The invention provides a face liveness detection method and a face liveness detection apparatus. The face liveness detection method includes the following steps: acquiring a non-fill-light image of a face object to be detected, captured while no fill light is provided, and a plurality of fill-light images, captured while each of a plurality of groups of light sources of similar brightness provides fill light in turn; calculating a highlight-free image and highlight images of the face object based on the non-fill-light image and the fill-light images; combining the highlight-free image and the highlight images into a multi-channel image, and inputting the multi-channel image into a trained convolutional neural network; and determining whether the face object is a living object based on an output of the convolutional neural network. The face liveness detection method and apparatus provided by the embodiments of the invention exploit the difference in highlight characteristics between a real living face and a mask, a screen, or the like to distinguish the real living face from the mask, the screen, or the like, thereby achieving face liveness detection.

Description

Face living body detection method and device
Technical Field
The invention relates to the technical field of face recognition, and in particular to a face liveness detection method and a face liveness detection apparatus.
Background
Face recognition is a biometric technology that identifies a person based on facial feature information. Face recognition typically uses a camera or video camera to capture images or video streams containing faces, automatically detects and tracks the faces in the images, and then performs recognition on the detected faces.
Face recognition has been widely used in many fields. However, because facial information can be copied and stolen, an impostor may impersonate the face of a real person with a face mask, a face displayed on a screen, or the like, which creates a security risk for face recognition. In particular, face recognition is increasingly applied in security and financial fields that require identity verification, such as remote bank account opening, access control systems, and remote transaction verification; in these high-security applications, attacks such as face masks must be guarded against.
Disclosure of Invention
The present invention has been made in view of the above problems. The invention provides a face liveness detection method and apparatus that synthesize a highlight-free image and highlight images from a non-fill-light image and a plurality of images (fill-light images) captured under the fill light of a plurality of groups of light sources, form a multi-channel image (containing highlight components) from the highlight-free image and the highlight images, and exploit the difference in highlight characteristics between a real living face and a mask, a screen, or the like to distinguish the real living face from the mask, the screen, or the like.
According to an aspect of the present invention, there is provided a face liveness detection method, including: acquiring a non-fill-light image captured while the face object to be detected receives no fill light, and a plurality of fill-light images captured while each of a plurality of groups of light sources of similar brightness provides fill light in turn; calculating a highlight-free image and highlight images of the face object based on the non-fill-light image and the fill-light images; combining the highlight-free image and the highlight images into a multi-channel image, and inputting the multi-channel image into a trained convolutional neural network; and determining whether the face object is a living object based on an output of the convolutional neural network.
In an embodiment of the present invention, some of the plurality of groups of light sources lie in a different angular range, relative to the face object, from the others, and calculating the highlight-free image and the highlight images of the face object based on the non-fill-light image and the fill-light images includes: calculating the highlight-free image of the face object based on the difference between the minimum pixel value at each position across all fill-light images and the pixel value of the non-fill-light image at the corresponding position; and calculating a highlight image of the face object based on the difference between the maximum and minimum pixel values at each position across the fill-light images captured under the groups of light sources within the same angular range.
Illustratively, determining whether the face object is a living object based on the output of the convolutional neural network includes: determining that the face object is a living object when the output of the convolutional neural network is greater than a predetermined threshold; and determining that the face object is a non-living object when the output of the convolutional neural network is less than or equal to the predetermined threshold.
In an embodiment of the present invention, the face liveness detection method further includes: after the non-fill-light image and the fill-light images are acquired, aligning the fill-light images with the non-fill-light image, and calculating the highlight-free image and the highlight images of the face object based on the non-fill-light image and the aligned fill-light images.
Illustratively, aligning the fill-light images with the non-fill-light image includes: performing face detection and face key-point detection on the non-fill-light image and the fill-light images; and aligning the fill-light images with the non-fill-light image based on the positions of the face key points.
In another aspect, the present invention further provides a face liveness detection apparatus, including: an acquisition module for acquiring a non-fill-light image captured while the face object to be detected receives no fill light, and a plurality of fill-light images captured while each of a plurality of groups of light sources of similar brightness provides fill light in turn; a calculation module for calculating a highlight-free image and highlight images of the face object based on the non-fill-light image and the fill-light images; a synthesis module for combining the highlight-free image and the highlight images into a multi-channel image and inputting the multi-channel image into a trained convolutional neural network; and an analysis module for determining whether the face object is a living object based on an output of the convolutional neural network.
In an embodiment of the invention, some of the plurality of groups of light sources lie in a different angular range, relative to the face object, from the others, and the calculation module is further configured to: calculate the highlight-free image of the face object based on the difference between the minimum pixel value at each position across all fill-light images and the pixel value of the non-fill-light image at the corresponding position; and calculate a highlight image of the face object based on the difference between the maximum and minimum pixel values at each position across the fill-light images captured under the groups of light sources within the same angular range.
Illustratively, the analysis module is further configured to: determine that the face object is a living object when the output of the convolutional neural network is greater than a predetermined threshold; and determine that the face object is a non-living object when the output of the convolutional neural network is less than or equal to the predetermined threshold.
In one embodiment of the present invention, the face liveness detection apparatus further includes an alignment module for aligning the fill-light images acquired by the acquisition module with the non-fill-light image, and the calculation module calculates the highlight-free image and the highlight images of the face object based on the non-fill-light image and the aligned fill-light images.
Illustratively, the alignment module is further configured to: perform face detection and face key-point detection on the non-fill-light image and the fill-light images; and align the fill-light images with the non-fill-light image based on the positions of the face key points.
According to the face liveness detection method and apparatus provided by the embodiments of the invention, a highlight-free image and highlight images are synthesized from a non-fill-light image and a plurality of images captured under the fill light of a plurality of groups of light sources, a multi-channel image (containing highlight components) is formed from the highlight-free image and the highlight images, and the difference in highlight characteristics between a real living face and a mask, a screen, or the like is exploited to distinguish the real living face from the mask, the screen, or the like, thereby achieving face liveness detection.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent by describing in more detail embodiments of the present invention with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings, like reference numbers generally represent like parts or steps.
FIG. 1 is a schematic block diagram of an exemplary electronic device for implementing a face liveness detection method and apparatus in accordance with embodiments of the present invention;
FIG. 2 is a schematic flow chart diagram of a face liveness detection method according to an embodiment of the invention;
FIG. 3 is a schematic diagram of an image acquisition device, and of light sources distributed around it, for a face liveness detection method according to an embodiment of the invention;
FIG. 4 is a schematic flow chart diagram of a face liveness detection method according to another embodiment of the present invention;
FIG. 5 is a schematic block diagram of a face liveness detection apparatus according to an embodiment of the present invention;
FIG. 6 is a schematic block diagram of a face liveness detection apparatus according to another embodiment of the present invention; and
FIG. 7 is a schematic block diagram of a face liveness detection system according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, exemplary embodiments according to the present invention will be described in detail below with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset of the embodiments of the invention and not all of them, and that the invention is not limited to the example embodiments described herein. All other embodiments that a person skilled in the art can derive from the embodiments described herein without inventive effort shall fall within the scope of protection of the invention.
Under certain lighting conditions, optical images formed by reflected light often exhibit local regions of specular highlight; for example, the surface of a human face produces a highlight component because of its stratum corneum. Differences in surface material lead to differences in the intensity and the angular response range of this highlight component. A real living face can therefore be distinguished from a mask, a screen, and the like based on the difference in highlight characteristics between a real living face and an imitation such as a mask or a face displayed on a screen. Based on this principle, the invention provides a face liveness detection method and apparatus, so as to prevent face recognition systems in high-security applications from being attacked by face masks and the like. Embodiments of the present invention are described below with reference to the drawings.
First, an exemplary electronic device 100 for implementing the face liveness detection method and apparatus of the embodiments of the present invention is described with reference to FIG. 1.
As shown in FIG. 1, electronic device 100 includes one or more processors 102, one or more memory devices 104, an input device 106, an output device 108, and an image sensor 110, which are interconnected via a bus system 112 and/or other form of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in fig. 1 are exemplary only, and not limiting, and the electronic device may have other components and structures as desired.
The processor 102 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 100 to perform desired functions.
The storage device 104 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory (cache). The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, and flash memory. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 102 may execute the program instructions to implement the client functionality (implemented by the processor) and/or other desired functionality of the embodiments of the invention described below. Various applications and various data, such as data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
The input device 106 may be a device used by a user to input instructions and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
The output device 108 may output various information (e.g., images or sounds) to an external (e.g., user), and may include one or more of a display, a speaker, and the like.
The image sensor 110 may take images (e.g., photographs, videos, etc.) desired by the user and store the taken images in the storage device 104 for use by other components.
For example, an example electronic device for implementing the method and apparatus for detecting a living human face according to the embodiment of the present invention may be implemented as an image capturing end of a smart phone, a tablet computer, an access control system, or the like.
Next, a face liveness detection method 200 according to an embodiment of the present invention will be described with reference to FIG. 2.
In step S210, a non-fill-light image of the face object to be detected, captured while no fill light is provided, and a plurality of fill-light images, captured while each of a plurality of groups of light sources of similar brightness provides fill light in turn, are acquired.
In one example, the images may be acquired from an image acquisition device, for example a camera on a smart device such as a mobile phone or an iPad.
The image acquisition device may capture an image of the face object to be detected without fill light, i.e., the non-fill-light image, denoted for example as I0. In addition, the image acquisition device may capture a plurality of fill-light images while each of a plurality of groups of light sources of similar brightness provides fill light in turn (for example, the brightness difference between the groups is smaller than a preset threshold, or the brightness is judged to be close by the human eye).
In one example, the image acquisition device captures four images, one under the fill light of each of four groups of light sources (e.g., as shown in FIG. 3). Although the image acquisition device is shown in FIG. 3 as a camera, one of ordinary skill in the art will appreciate that other image acquisition devices may be used. In FIG. 3, the first group of light sources includes light-emitting devices (e.g., light-emitting diodes) LED_1_1 and LED_1_2, the second group includes LED_2_1 and LED_2_2, the third group includes LED_3_1 and LED_3_2, and the fourth group includes LED_4_1 and LED_4_2. In a simple example, the four groups contain the same number of light-emitting devices (two each), and every light-emitting device has the same brightness. In addition, the average positions of each group's light-emitting devices relative to the camera are close to one another (i.e., the midpoints of the lines connecting the two light-emitting devices of each group are close to, or even coincide with, each other), while the light-emitting devices themselves are dispersed around the camera. In one example, the light-emitting devices of the first and second groups are arranged crosswise, as are those of the third and fourth groups.
It will be understood by those of ordinary skill in the art that, although four groups of light sources are shown in FIG. 3, any suitable number of groups may be provided as desired. Furthermore, the distribution of the light-emitting devices relative to the camera shown in FIG. 3 is merely exemplary; it allows images to be captured for the subsequent calculation and processing, and other distributions may be adopted as needed or as space allows.
Based on the above example, the image acquisition device captures a fill-light image I1 of the face object to be detected under the fill light of the first group of light sources, a fill-light image I2 under the second group, a fill-light image I3 under the third group, and a fill-light image I4 under the fourth group.
In this example, the non-fill-light image I0 and the fill-light images I1, I2, I3, and I4 of the face object to be detected are thus acquired in step S210.
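As an illustration, the capture sequence of step S210 can be sketched as follows. This is a minimal sketch under assumed hardware interfaces: the `camera` and `led_groups` objects stand in for whatever camera and LED-driver APIs a concrete system exposes and are not part of the patent.
```python
# Hypothetical hardware interfaces: `camera.capture()` returns one frame,
# `group.on()` / `group.off()` switch one group of LEDs.
def capture_sequence(camera, led_groups):
    """Capture I0 with no fill light, then one fill-light image per group."""
    for group in led_groups:
        group.off()
    images = [camera.capture()]          # I0: no fill light
    for group in led_groups:             # e.g., the four groups of FIG. 3
        group.on()
        images.append(camera.capture())  # Ik: fill light from group k only
        group.off()
    return images                        # [I0, I1, I2, I3, I4]
```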
In step S220, a highlight-free image and highlight images of the face object are calculated based on the acquired non-fill-light image and fill-light images.
In one example, the highlight-free image of the face object may be calculated based on the difference between the minimum pixel value at each position across all fill-light images and the pixel value of the non-fill-light image at the corresponding position. Following the above example, if the highlight-free image to be calculated is denoted S0, then S0 = min(I1, I2, I3, I4) - I0, where the minimum is taken pixel-wise. A highlight image of the face object may be calculated based on the difference between the maximum and minimum pixel values at each position across the fill-light images captured under the groups of light sources within the same angular range. The purpose is to obtain highlight components for different angular response ranges of the face object to be detected. In the above example, the light-emitting devices of the first and second groups of light sources lie in the same angular range relative to the face object, and likewise those of the third and fourth groups. Two highlight images S1 and S2 can therefore be obtained: S1 = max(I1, I2) - min(I1, I2) and S2 = max(I3, I4) - min(I3, I4). The highlight images S1 and S2 capture highlight components of different angular response ranges (for example, the highlight components of the cornea and of the nose tip occupy different proportions in S1 and S2).
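A minimal sketch of this decomposition, assuming the five images are already aligned and stored as floating-point arrays of equal shape (the function name and array layout are illustrative, not from the patent):
```python
import numpy as np

def highlight_decomposition(i0, i1, i2, i3, i4):
    """Compute S0 (highlight-free image) and S1, S2 (highlight images)."""
    fills = np.stack([i1, i2, i3, i4])            # shape: (4, H, W)
    s0 = fills.min(axis=0) - i0                   # S0 = min(I1..I4) - I0
    s1 = np.maximum(i1, i2) - np.minimum(i1, i2)  # angular range of groups 1, 2
    s2 = np.maximum(i3, i4) - np.minimum(i3, i4)  # angular range of groups 3, 4
    return s0, s1, s2
```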
In step S230, the calculated highlight-free image and highlight images are combined into a multi-channel image, and the multi-channel image is input into the trained convolutional neural network.
In this step, the highlight-free image and the highlight images of the face object to be detected are merged into a multi-channel image in which each pixel is represented by several values; for example, S0, S1, and S2 in the above example are merged into a three-channel image in which each pixel is represented by three values (for example, occupying the three color channels R, G, and B respectively). Illustratively, the multi-channel image thus contains the highlight components. With the multi-channel image input into the trained convolutional neural network, whether the face object to be detected is a living face can be analyzed entirely from the highlight components in the multi-channel image. The convolutional neural network is trained on a large number of labeled image samples, each labeled as a real face, a mask, or the like according to the distribution of highlight components in the image.
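The packing and decision of steps S230 and S240 can be sketched as follows; `cnn_score` stands in for the patent's trained convolutional neural network (any model mapping the three-channel image to a scalar in [0, 1] would fit), and the names and the 0.5 default are illustrative assumptions:
```python
import numpy as np

def is_living(s0, s1, s2, cnn_score, threshold=0.5):
    """Pack S0, S1, S2 into one image and threshold the CNN's output."""
    multi_channel = np.stack([s0, s1, s2], axis=-1)  # shape: (H, W, 3)
    score = cnn_score(multi_channel)                 # output in [0, 1]
    return score > threshold                         # True: living object
```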
In step S240, it is determined whether the face object is a living object based on an output of the convolutional neural network.
In one example, the output of the convolutional neural network may be a value between 0 and 1. When the output is greater than a predetermined threshold (e.g., 0.5), the face object is determined to be a living object; conversely, when the output is less than or equal to the predetermined threshold, the face object is determined to be a non-living object.
The face liveness detection method 200 according to the embodiment of the present invention synthesizes a highlight-free image and highlight images from the non-fill-light image and the plurality of images captured under the fill light of the groups of light sources, forms a multi-channel image (containing highlight components) from the highlight-free image and the highlight images, and exploits the difference in highlight characteristics between a real living face and a mask, a screen, or the like to distinguish the real living face from the mask, the screen, or the like, thereby achieving face liveness detection.
Illustratively, the face liveness detection method according to the embodiment of the present invention may be implemented in a device, apparatus or system having a memory and a processor.
The face liveness detection method according to the embodiment of the present invention may be deployed at a face image acquisition end; for example, in security applications it may be deployed at the image acquisition end of an access control system, and in financial applications it may be deployed at a personal terminal such as a smart phone, a tablet computer, or a personal computer.
Alternatively, the face liveness detection method according to the embodiment of the present invention may be deployed in a distributed manner across a server side (or cloud side) and a personal terminal.
For example, the non-fill-light image and the fill-light images may be captured at the personal terminal and transmitted by the personal terminal to the server side (or cloud side); the server side (or cloud side) then calculates the highlight-free image and the highlight images of the face object based on the non-fill-light image and the fill-light images, combines them into a multi-channel image, inputs the multi-channel image into the trained convolutional neural network, and determines whether the face object is a living object based on the output of the convolutional neural network.
For another example, the non-fill-light image and the fill-light images may be captured at the personal terminal, where the highlight-free image and the highlight images of the face object are calculated and combined into a multi-channel image; the personal terminal then transmits the multi-channel image to the server side (or cloud side), which inputs it into the trained convolutional neural network and determines whether the face object is a living object based on the network's output.
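A minimal sketch of this second split, in which the terminal computes the multi-channel image and the server runs the CNN; the endpoint URL, payload format, and response field are illustrative assumptions, since the patent does not specify a transport protocol:
```python
import io
import numpy as np
import requests

def query_liveness(multi_channel, url="https://example.com/liveness"):
    """Send the (H, W, 3) multi-channel image to a server-side CNN."""
    buf = io.BytesIO()
    np.save(buf, multi_channel.astype(np.float32))   # serialize the array
    resp = requests.post(url, data=buf.getvalue(),
                         headers={"Content-Type": "application/octet-stream"})
    return resp.json()["live"]  # server returns the thresholded CNN decision
```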
Next, a face liveness detection method 400 according to another embodiment of the present invention will be described with reference to FIG. 4.
In step S410, a non-fill-light image of the face object to be detected, captured while no fill light is provided, and a plurality of fill-light images, captured while each of a plurality of groups of light sources of similar brightness provides fill light in turn, are acquired.
Step S410 in this embodiment is similar to step S210 in the face liveness detection method 200 described with reference to FIG. 2 and, for brevity, is not described again here.
In step S420, the fill-light images are aligned with the non-fill-light image.
Aligning the fill-light images with the non-fill-light image effectively improves the efficiency and accuracy of the subsequent processing. In one example, the aligning step may further include: performing face detection and face key-point detection on the non-fill-light image and the fill-light images; and aligning the fill-light images with the non-fill-light image based on the positions of the face key points. After the images are aligned, each pixel of the face object has the same coordinates in every image, which reduces the complexity of the subsequent processing operations and increases the computation speed.
For example, a face may be detected in the non-fill-light image and in each fill-light image; a trained neural network then extracts features from the face images, the positions of the key points are obtained from the extracted features, and the fill-light images are aligned with the non-fill-light image based on the positions of the face key points. For example, a KLT (Kanade-Lucas-Tomasi) tracker may be used to obtain the relative offset of each fill-light image I1, I2, I3, I4 with respect to I0, and I1, I2, I3, I4 are then translated accordingly so that the face in each is aligned with the face in I0. The aligned fill-light images are used for the subsequent processing.
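A sketch of this KLT-based alignment using OpenCV's pyramidal Lucas-Kanade tracker; a pure translation is assumed, as in the description above, and a real system might instead estimate a fuller transform from the detected face key points:
```python
import cv2
import numpy as np

def align_to_reference(i0_gray, fill_gray):
    """Translate one fill-light image so its face aligns with I0."""
    pts0 = cv2.goodFeaturesToTrack(i0_gray, maxCorners=200,
                                   qualityLevel=0.01, minDistance=7)
    pts1, status, _err = cv2.calcOpticalFlowPyrLK(i0_gray, fill_gray,
                                                  pts0, None)
    good = status.ravel() == 1
    # The median offset of the tracked features approximates the global shift.
    dx, dy = np.median((pts1[good] - pts0[good]).reshape(-1, 2), axis=0)
    m = np.float32([[1, 0, -dx], [0, 1, -dy]])  # translate back toward I0
    h, w = fill_gray.shape[:2]
    return cv2.warpAffine(fill_gray, m, (w, h))
```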
It should be understood that the present invention is not limited by the specific face detection and face key-point detection methods adopted; both existing methods and those developed in the future may be applied in the face liveness detection method according to the embodiment of the present invention, and they shall also fall within the scope of the present invention.
In step S430, the highlight-free image and the highlight images of the face object are calculated based on the non-fill-light image and the aligned fill-light images.
In step S440, the calculated highlight-free image and highlight images are combined into a multi-channel image, and the multi-channel image is input into the trained convolutional neural network.
In step S450, it is determined whether the face object is a living object based on an output of the convolutional neural network.
Steps S430, S440, and S450 are similar to steps S220, S230, and S240 in the face liveness detection method 200 described with reference to FIG. 2 and, for brevity, are not described again here.
The face liveness detection method 400 according to the embodiment of the present invention synthesizes a highlight-free image and highlight images from the non-fill-light image and the plurality of images captured under the fill light of the groups of light sources, forms a multi-channel image (containing highlight components) from them, and exploits the difference in highlight characteristics between a real living face and a mask, a screen, or the like to distinguish the real living face from the mask, the screen, or the like, thereby achieving face liveness detection. In addition, after the non-fill-light image and the fill-light images are acquired, the fill-light images are aligned with the non-fill-light image, which effectively improves the efficiency and accuracy of the subsequent processing.
Fig. 5 shows a schematic block diagram of a face liveness detection apparatus 500 according to an embodiment of the present invention.
As shown in FIG. 5, the face liveness detection apparatus 500 according to the embodiment of the present invention includes an acquisition module 510, a calculation module 520, a synthesis module 530, and an analysis module 540.
The acquisition module 510 is configured to acquire a non-fill-light image captured while the face object to be detected receives no fill light, and a plurality of fill-light images captured while each of a plurality of groups of light sources of similar brightness provides fill light in turn.
In one example, the acquisition module 510 may obtain the images from an image acquisition device, for example a camera on a smart device such as a mobile phone or an iPad. The image acquisition device may capture an image of the face object to be detected without fill light, i.e., the non-fill-light image, denoted for example as I0; in addition, it may capture a plurality of fill-light images while groups of light sources of similar brightness each provide fill light in turn. In one example, the image acquisition device captures four images, one under the fill light of each of four groups of light sources (e.g., as shown in FIG. 3). Specifically, it captures a fill-light image I1 of the face object to be detected under the fill light of the first group of light sources, a fill-light image I2 under the second group, a fill-light image I3 under the third group, and a fill-light image I4 under the fourth group.
The calculation module 520 is configured to calculate a highlight-free image and highlight images of the face object based on the non-fill-light image and the fill-light images.
In one example, the calculation module 520 may calculate the highlight-free image of the face object based on the difference between the minimum pixel value at each position across all fill-light images and the pixel value of the non-fill-light image at the corresponding position. Following the above example, if the highlight-free image to be calculated is denoted S0, then S0 = min(I1, I2, I3, I4) - I0. For the highlight images of the face object, the calculation module 520 may calculate based on the difference between the maximum and minimum pixel values at each position across the fill-light images captured under the groups of light sources within the same angular range, the purpose being to obtain highlight components for different angular response ranges of the face object to be detected. For example, in the above example as shown in FIG. 3, the light-emitting devices of the first and second groups of light sources lie in the same angular range relative to the face object to be detected, and likewise those of the third and fourth groups. The calculation module 520 may therefore calculate highlight images S1 and S2, where S1 = max(I1, I2) - min(I1, I2) and S2 = max(I3, I4) - min(I3, I4). The highlight images S1 and S2 capture highlight components of different angular response ranges (for example, the highlight components of the cornea and of the nose tip occupy different proportions in S1 and S2).
The synthesis module 530 is configured to combine the highlight-free image and the highlight images into a multi-channel image, and to input the multi-channel image into a trained convolutional neural network.
The synthesis module 530 merges the highlight-free image and the highlight images of the face object to be detected into a multi-channel image in which each pixel is represented by several values (e.g., S0, S1, and S2 in the above example are merged into a three-channel image in which each pixel is represented by three values, for example occupying the three color channels R, G, and B). Illustratively, the multi-channel image contains the highlight components. The multi-channel image is input into the trained convolutional neural network, and the analysis module 540 can analyze whether the face object to be detected is a living face entirely from the highlight components in the multi-channel image.
The analysis module 540 is configured to determine whether the face object is a living object based on an output of the convolutional neural network.
In one example, the output of the convolutional neural network may be a value between 0 and 1. The analysis module 540 determines that the face object is a living object when the output of the convolutional neural network is greater than a predetermined threshold (e.g., 0.5); conversely, when the output is less than or equal to the predetermined threshold, it determines that the face object is a non-living object.
The face liveness detection apparatus 500 according to the embodiment of the present invention described above synthesizes a highlight-free image and highlight images from the non-fill-light image and the plurality of images captured under the fill light of the groups of light sources, forms a multi-channel image (containing highlight components) from them, and exploits the difference in highlight characteristics between a real living face and a mask, a screen, or the like to distinguish the real living face from the mask, the screen, or the like, thereby achieving face liveness detection.
The acquisition module 510, the calculation module 520, the synthesis module 530, and the analysis module 540 may all be implemented by the processor 102 in the electronic device shown in fig. 1 executing program instructions stored in the storage 104.
Next, a face liveness detection apparatus 600 according to another embodiment of the present invention will be described with reference to FIG. 6.
As shown in FIG. 6, the face liveness detection apparatus 600 according to the embodiment of the present invention includes an acquisition module 610, an alignment module 620, a calculation module 630, a synthesis module 640, and an analysis module 650.
The acquisition module 610 is configured to acquire a non-fill-light image captured while the face object to be detected receives no fill light, and a plurality of fill-light images captured while each of a plurality of groups of light sources of similar brightness provides fill light in turn. The acquisition module 610 is similar to the acquisition module 510 in the face liveness detection apparatus 500 described with reference to FIG. 5 and, for brevity, is not described again here.
The alignment module 620 is configured to align the fill-light images acquired by the acquisition module 610 with the non-fill-light image.
By aligning the fill-light images with the non-fill-light image, the alignment module 620 effectively improves the efficiency and accuracy of the subsequent processing. In one example, the alignment module 620 is further configured to perform face detection and face key-point detection on the non-fill-light image and the fill-light images, and to align the fill-light images with the non-fill-light image based on the positions of the face key points.
In one example, the alignment module 620 may detect a face in the non-fill-light image and in each fill-light image, extract features from the face images using a trained neural network, obtain the positions of the key points from the extracted features, and align the fill-light images with the non-fill-light image based on the positions of the face key points. For example, the alignment module 620 may use a KLT (Kanade-Lucas-Tomasi) tracker to obtain the relative offset of each fill-light image I1, I2, I3, I4 with respect to I0, and then translate I1, I2, I3, I4 accordingly so that the face in each is aligned with the face in I0. The aligned fill-light images are used for the subsequent processing.
The calculation module 630 is configured to calculate the highlight-free image and the highlight images of the face object based on the non-fill-light image and the aligned fill-light images. The calculation module 630 is similar to the calculation module 520 in the face liveness detection apparatus 500 described with reference to FIG. 5, except that the calculation is based on the aligned fill-light images, and, for brevity, is not described again here.
The synthesis module 640 is configured to combine the highlight-free image and the highlight images into a multi-channel image, and to input the multi-channel image into a trained convolutional neural network.
The analysis module 650 is configured to determine whether the face object is a living object based on the output of the convolutional neural network.
The synthesis module 640 and the analysis module 650 are similar to the synthesis module 530 and the analysis module 540 of the face liveness detection apparatus 500 described with reference to FIG. 5, respectively, and, for brevity, are not described again here.
The acquisition module 610, the alignment module 620, the calculation module 630, the synthesis module 640, and the analysis module 650 may all be implemented by the processor 102 in the electronic device shown in fig. 1 executing program instructions stored in the storage 104.
The face liveness detection apparatus 600 according to the embodiment of the present invention synthesizes a highlight-free image and highlight images from the non-fill-light image and the plurality of images captured under the fill light of the groups of light sources, forms a multi-channel image (containing highlight components) from them, and exploits the difference in highlight characteristics between a real living face and a mask, a screen, or the like to distinguish the real living face from the mask, the screen, or the like, thereby achieving face liveness detection. In addition, after the acquisition module 610 acquires the non-fill-light image and the fill-light images, the alignment module 620 aligns the fill-light images with the non-fill-light image, which further improves the efficiency and accuracy of the subsequent processing.
Those of ordinary skill in the art will appreciate that the various illustrative modules and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
FIG. 7 shows a schematic block diagram of a face liveness detection system 700 according to an embodiment of the invention. The face liveness detection system 700 includes an image acquisition device 710, a storage device 720, and a processor 730.
The image acquisition device 710 is used to capture images. The storage device 720 stores program code for implementing the respective steps of the face liveness detection method according to the embodiment of the present invention. The processor 730 is configured to run the program code stored in the storage device 720 to execute the corresponding steps of the face liveness detection method according to the embodiment of the present invention, and to implement the corresponding modules of the face liveness detection apparatus according to the embodiment of the present invention.
In one embodiment, the following steps are performed when the program code is executed by the processor 730: acquiring a non-fill-light image captured while the face object to be detected receives no fill light, and a plurality of fill-light images captured while each of a plurality of groups of light sources of similar brightness provides fill light in turn; calculating a highlight-free image and highlight images of the face object based on the non-fill-light image and the fill-light images; combining the highlight-free image and the highlight images into a multi-channel image, and inputting the multi-channel image into a trained convolutional neural network; and determining whether the face object is a living object based on an output of the convolutional neural network.
Illustratively, some of the plurality of groups of light sources lie in a different angular range, relative to the face object, from the others, and the calculating includes: calculating the highlight-free image of the face object based on the difference between the minimum pixel value at each position across all fill-light images and the pixel value of the non-fill-light image at the corresponding position; and calculating a highlight image of the face object based on the difference between the maximum and minimum pixel values at each position across the fill-light images captured under the groups of light sources within the same angular range.
Illustratively, determining whether the face object is a living object based on the output of the convolutional neural network includes: determining that the face object is a living object when the output of the convolutional neural network is greater than a predetermined threshold; and determining that the face object is a non-living object when the output of the convolutional neural network is less than or equal to the predetermined threshold.
Further, in one embodiment, the following steps are also performed when the program code is executed by the processor 730: after the non-fill-light image and the fill-light images are acquired, aligning the fill-light images with the non-fill-light image, and calculating the highlight-free image and the highlight images of the face object based on the non-fill-light image and the aligned fill-light images.
Illustratively, aligning the fill-light images with the non-fill-light image includes: performing face detection and face key-point detection on the non-fill-light image and the fill-light images; and aligning the fill-light images with the non-fill-light image based on the positions of the face key points.
Furthermore, according to an embodiment of the present invention, there is also provided a storage medium on which program instructions are stored, which when executed by a computer or a processor are used for executing the corresponding steps of the face liveness detection method according to an embodiment of the present invention, and for implementing the corresponding modules in the face liveness detection apparatus according to an embodiment of the present invention. The storage medium may include, for example, a memory card of a smart phone, a storage component of a tablet computer, a hard disk of a personal computer, a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a portable compact disc read only memory (CD-ROM), a USB memory, or any combination of the above storage media. The computer-readable storage medium may be any combination of one or more computer-readable storage media.
In one embodiment, the computer program instructions, when executed by a computer or a processor, may implement the functional modules of the face liveness detection apparatus according to the embodiment of the present invention and/or may execute the face liveness detection method according to the embodiment of the present invention.
In one embodiment, the computer program instructions, when executed by a computer or processor, cause the computer or processor to perform the following steps: acquiring a non-fill-light image captured while the face object to be detected receives no fill light, and a plurality of fill-light images captured while each of a plurality of groups of light sources of similar brightness provides fill light in turn; calculating a highlight-free image and highlight images of the face object based on the non-fill-light image and the fill-light images; combining the highlight-free image and the highlight images into a multi-channel image, and inputting the multi-channel image into a trained convolutional neural network; and determining whether the face object is a living object based on an output of the convolutional neural network.
Illustratively, some of the plurality of groups of light sources lie in a different angular range, relative to the face object, from the others, and the calculating includes: calculating the highlight-free image of the face object based on the difference between the minimum pixel value at each position across all fill-light images and the pixel value of the non-fill-light image at the corresponding position; and calculating a highlight image of the face object based on the difference between the maximum and minimum pixel values at each position across the fill-light images captured under the groups of light sources within the same angular range.
Illustratively, determining whether the face object is a living object based on the output of the convolutional neural network includes: determining that the face object is a living object when the output of the convolutional neural network is greater than a predetermined threshold; and determining that the face object is a non-living object when the output of the convolutional neural network is less than or equal to the predetermined threshold.
Further, in one embodiment, the computer program instructions, when executed by a computer or processor, further cause the computer or processor to perform the following steps: after the non-fill-light image and the fill-light images are acquired, aligning the fill-light images with the non-fill-light image, and calculating the highlight-free image and the highlight images of the face object based on the non-fill-light image and the aligned fill-light images.
Illustratively, aligning the fill-light images with the non-fill-light image includes: performing face detection and face key-point detection on the non-fill-light image and the fill-light images; and aligning the fill-light images with the non-fill-light image based on the positions of the face key points.
The modules in the face liveness detection apparatus according to the embodiment of the present invention may be implemented by a processor of an electronic device for face liveness detection according to the embodiment of the present invention running computer program instructions stored in a memory, or may be implemented when computer instructions stored in a computer readable storage medium of a computer program product according to the embodiment of the present invention are run by a computer.
According to the face liveness detection method, apparatus, system, and storage medium provided by the embodiments of the invention, a highlight-free image and highlight images are synthesized from a non-fill-light image and a plurality of images captured under the fill light of a plurality of groups of light sources, a multi-channel image (containing highlight components) is formed from them, and the difference in highlight characteristics between a real living face and a mask, a screen, or the like is exploited to distinguish the real living face from the mask, the screen, or the like, thereby achieving face liveness detection. In addition, after the non-fill-light image and the fill-light images are acquired, the fill-light images are aligned with the non-fill-light image, which effectively improves the efficiency and accuracy of the subsequent processing.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the foregoing illustrative embodiments are merely exemplary and are not intended to limit the scope of the invention thereto. Various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present invention. All such changes and modifications are intended to be included within the scope of the present invention as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another device, or some features may be omitted, or not executed.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in fewer than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where such features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. It will be appreciated by those skilled in the art that a microprocessor or digital signal processor (DSP) may be used in practice to implement some or all of the functionality of some of the modules in a face living body detection apparatus according to embodiments of the present invention. The present invention may also be embodied as device programs (e.g., computer programs and computer program products) for performing some or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may take the form of one or more signals. Such a signal may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not indicate any ordering; these words may be interpreted as names.
The above description is merely of specific embodiments of the present invention; the protection scope of the present invention is not limited thereto. Any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed by the present invention, and such changes or substitutions shall be covered by the protection scope of the present invention. The protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (8)

1. A face living body detection method, characterized by comprising the following steps:
acquiring a non-fill-light image of a human face object to be detected, collected without fill light, and a plurality of fill-light images collected while a plurality of groups of light sources of similar brightness respectively provide fill light, wherein some groups of the light sources are located in angle ranges, relative to the human face object, different from those of other groups;
calculating a highlight-free image and a highlight image of the human face object based on the non-fill-light image and the plurality of fill-light images, the calculating comprising:
calculating the highlight-free image of the human face object based on the difference between the minimum pixel value at each position across all of the fill-light images and the pixel value of the non-fill-light image at the corresponding position; and
calculating the highlight image of the human face object based on the difference between the maximum pixel value and the minimum pixel value at each position across the fill-light images collected under fill light from groups of light sources within the same angle range;
combining the highlight-free image and the highlight image into a multi-channel image, and inputting the multi-channel image into a trained convolutional neural network; and
determining whether the human face object is a living object based on an output of the convolutional neural network.
2. The face living body detection method according to claim 1, wherein the determining whether the human face object is a living object based on the output of the convolutional neural network comprises:
determining that the human face object is a living object when the output of the convolutional neural network is greater than a predetermined threshold; and
determining that the human face object is a non-living object when the output of the convolutional neural network is less than or equal to the predetermined threshold.
3. The face living body detection method according to claim 1, further comprising:
after the non-fill-light image and the plurality of fill-light images are acquired, aligning the plurality of fill-light images with the non-fill-light image,
wherein the calculating of the highlight-free image and the highlight image of the human face object is based on the non-fill-light image and the aligned plurality of fill-light images.
4. The face living body detection method according to claim 3, wherein the aligning the plurality of fill-light images with the non-fill-light image comprises:
performing face detection and facial keypoint detection on the non-fill-light image and the plurality of fill-light images; and
aligning the plurality of fill-light images with the non-fill-light image based on the positions of the facial keypoints.
5. A face living body detection device, characterized by comprising:
an acquisition module, configured to acquire a non-fill-light image of a human face object to be detected, collected without fill light, and a plurality of fill-light images collected while a plurality of groups of light sources of similar brightness respectively provide fill light, wherein some groups of the light sources are located in angle ranges, relative to the human face object, different from those of other groups;
a calculation module, configured to calculate a highlight-free image and a highlight image of the human face object based on the non-fill-light image and the plurality of fill-light images, the calculating comprising:
calculating the highlight-free image of the human face object based on the difference between the minimum pixel value at each position across all of the fill-light images and the pixel value of the non-fill-light image at the corresponding position; and
calculating the highlight image of the human face object based on the difference between the maximum pixel value and the minimum pixel value at each position across the fill-light images collected under fill light from groups of light sources within the same angle range;
a synthesis module, configured to combine the highlight-free image and the highlight image into a multi-channel image and to input the multi-channel image into a trained convolutional neural network; and
an analysis module, configured to determine whether the human face object is a living object based on an output of the convolutional neural network.
6. The face living body detection device according to claim 5, wherein the analysis module is further configured to:
determine that the human face object is a living object when the output of the convolutional neural network is greater than a predetermined threshold; and
determine that the human face object is a non-living object when the output of the convolutional neural network is less than or equal to the predetermined threshold.
7. The face living body detection device according to claim 5, further comprising:
an alignment module, configured to align the plurality of fill-light images acquired by the acquisition module with the non-fill-light image,
wherein the calculation module is configured to calculate the highlight-free image and the highlight image of the human face object based on the non-fill-light image and the aligned plurality of fill-light images.
8. The face living body detection device according to claim 7, wherein the alignment module is further configured to:
perform face detection and facial keypoint detection on the non-fill-light image and the plurality of fill-light images; and
align the plurality of fill-light images with the non-fill-light image based on the positions of the facial keypoints.
CN201610511947.8A 2016-06-30 2016-06-30 Face living body detection method and device Active CN106203305B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610511947.8A CN106203305B (en) 2016-06-30 2016-06-30 Face living body detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610511947.8A CN106203305B (en) 2016-06-30 2016-06-30 Face living body detection method and device

Publications (2)

Publication Number Publication Date
CN106203305A CN106203305A (en) 2016-12-07
CN106203305B true CN106203305B (en) 2020-02-04

Family

ID=57464149

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610511947.8A Active CN106203305B (en) 2016-06-30 2016-06-30 Face living body detection method and device

Country Status (1)

Country Link
CN (1) CN106203305B (en)

Families Citing this family (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106529512B (en) * 2016-12-15 2019-09-10 北京旷视科技有限公司 Living body faces verification method and device
US10796178B2 (en) 2016-12-15 2020-10-06 Beijing Kuangshi Technology Co., Ltd. Method and device for face liveness detection
CN108345818B (en) * 2017-01-23 2021-08-31 北京中科奥森数据科技有限公司 Face living body detection method and device
CN107066942A (en) * 2017-03-03 2017-08-18 上海斐讯数据通信技术有限公司 A kind of living body faces recognition methods and system
CN108229326A (en) * 2017-03-16 2018-06-29 北京市商汤科技开发有限公司 Face false-proof detection method and system, electronic equipment, program and medium
CN108875467B (en) * 2017-06-05 2020-12-25 北京旷视科技有限公司 Living body detection method, living body detection device and computer storage medium
CN107194376A (en) * 2017-06-21 2017-09-22 北京市威富安防科技有限公司 Mask fraud convolutional neural networks training method and human face in-vivo detection method
CN107220990B (en) * 2017-06-22 2020-09-08 成都品果科技有限公司 Hair segmentation method based on deep learning
CN107392135A (en) * 2017-07-14 2017-11-24 广东欧珀移动通信有限公司 Biopsy method and Related product
CN107808115A (en) * 2017-09-27 2018-03-16 联想(北京)有限公司 A kind of biopsy method, device and storage medium
CN107832735A (en) * 2017-11-24 2018-03-23 百度在线网络技术(北京)有限公司 Method and apparatus for identifying face
CN108108676A (en) * 2017-12-12 2018-06-01 北京小米移动软件有限公司 Face identification method, convolutional neural networks generation method and device
CN110008783A (en) 2018-01-04 2019-07-12 杭州海康威视数字技术股份有限公司 Human face in-vivo detection method, device and electronic equipment based on neural network model
CN108416324B (en) * 2018-03-27 2022-02-25 百度在线网络技术(北京)有限公司 Method and apparatus for detecting living body
CN108509888B (en) * 2018-03-27 2022-01-28 百度在线网络技术(北京)有限公司 Method and apparatus for generating information
CN108537152B (en) * 2018-03-27 2022-01-25 百度在线网络技术(北京)有限公司 Method and apparatus for detecting living body
CN108875574B (en) * 2018-05-11 2021-06-25 北京旷视科技有限公司 Method, device and system for detecting false alarm result of pedestrian detection and storage medium
CN109086691B (en) * 2018-07-16 2020-02-21 阿里巴巴集团控股有限公司 Three-dimensional face living body detection method, face authentication and identification method and device
CN109101871A (en) * 2018-08-07 2018-12-28 北京华捷艾米科技有限公司 A kind of living body detection device based on depth and Near Infrared Information, detection method and its application
CN109255322B (en) * 2018-09-03 2019-11-19 北京诚志重科海图科技有限公司 A kind of human face in-vivo detection method and device
CN113408403A (en) 2018-09-10 2021-09-17 创新先进技术有限公司 Living body detection method, living body detection device, and computer-readable storage medium
CN109635795B (en) * 2018-10-22 2023-05-16 小黄狗环保科技有限公司 Intelligent light supplementing method for improving bottle and tank recognition rate based on VGG16 network model
CN109558840A (en) * 2018-11-29 2019-04-02 中国科学院重庆绿色智能技术研究院 A kind of biopsy method of Fusion Features
CN109522877A (en) * 2018-12-14 2019-03-26 睿云联(厦门)网络通讯技术有限公司 A kind of offline plurality of human faces recognition methods and computer equipment based on Android device
CN111310575B (en) * 2020-01-17 2022-07-08 腾讯科技(深圳)有限公司 Face living body detection method, related device, equipment and storage medium
CN111460970A (en) * 2020-03-27 2020-07-28 深圳市商汤科技有限公司 Living body detection method and device and face recognition equipment
CN111523438B (en) * 2020-04-20 2024-02-23 支付宝实验室(新加坡)有限公司 Living body identification method, terminal equipment and electronic equipment
CN112906676A (en) * 2021-05-06 2021-06-04 北京远鉴信息技术有限公司 Face image source identification method and device, storage medium and electronic equipment
CN115601245B (en) * 2021-07-07 2023-12-12 同方威视技术股份有限公司 Shadow eliminating device and method, empty disc identifying device and method
CN115995103A (en) * 2021-10-15 2023-04-21 北京眼神科技有限公司 Face living body detection method, device, computer readable storage medium and equipment

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103106397A (en) * 2013-01-19 2013-05-15 华南理工大学 Human face living body detection method based on bright pupil effect
CN104506778A (en) * 2014-12-22 2015-04-08 厦门美图之家科技有限公司 Flashlight control method and device based on age estimation
CN105518711A (en) * 2015-06-29 2016-04-20 北京旷视科技有限公司 In-vivo detection method, in-vivo detection system, and computer program product
CN105654028A (en) * 2015-09-29 2016-06-08 厦门中控生物识别信息技术有限公司 True and false face identification method and apparatus thereof
CN105260731A (en) * 2015-11-25 2016-01-20 商汤集团有限公司 Human face living body detection system and method based on optical pulses

Also Published As

Publication number Publication date
CN106203305A (en) 2016-12-07

Similar Documents

Publication Publication Date Title
CN106203305B (en) Face living body detection method and device
CN106650662B (en) Target object shielding detection method and device
CN106599772B (en) Living body verification method and device and identity authentication method and device
US9818023B2 (en) Enhanced face detection using depth information
WO2019071664A1 (en) Human face recognition method and apparatus combined with depth information, and storage medium
ES2704277T3 (en) Facial recognition with self-learning using depth-based tracking for the generation and updating of databases
CN108932456B (en) Face recognition method, device and system and storage medium
CN108875731B (en) Target identification method, device, system and storage medium
CN108875517B (en) Video processing method, device and system and storage medium
CN106327546B (en) Method and device for testing face detection algorithm
CN106524909B (en) Three-dimensional image acquisition method and device
US11348370B2 (en) Iris authentication device, iris authentication method, and recording medium
JP2017017431A (en) Image processing apparatus, information processing method, and program
US9501719B1 (en) System and method for verification of three-dimensional (3D) object
US10659680B2 (en) Method of processing object in image and apparatus for same
Sajid et al. VSig: Hand-gestured signature recognition and authentication with wearable camera
US9811916B1 (en) Approaches for head tracking
CN108875506B (en) Face shape point tracking method, device and system and storage medium
KR102434574B1 (en) Method and apparatus for recognizing a subject existed in an image based on temporal movement or spatial movement of a feature point of the image
CN106682187B (en) Method and device for establishing image base
Ebihara et al. Efficient face spoofing detection with flash
US9772679B1 (en) Object tracking for device input
CN109117882B (en) Method, device and system for acquiring user track and storage medium
CN108875538B (en) Attribute detection method, device and system and storage medium
CN110728172B (en) Point cloud-based face key point detection method, device and system and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100190 Beijing, Haidian District Academy of Sciences, South Road, No. 2, block A, No. 313

Applicant after: MEGVII INC.

Applicant after: Beijing maigewei Technology Co., Ltd.

Address before: 100190 Beijing, Haidian District Academy of Sciences, South Road, No. 2, block A, No. 313

Applicant before: MEGVII INC.

Applicant before: Beijing aperture Science and Technology Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant