WO2019137178A1 - Face Liveness Detection - Google Patents

Face Liveness Detection

Info

Publication number
WO2019137178A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
face
component data
sample
target
Prior art date
Application number
PCT/CN2018/122525
Other languages
English (en)
French (fr)
Inventor
王升国
任志浩
王梁
Original Assignee
杭州海康威视数字技术股份有限公司 (Hangzhou Hikvision Digital Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co., Ltd. (杭州海康威视数字技术股份有限公司)
Publication of WO2019137178A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/162 Detection; Localisation; Normalisation using pixel segmentation or colour matching
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive

Definitions

  • The present application relates to the field of face recognition technology, and in particular to face liveness detection.
  • Among these, face recognition technology is the most convenient and the one best suited to people's habits, and it is widely used.
  • For example, criminals counterfeit a user's facial features during the face authentication process in order to deceive the system.
  • The forms of spoofing in the face authentication process mainly include: using a stolen photo of the user, and using video recorded in a public place or from the network.
  • In order to perform identity authentication more securely and to verify the authenticity of the identity source, it is especially important to detect whether the identified object is a living body.
  • The present application provides a method, an apparatus, and an electronic device for face liveness detection, to quickly and effectively detect whether a target object is a living body.
  • In a first aspect, the present application provides a face liveness detection method, comprising: obtaining a target visible light image and a target infrared image of a target object to be detected; extracting, from the target visible light image, a first face image containing only a face portion, and extracting, from the target infrared image, a second face image containing only a face portion; determining a histogram of oriented gradients (HOG) feature of the target object based on each type of monochrome component data of the first face image and the monochrome component data of the second face image; and inputting the determined HOG feature into a pre-trained support vector machine (SVM) classifier for detection, to obtain a face liveness detection result of the target object.
  • In a second aspect, the present application provides a face liveness detection apparatus, comprising: an image obtaining unit configured to obtain a target visible light image and a target infrared image of a target object to be detected; a face image extraction unit configured to extract, from the target visible light image, a first face image containing only a face portion, and to extract, from the target infrared image, a second face image containing only a face portion; an HOG feature determination unit configured to determine the HOG feature of the target object based on each type of monochrome component data of the first face image and the monochrome component data of the second face image; and a determination unit configured to input the determined HOG feature into a pre-trained support vector machine (SVM) classifier for detection, to obtain a face liveness detection result of the target object.
  • the present application provides an electronic device including: an internal bus, a memory, a processor, and a communication interface.
  • the processor, the communication interface, and the memory complete communication with each other through the internal bus;
  • The memory may be a non-volatile storage medium configured to store machine-executable instructions corresponding to the face liveness detection method; the processor is configured to read the machine-executable instructions from the memory and execute them to implement the face liveness detection method provided by the first aspect of the present application.
  • In the solution provided by the present application, based on multi-spectral analysis, images in the visible light band and the infrared band are used to fully characterize the target object; and, based on the statistical characteristics of the reflection of a real human face in these optical bands, face images are extracted from the visible light image corresponding to the visible light band and from the infrared image corresponding to the infrared band. The extracted face images are then analyzed by combining HOG features with an SVM classifier to determine whether the target object is a living face. Therefore, the solution can quickly and effectively detect whether the target object is a living body.
  • FIG. 1 is a flow chart of a method for detecting a human face living body provided by the present application
  • FIG. 2 is a flow chart of determining HOG features of a target object of FIG. 1 in accordance with an embodiment
  • FIG. 3 is a flowchart of determining an HOG feature of a target object in FIG. 1 according to another embodiment
  • FIG. 4 is a flowchart of a training process of a support vector machine SVM classifier provided by the present application
  • FIG. 5 is a flowchart of determining the HOG features of a sample in FIG. 4, in accordance with an embodiment
  • FIG. 6 is a flowchart of determining the HOG features of a sample in FIG. 4, in accordance with another embodiment
  • FIG. 7 is a schematic structural view of a human face living body detecting apparatus provided by the present application.
  • FIG. 8 is a schematic structural diagram of an electronic device provided by the present application.
  • Although the terms first, second, third, etc. may be used to describe various information in this application, such information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another.
  • For example, the first information may also be referred to as the second information without departing from the scope of the present application.
  • Similarly, the second information may also be referred to as the first information.
  • The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
  • the present application provides a method, device and electronic device for detecting a living body of a human face to quickly and effectively detect whether a target object is a living body.
  • the method for detecting a human face living body can be applied to an electronic device.
  • The electronic device may be a camera, an attendance machine, or another device that has the functions of collecting visible light images and infrared images; alternatively, the electronic device may be a device that communicates with such a collection device, for example a server or a personal computer.
  • This application uses the visible light image corresponding to the visible light band (380 nm-780 nm) and the infrared image corresponding to the infrared band (780 nm-1100 nm) to fully characterize the image information of the target object.
  • Real human facial skin has significant and stable statistical characteristics in its reflection of any optical band, so the gray value distribution corresponding to the monochrome component data of a real face image is uniform and continuous, that is, the gray values do not change drastically; the grayscale image of the face of a dummy, such as a photo, is exactly the opposite. Therefore, the present application distinguishes between true and false faces by using the grayscale gradient of the face image as the feature value.
  • The face image extracted from the infrared image corresponding to the infrared band is not easily affected by external illumination; its features are stable, and the pupils of a real face appear distinct. Therefore, the infrared image in the infrared band can be used for face liveness detection.
  • the acquisition band of the infrared image may be 850 nm, 940 nm, and the like.
  • The image in the infrared band can be directly stored by the image sensor in the form of a 256-level, 8-bit grayscale image.
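The grayscale-gradient cue described above can be illustrated with a toy sketch. This is illustrative only, not from the patent: `max_gradient` and the two sample patches are hypothetical, and simply contrast the smooth gray-value variation expected of real skin with the abrupt jumps typical of a re-photographed face.

```python
# Illustrative toy (not from the patent): the grayscale values of real skin
# vary smoothly, while a re-photographed print or screen tends to show abrupt
# jumps. We compare the largest neighbouring-pixel difference in two patches.

def max_gradient(patch):
    """Largest absolute horizontal/vertical difference between neighbours."""
    h, w = len(patch), len(patch[0])
    g = 0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                g = max(g, abs(patch[y][x + 1] - patch[y][x]))
            if y + 1 < h:
                g = max(g, abs(patch[y + 1][x] - patch[y][x]))
    return g

smooth_skin_like = [[100, 102, 104], [101, 103, 105], [102, 104, 106]]
abrupt_photo_like = [[100, 100, 100], [100, 255, 255], [100, 255, 255]]

print(max_gradient(smooth_skin_like))   # small: 2
print(max_gradient(abrupt_photo_like))  # large: 155
```

A real detector would not threshold a single statistic like this; the patent instead feeds full gradient histograms (HOG features) to an SVM classifier, as described below in the embodiments.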
  • The method for face liveness detection may include the following steps:
  • the electronic device can obtain a target visible light image and a target infrared image with respect to the target object to be detected, thereby performing a subsequent face extraction process.
  • The target visible light image and the target infrared image of the target object are two types of images of the target object captured at the same time.
  • The two types of images can be collected by one device; for example, the device can be a binocular camera, in which one lens is configured so that its corresponding image sensor senses only the infrared band, while the other lens collects the visible light image. Of course, the two types of images can also be acquired by two devices, such as a visible light image acquisition device and a dedicated infrared image acquisition device.
  • The target visible light image can be acquired by a CCD (Charge-Coupled Device), a CMOS (Complementary Metal Oxide Semiconductor), or another imaging sensor.
  • the color space of the target visible light image obtained in the present application may be RGB, YUV or HIS.
  • the first human face image and the second human face image are images that only include a human face portion.
  • After the target visible light image and the target infrared image are obtained, since the reflection of real human facial skin in any optical band has significant and stable statistical characteristics, only the face portions of the target visible light image and the target infrared image need to be analyzed.
  • The process of extracting the first face image from the target visible light image may include: performing face recognition on the target visible light image, and extracting the face region to obtain the first face image. Similarly, the process of extracting the second face image from the target infrared image may include: performing face recognition on the target infrared image, and extracting the face region to obtain the second face image.
  • Face recognition may be performed on the target visible light image and the target infrared image using any face recognition algorithm known to those skilled in the art, which is not limited herein.
  • If the face region is not recognized from the target visible light image, or the face region is not recognized from the target infrared image, it may be considered that the target object is highly likely to be non-living, or that the image of the target object was acquired incorrectly. Therefore, in a specific application, when the face region cannot be recognized from the target visible light image or the target infrared image, the target object can be directly determined to be a non-living face, and the flow ends. Alternatively, when the face region cannot be recognized from the target visible light image or the target infrared image, the process may return to step S101 to reacquire the target visible light image and the target infrared image of the target object.
  • For example, when the target object is a display screen displaying a human face, the face region can typically be recognized from the target visible light image, but cannot be recognized from the target infrared image.
  • The image corresponding to any type of monochrome component data is a grayscale image, and any type of monochrome component data is a w*h matrix, where w is the number of pixels in the width and h is the number of pixels in the height. It can be understood that the types of monochrome component data of the first face image are related to the corresponding color space.
  • When the color space of the first face image is RGB, the various types of monochrome component data are the R component, the G component, and the B component; when the color space of the first face image is YUV, the various types of monochrome component data are the Y component, the U component, and the V component.
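The per-channel decomposition described above can be sketched as follows. This is a minimal illustration; the helper names are hypothetical, and the BT.601 luma formula used for the Y component is a standard conversion, not one specified by the patent.

```python
# A minimal sketch (names are illustrative, not from the patent) of how a
# w*h colour image decomposes into per-channel w*h matrices, and how the
# same pixels yield a Y (luma) component under the standard BT.601 weights.

def split_rgb(image):
    """image: h rows of w (R, G, B) tuples -> three h*w component matrices."""
    r = [[px[0] for px in row] for row in image]
    g = [[px[1] for px in row] for row in image]
    b = [[px[2] for px in row] for row in image]
    return r, g, b

def rgb_to_y(r, g, b):
    """BT.601 luma: the Y component of the YUV color space mentioned above."""
    return 0.299 * r + 0.587 * g + 0.114 * b

img = [[(255, 0, 0), (0, 255, 0)],
       [(0, 0, 255), (255, 255, 255)]]
r, g, b = split_rgb(img)                 # three 2x2 monochrome matrices
y = rgb_to_y(r[0][0], g[0][0], b[0][0])  # luma of the pure-red pixel
print(len(r), len(r[0]))                 # 2 2
```

Each returned matrix plays the role of one "type of monochrome component data" in the steps that follow.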
  • the HOG feature of the target object is determined based on the various types of monochrome component data of the first face image and the monochrome component data of the second face image.
  • In one embodiment, this may include the following steps S201-S203:
  • S201. Perform HOG feature extraction on the first grayscale image corresponding to each type of monochrome component data of the first face image, to obtain respective first HOG features.
  • S202. Perform HOG feature extraction on the second grayscale image corresponding to the monochrome component data of the second face image, to obtain a second HOG feature.
  • S203. Use the respective first HOG features and the second HOG feature as the HOG features of the target object.
  • In this way, a four-dimensional HOG feature of the target object is obtained, that is, a three-dimensional first HOG feature and a one-dimensional second HOG feature.
  • HOG feature extraction first divides the image into small connected regions, called cell units; then collects a histogram of the gradient or edge directions of the pixels in each cell unit; and finally combines these histograms to construct the feature descriptor.
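The cell/histogram procedure described above can be sketched as a simplified HOG extractor. This is illustrative only: it uses central-difference gradients and 9 unsigned-orientation bins, omits the block normalisation used by full HOG implementations, and the cell size and bin count are assumed parameter values, not ones fixed by the patent.

```python
import math

# Simplified HOG sketch (no block normalisation), assuming an 8-bit grayscale
# image given as a list of rows: gradients by central differences, then a
# 9-bin unsigned-orientation histogram per cell, weighted by gradient
# magnitude, with all cell histograms concatenated into the descriptor.

def hog(image, cell=4, bins=9):
    h, w = len(image), len(image[0])
    feats = []
    for cy in range(0, h - cell + 1, cell):          # walk the cell grid
        for cx in range(0, w - cell + 1, cell):
            hist = [0.0] * bins
            for y in range(cy, cy + cell):
                for x in range(cx, cx + cell):
                    gx = image[y][min(x + 1, w - 1)] - image[y][max(x - 1, 0)]
                    gy = image[min(y + 1, h - 1)][x] - image[max(y - 1, 0)][x]
                    mag = math.hypot(gx, gy)
                    ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned
                    hist[int(ang / (180.0 / bins)) % bins] += mag
            feats.extend(hist)
    return feats

img = [[(x * 8 + y) % 256 for x in range(8)] for y in range(8)]
desc = hog(img)       # 8x8 image, 4x4 cells -> 4 cells
print(len(desc))      # 36 = 4 cells * 9 bins
```

The descriptor length grows with image size and cell count, which is why the sample images and the target images must be preprocessed to a common size before the SVM stage.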
  • In another embodiment, the step of determining the HOG feature of the target object based on each type of monochrome component data of the first face image and the monochrome component data of the second face image may include the following steps S301-S302:
  • S301. Perform dimensionality reduction processing on each type of monochrome component data of the first face image and the monochrome component data of the second face image, to obtain a target grayscale image;
  • S302. Perform HOG feature extraction on the target grayscale image, to obtain the HOG feature of the target object.
  • The dimensionality reduction processing merges the multi-dimensional monochrome component data into one grayscale image.
  • The specific dimensionality reduction method includes, but is not limited to, weighting and averaging the component data of the same pixel across each type of monochrome component data of the first face image and the monochrome component data of the second face image, so that each pixel corresponds to one result value, thereby producing a grayscale image.
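The weighted-averaging dimensionality reduction described above can be sketched as follows. The equal weights used by default are an illustrative assumption; the patent does not fix the weighting scheme, and `reduce_to_gray` is a hypothetical helper name.

```python
# A sketch of the weighted-average dimensionality reduction described above:
# the R, G, B components of the first face image and the single infrared
# component of the second face image are merged pixel-by-pixel into one
# grayscale image. Equal weights below are an illustrative assumption.

def reduce_to_gray(components, weights=None):
    """components: list of equally sized h*w matrices -> one h*w matrix."""
    n = len(components)
    if weights is None:
        weights = [1.0 / n] * n               # plain average by default
    h, w = len(components[0]), len(components[0][0])
    return [[sum(wt * comp[y][x] for wt, comp in zip(weights, components))
             for x in range(w)] for y in range(h)]

r = [[200, 200], [200, 200]]
g = [[100, 100], [100, 100]]
b = [[60, 60], [60, 60]]
ir = [[40, 40], [40, 40]]
gray = reduce_to_gray([r, g, b, ir])
print(gray[0][0])  # 100.0 = (200 + 100 + 60 + 40) / 4
```

Because every pixel collapses to one value, the subsequent HOG extraction runs once on the merged image instead of once per component.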
  • In this way, after dimensionality reduction processing is performed on each type of monochrome component data of the first face image and the monochrome component data of the second face image, HOG feature extraction is performed, and a one-dimensional HOG feature of the target object is obtained.
  • In specific applications, the images collected by an image acquisition device usually contain noise, and images acquired in different scenarios may have different imaging characteristics, such as resolution and size, all of which have a certain impact on the detection process. Therefore, in order to eliminate these effects, image preprocessing can be performed on the images.
  • Accordingly, the step of determining the HOG feature of the target object based on each type of monochrome component data of the first face image and the monochrome component data of the second face image may include: performing image preprocessing on the first face image and the second face image; and determining the HOG feature of the target object based on each type of monochrome component data of the preprocessed first face image and the monochrome component data of the preprocessed second face image.
  • the image preprocessing may include at least one of denoising, histogram equalization, and size normalization, and is of course not limited thereto.
  • the image preprocessing in the face biometric detection process can be the same as the image preprocessing performed on the training samples of the SVM classifier to ensure the validity of the detection.
  • specific implementations of denoising, histogram equalization, and size normalization are all well-known in the art, and are not limited herein.
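Of the preprocessing steps listed above, histogram equalization is the easiest to sketch. The following is a minimal illustration using the standard CDF-remapping formula for an 8-bit grayscale image; denoising and size normalization are analogous per-pixel or resampling passes and are omitted here.

```python
# Minimal histogram-equalisation sketch for an 8-bit grayscale image (one of
# the preprocessing steps listed above), using the standard CDF remapping:
# each gray level v maps to round((cdf(v) - cdf_min) / (N - cdf_min) * 255).

def equalize(image, levels=256):
    h, w = len(image), len(image[0])
    total = h * w
    hist = [0] * levels
    for row in image:
        for v in row:
            hist[v] += 1
    cdf, run = [], 0
    for c in hist:                      # cumulative distribution function
        run += c
        cdf.append(run)
    cdf_min = next(c for c in cdf if c > 0)
    def remap(v):
        if total == cdf_min:            # flat image: nothing to stretch
            return 0
        return round((cdf[v] - cdf_min) / (total - cdf_min) * (levels - 1))
    return [[remap(v) for v in row] for row in image]

dark = [[50, 50], [51, 52]]    # low-contrast patch
print(equalize(dark))          # stretched towards the full 0..255 range
```

Equalizing both the training samples and the detection-time images the same way, as the preceding paragraph notes, keeps the HOG features comparable between training and detection.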
  • the determined HOG feature is input into a pre-trained support vector machine SVM classifier for detection, and a face biometric detection result of the target object is obtained.
  • The determined HOG feature may be input into a pre-trained SVM classifier for detection, and a face liveness detection result of the target object is obtained, where the face liveness detection result may be: living face or non-living face.
  • the training process of the SVM classifier may include the following steps S401-S404:
  • S401. Obtain visible light images and infrared images of a plurality of samples, where the sample type of each sample is either positive or negative; a positive sample is a living object, and a negative sample is a non-living object.
  • The collection environments of the samples used to train the SVM classifier may cover various environments, such as indoor, outdoor, and strong backlight, so that the trained classifier can subsequently be applied to the detection of target objects in a variety of acquisition environments.
  • The non-living samples may include types such as photos and videos.
  • the photo may include black and white photos and color photos
  • the carrier of the video may be a display device such as a mobile phone or a tablet computer, which is not limited thereto.
  • The visible light image and the infrared image of each sample are two types of images of the sample captured at the same time.
  • two types of images can be collected by one device, and of course, the two types of images can be separately collected by two devices.
  • S402. For each sample, extract a first sample image from the visible light image of the sample, and extract a second sample image from the infrared image of the sample, where the first sample image and the second sample image are both images that contain only a face portion.
  • The process of extracting the first sample image from the visible light image of the sample may include: performing face recognition on the visible light image of the sample, and extracting the face region to obtain the first sample image. Similarly, the process of extracting the second sample image from the infrared image of the sample may include: performing face recognition on the infrared image of the sample, and extracting the face region to obtain the second sample image.
  • the face recognition algorithm may be used to perform face recognition on the visible light image and the infrared image of the sample, which is not limited herein.
  • If the face region is not recognized from the visible light image or the infrared image of a sample, it may be that the sample is non-living, or that an error occurred when acquiring the images of the sample. If the sample is non-living, the visible light image or infrared image in which no face region was recognized can be used directly as the corresponding sample image, and the subsequent processing steps are then performed.
  • S403. Determine, for each sample, an HOG feature of the sample based on each type of monochrome component data of the first sample image of the sample and monochrome component data of the second sample image of the sample;
  • That is, for each sample, the HOG features of the sample may be determined based on each type of monochrome component data of the first sample image of the sample and the monochrome component data of the second sample image of the sample.
  • the images corresponding to any of the monochrome component data are grayscale images, and any of the monochrome component data is a matrix of w*h. Where w is the number of pixels in the width and h is the number of pixels in the height.
  • In one embodiment, determining the HOG features of the sample based on each type of monochrome component data of the first sample image of the sample and the monochrome component data of the second sample image of the sample may include the following steps S501-S503:
  • S501. For the first sample image of the sample, perform HOG feature extraction on the third grayscale image corresponding to each type of monochrome component data, to obtain respective third HOG features of the sample;
  • S502. For the second sample image of the sample, perform HOG feature extraction on the fourth grayscale image corresponding to the monochrome component data, to obtain a fourth HOG feature of the sample;
  • S503. Use the respective third HOG features and the fourth HOG feature as the HOG features of the sample.
  • In another embodiment, the process of determining the HOG features of the sample based on each type of monochrome component data of the first sample image of the sample and the monochrome component data of the second sample image of the sample may include the following steps S601-S602:
  • S601. Perform dimensionality reduction processing on each type of monochrome component data of the first sample image of the sample and the monochrome component data of the second sample image, to obtain a target grayscale image of the sample;
  • S602. Perform HOG feature extraction on the target grayscale image of the sample, to obtain the HOG features of the sample.
  • the HOG feature of each sample is determined in the same manner as the HOG feature of the target object.
  • S404. Train the pre-initialized SVM classifier based on the HOG features of the respective samples and the sample types to which the respective samples belong, until the SVM classifier learns the relationship between the HOG features of the samples and the sample types.
  • For a specific implementation of training the pre-initialized SVM classifier based on the HOG features of the samples and the sample types to which they belong, reference may be made to any training method, well known to those skilled in the art, in which sample data is used to train an SVM classifier; details are not repeated here.
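One such well-known training method can be sketched from scratch: a linear SVM fitted by stochastic subgradient descent on the hinge loss. This is a hedged illustration, not the patent's implementation; real systems would typically use an off-the-shelf SVM library, and the toy 2-D "HOG features", labels, and hyper-parameters below are assumptions for demonstration.

```python
import random

# From-scratch sketch of the training step: a linear SVM fitted by stochastic
# subgradient descent on the regularised hinge loss. The toy 2-D features and
# hyper-parameters are illustrative assumptions, not patent data.

def train_svm(features, labels, eta=0.1, lam=0.001, epochs=200, seed=0):
    rng = random.Random(seed)
    dim = len(features[0])
    w, b = [0.0] * dim, 0.0
    idx = list(range(len(features)))
    for _ in range(epochs):
        rng.shuffle(idx)
        for i in idx:
            x, y = features[i], labels[i]    # y is +1 (live) or -1 (spoof)
            margin = y * (sum(wj * xj for wj, xj in zip(w, x)) + b)
            w = [wj * (1.0 - eta * lam) for wj in w]   # L2 regularisation
            if margin < 1:                             # hinge-loss subgradient
                w = [wj + eta * y * xj for wj, xj in zip(w, x)]
                b += eta * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Toy separable data: "live" samples cluster high, "spoof" samples low.
X = [[2.0, 2.2], [2.5, 1.9], [2.1, 2.4], [0.2, 0.3], [0.5, 0.1], [0.3, 0.4]]
Y = [1, 1, 1, -1, -1, -1]
w, b = train_svm(X, Y)
print([predict(w, b, x) for x in X])  # matches Y on this separable toy set
```

In the patent's pipeline, `X` would hold the HOG descriptors of the samples and `Y` their positive/negative sample types; detection (step S104) is then just `predict` applied to the target object's HOG feature.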
  • In the solution provided by the present application, based on multi-spectral analysis, images in the visible light band and the infrared band are used to fully characterize the target object; and, based on the statistical characteristics of the reflection of a real human face in these optical bands, face images are extracted from the visible light image corresponding to the visible light band and from the infrared image corresponding to the infrared band. The extracted face images are then analyzed by combining HOG features with an SVM classifier to determine whether the target object is a living face, so that whether the target object is a living body can be detected quickly and effectively.
  • the "second” in the “first face image” in the “first face image” in the above embodiment is only used to distinguish the target visible light image and the target from the naming.
  • the face images respectively extracted from the infrared images do not have any limiting meaning; similarly, the "first”, “second”, “third” and “fourth” in other contents appearing in the above embodiments are only It is used to distinguish from naming and does not have any limiting meaning.
  • Correspondingly, the present application also provides a face liveness detection apparatus.
  • the device may include the following functional units.
  • the image obtaining unit 710 is configured to obtain a target visible light image and a target infrared image about the target object to be detected.
  • the face image extracting unit 720 is configured to extract a first face image from the target visible light image, and extract a second face image from the target infrared image.
  • the first face image and the second face image are images that only include a face portion.
  • the HOG feature determining unit 730 is configured to determine an HOG feature of the target object based on each type of monochrome component data of the first face image and monochrome component data of the second face image.
  • the determining unit 740 is configured to input the determined HOG feature into the pre-trained support vector machine SVM classifier for detecting, to obtain a face biometric detection result of the target object.
  • In the solution provided by the present application, based on multi-spectral analysis, images in the visible light band and the infrared band are used to fully characterize the target object; and, based on the statistical characteristics of the reflection of a real human face in these optical bands, face images are extracted from the visible light image corresponding to the visible light band and from the infrared image corresponding to the infrared band. The extracted face images are then analyzed by combining HOG features with an SVM classifier to determine whether the target object is a living face, so that whether the target object is a living body can be detected quickly and effectively.
  • In one embodiment, the SVM classifier is trained by a classifier training unit, where the classifier training unit is specifically configured to: obtain visible light images and infrared images of a plurality of samples, where the sample type of each sample is either positive or negative, a positive sample being a living object and a negative sample being a non-living object; for each sample, extract a first sample image from the visible light image of the sample and a second sample image from the infrared image of the sample, the first sample image and the second sample image both being images containing only a face portion; for each sample, determine the HOG features of the sample based on each type of monochrome component data of the first sample image of the sample and the monochrome component data of the second sample image of the sample; and train the pre-initialized SVM classifier based on the HOG features of the respective samples and the sample types to which the respective samples belong.
  • In one embodiment, the HOG feature determination unit 730 is specifically configured to: perform HOG feature extraction on the first grayscale image corresponding to each type of monochrome component data of the first face image, to obtain respective first HOG features; perform HOG feature extraction on the second grayscale image corresponding to the monochrome component data of the second face image, to obtain a second HOG feature; and use the respective first HOG features and the second HOG feature as the HOG features of the target object.
  • In another embodiment, the HOG feature determination unit 730 is specifically configured to: perform dimensionality reduction processing on each type of monochrome component data of the first face image and the monochrome component data of the second face image, to obtain a target grayscale image; and perform HOG feature extraction on the target grayscale image, to obtain the HOG feature of the target object.
  • In yet another embodiment, the HOG feature determination unit 730 is specifically configured to: perform image preprocessing on the first face image and the second face image; and determine the HOG feature of the target object based on each type of monochrome component data of the preprocessed first face image and the monochrome component data of the preprocessed second face image.
  • the embodiment of the present application further provides an electronic device.
  • The electronic device includes an internal bus 810, a memory 820, a processor 830, and a communication interface 840.
  • the processor 830, the communication interface 840, and the memory 820 complete communication with each other through the internal bus 810.
  • The memory 820 is configured to store machine-executable instructions corresponding to the face liveness detection method.
  • The processor 830 is configured to read the machine-executable instructions from the memory 820 and execute them to implement the face liveness detection method provided by the present application, including: obtaining a target visible light image and a target infrared image of a target object to be detected; extracting, from the target visible light image, a first face image containing only a face portion, and extracting, from the target infrared image, a second face image containing only a face portion; determining the HOG feature of the target object based on each type of monochrome component data of the first face image and the monochrome component data of the second face image; and inputting the determined HOG feature into a pre-trained support vector machine (SVM) classifier for detection, to obtain a face liveness detection result of the target object.
  • As for the device embodiment, since it basically corresponds to the method embodiment, reference may be made to the partial description of the method embodiment for relevant details.
  • The device embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the present application. Those of ordinary skill in the art can understand and implement the solution without creative effort.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A face liveness detection method, device and electronic equipment. A target visible light image and a target infrared image of a target object to be detected are obtained (S101); a first face image containing only a face portion is extracted from the target visible light image, and a second face image containing only a face portion is extracted from the target infrared image (S102); a Histogram of Oriented Gradient (HOG) feature of the target object is determined based on the various types of monochrome component data of the first face image and the monochrome component data of the second face image (S103); and the determined HOG feature is input into an SVM classifier for detection, to obtain a face liveness detection result of the target object (S104).

Description

Face liveness detection
CROSS-REFERENCE TO RELATED APPLICATIONS
This patent application claims priority to Chinese Patent Application No. 201810029744.4, entitled "Face liveness detection method, device and electronic equipment" and filed on January 12, 2018, which is incorporated herein by reference in its entirety.
TECHNICAL FIELD
This application relates to the technical field of face recognition, and in particular to face liveness detection.
BACKGROUND
With the rapid development of biometric recognition technology, face recognition, fingerprint recognition and iris recognition play an important role in identity verification. Among these, face recognition is the most convenient technology and the one best suited to people's habits, and it is therefore widely used.
As an effective method of identity authentication today, face recognition has been applied in fields such as attendance, security, customs inspection, criminal investigation and banking systems. As its range of application expands, problems arise as well; for example, criminals may imitate a user's facial features during face authentication in order to deceive the system. Specifically, spoofing during face authentication mainly takes the following forms: using a stolen photo of the user, playing a video recorded in a public place or downloaded from the Internet, and so on. To authenticate identity more securely and to verify the authenticity of its source, it is particularly important to detect whether the recognized object is a living body.
SUMMARY
In view of this, the present application provides a face liveness detection method, device and electronic equipment, so as to quickly and effectively detect whether a target object is a living face.
In a first aspect, the present application provides a face liveness detection method, including: obtaining a target visible light image and a target infrared image of a target object to be detected; extracting a first face image containing only a face portion from the target visible light image, and extracting a second face image containing only a face portion from the target infrared image; determining a Histogram of Oriented Gradient (HOG) feature of the target object based on the various types of monochrome component data of the first face image and the monochrome component data of the second face image; and inputting the determined HOG feature into a pre-trained support vector machine (SVM) classifier for detection, to obtain a face liveness detection result of the target object.
In a second aspect, the present application provides a face liveness detection device, including: an image obtaining unit configured to obtain a target visible light image and a target infrared image of a target object to be detected; a face image extraction unit configured to extract a first face image containing only a face portion from the target visible light image, and to extract a second face image containing only a face portion from the target infrared image; a HOG feature determination unit configured to determine the HOG feature of the target object based on the various types of monochrome component data of the first face image and the monochrome component data of the second face image; and a determination unit configured to input the determined HOG feature into a pre-trained support vector machine (SVM) classifier for detection, to obtain a face liveness detection result of the target object.
In a third aspect, the present application provides an electronic device including an internal bus, a memory, a processor and a communication interface, wherein the processor, the communication interface and the memory communicate with one another via the internal bus; the memory, which may be a non-volatile storage medium, is configured to store machine executable instructions corresponding to the face liveness detection method; and the processor is configured to read the machine executable instructions from the memory and execute them to implement the face liveness detection method provided in the first aspect of the present application.
In the solution provided by the present application, based on multispectral analysis, images in the visible light band and the infrared band are used to comprehensively characterize the target object. Moreover, based on the statistical characteristics of the reflection of light bands by real human faces, face images are extracted from the visible light image corresponding to the visible light band and the infrared image corresponding to the infrared band, and the extracted face images are then analyzed by combining HOG features with an SVM classifier to determine whether the target object is a living face. The solution can therefore quickly and effectively detect whether a target object is a living face.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a flowchart of a face liveness detection method provided by the present application;
FIG. 2 is a flowchart of determining the HOG feature of the target object in FIG. 1 according to an embodiment;
FIG. 3 is a flowchart of determining the HOG feature of the target object in FIG. 1 according to another embodiment;
FIG. 4 is a flowchart of the training process of the support vector machine (SVM) classifier provided by the present application;
FIG. 5 is a flowchart of determining the HOG feature of a sample in FIG. 4 according to an embodiment;
FIG. 6 is a flowchart of determining the HOG feature of a sample in FIG. 4 according to yet another embodiment;
FIG. 7 is a schematic structural diagram of a face liveness detection device provided by the present application;
FIG. 8 is a schematic structural diagram of an electronic device provided by the present application.
DETAILED DESCRIPTION
The technical solutions in the embodiments of the present application will be described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the scope of protection of the present application.
Exemplary embodiments will be described in detail here, examples of which are shown in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application. Rather, they are merely examples of devices and methods consistent with some aspects of the present application as detailed in the appended claims.
The terms used in the present application are for the purpose of describing particular embodiments only and are not intended to limit the present application. The singular forms "a", "the" and "said" used in the present application and the appended claims are also intended to include the plural forms unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms "first", "second", "third" and so on may be used in the present application to describe various kinds of information, such information should not be limited to these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present application, first information may also be referred to as second information and, similarly, second information may also be referred to as first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "while" or "in response to determining".
The present application provides a face liveness detection method, device and electronic equipment, so as to quickly and effectively detect whether a target object is a living body.
The face liveness detection method provided by the present application is introduced first below.
It should be noted that the face liveness detection method provided by the present application can be applied to an electronic device. In a specific application, the electronic device may be a device capable of capturing both visible light images and infrared images, such as a camera or an attendance machine; alternatively, the electronic device may be a device that communicates with a device capable of capturing visible light images and infrared images, such as a server or a personal computer.
Furthermore, in view of the principles of multispectral imaging (different objects image differently in the same light band, and the same object images differently in different light bands), the present application uses a visible light image corresponding to the visible light band (380 nm-780 nm) and an infrared image corresponding to the infrared band (780 nm-1100 nm) to comprehensively characterize the image information of the target object.
In addition, the reflection of any light band by real human facial skin has significant and stable statistical characteristics, so that the grayscale value distribution in the grayscale image corresponding to the monochrome component data of a real face image is uniform and continuous; that is, the grayscale values do not change drastically. The grayscale image of a fake face, such as a photograph, shows exactly the opposite behavior. The present application therefore uses the grayscale gradient of the face image as the feature value to distinguish real faces from fake ones.
It should be emphasized that a face image extracted from an infrared image corresponding to the infrared band is little affected by external illumination, has stable features, and clearly shows information such as the pupils of the face; infrared images in the infrared band can therefore be used for face liveness detection. In a specific application, the infrared image may be captured at a band such as 850 nm or 940 nm. Moreover, an image in the infrared band can be stored by the image sensor directly as an 8-bit grayscale image with 256 levels.
As shown in FIG. 1, the face liveness detection method provided by the present application may include the following steps.
S101: obtaining a target visible light image and a target infrared image of a target object to be detected.
When it is necessary to detect whether a target object is a living body, the electronic device may obtain a target visible light image and a target infrared image of the target object to be detected, and then perform the subsequent face extraction process.
It should be noted that the target visible light image and the target infrared image of the target object are two types of images of the target object captured at the same moment. Provided that the target visible light image and the target infrared image are captured at the same moment, the two types of images may be captured by a single device; for example, the device may be a binocular camera in which one lens is provided with a filter so that the image sensor corresponding to that lens senses only the infrared band. Of course, the two types of images may also be captured by two devices, for example a visible light image capture device and a dedicated infrared image capture device.
The target visible light image may be captured by a CCD (Charge-coupled Device), a CMOS (Complementary Metal Oxide Semiconductor) sensor or another imaging sensor. The color space of the target visible light image obtained in the present application may be RGB, YUV, HSI or the like. In the RGB color space, a wide range of colors is obtained by varying and superimposing the three color channels red (R), green (G) and blue (B). In the YUV color space, "Y" denotes luminance (or luma), while "U" and "V" denote chrominance (or chroma). The HSI color space starts from the human visual system and describes color in terms of hue, saturation (or chroma) and intensity (or brightness). To ensure the effectiveness of detection, when the color space of the visible light image differs from that of the training samples of the support vector machine (SVM) classifier mentioned later, the target visible light image may first be converted to that color space, and the subsequent steps are then performed on the converted image. An SVM is a discriminative method and a supervised learning model in the field of machine learning, commonly used for pattern recognition, classification and regression analysis.
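As one illustration of such a color space conversion, the sketch below maps an RGB face image to Y, U and V planes using the widely used BT.601 relations. The sketch is not part of the original disclosure; the function name and the unquantized, full-range form of the formulas are assumptions for illustration.

```python
import numpy as np

def rgb_to_yuv(rgb):
    """Convert an H x W x 3 RGB array (float values in 0-255) to Y, U, V
    planes using the BT.601 relations:
        Y = 0.299 R + 0.587 G + 0.114 B
        U = 0.492 (B - Y)
        V = 0.877 (R - Y)
    Each returned plane is an H x W grayscale-like array."""
    rgb = np.asarray(rgb, dtype=np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return y, u, v
```

For an achromatic pixel (R = G = B) the chroma planes U and V are zero, which is a quick sanity check on the coefficients.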
S102: extracting a first face image from the target visible light image, and extracting a second face image from the target infrared image, wherein the first face image and the second face image are both images containing only a face portion.
After the target visible light image and the target infrared image are obtained, since the reflection of any light band by real human facial skin has significant and stable statistical characteristics, only the face portions of the target visible light image and the target infrared image need to be analyzed.
The process of extracting the first face image from the target visible light image may include: performing face recognition on the target visible light image and extracting the face region to obtain the first face image. Similarly, the process of extracting the second face image from the target infrared image may include: performing face recognition on the target infrared image and extracting the face region to obtain the second face image. Any face recognition algorithm well known to those skilled in the art may be used to perform face recognition on the target visible light image and the target infrared image, which is not limited in the present application.
If a face region is recognized in both the target visible light image and the target infrared image, the target object may be either a living body or a non-living body. However, if no face region is recognized in the target visible light image, or no face region is recognized in the target infrared image, it can be concluded that the target object is very likely a non-living body, or that the images of the target object were captured incorrectly. Therefore, in a specific application, when no face region can be recognized in the target visible light image or the target infrared image, the target object may be directly determined to be a non-living face and the process ends. Alternatively, when no face region can be recognized in the target visible light image or the target infrared image, the method may return to step S101 to re-obtain the target visible light image and the target infrared image of the target object.
As an example of the case where no face region can be recognized in the target infrared image: when the target object is a display screen showing a face, a face region can be recognized in the target visible light image, but no face region can be recognized in the target infrared image.
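The branching described above can be summarized in a small sketch. The function name and returned labels are hypothetical; whether a failed pre-check leads to direct rejection or to recapture is left as a policy choice in the text.

```python
def liveness_precheck(face_found_visible, face_found_infrared):
    """Hypothetical pre-check before HOG/SVM classification: only when a
    face region is found in BOTH the visible-light image and the infrared
    image does the pipeline proceed; otherwise the object can be rejected
    directly or the capture retried (e.g. a screen showing a face yields
    a visible-light detection but no infrared face)."""
    if face_found_visible and face_found_infrared:
        return "proceed"           # may still be live or spoof; classify further
    return "reject_or_recapture"   # policy choice per the description
```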
S103: determining a HOG (Histogram of Oriented Gradient) feature of the target object based on the various types of monochrome component data of the first face image and the monochrome component data of the second face image.
After the first face image and the second face image are obtained, the HOG feature of the target object may be determined based on the various types of monochrome component data of the first face image and the monochrome component data of the second face image. The image corresponding to any monochrome component data is a grayscale image, and any monochrome component data is a w*h matrix, where w is the number of pixels in the width direction and h is the number of pixels in the height direction. It can be understood that the types of monochrome component data of the first face image depend on its color space. For example, if the color space of the first face image is RGB, the various types of monochrome component data are the R component, the G component and the B component; if the color space of the first face image is YUV, they are the Y component, the U component and the V component.
In a specific implementation, as shown in FIG. 2, the step of determining the HOG feature of the target object based on the various types of monochrome component data of the first face image and the monochrome component data of the second face image may include the following steps S201-S203.
S201: performing HOG feature extraction separately on the first grayscale images corresponding to the various types of monochrome component data of the first face image, to obtain respective first HOG features;
S202: performing HOG feature extraction on the second grayscale image corresponding to the monochrome component data of the second face image, to obtain a second HOG feature;
S203: taking the respective first HOG features and the second HOG feature as the HOG feature of the target object.
In this implementation, since HOG feature extraction is performed separately on the various types of monochrome component data of the first face image and on the monochrome component data of the second face image, four sets of HOG features of the target object are obtained, namely three first HOG features and one second HOG feature.
The basic idea of HOG feature extraction is as follows: the image is first divided into small connected regions called cells; the histogram of the gradient directions or edge directions of the pixels in each cell is then collected; finally, these histograms are combined to form the feature descriptor. Any implementation of HOG feature extraction on a grayscale image that is well known to those skilled in the art may be used, which is not limited here.
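The cell-and-histogram idea described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: it assumes image dimensions divisible by the cell size, uses unsigned orientations in [0, 180) degrees, and omits the block normalization step found in full HOG implementations.

```python
import numpy as np

def hog_descriptor(gray, cell=8, bins=9):
    """Minimal HOG sketch for one grayscale image: per-pixel central-difference
    gradients, then one `bins`-bin orientation histogram per `cell` x `cell`
    cell, weighted by gradient magnitude; histograms are concatenated."""
    gray = np.asarray(gray, dtype=np.float64)
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]        # horizontal gradient
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]        # vertical gradient
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0    # unsigned orientation
    h, w = gray.shape
    feats = []
    for i in range(0, h, cell):
        for j in range(0, w, cell):
            m = mag[i:i + cell, j:j + cell].ravel()
            a = ang[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0.0, 180.0), weights=m)
            feats.append(hist)
    return np.concatenate(feats)
```

For a 16 x 16 image with the defaults, the descriptor has 4 cells x 9 bins = 36 entries; a flat image produces an all-zero descriptor, since every gradient magnitude is zero.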
Optionally, in another specific implementation, as shown in FIG. 3, the step of determining the HOG feature of the target object based on the various types of monochrome component data of the first face image and the monochrome component data of the second face image may include the following steps S301-S302.
S301: performing dimensionality-reduction processing on the various types of monochrome component data of the first face image and the monochrome component data of the second face image, to obtain a target grayscale image;
S302: performing HOG feature extraction on the target grayscale image, to obtain the HOG feature of the target object.
The so-called dimensionality-reduction processing means fusing the multi-dimensional monochrome component data into a single grayscale image. Specific approaches include, but are not limited to, computing a weighted average of the component data at the same pixel position across the various types of monochrome component data of the first face image and the monochrome component data of the second face image, so that each pixel position corresponds to one resulting value, thereby yielding a single grayscale image.
In this implementation, since HOG feature extraction is performed after the dimensionality-reduction processing of the various types of monochrome component data of the first face image and the monochrome component data of the second face image, one set of HOG features of the target object is obtained.
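The per-pixel weighted-average fusion described above can be sketched as follows. The function name and the equal-weight default are assumptions for illustration; the disclosure does not fix particular weights.

```python
import numpy as np

def fuse_components(components, weights=None):
    """Fuse several same-sized single-channel planes (e.g. the R, G, B
    components of the visible-light face image plus the infrared plane)
    into one grayscale image by a per-pixel weighted average, one possible
    form of the dimensionality reduction. Equal weights are assumed when
    none are given."""
    stack = np.stack([np.asarray(c, dtype=np.float64) for c in components])
    if weights is None:
        weights = np.full(len(components), 1.0 / len(components))
    weights = np.asarray(weights, dtype=np.float64).reshape(-1, 1, 1)
    return (stack * weights).sum(axis=0)   # one value per pixel position
```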
It should be emphasized that, in a specific application, the images captured by an image capture device are usually subject to noise, and images captured in different scenarios may have quite different imaging characteristics, such as resolution and size, all of which affect the detection process to some extent. To eliminate these influences, image preprocessing may be performed.
Based on this idea, the step of determining the HOG feature of the target object based on the various types of monochrome component data of the first face image and the monochrome component data of the second face image may include: performing image preprocessing on the first face image and the second face image; and determining the HOG feature of the target object based on the various types of monochrome component data of the preprocessed first face image and the monochrome component data of the preprocessed second face image.
The image preprocessing may include, but is not limited to, at least one of denoising, histogram equalization and size normalization. It can be understood that the image preprocessing performed during face liveness detection may be the same as that performed on the training samples of the SVM classifier, so as to ensure the effectiveness of detection. The specific implementations of denoising, histogram equalization and size normalization are well known to those skilled in the art and are not limited here.
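Two of the named preprocessing operations, histogram equalization and size normalization, can be sketched as follows. These are minimal stand-ins (global equalization, nearest-neighbor resampling); the disclosure does not prescribe any particular algorithm, and the function names are assumptions.

```python
import numpy as np

def equalize_hist(gray):
    """Global histogram equalization for an 8-bit grayscale image:
    stretch the cumulative histogram so intensities span 0-255."""
    gray = np.asarray(gray, dtype=np.uint8)
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    nonzero = cdf[cdf > 0]
    if nonzero.size == 0:
        return gray
    cdf_min = nonzero[0]
    lut = np.round((cdf - cdf_min) / max(gray.size - cdf_min, 1) * 255.0)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[gray]                      # map every pixel through the LUT

def resize_nearest(gray, out_h, out_w):
    """Size normalization by nearest-neighbor sampling."""
    gray = np.asarray(gray)
    h, w = gray.shape
    rows = np.arange(out_h) * h // out_h  # source row for each output row
    cols = np.arange(out_w) * w // out_w
    return gray[rows][:, cols]
```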
It should be emphasized that the above specific implementations of determining the HOG feature of the target object based on the various types of monochrome component data of the first face image and the monochrome component data of the second face image are merely examples and should not be construed as limiting the present application.
S104: inputting the determined HOG feature into a pre-trained support vector machine (SVM) classifier for detection, to obtain a face liveness detection result of the target object.
After the HOG feature is determined, it may be input into the pre-trained SVM classifier for detection to obtain the face liveness detection result of the target object, where the result may be either "living face" or "non-living face".
Specifically, as shown in FIG. 4, the training process of the SVM classifier may include the following steps S401-S404.
S401: obtaining visible light images and infrared images of a plurality of samples, wherein the sample types of the samples include positive samples and negative samples, a positive sample being a living object and a negative sample being a non-living object.
The samples used to train the SVM classifier may be collected in a variety of environments with different illumination conditions, such as indoors, outdoors and under strong backlight, so that the trained classifier is applicable to target objects captured in a variety of environments. The non-living samples may include photographs, videos and the like, where the photographs may include black-and-white and color photographs, and the carrier of a video may be a display device such as a mobile phone or a tablet computer, which is not limited here.
In addition, the visible light image and the infrared image of each sample are two types of images of the sample captured at the same moment. Provided that they are captured at the same moment, the two types of images may be captured by a single device or, of course, by two separate devices.
S402: for each sample, extracting a first sample image from the visible light image of the sample, and extracting a second sample image from the infrared image of the sample, wherein the first sample image and the second sample image are both images containing only a face portion.
For each sample, the process of extracting the first sample image from the visible light image of the sample may include: performing face recognition on the visible light image of the sample and extracting the face region to obtain the first sample image. Similarly, the process of extracting the second sample image from the infrared image of the sample may include: performing face recognition on the infrared image of the sample and extracting the face region to obtain the second sample image. Any face recognition algorithm well known to those skilled in the art may be used, which is not limited in the present application.
It can be understood that when no face region can be recognized in the visible light image or the infrared image of a sample, the sample may be a non-living body, or the images of the sample may have been captured incorrectly. If the sample is a non-living body, the visible light image or infrared image in which no face region was recognized may be used directly as the corresponding face image, and the subsequent processing steps are then performed.
S403: for each sample, determining the HOG feature of the sample based on the various types of monochrome component data of the first sample image of the sample and the monochrome component data of the second sample image of the sample.
For each sample, after the first sample image and the second sample image of the sample are obtained, the HOG feature of the sample may be determined based on the various types of monochrome component data of the first sample image and the monochrome component data of the second sample image. The image corresponding to any monochrome component data is a grayscale image, and any monochrome component data is a w*h matrix, where w is the number of pixels in the width direction and h is the number of pixels in the height direction.
In a specific implementation, as shown in FIG. 5, the process of determining the HOG feature of the sample based on the various types of monochrome component data of the first sample image of the sample and the monochrome component data of the second sample image of the sample may include the following steps S501-S503.
S501: for the first sample image of the sample, performing HOG feature extraction separately on the third grayscale images corresponding to the various types of monochrome component data, to obtain respective third HOG features of the sample;
S502: for the second sample image of the sample, performing HOG feature extraction on the fourth grayscale image corresponding to the monochrome component data, to obtain a fourth HOG feature of the sample;
S503: taking the respective third HOG features and the fourth HOG feature of the sample as the HOG feature of the sample.
In another specific implementation, as shown in FIG. 6, the process of determining the HOG feature of the sample based on the various types of monochrome component data of the first sample image of the sample and the monochrome component data of the second sample image of the sample may include the following steps S601-S602.
S601: performing dimensionality-reduction processing on the various types of monochrome component data of the first sample image of the sample and the monochrome component data of the second sample image, to obtain a target grayscale image of the sample;
S602: performing HOG feature extraction on the target grayscale image of the sample, to obtain the HOG feature of the sample.
It can be understood that, to ensure the effectiveness of detection, the HOG feature of each sample is determined in the same way as the HOG feature of the target object.
S404: training a pre-initialized SVM classifier based on the HOG features of the samples and the sample types to which the samples belong.
After the HOG features of the samples are obtained, the pre-initialized SVM classifier may be trained based on the HOG features of the samples and the sample types to which the samples belong, until the SVM classifier has learned the relationship between the HOG features and the sample types. Any specific method well known to those skilled in the art for training an SVM classifier with sample data may be used, and details are not repeated here.
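As a sketch of this training step, the following toy linear SVM is trained by full-batch sub-gradient descent on the hinge loss. It stands in for the library SVM implementations the text alludes to; the hyperparameters, function names, and the linear (rather than kernel) form are all assumptions for illustration, with labels +1 for living-face samples and -1 for non-living samples.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1, seed=0):
    """Toy linear SVM: minimize hinge loss + L2 penalty by full-batch
    sub-gradient descent. X is (n_samples, n_features) of HOG vectors,
    y is in {-1, +1}. Returns the weight vector and bias."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    w = rng.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1.0                    # margin violators
        if mask.any():
            grad_w = lam * w - (y[mask, None] * X[mask]).mean(axis=0)
            grad_b = -y[mask].mean()
        else:                                   # only the L2 term remains
            grad_w = lam * w
            grad_b = 0.0
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def svm_predict(w, b, X):
    """Classify by the sign of the decision function."""
    return np.where(np.asarray(X) @ w + b >= 0.0, 1, -1)
```

In practice a full library SVM (with kernel support and proper model selection) would be used; the sketch only shows how labeled HOG feature vectors drive the training.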
In the solution provided by the present application, based on multispectral analysis, images in the visible light band and the infrared band are used to comprehensively characterize the target object. Moreover, based on the statistical characteristics of the reflection of light bands by real human faces, face images are extracted from the visible light image corresponding to the visible light band and the infrared image corresponding to the infrared band, and the extracted face images are then analyzed by combining HOG features with an SVM classifier to determine whether the target object is a living face, so that whether the target object is a living face can be detected quickly and effectively.
It should be emphasized that "first" in "first face image" and "second" in "second face image" in the above embodiments are used only to distinguish, by name, the face images extracted from the target visible light image and the target infrared image respectively, and carry no limiting meaning. Similarly, "first", "second", "third" and "fourth" elsewhere in the above embodiments serve only to distinguish by name and carry no limiting meaning.
Corresponding to the above method embodiment, the present application further provides a face liveness detection device. As shown in FIG. 7, the device may include the following functional units.
An image obtaining unit 710, configured to obtain a target visible light image and a target infrared image of a target object to be detected.
A face image extraction unit 720, configured to extract a first face image from the target visible light image and to extract a second face image from the target infrared image, wherein the first face image and the second face image are both images containing only a face portion.
A HOG feature determination unit 730, configured to determine the HOG feature of the target object based on the various types of monochrome component data of the first face image and the monochrome component data of the second face image.
A determination unit 740, configured to input the determined HOG feature into a pre-trained support vector machine (SVM) classifier for detection, to obtain a face liveness detection result of the target object.
In the solution provided by the present application, based on multispectral analysis, images in the visible light band and the infrared band are used to comprehensively characterize the target object. Moreover, based on the statistical characteristics of the reflection of light bands by real human faces, face images are extracted from the visible light image corresponding to the visible light band and the infrared image corresponding to the infrared band, and the extracted face images are then analyzed by combining HOG features with an SVM classifier to determine whether the target object is a living face, so that whether the target object is a living face can be detected quickly and effectively.
Optionally, the SVM classifier is trained by a classifier training unit, the classifier training unit being specifically configured to: obtain visible light images and infrared images of a plurality of samples, wherein the sample types of the samples include positive samples and negative samples, a positive sample being a living object and a negative sample being a non-living object; for each sample, extract a first sample image from the visible light image of the sample and extract a second sample image from the infrared image of the sample, the first sample image and the second sample image both being images containing only a face portion; for each sample, determine the HOG feature of the sample based on the various types of monochrome component data of the first sample image of the sample and the monochrome component data of the second sample image of the sample; and train a pre-initialized SVM classifier based on the HOG features of the samples and the sample types to which the samples belong.
Optionally, in a specific implementation, the HOG feature determination unit 730 is specifically configured to: perform HOG feature extraction separately on the first grayscale images corresponding to the various types of monochrome component data of the first face image, to obtain respective first HOG features; perform HOG feature extraction on the second grayscale image corresponding to the monochrome component data of the second face image, to obtain a second HOG feature; and take the respective first HOG features and the second HOG feature as the HOG feature of the target object.
Optionally, in another specific implementation, the HOG feature determination unit 730 is specifically configured to: perform dimensionality-reduction processing on the various types of monochrome component data of the first face image and the monochrome component data of the second face image to obtain a target grayscale image; and perform HOG feature extraction on the target grayscale image to obtain the HOG feature of the target object.
Optionally, the HOG feature determination unit 730 is specifically configured to: perform image preprocessing on the first face image and the second face image; and determine the HOG feature of the target object based on the various types of monochrome component data of the preprocessed first face image and the monochrome component data of the preprocessed second face image.
Corresponding to the above method embodiment, an embodiment of the present application further provides an electronic device. As shown in FIG. 8, the electronic device includes an internal bus 810, a memory 820, a processor 830 and a communications interface 840, wherein the processor 830, the communications interface 840 and the memory 820 communicate with one another via the internal bus 810. The memory 820 is configured to store machine executable instructions corresponding to the face liveness detection method. The processor 830 is configured to read the machine executable instructions from the memory 820 and execute them to implement the face liveness detection method provided by the present application, including: obtaining a target visible light image and a target infrared image of a target object to be detected; extracting a first face image containing only a face portion from the target visible light image, and extracting a second face image containing only a face portion from the target infrared image; determining the HOG feature of the target object based on the various types of monochrome component data of the first face image and the monochrome component data of the second face image; and inputting the determined HOG feature into a pre-trained support vector machine (SVM) classifier for detection, to obtain a face liveness detection result of the target object.
In this embodiment, for a description of the specific steps of the face liveness detection method, reference may be made to the method embodiments provided in the present application, and details are not repeated here.
For the implementation of the functions and roles of the units in the above device, reference may be made to the implementation of the corresponding steps in the above method, and details are not repeated here.
Since the device embodiment basically corresponds to the method embodiment, reference may be made to the partial description of the method embodiment for relevant details. The device embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solution of the present application. Those of ordinary skill in the art can understand and implement the solution without creative effort.
The above are merely preferred embodiments of the present application and are not intended to limit the present application. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present application shall fall within the scope of protection of the present application.

Claims (15)

  1. A face liveness detection method, comprising:
    obtaining a target visible light image and a target infrared image of a target object to be detected;
    extracting, from the target visible light image, a first face image containing only a face portion;
    extracting, from the target infrared image, a second face image containing only a face portion;
    determining a Histogram of Oriented Gradient (HOG) feature of the target object based on various types of monochrome component data of the first face image and monochrome component data of the second face image; and
    inputting the determined HOG feature into a pre-trained support vector machine (SVM) classifier for detection, to obtain a face liveness detection result of the target object.
  2. The method according to claim 1, wherein the SVM classifier is trained by:
    obtaining visible light images and infrared images of a plurality of samples, wherein the sample types of the samples include positive samples and negative samples, a positive sample being a living object and a negative sample being a non-living object;
    for each of the samples,
    extracting, from the visible light image of the sample, a first sample image containing only a face portion;
    extracting, from the infrared image of the sample, a second sample image containing only a face portion; and
    determining a HOG feature of the sample based on various types of monochrome component data of the first sample image and monochrome component data of the second sample image; and
    training a pre-initialized SVM classifier based on the HOG feature of each of the samples and the sample type to which each of the samples belongs.
  3. The method according to claim 1 or 2, wherein determining the HOG feature of the target object based on the various types of monochrome component data of the first face image and the monochrome component data of the second face image comprises:
    performing HOG feature extraction separately on first grayscale images corresponding to the various types of monochrome component data of the first face image, to obtain respective first HOG features;
    performing HOG feature extraction on a second grayscale image corresponding to the monochrome component data of the second face image, to obtain a second HOG feature; and
    taking the respective first HOG features and the second HOG feature as the HOG feature of the target object.
  4. The method according to claim 1 or 2, wherein determining the HOG feature of the target object based on the various types of monochrome component data of the first face image and the monochrome component data of the second face image comprises:
    performing dimensionality-reduction processing on the various types of monochrome component data of the first face image and the monochrome component data of the second face image, to obtain a target grayscale image; and
    performing HOG feature extraction on the target grayscale image, to obtain the HOG feature of the target object.
  5. The method according to claim 1 or 2, wherein determining the HOG feature of the target object based on the various types of monochrome component data of the first face image and the monochrome component data of the second face image comprises:
    performing image preprocessing on the first face image and the second face image; and
    determining the HOG feature of the target object based on the various types of monochrome component data of the preprocessed first face image and the monochrome component data of the preprocessed second face image.
  6. A face liveness detection device, comprising:
    an image obtaining unit, configured to obtain a target visible light image and a target infrared image of a target object to be detected;
    a face image extraction unit, configured to extract, from the target visible light image, a first face image containing only a face portion, and to extract, from the target infrared image, a second face image containing only a face portion;
    a HOG feature determination unit, configured to determine a Histogram of Oriented Gradient (HOG) feature of the target object based on various types of monochrome component data of the first face image and monochrome component data of the second face image; and
    a determination unit, configured to input the determined HOG feature into a pre-trained support vector machine (SVM) classifier for detection, to obtain a face liveness detection result of the target object.
  7. The device according to claim 6, wherein the SVM classifier is trained by a classifier training unit, the classifier training unit being specifically configured to:
    obtain visible light images and infrared images of a plurality of samples, wherein the sample types of the samples include positive samples and negative samples, a positive sample being a living object and a negative sample being a non-living object;
    for each of the samples,
    extract, from the visible light image of the sample, a first sample image containing only a face portion;
    extract, from the infrared image of the sample, a second sample image containing only a face portion; and
    determine a HOG feature of the sample based on various types of monochrome component data of the first sample image and monochrome component data of the second sample image; and
    train a pre-initialized SVM classifier based on the HOG feature of each of the samples and the sample type to which each of the samples belongs.
  8. The device according to claim 6 or 7, wherein the HOG feature determination unit is specifically configured to:
    perform HOG feature extraction separately on first grayscale images corresponding to the various types of monochrome component data of the first face image, to obtain respective first HOG features;
    perform HOG feature extraction on a second grayscale image corresponding to the monochrome component data of the second face image, to obtain a second HOG feature; and
    take the respective first HOG features and the second HOG feature as the HOG feature of the target object.
  9. The device according to claim 6 or 7, wherein the HOG feature determination unit is specifically configured to:
    perform dimensionality-reduction processing on the various types of monochrome component data of the first face image and the monochrome component data of the second face image, to obtain a target grayscale image; and
    perform HOG feature extraction on the target grayscale image, to obtain the HOG feature of the target object.
  10. The device according to claim 6 or 7, wherein the HOG feature determination unit is specifically configured to:
    perform image preprocessing on the first face image and the second face image; and
    determine the HOG feature of the target object based on the various types of monochrome component data of the preprocessed first face image and the monochrome component data of the preprocessed second face image.
  11. An electronic device, comprising an internal bus, a memory, a processor and a communication interface, wherein
    the processor, the communication interface and the memory communicate with one another via the internal bus;
    the memory is configured to store machine executable instructions corresponding to a face liveness detection method; and
    the processor is configured to read the machine executable instructions from the memory and execute the instructions to implement:
    obtaining a target visible light image and a target infrared image of a target object to be detected;
    extracting, from the target visible light image, a first face image containing only a face portion, and extracting, from the target infrared image, a second face image containing only a face portion;
    determining a Histogram of Oriented Gradient (HOG) feature of the target object based on various types of monochrome component data of the first face image and monochrome component data of the second face image; and
    inputting the determined HOG feature into a pre-trained support vector machine (SVM) classifier for detection, to obtain a face liveness detection result of the target object.
  12. The electronic device according to claim 11, wherein the SVM classifier is trained by:
    obtaining visible light images and infrared images of a plurality of samples, wherein the sample types of the samples include positive samples and negative samples, a positive sample being a living object and a negative sample being a non-living object;
    for each of the samples,
    extracting, from the visible light image of the sample, a first sample image containing only a face portion;
    extracting, from the infrared image of the sample, a second sample image containing only a face portion; and
    determining a HOG feature of the sample based on various types of monochrome component data of the first sample image and monochrome component data of the second sample image; and
    training a pre-initialized SVM classifier based on the HOG feature of each of the samples and the sample type to which each of the samples belongs.
  13. The electronic device according to claim 11, wherein determining the HOG feature of the target object based on the various types of monochrome component data of the first face image and the monochrome component data of the second face image comprises:
    performing HOG feature extraction separately on first grayscale images corresponding to the various types of monochrome component data of the first face image, to obtain respective first HOG features;
    performing HOG feature extraction on a second grayscale image corresponding to the monochrome component data of the second face image, to obtain a second HOG feature; and
    taking the respective first HOG features and the second HOG feature as the HOG feature of the target object.
  14. The electronic device according to claim 11, wherein determining the HOG feature of the target object based on the various types of monochrome component data of the first face image and the monochrome component data of the second face image comprises:
    performing dimensionality-reduction processing on the various types of monochrome component data of the first face image and the monochrome component data of the second face image, to obtain a target grayscale image; and
    performing HOG feature extraction on the target grayscale image, to obtain the HOG feature of the target object.
  15. The electronic device according to claim 11, wherein determining the HOG feature of the target object based on the various types of monochrome component data of the first face image and the monochrome component data of the second face image comprises:
    performing image preprocessing on the first face image and the second face image; and
    determining the HOG feature of the target object based on the various types of monochrome component data of the preprocessed first face image and the monochrome component data of the preprocessed second face image.
PCT/CN2018/122525 2018-01-12 2018-12-21 Face liveness detection WO2019137178A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810029744.4A 2018-01-12 2018-01-12 Face liveness detection method, device and electronic equipment
CN201810029744.4 2018-01-12

Publications (1)

Publication Number Publication Date
WO2019137178A1 true WO2019137178A1 (zh) 2019-07-18

Family

ID=67218794

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/122525 WO2019137178A1 (zh) 2018-01-12 2018-12-21 人脸活体检测

Country Status (2)

Country Link
CN (1) CN110032915A (zh)
WO (1) WO2019137178A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110956080A (zh) * 2019-10-14 2020-04-03 北京海益同展信息科技有限公司 Image processing method and apparatus, electronic device and storage medium
CN111160257A (zh) * 2019-12-30 2020-05-15 河南中原大数据研究院有限公司 Monocular face liveness detection method robust to illumination changes
CN111191519A (zh) * 2019-12-09 2020-05-22 同济大学 Liveness detection method for user access to mobile power supply devices
CN111860357A (zh) * 2020-07-23 2020-10-30 中国平安人寿保险股份有限公司 Attendance calculation method and apparatus based on liveness recognition, terminal and storage medium
CN113239762A (zh) * 2021-04-29 2021-08-10 中国农业大学 Liveness detection method and apparatus based on visual and infrared signals

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111126229A (zh) * 2019-12-17 2020-05-08 中国建设银行股份有限公司 Data processing method and apparatus
CN111104917A (zh) * 2019-12-24 2020-05-05 杭州魔点科技有限公司 Face-based liveness detection method and apparatus, electronic device and medium
CN111291685B (zh) * 2020-02-10 2023-06-02 支付宝实验室(新加坡)有限公司 Training method and apparatus for a face detection model
CN113449567B (zh) * 2020-03-27 2024-04-02 深圳云天励飞技术有限公司 Face temperature detection method and apparatus, electronic device and storage medium
CN111811432A (zh) * 2020-06-16 2020-10-23 中国民用航空飞行学院 Three-dimensional imaging system and method
CN113724091A (zh) * 2021-08-13 2021-11-30 健医信息科技(上海)股份有限公司 Insurance claim settlement method and apparatus
CN114445925B (zh) * 2022-04-11 2022-07-22 深圳市润璟元信息科技有限公司 Intelligent face recognition attendance system capable of automatic loading and deletion

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102332093A (zh) * 2011-09-19 2012-01-25 汉王科技股份有限公司 Identity authentication method and apparatus based on fused palm print and face recognition
CN105069448A (zh) * 2015-09-29 2015-11-18 厦门中控生物识别信息技术有限公司 Method and apparatus for distinguishing real faces from fake ones

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103440478B (zh) * 2013-08-27 2016-08-10 电子科技大学 Face detection method based on HOG features
CN103886301B (zh) * 2014-03-28 2017-01-18 北京中科奥森数据科技有限公司 Face liveness detection method
CN105868677B (zh) * 2015-01-19 2022-08-30 创新先进技术有限公司 Live face detection method and apparatus
CN104778474B (zh) * 2015-03-23 2019-06-07 四川九洲电器集团有限责任公司 Classifier construction method for object detection and object detection method
CN104794464B (zh) * 2015-05-13 2019-06-07 上海依图网络科技有限公司 Liveness detection method based on relative attributes
CN105046224A (zh) * 2015-07-16 2015-11-11 东华大学 Face recognition method based on block-adaptive weighted histogram of oriented gradient features
KR101653278B1 (ko) * 2016-04-01 2016-09-01 수원대학교산학협력단 Real-time face tracking system using color-based face detection
CN106407900B (zh) * 2016-08-31 2019-04-19 上海交通大学 Abnormal scene recognition method based on multi-source aerial photographs
CN107292246A (zh) * 2017-06-05 2017-10-24 河海大学 Infrared human target recognition method based on HOG-PCA and transfer learning
CN107358181A (zh) * 2017-06-28 2017-11-17 重庆中科云丛科技有限公司 Monocular infrared and visible light camera apparatus and method for face liveness determination
CN107392187B (zh) * 2017-08-30 2020-08-11 西安建筑科技大学 Face liveness detection method based on histograms of oriented gradients

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102332093A (zh) * 2011-09-19 2012-01-25 汉王科技股份有限公司 Identity authentication method and apparatus based on fused palm print and face recognition
CN105069448A (zh) * 2015-09-29 2015-11-18 厦门中控生物识别信息技术有限公司 Method and apparatus for distinguishing real faces from fake ones

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110956080A (zh) * 2019-10-14 2020-04-03 北京海益同展信息科技有限公司 Image processing method and apparatus, electronic device and storage medium
CN110956080B (zh) * 2019-10-14 2023-11-03 京东科技信息技术有限公司 Image processing method and apparatus, electronic device and storage medium
CN111191519A (zh) * 2019-12-09 2020-05-22 同济大学 Liveness detection method for user access to mobile power supply devices
CN111191519B (zh) * 2019-12-09 2023-11-24 同济大学 Liveness detection method for user access to mobile power supply devices
CN111160257A (zh) * 2019-12-30 2020-05-15 河南中原大数据研究院有限公司 Monocular face liveness detection method robust to illumination changes
CN111160257B (zh) * 2019-12-30 2023-03-24 潘若鸣 Monocular face liveness detection method robust to illumination changes
CN111860357A (zh) * 2020-07-23 2020-10-30 中国平安人寿保险股份有限公司 Attendance calculation method and apparatus based on liveness recognition, terminal and storage medium
CN111860357B (zh) * 2020-07-23 2024-05-14 中国平安人寿保险股份有限公司 Attendance calculation method and apparatus based on liveness recognition, terminal and storage medium
CN113239762A (zh) * 2021-04-29 2021-08-10 中国农业大学 Liveness detection method and apparatus based on visual and infrared signals

Also Published As

Publication number Publication date
CN110032915A (zh) 2019-07-19

Similar Documents

Publication Publication Date Title
WO2019137178A1 (zh) Face liveness detection
WO2019134536A1 (zh) Face liveness detection based on a neural network model
Patel et al. Secure face unlock: Spoof detection on smartphones
US10810423B2 (en) Iris liveness detection for mobile devices
US9652663B2 (en) Using facial data for device authentication or subject identification
Boulkenafet et al. On the generalization of color texture-based face anti-spoofing
Wen et al. Face spoof detection with image distortion analysis
CN110889312B (zh) Liveness detection method and apparatus, electronic device, computer-readable storage medium
US20230274577A1 (en) Device and method with image matching
JP6242888B2 (ja) 顔検証のためのシステムおよび方法
WO2016010720A1 (en) Multispectral eye analysis for identity authentication
WO2016010721A1 (en) Multispectral eye analysis for identity authentication
WO2016084072A1 (en) Anti-spoofing system and methods useful in conjunction therewith
WO2016010724A1 (en) Multispectral eye analysis for identity authentication
CN112052831A (zh) Face detection method, apparatus and computer storage medium
US11315360B2 (en) Live facial recognition system and method
Wasnik et al. Presentation attack detection for smartphone based fingerphoto recognition using second order local structures
Song et al. Face liveness detection based on joint analysis of rgb and near-infrared image of faces
KR20130111021A (ko) Image processing apparatus and image processing method
He et al. Face Spoofing Detection Based on Combining Different Color Space Models
Benlamoudi Multi-modal and anti-spoofing person identification
Abaza et al. Human ear detection in the thermal infrared spectrum
CN108875472B (zh) Image capture apparatus and face identity verification method based on the image capture apparatus
JP3962517B2 (ja) Face detection method and apparatus, computer-readable medium
Nourmohammadi-Khiarak et al. An ear anti-spoofing database with various attacks

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18899936

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18899936

Country of ref document: EP

Kind code of ref document: A1