CN106778518B - Face living body detection method and device

Publication number: CN106778518B (application CN201611053558.1A; prior publication CN106778518A)
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 刘昌平, 孙旭东, 黄磊
Assignee: Hanwang Technology Co Ltd
Legal status: Active (granted)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G06V40/168 Feature extraction; Face representation
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive

Abstract

The invention provides a face liveness detection method, belongs to the field of biometric recognition, and solves the problem of low recognition accuracy of prior-art face liveness detection methods. The method comprises the following steps: acquiring a first image and a second image that both contain the face to be detected and whose acquisition times differ by less than a preset duration, wherein the first image is acquired in an environment in which the active light source is on and the second image in an environment in which the active light source is off; determining the region to be detected in each of the first image and the second image, and acquiring the difference image of the two regions to be detected; extracting features to be recognized from the difference image; and finally, inputting the features to be recognized into a pre-trained classifier to perform face liveness detection. The region to be detected is centered on the face to be detected and comprises the face together with part of the surrounding background.

Description

Face living body detection method and device
Technical Field
The invention relates to the field of biometric recognition, and in particular to a face liveness detection method and a face liveness detection apparatus.
Background
Biometric identification technology is widely applied in many fields of daily life. Among its branches, face recognition is the most widely used because feature acquisition is convenient and hygienic; for example, face recognition is applied in security and access control. As the application fields of face recognition expand, more and more methods of attacking face recognition appear. Common attacks simulate a face in front of the recognition device using media such as face photos, videos and 3D mask models. Since most attacks on face recognition use such non-living media, performing liveness detection on the face to be recognized in order to resist these attacks is a problem that urgently needs to be solved.
In the prior art, face liveness detection methods fall mainly into three categories: methods based on texture features, methods based on motion features, and methods based on other features. Among them, methods based on motion features have low recognition accuracy when a video is used as the attack medium, while methods based on texture or other features are strongly affected by illumination, so their recognition accuracy is unstable.
In summary, prior-art face liveness detection methods suffer at least from a limited range of applicable attack media and low detection and recognition accuracy.
Disclosure of Invention
Embodiments of the invention provide a face liveness detection method and apparatus to solve the problem of low recognition accuracy of existing face liveness detection methods.
In a first aspect, an embodiment of the present invention provides a face liveness detection method, applied to an electronic device having an active light source, the method comprising:
acquiring a first image and a second image that both contain the face to be detected and whose acquisition times differ by less than a preset duration;
acquiring a difference image between the respective regions to be detected of the first image and the second image;
extracting features to be recognized from the difference image and performing face liveness detection;
wherein the first image is acquired in an environment in which the active light source is on, and the second image in an environment in which the active light source is off.
In a second aspect, an embodiment of the present invention further provides a face liveness detection apparatus, applied to an electronic device having an active light source, the apparatus comprising:
an image acquisition module, configured to acquire a first image and a second image that both contain the face to be detected and whose acquisition times differ by less than a preset duration;
a difference image acquisition module, configured to acquire a difference image between the respective regions to be detected of the first image and the second image;
a face liveness detection module, configured to extract features to be recognized from the difference image and perform face liveness detection;
wherein the first image is acquired in an environment in which the active light source is on, and the second image in an environment in which the active light source is off.
In a third aspect, an embodiment of the present invention further provides an electronic device that comprises an active light source and the face liveness detection apparatus of the embodiments of the present invention.
In this way, the face liveness detection method disclosed in the embodiments of the present invention acquires a first image and a second image that both contain the face to be detected and whose acquisition times differ by less than a preset duration, the first image being acquired with the active light source on and the second with it off; obtains the difference image between the respective regions to be detected of the two images; extracts features to be recognized from the difference image; and finally inputs those features into a pre-trained classifier to perform face liveness detection, thereby solving the low recognition accuracy of prior-art face liveness detection methods. Because liveness detection is performed on features extracted from a region that contains both the face and the background around it in the difference image of the two images, the accuracy of face liveness detection is effectively improved.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in describing the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present invention; those skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a flowchart of a face liveness detection method according to a first embodiment of the present invention;
FIG. 2 is a flowchart of a face liveness detection method according to a second embodiment of the present invention;
FIG. 3 is a schematic diagram of the region to be detected determined in the second embodiment of the present invention;
FIG. 4 is a schematic diagram of the face region and the non-face region in a difference image according to the second embodiment of the present invention;
FIG. 5 is a flowchart of a face liveness detection method according to a third embodiment of the present invention;
FIG. 6 is a flowchart of a face liveness detection method according to a fourth embodiment of the present invention;
FIG. 7 is a schematic diagram of a face liveness detection apparatus according to a fifth embodiment of the present invention;
FIG. 8 is a structural diagram of a face liveness detection apparatus according to a sixth embodiment of the present invention;
FIG. 9 is a structural diagram of a face liveness detection apparatus according to the sixth embodiment of the present invention;
FIG. 10 is a structural diagram of a face liveness detection apparatus according to a seventh embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are some, not all, of the embodiments of the present invention. All other embodiments that a person skilled in the art can derive from them without creative effort fall within the protection scope of the present invention.
The first embodiment is as follows:
This embodiment provides a face liveness detection method, applied to an electronic device having an active light source. As shown in fig. 1, the method includes steps 10 to 12.
Step 10, acquiring a first image and a second image that both contain the face to be detected and whose acquisition times differ by less than a preset duration.
In specific implementation, the active light source may be an active infrared light source, such as an infrared LED fill light, or an active visible light source, such as an LED whose emission wavelength lies in the visible range.
The front panel of an electronic device for face recognition is usually fitted with a camera for acquiring the face image to be recognized, and the electronic device used with the invention is additionally fitted with an active light source. During face recognition, the device turns the active light source on to supplement the lighting on the face being acquired and thus improve the quality of the acquired face image.
First, the first image and the second image of the face to be detected are acquired through the camera within the preset duration. The first image is acquired in an environment in which the active light source is on, and the second image in an environment in which it is off. In specific implementation, either image may be acquired first; the invention does not limit the order. The time difference between acquiring the two images must stay within a preset duration, such as 500 milliseconds, so that if the device images a live face, the first and second images differ somewhat but not greatly. In specific implementation, a preliminary face detection can be run on the acquired images; if either the first or the second image contains no face, both images are discarded and acquisition is repeated.
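As an illustration of this acquisition step, a minimal sketch follows, assuming OpenCV for frame capture; set_active_light() is a hypothetical placeholder for the device-specific control of the fill light, which the patent does not specify:

```python
import time
import cv2

def set_active_light(on: bool) -> None:
    # Hypothetical placeholder: the patent does not specify an API for
    # switching the active (e.g. infrared) fill light on or off.
    ...

def acquire_image_pair(cap: cv2.VideoCapture, max_interval_s: float = 0.5):
    """Capture the second image (light off) and the first image (light on)
    within the preset duration; return (first, second) or None to retry."""
    set_active_light(False)
    t0 = time.time()
    ok2, second = cap.read()   # image acquired with the active light source off
    set_active_light(True)
    ok1, first = cap.read()    # image acquired with the active light source on
    set_active_light(False)
    if not (ok1 and ok2) or time.time() - t0 > max_interval_s:
        return None            # interval exceeded or capture failed: reacquire
    return first, second
```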
Step 11, acquiring the difference image between the respective regions to be detected of the first image and the second image.
Firstly, regions to be detected in the first image and the second image are respectively determined.
The region to be detected is centered on the face to be detected and comprises the face together with part of the surrounding background. In specific implementation, it is a rectangular region centered on the face region. For the acquired first and second images, a face detection algorithm locates the face and the coordinates of the two eyes; the eye coordinates serve as reference points, and the pixel distance between the two eyes serves as the unit length. The picture is rotated so that the line connecting the eyes is horizontal; the position one unit length to the left of the left eye is taken as the left boundary, one unit length to the right of the right eye as the right boundary, one unit length above the eye baseline as the upper boundary, and two unit lengths below the eyes as the lower boundary, which determines a minimal rectangular region as the face region. The located face region is then expanded outward by a preset number of pixels in each of the four directions (up, down, left, right), giving a larger rectangle that is centered on the face region (i.e., the minimal rectangle containing the eyes) and includes part of the surrounding background; this rectangle is the region to be detected. The preset number of pixels is determined from the pixel sizes of the first and second images and of the face region; for example, it may be the value obtained by subtracting a quarter of the pixel width of the face region from the pixel width of the first image. In specific implementation, the face region in an image can be determined by face template matching, or detected with an AdaBoost algorithm using Haar-like features. Locating the face region in an image can follow any prior-art scheme and is not detailed here.
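A sketch of the geometry just described (eye line rotated horizontal, inter-eye distance as the unit length, one unit of margin to the left, right and top, two units below, then expansion by a preset number of pixels); it assumes OpenCV, and the helper names are illustrative rather than taken from the patent:

```python
import cv2
import numpy as np

def face_rect_from_eyes(img, left_eye, right_eye):
    """Rotate the image so the eye line is horizontal, then return the
    rotated image and the minimal face rectangle (x, y, w, h) built from
    the inter-eye distance: 1 unit left/right/above, 2 units below."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    angle = np.degrees(np.arctan2(ry - ly, rx - lx))
    center = ((lx + rx) / 2.0, (ly + ry) / 2.0)
    rot = cv2.getRotationMatrix2D(center, angle, 1.0)
    rotated = cv2.warpAffine(img, rot, (img.shape[1], img.shape[0]))
    # eye positions after rotation
    pts = np.array([[lx, ly, 1.0], [rx, ry, 1.0]])
    (lx, ly), (rx, ry) = (rot @ pts.T).T
    unit = rx - lx                      # inter-eye distance = unit length
    x0, x1 = lx - unit, rx + unit       # 1 unit beyond each eye
    y0, y1 = ly - unit, ly + 2 * unit   # 1 unit above, 2 units below
    return rotated, (int(x0), int(y0), int(x1 - x0), int(y1 - y0))

def expand_region(rect, pad, img_shape):
    """Expand the face rectangle by `pad` pixels on all four sides to get
    the region to be detected, clipped to the image bounds."""
    x, y, w, h = rect
    H, W = img_shape[:2]
    x0, y0 = max(x - pad, 0), max(y - pad, 0)
    x1, y1 = min(x + w + pad, W), min(y + h + pad, H)
    return x0, y0, x1 - x0, y1 - y0
```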
Then, a difference image of the region to be detected in the first image and the second image is obtained.
For face pictures acquired in different batches and at different distances, the sizes of the regions to be detected determined in the previous step may differ between the first and second images, so the regions to be detected of all acquired images are first normalized to the same preset size. The difference image of the regions to be detected in the first and second images is then computed.
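A minimal sketch of this step, assuming grayscale inputs and a 128 × 128 normalized size (the patent only requires "the same preset size"):

```python
import cv2
import numpy as np

def difference_image(roi_on: np.ndarray, roi_off: np.ndarray,
                     size=(128, 128)) -> np.ndarray:
    """Normalize both regions to a common size and subtract the
    light-off region from the light-on region (I_D = I^(L) - I),
    clipping negatives that stem from noise or slight motion."""
    a = cv2.resize(roi_on, size).astype(np.int16)
    b = cv2.resize(roi_off, size).astype(np.int16)
    return np.clip(a - b, 0, 255).astype(np.uint8)
```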
Step 12, extracting the features to be recognized from the difference image and performing face liveness detection.
First, the features to be recognized are extracted from the difference image. The feature extracted from the obtained difference image may be one of a face context feature, a texture feature, or an illumination feature of preset dimensionality. The face context feature may consist of information entropies extracted from the face region and the non-face region of the difference image. Texture features include, for example, LBP (Local Binary Pattern) features, DCT (Discrete Cosine Transform) features and Gabor features. The illumination feature is a statistical feature containing statistical distribution information of the illumination over the face region of the difference image.
To further improve the accuracy of liveness detection, extracting the features to be recognized from the obtained difference image may also combine a context feature or a texture feature with an illumination feature, and use the combination for face liveness detection.
After the features to be recognized are extracted, they are input into a pre-trained classifier to perform face liveness detection. Before liveness detection is performed, the classifier is first trained.
In specific implementation, a sample set is formed from collected positive samples (i.e., live-face images) and negative samples (i.e., images of face photos, videos, mask models and the like); then the features to be recognized, such as the face context feature, the illumination feature, or the context feature combined with the illumination feature described in step 12, are extracted from each sample; finally, a classifier, such as a Support Vector Machine (SVM) classifier, is trained on the extracted features.
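A sketch of this training procedure using scikit-learn's SVC as one possible SVM implementation (the patent names an SVM but no specific library); extract_features stands for any of the feature extractors described above:

```python
import numpy as np
from sklearn.svm import SVC

def train_liveness_classifier(diff_images, labels, extract_features):
    """Train an SVM on features extracted from difference images.
    labels: 1 for live-face (positive) samples, 0 for attack media
    such as photos, videos or mask models (negative samples)."""
    X = np.array([extract_features(d) for d in diff_images])
    y = np.array(labels)
    clf = SVC(kernel="rbf")   # kernel choice is an assumption
    clf.fit(X, y)
    return clf
```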
During face liveness detection, the features to be recognized are extracted from the difference image and input into the pre-trained classifier. The classifier evaluates the input features and outputs whether the image contains a live face or a non-live face.
The embodiment of the invention thus discloses a face liveness detection method that acquires a first image and a second image that both contain the face to be detected and whose acquisition times differ by less than a preset duration, the first image acquired with the active light source on and the second with it off; obtains the difference image between the respective regions to be detected of the two images; and extracts features to be recognized from the difference image for face liveness detection, thereby solving the low recognition accuracy of prior-art methods. Because liveness detection uses features extracted from a region that contains both the face and the surrounding background in the difference image of the two images, the accuracy of face liveness detection is effectively improved.
Embodiment two:
Referring to fig. 2, the face liveness detection method disclosed in another embodiment of the present invention is applied to an electronic device having an active light source, and includes steps 20 to 24.
Step 20, acquiring a first image and a second image that both contain the face to be detected and whose acquisition times differ by less than a preset duration.
The first image is acquired in an environment in which the active light source is on, and the second image in an environment in which it is off. In specific implementation, the active light source may be an active infrared light source; this embodiment describes the face liveness detection method in detail taking an active infrared light source as the example, so the first image is acquired with the active infrared light source on. In this embodiment, the electronic device carries an infrared camera with a resolution of 1280 × 720 pixels, and an active infrared light source, for example a near-infrared LED with a wavelength of 850 nm, is mounted around the camera. During face recognition the distance between the face and the camera is 30-80 cm.
Normally the active infrared light source is off, so the camera may first be controlled to capture the second image, i.e., an image in the environment with the active light source off. The active infrared light source is then switched on and the camera captures the first image, i.e., an image in the environment with the active light source on. A preset time interval separates the two acquisitions.
Step 21, determining the regions to be detected in the first image and the second image respectively.
The region to be detected is a rectangular region centered on the face region that includes part of the background around the face. Determining the regions to be detected in the first and second images comprises: determining the face region in each image, and expanding the face region outward by a preset size to obtain the region to be detected.
The face regions in the first and second images can be determined with prior-art methods; the face region located by such methods is the minimal rectangular region containing the eyes. For example, the Viola-Jones detector of OpenCV can perform face detection on the first and second images and locate the positions of the eyes; the face region is then determined from the located eye positions as the minimal rectangle containing the eyes, shown as 301 in fig. 3. Other prior-art methods can equally be used to determine the face regions in the two images, which is not repeated here.
In specific implementation, to improve the accuracy of eye localization, locating the eyes in the first and second images includes: applying a gamma transform to each image to adjust its illumination, and then performing face localization on the illumination-adjusted images. In most cases the image acquired without the active light source is darker and the image acquired with it is brighter; therefore, before eye localization, the acquired images can be preprocessed by an automatic illumination adjustment algorithm, applying gamma compression to the darker second image and gamma expansion to the brighter first image to ease face detection and localization. Face detection is performed on the gamma-transformed images with the Viola-Jones detector of OpenCV and the eye positions are located; the face region, usually the minimal rectangle containing the eyes, is then determined from them, and the face region positions in the first and second images follow from the located regions.
The located face region is then expanded outward by a preset number of pixels in each of the four directions (up, down, left, right) to determine a larger rectangle, centered on the face region and including part of the surrounding background, as the region to be detected, shown as 302 in fig. 3. The preset number of pixels is determined from the pixel sizes of the first and second images and of the face region; for example, it may be the value obtained by subtracting a quarter of the pixel width of the face region from the pixel width of the first image. If the face region is small relative to the image containing it, e.g. smaller than 1/2 of the size of the first or second image, the preset number of pixels may be set to 1/2 of the pixel width of the face rectangle.
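A sketch of the preprocessing and localization just described, using gamma transforms and OpenCV's bundled Viola-Jones Haar cascades; the gamma values are illustrative, not from the patent:

```python
import cv2
import numpy as np

def gamma_adjust(gray: np.ndarray, gamma: float) -> np.ndarray:
    """out = in ** gamma on [0, 1]: gamma < 1 (compression) brightens a
    dark image; gamma > 1 (expansion) darkens a bright one."""
    norm = gray.astype(np.float32) / 255.0
    return (np.power(norm, gamma) * 255.0).astype(np.uint8)

def detect_face_and_eyes(gray: np.ndarray, gamma: float):
    """Run Viola-Jones face and eye detection on a gamma-adjusted image;
    return (face_rect, eye_rects) or None if no face is found."""
    adj = gamma_adjust(gray, gamma)
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")
    faces = face_cascade.detectMultiScale(adj, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    eyes = eye_cascade.detectMultiScale(adj[y:y + h, x:x + w], 1.1, 5)
    return (x, y, w, h), eyes

# e.g. gamma compression (~0.5) for the darker light-off image and
# gamma expansion (~1.5) for the brighter light-on image
```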
Step 22, obtaining the difference image of the regions to be detected in the first image and the second image.
This comprises: normalizing the images within the regions to be detected of the first and second images, and obtaining the difference image of the regions to be detected from the normalized images. For face pictures acquired in different batches and at different distances, the sizes of the regions to be detected determined in the previous step may differ, so the regions to be detected of the acquired images are normalized to the same preset size before the difference image of the two regions is computed.
Abstracting the experimental environment with a Lambert illumination reflection model yields the following illumination assumptions:
a) there is one main external light source near the face, typically a fluorescent lamp, whose emitted light consists mainly of natural-spectrum light; it is denoted I_1;
b) an active infrared light source sits directly in front of the face; its power is small relative to I_1 and its illumination decays faster with distance; it is denoted I_2;
c) remote light sources, including display screens, sunlight and others, are collectively denoted as the ambient light I_a.
From the reflection model, the facial reflection light received by the camera divides mainly into diffuse and specular components; that is, for each pixel x in an image collected by the camera, the pixel value I(x) of the face image I can be expressed as:
I(x) = I_a + Σ_i f(d_i) · (I_{i,d}(x) + I_{i,s}(x));   (formula 1)
where I_{i,d} is the diffuse reflection component caused by the i-th light source, I_{i,s} is the specular reflection component caused by the i-th light source, and f(d_i) is an attenuation function whose argument is the distance d_i between the i-th light source and the face.
According to the foregoing illumination assumptions, the illumination from source I_2 decays quickly, while that from I_1 decays slowly.
In this embodiment, the first image is acquired with the active light source on, and the image of its region to be detected is denoted I^(L); the second image is acquired with the active light source off, and the image of its region to be detected is denoted I. According to the Lambert illumination reflection model, both I^(L) and I contain the reflection components of I_a and of the light source I_1 (I_{1,d} and I_{1,s}), while I^(L) additionally contains the components I_{2,d} and I_{2,s} of the active light source, namely:
I = I_a + f(d_1) · (I_{1,d} + I_{1,s});
I^(L) = I_a + f(d_1) · (I_{1,d} + I_{1,s}) + f(d_2) · (I_{2,d} + I_{2,s}).
Since the time interval between acquiring the first and second images lies within a preset duration, such as 100 ms, the change in head pose between the two images is approximately negligible. Because the head pose barely changes, the reflection component caused by the fluorescent lamp I_1 also remains essentially unchanged; the ambient light I_a is independent of the active infrared source I_2 and can be regarded as constant. Subtracting the region to be detected I of the second image from the region to be detected I^(L) of the first image therefore yields the difference image I_D of the region to be detected:
I_D = I^(L) - I = f(d_2) · (I_{2,d} + I_{2,s}).
The active infrared light source reduces the influence of external illumination factors such as ambient light changes and side lighting on pattern recognition problems such as face recognition; meanwhile, under the near-infrared spectrum the image quality is better, e.g. image pixels are not saturated or overexposed. Therefore, using an active infrared light source for face liveness detection yields a more accurate detection result.
In specific implementation, the active infrared light source I_2 may also be replaced by an active visible light source; the difference image is computed in the same way, which is not repeated here.
Step 23, extracting face context features from the difference image.
Every face attack method requires an attack medium, such as a face photo. Therefore, in the non-face region around the face in an image acquired by the electronic device, some "background pixels" do not belong to the real background but to the attack medium, whereas in a real face image most pixels in the non-face region are genuine background. Background pixels in a real face image are much farther from the camera than the face, so the value of the illumination attenuation function f(d) is comparatively much smaller for them; that is, the pixel values in the non-face region of a real face image change relatively little before and after the active infrared light source is turned on. In a photo used for a face attack, by contrast, the face region and the non-face region are at the same distance from the camera, and their reflection characteristics under illumination are similar. Therefore, liveness can be judged by analyzing the pixel distributions of the face region and the non-face region in the difference image.
In this embodiment the features to be recognized include face context features, and extracting the features to be recognized from the obtained difference image comprises extracting face context features from it. A face context feature may consist of information entropies extracted from the face region and the non-face region of the difference image. Extracting the face context features from the difference image comprises: determining the face region and the non-face region in the difference image; obtaining the face-region histogram and the non-face-region histogram; generating a difference histogram from the two; and extracting an information entropy from each of the face-region histogram, the non-face-region histogram and the difference histogram.
In specific implementation, the face region (401 in fig. 4) and the non-face region (402 in fig. 4) of the difference image are first determined with an ellipse model.
Then the pixel histogram H'_face of face region 401 and the pixel histogram H'_nonface of non-face region 402 are computed. Since the numbers of pixels in the face region and the non-face region may differ, the two histograms H'_face and H'_nonface must be normalized, e.g. by L1 normalization (L1-normalization), i.e., making the components of each statistical histogram sum to 1. This eliminates histogram differences caused by different total pixel counts and yields the normalized pixel histogram H_face of face region 401 and the normalized pixel histogram H_nonface of non-face region 402.
A difference histogram H_diff is generated from the face-region histogram H_face and the non-face-region histogram H_nonface. Because a face is not a standard ellipse, the elliptical face region set by the model covers some pixels that belong to the non-face region. To avoid detection errors caused by background pixels inside the face region, the difference histogram H_diff is constructed from H_face and H_nonface by the formula:
H_diff(i) = max(H_face(i) - H_nonface(i), 0);   (formula 2)
where i is a gray value ranging from 0 to 255; H_diff(i) is the value of the difference histogram at gray value i; H_face(i) is the normalized face-region histogram value at gray value i; and H_nonface(i) is the normalized non-face-region histogram value at gray value i.
Finally, information entropies are extracted from the face-region histogram H_face, the non-face-region histogram H_nonface and the difference histogram H_diff as the features to be recognized. Computing the information entropy of the three histograms yields a three-dimensional face context feature vector, (entropy(H_face), entropy(H_nonface), entropy(H_diff)), each dimension being the entropy of one histogram. In specific implementation, the information entropy of a histogram is computed as:
entropy(H) = -Σ_i (H(i) × log(H(i)));   (formula 3)
where H(i) is the value of histogram H at gray value i.
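A sketch of formulas 2 and 3 over an elliptical face mask, assuming 256-bin gray histograms and zero-probability bins skipped in the entropy sum; the ellipse parameters are left to the caller and the helper names are illustrative:

```python
import numpy as np

def l1_histogram(pixels: np.ndarray) -> np.ndarray:
    """256-bin gray histogram, L1-normalized so its components sum to 1."""
    h = np.bincount(pixels.ravel(), minlength=256).astype(np.float64)
    return h / max(h.sum(), 1.0)

def entropy(h: np.ndarray) -> float:
    """entropy(H) = -sum(H(i) * log(H(i))), skipping empty bins (formula 3)."""
    nz = h[h > 0]
    return float(-np.sum(nz * np.log(nz)))

def context_features(diff: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """3-D face context feature from a difference image and a boolean
    elliptical face mask: entropies of H_face, H_nonface and H_diff."""
    h_face = l1_histogram(diff[mask])
    h_non = l1_histogram(diff[~mask])
    h_diff = np.maximum(h_face - h_non, 0.0)   # formula 2
    return np.array([entropy(h_face), entropy(h_non), entropy(h_diff)])

def ellipse_mask(shape, center, axes) -> np.ndarray:
    """Boolean mask of an axis-aligned ellipse approximating the face."""
    yy, xx = np.ogrid[:shape[0], :shape[1]]
    cy, cx = center
    ay, ax = axes
    return ((yy - cy) / ay) ** 2 + ((xx - cx) / ax) ** 2 <= 1.0
```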
Because a coat collar, hair, a hat or other ornaments lie close to the face and resemble attack media in their reflection characteristics, such background in the acquired image affects the non-face-region histogram and easily causes misjudgment. Therefore, in specific implementation, additional face context features are further extracted through a block model. Extracting the features to be recognized from the obtained difference image then further comprises: dividing the difference image into several adjacent image blocks; determining the face region and the non-face region within each block; obtaining the face-region histogram and the non-face-region histogram of each block; generating, for each block, a difference histogram from its face-region and non-face-region histograms; and extracting, for each block, information entropies from its face-region histogram, non-face-region histogram and difference histogram.
In specific implementation, the difference image may be divided equally along the horizontal and vertical directions, so that the whole image is split into four adjacent image blocks P1, P2, P3 and P4. The face region and the non-face region within each block are then determined, and the face-region and non-face-region histograms of each block are obtained: H'_P1_face and H'_P1_nonface for block P1, H'_P2_face and H'_P2_nonface for block P2, H'_P3_face and H'_P3_nonface for block P3, and H'_P4_face and H'_P4_nonface for block P4. Normalizing the histograms of each block yields H_P1_face, H_P1_nonface, H_P2_face, H_P2_nonface, H_P3_face, H_P3_nonface, H_P4_face and H_P4_nonface. For each block, a difference histogram is generated from its face-region and non-face-region histograms by the method of formula 2, giving four difference histograms H_P1_diff, H_P2_diff, H_P3_diff and H_P4_diff. For each block, information entropies are then extracted from its three histograms by formula 3, giving 12 entropies: entropy(H_P1_face), entropy(H_P1_nonface), entropy(H_P1_diff) for block P1, entropy(H_P2_face), entropy(H_P2_nonface), entropy(H_P2_diff) for block P2, entropy(H_P3_face), entropy(H_P3_nonface), entropy(H_P3_diff) for block P3, and entropy(H_P4_face), entropy(H_P4_nonface), entropy(H_P4_diff) for block P4. Finally, the information entropies of the whole difference image and of its image blocks are combined according to a preset rule into a multi-dimensional face context feature vector as the feature to be recognized.
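Reusing the helpers from the previous sketch, the block model might look as follows, producing a 15-dimensional vector (3 entropies for the whole difference image plus 3 for each of the four equal blocks):

```python
import numpy as np

def block_context_features(diff: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Concatenate the whole-image context feature with per-block
    context features for the four equal quadrants P1..P4."""
    feats = [context_features(diff, mask)]   # whole difference image
    h2, w2 = diff.shape[0] // 2, diff.shape[1] // 2
    for ys in (slice(0, h2), slice(h2, None)):
        for xs in (slice(0, w2), slice(w2, None)):
            feats.append(context_features(diff[ys, xs], mask[ys, xs]))
    return np.concatenate(feats)             # 3 + 4*3 = 15 dimensions
```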
In this scheme, the information entropy (formula 3) serves as the feature to be recognized for analyzing the pixel distributions of the face region and the non-face region: if the pixel values of the face region and the non-face region are highly consistent, the image is likely an attack face; conversely, if the consistency is low, no attack medium is assumed around the face and the image is likely a real face.
In another specific embodiment, extracting the features to be recognized from the difference image may instead comprise extracting texture features from it. The texture features may be any of LBP (Local Binary Pattern) features, DCT (Discrete Cosine Transform) features, Gabor features and the like. Specific texture feature extraction methods follow the prior art and are not repeated here.
Step 24, performing face liveness detection according to the face context features.
Before face liveness detection is performed, a classifier is first trained.
In specific implementation, a sample set is first formed from collected positive samples (i.e., live-face images) and negative samples (i.e., images of face photos, videos, mask models and the like), and positive and negative sample labels are set; then the face context features of the samples in the set are extracted; finally, a classifier, such as a Support Vector Machine (SVM) classifier, is trained on the extracted features.
During face liveness detection, the face context features extracted from the difference image in step 23 are input into the pre-trained classifier, which evaluates them and outputs whether the image contains a live face or a non-live face.
In specific implementation, face liveness detection according to the extracted face context features can also be performed in other ways, for example with a pre-trained recognition model, which is not repeated here.
The embodiment of the invention thus discloses a face liveness detection method that acquires a first image and a second image that both contain the face to be detected and whose acquisition times differ by less than a preset duration; determines the regions to be detected in the two images; obtains the difference image of the regions to be detected and extracts face context features from it; and finally inputs the face context features into a pre-trained classifier for face liveness detection. This solves the low recognition accuracy of prior-art methods in cases where the attack medium is large or sits so close to the camera that no border of the medium appears in the picture. Because liveness detection uses face context features extracted from a region that contains both the face and the surrounding background in the difference image of the two images, the accuracy of face liveness detection is effectively improved.
When the information entropy of the image serves as the face context feature, the difference image is divided into blocks and the entropies of the blocks are combined with the entropy of the whole difference image into the face context feature, which effectively avoids the influence of collars, hair, ornaments and the like on detection accuracy and further improves the accuracy of face liveness detection.
Embodiment three:
Referring to fig. 5, another embodiment of the face liveness detection method of the present invention includes steps 50 to 55.
Step 50, acquiring a first image and a second image that both contain the face to be detected and whose acquisition times differ by less than a preset duration.
For the specific implementation of acquiring the first image and the second image containing the face to be detected through the camera within the preset duration, see embodiment two; it is not repeated here.
Step 51, determining the regions to be detected in the first image and the second image respectively.
For the specific implementation of determining the regions to be detected in the first image and the second image, see embodiment two; it is not repeated here.
Step 52, obtaining the difference image of the regions to be detected in the first image and the second image.
For the specific implementation of obtaining the difference image of the regions to be detected in the first image and the second image, see embodiment two; it is not repeated here.
Step 53, extracting face context features from the difference image.
For the specific implementation of extracting face context features from the difference image, see embodiment two; it is not repeated here.
In this step, the face context feature extracted from the difference image may be the information entropies extracted from the whole difference image, or those entropies combined with the information entropies extracted from the image blocks into which the difference image is divided.
To further improve the accuracy of face liveness detection, extracting the features to be recognized from the obtained difference image may further include extracting illumination features from the face region of the difference image, and combining the face context features with the illumination features into the features to be recognized for joint face liveness detection.
Step 54, extracting illumination features from the face region of the difference image.
The context consistency features proposed above remain valid in scenes where the attack medium is close to the camera and its border cannot be detected; however, the method suffers if the attack medium is deliberately cropped so that only the face region remains, or if a real face is densely surrounded by ornaments on all sides. Therefore, the reflection characteristics of different attack media are analyzed for several attack modes, and illumination features are introduced to assist liveness detection.
Since the distance from the face to the camera is much greater than the depth of the face itself, the illumination attenuation function can be regarded as essentially constant over the face region, or as varying only smoothly. This embodiment continues the analysis on the infrared difference image. Based on the Lambert reflection model, the infrared difference image can be expressed simply as:
I_D(x) = k_d · ω_d(x) · E + k_s · ω_s(x) · E;
where E is the illumination intensity, constant across the pixels of the face region; ω_d(x) and ω_s(x) are the geometric factors of the face surface related to diffuse and specular reflection, respectively; and k_d and k_s are the weighting coefficients of diffuse and specular reflection, which subsume the attenuation coefficient, material reflection parameters and so on. The material reflection parameters depend on the material, the pixel position, the incident angle, etc., but for a scene where light is incident on an opaque material from the front, these parameters can also be regarded as constant.
Based on the difference image, the main reflection characteristics of a real face and of three attack faces are briefly analyzed.
a) Real face: specular highlights are concentrated at specific locations such as the nose tip, cheeks, forehead and glasses. The geometry of the face surface is complex, diffuse reflection differs from area to area, and the difference image shows clearly visible shadow regions.
b) Photo printed on plain A4 paper: specular reflection is very weak, with few highlight regions. Even if the A4 paper is folded, the face surface as a whole remains smooth, so the diffuse reflection tends to vary gradually.
c) Photo printed on resin material: the surface of a resin photo is smooth and glossy, so a large specular component may occur. In addition, the glossy ink layer on its surface scatters strongly under the near-infrared spectrum, which can strengthen the diffuse reflection produced by the resin photo.
d) Picture shown on a screen: a display screen cannot be folded or bent, so the normal vectors of all pixels are essentially the same, and regular specular components often appear on it. A display screen mainly emits light by itself and is little affected by the active infrared illumination, which also affects the diffuse reflection component observed on it.
According to the above analysis, a real face and the various attack faces differ greatly in their diffuse and specular reflection components. Specular reflection greatly increases pixel gray values within a small area and increases the pixel-value variance of the whole image, while diffuse reflection slightly increases the gray values over a large area and weakens the edge information of the picture to some extent. In the difference picture these differences strongly affect the pixel gray values and the distribution of pixels over the whole picture, and any color distribution in an image can be represented by its moments. Therefore, to further improve the accuracy of liveness detection, extracting the features to be recognized from the acquired difference image may further include extracting illumination features from the face region of the difference image. Since color distribution information concentrates in the low-order moments, the first-order moment (mean), the second-order moment (standard deviation) and the third-order moment (skewness) suffice to express the color distribution of the image. After counting the values of all pixels in the face region, one or more of the mean, standard deviation and skewness of the image pixel values can therefore be extracted as the illumination features; other statistical features containing the statistical distribution information of the illumination over the face region may of course be extracted instead. The face context features are then combined with the illumination features for joint face liveness detection.
In this embodiment, in specific implementation, the pixel values in the face region of the difference image are denoted x_i, i = 1, ..., N. The mean μ is computed as:
μ = (1/N) · Σ_{i=1}^{N} x_i;
the standard deviation σ is computed as:
σ = sqrt( (1/N) · Σ_{i=1}^{N} (x_i - μ)² );
and the skewness γ is computed as:
γ = (1/N) · Σ_{i=1}^{N} ( (x_i - μ) / σ )³.
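A sketch of the three illumination features over the face region of the difference image, following the mean, standard deviation and skewness formulas above (mask is a boolean face-region mask, as in the earlier sketches):

```python
import numpy as np

def illumination_features(diff: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Mean, standard deviation and skewness of the pixel values x_i
    inside the face region of the difference image."""
    x = diff[mask].astype(np.float64)
    mu = x.mean()
    sigma = x.std()   # population standard deviation, matching the 1/N formula
    gamma = np.mean(((x - mu) / sigma) ** 3) if sigma > 0 else 0.0
    return np.array([mu, sigma, gamma])
```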
and step 55, detecting the living human face according to the feature to be identified obtained by combining the context feature of the human face and the illumination feature.
And then, combining the extracted context characteristics and illumination characteristics of the human face according to a preset mode to obtain a multi-dimensional characteristic vector, and inputting the multi-dimensional characteristic vector to a preset classifier to perform human face living body detection.
Before the face living body detection is carried out, a classifier is trained firstly. In specific implementation, firstly, a sample set is formed based on collected positive samples (namely human face living body images) and negative samples (namely human face photos, videos, mask models and other images), and positive and negative sample labels are set; then, respectively extracting the face context characteristics and the illumination characteristics of the samples in the sample set; finally, a classifier, such as a Support Vector Machine (SVM) classifier, is trained based on the extracted features.
When the human face living body detection is performed, the human face context feature extracted from the difference image in the step 53 and the illumination feature extracted from the human face region in the difference image in the step 54 are combined into a feature to be recognized according to a preset rule, for example, the human face context feature and the illumination feature are sequentially connected in series to be combined into a feature to be recognized, and the feature is input to a classifier obtained through pre-training to perform human face living body detection. The classifier identifies the input features to obtain the result of the living human face or the non-living human face in the image.
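Combining the two feature groups and querying the classifier could then be sketched as follows, reusing block_context_features and illumination_features from the earlier sketches and a classifier trained as in embodiment one (where label 1 marks live faces):

```python
import numpy as np

def is_live_face(clf, diff: np.ndarray, mask: np.ndarray) -> bool:
    """Serially concatenate face context and illumination features in a
    fixed order and let the pre-trained SVM decide live vs. non-live."""
    feat = np.concatenate([block_context_features(diff, mask),
                           illumination_features(diff, mask)])
    return bool(clf.predict(feat.reshape(1, -1))[0] == 1)
```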
The embodiment of the invention thus discloses a face liveness detection method that acquires a first image and a second image that both contain the face to be detected and whose acquisition times differ by less than a preset duration; determines the regions to be detected in the two images; obtains the difference image of the regions to be detected, extracts face context features from it and extracts illumination features from its face region; and finally inputs the features to be recognized, obtained by combining the face context features and the illumination features, into a pre-trained classifier for face liveness detection, thereby solving the low recognition accuracy of prior-art face liveness detection methods.
Because liveness detection uses face context features extracted from a region that contains both the face and the surrounding background in the difference image of the two images, the accuracy of face liveness detection is effectively improved. When the information entropy of the image serves as the face context feature, the difference image is divided into blocks and the entropies of the blocks are combined with the entropy of the whole difference image into the face context feature, which effectively avoids the influence of collars, hair, ornaments and the like on detection accuracy and further improves the accuracy of face liveness detection.
Moreover, performing liveness detection with the face context features combined with the illumination features of the face region guards against the cases that would otherwise degrade detection, namely an attack medium deliberately cropped to leave only the face region, or a real face densely surrounded by ornaments, and thus effectively ensures the accuracy of face liveness detection.
Embodiment four:
in another embodiment of the living human face detection method of the present invention, the method includes steps 60 to 64.
And step 60, acquiring a first image and a second image which have acquisition time intervals smaller than a preset time length and contain the face to be detected.
For the specific implementation of acquiring, through the camera, the first image and the second image containing the face to be detected within the preset duration, refer to Embodiment Two; details are not repeated here.
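As a hypothetical illustration of this acquisition step, the sketch below grabs one frame with the active light source on and one with it off and checks the preset interval; the set_active_light callback stands in for whatever hardware control the device actually exposes.

```python
# Hypothetical capture sketch; hardware light control is device-specific.
import time
import cv2

def capture_pair(cap, set_active_light, max_interval_s=0.1):
    set_active_light(True)            # active light source on
    t0 = time.monotonic()
    ok1, first = cap.read()           # first image (light on)
    set_active_light(False)           # active light source off
    ok2, second = cap.read()          # second image (light off)
    if not (ok1 and ok2):
        raise RuntimeError("camera read failed")
    if time.monotonic() - t0 > max_interval_s:
        raise RuntimeError("exceeded preset acquisition interval")
    return first, second

# usage (assumed): cap = cv2.VideoCapture(0), and set_active_light is
# a function supplied by the device's LED driver.
```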
Step 61: determine the regions to be detected in the first image and the second image respectively.
For the specific implementation of determining the regions to be detected in the first image and the second image, refer to Embodiment Two; details are not repeated here.
Step 62: acquire a difference image of the regions to be detected in the first image and the second image.
For the specific implementation of acquiring the difference image between the region to be detected in the first image and that in the second image, refer to Embodiment Two; details are not repeated here.
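As one possible reading of the differencing step, the sketch below crops the same region of interest from grayscale versions of both frames and takes a clipped signed difference; whether the difference is signed or absolute is an implementation choice not fixed by this description.

```python
# Sketch: difference image of the region to be detected in two frames.
import cv2
import numpy as np

def difference_image(first, second, roi):
    x, y, w, h = roi  # region to be detected, assumed to be the same
                      # rectangle in both frames
    g1 = cv2.cvtColor(first, cv2.COLOR_BGR2GRAY)[y:y + h, x:x + w]
    g2 = cv2.cvtColor(second, cv2.COLOR_BGR2GRAY)[y:y + h, x:x + w]
    # Signed difference in int16: the lit frame should be brighter, so
    # negative values are treated as noise and clipped away.
    diff = g1.astype(np.int16) - g2.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)
```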
Step 63: extract an illumination feature from the face region in the difference image.
For the specific implementation of extracting the illumination feature from the face region in the difference image, refer to Embodiment Three; details are not repeated here.
After the values of all pixels in the face region have been collected, one or more of the average value, the standard deviation, and the slope (skewness) of the image pixel values can be extracted as the illumination feature. Other statistics that capture the distribution of illumination over the face region may of course be used instead. The extracted illumination feature is then used for face living body detection.
In this embodiment, in a practical implementation, the pixel values in the face region of the difference image are recorded as $x_i$, $i = 1, \ldots, N$, where $N$ is the number of pixels in the face region. The average value $\mu$ is calculated as:

$$\mu = \frac{1}{N}\sum_{i=1}^{N} x_i$$

The standard deviation $\sigma$ is calculated as:

$$\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(x_i - \mu\right)^2}$$

The slope (skewness) $\gamma$ is calculated as:

$$\gamma = \frac{1}{N}\sum_{i=1}^{N}\left(\frac{x_i - \mu}{\sigma}\right)^3$$
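A small numerical sketch of these three statistics, computed directly from the face-region pixels of the difference image (the epsilon guard for a flat region is an implementation detail added here):

```python
# Compute mu, sigma, gamma per the formulas above.
import numpy as np

def illumination_feature(face_pixels):
    x = face_pixels.astype(np.float64).ravel()
    n = x.size
    mu = x.sum() / n                               # average value
    sigma = np.sqrt(((x - mu) ** 2).sum() / n)     # standard deviation
    sigma = max(sigma, 1e-12)                      # guard: flat region
    gamma = (((x - mu) / sigma) ** 3).sum() / n    # slope (skewness)
    return np.array([mu, sigma, gamma])
```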
and step 64, detecting the living human face according to the illumination characteristics.
And then, inputting the extracted illumination characteristics into a preset classifier to perform human face living body detection.
Before face living body detection is carried out, a classifier is trained. In a specific implementation, a sample set is first formed from collected positive samples (i.e., images of live human faces) and negative samples (i.e., images of face photos, videos, mask models, and the like), and positive and negative sample labels are assigned; the illumination feature is then extracted for each sample in the sample set; finally, a classifier, such as a Support Vector Machine (SVM) classifier, is trained on the extracted features.
When face living body detection is performed, the illumination feature extracted from the difference image in step 63 is input to the pre-trained classifier, which classifies the input feature and outputs whether the image contains a live human face or a non-live one.
In a specific implementation, other manners may also be adopted for face living body detection based on the extracted illumination feature, for example, using a pre-trained recognition model; details are not repeated here.
The embodiment of the invention discloses a face living body detection method: a first image and a second image that contain the face to be detected and are acquired at an interval shorter than a preset duration are obtained; regions to be detected are determined in the first image and the second image respectively; a difference image of the regions to be detected is acquired, and an illumination feature is extracted from the face region of the difference image; finally, the illumination feature is input to a preset classifier for face living body detection, which addresses the low recognition accuracy of prior-art face living body detection methods. Because living body detection is carried out on the illumination feature extracted from the difference image of the two images, the accuracy of face living body detection is effectively improved even when the attack medium is deliberately cropped so that only the face region remains, or when the attack medium is held close to the camera.
Embodiment Five:
Correspondingly, as shown in Fig. 7, the present invention also discloses a face living body detection device, applied to an electronic device with an active light source, the device comprising:
the image acquisition module 70 is configured to acquire a first image and a second image, which have an acquisition time interval smaller than a preset duration and contain a face to be detected;
a difference image acquisition module 71, configured to acquire a difference image between the respective regions to be detected of the first image and the second image;
a face living body detection module 72, configured to extract features to be identified from the difference image acquired by the difference image acquisition module 71, and perform face living body detection;
the first image is an image acquired in an environment where the active light source is started, and the second image is an image acquired in an environment where the active light source is closed.
The embodiment of the invention discloses a face living body detection device. A first image and a second image that contain the face to be detected and are acquired at an interval shorter than a preset duration are obtained, the first image being acquired with the active light source on and the second with it off; a difference image between the respective regions to be detected of the two images is then obtained; features to be recognized are extracted from the difference image; finally, the features are input to a preset classifier for face living body detection, which addresses the low recognition accuracy of prior-art face living body detection methods. Because living body detection is carried out on features extracted from a region to be detected that contains both the face and the background around it in the difference image of the two images, the accuracy of face living body detection is effectively improved.
Embodiment Six:
As shown in Fig. 8, based on Embodiment Five, in a specific embodiment of the present invention the region to be detected optionally takes the face to be detected as its center and includes the face and part of the surrounding background. The face living body detection module 72 includes: a first feature extraction unit 721, configured to extract a face context feature from the difference image, where the face context feature is composed of information entropies extracted from the face region and the non-face region of the difference image.
The region to be detected is a rectangular region centered on the face region that includes part of the background around the face. In a specific implementation, the face regions in the acquired first and second images may be determined by a prior-art face location method; the face region located by such a method is a minimal rectangular region containing the human eyes.
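A hypothetical sketch of forming such a region: grow the located face rectangle by a margin so the face stays centered with some surrounding background included; the 50% margin is an assumption, not a value from this disclosure.

```python
# Expand the located face rectangle into the region to be detected.
def region_to_detect(face_rect, img_w, img_h, margin=0.5):
    x, y, w, h = face_rect
    dx, dy = int(w * margin / 2), int(h * margin / 2)
    x0, y0 = max(0, x - dx), max(0, y - dy)
    x1, y1 = min(img_w, x + w + dx), min(img_h, y + h + dy)
    return x0, y0, x1 - x0, y1 - y0   # clamped to the image bounds
```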
Optionally, as shown in Fig. 9, the first feature extraction unit 721 includes:
a first human face region determining subunit 7210 configured to determine a human face region and a non-human face region in the difference image;
a first histogram obtaining subunit 7211 configured to obtain a face region histogram and a non-face region histogram;
a second histogram obtaining subunit 7212 configured to generate a difference histogram from the face region histogram and the non-face region histogram;
a first feature extraction subunit 7213, configured to extract information entropies from the face region histogram, the non-face region histogram, and the difference histogram respectively, as sketched below.
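A sketch of the pipeline that subunits 7210 to 7213 describe: histograms of the face and non-face pixels of the difference image, their difference histogram, and the Shannon entropy of each. Whether the difference histogram is signed or absolute is not specified here; the absolute value is an assumption.

```python
# Histograms and Shannon entropies for the face context feature.
import numpy as np

def shannon_entropy(hist):
    total = hist.sum()
    if total == 0:
        return 0.0
    p = hist / total
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def context_entropies(diff_img, face_mask):
    # face_mask: boolean array, True where the pixel belongs to the face
    h_face, _ = np.histogram(diff_img[face_mask], bins=256, range=(0, 256))
    h_bg, _ = np.histogram(diff_img[~face_mask], bins=256, range=(0, 256))
    h_diff = np.abs(h_face - h_bg)    # difference histogram (assumed abs)
    return [shannon_entropy(h) for h in (h_face, h_bg, h_diff)]
```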
In another preferred embodiment of the present invention, optionally, the first feature extraction unit 721 further includes:
an image block dividing subunit 7214 configured to divide the difference image into a plurality of adjacent image blocks;
a second face region determining subunit 7215, configured to determine a face region and a non-face region in each of the image blocks respectively;
a third histogram obtaining subunit 7216, configured to separately obtain, for each image block, a histogram of a human face region and a histogram of a non-human face region in the image block;
a fourth histogram obtaining subunit 7217, configured to generate, for each image block, a difference histogram according to the face region histogram and the non-face region histogram of the image block respectively;
a second feature extraction subunit 7218, configured to, for each image block, extract information entropies from the face region histogram, the non-face region histogram, and the difference histogram of that block, as in the block-wise sketch below.
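A block-wise sketch reusing context_entropies from the sketch above: tile the difference image into a grid, collect the three entropies for every tile, then append the whole-image entropies; the 3x3 grid is an assumption.

```python
# Block-wise face context feature over a grid of image blocks.
import numpy as np

def blockwise_context_feature(diff_img, face_mask, grid=(3, 3)):
    rows, cols = grid
    h, w = diff_img.shape
    feats = []
    for r in range(rows):
        for c in range(cols):
            ys = slice(r * h // rows, (r + 1) * h // rows)
            xs = slice(c * w // cols, (c + 1) * w // cols)
            feats.extend(context_entropies(diff_img[ys, xs],
                                           face_mask[ys, xs]))
    feats.extend(context_entropies(diff_img, face_mask))  # whole image
    return np.asarray(feats)
```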
The embodiment of the invention discloses a face living body detection device: a first image and a second image that contain the face to be detected and are acquired at an interval shorter than a preset duration are obtained; a difference image between the respective regions to be detected of the two images is acquired, the face context feature is extracted from the difference image, and face living body detection is carried out. This addresses the poor recognition accuracy of prior-art face living body detection methods when the attack medium has a large area, or when it is held so close to the camera that no frame information of the medium appears in the captured image. Because living body detection is carried out on face context features extracted from a region to be detected that contains both the face and the background around it in the difference image of the two images, the accuracy of face living body detection is effectively improved.
When image information entropy is used as the face context feature, the difference image is partitioned into blocks, the information entropy of each block is extracted, and these block entropies are combined with the information entropy of the whole difference image to form the face context feature. This effectively suppresses the influence of collars, hair, ornaments, and the like on detection accuracy, further improving the accuracy of face living body detection.
Embodiment Seven:
Based on Embodiment Six, in another specific embodiment of the present invention, as shown in Fig. 10, the face living body detection module 72 further includes:
a second feature extraction unit 722, configured to extract an illumination feature from the human face region in the difference image. Optionally, the illumination characteristics include at least an average, a standard deviation, and a slope of image pixel values.
Performing face living body detection with the face context feature combined with the illumination feature of the face region guards against the cases that would otherwise interfere with detection, namely an attack medium deliberately cropped so that only the face region remains, or a real face densely surrounded by ornaments on all sides, thereby effectively ensuring the accuracy of face living body detection.
In a specific implementation, the face living body detection module 72 may alternatively include only the second feature extraction unit 722; even then, the device can effectively ensure the accuracy of face living body detection when the attack medium is deliberately cropped so that only the face region remains.
Correspondingly, the embodiment of the invention also discloses an electronic device that comprises an active light source and further comprises the face living body detection device of any of the foregoing device embodiments. The electronic device may be a mobile phone, a PAD, a tablet computer, a face recognition terminal, or the like.
The device embodiments correspond to the method embodiments; for the specific implementation of each module and unit in the device embodiments, refer to the corresponding method embodiments, which are not repeated here.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It will be appreciated by those of ordinary skill in the art that in the embodiments provided herein, the units described as separate components may or may not be physically separate, may be located in one place, or may be distributed across multiple network elements. In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention can be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description covers only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto; any change or substitution that a person skilled in the art can readily conceive within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (14)

1. A human face living body detection method is applied to an electronic device with an active light source, and is characterized by comprising the following steps:
acquiring a first image and a second image which have acquisition time intervals smaller than a preset time length and contain a face to be detected;
acquiring a differential image between a region to be detected in the first image and a region to be detected in the second image, wherein the differential image is a difference value of light source reflection components under different illumination conditions; the region to be detected comprises a face region and a non-face region;
extracting features to be identified from the differential image, analyzing the pixel distribution difference of a face region and a non-face region in the differential image, and performing living body detection;
the first image is an image acquired in an environment where the active light source is started, and the second image is an image acquired in an environment where the active light source is closed.
2. The method according to claim 1, wherein the region to be detected is centered on the face to be detected and includes the face to be detected and a part of a background around the face to be detected;
the extracting the feature to be identified from the difference image, analyzing the pixel distribution difference of the human face area and the non-human face area in the difference image, and the performing the living body detection comprises the following steps: extracting face context characteristics from the difference image, wherein the face context characteristics consist of information entropy extracted based on a face region in the difference image and a non-face region in the difference image; and performing living body detection according to the feature vector formed by the information entropy.
3. The method of claim 2, wherein the step of extracting the context feature of the human face from the difference image comprises:
determining a human face region and a non-human face region in the differential image;
acquiring a human face region histogram and a non-human face region histogram;
generating a difference histogram according to the human face region histogram and the non-human face region histogram;
and respectively extracting information entropies from the human face region histogram, the non-human face region histogram and the difference histogram.
4. The method of claim 3, further comprising:
dividing the difference image into a plurality of adjacent image blocks;
respectively determining a human face area and a non-human face area in each image block;
respectively acquiring a human face region histogram and a non-human face region histogram in each image block;
aiming at each image block, generating a difference histogram according to a face region histogram and a non-face region histogram of the image block respectively;
and for each image block, extracting information entropy from a human face region histogram, a non-human face region histogram and a difference histogram of the image block respectively.
5. The method according to any one of claims 1 to 4, wherein the step of extracting features to be identified from the difference image for living body detection comprises:
and extracting illumination characteristics from the human face region in the differential image, and performing living body detection according to the illumination characteristics.
6. The method of claim 5, wherein the illumination characteristics comprise at least the mean, standard deviation, and slope of image pixel values.
7. The method of claim 5, wherein the extracting features to be identified from the difference image for living body detection further comprises: after the face context feature is obtained, performing face living body detection according to the feature to be identified obtained by combining the face context feature and the illumination feature.
8. A human face living body detection device is applied to an electronic device with an active light source, and is characterized by comprising:
the image acquisition module is used for acquiring a first image and a second image which have acquisition time intervals smaller than a preset time length and contain a face to be detected;
the differential image acquisition module is used for acquiring a differential image between a region to be detected in the first image and a region to be detected in the second image, wherein the differential image is a difference value of light source reflection components under different illumination conditions; the region to be detected comprises a face region and a non-face region;
the human face living body detection module is used for extracting features to be identified from the difference image acquired by the difference image acquisition module, analyzing the pixel distribution difference of a human face area and a non-human face area in the difference image and carrying out human face living body detection;
the first image is an image acquired in an environment where the active light source is started, and the second image is an image acquired in an environment where the active light source is closed.
9. The apparatus according to claim 8, wherein the region to be detected is centered on the face to be detected and includes the face to be detected and a part of the background around the face to be detected; the face in-vivo detection module comprises:
and the first feature extraction unit is used for extracting the face context feature from the difference image, wherein the face context feature is composed of information entropy extracted based on the face region in the difference image and the non-face region in the difference image.
10. The apparatus according to claim 9, wherein the first feature extraction unit includes:
a first human face region determining subunit, configured to determine a human face region and a non-human face region in the difference image;
the first histogram acquisition subunit is used for acquiring a face region histogram and a non-face region histogram;
a second histogram obtaining subunit, configured to generate a difference histogram according to the face region histogram and the non-face region histogram;
and the first feature extraction subunit is used for respectively extracting information entropies from the human face region histogram, the non-human face region histogram and the difference histogram.
11. The apparatus of claim 10, wherein the first feature extraction unit further comprises:
an image block dividing subunit, configured to divide the difference image into a plurality of adjacent image blocks;
the second face area determining subunit is used for respectively determining a face area and a non-face area in each image block;
the third histogram acquisition subunit is used for respectively acquiring a human face area histogram and a non-human face area histogram in each image block;
a fourth histogram obtaining subunit, configured to generate, for each image block, a difference histogram according to the face region histogram and the non-face region histogram of the image block;
and the second feature extraction subunit is used for extracting information entropy from the face region histogram, the non-face region histogram and the difference histogram of each image block respectively.
12. The apparatus of any one of claims 8 to 11, wherein the face liveness detection module further comprises:
and the second feature extraction unit is used for extracting illumination features from the human face region in the differential image.
13. The apparatus of claim 12, wherein the illumination characteristics comprise at least a mean, a standard deviation, and a slope of image pixel values.
14. An electronic device comprising an active light source, the electronic device further comprising the living human face detection apparatus of any one of claims 8 to 13.
CN201611053558.1A 2016-11-24 2016-11-24 Face living body detection method and device Active CN106778518B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611053558.1A CN106778518B (en) 2016-11-24 2016-11-24 Face living body detection method and device

Publications (2)

Publication Number Publication Date
CN106778518A CN106778518A (en) 2017-05-31
CN106778518B true CN106778518B (en) 2021-01-08

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information (inventors after change: Liu Changping, Sun Xudong, Huang Lei; inventors before change: Liu Changping, Huang Lei, Sun Xudong)
GR01 Patent grant
GR01 Patent grant