CN111814682A - Face living body detection method and device - Google Patents
- Publication number
- CN111814682A (application number CN202010655086.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- detected
- face
- face image
- generate
- Legal status (assumption, not a legal conclusion): Pending
Classifications
- G06V40/168 (G06V40/16 Human faces): Feature extraction; Face representation
- G06N3/045 (G06N3/04 Neural network architecture): Combinations of networks
- G06V40/172 (G06V40/16 Human faces): Classification, e.g. identification
- G06V40/45 (G06V40/40 Spoof detection, e.g. liveness detection): Detection of the body part being alive
Abstract
The invention discloses a face living body detection method and device. The method comprises: collecting a face image to be detected; performing feature extraction processing on the face image to be detected to generate a plurality of feature images; and inputting the face image to be detected and the feature images into a face living body detection classification model obtained in advance through machine learning training, and outputting a face living body detection result. The feature extraction processing comprises: performing illumination normalization processing on the face image to be detected to generate an illumination normalization processed image; performing feature extraction on the face image to be detected with an LBP algorithm to generate a texture feature image; converting the face image to be detected from the RGB color space into the HSV space to generate an HSV image; and performing a DCT (discrete cosine transform) on the face image to be detected to generate a frequency spectrum image. The invention achieves short detection time and high face living body detection accuracy, and can reduce the influence of illumination on recognition.
Description
Technical Field
The invention relates to the technical field of face recognition, and in particular to a face living body detection method and a face living body detection device.
Background
With the continuous development of biometric identification technology, face recognition has been widely applied in fields such as commerce and security, owing to its intuitiveness and conformity with human thinking habits. However, face recognition systems are vulnerable to malicious attacks from illegitimate users, which poses a great threat to their security. Against such attacks, designing a face anti-spoofing system with high detection accuracy, low time consumption, strong robustness and strong generalization capability is of great importance. Anti-spoofing detection for face recognition systems is also called face living body (liveness) detection.
Living body detection techniques in the prior art fall into two categories. The first computes over multi-frame images or video sequences, using an interactive or non-interactive detection mode, and identifies the authenticity of a face by analyzing motion information across consecutive frames; however, this approach is computationally complex and time-consuming. The second is based on single-frame input and identifies a face by analyzing features such as texture, frequency spectrum and reflectance in the picture; however, this approach depends on judgment features common to a large number of forged faces, and its face living body detection accuracy is lower.
Disclosure of Invention
The embodiment of the invention provides a face living body detection method with a simple detection process, short required time and high face living body detection accuracy, comprising the following steps:
collecting a human face image to be detected;
carrying out feature extraction processing on the face image to be detected to generate a plurality of feature images;
inputting the face image to be detected and the feature image into a face living body detection classification model obtained through machine learning training in advance, and outputting a face living body detection result;
carrying out feature extraction processing on the face image to be detected to generate a plurality of feature images, including:
performing illumination normalization processing on the face image to be detected to generate an illumination normalization processing image;
carrying out feature extraction processing on the face image to be detected by adopting an LBP algorithm to generate a texture feature image;
converting the face image to be detected from an RGB color space into an HSV space to generate an HSV image;
performing a DCT (discrete cosine transform) on the face image to be detected to generate a frequency spectrum image.
Optionally, performing illumination normalization processing on the face image to be detected to generate an illumination normalization processed image, including:
carrying out gamma conversion on the human face image to be detected;
carrying out Gaussian difference filtering on the image subjected to gamma conversion;
and performing histogram equalization processing on the image subjected to the Gaussian difference filtering to generate an illumination normalization processing image.
Optionally, the method further includes:
and configuring different weight coefficients for different types of feature images when the face image to be detected and the feature images are input into a face living body detection classification model obtained through machine learning training in advance.
Optionally, before performing feature extraction processing on the face image to be detected, the method further includes:
carrying out preprocessing operation on the face image to be detected, wherein the preprocessing operation comprises the following steps: filtering, denoising and binarization operation.
Optionally, after the collecting the facial image to be detected, the method further includes:
detecting a face image to be detected by using a detector based on a Viola-Jones algorithm to obtain a face region image;
carrying out feature extraction processing on the face image to be detected to generate a plurality of feature images, including:
and performing feature extraction processing on the face region image to generate a plurality of feature images.
Optionally, before the face image to be detected and the feature image are input to a face living body detection classification model obtained through machine learning training in advance, the method further includes:
acquiring training sample data, wherein the training sample data comprises human face living body data and non-human face living body data;
and training the multilayer convolutional neural network model according to the training sample data to obtain a human face living body detection classification model.
The embodiment of the invention also provides a face living body detection device with a simple detection process, short required time and high face living body detection accuracy, comprising:
the image acquisition module is used for acquiring a human face image to be detected;
the characteristic extraction processing module is used for carrying out characteristic extraction processing on the face image to be detected to generate a plurality of characteristic images;
the human face living body detection module is used for inputting the human face image to be detected and the multiple characteristic images into a human face living body detection classification model obtained through machine learning training in advance and outputting a human face living body detection result;
the feature extraction processing module is further configured to:
performing illumination normalization processing on the face image to be detected to generate an illumination normalization processing image;
carrying out feature extraction processing on the face image to be detected by adopting an LBP algorithm to generate a texture feature image;
converting the face image to be detected from an RGB color space into an HSV space to generate an HSV image;
and performing DCT (discrete cosine transformation) on the face image to be detected to generate a frequency spectrum image.
The embodiment of the present invention further provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the method when executing the computer program.
An embodiment of the present invention further provides a computer-readable storage medium, in which a computer program for executing the above method is stored.
In the embodiment of the invention, a face image to be detected is collected, feature extraction processing is performed on it to generate feature images, the face image to be detected and the feature images are input into a face living body detection classification model obtained in advance through machine learning training, and a face living body detection result is output, thereby completing the detection of a living face. Performing illumination normalization processing on the face image to be detected to generate an illumination normalization processed image reduces, as far as possible, the influence of illumination on recognition. Performing feature extraction with an LBP algorithm to generate a texture feature image, converting the face image from the RGB color space into the HSV space to generate an HSV image, and performing a DCT (discrete cosine transform) to generate a frequency spectrum image further ensure the detection accuracy for the face image to be detected.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts. In the drawings:
FIG. 1 is a flowchart of a face liveness detection method in an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a living human face detection apparatus according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating an application example of a face in-vivo detection method according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a convolutional neural network according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a computer device in an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the embodiments of the present invention are further described in detail below with reference to the accompanying drawings. The exemplary embodiments and descriptions of the present invention are provided to explain the present invention, but not to limit the present invention.
In the description of the present specification, the terms "comprising," "including," "having," "containing," and the like are used in an open-ended fashion, i.e., to mean including, but not limited to. Reference to the description of the terms "one embodiment," "a particular embodiment," "some embodiments," "for example," etc., means that a particular feature, structure, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. The sequence of steps involved in the embodiments is for illustrative purposes to illustrate the implementation of the present application, and the sequence of steps is not limited and can be adjusted as needed.
Fig. 1 is a flowchart of a face living body detection method according to an embodiment of the present invention. As shown in fig. 1, the method includes:

Step 101, collecting a face image to be detected.

In this embodiment, after the face image to be detected is collected, in order to improve subsequent detection accuracy, the face image to be detected may be detected by a detector based on the Viola-Jones algorithm to obtain a face region image, and the face region image may be used directly in subsequent detection.

Step 102, performing feature extraction processing on the face image to be detected to generate a plurality of feature images, including:

Step 1021, performing illumination normalization processing on the face image to be detected to generate an illumination normalization processed image;

Step 1022, performing feature extraction processing on the face image to be detected with an LBP algorithm to generate a texture feature image;

Step 1023, converting the face image to be detected from the RGB color space into the HSV space to generate an HSV image;

Step 1024, performing a DCT (discrete cosine transform) on the face image to be detected to generate a frequency spectrum image.
Based on the step 101, after the face image to be detected is collected, detecting the face image to be detected by using a detector based on the Viola-Jones algorithm to obtain a face region image;
and performing feature extraction processing on the face region image to generate a feature image.
In this embodiment, a plurality of feature extraction processes may be performed on the face image to be detected, so as to generate a plurality of feature images. Specifically, for example:
as an example: the feature image comprises an illumination normalized processing image, in which case step 102 comprises:
for the face image I to be detectedfacePerforming a gamma transformation Iface-gama=Iface γThe gamma coefficient may be 0.2.
Perform difference-of-Gaussians filtering on the gamma-transformed image, I_face-dog = DoG(I_face-gamma); the high- and low-frequency Gaussian coefficients may be set to 0.5 and 2, respectively.
Perform histogram equalization on the Gaussian-difference-filtered image, I_face-li = FEQ(I_face-dog), generating the illumination normalization processed image. The histogram equalization pixel mapping uses the following formula:

s_k = ((L - 1)/n) * Σ_{j=0}^{k} n_j, k = 0, 1, ..., L - 1

where n is the total number of pixels in the image, n_k is the number of pixels at the current gray level k, and L is the total number of possible gray levels in the image.
By carrying out illumination normalization processing on the face image to be detected, the influence of illumination on identification can be reduced as much as possible.
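The three illumination normalization steps above (gamma transformation with γ = 0.2, difference-of-Gaussians filtering with σ = 0.5 and 2, and histogram equalization) can be sketched roughly in Python with NumPy. This is an illustrative sketch, not the patent's implementation: the kernel radius and the rescaling of the DoG response to integer gray levels before equalization are assumptions the patent does not specify.

```python
import numpy as np

def gaussian_kernel1d(sigma):
    radius = int(3 * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def gaussian_blur(img, sigma):
    # separable Gaussian filtering: blur rows, then columns
    k = gaussian_kernel1d(sigma)
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, blurred)

def illumination_normalize(img, gamma=0.2, sigma_low=0.5, sigma_high=2.0, levels=256):
    # 1. gamma transformation on [0, 1] intensities
    g = (img / 255.0) ** gamma
    # 2. difference-of-Gaussians filtering
    dog = gaussian_blur(g, sigma_low) - gaussian_blur(g, sigma_high)
    # rescale the DoG response to integer gray levels (assumed step)
    dog = dog - dog.min()
    dog = (dog / (dog.max() + 1e-12) * (levels - 1)).astype(np.int64)
    # 3. histogram equalization: s_k = (L-1)/n * sum_{j<=k} n_j
    hist = np.bincount(dog.ravel(), minlength=levels)
    lut = (levels - 1) * hist.cumsum() / dog.size
    return lut[dog]
```

The gamma step compresses the dynamic range, the DoG step suppresses low-frequency lighting gradients, and the equalization spreads the remaining contrast over the full gray range.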
As yet another example: the feature image comprises a texture feature image, and in this case, step 102 comprises:
and performing feature extraction processing on the face image to be detected by adopting an LBP algorithm to generate a texture feature image.
When implemented, within a 3×3 window w_{3×3}, the gray values of the 8 neighbouring pixels are compared against the window center pixel I_{x,y} as the threshold: if a surrounding pixel's value is greater than the center pixel's value, that position is marked 1, otherwise 0. Comparing the 8 points thus yields 8 binary digits B_1, B_2, ..., B_8; the 8-bit binary number B_1 B_2 ... B_8 is converted into a decimal number I'_{x,y} in the range 0 to 255, which serves as the texture feature value of the point.
The 3×3 window slides over the picture I_face-rgb, and each generated texture feature value is used as the pixel value of the texture image at that point. When the window has slid over the whole picture, the texture image I_face-lbp has been generated.
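A minimal NumPy sketch of the 3×3 LBP computation described above. The neighbour ordering (which neighbour supplies which bit of B_1...B_8) is an assumption, since the patent does not fix it:

```python
import numpy as np

def lbp_image(gray):
    """3x3 local binary pattern: each pixel becomes an 8-bit code in 0..255."""
    p = np.pad(gray, 1, mode='edge')  # pad so the window exists at the borders
    center = p[1:-1, 1:-1]
    # eight neighbours in clockwise order starting at the top-left (assumed order)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = p[1 + dy: 1 + dy + center.shape[0],
                  1 + dx: 1 + dx + center.shape[1]]
        # neighbour strictly greater than the center contributes a 1 bit
        code |= ((neigh > center).astype(np.uint8) << bit)
    return code
```

A flat image produces the all-zero code, since no neighbour exceeds the center value.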
As yet another example: the feature image includes an HSV image, and in this case, step 102 includes:
and converting the face image to be detected from the RGB color space into an HSV space to generate an HSV image.
When implemented, in the image I_face-rgb, let f(x, y, i) be a pixel point in RGB space (i being one of R, G, B). Take f_max = max(f(x, y, R), f(x, y, G), f(x, y, B)) and f_min = min(f(x, y, R), f(x, y, G), f(x, y, B)). Then in the corresponding HSV space, f'(x, y, V) = f_max and f'(x, y, S) = (f_max - f_min)/f_max, while the hue component f'(x, y, H) is obtained from the standard piecewise RGB-to-HSV hue formula.
Applying this operation to every point of the picture I_face-rgb completes the generation of the HSV-space image I_face-hsv.
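The V and S definitions above, plus the standard piecewise hue formula (assumed, since the patent's H expression is not reproduced), can be sketched per pixel in NumPy:

```python
import numpy as np

def rgb_to_hsv_image(rgb):
    """Per-pixel RGB -> HSV: V = fmax, S = (fmax - fmin)/fmax,
    H (in degrees) from the standard piecewise hue formula (assumed)."""
    rgb = rgb.astype(float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    fmax = rgb.max(axis=-1)
    fmin = rgb.min(axis=-1)
    v = fmax
    s = np.where(fmax > 0, (fmax - fmin) / np.where(fmax > 0, fmax, 1.0), 0.0)
    delta = fmax - fmin
    d = np.where(delta > 0, delta, 1.0)  # avoid division by zero
    h = np.zeros_like(fmax)
    h = np.where((delta > 0) & (fmax == r), (60.0 * (g - b) / d) % 360.0, h)
    h = np.where((delta > 0) & (fmax == g), 60.0 * (b - r) / d + 120.0, h)
    h = np.where((delta > 0) & (fmax == b), 60.0 * (r - g) / d + 240.0, h)
    return np.stack([h, s, v], axis=-1)
```

Pure red maps to H = 0, S = 1; a gray pixel has delta = 0, so H and S are defined as 0.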
As yet another example: the characteristic image includes a spectrum image, and in this case, step 102 includes:
and performing DCT (discrete cosine transformation) on the face image to be detected to generate a frequency spectrum image.
When implemented, the image I_face-rgb is divided into 8×8 small blocks;
a DCT is performed on each small block; the standard 2-D DCT transformation formula is

F(u, v) = (1/4) c(u) c(v) Σ_{x=0}^{7} Σ_{y=0}^{7} f(x, y) cos[(2x+1)uπ/16] cos[(2y+1)vπ/16]

with c(0) = 1/√2 and c(u) = 1 for u > 0.
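The blockwise 8×8 DCT can be sketched in NumPy using the orthonormal DCT matrix (equivalent to the formula above up to the usual normalization convention); discarding edge pixels that do not fill a full block is an assumption:

```python
import numpy as np

def dct2_8x8(block):
    """2-D type-II DCT of one 8x8 block via the orthonormal DCT matrix T: T @ X @ T.T."""
    n = np.arange(8)
    T = np.sqrt(2.0 / 8) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / 16.0)
    T[0, :] = np.sqrt(1.0 / 8)  # DC row
    return T @ block.astype(float) @ T.T

def spectrum_image(gray):
    """Tile the image into 8x8 blocks and replace each by its DCT coefficients."""
    h8 = gray.shape[0] - gray.shape[0] % 8
    w8 = gray.shape[1] - gray.shape[1] % 8
    out = np.zeros((h8, w8))
    for i in range(0, h8, 8):
        for j in range(0, w8, 8):
            out[i:i + 8, j:j + 8] = dct2_8x8(gray[i:i + 8, j:j + 8])
    return out
```

For a constant block only the DC coefficient is nonzero, which is a quick sanity check on the transform.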
Step 103, inputting the face image to be detected and the feature images into a face living body detection classification model obtained in advance through machine learning training, and outputting a face living body detection result. In this embodiment, the face living body detection result may be a category "1" or a category "0", where category "1" indicates that the image to be recognized is a living face image and category "0" indicates a non-living image.
As can be seen from the above, the face living body detection method of the embodiment of the invention collects a face image to be detected, performs feature extraction processing on it to generate feature images, inputs the face image to be detected and the feature images into a face living body detection classification model obtained in advance through machine learning training, and outputs a face living body detection result, thereby completing the detection of a living face. The illumination normalization processing reduces, as far as possible, the influence of illumination on recognition, while the LBP texture feature image, the HSV image and the DCT frequency spectrum image further ensure the detection accuracy for the face image to be detected.
In an embodiment of the present invention, the method further comprises:
and configuring different weight coefficients for different types of feature images when the face image to be detected and the feature images are input into a face living body detection classification model obtained through machine learning training in advance.
Different weight coefficients are configured for different types of feature images, and the feature images corresponding to the different weight coefficients can be selected according to actual operating requirements, ensuring smooth and efficient operation of face living body detection.
Specifically, when the face image to be detected and the feature image are input into a face living body detection classification model obtained through machine learning training in advance, different weight coefficients may be configured for an illumination normalization processing image, a texture feature image, an HSV image, and a spectrum image, for example, the weight coefficient of the illumination normalization processing image is configured to be 1, the weight coefficient of the texture feature image is configured to be 2, the weight coefficient of the HSV image is configured to be 3, and the weight coefficient of the spectrum image is configured to be 4, and in the specific operation, various types of feature images may be selected according to the actual operation condition.
In order to ensure that the acquired face image to be detected is clear enough and the subsequent face living body detection operation is carried out smoothly, before the feature extraction processing is carried out on the face image to be detected, the method further comprises the following steps:
carrying out preprocessing operation on the face image to be detected, wherein the preprocessing operation comprises the following steps: filtering, denoising and binarization operation.
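The patent names the preprocessing steps but not concrete operators; a plausible NumPy sketch using a 3×3 median filter for denoising and a fixed global threshold for binarization (both operator choices are assumptions, not the patent's prescription):

```python
import numpy as np

def preprocess(gray, threshold=128):
    """Denoise with a 3x3 median filter, then binarize with a fixed threshold."""
    p = np.pad(gray, 1, mode='edge')
    h, w = gray.shape
    # stack the nine shifted views of the 3x3 neighbourhood
    shifts = [p[1 + dy: 1 + dy + h, 1 + dx: 1 + dx + w]
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    denoised = np.median(np.stack(shifts), axis=0)
    # fixed-threshold binarization (Otsu's method would be a common alternative)
    binary = np.where(denoised >= threshold, 255, 0).astype(np.uint8)
    return denoised, binary
```

The median filter removes isolated salt-and-pepper noise: a single bright outlier pixel surrounded by dark neighbours is replaced by the neighbourhood median.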
In the embodiment of the present invention, before the face image to be detected and the feature image are input to a face living body detection classification model obtained through machine learning training in advance, the method further includes:
acquiring training sample data, wherein the training sample data comprises human face living body data and non-human face living body data;
and training the multilayer convolutional neural network model according to the training sample data to obtain a human face living body detection classification model.
In this embodiment, the specific convolutional neural network is as follows (see fig. 4):
an input layer: 5 three-channel images (original RGB image, illumination normalization processing image, LBP texture image, HSV image, frequency spectrum image) are input.
And constructing a network for each three-channel image, wherein the network structure is as follows:
convolution pooling stages 1 to 6: each stage has 2 convolution layers and 1 max-pooling layer; the convolution kernel size is 3 × 3 and the stride is 2;
fully connected layer 1 stage: the features of all the feature images are fully connected into a 5760-dimensional vector, with a ReLU activation function;
fully connected layer 2 stage: fully connected into a 2048-dimensional vector, with a ReLU activation function;
fully connected layer 3 stage: fully connected into a 1000-dimensional vector;
softmax layer: and outputting a classification result.
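The patent does not state the input resolution, so the 5760-dimensional concatenation cannot be verified exactly; assuming each of the six stages halves the spatial size through its stride-2 operation, the spatial bookkeeping for a hypothetical 128 × 128 input can be sketched as:

```python
def spatial_size_after_stages(size, n_stages=6):
    """Spatial side length after n downsampling stages.

    Assumption: each conv-pool stage halves the feature map once,
    as a single stride-2 operation per stage would.
    """
    for _ in range(n_stages):
        size = size // 2
    return size

# A hypothetical 128x128 face crop shrinks to 2x2 after the six stages;
# the five branch outputs are then concatenated and fully connected into
# the 5760-dimensional vector named in the fully connected layer 1 stage.
```

Usage: `spatial_size_after_stages(128)` returns 2, so each branch contributes 2 × 2 feature maps before concatenation under this assumed input size.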
The invention is described below in a specific application scenario:
face recognition performed during online business transaction requires live body detection, and whether the user actually shoots or not is judged, so that the live body detection is required before the face recognition.
The procedure for performing living body detection is as follows (see FIG. 3):
and S1, inputting a face image, carrying out face detection, and extracting a face region image.
And S2, performing illumination normalization processing on the face region image to generate an illumination normalized face image, wherein the content comprises gamma transformation, Gaussian high-pass filtering and equalization processing.
And S3, extracting texture features of the face region image according to an LBP algorithm, and generating a texture image.
And S4, converting the face region image from an RGB color space into an HSV space, and generating an HSV image.
And S5, performing DCT (discrete cosine transform) on the face region image to extract frequency domain characteristics, and generating a frequency spectrum image.
S6: make a sample data set of face living body detection images, setting an image's label to 1 if it is a living face and to 0 otherwise; generate the feature-domain images of each image according to steps S1-S5; construct a multi-layer feature-fusion hierarchical convolutional neural network; and train it on the sample data set to generate the face living body detection classification model.
S7: generate the multi-feature images of an image to be classified according to S1-S5 and input them into the convolutional neural network model trained in S6; an output classification result of 1 indicates that the input face image is a living body image, and 0 indicates that it is not a living body.
Based on the same inventive concept, the embodiment of the present invention further provides a human face living body detection apparatus, as described in the following embodiments. Because the principle of solving the problems of the face living body detection device is similar to that of the face living body detection method, the implementation of the face living body detection device can refer to the implementation of the face living body detection method, and repeated parts are not repeated. As used hereinafter, the term "unit" or "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
Fig. 2 is a schematic structural diagram of a living human face detection apparatus according to an embodiment of the present invention, and as shown in fig. 2, the apparatus includes:
the image acquisition module 201 is configured to acquire a face image to be detected.
And the feature extraction processing module 202 is configured to perform feature extraction processing on the face image to be detected to generate multiple feature images.
The face living body detection module 203 is used for inputting the face image to be detected and the multiple feature images into a face living body detection classification model obtained through machine learning training in advance and outputting a face living body detection result;
the feature extraction processing module 202 is further configured to:
performing illumination normalization processing on the face image to be detected to generate an illumination normalization processing image;
carrying out feature extraction processing on the face image to be detected by adopting an LBP algorithm to generate a texture feature image;
converting the face image to be detected from an RGB color space into an HSV space to generate an HSV image;
and performing a discrete cosine transform (DCT) on the face image to be detected to generate a frequency spectrum image.
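The texture (LBP), colour-space (HSV) and frequency (DCT) features above can be sketched with standard-library Python only. These helpers operate on small lists-of-lists and are illustrative toys under that assumption, not the patent's implementation (which would typically use an image library):

```python
import colorsys
import math

def lbp_codes(gray):
    """Basic 8-neighbour LBP for the interior pixels of a 2-D grey-level grid:
    each neighbour >= centre contributes one bit to an 8-bit code."""
    h, w = len(gray), len(gray[0])
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise from top-left
    out = [[0] * (w - 2) for _ in range(h - 2)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            c, code = gray[i][j], 0
            for bit, (di, dj) in enumerate(offs):
                if gray[i + di][j + dj] >= c:
                    code |= 1 << bit
            out[i - 1][j - 1] = code
    return out

def rgb_to_hsv_pixel(r, g, b):
    """Convert one RGB pixel (0..255 channels) to HSV via the stdlib."""
    return colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)

def dct2(block):
    """Naive 2-D DCT-II of a square block; the patent's spectrum image
    would be built from coefficients like these."""
    n = len(block)
    def alpha(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[u][v] = alpha(u) * alpha(v) * s
    return out
```

For example, a uniform 3x3 patch yields the single LBP code 255 (all neighbour bits set), and pure red maps to HSV hue 0 with full saturation and value.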
An embodiment of the present invention further provides a computer device, as shown in fig. 5, where the computer device includes a memory, a processor, a communication interface, and a communication bus, a computer program that can be executed on the processor is stored in the memory, and the processor executes the computer program to implement the steps in the method according to the above embodiment.
The processor may be a Central Processing Unit (CPU). The processor may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or a combination thereof.
The memory, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and units, such as the program units corresponding to the method embodiments of the present invention described above. By running the non-transitory software programs, instructions and units stored in the memory, the processor performs its various functional applications and data processing, that is, implements the method in the above method embodiment.
The memory may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created by the processor, and the like. Further, the memory may include high speed random access memory, and may also include non-transitory memory, such as at least one disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor, and such remote memory may be coupled to the processor via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more units are stored in the memory and, when executed by the processor, perform the method of the above embodiments.
An embodiment of the present invention further provides a computer-readable storage medium, in which a computer program for executing the above method is stored.
In summary, the invention acquires the face image to be detected, performs feature extraction processing on it to generate feature images, inputs the face image to be detected and the feature images into a face living body detection classification model obtained in advance through machine learning training, and outputs the face living body detection result, thereby completing face living body detection.
In an embodiment, the influence of the illumination on the recognition can be reduced as much as possible by adding the illumination normalization processing.
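The illumination normalization mentioned here (detailed elsewhere in this document as a gamma transform, Gaussian difference filtering, and histogram equalization) can be sketched in plain Python. The fixed 3x3 kernels below only crudely approximate true Gaussian filters, and the valid-mode convolution shrinks the image by a one-pixel border; this is an illustrative toy under those assumptions, not the patent's implementation:

```python
def gamma_correct(img, gamma=0.4):
    """Gamma transform on a 2-D grid of 0..255 values: compresses the
    dynamic range of bright regions."""
    return [[255.0 * (p / 255.0) ** gamma for p in row] for row in img]

def convolve3x3(img, kernel, norm):
    """Valid-mode 3x3 convolution (no padding, for brevity)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * (w - 2) for _ in range(h - 2)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            s = sum(kernel[a][b] * img[i - 1 + a][j - 1 + b]
                    for a in range(3) for b in range(3))
            out[i - 1][j - 1] = s / norm
    return out

def difference_of_gaussians(img):
    """Crude DoG: subtract a wider blur from a narrower one. These 3x3
    kernels only approximate genuine Gaussian filters."""
    narrow = convolve3x3(img, [[1, 2, 1], [2, 4, 2], [1, 2, 1]], 16.0)
    wide = convolve3x3(img, [[1, 1, 1], [1, 1, 1], [1, 1, 1]], 9.0)
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(narrow, wide)]

def equalize(img):
    """Histogram equalization after rescaling values to integer levels 0..255."""
    flat = [p for row in img for p in row]
    lo, hi = min(flat), max(flat)
    scale = 255.0 / (hi - lo) if hi > lo else 0.0
    levels = [[int((p - lo) * scale) for p in row] for row in img]
    hist = [0] * 256
    for row in levels:
        for p in row:
            hist[p] += 1
    cdf, run = [0] * 256, 0
    for v in range(256):
        run += hist[v]
        cdf[v] = round(255 * run / len(flat))
    return [[cdf[p] for p in row] for row in levels]

def illumination_normalize(img):
    """Gamma transform -> difference of Gaussians -> histogram equalization."""
    return equalize(difference_of_gaussians(gamma_correct(img)))
```

A 4x4 input therefore yields a 2x2 normalized output with values back in the 0..255 range.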
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are only exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (10)
1. A face living body detection method is characterized by comprising the following steps:
collecting a human face image to be detected;
carrying out feature extraction processing on the face image to be detected to generate a plurality of feature images;
inputting the face image to be detected and the multiple feature images into a face living body detection classification model obtained through machine learning training in advance, and outputting a face living body detection result;
carrying out feature extraction processing on the face image to be detected to generate a plurality of feature images, including:
performing illumination normalization processing on the face image to be detected to generate an illumination normalization processing image;
carrying out feature extraction processing on the face image to be detected by adopting an LBP algorithm to generate a texture feature image;
converting the face image to be detected from an RGB color space into an HSV space to generate an HSV image;
and performing a discrete cosine transform (DCT) on the face image to be detected to generate a frequency spectrum image.
2. The method of claim 1, wherein performing illumination normalization processing on the face image to be detected to generate an illumination normalization processed image comprises:
carrying out gamma conversion on the human face image to be detected;
carrying out Gaussian difference filtering on the image subjected to gamma conversion;
and performing histogram equalization processing on the image subjected to the Gaussian difference filtering to generate an illumination normalization processing image.
3. The method of claim 2, wherein the method further comprises:
and configuring different weight coefficients for different types of feature images when the face image to be detected and the feature images are input into a face living body detection classification model obtained through machine learning training in advance.
4. The method according to claim 1, wherein before the feature extraction processing is performed on the face image to be detected, the method further comprises:
carrying out a preprocessing operation on the face image to be detected, wherein the preprocessing operation comprises: filtering, denoising and binarization operations.
5. The method of claim 1, wherein after the acquiring the face image to be detected, further comprising:
and detecting the face image to be detected by using a detector based on the Viola-Jones algorithm to obtain a face region image.
6. The method of claim 5, wherein the feature extraction processing is performed on the face image to be detected to generate a plurality of feature images, and the method comprises:
and performing feature extraction processing on the face region image to generate a plurality of feature images.
7. The method according to any one of claims 1 to 6, wherein before inputting the face image to be detected and the plurality of feature images into a face in-vivo detection classification model obtained through machine learning training in advance, the method further comprises:
acquiring training sample data, wherein the training sample data comprises human face living body data and non-human face living body data;
and training the multilayer convolutional neural network model according to the training sample data to obtain a human face living body detection classification model.
8. A face liveness detection device, comprising:
the image acquisition module is used for acquiring a human face image to be detected;
the characteristic extraction processing module is used for carrying out characteristic extraction processing on the face image to be detected to generate a plurality of characteristic images;
the human face living body detection module is used for inputting the human face image to be detected and the multiple characteristic images into a human face living body detection classification model obtained through machine learning training in advance and outputting a human face living body detection result;
the feature extraction processing module is further configured to:
performing illumination normalization processing on the face image to be detected to generate an illumination normalization processing image;
carrying out feature extraction processing on the face image to be detected by adopting an LBP algorithm to generate a texture feature image;
converting the face image to be detected from an RGB color space into an HSV space to generate an HSV image;
and performing a discrete cosine transform (DCT) on the face image to be detected to generate a frequency spectrum image.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program for executing the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010655086.7A CN111814682A (en) | 2020-07-09 | 2020-07-09 | Face living body detection method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111814682A (en) | 2020-10-23 |
Family
ID=72842026
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010655086.7A Pending CN111814682A (en) | 2020-07-09 | 2020-07-09 | Face living body detection method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111814682A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---|
CN112507934A * | 2020-12-16 | 2021-03-16 | 平安银行股份有限公司 | Living body detection method, living body detection device, electronic apparatus, and storage medium
CN112507934B * | 2020-12-16 | 2024-06-07 | 平安银行股份有限公司 | Living body detection method, living body detection device, electronic equipment and storage medium
CN112633113A * | 2020-12-17 | 2021-04-09 | 厦门大学 | Cross-camera human face living body detection method and system
CN113569707A * | 2021-07-23 | 2021-10-29 | 北京百度网讯科技有限公司 | Living body detection method, living body detection device, electronic apparatus, and storage medium
CN114140854A * | 2021-11-29 | 2022-03-04 | 北京百度网讯科技有限公司 | Living body detection method and device, electronic equipment and storage medium
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106778518A (en) * | 2016-11-24 | 2017-05-31 | 汉王科技股份有限公司 | A kind of human face in-vivo detection method and device |
CN108875618A (en) * | 2018-06-08 | 2018-11-23 | 高新兴科技集团股份有限公司 | A kind of human face in-vivo detection method, system and device |
WO2019114580A1 (en) * | 2017-12-13 | 2019-06-20 | 深圳励飞科技有限公司 | Living body detection method, computer apparatus and computer-readable storage medium |
CN110598580A (en) * | 2019-08-25 | 2019-12-20 | 南京理工大学 | Human face living body detection method |
CN110705392A (en) * | 2019-09-17 | 2020-01-17 | Oppo广东移动通信有限公司 | Face image detection method and device and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Yang et al. | Underwater image enhancement based on conditional generative adversarial network | |
Yang et al. | FV-GAN: Finger vein representation using generative adversarial networks | |
CN110569756B (en) | Face recognition model construction method, recognition method, device and storage medium | |
CN113033465B (en) | Living body detection model training method, device, equipment and storage medium | |
Chen et al. | Robust local features for remote face recognition | |
CN108345818B (en) | Face living body detection method and device | |
CN111814682A (en) | Face living body detection method and device | |
Faraji et al. | Face recognition under varying illuminations using logarithmic fractal dimension-based complete eight local directional patterns | |
CN111783629B (en) | Human face in-vivo detection method and device for resisting sample attack | |
Seal et al. | Human face recognition using random forest based fusion of à-trous wavelet transform coefficients from thermal and visible images | |
CN111814574A (en) | Face living body detection system, terminal and storage medium applying double-branch three-dimensional convolution model | |
WO2021137946A1 (en) | Forgery detection of face image | |
CN111079764B (en) | Low-illumination license plate image recognition method and device based on deep learning | |
CN110084238B (en) | Finger vein image segmentation method and device based on LadderNet network and storage medium | |
CN113011253B (en) | Facial expression recognition method, device, equipment and storage medium based on ResNeXt network | |
CN112528866A (en) | Cross-modal face recognition method, device, equipment and storage medium | |
CN112507897A (en) | Cross-modal face recognition method, device, equipment and storage medium | |
CN110674824A (en) | Finger vein segmentation method and device based on R2U-Net and storage medium | |
CN111209873A (en) | High-precision face key point positioning method and system based on deep learning | |
Wang et al. | Fingerprint pore extraction using U-Net based fully convolutional network | |
Li et al. | Poisson reconstruction-based fusion of infrared and visible images via saliency detection | |
Liu et al. | Iris recognition in visible spectrum based on multi-layer analogous convolution and collaborative representation | |
Guo et al. | Multifeature extracting CNN with concatenation for image denoising | |
Hao et al. | Low-light image enhancement based on retinex and saliency theories | |
Suárez et al. | Cross-spectral image patch similarity using convolutional neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20201023 |