CN110751069A - Face living body detection method and device - Google Patents

Face living body detection method and device

Info

Publication number
CN110751069A
Authority
CN
China
Prior art keywords
training
living body
face
sample set
feature vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910958335.7A
Other languages
Chinese (zh)
Inventor
张艳红
彭骏
吉纲
占涛
方自成
陈伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ROUTON ELECTRONIC CO Ltd
Wuhan Puli Commercial Machine Co Ltd
Original Assignee
ROUTON ELECTRONIC CO Ltd
Wuhan Puli Commercial Machine Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ROUTON ELECTRONIC CO Ltd, Wuhan Puli Commercial Machine Co Ltd
Priority to CN201910958335.7A
Publication of CN110751069A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model based on the proximity to a decision surface, e.g. support vector machines
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the invention provides a face living body detection method and device. The method comprises: acquiring a face external expansion image; and inputting the face external expansion image into a preset fusion classifier to obtain a living body detection result, wherein the preset fusion classifier is obtained by training based on a fused face living body training sample set. Multi-feature extraction is performed on a face living body sample set to obtain a plurality of features; an SVM (support vector machine) classifier is trained separately for each feature to obtain three classifiers; the three classifiers are used to obtain a confidence for each sample of a face living body test set; statistics and weight calculation are then performed to realize feature fusion, yielding a face living body training sample set that fuses the three features; and an SVM classifier is trained on the fused face living body training sample set to obtain the preset fusion classifier. Compared with a single-feature classifier, the preset fusion classifier has higher recognition accuracy and performs better in complex environments.

Description

Face living body detection method and device
Technical Field
The invention relates to the technical field of face recognition, and in particular to a face living body detection method and device.
Background
Face recognition is a biometric technique for identifying a person's identity based on facial feature information. A camera collects images or video streams containing human faces, the faces are automatically detected and tracked in the images, and the detected faces are then recognized. Being convenient, fast and accurate, the technique has been applied very widely in recent years. With its large-scale application, however, security problems keep arising: there have been many cases in which a personal photo is used in place of the real person during face recognition. Face living body detection is therefore critical for improving the security of face recognition.
Face living body detection methods in the prior art fall into interactive and non-interactive types. Interactive face living body detection is realized through random action instructions and lip-language detection; it suffers from a large amount of computation, a complex detection process and a poor user experience. Non-interactive face living body detection based on single-feature detection, in turn, has low recognition accuracy and performs poorly in complex environments.
Therefore, how to perform face living body detection more effectively has become an urgent problem to be solved in the industry.
Disclosure of Invention
Embodiments of the present invention provide a face living body detection method and device to solve, or at least partially solve, the technical problems mentioned in the background above.
In a first aspect, an embodiment of the present invention provides a face living body detection method, including:
acquiring a face external expansion image;
inputting the face external expansion image into a preset fusion classifier to obtain a living body detection result;
the preset fusion classifier is obtained by training based on a fusion face living body training sample set.
Before the step of inputting the face external expansion image into a preset fusion classifier to obtain a living body detection result, the method further includes:
acquiring a fused face living body training sample set, and acquiring a plurality of living body fusion feature vector samples and a plurality of non-living body fusion feature vector samples according to the fused face living body training sample set;
taking each living body fusion feature vector sample as a positive sample, taking each non-living body fusion feature vector sample as a negative sample, taking one positive sample and one negative sample as a group of training samples, and obtaining a plurality of groups of training samples;
and, for any group of training samples, inputting the training samples into an SVM classifier for training to obtain the preset fusion classifier.
Before the step of acquiring the fused face living body training sample set, the method further includes:
acquiring a face living body training sample set; and
analyzing the face living body training sample set to obtain a local binary pattern micro-texture feature vector training sample set, a multi-direction color gradient feature vector training sample set and a Fourier spectrum feature vector training sample set;
the face living body training sample set comprises a plurality of living body face training sample photos and a plurality of non-living body face training sample photos.
After the step of obtaining the local binary pattern micro-texture feature vector training sample set, the multi-directional color gradient feature vector training sample set, and the fourier spectrum feature vector training sample set, the method further includes:
acquiring a plurality of living body local binary pattern micro texture feature vector samples and a plurality of non-living body local binary pattern micro texture feature vector samples according to the local binary pattern micro texture feature vector training sample set;
taking each living body local binary pattern micro texture feature vector sample as a positive sample, taking each non-living body local binary pattern micro texture feature vector sample as a negative sample, taking one positive sample and one negative sample as a group of training samples, and obtaining a plurality of groups of training samples;
and, for any group of training samples, inputting the training samples into an SVM classifier for training to obtain a first SVM classifier training model.
After the step of obtaining the local binary pattern micro-texture feature vector training sample set, the multi-directional color gradient feature vector training sample set, and the fourier spectrum feature vector training sample set, the method further includes:
acquiring a plurality of living body multidirectional color gradient feature vector samples and a plurality of non-living body multidirectional color gradient feature vector samples according to the multidirectional color gradient feature vector training sample set;
taking each living body multidirectional color gradient feature vector sample as a positive sample, taking each non-living body multidirectional color gradient feature vector sample as a negative sample, taking one positive sample and one negative sample as a group of training samples, and obtaining a plurality of groups of training samples;
and, for any group of training samples, inputting the training samples into an SVM classifier for training to obtain a second SVM classifier training model.
After the step of obtaining the local binary pattern micro-texture feature vector training sample set, the multi-directional color gradient feature vector training sample set, and the fourier spectrum feature vector training sample set, the method further includes:
acquiring a plurality of living body Fourier spectrum feature vector samples and a plurality of non-living body Fourier spectrum feature vector samples according to the Fourier spectrum feature vector training sample set;
taking each living body Fourier spectrum feature vector sample as a positive sample, taking each non-living body Fourier spectrum feature vector sample as a negative sample, taking one positive sample and one negative sample as a group of training samples, and obtaining a plurality of groups of training samples;
and, for any group of training samples, inputting the training samples into an SVM classifier for training to obtain a third SVM classifier training model.
Wherein the method further includes:
acquiring a face living body test sample set;
analyzing the face living body test sample set according to a first SVM classifier training model, a second SVM classifier training model and a third SVM classifier training model respectively to obtain first feature weight information, second feature weight information and third feature weight information;
and linearly combining the local binary pattern micro-texture feature vector training sample set, the multi-directional color gradient feature vector training sample set and the Fourier spectrum feature vector training sample set according to the first feature weight information, the second feature weight information and the third feature weight information to obtain the fused face living body training sample set.
In a second aspect, an embodiment of the present invention provides a face liveness detection apparatus, including:
the acquisition module is used for acquiring a face external expansion image;
the detection module is used for inputting the face external expansion image into a preset fusion classifier to obtain a living body detection result;
the preset fusion classifier is obtained by training based on a fusion face living body training sample set.
In a third aspect, an embodiment of the present invention provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method for detecting a living human face according to the first aspect when executing the program.
In a fourth aspect, an embodiment of the present invention provides a non-transitory computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the face liveness detection method according to the first aspect.
According to the face living body detection method and device provided by the embodiments of the invention, multi-feature extraction is performed on a face living body training sample set to obtain a plurality of features, and an SVM classifier is trained separately for each feature to obtain three classifiers. The confidences of the three classifiers on a face living body test sample set are calculated, and statistics and weight calculation are then performed to realize feature fusion, yielding a face living body training sample set that fuses the three types of features. An SVM classifier is trained on the fused face living body training sample set to obtain the preset fusion classifier, which has higher recognition accuracy than a single-feature classifier and performs better in complex environments.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a schematic flow chart illustrating a face living body detection method according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a face living body detection apparatus according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic flow chart of a face living body detection method according to an embodiment of the present invention, as shown in fig. 1, including:
step S1, acquiring a face external expansion image;
step S2, inputting the face external expansion image into a preset fusion classifier to obtain a living body detection result;
the preset fusion classifier is obtained by training based on a fusion face living body training sample set.
Specifically, the face external expansion image described in the embodiment of the present invention is obtained by performing face detection on an image acquired by a camera using an existing face detection algorithm and, if a face is detected, appropriately expanding the face region outward on all sides.
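The patent does not name a specific detector or expansion ratio, so the following is only a minimal sketch of this step using OpenCV's Haar cascade face detector; the 20% margin and the function name are illustrative assumptions.

```python
import cv2

EXPAND_RATIO = 0.2  # hypothetical margin; the patent only says "appropriate" expansion

def get_face_external_expansion_image(frame):
    """Detect a face and return the face region expanded outward on all sides."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # no face detected
    x, y, w, h = max(faces, key=lambda box: box[2] * box[3])  # keep the largest face
    dx, dy = int(w * EXPAND_RATIO), int(h * EXPAND_RATIO)
    H, W = frame.shape[:2]
    # Expand the bounding box and clamp it to the image borders.
    x0, y0 = max(0, x - dx), max(0, y - dy)
    x1, y1 = min(W, x + w + dx), min(H, y + h + dy)
    return frame[y0:y1, x0:x1]
```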
The fused face living body training sample set described in the embodiment of the invention is obtained as follows. Feature extraction is first performed on a face living body training sample set to obtain a local binary pattern micro-texture feature vector training sample set, a multi-directional color gradient feature vector training sample set and a Fourier spectrum feature vector training sample set. An SVM classifier is then trained on the local binary pattern micro-texture feature vector training sample set to obtain a first SVM classifier training model; an SVM classifier is trained on the multi-directional color gradient feature vector training sample set to obtain a second SVM classifier training model; and an SVM classifier is trained on the Fourier spectrum feature vector training sample set to obtain a third SVM classifier training model. A face living body test sample set is input into the first, second and third SVM classifier training models respectively to obtain confidences for the three features, and weight information for the three features is calculated from these confidences. Finally, the local binary pattern micro-texture feature vector training sample set, the multi-directional color gradient feature vector training sample set and the Fourier spectrum feature vector training sample set are linearly combined according to the weight information to obtain the fused face living body training sample set.
The preset fusion classifier described in the embodiment of the invention is obtained by inputting the fused face living body training sample set into an SVM classifier for training.
According to the face living body detection method and device provided by the embodiment of the invention, multi-feature extraction is performed on a face living body training sample set to obtain a plurality of features, and an SVM classifier is trained separately for each feature to obtain three classifiers. The confidences of the three classifiers on a face living body test sample set are calculated to realize feature fusion, yielding a fused face living body training sample set that fuses the three features. An SVM classifier is then trained on the fused face living body training sample set to obtain the preset fusion classifier, which has higher recognition accuracy than a single-feature classifier and performs better in complex environments.
On the basis of the above embodiment, before the step of inputting the face external expansion image into a preset fusion classifier to obtain a living body detection result, the method further includes:
acquiring a fused face living body training sample set, and acquiring a plurality of living body fusion feature vector samples and a plurality of non-living body fusion feature vector samples according to the fused face living body training sample set;
taking each living body fusion feature vector sample as a positive sample, taking each non-living body fusion feature vector sample as a negative sample, taking one positive sample and one negative sample as a group of training samples, and obtaining a plurality of groups of training samples;
and, for any group of training samples, inputting the training samples into an SVM classifier for training to obtain the preset fusion classifier.
Specifically, the fused face living body training sample set described in the embodiment of the present invention includes a plurality of living body face training sample photos and a plurality of non-living body face training sample photos, so that a plurality of living body fusion feature vector samples and a plurality of non-living body fusion feature vector samples can be obtained from it.
The preset fusion classifier obtained by training on the fused face living body training sample set described in the embodiment of the invention combines the strengths of multiple features and achieves more accurate detection than a classifier based on any single feature.
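For illustration, a minimal sketch of this training step with scikit-learn is shown below. The RBF kernel, the function name and the one-shot fit over all samples (rather than the per-group pairing of one positive and one negative sample described above) are assumptions; the patent does not fix the SVM configuration.

```python
import numpy as np
from sklearn.svm import SVC

def train_fusion_classifier(live_fused_vectors, spoof_fused_vectors):
    """Train the preset fusion classifier on fused feature vectors.

    live_fused_vectors / spoof_fused_vectors: arrays of shape (n, d) holding the
    fused feature vectors of living body and non-living body training samples.
    """
    X = np.vstack([live_fused_vectors, spoof_fused_vectors])
    y = np.concatenate([np.ones(len(live_fused_vectors)),    # positive samples
                        np.zeros(len(spoof_fused_vectors))])  # negative samples
    clf = SVC(kernel="rbf", probability=True)  # kernel choice is an assumption
    clf.fit(X, y)
    return clf

# Usage: label 1 means living body, label 0 means non-living body.
# fusion_clf = train_fusion_classifier(live_X, spoof_X)
# is_live = fusion_clf.predict(fused_vector.reshape(1, -1))[0] == 1
```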
On the basis of the above embodiment, before the step of acquiring the fused face living body training sample set, the method further includes:
acquiring a face living body training sample set; and
analyzing the face living body training sample set to obtain a local binary pattern micro-texture feature vector training sample set, a multi-direction color gradient feature vector training sample set and a Fourier spectrum feature vector training sample set;
the face living body training sample set comprises a plurality of living body face training sample photos and a plurality of non-living body face training sample photos.
Specifically, the non-living body face training sample photo described in the embodiment of the present invention may be an image obtained by photographing a photo that contains a face, i.e., a recaptured photo.
Analyzing the face living body training sample set as described in the embodiment of the invention specifically refers to: extracting circular local binary pattern micro-texture features from the face region photos of the face living body training sample set in gray-scale space to obtain the local binary pattern micro-texture feature vector training sample set; extracting multi-directional color gradient features, which reflect differences in illumination reflection, from the face region photos in the HSV and YCrCb color spaces to obtain the multi-directional color gradient feature vector training sample set; and extracting Fourier spectrum features from the face region photos in gray-scale space to obtain the Fourier spectrum feature vector training sample set.
According to the embodiment of the invention, a plurality of features are extracted from the face living body training sample set, which facilitates the subsequent steps.
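The patent does not give concrete parameters for these three extractors. The sketch below is one plausible reading using OpenCV, NumPy and scikit-image; the LBP radius, the Sobel-based gradients, the histogram sizes and the function names are all illustrative assumptions.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_feature(face_bgr, P=8, R=1):
    """Circular local binary pattern micro-texture histogram in gray-scale space."""
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def color_gradient_feature(face_bgr, bins=16):
    """Per-channel gradient-magnitude histograms in the HSV and YCrCb color spaces,
    intended to reflect differences in illumination reflection."""
    feats = []
    for code in (cv2.COLOR_BGR2HSV, cv2.COLOR_BGR2YCrCb):
        converted = cv2.cvtColor(face_bgr, code)
        for channel in cv2.split(converted):
            gx = cv2.Sobel(channel, cv2.CV_32F, 1, 0)  # horizontal gradient
            gy = cv2.Sobel(channel, cv2.CV_32F, 0, 1)  # vertical gradient
            mag = cv2.magnitude(gx, gy)
            # 1445 ~ max Sobel magnitude for 8-bit input with the default 3x3 kernel
            hist, _ = np.histogram(mag, bins=bins, range=(0, 1445), density=True)
            feats.append(hist)
    return np.concatenate(feats)

def fourier_feature(face_bgr, bins=32):
    """Histogram of the log-magnitude Fourier spectrum in gray-scale space."""
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    spectrum = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(gray))))
    hist, _ = np.histogram(spectrum, bins=bins, density=True)
    return hist
```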
On the basis of the above embodiment, after the step of obtaining the local binary pattern micro-texture feature vector training sample set, the multi-directional color gradient feature vector training sample set, and the fourier spectrum feature vector training sample set, the method further includes:
acquiring a plurality of living body local binary pattern micro texture feature vector samples and a plurality of non-living body local binary pattern micro texture feature vector samples according to the local binary pattern micro texture feature vector training sample set;
taking each living body local binary pattern micro texture feature vector sample as a positive sample, taking each non-living body local binary pattern micro texture feature vector sample as a negative sample, taking one positive sample and one negative sample as a group of training samples, and obtaining a plurality of groups of training samples;
and, for any group of training samples, inputting the training samples into an SVM classifier for training to obtain a first SVM classifier training model.
After the step of obtaining the local binary pattern micro-texture feature vector training sample set, the multi-directional color gradient feature vector training sample set, and the fourier spectrum feature vector training sample set, the method further includes:
acquiring a plurality of living body multidirectional color gradient feature vector samples and a plurality of non-living body multidirectional color gradient feature vector samples according to the multidirectional color gradient feature vector training sample set;
taking each living body multidirectional color gradient feature vector sample as a positive sample, taking each non-living body multidirectional color gradient feature vector sample as a negative sample, taking one positive sample and one negative sample as a group of training samples, and obtaining a plurality of groups of training samples;
and, for any group of training samples, inputting the training samples into an SVM classifier for training to obtain a second SVM classifier training model.
After the step of obtaining the local binary pattern micro-texture feature vector training sample set, the multi-directional color gradient feature vector training sample set, and the fourier spectrum feature vector training sample set, the method further includes:
acquiring a plurality of living body Fourier spectrum feature vector samples and a plurality of non-living body Fourier spectrum feature vector samples according to the Fourier spectrum feature vector training sample set;
taking each living body Fourier spectrum feature vector sample as a positive sample, taking each non-living body Fourier spectrum feature vector sample as a negative sample, taking one positive sample and one negative sample as a group of training samples, and obtaining a plurality of groups of training samples;
and, for any group of training samples, inputting the training samples into an SVM classifier for training to obtain a third SVM classifier training model.
The method further includes: acquiring a face living body test sample set; and analyzing the face living body test sample set according to a first SVM (support vector machine) classifier training model, a second SVM classifier training model and a third SVM classifier training model respectively to obtain first feature weight information, second feature weight information and third feature weight information;
and linearly combining the local binary pattern micro-texture feature vector training sample set, the multi-directional color gradient feature vector training sample set and the Fourier spectrum feature vector training sample set according to the first feature weight information, the second feature weight information and the third feature weight information to obtain the fused face living body training sample set.
Specifically, in the embodiment of the present invention, the face living body test sample set is analyzed according to the first, second and third SVM classifier training models respectively to obtain the first, second and third feature weight information as follows. The face living body test sample set is input into the first SVM classifier training model, the second SVM classifier training model and the third SVM classifier training model respectively to obtain a first confidence, a second confidence and a third confidence for each test sample. For each sample, the feature with the highest confidence is scored 2, the middle feature is scored 1 and the feature with the lowest confidence is scored 0. The scores of the three features are accumulated over the whole test data set, and the percentage of each feature's score in the total score is used as that feature's weight during fusion, giving the first feature weight information, the second feature weight information and the third feature weight information.
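A minimal sketch of this scoring-and-weighting rule follows; how ties between equal confidences are ranked is not specified in the patent, so the argsort-based ordering here is an assumption.

```python
import numpy as np

def feature_weights(conf1, conf2, conf3):
    """Compute fusion weights from per-sample confidences of the three models.

    conf1, conf2, conf3: 1-D arrays with one confidence per test sample from the
    first, second and third SVM classifier training models.
    Returns (w1, w2, w3), which sum to 1.
    """
    confs = np.stack([conf1, conf2, conf3], axis=1)  # shape (n_samples, 3)
    scores = np.zeros(3)
    for row in confs:
        order = np.argsort(row)   # indices from lowest to highest confidence
        scores[order[2]] += 2     # highest-confidence feature scores 2
        scores[order[1]] += 1     # middle feature scores 1
        # lowest-confidence feature scores 0
    return tuple(scores / scores.sum())
```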
In the embodiment of the present invention, the three feature vectors are then weighted according to the first, second and third feature weight information and linearly combined to obtain the fused feature. For example, if the three feature vectors are a1, a2 and a3 and the weights are w1, w2 and w3 respectively, the linear combination is w1*a1 + w2*a2 + w3*a3.
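A short sketch of this fusion is given below. As the formula implies, it assumes the three feature vectors have been brought to a common length (for example by fixing the histogram sizes in the extraction step); an alternative reading would concatenate the weighted vectors instead of summing them.

```python
import numpy as np

def fuse_features(a1, a2, a3, w1, w2, w3):
    """Weighted linear combination of the three feature vectors: w1*a1 + w2*a2 + w3*a3."""
    a1, a2, a3 = (np.asarray(a, dtype=float) for a in (a1, a2, a3))
    assert a1.shape == a2.shape == a3.shape, "feature vectors must share one length"
    return w1 * a1 + w2 * a2 + w3 * a3
```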
In the embodiment of the invention, the three feature vectors are fused in this way to obtain a fused face living body training sample set that fuses the three features, and an SVM (support vector machine) classifier is trained on the fused face living body training sample set to obtain the preset fusion classifier, which has higher recognition accuracy than a single-feature classifier and performs better in complex environments.
Fig. 2 is a schematic structural diagram of a face living body detection apparatus according to an embodiment of the present invention, as shown in fig. 2, including: an acquisition module 210 and a detection module 220; the acquisition module 210 is configured to acquire a face external expansion image; the detection module 220 is configured to input the face external expansion image into a preset fusion classifier to obtain a living body detection result; the preset fusion classifier is obtained by training based on a fused face living body training sample set.
The apparatus provided in the embodiment of the present invention is used for executing the above method embodiments, and for details of the process and the details, reference is made to the above embodiments, which are not described herein again.
With the device, multi-feature extraction is performed on a face living body training sample set to obtain multiple features, and an SVM classifier is trained separately for each feature to obtain three classifiers. The confidences of the three classifiers on the face living body test sample set are calculated to realize feature fusion, yielding a fused face living body training sample set that fuses the three features. An SVM classifier is trained on the fused face living body training sample set to obtain the preset fusion classifier, which has higher recognition accuracy than a single-feature classifier and performs better in complex environments.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. As shown in fig. 3, the electronic device may include: a processor 310, a communication interface 320, a memory 330 and a communication bus 340, wherein the processor 310, the communication interface 320 and the memory 330 communicate with each other via the communication bus 340. The processor 310 may call logic instructions in the memory 330 to perform the following method: acquiring a face external expansion image; inputting the face external expansion image into a preset fusion classifier to obtain a living body detection result; wherein the preset fusion classifier is obtained by training based on a fused face living body training sample set.
In addition, the logic instructions in the memory 330 may be implemented in the form of software functional units and stored in a computer readable storage medium when the software functional units are sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
An embodiment of the present invention discloses a computer program product, which includes a computer program stored on a non-transitory computer readable storage medium, the computer program including program instructions, when the program instructions are executed by a computer, the computer can execute the methods provided by the above method embodiments, for example, the method includes: acquiring a face external expansion image; inputting the face external expansion image into a preset fusion classifier to obtain a living body detection result; the preset fusion classifier is obtained by training based on a fusion face living body training sample set.
Embodiments of the present invention provide a non-transitory computer-readable storage medium storing server instructions, where the server instructions cause a computer to execute the method provided in the foregoing embodiments, for example including: acquiring a face external expansion image; inputting the face external expansion image into a preset fusion classifier to obtain a living body detection result; wherein the preset fusion classifier is obtained by training based on a fused face living body training sample set.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A face living body detection method is characterized by comprising the following steps:
acquiring a face external expansion image;
inputting the face external expansion image into a preset fusion classifier to obtain a living body detection result;
the preset fusion classifier is obtained by training based on a fusion face living body training sample set.
2. The face living body detection method according to claim 1, wherein before the step of inputting the face external expansion image into a preset fusion classifier to obtain the living body detection result, the method further comprises:
acquiring a fused face living body training sample set, and acquiring a plurality of living body fusion feature vector samples and a plurality of non-living body fusion feature vector samples according to the fused face living body training sample set;
taking each living body fusion feature vector sample as a positive sample, taking each non-living body fusion feature vector sample as a negative sample, taking one positive sample and one negative sample as a group of training samples, and obtaining a plurality of groups of training samples;
and, for any group of training samples, inputting the training samples into an SVM classifier for training to obtain the preset fusion classifier.
3. The face living body detection method according to claim 2, wherein before the step of acquiring the fused face living body training sample set, the method further comprises:
acquiring a face living body training sample set; and
analyzing the face living body training sample set to obtain a local binary pattern micro-texture feature vector training sample set, a multi-direction color gradient feature vector training sample set and a Fourier spectrum feature vector training sample set;
the face living body training sample set comprises a plurality of living body face training sample photos and a plurality of non-living body face training sample photos.
4. The face living body detection method according to claim 3, wherein after the step of obtaining the local binary pattern micro-texture feature vector training sample set, the multi-directional color gradient feature vector training sample set, and the Fourier spectrum feature vector training sample set, the method further comprises:
acquiring a plurality of living body local binary pattern micro texture feature vector samples and a plurality of non-living body local binary pattern micro texture feature vector samples according to the local binary pattern micro texture feature vector training sample set;
taking each living body local binary pattern micro texture feature vector sample as a positive sample, taking each non-living body local binary pattern micro texture feature vector sample as a negative sample, taking one positive sample and one negative sample as a group of training samples, and obtaining a plurality of groups of training samples;
and, for any group of training samples, inputting the training samples into an SVM classifier for training to obtain a first SVM classifier training model.
5. The face living body detection method according to claim 4, wherein after the step of obtaining the local binary pattern micro-texture feature vector training sample set, the multi-directional color gradient feature vector training sample set, and the Fourier spectrum feature vector training sample set, the method further comprises:
acquiring a plurality of living body multidirectional color gradient feature vector samples and a plurality of non-living body multidirectional color gradient feature vector samples according to the multidirectional color gradient feature vector training sample set;
taking each living body multidirectional color gradient feature vector sample as a positive sample, taking each non-living body multidirectional color gradient feature vector sample as a negative sample, taking one positive sample and one negative sample as a group of training samples, and obtaining a plurality of groups of training samples;
and, for any group of training samples, inputting the training samples into an SVM classifier for training to obtain a second SVM classifier training model.
6. The face living body detection method according to claim 5, wherein after the step of obtaining the local binary pattern micro-texture feature vector training sample set, the multi-directional color gradient feature vector training sample set, and the Fourier spectrum feature vector training sample set, the method further comprises:
acquiring a plurality of living body Fourier spectrum feature vector samples and a plurality of non-living body Fourier spectrum feature vector samples according to the Fourier spectrum feature vector training sample set;
taking each living body Fourier spectrum feature vector sample as a positive sample, taking each non-living body Fourier spectrum feature vector sample as a negative sample, taking one positive sample and one negative sample as a group of training samples, and obtaining a plurality of groups of training samples;
and, for any group of training samples, inputting the training samples into an SVM classifier for training to obtain a third SVM classifier training model.
7. The face living body detection method according to claim 6, wherein the method further comprises:
acquiring a human face living body test sample set;
analyzing the face living body test sample set according to a first SVM classifier training model, a second SVM classifier training model and a third SVM classifier training model respectively to obtain first feature weight information, second feature weight information and third feature weight information;
and linearly combining the local binary pattern micro-texture feature vector training sample set, the multi-directional color gradient feature vector training sample set and the Fourier spectrum feature vector training sample set according to the first feature weight information, the second feature weight information and the third feature weight information to obtain a fused face living body training sample set.
8. A face living body detection device, comprising:
the acquisition module is used for acquiring a face external expansion image;
the detection module is used for inputting the face external expansion image into a preset fusion classifier to obtain a living body detection result;
the preset fusion classifier is obtained by training based on a fusion face living body training sample set.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the face living body detection method according to any one of claims 1 to 7 when executing the program.
10. A non-transitory computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the face living body detection method according to any one of claims 1 to 7.
CN201910958335.7A 2019-10-10 2019-10-10 Face living body detection method and device Pending CN110751069A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910958335.7A CN110751069A (en) 2019-10-10 2019-10-10 Face living body detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910958335.7A CN110751069A (en) 2019-10-10 2019-10-10 Face living body detection method and device

Publications (1)

Publication Number Publication Date
CN110751069A true CN110751069A (en) 2020-02-04

Family

ID=69277888

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910958335.7A Pending CN110751069A (en) 2019-10-10 2019-10-10 Face living body detection method and device

Country Status (1)

Country Link
CN (1) CN110751069A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111178341A (en) * 2020-04-10 2020-05-19 支付宝(杭州)信息技术有限公司 Living body detection method, device and equipment
CN111652082A (en) * 2020-05-13 2020-09-11 北京的卢深视科技有限公司 Face living body detection method and device
CN111832460A (en) * 2020-07-06 2020-10-27 北京工业大学 Face image extraction method and system based on multi-feature fusion
CN112070041A (en) * 2020-09-14 2020-12-11 北京印刷学院 Living body face detection method and device based on CNN deep learning model
CN113283388A (en) * 2021-06-24 2021-08-20 中国平安人寿保险股份有限公司 Training method, device and equipment of living human face detection model and storage medium
CN114140854A (en) * 2021-11-29 2022-03-04 北京百度网讯科技有限公司 Living body detection method and device, electronic equipment and storage medium
CN114764874A (en) * 2022-04-06 2022-07-19 北京百度网讯科技有限公司 Deep learning model training method, object recognition method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140037152A1 (en) * 2011-04-20 2014-02-06 Institute Of Automation, Chinese Academy Of Sciences Identity recognition based on multiple feature fusion for an eye image
CN105354554A (en) * 2015-11-12 2016-02-24 西安电子科技大学 Color and singular value feature-based face in-vivo detection method
CN110069983A (en) * 2019-03-08 2019-07-30 深圳神目信息技术有限公司 Vivo identification method, device, terminal and readable medium based on display medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140037152A1 (en) * 2011-04-20 2014-02-06 Institute Of Automation, Chinese Academy Of Sciences Identity recognition based on multiple feature fusion for an eye image
CN105354554A (en) * 2015-11-12 2016-02-24 西安电子科技大学 Color and singular value feature-based face in-vivo detection method
CN110069983A (en) * 2019-03-08 2019-07-30 深圳神目信息技术有限公司 Vivo identification method, device, terminal and readable medium based on display medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
董吉祥: "Research and Implementation of a Face Liveness Detection Algorithm", China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111178341A (en) * 2020-04-10 2020-05-19 支付宝(杭州)信息技术有限公司 Living body detection method, device and equipment
CN111652082A (en) * 2020-05-13 2020-09-11 北京的卢深视科技有限公司 Face living body detection method and device
CN111652082B (en) * 2020-05-13 2021-12-28 北京的卢深视科技有限公司 Face living body detection method and device
CN111832460A (en) * 2020-07-06 2020-10-27 北京工业大学 Face image extraction method and system based on multi-feature fusion
CN111832460B (en) * 2020-07-06 2024-05-21 北京工业大学 Face image extraction method and system based on multi-feature fusion
CN112070041A (en) * 2020-09-14 2020-12-11 北京印刷学院 Living body face detection method and device based on CNN deep learning model
CN113283388A (en) * 2021-06-24 2021-08-20 中国平安人寿保险股份有限公司 Training method, device and equipment of living human face detection model and storage medium
CN113283388B (en) * 2021-06-24 2024-05-24 中国平安人寿保险股份有限公司 Training method, device, equipment and storage medium of living body face detection model
CN114140854A (en) * 2021-11-29 2022-03-04 北京百度网讯科技有限公司 Living body detection method and device, electronic equipment and storage medium
CN114764874A (en) * 2022-04-06 2022-07-19 北京百度网讯科技有限公司 Deep learning model training method, object recognition method and device
CN114764874B (en) * 2022-04-06 2023-04-07 北京百度网讯科技有限公司 Deep learning model training method, object recognition method and device

Similar Documents

Publication Publication Date Title
CN110751069A (en) Face living body detection method and device
WO2021203863A1 (en) Artificial intelligence-based object detection method and apparatus, device, and storage medium
CN108229330A (en) Face fusion recognition methods and device, electronic equipment and storage medium
CN107463865B (en) Face detection model training method, face detection method and device
CA3066029A1 (en) Image feature acquisition
CN107958230B (en) Facial expression recognition method and device
US20170140210A1 (en) Image processing apparatus and image processing method
EP3176563A1 (en) Identification device and identification method
US9489566B2 (en) Image recognition apparatus and image recognition method for identifying object
JP6071002B2 (en) Reliability acquisition device, reliability acquisition method, and reliability acquisition program
CN108509994B (en) Method and device for clustering character images
CN111178252A (en) Multi-feature fusion identity recognition method
CN109815823B (en) Data processing method and related product
WO2020195732A1 (en) Image processing device, image processing method, and recording medium in which program is stored
US20240203097A1 (en) Method and apparatus for training image processing model, and image classifying method and apparatus
CN110598019A (en) Repeated image identification method and device
WO2017167313A1 (en) Expression recognition method and device
CN111723762B (en) Face attribute identification method and device, electronic equipment and storage medium
JP2015197708A (en) Object identification device, object identification method, and program
JP2007048172A (en) Information classification device
CN114677730A (en) Living body detection method, living body detection device, electronic apparatus, and storage medium
CN116206334A (en) Wild animal identification method and device
CN106709490B (en) Character recognition method and device
Marjusalinah et al. Classification of finger spelling American sign language using convolutional neural network
CN114332990A (en) Emotion recognition method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200204