CN111027400A - Living body detection method and device - Google Patents

Living body detection method and device


Publication number
CN111027400A
Authority
CN
China
Prior art keywords
living body
model
detection
face image
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911116418.8A
Other languages
Chinese (zh)
Inventor
张徽
崔东顺
李宜兵
钱兴
黄广斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yantai Guangzhi Weixin Intelligent Technology Co ltd
Original Assignee
Yantai Guangzhi Weixin Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yantai Guangzhi Weixin Intelligent Technology Co ltd
Priority to CN201911116418.8A
Publication of CN111027400A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the invention provide a living body detection method and device. The method comprises: acquiring a face image of an object to be detected and preprocessing the face image; inputting the preprocessed face image into a pre-trained living body detection model, consisting of a convolutional neural network model and an extreme learning machine (ELM) model, for detection and outputting a detection score; and comparing the score with a set threshold, the object to be detected being judged a living body if the score is greater than or equal to the set threshold. By combining an ELM model with a convolutional neural network model, the technical scheme reduces network parameters, improves network learning efficiency, enhances the generalization performance of the network, and at the same time allows the model to be updated online.

Description

Living body detection method and device
Technical Field
The invention relates to the field of image processing, in particular to a method and a device for detecting a living body.
Background
With the rapid development of artificial intelligence, face recognition technology is increasingly applied in fields requiring identity verification, such as security and finance, for example remote fund transactions and access control systems. In applications concerning personal and property security, however, it is not enough to verify the identity of the authenticated person; the system must first confirm that the authenticated person is a living organism, i.e. the face recognition system must be able to withstand common attack methods such as photos, masks, and screen shots. Face liveness detection is a method for confirming the true physiological characteristics of an object in such authentication scenarios. In face recognition applications, liveness detection verifies whether the user operating the system is a real living body, effectively resisting common attack means such as photos, printed pictures, masks, occlusions, and screen replays, helping to expose fraudulent behavior and protecting the user's interests.
Current face liveness detection technology falls into two main categories. The first is active face liveness detection, which requires the user to complete specific actions (blinking, opening the mouth, shaking or nodding the head, etc.) according to instructions during face recognition; the liveness detection module then judges whether the operator is a living body by whether those actions are completed accurately. The second is passive face liveness detection, which does not require the user to perform a series of actions and therefore offers a better user experience, but is technically more difficult; it relies mainly on single-frame image input or additional sensor equipment. In the prior art, passive face liveness detection usually performs supervised training with deep learning on images of live and non-live faces, but guaranteeing detection accuracy in this way tends to require either more parameters in the deep learning framework or training data on the order of hundreds of millions of samples.
Disclosure of Invention
Embodiments of the invention provide a living body detection method and device that combine an extreme learning machine (ELM) model with a convolutional neural network model to reduce network parameters, improve network learning efficiency, and enhance the generalization performance of the network.
In order to achieve the above object, in one aspect, an embodiment of the present invention provides a method for detecting a living body, including:
acquiring a face image of an object to be detected, and preprocessing the face image;
inputting the preprocessed face image into a pre-trained living body detection model consisting of a convolutional neural network model and an ELM model for detection, and outputting a detection score;
and comparing the score with a set threshold, and if the score is greater than or equal to the set threshold, judging that the object to be detected is a living body.
In another aspect, an embodiment of the present invention provides a living body detection apparatus, where the apparatus includes:
the image preprocessing unit is used for acquiring a face image of an object to be detected and preprocessing the face image;
the image detection unit is used for inputting the preprocessed face image into a pre-trained living body detection model consisting of a convolutional neural network model and an extreme learning machine (ELM) model for detection and outputting a detection score;
and the living body judging unit is used for comparing the score with a set threshold value, and judging that the object to be detected is a living body if the score is greater than or equal to the set threshold value.
The above technical scheme has the following beneficial effects: the invention provides a living body detection method based on an extreme learning machine (ELM) and a convolutional neural network model, combining the fast training speed of the ELM model with the ability of the convolutional neural network model to extract deep feature information from an image; the resulting model has few parameters and is convenient to use on mobile terminals.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart of a method of live detection according to an embodiment of the present invention;
FIG. 2 is a flow chart of a living body detection method according to yet another embodiment of the present invention;
fig. 3 is a schematic structural diagram of a living body detection apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention provides a living body detection method based on an ELM and a convolutional neural network. By combining the two models, the method reduces network parameters, improves network learning efficiency, enhances the generalization performance of the network, and supports online model updating. First, a set of living body and non-living body face images is collected as a training set for the convolutional neural network model. The softmax layer of the trained convolutional neural network model is then deleted, and the training set is fed through the adjusted model to produce ELM training samples, with which the ELM model is trained. For an object to be detected, its face is captured and the image is preprocessed; the preprocessed image is passed through the adjusted convolutional neural network model, and the resulting output is fed into the ELM model to obtain the final detection result. Comparing this result with a preset threshold yields the prediction of whether the object to be detected is a living body.
Fig. 1 is a flow chart of a method for detecting a living body according to an embodiment of the present invention, the method including:
s101, collecting a face image of an object to be detected, and preprocessing the face image. The method specifically comprises the following steps: cutting an area where a face is located in the face image; and setting the cut human face image to be a specific size. Specifically, the size of the clipped human face image can be determined according to the requirements of the used convolutional neural network model, and in this embodiment, the clipped human face image can be 224 × 224.
S102, inputting the preprocessed face image into a pre-trained living body detection model consisting of a convolutional neural network model and an extreme learning machine (ELM) model for detection, and outputting a detection score. The pre-trained living body detection model consisting of a convolutional neural network model and an ELM model is trained by the following steps:
Collecting a set number of living body face images and non-living body face images as training images: collecting a first preset number of living body face images shot by different acquisition devices under different lighting conditions, and a second preset number of non-living body face images shot by different acquisition devices under different lighting conditions, including re-shot printed face photos and electronic face photos. For example, 40000 living body and non-living body face images are collected in total.
Preprocessing the training images to obtain training data and test data: cropping the region of each training image where the face is located; resizing the cropped face image to a specific size consistent with the size used in preprocessing at detection time, for example 224 × 224; generating a label for each resized face image, where a living body face image is labeled 1 and a non-living body face image is labeled 0; and taking 3/4 of all resized face images with their corresponding labels as training data and the remaining 1/4 as test data.
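The labeling and 3/4 vs. 1/4 split described above can be sketched as follows; the fixed shuffle seed and flat array layout are illustrative assumptions:

```python
import numpy as np

def make_splits(live: np.ndarray, spoof: np.ndarray, train_frac: float = 0.75):
    """Label live faces 1 and non-live faces 0, shuffle, and keep 3/4 for
    training and 1/4 for testing, as described in the text."""
    X = np.concatenate([live, spoof])
    y = np.concatenate([np.ones(len(live)), np.zeros(len(spoof))])
    rng = np.random.default_rng(0)      # fixed seed for reproducibility
    order = rng.permutation(len(X))
    X, y = X[order], y[order]
    n_train = int(train_frac * len(X))
    return (X[:n_train], y[:n_train]), (X[n_train:], y[n_train:])
```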
Inputting the training data into a convolutional neural network model to train it, and determining the trained convolutional neural network model after evaluating it on the test data. Whether the convolutional neural network is sufficiently trained is judged by the test accuracy, computed as: accuracy = number of correctly classified images / number of all images. The trained convolutional neural network model is the one with the highest accuracy on the test data set. The convolutional neural network model can be a MobileNet, ShuffleNet, or SqueezeNet model; the MobileNet model includes an input layer, convolution layers, BatchNorm layers, ReLU layers, inverted residual blocks, and a softmax layer.
Replacing the softmax layer of the trained convolutional neural network model with a single-hidden-layer ELM model to obtain the pre-trained living body detection model consisting of the convolutional neural network model and the ELM model. Specifically, the trained convolutional neural network model is trimmed by deleting its softmax layer (the layer that produces the prediction probability for a face image). The last layer of the trimmed convolutional neural network model is connected to a single-hidden-layer ELM. The training data of the convolutional neural network model are fed through the adjusted model, and its outputs are used as ELM training samples. All ELM training samples are then input into an ELM model with one hidden layer of 600 hidden nodes to train it. This is equivalent to extracting features of the face image with a convolutional neural network and then training an ELM network on those features. The convolutional neural network model and the trained ELM model together form the whole living body detection model.
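A minimal sketch of the single-hidden-layer ELM that replaces the softmax layer is given below. It takes CNN feature vectors as input, uses 600 hidden nodes as in the text, and solves the output weights in closed form with the Moore-Penrose pseudoinverse, which is what makes ELM training fast. The sigmoid activation and random-weight initialization are assumptions, since the patent does not specify them:

```python
import numpy as np

class ELMHead:
    """Single-hidden-layer ELM replacing the CNN's softmax layer.

    Input weights are random and fixed; only the output weights beta are
    solved, in closed form, via the Moore-Penrose pseudoinverse."""

    def __init__(self, n_features: int, n_hidden: int = 600, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_features, n_hidden))
        self.b = rng.standard_normal(n_hidden)
        self.beta = None

    def _hidden(self, X):
        # Sigmoid activation of the random hidden layer (an assumption here).
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X, y):
        H = self._hidden(X)                 # hidden-layer output matrix
        self.beta = np.linalg.pinv(H) @ y   # closed-form least squares
        return self

    def score(self, X):
        """Detection score, to be compared against a set threshold."""
        return self._hidden(X) @ self.beta
```

With the threshold of 0.9 from step S103, `score(X) >= 0.9` would mark an input as a living body.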
S103, comparing the score with a set threshold; if the score is greater than or equal to the set threshold, the object to be detected is judged to be a living body. Specifically, the detection score is compared with the threshold; with the threshold set to 0.9, for example, a score of 0.9 or more yields a recognition result of living body, and a score below 0.9 yields a recognition result of non-living body.
Further, as shown in fig. 2, the method further includes updating the ELM model online. This specifically comprises: storing face images determined in actual detection to be living body face images in a living body library; storing face images determined in actual detection to be non-living body face images in a non-living body library; regularly checking the two libraries for misjudged face images and storing each misjudged face image into the library that correctly corresponds to it; and retraining the ELM model with the face images in the updated living body and non-living body libraries.
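The periodic review step of the online update, moving each misjudged image into the correct library before the ELM is retrained, can be sketched as follows; the dict-based libraries and image ids are illustrative assumptions, since the patent only describes the flow:

```python
def update_libraries(live_lib: dict, spoof_lib: dict, corrections: dict):
    """Move misjudged face images to the correct library.

    corrections maps an image id to its verified label
    (1 = living body, 0 = non-living body)."""
    for img_id, label in corrections.items():
        if label == 1 and img_id in spoof_lib:
            live_lib[img_id] = spoof_lib.pop(img_id)
        elif label == 0 and img_id in live_lib:
            spoof_lib[img_id] = live_lib.pop(img_id)
    return live_lib, spoof_lib
```

After this correction pass, the ELM model would be retrained on the contents of the two updated libraries, leaving the convolutional feature extractor untouched.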
Fig. 3 is a schematic structural diagram of a living body detection apparatus according to an embodiment of the present invention, where the apparatus includes:
the image preprocessing unit 21 is configured to acquire a face image of an object to be detected and preprocess the face image;
the image detection unit 22 is used for inputting the preprocessed face image into a pre-trained living body detection model consisting of a convolutional neural network model and an extreme learning machine (ELM) model for detection and outputting a detection score;
and the living body judging unit 23 is configured to compare the score with a set threshold, and if the score is greater than or equal to the set threshold, judge that the object to be detected is a living body.
Further, the apparatus further comprises a detection model training unit, configured to:
collecting a set number of living body face images and non-living body face images as training images;
preprocessing the training image to obtain training data and test data;
inputting the training data into a convolutional neural network model to train the convolutional neural network model, and determining the trained convolutional neural network model after testing the trained convolutional neural network model through test data;
replacing a softMax layer of the trained convolutional neural network model with a single hidden layer ELM model to obtain a living body detection model;
and inputting the training data into the living body detection model to train the ELM model within it, so as to obtain a pre-trained living body detection model consisting of a convolutional neural network model and the ELM model.
Further, the device also comprises an ELM model updating unit for updating the ELM model online, which is specifically used for:
storing the face image which is determined as the living body face image after detection in the actual detection into a living body library;
storing the face image which is determined as a non-living body face image after detection in actual detection into a non-living body library;
regularly checking the living body library and the non-living body library for misjudged face images, and storing each misjudged face image into the library that correctly corresponds to it;
and performing updating training on the ELM model by using the face images in the updated living body library and the non-living body library.
The invention relates to a face liveness detection method in which a camera captures a face image of the object to be detected, the acquired image is preprocessed, the preprocessed image is recognized using an offline-trained model, and liveness detection is realized according to the recognition result. Moreover, the ELM model can be optimized and updated online according to the liveness detection results observed in actual use, improving the detection success rate without the trouble of retraining a complex convolutional neural network.
It should be understood that the specific order or hierarchy of steps in the processes disclosed is an example of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged without departing from the scope of the present disclosure. The accompanying method claims present elements of the various steps in a sample order, and are not intended to be limited to the specific order or hierarchy presented.
In the foregoing detailed description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the subject matter require more features than are expressly recited in each claim. Rather, as the following claims reflect, invention lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate preferred embodiment of the invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
What has been described above includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the aforementioned embodiments, but one of ordinary skill in the art may recognize that many further combinations and permutations of various embodiments are possible. Accordingly, the embodiments described herein are intended to embrace all such alterations, modifications and variations that fall within the scope of the appended claims. Furthermore, to the extent that the term "includes" is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term "comprising" as "comprising" is interpreted when employed as a transitional word in a claim. Furthermore, any use of the term "or" in the specification or the claims is intended to mean a "non-exclusive or".
Those of skill in the art will further appreciate that the various illustrative logical blocks, units, and steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate the interchangeability of hardware and software, various illustrative components, elements, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design requirements of the overall device. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present embodiments.
The various illustrative logical blocks, or elements, described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor, an Application Specific Integrated Circuit (ASIC), a field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a digital signal processor and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a digital signal processor core, or any other similar configuration.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may be stored in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. For example, a storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC, which may be located in a user terminal. In the alternative, the processor and the storage medium may reside in different components in a user terminal.
In one or more exemplary designs, the functions described above in connection with the embodiments of the invention may be implemented in hardware, software, firmware, or any combination of the three. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media include both computer storage media and communication media that facilitate transfer of a computer program from one place to another. Storage media may be any available media that can be accessed by a general purpose or special purpose computer. For example, such computer-readable media can include, but are not limited to, RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store program code in the form of instructions or data structures and which can be read by a general-purpose or special-purpose computer or processor. Additionally, any connection is properly termed a computer-readable medium; thus, if the software is transmitted from a website, server, or other remote source via a coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wirelessly, e.g. infrared, radio, or microwave, those media are included in the definition. Disk and disc, as used herein, include compact disc, laser disc, optical disc, DVD, floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above may also be included in the computer-readable medium.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. A living body detection method, the method comprising:
acquiring a face image of an object to be detected, and preprocessing the face image;
inputting the preprocessed face image into a pre-trained living body detection model consisting of a convolutional neural network model and an ELM model for detection, and outputting a detection score;
and comparing the score with a set threshold, and if the score is greater than or equal to the set threshold, judging that the object to be detected is a living body.
2. The live body detection method according to claim 1, wherein the preprocessing the face image includes:
cutting an area where a face is located in the face image;
and setting the cut face image as a face image with a specific size.
3. The living body detection method according to claim 1, wherein the pre-trained living body detection model consisting of the convolutional neural network model and the extreme learning machine (ELM) model is trained by:
collecting a set number of living body face images and non-living body face images as training images;
preprocessing the training image to obtain training data and test data;
inputting the training data into a convolutional neural network model to train the convolutional neural network model, and determining the trained convolutional neural network model after testing the trained convolutional neural network model through test data;
replacing a softmax layer of the trained convolutional neural network model with an ELM model to obtain a living body detection model;
and inputting the training data into the living body detection model to train the ELM model within it, so as to obtain a pre-trained living body detection model consisting of a convolutional neural network model and the ELM model.
4. The living body detection method according to claim 3, wherein the collecting of a set number of living body face images and non-living body face images as training images comprises:
collecting a first preset number of living body face images shot by different acquisition devices under different lighting conditions;
and collecting a second preset number of non-living body face images shot by different acquisition devices under different lighting conditions, including re-shot printed face photos and electronic face photos.
5. The living body detection method according to claim 3, wherein the preprocessing of the training images to obtain training data and test data comprises:
cutting the area where the face is in the training image;
setting the cut human face image as a human face image with a specific size;
generating labels for the face images with the set sizes, wherein the label corresponding to the living body face image is 1, and the label corresponding to the non-living body face image is 0;
taking 3/4 of all the face images of the set size with their corresponding labels as training data, and the remaining 1/4 as test data.
6. The liveness detection method of claim 3 wherein the convolutional neural network model comprises a MobileNet model comprising an input layer, a convolutional layer, a BatchNorm layer, a ReLU layer, an inverted residual block, and a softmax layer.
7. The liveness detection method of claim 1 further comprising updating the ELM model on-line, comprising:
storing the face image which is determined as the living body face image after detection in the actual detection into a living body library;
storing the face image which is determined as a non-living body face image after detection in actual detection into a non-living body library;
regularly checking the living body library and the non-living body library for misjudged face images, and storing each misjudged face image into the library that correctly corresponds to it;
and performing updating training on the ELM model by using the face images in the updated living body library and the non-living body library.
8. A living body detection apparatus, the apparatus comprising:
an image preprocessing unit, configured to acquire a face image of an object to be detected and preprocess the face image;
an image detection unit, configured to input the preprocessed face image into a pre-trained living body detection model consisting of a convolutional neural network model and an extreme learning machine (ELM) model for detection, and output a detection score;
and a living body judging unit, configured to compare the score with a set threshold and determine that the object to be detected is a living body if the score is greater than or equal to the set threshold.
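The judging unit of claim 8 reduces to a single threshold comparison. A one-function sketch (the 0.5 default is an assumption; the claim only requires score >= threshold):

```python
def liveness_decision(score, threshold=0.5):
    """Judging unit of claim 8: living body iff score >= threshold."""
    return "living body" if score >= threshold else "non-living body"
```

Note the claim's ">= " makes the boundary score itself count as a living body.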
9. The living body detection apparatus of claim 8, further comprising a detection model training unit configured to:
collect a set number of living body face images and non-living body face images as training images;
preprocess the training images to obtain training data and test data;
input the training data into a convolutional neural network model to train it, and determine the trained convolutional neural network model after testing it with the test data;
replace the softmax layer of the trained convolutional neural network model with a single-hidden-layer ELM model to obtain a living body detection model;
and input the training data into the living body detection model to train the ELM model within it, obtaining a pre-trained living body detection model consisting of the convolutional neural network model and the ELM model.
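The single-hidden-layer ELM that claim 9 swaps in for the softmax layer has one defining trait: the hidden-layer weights are random and frozen, and only the output weights are solved in closed form. A NumPy sketch on generic feature vectors follows (dimensions, sigmoid activation, and names are assumptions, not the patent's configuration):

```python
import numpy as np

def train_elm(features, targets, n_hidden=64, seed=0):
    """Single-hidden-layer ELM: hidden weights W, b are random and frozen;
    only the output weights beta are fit, in closed form via pseudoinverse."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((features.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(features @ W + b)))   # sigmoid hidden layer
    beta = np.linalg.pinv(H) @ targets              # least-squares solution
    return W, b, beta

def elm_predict(features, W, b, beta):
    """Score the features; compare against a threshold to decide liveness."""
    H = 1.0 / (1.0 + np.exp(-(features @ W + b)))
    return H @ beta
```

Because training is one pseudoinverse rather than gradient descent, retraining on an updated library (claims 7 and 10) is cheap, which is presumably why the ELM head, not the CNN, is the part updated online.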
10. The living body detection device of claim 9, further comprising an ELM model updating unit for updating the ELM model online, configured to:
store face images determined to be living body face images during actual detection into a living body library;
store face images determined to be non-living body face images during actual detection into a non-living body library;
periodically review misjudged face images in the living body library and the non-living body library and move each misjudged face image into the library it correctly belongs to;
and retrain the ELM model with the face images in the updated living body library and non-living body library.
CN201911116418.8A 2019-11-15 2019-11-15 Living body detection method and device Pending CN111027400A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911116418.8A CN111027400A (en) 2019-11-15 2019-11-15 Living body detection method and device

Publications (1)

Publication Number Publication Date
CN111027400A true CN111027400A (en) 2020-04-17

Family

ID=70200240

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911116418.8A Pending CN111027400A (en) 2019-11-15 2019-11-15 Living body detection method and device

Country Status (1)

Country Link
CN (1) CN111027400A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107688576A (en) * 2016-08-04 2018-02-13 中国科学院声学研究所 The structure and tendentiousness sorting technique of a kind of CNN SVM models
CN108898112A (en) * 2018-07-03 2018-11-27 东北大学 A kind of near-infrared human face in-vivo detection method and system
CN108921041A (en) * 2018-06-06 2018-11-30 深圳神目信息技术有限公司 A kind of biopsy method and device based on RGB and IR binocular camera
CN109376694A (en) * 2018-11-23 2019-02-22 重庆中科云丛科技有限公司 A kind of real-time face biopsy method based on image procossing
CN109886087A (en) * 2019-01-04 2019-06-14 平安科技(深圳)有限公司 A kind of biopsy method neural network based and terminal device
CN110298230A (en) * 2019-05-06 2019-10-01 深圳市华付信息技术有限公司 Silent biopsy method, device, computer equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Yu Dan, Wu Xiaojun: "A face recognition method combining convolutional neural networks and extreme learning machines", Data Acquisition and Preprocessing *
Li Bing et al.: "A face anti-spoofing method using parallel convolutional neural networks", Journal of Chinese Computer Systems *
Zhao Zhongtang: "Research on Activity Recognition Methods Based on Smart Mobile Terminals", 30 April 2015 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113761983A (en) * 2020-06-05 2021-12-07 杭州海康威视数字技术股份有限公司 Method and device for updating human face living body detection model and image acquisition equipment
CN113761983B (en) * 2020-06-05 2023-08-22 杭州海康威视数字技术股份有限公司 Method and device for updating human face living body detection model and image acquisition equipment
CN111914758A (en) * 2020-08-04 2020-11-10 成都奥快科技有限公司 Face in-vivo detection method and device based on convolutional neural network
CN111931699A (en) * 2020-09-10 2020-11-13 华南理工大学 Pedestrian attribute identification method and device and computer equipment
CN112990090A (en) * 2021-04-09 2021-06-18 北京华捷艾米科技有限公司 Face living body detection method and device

Similar Documents

Publication Publication Date Title
CN106599772B (en) Living body verification method and device and identity authentication method and device
CN109948408B (en) Activity test method and apparatus
CN108875833B (en) Neural network training method, face recognition method and device
CN109086691B (en) Three-dimensional face living body detection method, face authentication and identification method and device
CN111027400A (en) Living body detection method and device
CN106934376B (en) A kind of image-recognizing method, device and mobile terminal
CN108038176B (en) Method and device for establishing passerby library, electronic equipment and medium
CN113366487A (en) Operation determination method and device based on expression group and electronic equipment
WO2016172872A1 (en) Method and device for verifying real human face, and computer program product
US11489866B2 (en) Systems and methods for private authentication with helper networks
JP6969663B2 (en) Devices and methods for identifying the user's imaging device
CN107609463B (en) Living body detection method, living body detection device, living body detection equipment and storage medium
WO2022022493A1 (en) Image authenticity determination method and system
CN112395979B (en) Image-based health state identification method, device, equipment and storage medium
TWI712980B (en) Claim information extraction method and device, and electronic equipment
CN105654033A (en) Face image verification method and device
Smith-Creasey et al. Continuous face authentication scheme for mobile devices with tracking and liveness detection
US10423817B2 (en) Latent fingerprint ridge flow map improvement
CN112784741A (en) Pet identity recognition method and device and nonvolatile storage medium
CN113642639B (en) Living body detection method, living body detection device, living body detection equipment and storage medium
US10755074B2 (en) Latent fingerprint pattern estimation
CN113837006A (en) Face recognition method and device, storage medium and electronic equipment
CN111767829B (en) Living body detection method, device, system and storage medium
CN114202807A (en) Living body detection method, living body detection device, electronic apparatus, and storage medium
CN113705366A (en) Personnel management system identity identification method and device and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
Inventor after: Zhang Hui; Cui Dongshun; Li Yibing; Qian Xing
Inventor before: Zhang Hui; Cui Dongshun; Li Yibing; Qian Xing; Huang Guangbin
RJ01 Rejection of invention patent application after publication
Application publication date: 20200417