CN108875497B - Living body detection method, living body detection device and computer storage medium


Info

Publication number
CN108875497B
Authority
CN
China
Prior art keywords
face
video
living body
input data
face video
Prior art date
Legal status
Active
Application number
CN201711025141.9A
Other languages
Chinese (zh)
Other versions
CN108875497A (en)
Inventor
孙伟 (Sun Wei)
范浩强 (Fan Haoqiang)
Current Assignee
Beijing Kuangshi Technology Co Ltd
Beijing Megvii Technology Co Ltd
Original Assignee
Beijing Kuangshi Technology Co Ltd
Beijing Megvii Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Kuangshi Technology Co Ltd and Beijing Megvii Technology Co Ltd
Priority to CN201711025141.9A
Publication of CN108875497A
Application granted
Publication of CN108875497B
Status: Active

Classifications

    • G06V40/161 — Human faces, e.g. facial parts, sketches or expressions: detection; localisation; normalisation
    • G06V40/45 — Spoof detection, e.g. liveness detection: detection of the body part being alive
    • G07C9/37 — Individual registration on entry or exit, in combination with an identity check using biometric data, e.g. fingerprints, iris scans or voice recognition


Abstract

Embodiments of the invention provide a living body detection method, a living body detection device and a computer storage medium. The method includes: acquiring a face video, wherein the distance between the face in the face video and the image acquisition device acquiring the face video varies; and performing living body detection on the face video to determine whether the face is a living body. By taking the change of distance between the face in the video and the image acquisition device into account, the embodiments of the invention can improve the living body detection effect and defend against non-living attacks in the form of photos and the like.

Description

Living body detection method, living body detection device and computer storage medium
Technical Field
The present invention relates to the field of image processing, and more particularly to a living body detection method, a living body detection device, and a computer storage medium.
Background
For security reasons, access control systems are widely used in many fields. An access control system can be opened in various ways, such as entering a password, swiping an access card, or scanning a fingerprint. However, passwords are easily leaked, access cards are easily copied, and fingerprints are easily forged, all of which pose potential security risks.
As a biometric identity-authentication technique, face recognition can be applied to an access control system to improve its security; however, various attacks, such as photos and screen replays, need to be prevented during identity authentication.
Disclosure of Invention
The present invention has been made in view of the above problems. The invention provides a living body detection method, a living body detection device and a computer storage medium, which perform living body detection by taking into account the change of distance between the face in a video and the image acquisition device, and can thereby defend against non-living attacks in the form of photos and the like.
According to an aspect of the present invention, there is provided a living body detection method, including:
acquiring a face video, wherein the distance between the face in the face video and the image acquisition device acquiring the face video varies;
and performing living body detection on the face video to determine whether the face is a living body.
In an embodiment of the present invention, the performing live body detection on the face video to determine whether the face is a live body includes:
and determining whether the face is a living body according to the size change of different parts of the face in the face video, wherein the size change of the different parts of the face in the face video is obtained according to the distance change of the face.
In an embodiment of the present invention, the determining whether the face is a living body according to size changes of different parts of the face in the face video includes:
and constructing three-dimensional information of the face according to the size change of different parts of the face in the face video, and determining whether the face is a living body according to the three-dimensional information.
In an embodiment of the present invention, the acquiring a face video includes:
acquiring input data;
carrying out face detection on the input data;
continuously tracking the face in the input data according to the position information of the face in the input data determined by the face detection, and cropping out the face from the input data to generate the face video.
In an embodiment of the present invention, the method is applied to an access control system, and the method further includes:
performing face recognition on the face video to determine identity information corresponding to the face;
and in response to the identity information being an authenticated identity and the face being a living body, controlling the access control system to open.
In one embodiment of the invention, the liveness detection is performed using a recurrent neural network.
In an embodiment of the present invention, before the acquiring the face video, the method further includes:
and obtaining the recurrent neural network for the in-vivo detection through training by using the sample set with the labeled information.
In one embodiment of the present invention, further comprising:
adding labeling information to the face video, and adding the face video added with the labeling information into a sample set;
and obtaining an updated recurrent neural network through training by using the updated sample set.
According to another aspect of the present invention, there is provided a living body detection device, the device comprising:
an acquisition module, configured to acquire a face video, wherein the distance between the face in the face video and the image acquisition device acquiring the face video varies;
and the living body detection module is used for carrying out living body detection on the face video so as to determine whether the face is a living body.
In an embodiment of the invention, the living body detection module is specifically configured to: determine whether the face is a living body according to the size changes of different parts of the face in the face video, wherein the size changes of the different parts of the face in the face video are obtained according to the change of distance of the face.
In an embodiment of the invention, the living body detection module is specifically configured to: construct three-dimensional information of the face according to the size changes of different parts of the face in the face video, and determine whether the face is a living body according to the three-dimensional information.
In an embodiment of the present invention, the obtaining module includes:
the acquisition submodule is used for acquiring input data;
the face detection submodule is used for carrying out face detection on the input data;
and a video generation sub-module, configured to continuously track the face in the input data according to the position information of the face in the input data determined by the face detection, and crop out the face from the input data to generate the face video.
In an embodiment of the present invention, when the device is applied to an access control system, the device further includes:
the face recognition module is used for carrying out face recognition on the face video so as to determine identity information corresponding to the face;
and a control module, configured to control the access control system to open in response to the identity information being an authenticated identity and the face being a living body.
In one embodiment of the invention, the liveness detection module performs the liveness detection using a recurrent neural network.
In an embodiment of the present invention, the apparatus further comprises a training module, configured to:
adding labeling information to the face video, and adding the face video added with the labeling information into a sample set;
and obtaining an updated recurrent neural network through training by using the updated sample set.
The device can be used to implement the living body detection method of the foregoing aspect and its various examples.
According to yet another aspect of the present invention, there is provided a living body detection device, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the living body detection method of the preceding aspects and examples.
According to a further aspect of the present invention, there is provided a computer storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method of living body detection described in the preceding aspects and examples.
Therefore, in the embodiments of the invention, living body detection is performed by taking into account the change of distance between the face in the video and the image acquisition device, which can improve the living body detection effect and defend against non-living attacks in the form of photos and the like.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent by describing in more detail embodiments of the present invention with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings, like reference numbers generally represent like parts or steps.
FIG. 1 is a schematic block diagram of an electronic device of an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a living body detection method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a recurrent neural network in an embodiment of the present invention;
FIG. 4 is a schematic flow chart of the living body detection method of FIG. 2 applied to an access control method of an access control system according to an embodiment of the present invention;
fig. 5 is another schematic flow chart of an access control method according to an embodiment of the present invention;
FIG. 6 is a schematic block diagram of a living body detection device according to an embodiment of the present invention;
FIG. 7 is a schematic block diagram of a living body detection device applied to an access control system according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, exemplary embodiments according to the present invention will be described in detail below with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset of embodiments of the invention and not all embodiments of the invention, with the understanding that the invention is not limited to the example embodiments described herein. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the invention described herein without inventive step, shall fall within the scope of protection of the invention.
The embodiment of the present invention can be applied to an electronic device, and fig. 1 is a schematic block diagram of the electronic device according to the embodiment of the present invention. The electronic device 10 shown in FIG. 1 includes one or more processors 102, one or more memory devices 104, an input device 106, an output device 108, an image sensor 110, and one or more non-image sensors 114, which are interconnected by a bus system 112 and/or otherwise. It should be noted that the components and configuration of the electronic device 10 shown in FIG. 1 are exemplary only, and not limiting, and that the electronic device may have other components and configurations as desired.
The processor 102 may include a CPU 1021 and a GPU 1022, or other forms of processing unit having data-processing capability and/or instruction-execution capability, such as a Field-Programmable Gate Array (FPGA) or an Advanced RISC Machine (ARM), and the processor 102 may control other components in the electronic device 10 to perform desired functions.
The storage 104 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory 1041 and/or non-volatile memory 1042. The volatile Memory 1041 may include, for example, a Random Access Memory (RAM), a cache Memory (cache), and/or the like. The non-volatile Memory 1042 may include, for example, a Read-Only Memory (ROM), a hard disk, a flash Memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by processor 102 to implement various desired functions. Various applications and various data, such as various data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
The input device 106 may be a device used by a user to input instructions and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
The output device 108 may output various information (e.g., images or sounds) to an outside (e.g., a user), and may include one or more of a display, a speaker, and the like.
The image sensor 110 may take images (e.g., photographs, videos, etc.) desired by the user and store the taken images in the storage device 104 for use by other components.
It should be noted that the components and structure of the electronic device 10 shown in FIG. 1 are merely exemplary; although the electronic device 10 shown in FIG. 1 includes a plurality of different devices, some of them may be unnecessary and others may be present in greater numbers, as required, and the invention is not limited thereto.
FIG. 2 is a schematic flow chart of a living body detection method according to an embodiment of the present invention. The method shown in FIG. 2 may include:
s101, obtaining a face video, wherein the distance between a face in the face video and an image acquisition device for acquiring the face video is changed;
s102, performing living body detection on the face video to determine whether the face is a living body.
Therefore, in the embodiments of the invention, living body detection is performed by taking into account the change of distance between the face in the video and the image acquisition device, which can improve the living body detection effect and defend against non-living attacks in the form of photos and the like.
In addition, the living body detection method provided by the embodiment of the invention can perform living body detection without the user being aware of it, requires no user cooperation, and helps improve user experience.
Exemplarily, S101 may include: acquiring input data; performing face detection on the input data; continuously tracking the face in the input data according to the position information of the face in the input data determined by the face detection, and cropping out the face from the input data to generate the face video.
The raw video captured by the image acquisition device may be used directly as input data. Alternatively, the raw video may first undergo image preprocessing, where the preprocessing may include at least one of: light compensation, gray-level transformation, histogram equalization, normalization, geometric correction, filtering, sharpening, and the like. For example, a pedestrian may move freely (e.g., pace) within the capture area of the image acquisition device, yielding a video in which the distance between the person's face and the image acquisition device changes. For example, when the living body detection method is applied to an access control system, an image acquisition device (such as a camera) may be installed above or near the access control system, facing the entry direction. In addition, it can be understood that, as a pedestrian walks towards the access control system, the pedestrian in the captured raw video moves from far to near relative to the image acquisition device.
A face detection algorithm may be employed to detect whether a face exists in the input data. This process can be implemented with an existing face detection algorithm and is not described in detail here. The face detection algorithm may be obtained by training with a machine learning method (e.g., deep learning, or a regression algorithm based on local features). It can be understood that if face detection determines that the input data contains no face, new input data can be acquired; if face detection determines that the input data contains a face, a face video may be generated from the face data.
The face video can be generated by continuously tracking the face in the input data and cropping out the face from the input data according to the position information of the face determined by the face detection. This keeps unnecessary background information, which might interfere with living body detection, out of the face video and can improve detection efficiency. It can be understood that the face video preserves the time order of the input data; for example, when applied to an access control system, the face in the face video moves from far to near relative to the system, that is, in the face video of S101 the distance between the face and the image acquisition device may gradually decrease over time.
Specifically, face detection can determine not only whether a face exists but also its position information, that is, the position of the face in each frame image of the input data. Furthermore, according to this position information, the face in each frame image can be cropped out to form a face sequence, thereby realizing continuous tracking of the face in the input data and forming the face video.
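The detect-track-crop pipeline described above can be sketched as follows. This is a minimal illustration: `detect_face` is a hypothetical stand-in for a real trained face detector, and frames are represented as nested lists rather than image arrays.

```python
def detect_face(frame):
    """Hypothetical detector stub: returns (top, left, height, width) of the face,
    or None if no face is found. A real system would call a trained face-detection
    model here (the patent suggests deep learning or local-feature regression)."""
    h, w = len(frame), len(frame[0])
    # For illustration, assume the face occupies the central half of the frame.
    return h // 4, w // 4, h // 2, w // 2

def crop(frame, box):
    """Crop the face region out of one frame."""
    top, left, height, width = box
    return [row[left:left + width] for row in frame[top:top + height]]

def generate_face_video(input_frames):
    """Track the face frame by frame and crop it out to form the face video,
    preserving the time order of the input data."""
    face_video = []
    for frame in input_frames:
        box = detect_face(frame)
        if box is None:          # no face in this frame: skip it
            continue
        face_video.append(crop(frame, box))
    return face_video

# Three synthetic 8x8 frames; each cropped face is the central 4x4 region.
frames = [[[p for p in range(8)] for _ in range(8)] for _ in range(3)]
face_video = generate_face_video(frames)
```

Cropping per-frame (rather than keeping whole frames) matches the text's point that background information which might interfere with liveness detection is excluded.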
Illustratively, in S102 the living body detection may be performed using a recurrent neural network. Specifically, the face video of S101 may be used as the input of a Recurrent Neural Network (RNN), and whether the face is a living body may be determined according to its output. For example, the output may be any value in the interval [0, 1]; if the output is greater than or equal to a preset value (e.g., 0.75 or 0.85), the face may be determined to be a living body; if the output is less than the preset value, it may be determined to be an attack.
Referring to FIG. 3, the recurrent neural network may include a series of Long Short-Term Memory (LSTM) units, and the frames of the face video may be input sequentially to the respective LSTM units. The recurrent neural network can learn the relations between frames in the video and can effectively process the video sequence, making the result more reliable. Further, after the process shown in FIG. 3, the output of the recurrent neural network, i.e., a value between 0 and 1, can be obtained through the Softmax function. As an example, the Softmax function may be defined as:
g(z_i) = e^{z_i} / Σ_{j=1}^{k} e^{z_j}

where z_i represents the input of the i-th neuron in the current layer, g(z_i) represents the output of the i-th neuron in the current layer, e is the natural constant, and i = 1, 2, …, k.
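The Softmax computation and the thresholding step can be sketched in Python. The two-class layout (index 1 as the "live" class) is an illustrative assumption; the 0.75 threshold follows the example value in the text, and the max-shift is a standard numerical-stability detail.

```python
import math

def softmax(z):
    """g(z_i) = e^{z_i} / sum_j e^{z_j}, computed stably by shifting by max(z)."""
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [v / total for v in exps]

def is_live(logits, threshold=0.75):
    """Assume index 1 is the 'live' class; compare its probability to the
    preset threshold, as described in the text."""
    return softmax(logits)[1] >= threshold

# Equal logits give equal probabilities; probabilities always sum to 1.
probs = softmax([0.0, 0.0])
```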
As an implementation, in S102 it may be determined whether the face is a living body according to the size changes of different parts of the face in the face video, where these size changes are obtained according to the change of distance of the face. Specifically, a first size change of a first part of the face (such as the region of the eyes) and a second size change of a second part (such as the region of the mouth or nose) can be determined from the face video. If the sizes of different parts change differently, the face can be determined to be a living body; if the size changes of different parts are nearly identical, a non-living attack can be determined. When judging the size changes of different parts, factors such as the frame position within the face video and the posture change of the face can be considered together.
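The comparison of size changes can be sketched as follows: a flat photo scales uniformly as it approaches the camera, so every part grows by the same factor, while on a real three-dimensional face the parts sit at different depths and grow by different factors. The 5% tolerance and the synthetic sizes are illustrative assumptions, not tuned values.

```python
def scale_change(size_far, size_near):
    """Relative growth of an apparent size as the face approaches the camera."""
    return size_near / size_far

def looks_live(eye_sizes, mouth_sizes, tolerance=0.05):
    """Compare the growth factors of two facial parts between the first and
    last frame. Matching growth suggests a planar (photo) attack; differing
    growth suggests a real 3-D face."""
    eye_growth = scale_change(eye_sizes[0], eye_sizes[-1])
    mouth_growth = scale_change(mouth_sizes[0], mouth_sizes[-1])
    return abs(eye_growth - mouth_growth) > tolerance

# A printed photo: every part exactly doubles in apparent size -> attack.
photo = looks_live([10.0, 20.0], [8.0, 16.0])
# A real face: the part at a different depth grows by a different factor -> live.
real = looks_live([10.0, 20.0], [8.0, 18.4])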
For example, in S102, three-dimensional information of the face may be constructed according to size changes of different parts of the face in the face video, and whether the face is a living body may be determined according to the three-dimensional information. Specifically, the depth information of each part of the face can be determined according to the size change of each part of the face in the face video, so that a three-dimensional model of the face is constructed, and whether the face is a living body is determined by comparing the three-dimensional model with stored real face three-dimensional data.
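The depth estimation underlying this 3-D reconstruction can be illustrated with a pinhole-camera sketch: apparent size s = f · S / Z, hence Z = f · S / s, so relative depth can be recovered from apparent size alone. The focal length and real size below are placeholder values (only relative depth matters for comparing parts of the face); a real system would calibrate these.

```python
def relative_depth(apparent_size, focal_length=1.0, real_size=1.0):
    """Pinhole model: apparent size s = f * S / Z, hence Z = f * S / s.
    focal_length and real_size are illustrative placeholders."""
    return focal_length * real_size / apparent_size

# As the face approaches, a part whose apparent size doubles has halved its depth.
z_far = relative_depth(10.0)
z_near = relative_depth(20.0)
```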
For example, when the method is applied to an access control system, when a pedestrian walks to the access control system from far to near, the size change of each part of the face of the pedestrian in a face video can be determined along with the change of the distance.
For example, the method for detecting a living body shown in fig. 2 may be applied to an access control system, and as shown in fig. 4, the method for detecting a living body is applied to an access control system, and the method further includes, on the basis of fig. 2:
s103, carrying out face recognition on the face video to determine identity information corresponding to the face.
And S104, responding to the fact that the identity information is an authentication identity and the face is a living body, and controlling the access control system to be opened.
For example, in S103, an existing face recognition algorithm may be used, for example, a face video is input to a convolutional neural network for face recognition, so as to determine the identity information. The face recognition algorithm may be obtained by training using a machine learning method (e.g., deep learning, a regression algorithm based on local features, etc.).
Specifically, the database may store authenticated face images (also called registered face images), which serve as base-library images. For example, in the access control system of a bank's back office, the face images of all employees of the branch may be used as base-library images; in the access control system of a hotel storage room, the face images of the hotel's logistics staff may be used as base-library images; and so on. In S103, a face in the face video may be matched against the face images in the base library, for example by computing similarity. If the similarity between the face in the face video and a first face image in the base library is greater than or equal to a preset similarity threshold, it may be determined that they belong to the same person, and that the identity information of the face in the face video is an authenticated identity. If the similarity between the face in the face video and every face image in the base library is smaller than the preset similarity threshold, the identity information of the face may be determined to be a non-authenticated identity. The face of one frame image in the face video may be selected for matching, for example a relatively clear frontal face.
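The similarity matching against the base library can be sketched as follows. The embeddings, names, and the 0.8 threshold are illustrative assumptions; a real system would obtain face embeddings from a trained face-recognition network and tune the threshold.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def match_identity(probe, base_library, threshold=0.8):
    """Return the name of the best-matching base-library entry if its similarity
    meets the preset threshold, otherwise None (non-authenticated identity)."""
    best_name, best_sim = None, -1.0
    for name, embedding in base_library.items():
        sim = cosine_similarity(probe, embedding)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name if best_sim >= threshold else None

library = {"employee_a": [1.0, 0.0, 0.0], "employee_b": [0.0, 1.0, 0.0]}
who = match_identity([0.9, 0.1, 0.0], library)       # close to employee_a
stranger = match_identity([0.0, 0.0, 1.0], library)  # matches nobody
```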
It can be understood that if the identity information is determined in S103 to be a non-authenticated identity, that is, the person in the face video is not a person in the base library but a stranger, the access control system may be controlled to remain closed, that is, it may refuse to open for the stranger.
The embodiment of the present invention does not limit the execution order of S102 and S103. For example, S103 may be performed first, and S102 performed after the identity information is determined to be an authenticated identity; if a non-authenticated identity is determined in S103, the access control system is kept closed. Alternatively, S102 may be performed first, and S103 performed after a living body is determined; if an attack is determined in S102, the access control system is kept closed. Alternatively, as shown in FIG. 5, S102 and S103 may be executed in parallel, and the access control system is controlled to open only after an authenticated identity is determined in S103 and a living body is determined in S102; if a non-authenticated identity is determined in S103 or an attack is determined in S102, the access control system is kept closed.
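The opening decision described above reduces to a conjunction of the two checks, which is why any ordering (sequential either way, or parallel) yields the same result. A minimal sketch:

```python
def control_door(identity_authenticated, is_live):
    """Open only when both the identity check (S103) and the liveness check
    (S102) pass; the conjunction is order-independent, so the checks may run
    in either order or in parallel."""
    return "open" if (identity_authenticated and is_live) else "closed"
```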
Optionally, before S101, the recurrent neural network for the living body detection may be obtained through training with a sample set carrying labeling information, where the labeling information is "living body" or "attack". That is, the sample set may include a positive sample set labeled as living body and a negative sample set labeled as attack. Each positive sample is a video of a real face, and each negative sample is a video of a non-living face (such as a photo); in both cases the distance between the face and the image acquisition device varies. Based on this sample set, a machine learning method (such as deep learning, or a regression algorithm based on local features) is used to train the recurrent neural network for living body detection. During training, the recurrent neural network can fully learn the size changes of different parts of the face as its distance changes, and can also learn the three-dimensional information of the face, thereby ensuring the accuracy of the network's results.
In addition, after S102 or S104, labeling information may be added to the face video, and the labeled face video may be added to the sample set; an updated recurrent neural network is then obtained through training with the updated sample set. As shown in FIG. 5, after data reflow, the face video may be labeled (living body or attack), and the recurrent neural network model used for living body detection in S102 is optimized by retraining. For example, the labeling may be done manually, e.g., an administrator labels the face video as living body or attack. In this way, more samples corresponding to the current access control system can be included in the sample set, making the sample set more specific to that system and further improving the accuracy of living body detection.
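The data-reflow step above can be sketched as a simple sample-set update; the dictionary layout is an illustrative assumption, and retraining on the updated set is assumed to happen offline.

```python
def add_to_sample_set(sample_set, face_video, label):
    """Attach a manual label ('living body' or 'attack') to a reflowed face
    video and append it to the sample set for later retraining."""
    assert label in ("living body", "attack")
    sample_set.append({"video": face_video, "label": label})
    return sample_set

samples = []
add_to_sample_set(samples, ["frame0", "frame1"], "living body")
add_to_sample_set(samples, ["frame2", "frame3"], "attack")
```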
FIG. 6 is a schematic block diagram of a living body detection device according to an embodiment of the present invention. The device 60 shown in FIG. 6 may include: an acquisition module 610 and a living body detection module 620.
An obtaining module 610, configured to obtain a face video, where a distance between a face in the face video and an image acquisition device that acquires the face video is variable;
and a living body detection module 620, configured to perform living body detection on the face video acquired by the acquisition module 610, so as to determine whether the face is a living body.
As one implementation, the liveness detection module 620 may be specifically configured to: and determining whether the face is a living body according to the size change of different parts of the face in the face video, wherein the size change of the different parts of the face in the face video is obtained according to the distance change of the face.
As one implementation, the liveness detection module 620 may be specifically configured to: and constructing three-dimensional information of the face according to the size change of different parts of the face in the face video, and determining whether the face is a living body according to the three-dimensional information.
As one implementation, the living body detection module 620 may utilize a recurrent neural network to perform the living body detection.
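The recurrent scheme spelled out in claim 1 — frame features fed sequentially into a chain of LSTM steps, each step also receiving the previous step's output — can be sketched minimally as below. This is not the patent's trained model: the weights are toy scalars fixed at 1.0, and a real system would use learned, high-dimensional gates.

```python
# Minimal scalar LSTM chain: each frame's feature enters one LSTM step
# together with the previous step's output, and the final state yields a
# living-body score.

import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x: float, h: float, c: float) -> tuple:
    """One scalar LSTM step with all weights fixed at 1.0 and biases at 0.0."""
    i = sigmoid(x + h)    # input gate
    f = sigmoid(x + h)    # forget gate
    o = sigmoid(x + h)    # output gate
    g = math.tanh(x + h)  # candidate cell state
    c = f * c + i * g
    h = o * math.tanh(c)
    return h, c

def liveness_score(frame_features: list) -> float:
    """Run the frames through the LSTM chain; squash the last output to (0, 1)."""
    h = c = 0.0
    for x in frame_features:  # each frame feeds the next LSTM step in sequence
        h, c = lstm_step(x, h, c)
    return sigmoid(h)         # a score above 0.5 would be read as "living body"

score = liveness_score([0.2, 0.5, 0.9, 0.4])  # per-frame features (e.g. size change)
print(0.0 < score < 1.0)  # True
```

The recurrence is what lets the network accumulate evidence over time, which suits this task: the liveness cue is a temporal pattern of size changes, not a property of any single frame.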
As one implementation, the apparatus 60 may further include a training module configured to: add annotation information to the face video, and add the annotated face video to the sample set; and obtain an updated recurrent neural network by training on the updated sample set.
As one implementation, the acquisition module 610 may include an acquisition submodule, a face detection submodule, and a video generation submodule. The acquisition submodule may be configured to acquire input data. The face detection submodule may be configured to perform face detection on the input data acquired by the acquisition submodule. The video generation submodule may be configured to continuously track the face in the input data according to the position information of the face determined by the face detection submodule, and to crop out the face region in the input data to generate the face video.
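The detect-track-crop pipeline of the three submodules can be sketched as follows. The `detect_faces` stub, the frame representation, and the nearest-center tracking rule are all assumptions for illustration; a deployed system would run a real detector and crop actual pixels.

```python
# Hedged sketch of the acquisition pipeline: detect the face in each input
# frame, track it by keeping the detection whose center is nearest to the
# previous position, and record the cropped region to build the face video.
# Box format is (x, y, w, h).

def detect_faces(frame):
    """Stub detector: a real system would run e.g. a CNN face detector here."""
    return frame["boxes"]

def nearest_box(boxes, prev):
    """Track by choosing the detection whose center is closest to the last one."""
    def center(b):
        return (b[0] + b[2] / 2, b[1] + b[3] / 2)
    px, py = center(prev)
    return min(boxes, key=lambda b: (center(b)[0] - px) ** 2 + (center(b)[1] - py) ** 2)

def generate_face_video(input_frames):
    face_video, prev = [], None
    for frame in input_frames:
        boxes = detect_faces(frame)
        if not boxes:
            continue                      # no face in this frame; skip it
        box = boxes[0] if prev is None else nearest_box(boxes, prev)
        face_video.append(("crop", box))  # a real system would crop pixels here
        prev = box
    return face_video

frames = [{"boxes": [(10, 10, 50, 50)]},
          {"boxes": [(80, 80, 40, 40), (12, 11, 50, 50)]}]  # two candidates
video = generate_face_video(frames)
print(len(video), video[1][1])  # 2 (12, 11, 50, 50) -> the tracker followed the face
```

Tracking by continuity is what keeps the face video tied to one person even when additional faces enter the camera's field of view.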
As an implementation manner, the apparatus 60 may be applied to an access control system, and as shown in fig. 7, the apparatus 60 may further include a face recognition module 630 and a control module 640.
The face recognition module 630 may be configured to perform face recognition on the face video to determine identity information corresponding to the face. The control module 640 may be configured to control the access control system to open in response to the face recognition module 630 determining that the identity information is an authenticated identity and the living body detection module 620 determining that the face is a living body.
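The control module's decision rule reduces to a conjunction of the two module outputs, as in the sketch below. The identity whitelist and the names are invented for illustration.

```python
# Sketch of the gate-opening logic of fig. 7: open only when face recognition
# yields an authenticated identity AND the living body detection module
# confirms the face is a living body.

from typing import Optional

AUTHORIZED = {"alice", "bob"}  # invented whitelist of authenticated identities

def should_open(identity: Optional[str], is_live: bool) -> bool:
    """Both conditions checked by the control module 640 must hold."""
    return identity in AUTHORIZED and is_live

print(should_open("alice", True))    # True  -> gate opens
print(should_open("alice", False))   # False -> e.g. a photo of an authorized user
print(should_open("mallory", True))  # False -> a living but unauthorized person
```

Requiring both checks is what blocks the replay attack the patent targets: a printed photo of an authorized user passes recognition but fails liveness.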
The apparatus 60 shown in fig. 6 can implement the method shown in fig. 2, and the apparatus 60 shown in fig. 7 can implement the methods shown in fig. 4 and fig. 5, which are not described herein again to avoid repetition.
In addition, an embodiment of the present invention provides another living body detection device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor. When the processor executes the computer program, the steps of the methods shown in figs. 2 to 5 are implemented.
In addition, the embodiment of the present invention also provides an electronic device, which may include the apparatus 60 shown in fig. 6 or fig. 7. The electronic device may implement the methods illustrated in fig. 2-5 described above.
In addition, an embodiment of the present invention provides a computer storage medium on which a computer program is stored. When executed by a processor, the computer program may implement the steps of the methods of figs. 2 to 5 described above. For example, the computer storage medium is a computer-readable storage medium.
Therefore, in the embodiments of the invention, living body detection is performed by taking into account the change in distance between the face in the video and the image acquisition device, which improves the living body detection effect and defends against non-living attacks such as photos. When the living body detection method is applied to an access control system, the scene characteristics of the access control system are fully considered: the gate can be opened as soon as possible for an authenticated living body, while non-living attacks in the form of photos and the like are rejected, ensuring the security of the access control system.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the foregoing illustrative embodiments are merely exemplary and are not intended to limit the scope of the invention thereto. Various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present invention. All such changes and modifications are intended to be included within the scope of the present invention as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another device, or some features may be omitted, or not executed.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the invention and aiding in the understanding of one or more of the various inventive aspects. However, the method of the present invention should not be construed to reflect the intent: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where such features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or digital signal processor (DSP) may be used in practice to implement some or all of the functionality of some of the modules in a living body detection apparatus according to embodiments of the present invention. The present invention may also be embodied as programs (e.g., computer programs and computer program products) for performing part or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second, third, etcetera does not indicate any ordering; these words may be interpreted as names.
The above description is only of specific embodiments of the present invention, and the protection scope of the present invention is not limited thereto. Any person skilled in the art can easily conceive of changes or substitutions within the technical scope disclosed by the present invention, and such changes or substitutions shall be covered within the protection scope of the present invention. The protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A living body detection method, the method comprising:
acquiring a face video, wherein a distance between a face in the face video and an image acquisition device acquiring the face video varies;
and performing living body detection on the face video by using a recurrent neural network, so as to determine whether the face is a living body, wherein the recurrent neural network comprises a series of long short-term memory (LSTM) units, each frame of the face video is sequentially used as the input of a respective LSTM unit, and the input of each LSTM unit further comprises the output of the previous-stage LSTM unit.
2. The method of claim 1, wherein the obtaining the face video comprises:
acquiring input data;
carrying out face detection on the input data;
continuously tracking the face in the input data according to the position information of the face in the input data determined by the face detection, and cropping out the face in the input data to generate the face video.
3. The method of claim 1, applied to an access control system, further comprising:
performing face recognition on the face video to determine identity information corresponding to the face;
and responding to the fact that the identity information is an authentication identity and the face is a living body, and controlling the access control system to be opened.
4. The method of any of claims 1 to 3, further comprising:
adding labeling information to the face video, and adding the face video added with the labeling information into a sample set;
and obtaining an updated recurrent neural network through training by using the updated sample set.
5. A living body detection apparatus, the apparatus comprising:
the system comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring a face video, and the distance between a face in the face video and an image acquisition device for acquiring the face video is variable;
and the living body detection module is configured to perform living body detection on the face video by using a recurrent neural network, so as to determine whether the face is a living body, wherein the recurrent neural network comprises a series of long short-term memory (LSTM) units, each frame of the face video is sequentially used as the input of a respective LSTM unit, and the input of each LSTM unit further comprises the output of the previous-stage LSTM unit.
6. The apparatus of claim 5, wherein the obtaining module comprises:
the acquisition submodule is used for acquiring input data;
the face detection submodule is used for carrying out face detection on the input data;
and the video generation submodule is configured to continuously track the face in the input data according to the position information of the face in the input data determined by the face detection, and to crop out the face in the input data to generate the face video.
7. The apparatus of claim 5, applied to an access control system, further comprising:
the face recognition module is used for carrying out face recognition on the face video so as to determine identity information corresponding to the face;
and the control module is used for responding to the fact that the identity information is an authentication identity and the face is a living body, and controlling the access control system to be opened.
8. The apparatus of any of claims 5 to 7, further comprising a training module to:
adding labeling information to the face video, and adding the face video added with the labeling information into a sample set;
and obtaining an updated recurrent neural network through training by using the updated sample set.
9. A living body detection apparatus comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the steps of the method of any one of claims 1 to 4 are implemented when the computer program is executed by the processor.
10. A computer storage medium on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 4.
CN201711025141.9A 2017-10-27 2017-10-27 Living body detection method, living body detection device and computer storage medium Active CN108875497B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711025141.9A CN108875497B (en) 2017-10-27 2017-10-27 Living body detection method, living body detection device and computer storage medium


Publications (2)

Publication Number Publication Date
CN108875497A CN108875497A (en) 2018-11-23
CN108875497B true CN108875497B (en) 2021-04-27

Family

ID=64325445

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711025141.9A Active CN108875497B (en) 2017-10-27 2017-10-27 Living body detection method, living body detection device and computer storage medium

Country Status (1)

Country Link
CN (1) CN108875497B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110532895B (en) 2019-08-06 2020-10-23 创新先进技术有限公司 Method, device and equipment for detecting fraudulent behavior in face recognition process
CN110751757A (en) * 2019-09-11 2020-02-04 河海大学 Unlocking method based on face image processing and intelligent lock
CN111091112B (en) * 2019-12-30 2021-10-15 支付宝实验室(新加坡)有限公司 Living body detection method and device
CN113657293B (en) * 2021-08-19 2023-11-24 北京神州新桥科技有限公司 Living body detection method, living body detection device, electronic equipment, medium and program product

Citations (2)

Publication number Priority date Publication date Assignee Title
CN106529512A (en) * 2016-12-15 2017-03-22 北京旷视科技有限公司 Living body face verification method and device
CN107122744A (en) * 2017-04-28 2017-09-01 武汉神目信息技术有限公司 A kind of In vivo detection system and method based on recognition of face

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
CN101770613A (en) * 2010-01-19 2010-07-07 北京智慧眼科技发展有限公司 Social insurance identity authentication method based on face recognition and living body detection
CN101908140A (en) * 2010-07-29 2010-12-08 中山大学 Biopsy method for use in human face identification
CN103440479B (en) * 2013-08-29 2016-12-28 湖北微模式科技发展有限公司 A kind of method and system for detecting living body human face
CN105868677B (en) * 2015-01-19 2022-08-30 创新先进技术有限公司 Living body face detection method and device
CN111985294A (en) * 2015-09-01 2020-11-24 北京上古视觉科技有限公司 Iris recognition system with living body detection function
CN106897658B (en) * 2015-12-18 2021-12-14 腾讯科技(深圳)有限公司 Method and device for identifying human face living body
CN107169429A (en) * 2017-04-28 2017-09-15 北京小米移动软件有限公司 Vivo identification method and device


Non-Patent Citations (3)

Title
Learning precise timing with LSTM recurrent networks; F. A. Gers; JMLR; 2002-12-31; full text *
Long Short-Term Memory; Sepp Hochreiter; Neural Computation; 1997-11-15; vol. 9, no. 8; full text *
Research on image recognition applications based on deep learning; Zhou Kailong; China Master's Theses Full-text Database, Information Science and Technology; 2017-03-15; no. 3; full text *

Also Published As

Publication number Publication date
CN108875497A (en) 2018-11-23

Similar Documents

Publication Publication Date Title
CN109948408B (en) Activity test method and apparatus
CN109711243B (en) Static three-dimensional face in-vivo detection method based on deep learning
US20200410215A1 (en) Liveness test method and apparatus
CN106778525B (en) Identity authentication method and device
CN106599772B (en) Living body verification method and device and identity authentication method and device
CN105740780B (en) Method and device for detecting living human face
CN108875497B (en) Living body detection method, living body detection device and computer storage medium
CN106934359B (en) Multi-view gait recognition method and system based on high-order tensor subspace learning
CN111274916B (en) Face recognition method and face recognition device
CN106919921B (en) Gait recognition method and system combining subspace learning and tensor neural network
CN110674712A (en) Interactive behavior recognition method and device, computer equipment and storage medium
US10489636B2 (en) Lip movement capturing method and device, and storage medium
CN107590473B (en) Human face living body detection method, medium and related device
CN111027481B (en) Behavior analysis method and device based on human body key point detection
CN112215180A (en) Living body detection method and device
EP3734503A1 (en) Method and apparatus with liveness detection
EP3674973A1 (en) Method and apparatus with liveness detection and object recognition
CN108875509A (en) Biopsy method, device and system and storage medium
CN108875500B (en) Pedestrian re-identification method, device and system and storage medium
KR20230169104A (en) Personalized biometric anti-spoofing protection using machine learning and enrollment data
CN112633222B (en) Gait recognition method, device, equipment and medium based on countermeasure network
CN112308035A (en) Image detection method, image detection device, computer equipment and storage medium
Nahar et al. Twins and Similar Faces Recognition Using Geometric and Photometric Features with Transfer Learning
CN108875467B (en) Living body detection method, living body detection device and computer storage medium
CN113989914B (en) Security monitoring method and system based on face recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant