WO2016197298A1 - Living body detection method, living body detection system and computer program product - Google Patents


Info

Publication number
WO2016197298A1
WO2016197298A1 · PCT/CN2015/080964 · CN2015080964W
Authority
WO
WIPO (PCT)
Prior art keywords
detected
signal
living body
video data
living
Prior art date
Application number
PCT/CN2015/080964
Other languages
English (en)
Chinese (zh)
Inventor
曹志敏
贾开
Original Assignee
北京旷视科技有限公司
北京小孔科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京旷视科技有限公司 and 北京小孔科技有限公司
Priority to PCT/CN2015/080964 priority Critical patent/WO2016197298A1/fr
Priority to CN201580000331.8A priority patent/CN105612533B/zh
Publication of WO2016197298A1 publication Critical patent/WO2016197298A1/fr

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 - Spoof detection, e.g. liveness detection
    • G06V40/45 - Detection of the body part being alive
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/166 - Detection; Localisation; Normalisation using acquisition arrangements

Definitions

  • the present disclosure relates to the field of living body detection, and more particularly, to a living body detecting method, a living body detecting system, and a computer program product capable of realizing human body living body detection.
  • face recognition systems are increasingly used in security, finance and other fields that require identity authentication, such as remote bank account opening, access control systems, and remote transaction verification.
  • in these fields, it must first be verified that the person being authenticated is a legitimate living person. That is to say, the face recognition system needs to be able to prevent an attacker from using a photo, a 3D face model or a mask to mount an attack.
  • the method of solving the above problem is usually called living body detection, and the purpose is to judge whether the acquired biometrics are from a living, on-the-spot, real person.
  • existing living body detection technology either relies on special hardware devices (such as infrared cameras or depth cameras) or can only prevent simple static photo attacks.
  • the present disclosure has been made in view of the above problems.
  • the present disclosure provides a living body detecting method, a living body detecting system, and a computer program product based on a common monocular camera, which can effectively prevent photo, 3D face model and mask attacks by detecting skin elasticity characteristics in a video image sequence of the subject of living body detection.
  • a living body detecting method includes: acquiring video data collected via a video data collecting device; determining an object to be detected based on the video data; acquiring a signal to be detected corresponding to the object to be detected; and determining whether the signal to be detected is a living physiological signal, wherein the signal to be detected is a skin elasticity signal corresponding to the object to be detected.
  • determining the object to be detected based on the video data includes: determining a face image therein as the object to be detected based on the video data, and determining at least one key area in the face image.
  • in the living body detecting method, determining the at least one key region in the face image includes: determining a key point in the face image based on the video data, and dividing the face image into the at least one key area based on the key point.
  • in the living body detecting method, acquiring the signal to be detected corresponding to the object to be detected includes: acquiring a pre-action area image and a post-action area image before and after a predetermined time point, corresponding to the at least one key area, the predetermined time point being a point in time at which the object to be detected performs a predetermined action.
  • the living body detecting method further comprises: normalizing the pre-action area image and the post-action area image into grayscale images having a predetermined size, and superimposing the normalized pre-action area image and the normalized post-action area image as the signal to be detected.
  • in the living body detecting method, acquiring the signal to be detected corresponding to the object to be detected further comprises: normalizing the post-action area image and a related area image of a predetermined range around the post-action area image into a grayscale image having the predetermined size as the signal to be detected.
  • determining whether the signal to be detected is a living physiological signal comprises: comparing the signal to be detected with a preset living condition, and when the signal to be detected matches the preset living condition, determining that the signal to be detected is a living physiological signal, wherein the preset living condition is a skin elasticity signal corresponding to a living body acquired based on pre-acquired preset video data.
  • the living body detecting method further includes: starting the detection timing while acquiring the video data collected via the video data collecting device; and, in a case where it has not been determined whether the signal to be detected is a living physiological signal by the time the detection timing reaches a preset time threshold, determining that the signal to be detected is not a living physiological signal.
  • a living body detection system comprises: a processor; a memory; and computer program instructions stored in the memory which, when executed by the processor, perform the following steps: acquiring video data collected by the video data collecting device; determining an object to be detected based on the video data; acquiring a signal to be detected corresponding to the object to be detected; and determining whether the signal to be detected is a living physiological signal, wherein the signal to be detected is a skin elasticity signal corresponding to the object to be detected.
  • in the living body detecting system, when the computer program instructions are executed by the processor, the step of determining the object to be detected based on the video data includes: determining a face image therein as the object to be detected based on the video data, and determining at least one key area in the face image.
  • in the living body detecting system, the step of determining the at least one key region in the face image when the computer program instructions are executed by the processor includes: determining a key point in the face image based on the video data, and dividing the face image into the at least one key area based on the key point.
  • in the living body detecting system, the step of acquiring the signal to be detected corresponding to the object to be detected when the computer program instructions are executed by the processor includes: acquiring a pre-action area image and a post-action area image before and after a predetermined time point, corresponding to the at least one key area, the predetermined time point being a point in time at which the object to be detected performs a predetermined action.
  • in the living body detecting system, the step of acquiring the signal to be detected corresponding to the object to be detected when the computer program instructions are executed by the processor further comprises: normalizing the pre-action area image and the post-action area image into grayscale images having a predetermined size, and superimposing the normalized pre-action area image and the normalized post-action area image as the signal to be detected.
  • in the living body detecting system, the step of acquiring the signal to be detected corresponding to the object to be detected when the computer program instructions are executed by the processor further comprises: normalizing the post-action area image and a related area image of a predetermined range around the post-action area image into a grayscale image having the predetermined size as the signal to be detected.
  • in the living body detecting system, the step of determining whether the signal to be detected is a living physiological signal when the computer program instructions are executed by the processor comprises: comparing the signal to be detected with a preset living condition, and when the signal to be detected matches the preset living condition, determining that the signal to be detected is a living physiological signal, wherein the preset living condition is a skin elasticity signal corresponding to a living body acquired based on pre-acquired preset video data.
  • the living body detection system further includes a detection timer, wherein, when the computer program instructions are executed by the processor: the detection timer is started to execute the detection timing while the video data acquired via the video data collection device is being acquired; and, in a case where it has not been determined whether the signal to be detected is a living physiological signal by the time the detection timing reaches the preset time threshold, it is determined that the signal to be detected is not a living physiological signal.
  • a computer program product comprises a computer readable storage medium on which computer program instructions are stored, the computer program instructions, when executed by a computer, performing the steps of: acquiring video data collected via a video data collecting device; determining an object to be detected based on the video data; acquiring a signal to be detected corresponding to the object to be detected; and determining whether the signal to be detected is a living physiological signal, wherein the signal to be detected is a skin elasticity signal corresponding to the object to be detected.
  • FIG. 1 is a flow chart illustrating a living body detecting method according to an embodiment of the present invention.
  • FIG. 2 is a functional block diagram illustrating a living body detection system in accordance with an embodiment of the present invention.
  • FIG. 3 is a flow chart illustrating a first example of acquiring a signal to be detected in a living body detecting method according to an embodiment of the present invention.
  • FIG. 4 is a second exemplary flowchart illustrating a method of acquiring a signal to be detected in a living body detecting method according to an embodiment of the present invention.
  • FIG. 5 is a flowchart further illustrating living body detection based on a signal to be detected in a living body detecting method according to an embodiment of the present invention.
  • FIG. 6 is a schematic block diagram illustrating a living body detecting system according to an embodiment of the present invention.
  • FIG. 1 is a flow chart illustrating a living body detecting method according to an embodiment of the present invention. As shown in FIG. 1, a living body detecting method according to an embodiment of the present invention includes the following steps.
  • In step S101, video data collected via the video capture device is acquired.
  • the video capture device is a camera capable of acquiring video data of a subject (such as a front or rear camera of a smart phone, a camera of an access control system, etc.).
  • Acquiring video data collected via the video capture device includes, but is not limited to, receiving, via a wired or wireless connection, video data transmitted from a video capture device that is physically separate and independently configured to acquire video data.
  • Alternatively, the video capture device may be physically co-located with the other modules or components of the living body detection system, or even within the same housing, in which case the other modules or components receive the video data from the video capture device via an internal bus.
  • the video data acquired via the video capture device may be a video of a continuous predetermined period of time (eg, 3 seconds).
  • a face that is a subject of living body detection needs to be able to clearly appear in the video.
  • In the video of the predetermined period of time, images of a specific region before and after the subject completes a specific action according to the instruction need to be recorded.
  • The specific action may be, for example, pressing the cheek skin with a finger, or puffing up the cheeks. Thereafter, the processing proceeds to step S102.
  • In step S102, an object to be detected is determined based on the video data.
  • Specifically, a pre-trained face detector (such as an Adaboost cascade) may be used to obtain the position of the face in the video images of the video data. Key points in the face can then be located using a machine learning algorithm (such as deep learning, or a regression technique based on local features).
  • At least one key area in the face area is determined according to the key point.
  • For example, the face area can be divided into a series of triangular patches, and the images of the triangular patches located at the chin, the cheekbones, the two cheeks, and other such areas are taken as key area images. Thereafter, the processing proceeds to step S103.
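The triangular-patch idea above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes key points are already available as (x, y) coordinates, rasterizes one triangle with a barycentric inside-test, and crops the key-area image. The function names and the nearest-pixel handling are my own choices.

```python
import numpy as np

def triangle_mask(shape, tri):
    """Rasterize one triangular patch as a boolean mask.

    shape: (H, W) of the frame; tri: 3x2 array of (x, y) key-point
    coordinates.  Uses a barycentric inside-test on the pixel grid.
    Assumes a non-degenerate triangle.
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    (x0, y0), (x1, y1), (x2, y2) = tri
    # Barycentric coordinates of every pixel with respect to the triangle.
    d = (y1 - y2) * (x0 - x2) + (x2 - x1) * (y0 - y2)
    a = ((y1 - y2) * (xs - x2) + (x2 - x1) * (ys - y2)) / d
    b = ((y2 - y0) * (xs - x2) + (x0 - x2) * (ys - y2)) / d
    c = 1.0 - a - b
    return (a >= 0) & (b >= 0) & (c >= 0)

def extract_key_area(frame, tri):
    """Cut a key-area image: zero everything outside the triangle,
    then crop to the triangle's bounding box."""
    mask = triangle_mask(frame.shape[:2], tri)
    ys, xs = np.nonzero(mask)
    patch = np.where(mask, frame, 0)
    return patch[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```

For a cheek patch one would pass the three key points bounding that cheek; repeating this over all patches yields the set of key area images described above.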
  • In step S103, a signal to be detected corresponding to the object to be detected is acquired.
  • Specifically, in one embodiment of the present invention, after images of the key regions before and after the subject completes the specific action according to the instruction are recorded, the captured key region images before and after the specific action are taken as the signal to be detected. In another embodiment, the captured key region image after the specific action, together with a related area image covering a predetermined range around it, is used as the signal to be detected. If the object to be detected is a living body, the signal to be detected will include a characteristic signal reflecting the skin elasticity of the living body.
  • the processing proceeds to step S104.
  • step S104 it is determined whether the signal to be detected is a living physiological signal.
  • the signal to be detected acquired in step S103 is sent to the trained classifier. If the classifier determines that the signal to be detected is a living physiological signal, it outputs 1; otherwise, it outputs 0.
  • The process of training the classifier can be done offline. For example, images of the frames before and after a living person performs the prescribed action are collected in advance, together with attack images in which a photograph, video playback, a paper mask, or a 3D model is used to perform the prescribed action; the former are used as positive samples and the latter as negative samples.
  • the classifier is then trained using statistical learning methods such as deep learning and support vector machines.
  • In this way, the skin elasticity characteristics in the video image sequence of the subject are detected to perform living body detection, thereby effectively preventing photo, 3D face model, and mask attacks.
  • the living body detecting system 20 includes a video data acquiring module 21, an object to be detected determining module 22, a signal to be detected acquiring module 23, and a living body detecting module 24.
  • The video data acquiring module 21, the object to be detected determining module 22, the to-be-detected signal acquiring module 23, and the living body detecting module 24 may be implemented, for example, by hardware (such as a camera, a server, a dedicated computer, a CPU, a GPU, various application-specific integrated circuits, etc.), software, firmware, or any feasible combination thereof.
  • the video data acquiring module 21 is configured to acquire video data.
  • the video data acquisition module 21 may be a video capture device including an RGB camera capable of acquiring video data of a subject.
  • Alternatively, the video data acquisition module 21 may include a video capture device with a depth camera capable of acquiring depth information of the subject.
  • The video data acquiring module 21 can be physically separated from the object to be detected determining module 22, the to-be-detected signal acquiring module 23, and the living body detecting module 24, or physically located at the same position, or even inside the same casing.
  • In the former case, the video data acquiring module 21 further sends the video data acquired by the video capture device to the subsequent modules in a wired or wireless manner.
  • In a case where the video data acquiring module 21 and the object to be detected determining module 22, the to-be-detected signal acquiring module 23, and the living body detecting module 24 are physically located at the same position, or even inside the same casing, the video data acquiring module 21 sends the video data acquired by the video capture device to the subsequent modules via an internal bus.
  • the video data may be RGB color video data or RGBD video data including depth information.
  • The video data acquired via the video data acquiring module 21 serving as the video capture device may be a video of a continuous predetermined period of time (for example, 3 seconds).
  • a face that is a subject of living body detection needs to be able to clearly appear in the video.
  • In the video of the predetermined period of time, images of a specific region before and after the subject completes a specific action according to the instruction need to be recorded.
  • The specific action may be, for example, pressing the cheek skin with a finger, or puffing up the cheeks.
  • the object to be detected determining module 22 is configured to determine an object to be detected based on the video data collected by the video data acquiring module 21 .
  • Specifically, the object to be detected determination module 22 can use a pre-trained face detector (such as an Adaboost cascade) to acquire the position of the face in the video images of the video data, and can then locate key points in the face.
  • at least one key area in the face area is determined according to the key point.
  • For example, the face area may be divided into a series of triangular patches, and images of the triangular patches located in areas such as the chin, the cheekbones, and the two cheeks may be used as key area images.
  • The to-be-detected signal acquisition module 23 is configured to acquire a to-be-detected signal corresponding to the object to be detected determined by the object to be detected determining module 22. Specifically, in one embodiment of the present invention, after images of the key regions before and after the subject completes the specific action according to the instruction are recorded, the captured key region images before and after the specific action are taken as the signal to be detected. In another embodiment, the captured key region image after the specific action, together with a related area image covering a predetermined range around it, is used as the signal to be detected. If the object to be detected is a living body, the signal to be detected will include a characteristic signal reflecting the skin elasticity of the living body.
  • the biometric detection module 24 is configured to perform biometric detection on the to-be-detected signal extracted by the to-be-detected signal acquisition module 23 to determine whether the to-be-detected signal is a living physiological signal.
  • the living body detection module 24 is a trained classifier. If the classifier determines that the signal to be detected is a living physiological signal, it outputs 1; otherwise, it outputs 0. The process of training the classifier can be done offline.
  • For example, images of the frames before and after a living person performs the prescribed action are collected in advance, together with attack images in which a photograph, video playback, a paper mask, or a 3D model is used to perform the prescribed action; the former are used as positive samples and the latter as negative samples.
  • the classifier is then trained using statistical learning methods such as deep learning and support vector machines.
  • FIG. 3 is a flow chart illustrating a first example of acquiring a signal to be detected in a living body detecting method according to an embodiment of the present invention.
  • a first example of acquiring a signal to be detected in a living body detecting method according to an embodiment of the present invention includes the following steps.
  • step S301 based on the video data, the face image therein is determined as the object to be detected.
  • a pre-trained face detector such as Adaboost Cascade is used to acquire the position of a face in a video image in video data. Thereafter, the processing proceeds to step S302.
  • step S302 key points in the face image are determined.
  • The key points include, but are not limited to, the corners of the eyes, the corners of the mouth, the tip of the nose, the highest point of the cheekbones, and the like. Thereafter, the processing proceeds to step S303.
  • step S303 the face image is divided into at least one key area based on the key points.
  • Specifically, the face area is divided into a series of triangular patches, and images of the triangular patches located at the chin, the cheekbones, the two cheeks, and the like are used as key area images. Thereafter, the processing proceeds to step S304.
  • In step S304, a pre-action area image and a post-action area image before and after a predetermined time point, corresponding to the at least one key area, are acquired.
  • the predetermined time point is a point in time at which the object to be detected performs a predetermined action.
  • The predetermined action may be, for example, pressing the cheek skin with a finger, or puffing up the cheeks.
  • step S305 the pre-action area image and the post-action area image are normalized into a grayscale image having a predetermined size. Specifically, the pre-action area image and the post-action area image are normalized to a 40x40 grayscale image. Thereafter, the processing proceeds to step S306.
  • step S306 the normalized pre-action area image and the normalized post-action area image are superimposed as a signal to be detected. Specifically, the grayscale images normalized to the size of 40x40 are then stacked together to obtain a 40x40x2 dual-channel image signal (tensor signal).
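Steps S305-S306 can be sketched in a few lines of NumPy. This is an illustrative sketch only: it uses nearest-neighbour resampling to stay self-contained (in practice a library resize such as `cv2.resize` would be typical), and the function names are mine, not the patent's.

```python
import numpy as np

def to_gray_40(img):
    """Normalize an area image to a 40x40 grayscale patch (step S305).

    img: HxW (grayscale) or HxWx3 (RGB) array.
    """
    if img.ndim == 3:                    # RGB -> grayscale by channel mean
        img = img.mean(axis=2)
    h, w = img.shape
    rows = np.arange(40) * h // 40       # nearest-neighbour row indices
    cols = np.arange(40) * w // 40       # nearest-neighbour column indices
    return img[np.ix_(rows, cols)].astype(np.float32)

def stack_signal(pre, post):
    """Superimpose the normalized pre- and post-action images into the
    40x40x2 dual-channel tensor signal (step S306)."""
    return np.stack([to_gray_40(pre), to_gray_40(post)], axis=-1)
```

The resulting `(40, 40, 2)` tensor is what would be fed to the classifier described next.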
  • The obtained signal to be detected is provided to a trained convolutional neural network which, through a designed series of convolutional layers, pooling layers and fully connected layers, finally produces a binary classification result: the probability of whether the subject is a living body.
  • FIG. 4 is a second exemplary flowchart illustrating a method of acquiring a signal to be detected in a living body detecting method according to an embodiment of the present invention.
  • a second example of acquiring a signal to be detected in a living body detecting method according to an embodiment of the present invention includes the following steps.
  • Steps S401 to S404 in Fig. 4 are the same as steps S301 to S304 shown in Fig. 3, and a repetitive description thereof will be omitted herein.
  • After the pre-action area image and the post-action area image before and after the predetermined time point corresponding to the at least one key area are acquired in step S404, the processing proceeds to step S405.
  • step S405 the post-action area image and the relevant area image of the predetermined range around the post-action area image are normalized into a grayscale image having a predetermined size as a signal to be detected.
  • That is, in this case, the signal to be detected does not include the pre-action area image, but includes the post-action area image and the related area image extending a predetermined range outward from the post-action area image.
  • The signal to be detected thus obtained is also provided to the trained convolutional neural network, which finally produces a binary classification result through a designed series of convolutional layers, pooling layers and fully connected layers: the probability (a value between 0 and 1) of whether the subject is a living body. The surrounding area matters because, after the user performs the cheek-puffing action, the corresponding skin area expands outward. For real human skin, from the cheeks to the lower jaw, the skin bulges gradually and the whole process is smooth. General photos, video playback, and the like naturally cannot achieve this puffing effect. For a simple mask made of printing paper, when it covers the face during the puffing action, the stiffness of the paper causes various local edges, creases, and the like to appear, which is also very different from real living skin.
  • FIG. 5 is a flowchart further illustrating living body detection based on a signal to be detected in a living body detecting method according to an embodiment of the present invention. As shown in FIG. 5, the living body detection based on the signal to be detected according to an embodiment of the present invention includes the following steps.
  • step S501 video data acquired via the video capture device is acquired.
  • the video data acquired via the video capture device may be a video of a continuous predetermined period of time (eg, 3 seconds).
  • a face that is a subject of living body detection needs to be able to clearly appear in the video.
  • In the video of the predetermined period of time, images of a specific region before and after the subject completes a specific action according to the instruction need to be recorded.
  • The specific action may be, for example, pressing the cheek skin with a finger, or puffing up the cheeks. Thereafter, the process proceeds to step S502.
  • step S502 the detection timing is started.
  • Steps S501 and S502 are performed simultaneously; that is, the timer is started to execute the detection timing at the same time as the video data begins to be collected via the video capture device for the live detection. Thereafter, the processing proceeds to step S503.
  • an object to be detected is determined based on the video data.
  • Specifically, a pre-trained face detector can be used to acquire the position of the face in the video images as the object to be detected. For example, a large number of face images are captured in advance, and a series of key points such as the corners of the eyes, the corners of the mouth, the tip of the nose, and the highest point of the cheekbones are manually marked in each image; a face detector is then trained using machine learning algorithms (such as deep learning, or regression algorithms based on local features). Using the trained face detector, the face position and key point coordinates can be output for an input image. Thereafter, the processing proceeds to step S504.
  • a signal to be detected corresponding to the object to be detected is acquired. Specifically, after obtaining the face position and the key point coordinates thereon, at least one key area in the face area is determined according to the key point.
  • For example, the face area may be divided into a series of triangular patches, and images of the triangular patches located in areas such as the chin, the cheekbones, and the two cheeks may be used as key area images.
  • an image of a key area before and after a specific action is completed according to the indication is recorded. After that, the image of the key area before and after the specific action captured is taken as the signal to be detected.
  • The specific action may be, for example, pressing the cheek skin with a finger, or puffing up the cheeks.
  • the pre-action area image and the post-action area image are normalized to a 40x40 grayscale image.
  • Then, the grayscale images normalized to 40x40, before and after the action, are superimposed to obtain a 40x40x2 dual-channel image signal (tensor signal), which is used as the signal to be detected.
  • the post-action area image and the predetermined range of related area images around the post-action area image are normalized to a grayscale image having a predetermined size as a signal to be detected.
  • the processing proceeds to step S505.
  • In step S505, it is determined whether the signal to be detected matches a preset living condition. Specifically, the determination is performed by a trained classifier, such as a convolutional neural network.
  • For training, images of the frames before and after a living person performs the prescribed action may be collected in advance, together with attack images in which a photograph, video playback, a paper mask, or a 3D model is used to perform the prescribed action; the former are used as positive samples and the latter as negative samples, and the classifier is then trained using statistical learning methods such as deep learning or support vector machines.
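The positive/negative training scheme just described can be illustrated with a deliberately tiny stand-in model. The patent describes deep learning or support vector machines; the sketch below substitutes plain logistic regression on flattened 40x40x2 signals purely to show the pipeline (collect positives from live subjects, negatives from attacks, train, then score new signals). All names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

def train_liveness_classifier(pos, neg, epochs=300, lr=0.5):
    """Train a minimal logistic-regression stand-in for the liveness
    classifier.  pos/neg: lists of 40x40x2 signal tensors (positives
    from living subjects, negatives from photo/replay/mask attacks).
    Returns a function mapping a signal to a liveness probability."""
    X = np.array([s.ravel() for s in pos + neg], dtype=np.float64)
    y = np.array([1.0] * len(pos) + [0.0] * len(neg))
    mu, sd = X.mean(axis=0), X.std(axis=0) + 1e-8   # feature scaling
    Xn = (X - mu) / sd
    w = np.zeros(Xn.shape[1])
    b = 0.0
    for _ in range(epochs):                          # gradient descent
        z = np.clip(Xn @ w + b, -60.0, 60.0)
        p = 1.0 / (1.0 + np.exp(-z))                 # sigmoid
        g = p - y                                    # logistic-loss gradient
        w -= lr * (Xn.T @ g) / len(y)
        b -= lr * g.mean()

    def liveness_prob(signal):
        """Probability (0-1) that the signal is a living physiological signal."""
        z = np.clip(((signal.ravel() - mu) / sd) @ w + b, -60.0, 60.0)
        return float(1.0 / (1.0 + np.exp(-z)))

    return liveness_prob
```

A real system would replace this with the convolutional network described above; the surrounding data-collection and thresholding logic would stay the same.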
  • step S505 if it is determined that the signal to be detected matches the preset living body condition, the processing proceeds to step S507.
  • step S507 the classifier outputs a result of determining that the signal to be detected is a living body physiological signal.
  • Conversely, if a negative result is obtained in step S505, that is, it is judged that the signal to be detected does not match the preset living condition, the processing proceeds to step S506.
  • In step S506, it is determined whether the detection timing has reached the preset time threshold. If a negative result is obtained in step S506, that is, the detection timing has not reached the preset time threshold, the process returns to step S503 to continue the living body detection based on the video data.
  • step S506 if a positive result is obtained in step S506, that is, the detection timing reaches the preset time threshold, the processing proceeds to step S508.
  • step S508 since the preset time threshold has been reached, and the signal to be detected matching the preset living condition is still not obtained, it is determined that the signal to be detected is not the living physiological signal, and the living body detecting process is ended. In this way, it is possible to prevent trespassers from constantly trying to perform live verification with photos, video playback, paper masks, and 3D models.
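The S502/S506/S508 timing logic above amounts to a simple timed retry loop, sketched below under stated assumptions: `get_signal` and `matches_live` are hypothetical callables standing in for steps S503-S505, and the clock is injectable so the loop can be tested without waiting.

```python
import time

def detect_with_timeout(get_signal, matches_live, threshold_s=10.0,
                        clock=time.monotonic):
    """Run liveness checks until a match or until the detection timing
    reaches the preset threshold (steps S502, S506, S508).

    get_signal:   returns the next signal to be detected, or None while
                  no complete signal is available yet (steps S503-S504).
    matches_live: applies the preset living condition (step S505).
    Returns True for a living physiological signal, False on timeout.
    A real loop would also pace itself between frames.
    """
    start = clock()                        # S502: start detection timing
    while clock() - start < threshold_s:   # S506: threshold check
        signal = get_signal()
        if signal is not None and matches_live(signal):
            return True                    # S507: living physiological signal
    return False                           # S508: timed out, not a living body
```

Returning False on timeout is what blocks an attacker from retrying indefinitely with photos, replays, or masks within one session.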
  • FIG. 6 is a schematic block diagram illustrating a living body detecting system according to an embodiment of the present invention.
  • a living body detection system 6 includes a processor 61, a memory 62, and computer program instructions 63 stored in the memory 62.
  • The computer program instructions 63, when run by the processor 61, may implement the functions of the respective functional modules of the living body detection system according to an embodiment of the present invention, and/or may perform the various steps of the living body detection method according to an embodiment of the present invention.
  • when the computer program instructions 63 are executed by the processor 61, the following steps are performed: acquiring video data collected by a video data collection device; determining an object to be detected based on the video data; acquiring a signal to be detected corresponding to the object to be detected; and determining whether the signal to be detected is a living-body physiological signal, wherein the signal to be detected is a skin-elasticity signal corresponding to the object to be detected.
  • the step of determining an object to be detected based on the video data comprises: determining, based on the video data, a face image therein as the object to be detected, and determining at least one key region in the face image.
  • when the computer program instructions 63 are executed by the processor 61, the step of determining the at least one key region in the face image comprises: determining a key point in the face image based on the video data, and dividing the face image into the at least one key region based on the key point.
  • when the computer program instructions 63 are executed by the processor 61, the step of acquiring the signal to be detected corresponding to the object to be detected comprises: acquiring a pre-action region image and a post-action region image corresponding to the at least one key region before and after a predetermined time point, the predetermined time point being a time point at which the object to be detected performs a predetermined action.
  • the step of acquiring the signal to be detected corresponding to the object to be detected further includes: normalizing the pre-action region image and the post-action region image into grayscale images of a predetermined size, and overlapping the normalized pre-action region image and the normalized post-action region image as the signal to be detected.
  • when the computer program instructions 63 are executed by the processor 61, the step of acquiring the signal to be detected corresponding to the object to be detected further comprises: normalizing the post-action region image, together with the image of a related region within a predetermined range around it, into a grayscale image of the predetermined size as the signal to be detected.
  • when the computer program instructions 63 are executed by the processor 61, the step of determining whether the signal to be detected is a living-body physiological signal comprises: comparing the signal to be detected with a preset living-body condition, and determining that the signal to be detected is a living-body physiological signal when it matches the preset living-body condition, wherein the preset living-body condition is a skin-elasticity signal corresponding to a living body, acquired based on preset video data.
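The signal construction described above (normalize the pre-action and post-action key-region images to fixed-size grayscale patches, then overlap them) can be sketched as follows. This is a hedged illustration under assumptions: the `32×32` patch size, the plain-list image representation, and the nearest-neighbour resize are all choices made for the example, not details given by the source.

```python
def build_signal(pre_region, post_region, size=(32, 32)):
    """Normalize two key-region images to a predetermined grayscale size
    and overlap them as the signal to be detected (illustrative sketch)."""
    def to_gray(img):
        # img: 2-D list of pixels; each pixel is a number (already gray)
        # or an (r, g, b) tuple, which is averaged to a gray intensity
        gray = [[sum(p) / 3.0 if isinstance(p, (tuple, list)) else float(p)
                 for p in row] for row in img]
        h, w = len(gray), len(gray[0])
        # nearest-neighbour resize to the predetermined size
        rows = [r * h // size[0] for r in range(size[0])]
        cols = [c * w // size[1] for c in range(size[1])]
        return [[gray[r][c] for c in cols] for r in rows]
    # overlap the two normalized patches as one two-channel signal
    return [to_gray(pre_region), to_gray(post_region)]
```

Stacking the before/after patches gives the classifier both states of the skin region in one input, so it can compare the deformation caused by the predetermined action.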
  • Each module in the living-body detection system according to an embodiment of the present invention may be implemented by a processor running computer program instructions stored in a memory of the living-body detection system, or by a computer executing computer instructions stored in a computer-readable storage medium of a computer program product according to an embodiment of the present invention.
  • the computer-readable storage medium may be any combination of one or more computer-readable storage media; for example, one computer-readable storage medium may contain computer-readable program code for randomly generating a sequence of action instructions, while another contains computer-readable program code for performing facial-activity recognition.
  • the computer-readable storage medium may include, for example, a memory card of a smartphone, a storage component of a tablet computer, a hard disk of a personal computer, random-access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), portable compact-disc read-only memory (CD-ROM), USB memory, or any combination of the above storage media.

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a living-body detection method, a living-body detection system, and a computer program product, which make it possible to perform live human-body detection. The living-body detection method comprises the steps of: acquiring video data collected by a video data collection device; determining, based on the video data, an object to be detected; acquiring a signal to be detected corresponding to the object to be detected; and determining whether the signal to be detected is a living-body physiological signal, the signal to be detected being a skin-elasticity signal corresponding to the object to be detected.
PCT/CN2015/080964 2015-06-08 2015-06-08 Procédé de détection de corps vivant, système de détection de corps vivant et produit de programme informatique WO2016197298A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2015/080964 WO2016197298A1 (fr) 2015-06-08 2015-06-08 Procédé de détection de corps vivant, système de détection de corps vivant et produit de programme informatique
CN201580000331.8A CN105612533B (zh) 2015-06-08 2015-06-08 活体检测方法、活体检测系统以及计算机程序产品

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/080964 WO2016197298A1 (fr) 2015-06-08 2015-06-08 Procédé de détection de corps vivant, système de détection de corps vivant et produit de programme informatique

Publications (1)

Publication Number Publication Date
WO2016197298A1 true WO2016197298A1 (fr) 2016-12-15

Family

ID=55991264

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/080964 WO2016197298A1 (fr) 2015-06-08 2015-06-08 Procédé de détection de corps vivant, système de détection de corps vivant et produit de programme informatique

Country Status (2)

Country Link
CN (1) CN105612533B (fr)
WO (1) WO2016197298A1 (fr)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109325392A (zh) * 2017-08-01 2019-02-12 苹果公司 生物特征认证技术
CN109858375A (zh) * 2018-12-29 2019-06-07 深圳市软数科技有限公司 活体人脸检测方法、终端及计算机可读存储介质
CN111126105A (zh) * 2018-10-31 2020-05-08 北京猎户星空科技有限公司 人体关键点检测方法和装置
CN111382646A (zh) * 2018-12-29 2020-07-07 Tcl集团股份有限公司 一种活体识别方法、存储介质及终端设备
CN111488764A (zh) * 2019-01-26 2020-08-04 天津大学青岛海洋技术研究院 一种面向ToF图像传感器的人脸识别算法
CN111985438A (zh) * 2020-08-31 2020-11-24 杭州海康威视数字技术股份有限公司 一种静态人脸处理方法、装置及设备
CN112069954A (zh) * 2020-08-26 2020-12-11 武汉普利商用机器有限公司 一种活体微表情检测方法及系统
CN112529060A (zh) * 2020-12-02 2021-03-19 贝壳技术有限公司 一种图像材质类别识别方法及装置

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106250840A (zh) * 2016-07-27 2016-12-21 中国科学院自动化研究所 基于深度学习的嘴巴张闭状态检测方法
CN106599772B (zh) * 2016-10-31 2020-04-28 北京旷视科技有限公司 活体验证方法和装置及身份认证方法和装置
CN106778525B (zh) * 2016-11-25 2021-08-10 北京旷视科技有限公司 身份认证方法和装置
CN106778615B (zh) * 2016-12-16 2019-10-18 中新智擎科技有限公司 一种识别用户身份的方法、装置和物业服务机器人
CN107992794B (zh) * 2016-12-30 2019-05-28 腾讯科技(深圳)有限公司 一种活体检测方法、装置和存储介质
CN108229325A (zh) 2017-03-16 2018-06-29 北京市商汤科技开发有限公司 人脸检测方法和系统、电子设备、程序和介质
CN108694357A (zh) * 2017-04-10 2018-10-23 北京旷视科技有限公司 用于活体检测的方法、装置及计算机存储介质
CN108875452A (zh) * 2017-05-11 2018-11-23 北京旷视科技有限公司 人脸识别方法、装置、系统和计算机可读介质
CN107609462A (zh) * 2017-07-20 2018-01-19 北京百度网讯科技有限公司 待检测信息生成及活体检测方法、装置、设备及存储介质
CN107679457A (zh) * 2017-09-06 2018-02-09 阿里巴巴集团控股有限公司 用户身份校验方法及装置
CN109840406B (zh) * 2017-11-29 2022-05-17 百度在线网络技术(北京)有限公司 活体验证方法、装置和计算机设备
CN108363944A (zh) * 2017-12-28 2018-08-03 杭州宇泛智能科技有限公司 人脸识别终端双摄防伪方法、装置及系统
CN109171649B (zh) * 2018-08-30 2021-08-17 合肥工业大学 智能影像式生命体征探测仪
CN110110597B (zh) * 2019-04-02 2021-08-27 北京旷视科技有限公司 活体检测方法、装置及活体检测终端

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1509454A (zh) * 2001-05-14 2004-06-30 指纹辨识中使用灰阶微分影像之纹理分类辨识伪迹之方法
CN1882952A (zh) * 2003-11-21 2006-12-20 Atmel格勒诺布尔公司 双向扫描的指纹传感器
US20070160263A1 (en) * 2006-01-06 2007-07-12 Fujitsu Limited Biometric information input apparatus
CN101159016A (zh) * 2007-11-26 2008-04-09 清华大学 一种基于人脸生理性运动的活体检测方法及系统
CN101226589A (zh) * 2007-01-18 2008-07-23 中国科学院自动化研究所 基于薄板样条形变模型的活体指纹检测方法
CN102499638A (zh) * 2011-09-28 2012-06-20 北京航空航天大学 一种基于视听嗅触的活体探测系统
CN103890778A (zh) * 2011-10-25 2014-06-25 茂福公司 反欺骗设备

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100592322C (zh) * 2008-01-04 2010-02-24 浙江大学 照片人脸与活体人脸的计算机自动鉴别方法
KR101349406B1 (ko) * 2012-05-15 2014-01-09 엘지이노텍 주식회사 디스플레이장치 및 이의 절전 방법
CN103479367B (zh) * 2013-09-09 2016-07-20 广东工业大学 一种基于面部运动单元识别的驾驶员疲劳检测方法

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1509454A (zh) * 2001-05-14 2004-06-30 指纹辨识中使用灰阶微分影像之纹理分类辨识伪迹之方法
CN1882952A (zh) * 2003-11-21 2006-12-20 Atmel格勒诺布尔公司 双向扫描的指纹传感器
US20070160263A1 (en) * 2006-01-06 2007-07-12 Fujitsu Limited Biometric information input apparatus
CN101226589A (zh) * 2007-01-18 2008-07-23 中国科学院自动化研究所 基于薄板样条形变模型的活体指纹检测方法
CN101159016A (zh) * 2007-11-26 2008-04-09 清华大学 一种基于人脸生理性运动的活体检测方法及系统
CN102499638A (zh) * 2011-09-28 2012-06-20 北京航空航天大学 一种基于视听嗅触的活体探测系统
CN103890778A (zh) * 2011-10-25 2014-06-25 茂福公司 反欺骗设备

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109325392A (zh) * 2017-08-01 2019-02-12 苹果公司 生物特征认证技术
CN111126105B (zh) * 2018-10-31 2023-06-30 北京猎户星空科技有限公司 人体关键点检测方法和装置
CN111126105A (zh) * 2018-10-31 2020-05-08 北京猎户星空科技有限公司 人体关键点检测方法和装置
CN111382646B (zh) * 2018-12-29 2023-09-05 Tcl科技集团股份有限公司 一种活体识别方法、存储介质及终端设备
CN111382646A (zh) * 2018-12-29 2020-07-07 Tcl集团股份有限公司 一种活体识别方法、存储介质及终端设备
CN109858375A (zh) * 2018-12-29 2019-06-07 深圳市软数科技有限公司 活体人脸检测方法、终端及计算机可读存储介质
CN109858375B (zh) * 2018-12-29 2023-09-26 简图创智(深圳)科技有限公司 活体人脸检测方法、终端及计算机可读存储介质
CN111488764A (zh) * 2019-01-26 2020-08-04 天津大学青岛海洋技术研究院 一种面向ToF图像传感器的人脸识别算法
CN111488764B (zh) * 2019-01-26 2024-04-30 天津大学青岛海洋技术研究院 一种面向ToF图像传感器的人脸识别方法
CN112069954A (zh) * 2020-08-26 2020-12-11 武汉普利商用机器有限公司 一种活体微表情检测方法及系统
CN112069954B (zh) * 2020-08-26 2023-12-19 武汉普利商用机器有限公司 一种活体微表情检测方法及系统
CN111985438A (zh) * 2020-08-31 2020-11-24 杭州海康威视数字技术股份有限公司 一种静态人脸处理方法、装置及设备
CN112529060A (zh) * 2020-12-02 2021-03-19 贝壳技术有限公司 一种图像材质类别识别方法及装置

Also Published As

Publication number Publication date
CN105612533A (zh) 2016-05-25
CN105612533B (zh) 2021-03-02

Similar Documents

Publication Publication Date Title
WO2016197298A1 (fr) Procédé de détection de corps vivant, système de détection de corps vivant et produit de programme informatique
US10621454B2 (en) Living body detection method, living body detection system, and computer program product
US10339402B2 (en) Method and apparatus for liveness detection
TWI751161B (zh) 終端設備、智慧型手機、基於臉部識別的認證方法和系統
WO2019127262A1 (fr) Procédé de détection in vivo de visage humain basé sur une extrémité en nuage, dispositif électronique et produit de programme
CN105184246B (zh) 活体检测方法和活体检测系统
CN106407914B (zh) 用于检测人脸的方法、装置和远程柜员机系统
US11074436B1 (en) Method and apparatus for face recognition
US9985963B2 (en) Method and system for authenticating liveness face, and computer program product thereof
WO2016172923A1 (fr) Procédé de détection de vidéo, système de détection de vidéo, et produit programme d'ordinateur
JP2015529365A5 (fr)
TW201911130A (zh) 一種翻拍影像識別方法及裝置
WO2017000213A1 (fr) Procédé et dispositif de détection de corps vivant et produit-programme informatique
CN110612530B (zh) 用于选择脸部处理中使用的帧的方法
JP5592040B1 (ja) バイオメトリクスタイプのアクセス制御システムにおける不正の検出
WO2017000218A1 (fr) Procédé et dispositif de détection de corps vivant et produit programme d'ordinateur
US11315360B2 (en) Live facial recognition system and method
JP2017010210A (ja) 顔検出装置、顔検出システム、及び顔検出方法
CN110309693A (zh) 多层次状态侦测系统与方法
WO2017000217A1 (fr) Procédé et dispositif de détection de corps vivant et produit programme d'ordinateur
Surantha et al. Lightweight face recognition-based portable attendance system with liveness detection
JP6438693B2 (ja) 認証装置、認証方法、およびプログラム
JP2022178273A (ja) 画像認証装置、画像認証方法、コンピュータプログラム、及び記憶媒体
Hussain et al. Accurate Face Recognition system based on ARM 9 processor using HAAR wavelets
KR101718244B1 (ko) 얼굴 인식을 위한 광각 영상 처리 장치 및 방법

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15894568

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 20.04.2018)

122 Ep: pct application non-entry in european phase

Ref document number: 15894568

Country of ref document: EP

Kind code of ref document: A1