US20140270412A1 - Liveness detection system based on face behavior - Google Patents
- Publication number: US20140270412A1 (application US 14/289,872)
- Authority: US (United States)
- Prior art keywords: face, motion, user, image, liveness detection
- Legal status: Abandoned (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06K9/00906 (G06V40/167): human faces — detection, localisation, normalisation using comparisons between temporally consecutive images
- G06K9/00261 (G06V40/16): human faces, e.g. facial parts, sketches or expressions
- G06V40/45: detection of the body part being alive (under G06V40/40 — spoof detection, e.g. liveness detection)
Definitions
- the present disclosure is generally related to authentication systems.
- Biometric information generally includes physiological features of a user seeking access to the system and/or behavioral features of the user. For instance, physiological features include facial features, fingerprints, and/or retina features. Behavioral features include the behavior of the user, such as voice and gait, among other features.
- unauthorized users may seek to circumvent these safeguards by using an image (e.g., physical photo, electronic image, such as a cell phone image, etc.) of the authorized user to spoof the system into permitting access.
- a method comprising: receiving plural pictures of a video stream comprising a face and an adjacent background; determining motion of the face and the background, the motion determined over the plural pictures; comparing the motion between the face and the background; and determining whether the face corresponds to an actual, live user or an image of the user based on the comparison, the determinations performed by a processor.
- a method comprising: receiving from an image capture device plural pictures of a video stream comprising a face; prompting a motion of a first portion of the face of a user; determining if there is motion within the first portion of the face, the motion determined over the plural pictures; and determining whether the face corresponds to an actual, live user or an image of the user based on the determination of motion, the determinations performed by a processor.
- a system comprising: an image capture device configured to receive plural pictures of a video stream comprising a face and an adjacent background; and a computing device comprising a processor that is configured to: determine motion of the face and the background, the motion determined over the plural pictures; compare the motion between the face and the background and within at least a portion of the face; and determine whether the face corresponds to an actual, live user or an image of the user based on the comparisons.
- FIG. 1 is a flow diagram depicting an example embodiment of a liveness detection process that determines whether a target of an image capture device is an actual, live user or an image of the user.
- FIG. 2 is a schematic diagram that illustrates an embodiment of a process for dividing a face and background of a human being, captured by an image capture device, into plural sub-units, each of which is evaluated using motion estimation over a sequence of plural pictures in a video stream.
- FIG. 3 is a schematic diagram that illustrates an embodiment of a process for dividing a face and background of an image of a human being, captured by an image capture device, into plural sub-units, each of which is evaluated using motion estimation over a sequence of plural pictures in a video stream.
- FIGS. 4-5 are schematic diagrams that illustrate a method for determining a background region.
- FIGS. 6A-6C are schematic diagrams that illustrate an embodiment of a process for the determination of motion within a portion of a face by detecting an eye blink.
- FIG. 7 is a screen diagram that illustrates an embodiment of a process for prompting user motion to enable the determination of motion of the face.
- FIGS. 8A-8C are example screen diagrams that illustrate an embodiment of a process for prompting user motion through inducement.
- FIG. 9 is a block diagram that illustrates an embodiment of a computing system that includes a liveness detection system.
- FIG. 10 is a flow diagram of an example embodiment of a liveness detection method based on a difference in motion between a face and a background adjacent the face between plural pictures.
- FIG. 11 is a flow diagram of an example embodiment of a liveness detection method based on a difference in motion within a face, such as through a detection of changes in facial features between plural pictures after prompting motion of a user.
- disclosed herein are embodiments of a liveness detection system and method that differentiate between a live human being and a corresponding image.
- an unscrupulous user (e.g., referred to herein also as an invader) may seek to spoof an authorized user to gain access to the network or computing system.
- One method an invader may use is to present an image (e.g., photograph, electronic image, such as from a display screen of a communications device) of an authorized user to an image capture device (e.g., web cam, video camera, etc.) of the authentication system.
- a liveness detection system detects a face of a user, in range of an image capture device of an authentication system (e.g., such a user also referred to herein as a target), and a background adjacent to the face, determines the associated size and location of the face and background, computes motion over plural pictures or frames (e.g., pictures and frames are used herein interchangeably, with the understanding that certain embodiments contemplate both interlaced and progressive types of scanning formats), and compares the motion between the face and the background.
- the liveness detection system uses the determined motion differences to make a determination as to whether the user (e.g., the target) attempting to access the network or computing system is a live human being or merely an image of a live human being. Such a determination is also referred to herein, generally, as a liveness determination (e.g., a determination of whether the target is a real, live human being or an image of the human being).
- the authentication system may access a database of stored images to further assess whether the facial features of the target match (e.g., threshold degree of matching) the facial features of the stored image.
- a match results in permitting access to the computing system or network (or at least permitting further authentication in some implementations).
- Some embodiments may implement other measures to determine liveness, or additional measures to substantiate it.
- a liveness detection system commands and/or induces a user to make a specified action or expression by direct command or induction.
- Commands may be implemented by way of textual messages presented across a display screen of the authentication system that request that the target turn or move his or her head, or blink his or her eyes, among other possible commands. Commands may also be presented verbally, or using non-textual graphics.
- Induction may be implemented by way of causing a graphic, such as a displayed box-frame, to move across the screen to induce the target to position his or her face within the frame, or by causing a flash (e.g., by a coupled device or via action presented on a screen of the display device), which results in an eye blink by the target, among other methods of inducement.
- a combination of commands and inducement may be implemented.
- the liveness detection system determines if the target is a real human being or an image depicting a human being by analyzing the information related to the specified action or expression, such as whether there is a change in facial expression (e.g., eye blink, smile, frown, etc.) or head (and hence face) movement.
- one security problem arises from the inability to differentiate between a live human being and an image of the human being in the input data. If an authentication system does not have this ability of differentiation, then an invader may deceive the system by using an image depicting an authenticated user. It is a difficult problem to differentiate between a human being and an image from static image data, but it is easier to differentiate between a three-dimensional (3D) face and a two-dimensional face image from captured video by analyzing the facial action or expression. For instance, one mechanism for liveness detection is to use a 3D or infrared camera to identify the depths of a stereoscopic-based face and plane image.
- a liveness detection system can be used, which differentiates according to the movement of a target in the video. Both the real (e.g., live) face and the inanimate image can move in the video, but their respective movements produce different changes among the video pictures. For instance, when a real human moves his or her face in front of the image capture device, the background region is static. In contrast, when an image moves, the face in the image and surrounding background region move together. Hence, certain embodiments of a liveness detection system can differentiate between a live human being and an image according to these differences in motion.
- the target may not move enough (e.g., through head movement or changes in facial expression) when attempting authentication.
- the target freezes in front of the camera without making any actions, it is difficult for an authentication system to differentiate between the user and a static image. That is, the captured video in such scenarios comprising a face lacking in variation is similar to a static face image, which makes it difficult for an authentication system to collect and assess the data needed to enable differentiation.
- the authentication system is facing a dilemma: if the system determines the static face to be a real human being, then it is easy for an invader to deceive the system by using an image; if the system rejects the static face, then a motionless but legitimate user is denied access.
- liveness detection systems address these authentication issues by prompting the target to make certain, expected actions, and then identify the differences between the content of actions performed by a real human being and by an image. Accordingly, certain embodiments of a liveness detection system provide an advance over conventional authentication systems in the way of precision, efficiency, and/or security in the authenticating process.
- FIG. 1 illustrates one embodiment of a process 100 for liveness detection, such as in an authentication system.
- the process 100 depicted in FIG. 1 is for illustration, and that other embodiments are contemplated to be within the scope of the disclosure, including embodiments using a subset of the illustrated processes, or using a variation in the order of processing, among other variations.
- the liveness detection process 100 prompts motion from the target by commanding the target to make a specific action (e.g., behavior), and/or by inducing the target to make the specific action (though not shown in FIG. 1 ), as explained further below.
- the liveness detection process 100 receives a video stream comprising plural pictures that include information pertaining to the target's behavior, collects the information, and uses the information to differentiate between live human behavior and inanimate objects. In other words, based on the information, the liveness detection process 100 determines if the target in the captured video is a real human being or an image depicting a human being.
- the liveness detection process 100 starts image capture ( 102 ) using an image capture device, such as a webcam ( 104 ).
- a target is typically positioned in close proximity to the webcam, with the resulting image displayed live on a display screen of a computing device, such as a computer screen coupled to the webcam.
- the liveness detection process 100 presents on the display screen of the computing device, shown to the target, a command ( 106 ) that attempts to prompt motion by the target.
- the command may be embodied in the form of a text message presented across the display screen, such as “Move your head” ( 108 ) or “Blink your eyes” ( 110 ).
- non-textual graphics may be used to prompt action, such as a graphic of a face moving his or her head or blinking his or her eyes.
- the display screen message or command may be omitted, and the commands may be embodied as verbal messages (e.g., through speakers coupled to the computing device).
- the process 100 further includes the webcam capturing a video of the target ( 112 ) containing the face movement or facial expression following the command.
- the resulting video stream, comprising the plural pictures of the captured face and background, is stored (e.g., buffered) in a storage device (e.g., memory, persistent storage, etc.) ( 114 ), and accessed to perform liveness detection ( 116 ).
- Liveness detection ( 116 ) includes the execution of one or more known algorithms to perform blink detection ( 118 ), motion estimation ( 120 ), and/or background analysis ( 122 ), as explained further below in association with FIGS. 4-5 .
- the liveness detection process 100 may collect required differentiation information for these algorithms.
- the liveness detection process 100 produces a result ( 124 ) that includes a determination of whether the target in the video is a real human being, or only an image depicting a human being. From this information, an authentication system may perform other measures before allowing access, though in some embodiments, such measures may be performed at least in part before the liveness determination.
- the image acquisition (capture) process may be commenced at the time of presenting the command ( 106 ).
- a webcam is illustrated as an example image capture device, other image capture devices not configured for use with the Internet may be used, such as a camcorder.
- one or more of these processes may be omitted in some embodiments, such as the use of only one of the commands ( 108 ) or ( 110 ) or the omission of the commands ( 106 ) altogether.
- FIGS. 2-3 illustrate the example motion estimation process 120 implemented by certain embodiments of a liveness detection system.
- the liveness detection system can implement a process of differentiation according to the received video pictures.
- regarding face motion or movement (motion and movement are used interchangeably herein), the main difference between a real human being and an image is the magnitude of relative movement.
- when a real human being moves his or her face in front of the camera, the background region is static. But if the target (e.g., an invader) holds an image (e.g., the screen picture of a cell phone or another device) of an authorized user and moves the image in front of the camera, then the background and the face in the image will move together.
- in that case, the movements of the face, the background, and the image picture are in agreement.
- a liveness detection system obtains information of movement using motion estimation to analyze the pictures in the video.
- the liveness detection system receives plural (e.g., two) pictures close in time, and divides the pictures into many sub-units (e.g., square regions, such as macroblocks, though the regions may be of rectangular form in some embodiments). Then for each sub-unit (or in some embodiments, for each group of sub-units), the liveness detection system estimates the moving direction and magnitude between the two pictures. To compare the movements of the face and background, the liveness detection system determines the location and size of the face. For instance, the liveness detection system may implement well-known face detection algorithms to find discriminating information of the face.
- the liveness detection system may also obtain information of the background, such as through the use of well-known object segmentation, or in some embodiments, via use of head detection algorithms to filter out the possible foreground region and choose the region closest to the face from the remaining region.
- the liveness detection system computes the moving direction and magnitude of the face and the background. If the moving directions and magnitudes of the face and background are similar enough (e.g., meet or exceed a threshold degree of similarity), then the liveness detection system determines that the target in the input video is an image; otherwise, the liveness detection system determines that the target is a real human being.
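For illustration only, the face-versus-background comparison described above may be sketched as follows; the function name, the threshold values, and the use of per-region mean motion vectors are assumptions made for the sketch and are not prescribed by the disclosure:

```python
import numpy as np

def classify_liveness(face_vectors, background_vectors,
                      mag_threshold=1.0, sim_threshold=0.5):
    """Decide live-vs-image from per-sub-unit (dx, dy) motion vectors.

    face_vectors / background_vectors: arrays of shape (N, 2), one
    motion vector per sub-unit of the respective region. Threshold
    values here are illustrative assumptions.
    """
    face_mean = np.mean(np.asarray(face_vectors, dtype=float), axis=0)
    bg_mean = np.mean(np.asarray(background_vectors, dtype=float), axis=0)

    # Neither region moves appreciably: the "static face" case the
    # text discusses, where no liveness decision can be made yet.
    if np.linalg.norm(face_mean) < mag_threshold and \
       np.linalg.norm(bg_mean) < mag_threshold:
        return "indeterminate"

    # Face and background moving with similar direction and magnitude
    # suggests a held image moving as a whole.
    larger = max(np.linalg.norm(face_mean), np.linalg.norm(bg_mean))
    if np.linalg.norm(face_mean - bg_mean) < sim_threshold * larger:
        return "image"
    return "live"
```

In practice the per-sub-unit motion vectors would come from a motion estimation step over consecutive pictures, such as the block-based estimation described in association with FIGS. 2-3.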
- Referring to FIGS. 2 and 3 , shown are schematic diagrams illustrating the above-described motion estimation process 120 .
- a picture 200 is shown (and in FIG. 3 , a picture 300 ), divided up by the liveness detection system into plural sub-units, such as sub-unit 202 .
- the sub-units 202 are depicted as squares of a given size, though it should be appreciated within the context of the present disclosure that other sizes may be used, or rectangles may be used in some embodiments.
- the picture 200 comprises a background (or equivalently, background region) 204 and a face 206 .
- the face 206 is shown bounded by a bounding region 208 (e.g., depicted as a square in this example, though other geometrical formats may be used covering substantially the same or different areas).
- there may be one or more bounding regions (e.g., a second bounding region for the background or a portion thereof).
- Referring to FIGS. 4-5 , shown is a reproduction of the pictures 200 and 300 from FIGS. 2-3 with additional bounding regions, as shown by pictures 400 and 500 , respectively. Discussion of like features shared between FIGS. 2-3 and 4 - 5 is omitted here except for purposes of further explanation.
- the bounding regions include face bounding regions 402 and 502 , head bounding regions 404 and 504 , and background determining bounding regions 406 and 506 .
- the background regions 408 and 508 bounded by background determining bounding regions 406 and 506 are each configured as an inverted “U,” though not limited to such a configuration.
- based on the respective face regions (e.g., bounding regions 402 and 502 ), an estimation of each respective head region (e.g., bounding regions 404 and 504 ) is made.
- the background regions 408 and 508 that are close to the face regions are determined, with the head regions (e.g., bounded by head bounding regions 404 and 504 ) substantially excluded.
- the background region 408 is determined by computing the difference between the background determining bounding region 406 and the head bounding regions 404 .
- the background region 508 is determined by computing the difference between the background determining bounding region 506 and the head bounding regions 504 .
- a live human being is presented in FIG. 4 (which has a picture 410 of a sun and mountains in the background region 408 ), whereas the image of a human being is presented in FIG. 5 (which omits the picture in the background).
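The computation of the inverted-"U" background region as the difference between a background-determining bounding region and a head bounding region can be sketched as follows; the box representation, the margin value, and the choice to grow the head box by a fixed margin are illustrative assumptions:

```python
def background_mask(frame_h, frame_w, head_box, margin=40):
    """Boolean mask of the inverted-'U' background region.

    head_box is (top, left, bottom, right) from a head detector. The
    background-determining bounding region is taken as the head box
    grown upward and sideways by `margin` pixels (an illustrative
    choice), clipped to the frame; subtracting the head box from it
    leaves the inverted-U band around and above the head.
    """
    top, left, bottom, right = head_box
    b_top = max(0, top - margin)
    b_left = max(0, left - margin)
    b_right = min(frame_w, right + margin)
    b_bottom = bottom  # stop at the chin: no band below the head

    mask = [[False] * frame_w for _ in range(frame_h)]
    for y in range(b_top, b_bottom):
        for x in range(b_left, b_right):
            inside_head = top <= y < bottom and left <= x < right
            mask[y][x] = not inside_head
    return mask
```

Motion statistics for the background are then accumulated only over sub-units that fall inside this mask.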
- for each sub-unit 202 , the magnitude and direction of motion are computed for the area of the picture bounded by the sub-unit 202 in comparison to at least one other picture, as is known in motion estimation.
- each sub-unit 202 is associated with a motion vector 210 (e.g., represented as a line or dot in the sub-unit 202 ) representing the direction and magnitude of the motion.
- in FIG. 3 , a picture 300 is similarly divided up into sub-units 302 , with the picture 300 also including a background 304 , face 306 , bounding region 308 , and motion vectors 310 , as similarly described above for FIG. 2 .
- the motion vectors 210 for the background 204 have a different movement compared to the face 206 (e.g., diagonal motion vectors 210 in the majority of the face sub-units 202 compared to primarily dots in the background 204 ), so the target captured in this picture 200 is determined by the liveness detection system to be a real human being (e.g., live user) given their lack of similarity in motion vectors when compared to the background 204 .
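A per-sub-unit motion vector such as 210 can be computed with standard block matching; a minimal sketch follows, in which the block and search-window sizes are illustrative and sum of absolute differences (SAD) is used as one common matching criterion:

```python
import numpy as np

def block_motion_vector(prev, curr, top, left, block=16, search=8):
    """Estimate the motion vector of one sub-unit by block matching.

    Exhaustively searches a +/- `search` pixel window in `prev` for
    the position that best matches (minimum SAD) the block x block
    sub-unit of `curr` anchored at (top, left). Returns (dy, dx): the
    offset, relative to (top, left), of the best-matching block in
    the previous picture.
    """
    target = curr[top:top + block, left:left + block].astype(np.int32)
    best, best_vec = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > prev.shape[0] \
                    or x + block > prev.shape[1]:
                continue  # candidate block falls outside the picture
            cand = prev[y:y + block, x:x + block].astype(np.int32)
            sad = int(np.abs(target - cand).sum())
            if best is None or sad < best:
                best, best_vec = sad, (dy, dx)
    return best_vec
```

A near-zero vector corresponds to the dots drawn in the static background sub-units, while a larger vector corresponds to the diagonal lines drawn in the moving face sub-units.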
- an evaluation by the liveness detection system of a facial expression may be used to differentiate between a real, live human being and an image.
- One difference between video of a live human being and an image is with regard to changes in facial features: it is unlikely for facial features to change over plural pictures when what is being captured is an image held by a user.
- one condition to determine the target as a real, live human being is the detected presence of the facial expression. So, one embodiment of a liveness detection system uses a facial expression detection algorithm to detect the presence of a facial expression. Since different facial expressions impose different changes in the facial features, the detection algorithms of those facial expressions may also be different.
- eye-blink detection algorithms check the eye region
- smile detection algorithms check the mouth region.
- One example eye-blink detection algorithm checks the variation of eye regions in every picture of the captured video.
- Referring to FIGS. 6A-6C , shown is an illustration of plural pictures 600 (e.g., 600 A, 600 B, and 600 C) of a captured video stream revealing an eye blink for a given target.
- an eye-blink algorithm may be considered a sub-category of motion estimation directed to regions within the face (e.g., and not necessarily with respect to the background motion).
- the eye blink algorithm may be performed by using a bounding region 602 that focuses on a detected region (in this example, the eye region of the face, though other regions in addition to or in lieu of the eye region may be bounded, such as the mouth, eyebrows, forehead, etc.), and sub-divided into plural sub-units (not shown in FIGS. 6A-6C ) to compare motion within the particular bounded region between the plural pictures of a given picture sequence.
- the bounding region 602 is depicted as an ellipse, with the understanding that other geometric configurations may be used in some embodiments, covering substantially the same, less, or more of the region. In FIGS. 6A and 6C , the eyes bounded by bounding region 602 are depicted as wide open, whereas in FIG. 6B , the target has blinked as shown by the closed eyes in the bounding region 602 .
- the liveness detection system determines that an eye blink is present in the period.
- variations in regions other than those bounded by the bounding region 602 are not considered, and hence the variation in eye movement alone is used to make a determination.
- the detected presence of one or more eye blinks can be used to determine the target as a real, live human being.
- a similar approach may be used for other facial expressions in other regions of the face.
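A minimal sketch of such an eye-region variation check follows; the use of mean absolute frame difference and the threshold value are illustrative assumptions, not a specific algorithm named by the disclosure:

```python
import numpy as np

def detect_blink(frames, eye_box, change_threshold=20.0):
    """Detect an eye blink from variation inside a bounded eye region.

    `frames` is a sequence of grayscale pictures (2-D arrays) and
    `eye_box` is (top, left, bottom, right) from an eye detector.
    Only variation inside the bounding region is considered: a blink
    shows up as a large mean absolute change in the eye region
    between consecutive pictures. The threshold is an illustrative
    assumption.
    """
    top, left, bottom, right = eye_box
    regions = [np.asarray(f, dtype=float)[top:bottom, left:right]
               for f in frames]
    for a, b in zip(regions, regions[1:]):
        if np.abs(b - a).mean() > change_threshold:
            return True  # eyes changed state (open -> closed or back)
    return False
```

The same structure applies to other facial regions, e.g. bounding the mouth and thresholding its variation for smile detection.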
- a liveness detection process 100 implemented by an embodiment of the liveness detection system includes the command procedure 106 ( FIG. 1 ) that prompts the target to move his or her face (or move features within the face, such as an eye or eyes).
- Referring to FIG. 7 , shown is a screen diagram of a screen 700 (e.g., on a computer monitor or other display device) that shows a live capture 702 of a video stream pertaining to the target, and a message 704 that prompts the target into some specific action.
- the target may already have embarked on the authentication process, entering his or her name (e.g., John Doe), and then the authentication system embarks on the biometric portion (e.g., motion detection followed by facial recognition) with a presentation of the target's entered name on the screen 700 .
- the message 704 requests the following: “John: Please blink your eyes so that we may further authenticate your logon.” It should be appreciated that other specific actions may be requested, such as having the target shift his or her position (e.g., side-to-side, or moving further away or closer), and/or make other facial expressions. Further, the liveness detection system may request several movements to collect sufficient information to enable a determination of liveness, each acting to substantiate the prior determinations (or if done concurrently, to substantiate the determinations based on other requested movements) of liveness (and/or serving as a basis of determining liveness where other determinations are unsatisfactory). As noted above, the message presented by the liveness detection system may be embodied as a voice message or as a combination of a verbal and visual message.
- FIGS. 8A-8C are screen diagrams depicting a display screen 800 (e.g., 800 A, 800 B, and 800 C) that illustrate an induction process performed by certain embodiments of a liveness detection system.
- the liveness detection system attempts to prompt the target to move his or her face.
- the target 802 captured by an image capture device, is shown displayed on the screen 800 A ( FIG. 8A ), 800 B ( FIG. 8B ), and 800 C ( FIG. 8C ) with a graphic pattern 804 (e.g., square, though not limited to square geometry) that is moved in a manner to induce the target 802 to position (e.g., follow and align) his or her face 806 within the graphic pattern 804 .
- the graphic pattern 804 initially centers itself on the face 806 of the target 802 .
- the graphic pattern 804 may be positioned elsewhere relative to the center of the face 806 .
- in FIG. 8B , the graphic pattern 804 has been moved from the center of the target's face (as shown in FIG. 8A ) to the target's right side of the face 806 .
- in FIG. 8C , the target 802 has positioned his or her face (and possibly moved his or her chair) to the center of the graphic pattern 804 , and by doing so, has satisfied the requirement of a moving face.
- the graphic pattern 804 can be designed as a shooting screen of a camera to increase the probability that the user follows the pattern 804 .
- the shooting screen may comprise a mark (e.g., “+” mark, grid, etc.) that makes the user want or intend to align his or her face with the mark.
- the mark may look like an object that appears in a viewing window of a camera (or something like a gun sight grid for aiming purposes).
- the liveness detection system may combine the command and induction. For example, the liveness detection system may present the command “Please put your face into the square”, and then move the graphic pattern 804 to induce the target 802 to move if, for instance, the target 802 is not currently within the graphic pattern 804 .
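The induction check, i.e., verifying that the detected face actually follows the moving graphic pattern 804 , might be sketched as follows; the function and parameter names and the tolerance value are hypothetical:

```python
def induce_and_check(face_centers, pattern_path, tolerance=15):
    """Check that a detected face follows a moving graphic pattern.

    `pattern_path` is the sequence of (x, y) centers at which the
    box-frame graphic was drawn; `face_centers` is the face-detector
    center for the picture shown with each pattern position. The
    target passes when the face ends up aligned with the pattern's
    final position and actually moved to get there.
    """
    fx0, fy0 = face_centers[0]
    fx1, fy1 = face_centers[-1]
    px, py = pattern_path[-1]

    aligned = abs(fx1 - px) <= tolerance and abs(fy1 - py) <= tolerance
    moved = abs(fx1 - fx0) > tolerance or abs(fy1 - fy0) > tolerance
    return aligned and moved
```

A face that remains static while the pattern moves away fails the check, which is exactly the behavior expected of a held photograph with no cooperating user.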
- the user may be prompted to incur motion (e.g., induced motion) by a non-textual and/or non-verbal external stimulus that causes a reflexive-type movement.
- a flash on an image capture device may be triggered by the liveness detection system, causing the user to blink or turn his or her head.
- a lighting device (e.g., light-emitting diode (LED), etc.) or a sound device (e.g., bell, audio tone, etc.) coupled to the computing device may likewise be used to induce such reflexive movement.
- the display presented to the user may cause a flash or other movement to induce reflexive movement.
- FIG. 9 illustrates an embodiment of a computing system 900 .
- a liveness detection system may be embodied in the entirety of the computing system 900 depicted in FIG. 9 , or as a subset thereof in some embodiments.
- the example computing system 900 is shown as including a personal computer, though it should be appreciated within the context of the present disclosure that the computing system 900 may comprise any one of a plurality of computing devices, including a dedicated player appliance, set-top box, laptop, computer workstation, cellular phone, personal digital assistant (PDA), handheld or pen based computer, embedded appliance, or other communication (wired or wireless) device.
- the liveness detection system may be implemented on a network device (also referred to herein as a computing system), similar to the computing system 900 , located upstream of the computing system 900 , such as a server, router, gateway, etc., or implemented with similar functionality distributed among plural devices (e.g., in a server device and the computing device).
- An upstream network device may be configured with similar components, and hence discussion of the same is omitted for brevity.
- the computing system 900 may, for instance, comprise a processor 902 , one or more input/output (I/O) interfaces 904 , a network interface device 906 , and a display device 908 connected across a data bus 910 .
- the computing system 900 may further comprise a memory 912 that includes an operating system 914 and application specific software, such as a player application 916 in the case of implementing player functionality for the playback of media content, such as video and/or audio (e.g., movies, music, games, etc.).
- the player application 916 may be implemented as a software program configured to read and play back content residing on a disc 922 (or from other high definition video sources) according to the specifications defined by standards such as the Blu-ray Disc format specification, HD-DVD, etc.
- the memory 912 comprises, among other logic (e.g., software), authentication logic 918 , which includes in one embodiment, liveness detection logic 920 .
- the authentication logic 918 may be implemented as a logon procedure (e.g., secured or licensed access) associated with access to content on the disc 922 , access to application programs in memory 912 , or in some embodiments, implemented as part of a logon procedure specific to access to the computing system 900 (and/or in some embodiments, specific to a remotely-located network device as implemented through a browser program residing in the computing system 900 ).
- an attempted logon by a target to the computing system 900 comprises providing a logon screen on the display 908 and activation of an image capture device 924 coupled to the I/O interface 904 (or integrated in the computing system 900 in some embodiments).
- the target may enter (e.g., via keyboard or other input devices, including via voice activation) personal information (e.g., his or her name, password, etc.), and then be evaluated for biometric authenticity (e.g., facial recognition, liveness determination, etc.).
- the target may sit (or stand) in front of (or in range of) the image capture device 924 , and be prompted to incur motion (e.g., of the face or features thereon) by presenting text messages on a screen of the display device 908 (or by other mechanisms as explained above).
- the target merely positions himself or herself in front of the screen and a motion determination is made (e.g., without prompting). If motion is not detected in this latter scheme, the liveness detection logic 920 may commence an additional approach, commanding movement by the target and/or inducing movement in the manner as explained above.
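The two-stage flow just described (a passive motion determination first, with escalation to commanded or induced movement if no motion is detected) can be sketched as follows. This is an illustrative outline only; the function name and boolean-based interface are invented for the example and are not part of the disclosed embodiments.

```python
def liveness_check(passive_motion_detected, prompt_user):
    """Two-stage liveness flow sketch.

    passive_motion_detected: bool result of the unprompted motion test.
    prompt_user: callable that issues a command or inducement (e.g.,
    "Blink your eyes") and returns True if the motion was then observed.
    """
    if passive_motion_detected:
        return True
    # No motion observed without prompting: escalate to a commanded or
    # induced action and test again.
    return prompt_user()
```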
- liveness detection logic 920 comprises suitable logic to implement the liveness detection process 100 (including the addition of inducement processing) illustrated in FIG. 1 , including the command processing 106 ( FIG. 1 ) and liveness detection processing 116 ( FIG. 1 ) and their associated algorithms.
- the processor 902 may include any custom made or commercially available processor, a central processing unit (CPU) or an auxiliary processor among several processors associated with the computing system 900 , a semiconductor based microprocessor (in the form of a microchip), one or more ASICs, a plurality of suitably configured digital logic gates, and other well-known electrical configurations comprising discrete elements both individually and in various combinations to coordinate the overall operation of the computing system.
- the memory 912 may include any one of a combination of volatile memory elements (e.g., random-access memory (RAM, such as DRAM, and SRAM, etc.)) and nonvolatile memory elements (e.g., ROM, hard drive, tape, CDROM, etc.).
- the memory 912 typically comprises the native operating system 914 , one or more native applications, emulation systems, or emulated applications for any of a variety of operating systems and/or emulated hardware platforms, emulated operating systems, etc.
- the applications may include application specific software stored on a computer readable medium (e.g., memory, persistent storage, etc.) for execution by the processor 902 and may include the authentication logic 918 and liveness detection logic 920 .
- the memory 912 may, and typically will, comprise other components which have been omitted for purposes of brevity, or in some embodiments, may omit certain components, such as the player application 916 .
- Input/output interfaces 904 provide any number of interfaces for the input and output of data.
- as far as the handling of user input, the interfaces may include a user input device, which may be a body part of a viewer (e.g., a hand), a keyboard, a mouse, or a voice-activated mechanism.
- in the case of a handheld device (e.g., PDA, mobile telephone), these components may interface with function keys or buttons, a touch sensitive screen, a stylus, a body part, etc.
- the input/output interfaces 904 may further include one or more disc drives (e.g., optical disc drives, magnetic disc drives) to enable playback of multimedia content residing on the computer readable medium 922 , and as explained above, may interface with the image capture device 924 , as well as other devices, such as remote alarms, locking/unlocking devices (e.g., electromagnetic devices), etc.
- the network interface device 906 comprises various components used to transmit and/or receive data over a network environment.
- the network interface device 906 may include a device that can communicate with both inputs and outputs, for instance, a modulator/demodulator (e.g., a modem), wireless (e.g., radio frequency (RF)) transceiver, a telephonic interface, a bridge, a router, network card, etc.
- the computing system 900 may further comprise mass storage (not shown).
- the mass storage may include a data structure (e.g., database) to store image files (and other data files, including identifying information such as name, passwords, pins, etc.) of authorized users for comparison during the authentication process.
- the image and data files of authorized users may be located in a remote storage device (e.g., network storage).
- the display device 908 may comprise a computer monitor or a plasma screen for a PC or a liquid crystal display (LCD) on a hand held device, head-mount device, or other computing device. In some embodiments, the display device 908 may be separate from the computing system 900 , and in some embodiments, integrated in the computing device.
- a “computer-readable medium” stores one or more programs and data for use by or in connection with the instruction execution system, apparatus, or device.
- the computer readable medium is non-transitory, and may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device.
- the computer-readable medium may include, in addition to those set forth above, the following: an electrical connection (electronic) having one or more wires, a portable computer diskette (magnetic), a random access memory (RAM) (electronic), a read-only memory (ROM) (electronic), an erasable programmable read-only memory (EPROM, EEPROM, or Flash memory) (electronic), and a portable compact disc read-only memory (CDROM) (optical).
- a liveness detection method 1000 implemented by a processor 902 in the computing system 900 and depicted in FIG. 10 , comprises receiving plural pictures of a video stream comprising a face and an adjacent background ( 1002 ); determining motion of the face and the background, the motion determined over the plural pictures ( 1004 ); comparing the motion between the face and the background ( 1006 ); and determining whether the face corresponds to an actual, live user or an image of the user based on the comparison, the determinations performed by the processor ( 1008 ).
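A minimal sketch of the method 1000 flow (1002-1008) is given below. Mean absolute pixel difference serves as a crude stand-in for whatever motion-determination algorithm an implementation would use; the frame representation (2-D lists of grayscale values), region tuples, and threshold are invented for the example.

```python
def mean_abs_diff(frame_a, frame_b, region):
    """Crude motion proxy: mean absolute pixel difference inside a region.

    Frames are 2-D lists of grayscale values; region is (top, left, bottom, right).
    """
    top, left, bottom, right = region
    total, count = 0, 0
    for r in range(top, bottom):
        for c in range(left, right):
            total += abs(frame_a[r][c] - frame_b[r][c])
            count += 1
    return total / count

def is_live(frames, face_region, background_region, similarity_threshold=0.5):
    """Method 1000 sketch: compare face motion to background motion.

    If the two motion magnitudes track each other closely over the sequence
    (as when a held photograph is moved), report an image (False); if the
    face moves relative to a static background, report a live user (True).
    """
    face_motion, bg_motion = 0.0, 0.0
    for prev, cur in zip(frames, frames[1:]):  # 1002/1004: motion over plural pictures
        face_motion += mean_abs_diff(prev, cur, face_region)
        bg_motion += mean_abs_diff(prev, cur, background_region)
    # 1006/1008: dissimilar face/background motion suggests a live user.
    return abs(face_motion - bg_motion) > similarity_threshold * max(face_motion, bg_motion, 1e-9)
```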
- a liveness detection method 1100 implemented by the processor 902 in the computing system 900 and depicted in FIG. 11 , comprises receiving from an image capture device plural pictures of a video stream comprising a face ( 1102 ); prompting a motion of a first portion of the face of a user ( 1104 ); determining if there is motion within the first portion of the face, the motion determined over the plural pictures ( 1106 ); and determining whether the face corresponds to an actual, live user or an image of the user based on the determination of motion, the determinations performed by a processor ( 1108 ).
- a second portion can further be determined, wherein the first portion and second portion of the face may refer to different features of the face where motion may be detected.
- the first portion may be the eyes, and the second portion may be the mouth, or vice versa.
- Other regions of the face are also contemplated for detection of motion, such as the eyebrows, nose, forehead (e.g., wrinkle), etc.
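The method 1100 flow, including the optional second portion, can be sketched along these lines. Simple frame differencing within a rectangular region stands in for a real facial-feature motion detector (e.g., blink detection); the function names, region tuples, and threshold are assumptions for illustration.

```python
def region_motion(frame_a, frame_b, region):
    """Mean absolute grayscale difference within a rectangular region."""
    top, left, bottom, right = region
    diffs = [abs(frame_a[r][c] - frame_b[r][c])
             for r in range(top, bottom) for c in range(left, right)]
    return sum(diffs) / len(diffs)

def prompted_portion_live(frames, first_portion, second_portion=None, threshold=1.0):
    """Method 1100 sketch: after prompting the user, require motion in the
    first portion of the face (say, the eyes), and optionally also in a
    second portion (say, the mouth)."""
    def moved(region):
        return any(region_motion(a, b, region) > threshold
                   for a, b in zip(frames, frames[1:]))
    if not moved(first_portion):
        return False
    if second_portion is not None and not moved(second_portion):
        return False
    return True
```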
- the scope of certain embodiments of the present disclosure includes embodying the functionality of certain embodiments of a liveness detection system in logic embodied in hardware and/or software-configured mediums. For instance, though described in software configured mediums, it should be appreciated that one or more of the liveness detection system and method functionality described herein may be implemented in hardware or a combination of both hardware and software.
Abstract
A liveness detection method comprising: receiving plural pictures of a video stream comprising a face and an adjacent background; determining motion of the face and the background, the motion determined over the plural pictures; comparing the motion between the face and the background; and determining whether the face corresponds to an actual, live user or an image of the user based on the comparison, the determinations performed by a processor.
Description
- This application is a Divisional of pending U.S. patent application Ser. No. 13/354,891, filed Jan. 20, 2012 and entitled “LIVENESS DETECTION SYSTEM BASED ON FACE BEHAVIOR”.
- The present disclosure is generally related to authentication systems.
- Certain authentication systems, such as those found in various logon processes for network or computer systems, may be based at least in part on the use of biometric information to authenticate a user. Biometric information generally includes physiological features of a user seeking access to the system and/or behavioral features of the user. For instance, physiological features include facial features, fingerprints, and/or retina features. Behavioral features include the behavior of the user, such as voice, gait, among other features.
- In authentication systems based on facial recognition, unauthorized users may seek to circumvent these safeguards by using an image (e.g., physical photo, electronic image, such as a cell phone image, etc.) of the authorized user to spoof the system into permitting access. Hence it is important for authentication systems to distinguish between the actual user and one pretending to be the user.
- In one embodiment, a method comprising: receiving plural pictures of a video stream comprising a face and an adjacent background; determining motion of the face and the background, the motion determined over the plural pictures; comparing the motion between the face and the background; and determining whether the face corresponds to an actual, live user or an image of the user based on the comparison, the determinations performed by a processor.
- In another embodiment, a method comprising: receiving from an image capture device plural pictures of a video stream comprising a face; prompting a motion of a first portion of the face of a user; determining if there is motion within the first portion of the face, the motion determined over the plural pictures; and determining whether the face corresponds to an actual, live user or an image of the user based on the determination of motion, the determinations performed by a processor.
- In another embodiment, a system comprising: an image capture device configured to receive plural pictures of a video stream comprising a face and an adjacent background; and a computing device comprising a processor that is configured to: determine motion of the face and the background, the motion determined over the plural pictures; compare the motion between the face and the background and within at least a portion of the face; and determine whether the face corresponds to an actual, live user or an image of the user based on the comparisons.
- Other systems, methods, features, and advantages of the present disclosure will be or become apparent to one with skill in the art upon examination of the following drawings and detailed description. It is intended that all such additional systems, methods, features, and advantages be included within this description, be within the scope of the present disclosure, and be protected by the accompanying claims.
- Many aspects of the disclosure can be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views.
- FIG. 1 is a flow diagram depicting an example embodiment of a liveness detection process that determines whether a target of an image capture device is an actual, live user or an image of the user.
- FIG. 2 is a schematic diagram that illustrates an embodiment of a process for dividing a face and background of a human being, captured by an image capture device, into plural sub-units, each of which is evaluated using motion estimation over a sequence of plural pictures in a video stream.
- FIG. 3 is a schematic diagram that illustrates an embodiment of a process for dividing a face and background of an image of a human being, captured by an image capture device, into plural sub-units, each of which is evaluated using motion estimation over a sequence of plural pictures in a video stream.
- FIGS. 4-5 are schematic diagrams that illustrate a method for determining a background region.
- FIGS. 6A-6C are schematic diagrams that illustrate an embodiment of a process for the determination of motion within a portion of a face by detecting an eye blink.
- FIG. 7 is a screen diagram that illustrates an embodiment of a process for prompting user motion to enable the determination of motion of the face.
- FIGS. 8A-8C are example screen diagrams that illustrate an embodiment of a process for prompting user motion through inducement.
- FIG. 9 is a block diagram that illustrates an embodiment of a computing system that includes a liveness detection system.
- FIG. 10 is a flow diagram of an example embodiment of a liveness detection method based on a difference in motion between a face and a background adjacent the face between plural pictures.
- FIG. 11 is a flow diagram of an example embodiment of a liveness detection method based on a difference in motion within a face, such as through a detection of changes in facial features between plural pictures after prompting motion of a user.
- Disclosed herein are certain embodiments of an invention that comprises a liveness detection system and method that differentiates between a live human being and a corresponding image. For instance, in authentication systems for network or computing systems, particularly those authentication systems that use logon procedures that involve face recognition, an unscrupulous user (e.g., referred to herein also as an invader) may seek to spoof an authorized user to gain access to the network or computing system. One method an invader may use is to present an image (e.g., photograph, electronic image, such as from a display screen of a communications device) of an authorized user to an image capture device (e.g., web cam, video camera, etc.) of the authentication system.
- To counter such attempts by an invader, one embodiment of a liveness detection system detects a face of a user, in range of an image capture device of an authentication system (e.g., such a user also referred to herein as a target), and a background adjacent to the face, determines the associated size and location of the face and background, computes motion over plural pictures or frames (e.g., pictures and frames are used herein interchangeably, with the understanding that certain embodiments contemplate both interlaced and progressive types of scanning formats), and compares the motion between the face and the background. Since an image tends to show like-motion between the face and background and live image capture of an authenticating user tends to show movement of the face relative to a static surrounding background, the liveness detection system uses the determined motion differences to make a determination as to whether the user (e.g., the target) attempting to access the network or computing system is a live human being or merely an image of a live human being. Such a determination is also referred to herein, generally, as a liveness determination (e.g., a determination of whether the target is a real, live human being or an image of the human being).
- At a time corresponding to this determination (e.g., before liveness determination, during, or after), the authentication system may access a database of stored images to further assess whether the facial features of the target match (e.g., threshold degree of matching) the facial features of the stored image. A match results in permitting access to the computing system or network (or at least permitting further authentication in some implementations).
- Some embodiments may implement other measures to determine liveness, or additional measures to substantiate it. For instance, a liveness detection system may command and/or induce a user to make a specified action or expression. Commands may be implemented by way of textual messages presented across a display screen of the authentication system that request that the target turn or move his or her head, or blink his or her eyes, among other possible commands. Commands may also be presented verbally, or using non-textual graphics. Induction may be implemented by way of causing a graphic, such as a displayed box-frame, to move across the screen to induce the target to position his or her face within the frame, or by causing a flash (e.g., by a coupled device or via action presented on a screen of the display device), which results in an eye blink by the target, among other methods of inducement. In some embodiments, a combination of commands and inducement may be implemented. Following these measures, the liveness detection system determines if the target is a real human being or an image depicting a human being by analyzing the information related to the specified action or expression, such as whether there is a change in facial expression (e.g., eye blink, smile, frown, etc.) or head (and hence face) movement.
- Digressing briefly, for conventional authentication systems based on face recognition, one security problem arises from the inability to differentiate between a live human being and an image of the human being in the input data. If an authentication system does not have this ability of differentiation, then an invader may deceive the system by using an image depicting an authenticated user. It is a difficult problem to differentiate between a human being and an image from static image data, but it is easier to differentiate between a three-dimensional (3D) face and a two-dimensional face image from captured video by analyzing the facial action or expression. For instance, one mechanism for liveness detection is to use a 3D or infrared camera to identify the depths of a stereoscopic-based face and plane image. However, it is a more difficult problem if a normal webcam is used. When an authentication system lacks stereoscopic information, certain embodiments of a liveness detection system can be used, which differentiates according to the movement of a target in the video. Both the real (e.g., live) face and the inanimate image can move in the video, but their respective movements produce different changes among the video pictures. For instance, when a real human moves his or her face in front of the image capture device, the background region is static. In contrast, when an image moves, the face in the image and surrounding background region move together. Hence, certain embodiments of a liveness detection system can differentiate between a live human being and an image according to these differences in motion.
- However, in some circumstances, the target may not move enough (e.g., through head movement or changes in facial expression) when attempting authentication. In the extreme case, if the target freezes in front of the camera without making any actions, it is difficult for an authentication system to differentiate between the user and a static image. That is, the captured video in such scenarios, comprising a face lacking in variation, is similar to a static face image, which makes it difficult for an authentication system to collect and assess the data needed to enable differentiation. In this case, the authentication system faces a dilemma: if the system treats the static face as a real human being, then it is easy for an invader to deceive the system by using an image. On the other hand, if the system treats the static target as an image, it is hard for some users to pass the system, because they do not make enough movement. This problem may cause inconvenience and/or confusion for these users. Certain embodiments of liveness detection systems address these authentication issues by prompting the target to make certain, expected actions, and then identifying the differences between the content of actions performed by a real human being and by an image. Accordingly, certain embodiments of a liveness detection system provide an advance over conventional authentication systems in the way of precision, efficiency, and/or security in the authenticating process.
- Having broadly summarized certain features of liveness detection systems and methods of the present disclosure, reference will now be made in detail to the description of the disclosure as illustrated in the drawings. While the disclosure is described in connection with these drawings, there is no intent to limit the disclosure to the embodiment or embodiments disclosed herein. Although the description identifies or describes specifics of one or more embodiments, such specifics are not necessarily part of every embodiment, nor are all various stated advantages associated with a single embodiment. On the contrary, the intent is to cover all alternatives, modifications and equivalents included within the spirit and scope of the disclosure as defined by the appended claims. Further, it should be appreciated in the context of the present disclosure that the claims are not necessarily limited to the particular embodiments set out in the description.
- Attention is directed to FIG. 1, which illustrates one embodiment of a process 100 for liveness detection, such as in an authentication system. It should be appreciated within the context of the present disclosure that the process 100 depicted in FIG. 1 is for illustration, and that other embodiments are contemplated to be within the scope of the disclosure, including embodiments using a subset of the illustrated processes, or using a variation in the order of processing, among other variations. In general, the liveness detection process 100 prompts motion from the target by commanding the target to make a specific action (e.g., behavior), and/or by inducing the target to make the specific action (though not shown in FIG. 1), as explained further below. The liveness detection process 100 receives a video stream comprising plural pictures that include information pertaining to the target's behavior, collects the information, and uses the information to differentiate between live human behavior and inanimate objects. In other words, based on the information, the liveness detection process 100 determines if the target in the captured video is a real human being or an image depicting a human being.
- The liveness detection process 100 starts image capture (102) using an image capture device, such as a webcam (104). A target is typically positioned in close proximity to the webcam, with the resulting image displayed live on a display screen of a computing device, such as a computer screen coupled to the webcam. In one embodiment, the liveness detection process 100 presents on the display screen of the computing device, shown to the target, a command (106) that attempts to prompt motion by the target. For instance, the command may be embodied in the form of a text message presented across the display screen, such as "Move your head" (108) or "Blink your eyes" (110). In some embodiments, other, non-textual graphics may be used to prompt action, such as a graphic of a face moving his or her head or blinking his or her eyes. In some embodiments, the display screen message or command may be omitted, and the commands may be embodied as verbal messages (e.g., through speakers coupled to the computing device).
- The process 100 further includes the webcam capturing a video of the target (112) containing the face movement or facial expression following the command. The resulting video stream, comprising the plural pictures of the captured face and background, is stored (e.g., buffered) in a storage device (e.g., memory, persistent storage, etc.) (114), and accessed to perform liveness detection (116). Liveness detection (116) includes the execution of one or more known algorithms to perform blink detection (118), motion estimation (120), and/or background analysis (122), as explained further below in association with FIGS. 4-5. Since the face in an image does not show an eye blinking like a real human user, and the face and the background region in an image move together when the image moves, the liveness detection process 100 may collect the differentiation information required by these algorithms. The liveness detection process 100 produces a result (124) that includes a determination of whether the target in the video is a real human being, or only an image depicting a human being. From this information, an authentication system may perform other measures before allowing access, though in some embodiments, such measures may be performed at least in part before the liveness determination.
- It should be appreciated within the context of the present disclosure that one or more of these processes may be implemented in a different order than that illustrated in FIG. 1. For instance, the image acquisition (capture) process may be commenced at the time of presenting the command (106). Further, though a webcam is illustrated as an example image capture device, other image capture devices not configured for use with the Internet may be used, such as a camcorder. Also, one or more of these processes may be omitted in some embodiments, such as the use of only one of the commands (108) or (110), or the omission of the commands (106) altogether.
- Having described an example embodiment of a liveness detection process 100, attention is directed to FIGS. 2-3, which illustrate the example motion estimation process 120 implemented by certain embodiments of a liveness detection system. After the liveness detection system presents one or more commands (and/or provides an induction, as explained further below), or after video capture in some embodiments without the use of commands and/or induction, the liveness detection system can implement a process of differentiation according to the received video pictures. For face motion or movement (motion and movement are used interchangeably herein), the main difference between a real human being and an image is the magnitude of relative movement. When a target moves his or her face in front of the camera, the background region is static. But if the target (e.g., an invader) holds an image (e.g., the screen picture of a cell phone or another device) of an authorized user and moves the image in front of the camera, then the background and the face in the image will move together. Moreover, the movements of the face, the background, and the image picture are in agreement.
- Accordingly, one embodiment of a liveness detection system obtains information of movement using motion estimation to analyze the pictures in the video. The liveness detection system receives plural (e.g., two) pictures close in time, and divides the pictures into many sub-units (e.g., square regions, such as macroblocks, though the regions may be of rectangular form in some embodiments). Then, for each sub-unit (or in some embodiments, for each group of sub-units), the liveness detection system estimates the moving direction and magnitude between the two pictures. To compare the movements of the face and background, the liveness detection system determines the location and size of the face. For instance, the liveness detection system may implement well-known face detection algorithms to find discriminating information of the face.
The liveness detection system may also obtain information of the background, such as through the use of well-known object segmentation, or in some embodiments, via use of head detection algorithms to filter out the possible foreground region and choose the region closest to the face from the remaining region. According to the result of motion estimation, the liveness detection system computes the moving direction and magnitude of the face and the background. If the moving directions and magnitudes of the face and background are similar enough (e.g., meet or exceed a threshold degree of similarity), then the liveness detection system determines that the target in the input video is an image, otherwise, the liveness detection system determines that the target is a real human being. Note that one having ordinary skill in the art in the context of the present disclosure may use other criteria, such as determining whether the movement and direction between face and background are different enough (e.g., per a given threshold). Then (if different enough) the liveness detection system determines that the target in the input video is a real human being, otherwise the system determines that the target is an image.
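As one concrete (and deliberately simplified) illustration of estimating a moving direction and magnitude per sub-unit, the following sketch performs exhaustive-search block matching between two pictures. The patent does not specify a particular motion estimation algorithm; the block size, search range, and sum-of-absolute-differences cost used here are assumptions for the example.

```python
def block_motion(prev, cur, top, left, size, search=2):
    """Exhaustive-search block matching: find the (dy, dx) displacement of the
    sub-unit at (top, left) in `prev` that best matches `cur`, within +/-search.
    Frames are 2-D lists of grayscale values."""
    h, w = len(prev), len(prev[0])
    best = (0, 0)
    best_cost = float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ny, nx = top + dy, left + dx
            if ny < 0 or nx < 0 or ny + size > h or nx + size > w:
                continue  # candidate block falls outside the picture
            # Sum of absolute differences between the two blocks.
            cost = sum(abs(prev[top + r][left + c] - cur[ny + r][nx + c])
                       for r in range(size) for c in range(size))
            if cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best

def motion_field(prev, cur, size=2):
    """One motion vector per sub-unit, as in the division shown in FIGS. 2-3."""
    return {(top, left): block_motion(prev, cur, top, left, size)
            for top in range(0, len(prev), size)
            for left in range(0, len(prev[0]), size)}
```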
- Referring to
FIGS. 2 and 3 , shown are schematic diagrams illustrating the above-describedmotion estimation process 120. InFIG. 2 , apicture 200 is shown (and inFIG. 3 , a picture 300), divided up by the liveness detection system into plural sub-units, such assub-unit 202. The sub-units 202 are depicted as squares of a given size, though it should be appreciated within the context of the present disclosure that other sizes may be used, or rectangles may be used in some embodiments. Thepicture 200 comprises a background (or equivalently, background region) 204 and aface 206. Theface 206 is shown bounded by a bounding region 208 (e.g., depicted as a square in this example, though other geometrical formats may be used covering substantially the same or different areas). Note that there may be one or more bounding regions (e.g., a second bounding region for the background or a portion thereof). For instance, and referring toFIGS. 4-5 , shown is a reproduction of thepictures FIGS. 2-3 with additional bounding regions as shown bypictures FIGS. 2-3 and 4-5 are omitted here except for purposes of further explanation. InFIGS. 4 and 5 , the bounding regions includeface bounding regions head bounding regions bounding regions FIGS. 4-5 , thebackground regions bounding regions background regions regions 402 and 502) is detected, and an estimation of each respective head region (boundingregions 404 and 504) is computed based on the detectedface regions background regions face bounding regions 402 and 502) are determined, with the head regions (e.g., bounded byhead bounding regions 404 and 504) substantially excluded. For instance, thebackground region 408 is determined by computing the difference between the background determiningbounding region 406 and thehead bounding regions 404. Likewise, thebackground region 508 is determined by computing the difference between the background determiningbounding region 506 and thehead bounding regions 504. 
It is noted that the real human being is shown in FIG. 4, which has a picture 410 of a sun and mountains in the background region 408, and the image of a human being is presented in FIG. 5 (which omits the picture in the background). - Referring back to
FIGS. 2 and 3, for each sub-unit 202, the magnitude and direction of motion is computed for the area of the picture bounded by the sub-unit 202 in comparison to at least one other picture, as is known in motion estimation. Hence, each sub-unit 202 is associated with a motion vector 210 (e.g., represented as a line or dot in the sub-unit 202) representing the direction and magnitude of the motion. In FIG. 3, a picture 300 is similarly divided up into sub-units 302, with the picture 300 also including a background 304, face 306, bounding region 308, and motion vectors 310, as similarly described above for FIG. 2. In FIG. 3, it is observed that most of the motion vectors 310 for the background 304 have a similar movement to the face 306 (e.g., diagonal motion vectors of like magnitude), so the target captured in this picture 300 is determined by the liveness detection system to be an image (e.g., an invader). In contrast, in FIG. 2, it is observed that the motion vectors 210 for the background 204 have a different movement compared to the face 206 (e.g., diagonal motion vectors 210 in the majority of the face sub-units 202 compared to primarily dots in the background 204), so the target captured in this picture 200 is determined by the liveness detection system to be a real human being (e.g., live user), given the lack of similarity in motion vectors when compared to the background 204. - In addition to, or in some embodiments in lieu of, a determination of movement of the face relative to the background, an evaluation by the liveness detection system of a facial expression may be used to differentiate between a real, live human being and an image. One difference between video of a live human being and an image is with regard to facial features. In other words, it is unlikely for there to be a change in facial features when capturing, over plural pictures, an image held by a user.
In embodiments of a liveness detection system that uses facial expressions in its algorithm, one condition for determining the target to be a real, live human being is the detected presence of a facial expression. Accordingly, one embodiment of a liveness detection system uses a facial expression detection algorithm to detect the presence of a facial expression. Since different facial expressions impose different changes in the facial features, the detection algorithms for those facial expressions may also differ.
- For example, eye-blink detection algorithms check the eye region, and smile detection algorithms check the mouth region. One example eye-blink detection algorithm checks the variation of eye regions in every picture of the captured video. With reference to
FIGS. 6A-6C, shown is an illustration of plural pictures 600 (e.g., 600A, 600B, and 600C) of a captured video stream revealing an eye blink for a given target. It is noted that an eye-blink algorithm may be considered a sub-category of motion estimation directed to regions within the face (e.g., and not necessarily with respect to the background motion). From that perspective, the eye-blink algorithm may be performed by using a bounding region 602 that focuses on a detected region (in this example, the eye region of the face, though other regions in addition to or in lieu of the eye region may be bounded, such as the mouth, eyebrows, forehead, etc.), and sub-divided into plural sub-units (not shown in FIGS. 6A-6C) to compare motion within the particular bounded region between the plural pictures of a given picture sequence. Note that the bounding region 602 is depicted as an ellipse, with the understanding that other geometric configurations may be used in some embodiments, covering substantially the same, less, or more of the region. In FIGS. 6A and 6C, the eyes bounded by bounding region 602 are depicted as wide open, whereas in FIG. 6B, the target has blinked, as shown by the closed eyes in the bounding region 602. In one embodiment, if the captured video has some period with a large variation in movement in the eye region (e.g., over the span of three pictures in the illustrative example shown in FIGS. 6A-6C), and small or no variation in other face regions at the same time, then the liveness detection system determines that an eye blink is present in the period. In some embodiments, variations in regions other than those bounded by the bounding region 602 are not considered, and hence the variation in eye movement alone is used to make a determination.
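The eye-blink condition just described (large variation inside the eye bounding region, small variation elsewhere in the face at the same time) can be sketched as follows. This is a hedged illustration: the pixel-difference measure, region tuples, and threshold values are assumptions, not values from the disclosure.

```python
# Illustrative sketch of the eye-blink check: flag a blink when the
# frame-to-frame variation inside the eye region is large while the
# variation in the rest of the face stays small.

def region_variation(prev, curr, region):
    """Sum of absolute pixel differences inside a (x0, y0, x1, y1) region."""
    x0, y0, x1, y1 = region
    return sum(abs(curr[y][x] - prev[y][x])
               for y in range(y0, y1) for x in range(x0, x1))

def blink_detected(frames, eye_region, face_region,
                   eye_thresh=50, face_thresh=20):
    """Scan consecutive frame pairs for an eye-blink signature.

    frames: list of 2-D grayscale pixel grids; thresholds are hypothetical.
    """
    for prev, curr in zip(frames, frames[1:]):
        eye_var = region_variation(prev, curr, eye_region)
        # Variation in the face outside the eye region, approximated here
        # by the whole-face variation minus the eye contribution.
        rest_var = region_variation(prev, curr, face_region) - eye_var
        if eye_var >= eye_thresh and rest_var <= face_thresh:
            return True
    return False
```

A similar predicate could be written for other bounded regions (mouth, eyebrows, forehead) by substituting the region tuple and thresholds.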
Since capture of an image over the same period (e.g., span of time) will not reflect such behavior, the detected presence of one or more eye blinks (from a single eye or both eyes) can be used to determine that the target is a real, live human being. A similar approach may be used for other facial expressions in other regions of the face. - As noted in the description pertaining to
FIG. 1, there may be circumstances where the target does not move or moves in an imperceptible manner, and hence a liveness detection process 100 implemented by an embodiment of the liveness detection system includes the command procedure 106 (FIG. 1) that prompts the target to move his or her face (or move features within the face, such as an eye or eyes). With reference to FIG. 7, shown is a screen diagram of a screen 700 (e.g., on a computer monitor or other display device) that shows a live capture 702 of a video stream pertaining to the target, and a message 704 that prompts the target into some specific action. In this example, the target may already have embarked on the authentication process, entering his or her name (e.g., John Doe), and then the authentication system embarks on the biometric portion (e.g., motion detection followed by facial recognition) with a presentation of the target's entered name on the screen 700. It should be appreciated within the context of the present disclosure that other variations of an authentication process (e.g., pin or password entry, etc.) are contemplated to be within the scope of the disclosure. The message 704 requests the following: "John: Please blink your eyes so that we may further authenticate your logon." It should be appreciated that other specific actions may be requested, such as having the target shift his or her position (e.g., side-to-side, or moving further away or closer), and/or make other facial expressions. Further, the liveness detection system may request several movements to collect sufficient information to enable a determination of liveness, each acting to substantiate the prior determinations (or, if done concurrently, to substantiate the determinations based on other requested movements) of liveness (and/or serving as a basis for determining liveness where other determinations are unsatisfactory).
As noted above, the message presented by the liveness detection system may be embodied as a voice message or as a combination of a verbal and visual message. - If the target does not notice the command, or does not follow the command because he or she does not understand its necessity, then the command alone is not enough to guarantee a specific action or behavior of the target. One mechanism that certain embodiments of the liveness detection system may employ is to prompt user motion through induction. The induction may be performed alone, or in some embodiments, together with a message (e.g., a direct command). The use of induction increases the probability that the target makes an expected specific action, facilitating the liveness detection system in making a liveness determination. Reference is made to
FIGS. 8A-8C, which are screen diagrams depicting a display screen 800 (e.g., 800A, 800B, and 800C) that illustrate an induction process performed by certain embodiments of a liveness detection system. In the example depicted in FIGS. 8A-8C, the liveness detection system attempts to prompt the target to move his or her face. In FIGS. 8A-8C, the target 802, captured by an image capture device, is shown displayed on the screen 800A (FIG. 8A), 800B (FIG. 8B), and 800C (FIG. 8C) with a graphic pattern 804 (e.g., a square, though not limited to square geometry) that is moved in a manner to induce the target 802 to position (e.g., follow and align) his or her face 806 within the graphic pattern 804. - For instance, in
FIG. 8A, the graphic pattern 804 initially centers itself on the face 806 of the target 802. In some embodiments, the graphic pattern 804 may be positioned elsewhere relative to the center of the face 806. Referring to FIG. 8B, the graphic pattern 804 has been moved from the center of the target's face (as shown in FIG. 8A) to the target's right side of the face 806. In FIG. 8C, the target 802 has positioned his or her face (and possibly moved his or her chair) to the center of the graphic pattern 804, and by doing so, has satisfied the requirement of a moving face. Note that in some embodiments, the graphic pattern 804 can be designed as a shooting screen of a camera to increase the probability that the user follows the pattern 804. For instance, the shooting screen may comprise a mark (e.g., a "+" mark, grid, etc.) that makes the user want or intend to align his or her face with the mark. In other words, the mark may look like an object that appears in a viewing window of a camera (or something like a gun-sight grid for aiming purposes). In some embodiments, the liveness detection system may combine the command and the induction. For example, the liveness detection system may present the command "Please put your face into the square", and then move the graphic pattern 804 to induce the target 802 to move if, for instance, the target 802 is not currently within the graphic pattern 804. - In some embodiments, the user may be prompted to incur motion (e.g., induced motion) by non-textual and/or non-verbal external stimuli that cause a reflexive-type movement. For instance, a flash on an image capture device may be triggered by the liveness detection system, causing the user to blink or turn his or her head. As another example, a lighting device (e.g., light-emitting diode (LED), etc.) and/or sound device (e.g., bell, audio tone, etc.) coupled to the liveness detection system may be triggered to draw the user's attention (and hence induce head and/or eye movement).
In some embodiments, the display presented to the user may cause a flash or other movement to induce reflexive movement.
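The induction step described above (moving a graphic pattern and checking whether the detected face follows it) can be sketched as follows. This is an illustrative sketch only: the coordinate representation, the tolerance value, and the assumption that a face detector supplies per-step face centers are all hypothetical.

```python
# Hedged sketch of the induction check: the pattern is displayed at a
# sequence of screen positions; the target is deemed to have followed it
# if the detected face center stays within a tolerance of the pattern.
import math

def follows_pattern(pattern_positions, face_positions, tolerance=40):
    """True if the face tracked the moving pattern at every step.

    pattern_positions / face_positions: matched (x, y) screen coordinates
    per displayed step; tolerance is a hypothetical pixel radius.
    """
    return all(math.hypot(px - fx, py - fy) <= tolerance
               for (px, py), (fx, fy) in zip(pattern_positions, face_positions))

# The pattern is shifted from screen center toward the right; a cooperative
# live target re-centers his or her face inside it at each step.
pattern = [(320, 240), (420, 240), (420, 240)]
cooperative = [(318, 244), (410, 238), (422, 241)]
uncooperative = [(318, 244), (320, 240), (321, 239)]  # face never moves
```

In practice the per-step face positions would come from the same face-detection stage used elsewhere in the liveness detection system, and a failure to follow the pattern could trigger the explicit command described above.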
- Having described an example operation of certain embodiments of a liveness detection system and method (e.g., process), attention is directed to
FIG. 9, which illustrates an embodiment of a computing system 900. A liveness detection system may be embodied in the entirety of the computing system 900 depicted in FIG. 9, or as a subset thereof in some embodiments. The example computing system 900 is shown as including a personal computer, though it should be appreciated within the context of the present disclosure that the computing system 900 may comprise any one of a plurality of computing devices, including a dedicated player appliance, set-top box, laptop, computer workstation, cellular phone, personal digital assistant (PDA), handheld or pen-based computer, embedded appliance, or other communication (wired or wireless) device. In some embodiments, the liveness detection system may be implemented on a network device (also referred to herein as a computing system), similar to the computing system 900, located upstream of the computing system 900, such as a server, router, gateway, etc., or implemented with similar functionality distributed among plural devices (e.g., in a server device and the computing device). An upstream network device may be configured with similar components, and hence discussion of the same is omitted for brevity. - The
computing system 900 may, for instance, comprise a processor 902, one or more input/output (I/O) interfaces 904, a network interface device 906, and a display device 908 connected across a data bus 910. The computing system 900 may further comprise a memory 912 that includes an operating system 914 and application-specific software, such as a player application 916 in the case of implementing player functionality for the playback of media content, such as video and/or audio (e.g., movies, music, games, etc.). In some embodiments, the player application 916 may be implemented as a software program configured to read and play back content residing on a disc 922 (or from other high-definition video sources) according to the specifications defined by standards such as the Blu-ray Disc format specification, HD-DVD, etc. The memory 912 comprises, among other logic (e.g., software), authentication logic 918, which includes, in one embodiment, liveness detection logic 920. In some embodiments, the authentication logic 918 may be implemented as a logon procedure (e.g., secured or licensed access) associated with access to content on the disc 922, access to application programs in memory 912, or in some embodiments, implemented as part of a logon procedure specific to access to the computing system 900 (and/or in some embodiments, specific to a remotely-located network device as implemented through a browser program residing in the computing system 900). - In one example operation, an attempted logon by a target to the
computing system 900 comprises providing a logon screen on the display 908 and activation of an image capture device 924 coupled to the I/O interface 904 (or integrated in the computing system 900 in some embodiments). The target may enter (e.g., via keyboard or other input devices, including via voice activation) personal information (e.g., his or her name, password, etc.), and then be evaluated for biometric authenticity (e.g., facial recognition, liveness determination, etc.). For instance, the target may sit (or stand) in front of (or in range of) the image capture device 924, and be prompted to incur motion (e.g., of the face or features thereon) by presenting text messages on a screen of the display device 908 (or by other mechanisms as explained above). In some embodiments, the target merely positions himself or herself in front of the screen and a motion determination is made (e.g., without prompting). If motion is not detected in this latter scheme, the liveness detection logic 920 may commence an additional approach, commanding movement by the target and/or inducing movement in the manner explained above. In some embodiments, entering of personal information (e.g., entering his or her name) does not commence until a liveness determination has taken place. Other variations of an authentication procedure are contemplated to be within the scope of the disclosure. For instance, additional biometric algorithms may be employed in some embodiments. In general, certain embodiments of the liveness detection logic 920 comprise suitable logic to implement the liveness detection process 100 (including the addition of inducement processing) illustrated in FIG. 1, including the command processing 106 (FIG. 1) and liveness detection processing 116 (FIG. 1) and their associated algorithms. - The
processor 902 may include any custom-made or commercially available processor, a central processing unit (CPU) or an auxiliary processor among several processors associated with the computing system 900, a semiconductor-based microprocessor (in the form of a microchip), one or more ASICs, a plurality of suitably configured digital logic gates, and other well-known electrical configurations comprising discrete elements, both individually and in various combinations, to coordinate the overall operation of the computing system. - The
memory 912 may include any one of a combination of volatile memory elements (e.g., random-access memory (RAM, such as DRAM, SRAM, etc.)) and nonvolatile memory elements (e.g., ROM, hard drive, tape, CDROM, etc.). The memory 912 typically comprises the native operating system 914, one or more native applications, emulation systems, or emulated applications for any of a variety of operating systems and/or emulated hardware platforms, emulated operating systems, etc. For example, the applications may include application-specific software stored on a computer-readable medium (e.g., memory, persistent storage, etc.) for execution by the processor 902, and may include the authentication logic 918 and liveness detection logic 920. One of ordinary skill in the art will appreciate that the memory 912 may, and typically will, comprise other components which have been omitted for purposes of brevity, or in some embodiments, may omit certain components, such as the player application 916. - Input/
output interfaces 904 provide any number of interfaces for the input and output of data. For example, where the computing system 900 comprises a personal computer, these components may interface with a user input device, which may be a body part of a viewer (e.g., hand), a keyboard, a mouse, or a voice-activated mechanism. Where the computing system 900 comprises a handheld device (e.g., PDA, mobile telephone), these components may interface with function keys or buttons, a touch-sensitive screen, a stylus, a body part, etc. The input/output interfaces 904 may further include one or more disc drives (e.g., optical disc drives, magnetic disc drives) to enable playback of multimedia content residing on the computer-readable medium 922, and as explained above, may interface with the image capture device 924, as well as other devices, such as remote alarms, locking/unlocking devices (e.g., electromagnetic devices), etc. - The
network interface device 906 comprises various components used to transmit and/or receive data over a network environment. By way of example, the network interface device 906 may include a device that can communicate with both inputs and outputs, for instance, a modulator/demodulator (e.g., a modem), a wireless (e.g., radio frequency (RF)) transceiver, a telephonic interface, a bridge, a router, a network card, etc. The computing system 900 may further comprise mass storage (not shown). For some embodiments, the mass storage may include a data structure (e.g., database) to store image files (and other data files, including identifying information such as names, passwords, pins, etc.) of authorized users for comparison during the authentication process. In some embodiments, the image and data files of authorized users may be located in a remote storage device (e.g., network storage). - The
display device 908 may comprise a computer monitor or a plasma screen for a PC, or a liquid crystal display (LCD) on a handheld device, head-mounted device, or other computing device. In some embodiments, the display device 908 may be separate from the computing system 900, and in some embodiments, integrated in the computing device. - In the context of this disclosure, a "computer-readable medium" stores one or more programs and data for use by or in connection with the instruction execution system, apparatus, or device. The computer-readable medium is non-transitory, and may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium may include, in addition to those set forth above, the following: an electrical connection (electronic) having one or more wires, a portable computer diskette (magnetic), a random access memory (RAM) (electronic), a read-only memory (ROM) (electronic), an erasable programmable read-only memory (EPROM, EEPROM, or Flash memory) (electronic), and a portable compact disc read-only memory (CDROM) (optical).
- Having provided a detailed description of certain embodiments of liveness detection systems and methods, it should be appreciated that one embodiment of a
liveness detection method 1000, implemented by a processor 902 in the computing system 900 and depicted in FIG. 10, comprises receiving plural pictures of a video stream comprising a face and an adjacent background (1002); determining motion of the face and the background, the motion determined over the plural pictures (1004); comparing the motion between the face and the background (1006); and determining whether the face corresponds to an actual, live user or an image of the user based on the comparison, the determinations performed by the processor (1008). - In view of the foregoing disclosure, it should be appreciated that another embodiment of a
liveness detection method 1100, implemented by the processor 902 in the computing system 900 and depicted in FIG. 11, comprises receiving from an image capture device plural pictures of a video stream comprising a face (1102); prompting a motion of a first portion of the face of a user (1104); determining if there is motion within the first portion of the face, the motion determined over the plural pictures (1106); and determining whether the face corresponds to an actual, live user or an image of the user based on the determination of motion, the determinations performed by a processor (1108). Moreover, a second portion can further be determined, wherein the first portion and the second portion of the face may refer to different features of the face where motion may be detected. For instance, in one embodiment, the first portion may be the eyes, and the second portion may be the mouth, or vice versa. Other regions of the face are also contemplated for detection of motion, such as the eyebrows, nose, forehead (e.g., wrinkle), etc. - Any process descriptions or blocks in flow diagrams should be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps in the process, and alternate implementations are included within the scope of the embodiments of the present disclosure in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, and/or with one or more functions omitted in some embodiments, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present disclosure. Also, though certain architectures are illustrated in the present disclosure, it should be appreciated that the methods described herein are not necessarily limited to the disclosed architectures.
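The overall flow of the second method (receive pictures, prompt motion of a face portion, test for motion in one or two portions, classify) can be sketched at a high level. This is a hedged outline of the claimed steps, not an implementation: the `motion_in_region` hook is a hypothetical stand-in for the region-based motion checks described earlier, and the string labels are illustrative.

```python
# Illustrative outline of the prompted-motion liveness flow: prompt the
# user, test for motion in a first (and optionally second) face portion,
# and classify the target as a live user or an image.

def liveness_decision(frames, motion_in_region, prompt=None,
                      first_region="eyes", second_region=None):
    """Return "live" or "image" from per-region motion predicates.

    motion_in_region(frames, region) -> bool is a hypothetical hook onto
    whatever per-region motion estimator the system provides; prompt, if
    given, issues the command/induction for the named region.
    """
    if prompt is not None:
        prompt(first_region)  # e.g., display "Please blink your eyes"
    if not motion_in_region(frames, first_region):
        return "image"
    if second_region is not None and not motion_in_region(frames, second_region):
        return "image"
    return "live"
```

Requiring motion in a second, different portion of the face (eyes plus mouth, for instance) tightens the decision in the manner contemplated above.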
- In addition, though various delineations in software logic have been depicted in the accompanying figures and described in the present disclosure, it should be appreciated that one or more of the functions performed by the various logic described herein may be combined into fewer software modules and/or distributed among a greater number. Further, though certain disclosed benefits/advantages inure to certain embodiments of certain liveness detection systems, it should be understood that not every embodiment necessarily provides every benefit/advantage.
- In addition, the scope of certain embodiments of the present disclosure includes embodying the functionality of certain embodiments of a liveness detection system in logic embodied in hardware and/or software-configured mediums. For instance, though described in software configured mediums, it should be appreciated that one or more of the liveness detection system and method functionality described herein may be implemented in hardware or a combination of both hardware and software.
- It should be emphasized that the above-described embodiments of the present disclosure are merely possible examples of implementations, merely set forth for a clear understanding of the principles of the disclosure. Many variations and modifications may be made to the above-described embodiment(s) without departing substantially from the spirit and principles of the disclosure. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims.
Claims (5)
1. A liveness detection method, comprising:
receiving from an image capture device plural pictures of a video stream comprising a face;
prompting a motion of a first portion of the face of a user;
determining if there is motion within the first portion of the face, the motion determined over the plural pictures; and
determining whether the face corresponds to an actual, live user or an image of the user based on the determination of motion, the determinations performed by a processor.
2. The method of claim 1 , wherein determining if there is motion within the first portion of the face comprises determining if there is an eye blink or a change in facial expression.
3. The method of claim 1 , further comprising determining if there is motion within a second portion of the face that is different than the first portion, the motion determined over the plural pictures, wherein determining whether the face corresponds to the actual, live user or the image of the user is based on the determination of motion in the first and second portions.
4. The method of claim 1 , wherein prompting the motion to the user comprises generating a message to the user, inducing a specific action of the user, or a combination of both, wherein generating the message is a text, image, or voice command requesting that the user make the specific action, wherein inducing the specific action of the user comprises causing the specific action, wherein the specific action includes a reflexive movement of the user based on non-textual, non-verbal, external stimuli.
5. The method of claim 1 , wherein determining whether the face corresponds to the actual, live user or the image of the user is based on the prompted motion.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/289,872 US20140270412A1 (en) | 2012-01-20 | 2014-05-29 | Liveness detection system based on face behavior |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/354,891 US9025830B2 (en) | 2012-01-20 | 2012-01-20 | Liveness detection system based on face behavior |
US14/289,872 US20140270412A1 (en) | 2012-01-20 | 2014-05-29 | Liveness detection system based on face behavior |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/354,891 Division US9025830B2 (en) | 2012-01-20 | 2012-01-20 | Liveness detection system based on face behavior |
Publications (1)
Publication Number | Publication Date |
---|---|
US20140270412A1 true US20140270412A1 (en) | 2014-09-18 |
Family
ID=48797240
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/354,891 Active 2033-05-17 US9025830B2 (en) | 2012-01-20 | 2012-01-20 | Liveness detection system based on face behavior |
US14/289,872 Abandoned US20140270412A1 (en) | 2012-01-20 | 2014-05-29 | Liveness detection system based on face behavior |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/354,891 Active 2033-05-17 US9025830B2 (en) | 2012-01-20 | 2012-01-20 | Liveness detection system based on face behavior |
Country Status (1)
Country | Link |
---|---|
US (2) | US9025830B2 (en) |
Cited By (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104794464A (en) * | 2015-05-13 | 2015-07-22 | 上海依图网络科技有限公司 | In vivo detection method based on relative attributes |
CN104869164A (en) * | 2015-05-26 | 2015-08-26 | 北京金和网络股份有限公司 | BS software application system configuration and image rapid switching method on registration page |
CN106096519A (en) * | 2016-06-01 | 2016-11-09 | 腾讯科技(深圳)有限公司 | Live body discrimination method and device |
CN106557723A (en) * | 2015-09-25 | 2017-04-05 | 北京市商汤科技开发有限公司 | A kind of system for face identity authentication with interactive In vivo detection and its method |
CN107229927A (en) * | 2017-08-03 | 2017-10-03 | 河北工业大学 | A kind of Face datection anti-fraud method |
CN107239735A (en) * | 2017-04-24 | 2017-10-10 | 复旦大学 | A kind of biopsy method and system based on video analysis |
CN107346419A (en) * | 2017-06-14 | 2017-11-14 | 广东欧珀移动通信有限公司 | Iris identification method, electronic installation and computer-readable recording medium |
US20180046850A1 (en) * | 2016-08-09 | 2018-02-15 | Mircea Ionita | Methods and systems for enhancing user liveness detection |
US9934443B2 (en) * | 2015-03-31 | 2018-04-03 | Daon Holdings Limited | Methods and systems for detecting head motion during an authentication transaction |
CN108229326A (en) * | 2017-03-16 | 2018-06-29 | 北京市商汤科技开发有限公司 | Face false-proof detection method and system, electronic equipment, program and medium |
CN108369785A (en) * | 2015-08-10 | 2018-08-03 | 优替控股有限公司 | Activity determination |
CN108696641A (en) * | 2018-05-15 | 2018-10-23 | Oppo(重庆)智能科技有限公司 | Call reminding method, device, storage medium and mobile terminal |
US10217009B2 (en) | 2016-08-09 | 2019-02-26 | Daon Holdings Limited | Methods and systems for enhancing user liveness detection |
US20190095737A1 (en) * | 2017-09-28 | 2019-03-28 | Ncr Corporation | Self-service terminal (sst) facial authentication processing |
US10250598B2 (en) | 2015-06-10 | 2019-04-02 | Alibaba Group Holding Limited | Liveness detection method and device, and identity authentication method and device |
US20190166119A1 (en) * | 2017-11-29 | 2019-05-30 | Ncr Corporation | Security gesture authentication |
CN110059624A (en) * | 2019-04-18 | 2019-07-26 | 北京字节跳动网络技术有限公司 | Method and apparatus for detecting living body |
CN110334637A (en) * | 2019-06-28 | 2019-10-15 | 百度在线网络技术(北京)有限公司 | Human face in-vivo detection method, device and storage medium |
US10628661B2 (en) | 2016-08-09 | 2020-04-21 | Daon Holdings Limited | Methods and systems for determining user liveness and verifying user identities |
US20200143186A1 (en) * | 2018-11-05 | 2020-05-07 | Nec Corporation | Information processing apparatus, information processing method, and storage medium |
US10679083B2 (en) | 2017-03-27 | 2020-06-09 | Samsung Electronics Co., Ltd. | Liveness test method and apparatus |
US10839251B2 (en) | 2017-06-26 | 2020-11-17 | Rank One Computing Corporation | Method and system for implementing image authentication for authenticating persons or items |
US11093770B2 (en) * | 2017-12-29 | 2021-08-17 | Idemia Identity & Security USA LLC | System and method for liveness detection |
US11115408B2 (en) | 2016-08-09 | 2021-09-07 | Daon Holdings Limited | Methods and systems for determining user liveness and verifying user identities |
US11170252B2 (en) * | 2019-09-16 | 2021-11-09 | Wistron Corporation | Face recognition method and computer system thereof |
US11176392B2 (en) | 2017-03-27 | 2021-11-16 | Samsung Electronics Co., Ltd. | Liveness test method and apparatus |
Families Citing this family (54)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5541407B1 (en) * | 2013-08-09 | 2014-07-09 | 富士ゼロックス株式会社 | Image processing apparatus and program |
US9305225B2 (en) * | 2013-10-14 | 2016-04-05 | Daon Holdings Limited | Methods and systems for determining user liveness |
CN104639517B (en) * | 2013-11-15 | 2019-09-17 | 阿里巴巴集团控股有限公司 | The method and apparatus for carrying out authentication using human body biological characteristics |
CN104683302A (en) * | 2013-11-29 | 2015-06-03 | 国际商业机器公司 | Authentication method, authentication device, terminal equipment, authentication server and system |
US20160057138A1 (en) * | 2014-03-07 | 2016-02-25 | Hoyos Labs Ip Ltd. | System and method for determining liveness |
EP3189475A4 (en) * | 2014-09-03 | 2018-04-11 | Samet Privacy, LLC | Image processing apparatus for facial recognition |
US9430696B2 (en) * | 2014-10-09 | 2016-08-30 | Sensory, Incorporated | Continuous enrollment for face verification |
EP4047551A1 (en) * | 2014-10-15 | 2022-08-24 | NEC Corporation | Impersonation detection device, impersonation detection method, and recording medium |
US10198645B2 (en) * | 2014-11-13 | 2019-02-05 | Intel Corporation | Preventing face-based authentication spoofing |
US20160140390A1 (en) * | 2014-11-13 | 2016-05-19 | Intel Corporation | Liveness detection using progressive eyelid tracking |
CN107077589B (en) * | 2014-11-13 | 2021-02-09 | 英特尔公司 | Facial spoofing detection in image-based biometrics |
US9811649B2 (en) * | 2014-11-13 | 2017-11-07 | Intel Corporation | System and method for feature-based authentication |
US9875396B2 (en) | 2014-11-13 | 2018-01-23 | Intel Corporation | Spoofing detection in image biometrics |
US9928603B2 (en) * | 2014-12-31 | 2018-03-27 | Morphotrust Usa, Llc | Detecting facial liveliness |
CN106960177A (en) * | 2015-02-15 | 2017-07-18 | 北京旷视科技有限公司 | Live-face verification method and system, and live-face verification device |
WO2017000217A1 (en) * | 2015-06-30 | 2017-01-05 | 北京旷视科技有限公司 | Liveness detection method and device, and computer program product |
US9990537B2 (en) | 2015-07-20 | 2018-06-05 | International Business Machines Corporation | Facial feature location using symmetry line |
US9996732B2 (en) | 2015-07-20 | 2018-06-12 | International Business Machines Corporation | Liveness detector for face verification |
US10268911B1 (en) * | 2015-09-29 | 2019-04-23 | Morphotrust Usa, Llc | System and method for liveness detection using facial landmarks |
KR102077198B1 (en) * | 2015-10-31 | 2020-02-13 | 후아웨이 테크놀러지 컴퍼니 리미티드 | Facial verification method and electronic device |
WO2017139325A1 (en) * | 2016-02-09 | 2017-08-17 | Aware, Inc. | Face liveness detection using background/foreground motion analysis |
US10311219B2 (en) * | 2016-06-07 | 2019-06-04 | Vocalzoom Systems Ltd. | Device, system, and method of user authentication utilizing an optical microphone |
US10089521B2 (en) * | 2016-09-02 | 2018-10-02 | VeriHelp, Inc. | Identity verification via validated facial recognition and graph database |
JP6773493B2 (en) * | 2016-09-14 | 2020-10-21 | 株式会社東芝 | Detection device, detection method, and detection program |
CN107886032B (en) | 2016-09-30 | 2021-12-14 | 阿里巴巴集团控股有限公司 | Terminal device, smart phone, authentication method and system based on face recognition |
US10950275B2 (en) | 2016-11-18 | 2021-03-16 | Facebook, Inc. | Methods and systems for tracking media effects in a media effect index |
CN106778518B (en) * | 2016-11-24 | 2021-01-08 | 汉王科技股份有限公司 | Face liveness detection method and device |
US10303928B2 (en) | 2016-11-29 | 2019-05-28 | Facebook, Inc. | Face detection for video calls |
US10554908B2 (en) | 2016-12-05 | 2020-02-04 | Facebook, Inc. | Media effect application |
WO2018118120A1 (en) | 2016-12-23 | 2018-06-28 | Aware, Inc. | Analysis of reflections of projected light in varying colors, brightness, patterns, and sequences for liveness detection in biometric systems |
WO2018156366A1 (en) * | 2017-02-27 | 2018-08-30 | Tobii Ab | Determining eye openness with an eye tracking device |
CN109086645B (en) * | 2017-06-13 | 2021-04-20 | 阿里巴巴集团控股有限公司 | Face recognition method and device and false user recognition method and device |
CN109325328B (en) * | 2017-08-01 | 2022-08-30 | 苹果公司 | Apparatus, method and storage medium for biometric authentication |
CN107590463A (en) * | 2017-09-12 | 2018-01-16 | 广东欧珀移动通信有限公司 | Face recognition method and related product |
WO2019056310A1 (en) | 2017-09-22 | 2019-03-28 | Qualcomm Incorporated | Systems and methods for facial liveness detection |
US11507646B1 (en) * | 2017-09-29 | 2022-11-22 | Amazon Technologies, Inc. | User authentication using video analysis |
CN107832598B (en) * | 2017-10-17 | 2020-08-14 | Oppo广东移动通信有限公司 | Unlocking control method and related product |
JP7067023B2 (en) * | 2017-11-10 | 2022-05-16 | 富士通株式会社 | Information processing device, background update method and background update program |
KR102455633B1 (en) * | 2017-12-21 | 2022-10-17 | 삼성전자주식회사 | Liveness test method and apparatus |
CN109190509B (en) * | 2018-08-13 | 2023-04-25 | 创新先进技术有限公司 | Identity recognition method, device and computer readable storage medium |
CN109034102B (en) * | 2018-08-14 | 2023-06-16 | 腾讯科技(深圳)有限公司 | Face liveness detection method, apparatus, device and storage medium |
CN109345253A (en) * | 2018-09-04 | 2019-02-15 | 阿里巴巴集团控股有限公司 | Resource transfers method, apparatus and system |
US11138302B2 (en) | 2019-02-27 | 2021-10-05 | International Business Machines Corporation | Access control using multi-authentication factors |
US20210027080A1 (en) * | 2019-07-24 | 2021-01-28 | Alibaba Group Holding Limited | Spoof detection by generating 3d point clouds from captured image frames |
CN112395906A (en) * | 2019-08-12 | 2021-02-23 | 北京旷视科技有限公司 | Face liveness detection method, apparatus, device and medium |
CN112395907A (en) * | 2019-08-12 | 2021-02-23 | 北京旷视科技有限公司 | Face liveness detection method, apparatus, device and medium |
WO2021038298A2 (en) | 2019-08-29 | 2021-03-04 | PXL Vision AG | Id verification with a mobile device |
KR20210108082A (en) * | 2020-02-25 | 2021-09-02 | 삼성전자주식회사 | Method and apparatus of detecting liveness using phase difference |
US11501531B2 (en) * | 2020-03-03 | 2022-11-15 | Cyberlink Corp. | Systems and methods for anti-spoofing protection using motion detection and video background analysis |
US11694480B2 (en) * | 2020-07-27 | 2023-07-04 | Samsung Electronics Co., Ltd. | Method and apparatus with liveness detection |
CN111914775B (en) * | 2020-08-06 | 2023-07-28 | 平安科技(深圳)有限公司 | Liveness detection method and apparatus, electronic device and storage medium |
EP3961483A1 (en) | 2020-09-01 | 2022-03-02 | Aptiv Technologies Limited | Method and system for authenticating an occupant within an interior of a vehicle |
US11783628B2 (en) | 2021-05-18 | 2023-10-10 | ID R&D Inc. | Method and computing device for performing a crowdsourcing task |
CN114445918B (en) * | 2022-02-21 | 2024-09-20 | 支付宝(杭州)信息技术有限公司 | Liveness detection method, apparatus and device |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130015946A1 (en) * | 2011-07-12 | 2013-01-17 | Microsoft Corporation | Using facial data for device authentication or subject identification |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1762980B1 (en) * | 2000-12-15 | 2008-09-17 | Sony Corporation | Layered image based rendering |
US8515124B2 (en) * | 2010-02-04 | 2013-08-20 | Electronics And Telecommunications Research Institute | Method and apparatus for determining fake image |
JP5725793B2 (en) * | 2010-10-26 | 2015-05-27 | キヤノン株式会社 | Imaging apparatus and control method thereof |
US8873840B2 (en) * | 2010-12-03 | 2014-10-28 | Microsoft Corporation | Reducing false detection rate using local pattern based post-filter |
US8824749B2 (en) * | 2011-04-05 | 2014-09-02 | Microsoft Corporation | Biometric recognition |
- 2012-01-20: US application US13/354,891 filed; granted as US9025830B2 (status: Active)
- 2014-05-29: US application US14/289,872 filed; published as US20140270412A1 (status: Abandoned)
Cited By (40)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10430679B2 (en) | 2015-03-31 | 2019-10-01 | Daon Holdings Limited | Methods and systems for detecting head motion during an authentication transaction |
AU2016201759B2 (en) * | 2015-03-31 | 2020-05-14 | Daon Technology | Methods and Systems for Detecting User Head Motion During an Authentication Transaction |
US9934443B2 (en) * | 2015-03-31 | 2018-04-03 | Daon Holdings Limited | Methods and systems for detecting head motion during an authentication transaction |
CN104794464A (en) * | 2015-05-13 | 2015-07-22 | 上海依图网络科技有限公司 | Liveness detection method based on relative attributes |
CN104869164A (en) * | 2015-05-26 | 2015-08-26 | 北京金和网络股份有限公司 | BS software application system configuration and image rapid switching method on registration page |
US10250598B2 (en) | 2015-06-10 | 2019-04-02 | Alibaba Group Holding Limited | Liveness detection method and device, and identity authentication method and device |
CN108369785A (en) * | 2015-08-10 | 2018-08-03 | 优替控股有限公司 | Liveness detection |
CN106557723A (en) * | 2015-09-25 | 2017-04-05 | 北京市商汤科技开发有限公司 | System and method for face identity authentication with interactive liveness detection |
CN106096519A (en) * | 2016-06-01 | 2016-11-09 | 腾讯科技(深圳)有限公司 | Liveness discrimination method and device |
US10628661B2 (en) | 2016-08-09 | 2020-04-21 | Daon Holdings Limited | Methods and systems for determining user liveness and verifying user identities |
US10592728B2 (en) * | 2016-08-09 | 2020-03-17 | Daon Holdings Limited | Methods and systems for enhancing user liveness detection |
US20180046850A1 (en) * | 2016-08-09 | 2018-02-15 | Mircea Ionita | Methods and systems for enhancing user liveness detection |
US11115408B2 (en) | 2016-08-09 | 2021-09-07 | Daon Holdings Limited | Methods and systems for determining user liveness and verifying user identities |
US10210380B2 (en) | 2016-08-09 | 2019-02-19 | Daon Holdings Limited | Methods and systems for enhancing user liveness detection |
US10217009B2 (en) | 2016-08-09 | 2019-02-26 | Daon Holdings Limited | Methods and systems for enhancing user liveness detection |
CN108229328A (en) * | 2017-03-16 | 2018-06-29 | 北京市商汤科技开发有限公司 | Face anti-counterfeiting detection method and system, electronic device, program and medium |
US11482040B2 (en) | 2017-03-16 | 2022-10-25 | Beijing Sensetime Technology Development Co., Ltd. | Face anti-counterfeiting detection methods and systems, electronic devices, programs and media |
US11080517B2 (en) | 2017-03-16 | 2021-08-03 | Beijing Sensetime Technology Development Co., Ltd | Face anti-counterfeiting detection methods and systems, electronic devices, programs and media |
CN108229326A (en) * | 2017-03-16 | 2018-06-29 | 北京市商汤科技开发有限公司 | Face anti-counterfeiting detection method and system, electronic device, program and medium |
US10679083B2 (en) | 2017-03-27 | 2020-06-09 | Samsung Electronics Co., Ltd. | Liveness test method and apparatus |
US11721131B2 (en) | 2017-03-27 | 2023-08-08 | Samsung Electronics Co., Ltd. | Liveness test method and apparatus |
US11176392B2 (en) | 2017-03-27 | 2021-11-16 | Samsung Electronics Co., Ltd. | Liveness test method and apparatus |
US11138455B2 (en) | 2017-03-27 | 2021-10-05 | Samsung Electronics Co., Ltd. | Liveness test method and apparatus |
CN107239735A (en) * | 2017-04-24 | 2017-10-10 | 复旦大学 | Liveness detection method and system based on video analysis |
CN107346419A (en) * | 2017-06-14 | 2017-11-14 | 广东欧珀移动通信有限公司 | Iris recognition method, electronic device and computer-readable storage medium |
US10839210B2 (en) | 2017-06-14 | 2020-11-17 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Iris recognition method, electronic device and computer-readable storage medium |
US10839251B2 (en) | 2017-06-26 | 2020-11-17 | Rank One Computing Corporation | Method and system for implementing image authentication for authenticating persons or items |
CN107229927A (en) * | 2017-08-03 | 2017-10-03 | 河北工业大学 | Face detection anti-spoofing method |
US20190095737A1 (en) * | 2017-09-28 | 2019-03-28 | Ncr Corporation | Self-service terminal (sst) facial authentication processing |
US10679082B2 (en) * | 2017-09-28 | 2020-06-09 | Ncr Corporation | Self-Service Terminal (SST) facial authentication processing |
US20190166119A1 (en) * | 2017-11-29 | 2019-05-30 | Ncr Corporation | Security gesture authentication |
US10924476B2 (en) * | 2017-11-29 | 2021-02-16 | Ncr Corporation | Security gesture authentication |
EP3493088A1 (en) * | 2017-11-29 | 2019-06-05 | NCR Corporation | Security gesture authentication |
US11093770B2 (en) * | 2017-12-29 | 2021-08-17 | Idemia Identity & Security USA LLC | System and method for liveness detection |
CN108696641A (en) * | 2018-05-15 | 2018-10-23 | Oppo(重庆)智能科技有限公司 | Call reminding method, device, storage medium and mobile terminal |
US20210256282A1 (en) * | 2018-11-05 | 2021-08-19 | Nec Corporation | Information processing apparatus, information processing method, and storage medium |
US20200143186A1 (en) * | 2018-11-05 | 2020-05-07 | Nec Corporation | Information processing apparatus, information processing method, and storage medium |
CN110059624A (en) * | 2019-04-18 | 2019-07-26 | 北京字节跳动网络技术有限公司 | Method and apparatus for detecting a living body |
CN110334637A (en) * | 2019-06-28 | 2019-10-15 | 百度在线网络技术(北京)有限公司 | Face liveness detection method, apparatus and storage medium |
US11170252B2 (en) * | 2019-09-16 | 2021-11-09 | Wistron Corporation | Face recognition method and computer system thereof |
Also Published As
Publication number | Publication date |
---|---|
US9025830B2 (en) | 2015-05-05 |
US20130188840A1 (en) | 2013-07-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9025830B2 (en) | Liveness detection system based on face behavior | |
US9778842B2 (en) | Controlled access to functionality of a wireless device | |
CN108804884B (en) | Identity authentication method, identity authentication device and computer storage medium | |
US9075974B2 (en) | Securing information using entity detection | |
US11762966B2 (en) | Methods and devices for operational access grants using facial features and facial gestures | |
US9183430B2 (en) | Portable electronic apparatus and interactive human face login method | |
US20220245963A1 (en) | Method, apparatus and computer program for authenticating a user | |
US20230306792A1 (en) | Spoof Detection Based on Challenge Response Analysis | |
CN110276313B (en) | Identity authentication method, identity authentication device, medium and computing equipment | |
JP2014044475A (en) | Image processing apparatus, image processing method, and image processing program | |
Cardaioli et al. | Privacy-friendly de-authentication with BLUFADE: Blurred face detection | |
WO2022231702A1 (en) | Integrating and detecting visual data security token in data via graphics processing circuitry using a frame buffer | |
McQuillan | Is lip-reading the secret to security? | |
Chollet et al. | Identities, forgeries and disguises | |
Alsufyani | Biometric Presentation Attack Detection for Mobile Devices Using Gaze Information | |
Xu | Toward robust video event detection and retrieval under adversarial constraints | |
Smith | The use of 3D sensor for computer authentication by way of facial recognition for the eyeglasses wearing persons | |
Yang | Improving Two-Factor Authentication Usability with Sensor-Assisted Facial Recognition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: CYBERLINK CORP., TAIWAN
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: MA, CHIH-CHAO; LIU, YI-HSIN; REEL/FRAME: 032985/0751
Effective date: 2012-01-30
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |