WO2017025575A1 - Liveness detection - Google Patents

Liveness detection

Info

Publication number
WO2017025575A1
WO2017025575A1 (PCT/EP2016/069084)
Authority
WO
WIPO (PCT)
Prior art keywords
liveness
user device
test
results
entity
Prior art date
Application number
PCT/EP2016/069084
Other languages
French (fr)
Inventor
Eleanor Simone Frederika LOUGHLIN-MCHUGH
Roman Edward SZCZESNIAK
Francisco Angel Garcia RODRIGUEZ
Georgios PARASKEVAS
Benjamin Robert TREMOULHEAC
Usman Mahmood KHAN
Original Assignee
Yoti Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US14/822,803 external-priority patent/US9794260B2/en
Priority claimed from US14/822,804 external-priority patent/US20170046583A1/en
Application filed by Yoti Ltd filed Critical Yoti Ltd
Publication of WO2017025575A1 publication Critical patent/WO2017025575A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/32Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials
    • H04L9/3226Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials using a predetermined code, e.g. password, passphrase or PIN
    • H04L9/3231Biological data, e.g. fingerprint, voice or retina
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/193Preprocessing; Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09CCIPHERING OR DECIPHERING APPARATUS FOR CRYPTOGRAPHIC OR OTHER PURPOSES INVOLVING THE NEED FOR SECRECY
    • G09C5/00Ciphering apparatus or methods not provided for in the preceding groups, e.g. involving the concealment or deformation of graphic data such as designs, written or printed messages
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08Feature extraction

Definitions

  • the present invention is in the field of liveness detection, and has particular applications in the context of network security to prevent spoofing attacks based on entities masquerading as humans.
  • a spoofing attack refers to a technique whereby an unauthorized human or software entity masquerades as an authorized entity, thereby gaining an illegitimate advantage.
  • a particular example is an unauthorized entity masquerading as a particular user so as to gain improper access to the user's personal information held in a notionally secure data store, launch an attack on a notionally secure system by masquerading as a system administrator, or gain some other form of access to a notionally secure system which they can then exploit to their benefit.
  • Liveness detection refers to techniques of detecting whether an entity, which may exhibit what are ostensibly human characteristics, is actually a real, living being or is a non-living entity masquerading as such.
  • One example of liveness detection is the well-known CAPTCHA test; or to give it its full name “Completely Automated Public Turing test to tell Computers and Humans Apart”. The test is based on a challenge-response paradigm. In the broadest sense, a system presents an entity with a test that is designed to be trivial for a human but difficult for robot software.
  • a typical implementation requires an entity to interpret a word or phrase embodied in an image or audio file. This is an easy task for a human, but a much harder one for robot software.
  • another example of liveness detection arises in the context of a system that is notionally secured based on biometrics (e.g. facial, fingerprint, or voice verification).
  • Such a system may require a user wishing to gain access to the system to present one of their biometric identifiers i.e. distinguishing human features (e.g. their face, fingerprint, or voice) to the system using a biometric sensor (e.g. camera; fingerprint sensor; microphone).
  • the presented biometric identifier is compared with biometric data of users who are authorized to access the system, and access is granted to the presenting user only if the biometric identifier matches the biometric data of one of the authorized users.
  • Such systems can be spoofed by presenting fake biometric samples to the biometric sensor, such as pre-captured or synthesized image/speech data, or physical replicas of human features.
  • the inventors of the present invention have recognized that physiological responses to randomized outputs (such as randomized visual or audible outputs), as exhibited by visible human features (such as the eyes or mouth), provide an excellent basis for liveness detection, as such reactions are very difficult for non-living entities to replicate accurately.
  • a liveness detection method comprises implementing, by a liveness detection system, the following steps. A first set of one or more parameters of a first liveness test is selected at random.
  • the first parameter set is transmitted to a user device available to an entity, thereby causing the user device to perform the first liveness test according to the first parameter set.
  • Results of the first liveness test performed at the user device according to the first parameter set are received from the user device.
  • Results of a second liveness test pertaining to the entity are received.
  • the liveness detection system determines whether the entity is a living being using the results of the liveness tests, the results of the first liveness test being so used by comparing them with the first parameter set.
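  • By way of illustration only (the names, data format and tolerance below are assumptions, not taken from the patent), the server-side flow of this aspect can be sketched as follows: a parameter set is chosen at random, sent to the user device, and the returned results are accepted only if they match that random choice.

```python
import random

def select_first_parameter_set():
    """Randomly select the first liveness test's parameters
    (an illustrative pulse separation in milliseconds)."""
    return {"pulse_separation_ms": random.randint(500, 2500)}

def entity_is_living(first_params, first_results, second_results):
    """Compare the first test's results against the randomly chosen
    parameters, and combine with the second test's outcome."""
    expected = first_params["pulse_separation_ms"]
    observed = first_results.get("observed_separation_ms", -1)
    first_ok = abs(observed - expected) < 200      # tolerance is an assumption
    second_ok = second_results.get("passed", False)
    return first_ok and second_ok

# Example with mocked-up device responses:
params = select_first_parameter_set()
results_1 = {"observed_separation_ms": params["pulse_separation_ms"] + 50}
results_2 = {"passed": True}
print(entity_is_living(params, results_1, results_2))  # -> True
```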
  • the method may comprise implementing, by the liveness detection system, steps of: selecting at random a second set of one or more parameters of the second liveness test; and transmitting the second parameter set to the or another user device available to the entity, thereby causing that user device to perform the second liveness test according to the second parameter set, wherein the results of the second liveness test performed at that user device according to the second parameter set are received from that user device and used in the determining step by comparing them with the second parameter set.
  • the results of at least one of the tests that are received at the liveness detection system may have been generated by capturing a moving image of the entity.
  • the results of the at least one test as received at the liveness detection system comprise information that has been extracted from the moving image.
  • the results of that test that are received at the liveness detection system may comprise the moving image.
  • the method may further comprise processing, by the liveness detection system, the moving image to extract information from the moving image.
  • the extracted information may be used in the determining step and may describe at least one of: changes in the size of the pupil of at least one eye of the entity over time; changes in an iris pattern of the eye over time; and eye movements exhibited by the eye.
  • One of the tests may be performed by emitting at least one light pulse at a randomized timing that is defined by the parameter set of that test; wherein the results of that test convey changes over time in the pupil size and/or in an iris pattern of at least one eye of the entity, and those results are compared with that parameter set to determine whether the changes in the pupil size and/or the iris pattern match the randomized timing.
  • one of the tests may be performed by displaying at least one display element at a randomized display location that is defined by the parameter set of that test; wherein the results of that test convey a response of the entity to the at least one display element as displayed in that test, and those results are compared with that parameter set to determine whether the response to the display element matches the at least one randomized display location.
  • one of the tests may be performed by displaying a randomly selected display element that is defined by the parameter set of that test; wherein the results of that test convey a response of the entity to the randomly selected display element, and those results are compared with that parameter set to determine whether the response of the entity matches the at least one randomly selected display element.
  • the second test may be performed by the or another user device monitoring movements of that user device using at least one sensor of that user device.
  • the method may comprise, by the liveness detection system: transmitting to the entity, from a source address of the liveness detection system, an identifier of at least one destination address (e.g. at least one URI) of the liveness detection system different than the source address; and determining whether the results of at least one of the tests were transmitted to the at least one destination address.
  • the at least one destination address may be randomly selected by the liveness detection system.
  • the method may comprise granting the entity access to a remote computer system only if it is determined that the entity is a living being and the results of the at least one test were transmitted to the at least one destination address.
  • the method may comprise, by the liveness detection system: transmitting to the entity, from the source address of the liveness detection system, a first and a second identifier of a first and a second destination address of the liveness detection system respectively, the first and second destination addresses being different from the source address and from each other; determining whether the results of the second test were received at the first destination address; and determining whether the results of the first test were received at the second destination address.
  • the liveness detection system may comprise: liveness control server logic; first liveness processing server logic for processing the results of the first liveness test, the first liveness processing server logic having a plurality of addresses including the first destination address; and second liveness processing server logic for processing the results of the second liveness test, the second liveness processing server logic having a plurality of addresses including the second destination address.
  • the results of the second test may be received at the first liveness processing server, the results of the first liveness test may be received at the second liveness processing server, and the method may comprise:
  • the liveness control server providing the results of the first test to the first liveness processing server and the results of the second test to the second liveness processing server only if: the results of the second test were received at the first destination address of the first liveness processing server, and the results of the first test were received at the second destination address of the second liveness processing server.
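  • A minimal sketch of this cross-delivery check (endpoint URLs and field names are hypothetical): the control server only releases the results to the processing servers when each set of results arrived at the destination address belonging to the other test's processing server.

```python
def should_forward(first_results, second_results,
                   first_server_addr, second_server_addr):
    """Release results only if the second test's results arrived at the
    first processing server's address and the first test's results
    arrived at the second processing server's address."""
    return (second_results["received_at"] == first_server_addr and
            first_results["received_at"] == second_server_addr)

# Example usage:
first = {"received_at": "https://liveness.example/ep-b"}
second = {"received_at": "https://liveness.example/ep-a"}
print(should_forward(first, second,
                     "https://liveness.example/ep-a",
                     "https://liveness.example/ep-b"))  # -> True
```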
  • the results of the first and second tests may be received in a first message and a second message respectively, each message comprising a signature expected to have been generated, for each message, from both parameter sets; the liveness control server may compare both signatures with the first and second parameter sets and provide the results of the first test to the first liveness processing server and the results of the second test to the second liveness processing server only if both signatures are determined to be valid.
  • the method may comprise detecting when a timeout condition occurs, the timeout condition caused by an unacceptable delay in receiving the results relative to a timing of the transmitting step, wherein the entity is refused access to a remote computer system in response to the timeout condition occurring.
  • the method may comprise granting the entity access to a remote computer system only if the entity is determined to be a living being.
  • the first and second tests may be performed at the same time as one another.
  • a liveness detection system comprises: a set of one or more processing units, the set configured to perform operations of: selecting at random a first set of one or more parameters of a first liveness test; transmitting, to a user device available to an entity, the first parameter set, thereby causing the user device to perform the first liveness test according to the first parameter set; receiving from the user device results of the first liveness test performed at the user device according to the first parameter set; receiving results of a second liveness test pertaining to the entity; and determining whether the entity is a living being using the results of the liveness tests, the results of the first liveness test being so used by comparing them with the first parameter set.
  • a computer-implemented liveness detection method is implemented by a liveness detection system.
  • the liveness detection system comprises computer storage storing a shared secret known only to the liveness detection system and one or more authorized user devices.
  • the method comprises implementing by the liveness detection system the following steps.
  • a set of one or more parameters of a liveness test is selected at random which, when combined with the shared secret, define expected outputs that should be provided in the liveness test.
  • the parameter set is transmitted to a user device, thereby causing the user device to perform the liveness test according to the parameter set, whereby the user device can only provide the expected outputs therein if it also has access to its own version of the shared secret.
  • Results of the liveness test performed at the user device according to the parameter set are received from the user device.
  • the parameter set and the shared secret stored at the liveness detection system are used at the liveness detection system to determine the expected outputs.
  • the results of the liveness test are compared with the determined expected outputs to determine whether the behaviour of an entity that was subject to the liveness test performed at the user device is an expected reaction to the expected outputs, thereby determining from the entity's behaviour both whether the entity is a living being and whether the user device is one of the authorized user device(s).
  • the shared secret may define a restricted subset of a set of available display locations, wherein the parameter set defines one or more available display locations selected at random from the restricted subset, and wherein the expected outputs are provided by displaying one or more display elements at the one or more randomly selected available display locations on a display of the user device.
  • the behaviour may be eye movements exhibited by at least one eye of the entity during the displaying of the one or more display elements at the user device and conveyed by the received results, the expected reaction being an expected movement of the eye, whereby it is determined both whether the entity is a living being and whether the user device is one of the authorized user device(s) from the entity's eye movements.
  • the shared secret may for example define an elliptic curve.
  • a liveness detection system comprises: computer storage storing a shared secret known only to the liveness detection system and one or more authorized user devices; and a set of one or more processing units, the set configured to perform operations of: selecting at random a set of one or more parameters of a liveness test which, when combined with the shared secret, define expected outputs that should be provided in the liveness test; transmitting the parameter set to a user device, thereby causing the user device to perform the liveness test according to the parameter set, whereby the user device can only provide the expected outputs therein if it also has access to its own version of the shared secret; receiving from the user device results of the liveness test performed at the user device according to the parameter set; using the parameter set and the shared secret stored at the liveness detection system to determine the expected outputs; and comparing the results of the liveness test with the determined expected outputs to determine whether the behaviour of an entity that was subject to the liveness test performed at the user device is an expected reaction to the expected outputs, thereby determining from the entity's behaviour both whether the entity is a living being and whether the user device is one of the authorized user device(s).
  • a liveness detection system comprises a controller, a video input, a feature recognition module, and a liveness detection module.
  • the controller is configured to control an output device to provide randomized outputs to an entity over an interval of time.
  • the video input is configured to receive a moving image of the entity captured by a camera over the interval of time.
  • the feature recognition module is configured to process the moving image to detect at least one human feature of the entity.
  • the liveness detection module is configured to compare with the randomized outputs a behaviour exhibited by the detected human feature over the interval of time to determine whether the behaviour is an expected reaction to the randomized outputs, thereby determining whether the entity is a living being.
  • the human feature that the feature recognition module is configured to detect may be an eye of the entity.
  • providing the randomized outputs may comprise controlling the output device to emit at least one light pulse having a randomized timing within the moving image, and the expected reaction may be an expected pupillary response to the at least one light pulse.
  • providing the randomized outputs may comprise controlling the output device to emit at least two light pulses having a randomized separation in time from one another, and the expected reaction may be an expected pupillary response to the at least two light pulses.
  • the output device may be a camera flash or a display.
  • the liveness detection system may comprise a velocity measurement module configured to compare frames of the moving image to one another so as to generate a velocity distribution of the eye, the velocity distribution representing the rate of change of the diameter of the pupil at different times, said comparison comprising comparing the velocity distribution with the expected response.
  • said comparison by the liveness detection module may comprise comparing the velocity distribution with a probability distribution, wherein the probability distribution represents the expected pupillary response.
  • said comparison by the liveness detection module may comprise: determining a first time, wherein the first time corresponds to a local maximum of the velocity distribution; determining a second time, wherein the second time corresponds to a local minimum of the velocity distribution, the local minimum occurring immediately before or immediately after the local maximum; and determining a difference between the first and second times and comparing the difference to a threshold.
  • respective differences may be determined between the first time and two second times, one corresponding to the local minimum immediately before the local maximum and one corresponding to the local minimum occurring immediately after the local maximum, and each may be compared to a respective threshold.
  • the entity may be determined to be a living being only if each of the two differences is below its respective threshold, and the velocity distribution matches the probability distribution.
  • the output device may be a display.
  • Providing the randomized outputs may comprise controlling the display to display a display element at a random location of the display, and the expected reaction may be an expected movement of the eye.
  • the liveness detection system may comprise: a spatial windowing module configured to identify an iris area of the eye in the moving image; and an analysis module configured to, for each of a plurality of regions of the iris area, generate a histogram of pixel values within that region for use in tracking movements of the eye, the liveness detection module being configured to perform said comparison by comparing the histograms with the expected movement.
  • the liveness detection module may be configured to perform said comparison by comparing the histograms with a probability density function representing the expected movement.
  • the liveness detection system may comprise: a spatial windowing module configured, for each of a plurality of frames of the moving image, to divide at least a portion of that frame into a plurality of blocks, each block formed of one or more respective sub-blocks, each sub-block formed of one or more respective pixels; and an analysis module configured to assign to each block a respective block value based on its one or more respective sub-blocks, the liveness detection module being configured to perform said comparison by comparing the block values with the expected movement.
  • each sub-block may be formed of multiple pixels, and/or each block may be formed of multiple sub-blocks.
  • the analysis module may be configured to assign to each sub-block a binary value by detecting whether or not at least a predetermined proportion of its respective pixels have intensities below an intensity threshold, the block value of each block being assigned by combining the binary values assigned to its respective sub-blocks.
  • the pixel intensities may be determined by converting the plurality of frames from a colour format into a grayscale format.
  • providing the randomized outputs may further comprise accessing user-created data, held in a first memory local to the output device, which defines a restricted subset of locations on the display, the random location being selected at random from the restricted subset, wherein the system is also configured to compare the behaviour exhibited by the eye with a version of the user-created data held in a second memory remote from the output device.
  • the user-created data may define a two-dimensional curve, the restricted subset being the set of points on the curve.
  • the first memory and the output device may be integrated in a user device.
  • the behaviour that is compared with the randomized outputs may be at least one of: changes in the size of the pupil of the eye over time; changes in an iris pattern of the eye over time; and eye movements exhibited by the eye.
  • providing the randomized outputs may comprise controlling the output device to output at least one randomly selected word;
  • the human feature that the feature recognition module is configured to detect may be a mouth of the entity, and the expected response is the user speaking the word, the movements of the mouth being compared to the random word using a lip reading algorithm.
  • the liveness detection system may comprise an access module configured to grant the entity access to a remote computer system only if they are determined to be a living being.
  • the liveness detection module may be configured to output at least one of: a confidence value which conveys a probability that the entity is a living being, and a binary classification of the entity as either living or non-living.
  • a computer-implemented liveness detection method comprises: controlling an output device to provide randomized outputs to an entity over an interval of time; receiving a moving image of the entity captured by a camera over the interval of time; processing the moving image to detect at least one human feature of the entity; and comparing with the randomized outputs a behaviour exhibited by the detected human feature over the interval of time to determine whether the behaviour is an expected reaction to the randomized outputs, thereby determining whether the entity is a living being.
  • any of the features of any one of the above aspects or any embodiment thereof may be implemented in embodiments of any of the other aspects.
  • Any of the methods disclosed herein may be implemented by logic (e.g. software modules) of a corresponding system.
  • any of the system functionality disclosed herein may be implemented as steps of a corresponding method.
  • a computer program product comprises code stored on a computer-readable storage medium and configured when executed to implement any of the method steps or system functionality disclosed herein.
  • Figure 1 shows a block diagram of a computer system
  • Figures 2A, 2B and 2C show various functional modules of a liveness detection system in a first embodiment of the present invention
  • Figure 2D shows a flow chart for a liveness detection method in the first embodiment
  • Figure 3 illustrates some of the principles of an image stabilization technique
  • Figure 4 demonstrates a pupil's response to a light pulse stimulus during a liveness detection process
  • Figures 5A-5C are graphs showing how the pupillary area of an eye responds to a light pulse stimulus
  • Figure 5D is a graph showing how the pupillary area responds to two light pulses in relatively quick succession
  • Figures 6A and 6B show various functional modules of a liveness detection system in a second embodiment of the present invention
  • Figure 6C shows a flow chart for a liveness detection method in the second embodiment
  • Figure 7A illustrates a display element exhibiting randomized motion
  • Figure 7B illustrates movements of an eye when tracking a visible element
  • Figures 8 and 9 illustrate some of the principles of an eye tracking technique
  • Figures 10A and 10B illustrate a process by which histograms describing movements of an eye can be generated
  • Figure 11 shows a signalling diagram for a liveness detection technique according to a third embodiment
  • Figures 12A and 12B illustrate some principles of a liveness detection technique that is based in part on a shared secret between a user device and a server;
  • Figure 13 illustrates how an eye movement is manifest in a sequence of grayscale video frame images
  • Figures 14A and 14B illustrate a novel motion binary pattern technique.
  • Figure 15 shows a signalling diagram for a 'transactional' liveness detection technique similar to that of figure 11.
  • Figure 1 shows a block diagram of a computer system, which comprises a user device 104 available to a user 102; a computer network 118; and a remote computer system 130, i.e. remote from the user device 104.
  • the user device 104 and remote system 130 are both connected to the network 118, so that data can be transmitted and received between the user device 104 and the remote system 130.
  • the user device 104 is a computer device which can take a number of forms, such as a mobile device (smartphone, tablet etc.), laptop or desktop computer etc.
  • the user device 104 comprises a display 106; a camera 108 and camera flash 110; a network interface 116; and a processor 112, formed of one or more processing units (e.g. CPUs), to which each of the aforementioned components of the user device 104 is connected.
  • the processor 112 is configured to execute code, which includes a liveness detection application ("app") 114. When executed on the processor 112, the liveness detection app 114 can control the display 106, camera 108 and flash 110, and can transmit and receive data to and from the network 118 via the network interface 116.
  • the camera 108 is capable of capturing a moving image i.e. video formed of a temporal sequence of frames to be played out in quick succession so as to replicate continuous movement, that is outputted as a video signal from the camera 108.
  • Each frame is formed of a 2-dimensional array of pixels (i.e. image samples).
  • each pixel may comprise a three-dimensional vector defining the chrominance and luminance of that pixel in the frame.
  • the camera 108 is located so that the user 102 can easily capture a moving image of their face with the camera 108.
  • the camera 108 may be a front- facing camera integrated in a smartphone, tablet or laptop computer screen, or an external webcam mounted on a laptop or desktop display screen.
  • the flash 110 is controllable to emit relatively high intensity light. Its primary function is to provide a quick burst of illumination to illuminate a scene as the camera 108 captures an image of the scene, though some modern user devices such as smartphones and tablets also provide for other uses of the camera flash 110, e.g. to provide continuous illumination in a torch mode.
  • the display 106 outputs information to the user 102 in visual form, and may for example be a display screen.
  • the display screen may incorporate a touch screen so that it also functions as an input device for receiving inputs from the user 102.
  • the remote system 130 comprises at least one processor 122 and network interface 128 via which the processor 122 of the remote system is connected to the network 118.
  • the processor 122 and network interface 126 constitute a server 120.
  • the processor is configured to execute control code 124 ("back-end software"), which cooperates with the liveness detection app 114 on the user device 104 to grant the user device 104 access to the remote system 130, provided certain criteria are met. For example, access to the remote system 130 using the user device 104 may be conditional on the user 102 successfully completing a validation process.
  • the remote system 130 may for example comprise a secure data store 132, which holds (say) the user's personal data.
  • the back-end software 124 makes retrieval of the user's personal data from the database 132 using the user device 104 conditional on successful validation of the user 102.
  • Embodiments of the present invention can be implemented as part of the validation process to provide a validation process that includes a liveness detection element. That is, access to the remote system 130 may be conditional on the user 102 passing a liveness detection test to demonstrate that they are indeed a living being.
  • the validation process can also comprise other elements, e.g. based on one or more credentials, such as a username and password, so that the user 102 is required not only to demonstrate that they are what they say they are (i.e. a living being) but also that they are who they say they are (e.g. a particular individual) - note however that it is the former that is the focus of the present disclosure, and the liveness detection techniques can be implemented separately and independently from any identity check, or without considering identity at all.
  • FIG. 2A shows a liveness detection system 200a in a first embodiment of the present invention.
  • the liveness detection system 200a comprises the following functional modules: a liveness detection controller 218 connected to control the camera 108 and flash 110 (or alternatively the display 106); an image stabilizer 204 having an input connected to receive a video signal from the camera 108; a corner detector 202 having an input connected to receive the video signal and an output connected to the image stabilizer 204; an iris detector 205 having an input connected to receive a stabilized version of the video signal from the image stabilizer 204; a diameter estimator 206 having an input connected to an output of the iris detector 205 and an output; first, second and third time differential modules 208a, 208b, 208c, each having a respective input connected to the output of the diameter estimation module 206; first, second and third accumulators 210a, 210b, 210c having respective inputs connected to the outputs of the first, second and third time differential modules 208a, 208b, 208c respectively; a first liveness detection module 212a having first, second and third inputs connected to outputs of the first, second and third accumulators 210a, 210b, 210c respectively; and a randomized generator 219 which generates randomized (e.g. random or pseudo-random) data Rn, and outputs the randomized data Rn to both the liveness detection controller 218 and the liveness detection module 212a.
  • the modules 208a,..., 210c constitute a velocity measurement module 213.
  • FIG. 2B shows additional details of the liveness detection module 212a.
  • the liveness detection module 212a comprises first, second and third comparison modules 231a, 231b, 231c having inputs connected to the outputs of the first, second and third accumulators 210a, 210b, 210c respectively; and a decision module 238 connected to receive inputs from each of the comparison modules 231a, 231b, 231c.
  • Figure 2C shows how each of the comparison modules 231a, 231b, 231c (for which the general reference sign 231 is used) comprises a distribution fitting module 232, a global maximum estimation module 234, and a global minimum estimation module 236.
  • the decision module has inputs connected to outputs of the modules 232, 234, 236, and an additional input connected to receive the randomized data Rn.
  • the randomized data Rn is in the form of one or more randomly generated parameters, referred to herein as a pupil dilation ("PD") parameter set.
  • the functional modules of the liveness detection system 200a are software modules, representing functionality implemented by executing the liveness detection app 114 on the user device 104, or by executing the back-end software 124 on the server 120, or a combination of both. That is, the liveness detection system 200a may be localized at a single computer device, or distributed across multiple computer devices.
  • the liveness detection system outputs a binary classification of the user 102, classifying the user 102 as either living or non-living, which is generated by the liveness detection module 212a based on an analysis of a moving image of the user's face captured by the camera 108.
  • the liveness detection system 200a of the first embodiment implements a technique for anti-spoofing based on pupillary light reflex.
  • the technique will now be described with reference to figure 2D, which is a flow chart for the method.
  • the liveness detection app 114 outputs an instruction to the user 102 that they should look at the camera 108, so that their face is within the camera's field of view.
  • the app 114 may display a preview of the video captured by the camera, with instructions as to how the user should correctly position their face within the camera's field of view.
  • the liveness detection controller 218 controls the camera 108 and camera flash 110 (or the brightness level of the display 106) of the user device 104 to perform the following operations.
  • the camera flash 110 (or display 106) emits randomly modulated light pulses at a frequency of more than 0.33 Hz (i.e. more than one pulse every 3 seconds).
  • the camera 108 starts recording video frames the moment that the flash 110 (or display 106) starts emitting the light pulses.
  • Each video frame comprises a high-resolution image of at least one of the user's eyes (right or left).
  • the recording continues for about three seconds in total, so as to capture a three second moving image of the user's face, i.e. three seconds' worth of video frames (typically between about 60 and 90 video frames for a conventional smartphone or tablet).
  • the light pulses are modulated based on the PD parameter set Rn, as generated by the randomized generator 219, in the following manner. At least two light pulses are emitted within the three second window - one at the start of the interval when recording commences, and at least one more whilst the recording is in progress. The two light pulses are separated in time by a randomly chosen time interval δt that is defined by the PD parameter set Rn. In some implementations, three or four (or more) light pulses may be used, all having random temporal separations relative to one another.
  • the intensity of each later light pulse is greater than that of the light pulse(s) preceding it. If light pulses of the same intensity were used each time, the pupillary response would diminish with each pulse due to the eye becoming accustomed to the light level of the pulses. Increasing the intensity of each pulse ensures a measurable physiological reaction by the pupil to each light pulse.
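  • A possible way of generating such a pulse schedule is sketched below; the separation range, window length and intensity step are illustrative assumptions, not values taken from the patent.

```python
import random

def make_pulse_schedule(num_pulses=2):
    """Generate (time, intensity) pairs for the PD parameter set: the first
    pulse when recording starts, later pulses at random separations, and
    each later pulse brighter than the one before it so the pupil keeps
    producing a measurable reaction."""
    times = [0.0]
    for _ in range(num_pulses - 1):
        times.append(times[-1] + random.uniform(0.5, 1.2))  # random delta-t
    intensities = [0.5 + i * 0.2 for i in range(num_pulses)]
    return list(zip(times, intensities))

print(make_pulse_schedule(3))  # e.g. [(0.0, 0.5), (0.87, 0.7), (1.93, 0.9)]
```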
  • Every video frame that is recorded is timestamped, i.e. associated with a time value defining when it was captured relative to the other frames. This enables the behaviour and position of the user's iris to be determined for each desired time interval.
  • the notation F_t is used to represent a video frame having timestamp t hereinbelow and in the figures. The following steps are then performed for each video frame F_t, for one of the user's eyes (or for each eye separately).
  • corner detection techniques are used to detect two reference points of the eye in the frame F_t - shown in figure 3 and labelled "A" and "B" - corresponding to the corners of the eye.
  • Image stabilization is used to place the points A and B on a reference plane P common to all frames, i.e. a rotational transformation is applied to the frame F_t, as necessary, so that the points A and B in all of the stabilized (i.e. rotated) versions of the video frames lie in the same reference plane P.
  • the plane P is the horizontal plane in the coordinate system of the frames, meaning that the points A and B are vertically aligned in all of the stabilized versions of the frames.
  • the notation SF_t is used to represent the stabilized version of the frame F_t.
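  • A sketch of this stabilization step is given below, assuming OpenCV is available and that the corner detector has already supplied the coordinates of A and B; the exact rotation sign convention may need adjusting to the coordinate system in use.

```python
import math
import cv2  # OpenCV, used here as one plausible way to rotate frames

def stabilise_frame(frame, corner_a, corner_b):
    """Rotate the frame so the detected eye corners A and B lie on a common
    horizontal plane P (illustrative implementation of the stabilization)."""
    (ax, ay), (bx, by) = corner_a, corner_b
    angle_deg = math.degrees(math.atan2(by - ay, bx - ax))
    centre = ((ax + bx) / 2.0, (ay + by) / 2.0)
    rotation = cv2.getRotationMatrix2D(centre, angle_deg, 1.0)
    h, w = frame.shape[:2]
    return cv2.warpAffine(frame, rotation, (w, h))
```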
  • the iris is detected and isolated in the stabilized video frame SF_t, using machine learning and blob detection techniques.
  • The application of machine learning and blob detection techniques to iris detection is known in the art.
  • the diameter of the iris remains substantially constant - any changes in the diameter of the iris in the video can be assumed to be caused by movement of the user's head. These could be accounted for e.g. by applying a scaling transformation to the video frames to keep the iris diameter constant in the stabilized frames, though in practice this may be unnecessary as the iris diameter will remain substantially constant in the video provided the user keeps their head still during the recording.
  • the diameter of the pupil changes in response to the light pulses - this is a physiological response to the light pulses, and is used as the basis for liveness detection in this first embodiment.
  • Figure 5A shows a graph illustrating how the diameter of the pupillary area is expected to change over time in response to a light pulse stimulus at time Ti .
  • the graph tracks the change of the pupillary area of the eye after a light stimulus of medium/high intensity is applied to the eye.
  • FIG. 5B shows the rate of change in the pupil diameter over the same time interval i.e. the velocity of contraction (positive velocity) or dilation (negative velocity).
  • the pupil diameter exhibits rapid, essentially random fluctuations. Nevertheless, the velocity response has an overall structure over larger time scales that is still evident.
  • Figure 5C shows a smoothed version of the velocity curve of figure 5B, in which the rapid fluctuations have been averaged out by taking a windowed average of the velocity curve with a window large enough to eliminate the fluctuations but small enough to preserve the overall structure.
  • prior to the light pulse at time T1, the velocity is zero. Between T1 and T2, the velocity reaches its local peak value (i.e. local maximum) at a time g_max.
  • the smoothed velocity curve has local minima immediately to the left and right of time g_max, at times g_minL and g_minR respectively. These are immediate in the sense of being closest in time, i.e. such that there are no other local minima between g_minL and g_max or between g_max and g_minR.
  • the time g_minL is near to the time T1 at which the stimulus is applied.
  • the time g_minR is after the time T2 (at which the pupil stops contracting and starts dilating) but before T3 (the dilation break, at which the pupil dilation slows suddenly). That is, g_minR occurs in the well-defined temporal range between T2 and T3.
  • the physiological response by a real, human pupil to the stimulus is such that g_max, g_minL and g_minR are expected to satisfy a certain relationship - specifically that g_max - g_minL is no more than a first known value Δt1 and g_minR - g_max is no more than a second known value Δt2.
  • the second interval Δt2 is of the order of a second, whereas the first interval Δt1 is at least an order of magnitude lower.
  • at the times g_minL, g_max and g_minR, the acceleration of the pupil is zero.
  • Figure 5D shows the pupillary response to the two light pulses separated in time by an interval δt.
  • the pupillary area traces the response curve of figure 5A, until the second pulse is applied causing the eye to retrace the response curve of figure 5A a second time.
  • the intensity of the second pulse is greater than that of the first pulse by an amount such that the second pulse causes substantially the same level of contraction as the first pulse. That is, the curve of figure 5D corresponds to two instances of the curve of figure 5A, separated in time by δt.
  • FIGS. 5A-5D are provided simply to aid illustration - as will be appreciated, they are highly schematic and not to scale. Velocity curves computed from measurements of real human eyes may exhibit more complex structures, but are nevertheless expected to satisfy the aforementioned relationship.
  • a differential dD of the pupil diameter D_t is estimated on different time intervals by:
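  • The equation itself does not survive in this text. A plausible form, consistent with the TS1, TS3 and TS6 notation used below, is dD_t = D_(t+n) − D_t, where D_t is the pupil diameter estimated from the stabilized frame SF_t and n is a frame offset of 1, 3 or 6 frames; this reconstruction is an assumption rather than a quotation of the patent.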
  • Figure 4 illustrates an example of a differential dD between time t and time t+n.
  • each of the diameter differentials dD is accumulated over time to form a respective velocity distribution in the form of a time series of differential values.
  • each time series is denoted TS1, TS3 and TS6 respectively, below and in the figures, and describes the rate at which the size of the pupil is changing at different points in time (i.e. the pupil's velocity).
  • the liveness detection module 212a analyzes each time series TS1, TS3, TS6 in order to determine whether it fits closely to a Weibull probability density function (PDF), sometimes referred to as a Frechet PDF. The analysis is performed by computing a fit measure for the time series.
  • the Weibull PDF represents the expected pupillary response to the two light pulses as illustrated in figure 5D, and has predetermined coefficients, set to match the expected behaviour of human eyes in response to the two light pulses, as illustrated in figures 5A-5D.
  • the behaviour of human eyes is sufficiently predictable across the human population, that most human pupils will exhibit a physiological response to each light pulse that fits the Weibull PDF to within a certain margin of error.
  • No machine learning techniques are needed to set the parameters - the Weibull PDF for a given δt can be computed efficiently (i.e. using minimal processing resources) based on closed-form equations for a given value of δt.
  • An example of a suitable fit measure is a weighted sum of squared errors R², defined in terms of the following quantities:
  • o is an element of the smoothed time series TS (the summation being over all elements in TS)
  • e_o is the value of o predicted by the PDF
  • σ² is the variance of the time series TS.
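  • The formula itself does not survive in this text. One standard form consistent with the symbols defined above (an assumption, not a quotation of the patent) is R² = 1 − Σ_o (o − e_o)² / (N·σ²), where the sum runs over all N elements o of the smoothed time series TS; a value close to 1 then indicates a close fit to the expected PDF.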
  • Three R² metrics are computed separately - one for each smoothed time series TS1, TS3, TS6.
  • a local maximum g_max of each time series is computed as:
  • g_max = argmax_t(dD_t)
  • g_max is the time t at which the rate of contraction in response to the applicable light pulse is greatest (see figure 5C, and the accompanying text above).
  • the times g_minL and g_minR immediately before and after g_max are also computed for each light pulse and for each time series TS1, TS3, TS6.
  • the decision module 238 performs the following operations for each time series TS1 , TS3, TS6.
  • the weighted error measure R² of that time series is compared with a threshold.
  • a time difference g_max - g_minL between g_minL and g_max is computed, and compared to a first time threshold dtthreshold1.
  • a time difference g_minR - g_max between g_max and g_minR is also computed and compared to a second time threshold dtthreshold2.
  • the first and second time thresholds dtthreshold1, dtthreshold2 are set to match the expected time differences Δt1 and Δt2 respectively (see the text above accompanying figure 5C).
  • for each light pulse, an equivalent temporal separation is measured and compared to the known (random) separation of that light pulse from one of the other light pulses.
  • if all of these criteria are met, the decision module 238 concludes that the user 102 is alive. That is, if and only if all of these criteria are fulfilled does the liveness detection system 200a conclude that the user 102 is a living being. In the event that an entity masquerading as a living being assumes the role of the user 102, such as a photograph or detailed model of the user 102, it will not exhibit the necessary pupil response to satisfy these criteria, so the system will correctly identify it as non-living. Whilst the above uses a Weibull PDF, more generally, any suitable extreme value theory probability distribution function can be used in place of the Weibull PDF to model the expected response of the human pupil of figure 5A and thereby achieve the same effect.
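  • The decision logic for a single time series can be sketched as follows; the threshold values are placeholders, since no numeric values are given in this text.

```python
def classify_living(r2, g_min_l, g_max, g_min_r,
                    r2_threshold=0.9, dt_threshold1=0.1, dt_threshold2=1.0):
    """Accept the entity as living for this time series only if the series
    fits the expected response curve closely enough (R-squared) and the
    local-extrema timings satisfy both temporal constraints."""
    fits_curve = r2 >= r2_threshold
    left_ok = (g_max - g_min_l) <= dt_threshold1
    right_ok = (g_min_r - g_max) <= dt_threshold2
    return fits_curve and left_ok and right_ok

# Example: peak contraction 0.05 s after the left minimum and 0.6 s
# before the right minimum, with a good fit to the expected PDF:
print(classify_living(r2=0.95, g_min_l=1.00, g_max=1.05, g_min_r=1.65))  # True
```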
  • the random separation δt between the two light pulses is of the order of one second, corresponding to low frequency modulations.
  • randomized high frequency modulations of the light pulse can also be introduced.
  • the high frequency modulations are compared with reflections from the eye and a match between the reflections and the high-frequency modulations is also required for the entity to be identified as living.
  • the technique of the first embodiment can also be implemented using a single light pulse, at a random time relative to the start of the video.
  • the pupillary response in the video is compared with the random timing to check whether it matches.
  • by way of background, GB2501362 relates to an arrangement in which a code is sent from a server to a user device equipped with a source of illumination and a camera capable of capturing video imagery of an online user.
  • the user device modulates the source of illumination in accordance with the code, and at the same time captures video imagery of the user.
  • FIG. 6A shows a block diagram of a liveness detection system 200b in a second embodiment.
  • the system 200b implements a technique for anti-spoofing based on tracking the iris movement by presenting elements at random positions of the screen.
  • in the second embodiment, in a liveness test performed according to Rn, the liveness detection controller 218 controls the display 106 and the camera 108 of the user device. The liveness detection controller uses randomized data Rn generated by the randomized generator 219 to display randomized display elements at randomized locations on the display 106, in the manner described below.
  • the randomized data Rn is in the form of one or more parameters that define the display locations, referred to as an eye tracking ("ET") parameter set in the context of the second embodiment.
  • the corner detector 202, image stabilizer 204 and iris detector 205 are connected as in the first embodiment, and perform the same operations.
  • the system 200b comprises the following functional modules, in addition: a spatial windowing module 207 having an input connected to an output of the iris detection module 205; a pattern analysis module 209 having a first input connected to an output of the spatial windowing module 207 and a second input connected to receive the stabilized video frames from the image stabilizer 204; and a second liveness detection module 212b.
  • FIG. 6B shows additional details of the pattern analysis and liveness detection modules 209, 212b.
  • the pattern analysis module 209 comprises a plurality of histogram determination modules h1,...,h9 (nine in this example), each of which is connected to receive the current stabilized video frame SF_t and has a respective output connected to a respective first input of the liveness detection module 212b.
  • the liveness detection module 212b has a second input connected to receive the randomized data Rn, and outputs a binary classification of an entity subject to the test (the user 102 in this example) as living or non-living, as in the first embodiment.
  • FIG 6C is a flow chart for the method.
  • the method is based on tracking the iris movement by presenting elements at random positions of the screen.
  • a device displays a 'following element' on its screen that moves to predetermined positions (randomly assigned) that correspond to blocks of a square grid of predefined size. These positions are not set by the user, and are used to guide the user's gaze so that their eye movements can be tracked. The user is requested to track the random movements with their eyes. During the whole process the device is recording the eye movements of the user.
  • the liveness detection controller 218 controls the display 106 of the user device 104 to display a display element on its screen which moves between random display locations, defined by the randomized data Rn.
  • FIG 7A shows a display element moving from a randomly chosen location in the bottom block ("9") of the grid to the top middle block ("2") of the grid.
  • the possible display locations correspond to the blocks (equivalently referred to herein as "regions" or "sectors") of a 3x3 grid defined in relation to the display 106, which are predetermined by the system 200b (e.g. at the user device 104) but not by the user 102.
  • the user 102 is requested to track the random movements with their eyes.
  • a high-resolution moving image of the user's face is captured by the camera 108 as the user follows the moving display element (S604).
  • Every video frame that is recorded is timestamped in order to know precisely the behaviour and position of the iris for each time interval, exactly as in the method of the first embodiment.
  • at step S606, corner detection and image stabilization algorithms are applied to the frame F_t in order to place the points A and B on the same plane P, so that the movements and the size of the iris can be isolated.
  • Step S606 corresponds exactly to step S202 of figure 2C, and the description applies equally in this instance.
  • the iris in the frame F_t is detected using machine learning and blob detection techniques (S608, corresponding exactly to step S204 of figure 2),
  • a window region around the iris (“iris window”) is identified by the spatial windowing module 207, based on the iris detection of step S608.
  • the window W is shown in figure 8.
  • for each block of the iris window, a respective histogram is generated based on pixel values.
  • Figure 10A illustrates the technique used to generate the respective histogram.
  • Figure 10A shows an exemplary block b formed of an array of pixels. Each of the pixels can take one of three values, represented by different shading (note this is an extremely simplified example presented to aid understanding).
  • a histogram H(b) for the block b is generated.
  • the histogram H(b) has a bin for each possible pixel value (so three bins in this extremely simplified example), and that bin defines the number of pixels in the block b having that value (i.e. the count for that bin).
  • each bin of the histogram corresponds to a range of multiple pixel values - in the preferred technique described below, extreme quantization is applied whereby each pixel is quantized to one of two values representing light and dark.
  • An individual histogram (H1,...,H9) is generated in this way for each of the nine blocks of the iris window, as illustrated in figure 10B.
  • the set of nine histograms is denoted H.
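  • The per-block histogram computation can be sketched as follows, assuming the iris window has already been converted to grayscale; the 3x3 grid and the two-bin (light/dark) quantization follow the description above, while the helper itself is illustrative.

```python
import numpy as np

def block_histograms(iris_window, grid=3, bins=2):
    """Divide the grayscale iris window into a grid x grid array of blocks
    and compute a pixel-value histogram (H1,...,H9) for each block."""
    h, w = iris_window.shape[:2]
    bh, bw = h // grid, w // grid
    histograms = []
    for row in range(grid):
        for col in range(grid):
            block = iris_window[row * bh:(row + 1) * bh,
                                col * bw:(col + 1) * bw]
            hist, _ = np.histogram(block, bins=bins, range=(0, 256))
            histograms.append(hist)
    return histograms  # the set of nine histograms denoted H
```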
  • the histogram for a block b dominated by dark pixels - as occurs when the pupil is in that block in the video - is measurably different from the histogram of a block b dominated by the lighter colours of the iris and/or sclera of the eye.
  • as the eye moves, the histograms H change accordingly, in a predictable manner. This allows the movement to be examined without having to rely on machine learning techniques.
  • each time the display element moves to a new random location, a change in the histograms H is expected to occur.
  • the change in the histograms caused by the iris movement is compared with the change in the location of the display element in order to evaluate whether the iris moved to the correct block, i.e. as would be expected if the eye were a real human eye tracking the display element. If, after a predetermined number of movements, the system identifies that the user did not follow the element correctly, the user is classified as trying to spoof the system.
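  • One illustrative way to perform that comparison (the dark-pixel bin index and the exact acceptance rule are assumptions) is to treat the block whose histogram is most dominated by dark pixels as the pupil's current block, and require it to coincide with the block at which the element was displayed for each movement.

```python
import numpy as np

def darkest_block(histograms):
    """Index of the block whose histogram is most dominated by dark pixels
    (bin 0 is assumed to count the dark pixels); used as a proxy for where
    the pupil currently is."""
    return int(np.argmax([h[0] for h in histograms]))

def followed_element(expected_blocks, per_move_histograms):
    """True if, for each random movement of the display element, the pupil
    ended up in the expected block of the 3x3 grid."""
    return all(darkest_block(h) == expected
               for expected, h in zip(expected_blocks, per_move_histograms))
```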
  • Figures 10A and 10B illustrate a situation in which pixel values are used to generate histograms for blocks b directly.
  • the blocks b are first divided into sub-blocks, and block values are assigned based on the sub-blocks.
  • the frame F_t is converted to grayscale i.e. each pixel is converted to a single value carrying only intensity information.
  • binary thresholding is applied to the grayscale image, such that if the value of a pixel is less than a predetermined threshold then its value is set equal to black (value "1"); otherwise set it equal to white (value "0").
  • the thresholding removes the iris texture (in this example, the movement of the pupil alone is used to track eye movements). Blob detection of high circularity is then applied to the quantized image in order to identify and extract the pupil of the eye, as follows.
  • the location of the pupil is determined by a novel type of Local Binary Pattern, referred to herein as a motion binary pattern (MBP).
  • Each sub-block sb comprises one or more pixels - multiple pixels in this example.
  • this allows a block to be represented by a single number ("block value"), formed by concatenating all the binary values of its constituent sub-blocks.
  • Figure 14B illustrates the change of value of an MBP block b when detecting an edge in motion (the "1" value sub-blocks in figure 14B correspond to the edge region of the dark pupil).
  • the display element is a randomly selected word.
  • the word changes when the display element moves.
  • the user is also required to read the word as the test progresses.
  • the moving image is processed to identify the user's lips, and lip-reading techniques are used to determine whether the user is speaking the correct word.
  • randomly selected words are displayed at the same location on the display, and lip reading alone is used for liveness detection (i.e. with no eye tracking).
  • the randomized locations at which the display element is displayed are selected based on the randomized data Rn and, in addition, based on at least one shared secret between the user device 6 and the remote system 130.
  • the shared secret can for example be a user-defined shape, such as an elliptic curve.
  • An elliptic curve requires minimal storage overhead, as it can be parameterised by a small number of parameters.
  • a twisted Edwards curve could also be used, either as an alternative to the elliptic curve or in combination with it. That is to say, any number or combination of the two types of curve, or any other cryptographically secure curve, could be used to form the shared secret.
  • An exemplary elliptic curve Cv is shown in figure 12A.
  • the user 102 defines the curve at the user device 104, for example by tracing it on the device's touchscreen or using a mouse/trackpad etc.
  • Geometric data of the curve is both stored securely at the user device 104, encrypted based on a code (e.g. PIN) inputted by the user 102, and transmitted to the remote system 130 via a secure channel for storage thereat.
  • the geometry of the curve is stored both at the user device and the remote system 130.
  • In order to change the curve, the user 102 must input the code to decrypt it. Every time the curve is changed, the version at the remote system 130 is updated to reflect the changes.
  • the ET parameter set defines a point on the ellipse in terms of a one-dimensional coordinate, i.e. a length along the ellipse.
  • the curve can equivalently be represented as a set {P_n : n = 1, ..., N} of N points in two-dimensional space.
  • Each randomly selected parameter can be represented by a single number, i.e. a one-dimensional coordinate identifying a point on the curve.
  • the display element is displayed at display locations corresponding to randomly selected points Pt on the curve Cv, selected by the user device 104.
  • the curve Cv itself is not displayed.
  • the user device 104 communicates the eye images to the remote system 130, or information about the movements of the eyes in response to the random display element derived from the eye images.
  • the user device 104 does not transmit any other information about the points Pt that it has selected - these are conveyed to the remote system 130 only through the movements of the user's eyes.
  • the ET parameters Rn determine which points on the curve will be selected in a deterministic manner, i.e. if the ET parameters Rn and the curve Cv are known, it is always possible to know with certainty which points the user device 104 will select.
  • the remote system 130 can reconstruct the points Pt as selected by the user device based on its own copy of the curve.
  • the user device 104 must know the shared secret in the form of the curve Cv. This prevents a device which does not have access to the securely-held shared secret from being used to access e.g. the database 132 in the remote system 130. In other words, based on its knowledge of the shared secret, the remote system knows which points on the curve Cv the user device should have selected given its own knowledge of the shared secret. Should the wrong points be selected as part of an attempted spoofing attack, the attack will fail as a consequence.
  • Both of the above described techniques consider movement of the pupil in response to certain stimuli.
  • the changes in the pattern of the iris can be used to the same end.
  • the diameter of the iris is constant; however, the structure of the iris exhibits intricate variations (iris pattern) that are visible as colour patterns.
  • As the eye reacts to the pulse or display element stimulus, these patterns will change. This can be measured, for example, by identifying and tracking distinct points on the iris in response to the relevant stimulus.
  • a noise filtering algorithm may be applied to the image, and the tracking based on the noise-filtered version.
  • a set of differential equations is applied to select visible dark and/or light spots in the noise-filtered image for tracking. Again, the detected movements are compared to expected data that is generated from closed-form equations without the need for machine learning.
  • the randomized data may also define a transformation of the curve, e.g. a scaling and/or rotation of the curve over time, so that a second point Pt2 is a point on a transformed version Cv' of the curve relative to the version Cv on which a first point Pt1 is selected.
  • the user may indicate the location of the display element as they perceive it using some other input device of their user device 104, such as a touchscreen, trackpad, mouse etc.
  • the user selects (e.g. touches or clicks on) the point on the screen where the display element is displayed, and their inputs are compared to the expected display location(s).
  • the binary classification outputted by the liveness detection system may, for example, be conveyed to an access control module 214 of the server 120, so that the access control module 214 can decide whether or not to grant the user 102 access to the remote system 130 based on the classification. For example, access may be granted only if the user 102 is identified as a living being by the liveness detection system.
  • the liveness detection system 200a/200b may, instead of generating a binary classification of the user 102 as living/non-living, generate a confidence value denoting the system's confidence that the user is living or non-living, e.g. a probability that the user 102 is living or a probability that they are non-living.
  • the access controller 214 receives the probability, and can perform its own classification.
  • Figure 11 illustrates a liveness detection technique in a third embodiment, which combines the techniques of the first and second embodiments.
  • the remote system 130 comprises a liveness (control) server 120a, a pupil dilation server 120b and an eye tracking server 120c, each of which receives liveness (control) data relating to the user.
  • the liveness server 120a coordinates the liveness detection technique of the third embodiment.
  • the pupil dilation server 120b implements the liveness check of the first embodiments, based on pupil dilation.
  • the eye tracking server 120c implements the liveness check of the second embodiment, based on eye tracking.
  • the liveness server 120a requests a PD parameter set and an ET parameter set from the pupil dilation server 120b and the eye tracking server 120c respectively.
  • the PD parameters are for implementing the process of the first embodiment, i.e. based on pupillary response to pulsing, and define one (or more) randomly selected temporal separations between two (or more) light pulses (or a single light pulse at a random time in the video).
  • the ET parameters are of the kind used in the second embodiment, i.e. based on display elements at random display locations, and when combined with the user-selected curve Cv define a set of spatial points, selected at random, at which the display element is displayed.
  • the randomness of the process is generated server-side.
  • After receiving the PD parameters from the pupil dilation server 120b (S1104a) and the ET parameters from the eye tracking server 120c (S1104b), the liveness server 120a transmits both parameter sets to the user device 104.
  • the user device 104 uses the PD and ET parameters to instigate the liveness detection processes of the first and second embodiments respectively by performing the randomized pulse test according to the PD parameters and the randomized display element test according to the ET parameters, i.e. emitting light pulses at random interval(s) based on the PD set and displaying display element(s) at random locations selected on the user's curve Cv based on the ET parameter set.
  • the PD and ET sets are transmitted from the liveness server 120a to the user device 104 at step S1107 in an instigation message 1101.
  • the instigation message also comprises a network address of the pupil dilation server 120b and a network address of the eye tracking server 120c.
  • Each network address defines a respective network endpoint, and may for example be a URI (Uniform Resource Identifier).
  • the URIs used are uniquely allocated to the user device 104, and each constitutes a shared secret between the user device 104 and the remote system 130.
  • the two processes are linked, in that the randomized display element of the second process is displayed at the randomized locations defined by the ET set within a predetermined time interval commencing with the first light pulse of the first process. That is, the processes are coordinated so that the display element is displayed to the user at the randomized locations at a time when their eyes are still reacting to the light pulse i.e. while they are still temporarily slightly stunned by the pulse.
  • the movements of the eye when tracking the display element are measurably different when the eye is in this stunned state (as compared with an un-stunned eye), and these differences are predictable across the population, which is exploited as part of the liveness detection procedure of the third embodiment.
  • At least one moving image is captured over a time interval that spans both the pulsing and the displaying of the display element, which constitutes the data that forms the basis of the first and second liveness detection processes.
  • Three points on the user's curve Cv are used, with the display element moving between these three points during the capture process.
  • the user device 104 transmits information collected at step S1107 to both the pupil dilation server 120b and the eye tracking server 120c, in at least one first message 1102a and at least one second message 1102b.
  • no or minimal image processing is performed at the user device 104, and the moving image(s), or a minimally processed version thereof, is transmitted to the servers 120b, 120c for processing at the remote system 130 in the first and second messages 1102a, 1102b respectively.
  • the remote system 130 performs the majority of the steps of the liveness detection process of the first embodiment, and in particular computes the changes in the pupil diameter over time; the remote system 130 also performs the majority of the steps of the liveness detection process of the second embodiment, and in particular computes the histograms representing the blocks of the iris window over time.
  • the user device performs the majority of this processing.
  • the user device computes the changes in the pupil diameter and the histograms, which it transmits to the pupil dilation server 120b and eye tracking server 120c in the first and second messages 1 102a, 1 102b respectively.
  • the processing can be distributed between the user device 104 and servers 120b, 120c in numerous different ways.
  • At least one secure execution environment is provided on the user device 104 in which code and data loaded inside the secure execution environment is integrity protected. To the extent that liveness detection processing is performed at the user device 104, it is performed within the secure execution environment.
  • the user device 104 applies a signature to both the first and second messages 1102a, 1102b.
  • the signature is generated by the user device based on both the PD and ET parameter sets.
  • the first and second messages 1102a, 1102b are transmitted to the URIs of the eye tracking server 120c and pupil dilation server 120b respectively, as indicated in the instigation message 1101 - not the other way round. That is, the first message 1102a, containing the results of the randomized light pulse test, is transmitted to the eye tracking server 120c; likewise, the second message 1102b, containing the results of the randomized display element test, is transmitted to the pupil dilation server 120b.
  • each message is transmitted to a server which is not its ultimate intended destination.
  • the functionality of the servers 120a-120c need not be distributed across multiple computer devices (though that is not excluded). That is, their function may be implemented by a single device or by multiple devices, but within separate secure execution environments - for example by different processes in separate secure execution environments on the same device, or even different threads of the same program in separate secure execution environments. Regardless of how the functionality is implemented at the hardware level, a key aspect is that the three servers 120a-120c constitute three separate network endpoints of the network 118, i.e.:
  • the endpoint to which the first message 1102a is transmitted, as indicated in the instigation message 1101 (which is a shared secret between the user device 104 and the remote system 130);
  • the endpoint to which the second message 1102b is transmitted as also indicated in the instigation message 1101 (and which is also a shared secret between the user device 104 and the remote system 130).
  • the remote system 130 is configured to provide at least three separate network endpoints, e.g. as defined by three different URIs or other endpoint identifiers, and comprises associated logic for each network endpoint for effecting communications with the user device 104 via that endpoint.
  • the servers 120a-120c are entities that are logically distinct from one another, each in the form of a respective set of code executed in a separate, secure execution environment.
  • the servers of the back-end system 130 represent separate network endpoints in the sense that the three URIs are different from one another within the URI space (even if they ultimately resolve to the same IP address and even the same port number of the back-end system 130, which may or may not be the case).
  • the contents of the first message 1102a is communicated (S1110a) from the eye tracking server 120c to the liveness server 120a along with the signature of the first message 1102a and the URI at which it was received.
  • the contents of the second message 1102b is communicated (S1110b) from the pupil dilation server 120b to the liveness server 120a along with the signature of the second message 1102b and the URI at which it was received.
  • the liveness server 120a has access to both the PD and ET parameter sets by virtue of steps S1104a and S1104b respectively. It compares both sets with each of the signatures attached to the first and second messages 1102a, 1102b (recall each signature was generated by the user device 104 using both sets).
  • the liveness server also has access to the URIs that it supplied to the user device 104 in the instigation message 1101, and compares these with the URIs that the first and second messages 1102a, 1102b were actually sent to. If either of the URIs actually used does not match the one that should have been used, or if either of the signatures does not match the parameter sets, this is communicated to the access controller 214, thereby causing the user 102 to be refused access to the remote system 130 e.g. to the database 132. For example, this can be achieved by automatically classifying the user as non-living to the access controller 214 - even though the non-matching URI(s) and/or non-matching signature(s) are not directly indicative of this.
  • the liveness server 120a provides (S1111a) the PD results (i.e. the contents of the first message 1102a, as provided by the eye tracking server 120c in step S1110a) to the pupil dilation server 120b and provides (S1111b) the ET results (i.e. the contents of the second message 1102b, as provided by the pupil dilation server 120b in step S1110b) to the eye tracking server 120c.
  • the pupil dilation server 120b performs the liveness detection technique of the first embodiment for each eye separately, as described in detail above with reference to figures 2A-2D, based on a comparison of the contents of the first message 1102a with the randomly generated PD parameter set so as to generate e.g. a probability that the user 102 is alive.
  • the eye tracking server 120c performs the liveness detection technique of the second embodiment for each eye, as described in detail above with reference to figures 6A-6C, based on a comparison of the contents of the second message 1102b with the randomly generated ET parameter set so as to generate e.g. a probability that the user is alive.
  • the second process detects when the movement exhibited by the eye is not consistent with the fact that the eye has recently been exposed to the medium-to-high intensity light pulse of the first embodiment (even if the movements themselves are consistent with the randomized locations in general terms). As will be apparent, this can be achieved by suitable tuning of the coefficients of the PDF used in the second process as part of normal design procedure.
  • the probabilities generated by the first and second processes are combined into an aggregate probability (e.g. by averaging, such as weighted averaging), which is communicated to the access controller 214 at step S1112, or which is used to generate a binary classification of the user 102 as living/non-living, by comparing the aggregate probability with a threshold, that is communicated to the access controller 214 at step S1112.
  • the access controller 214 decides whether or not to grant access to the remote system 130 e.g. to the database 132 based on this information.
  • the messages are "swapped" between the servers 120b, 120c (in steps S1110-S1111) via the same liveness server 120a within the confines of the back-end system 130, and the liveness server 120a only allows the swap to proceed (in step S1111) if both signatures and both URIs are correct. This makes it much harder for a man-in-the-middle attack to take place.
  • the secure channels (or non-secure channels as applicable) between the user device 104 and the different servers 120a-120c need not be via the same network (though they are in the above example).
  • An additional check can also be imposed by the system, which is that the time interval commencing with the transmission of the instigation message and ending with the receipt of the first and second message (whichever is received latest) is less than a predetermined time interval (e.g. 3 to 10 seconds long). If the time exceeds this, the user 102 is refused access regardless.
  • An alternative timing window can be used, for example starting with the transmission of the initial message 1101 and ending with the liveness server 120a outputting the classification/aggregate confidence value at step S1112.
  • the liveness detection techniques presented herein can, for example, be used as part of an enrolment procedure for a digital identity system. For example:
  • the access controller 214 is implemented by the uPass enrolment module, and a user is only permitted to enrol and thereby create a uPass profile(s) if they are determined to be a living being with sufficient confidence.
  • both tests are randomized (i.e. performed according to separate respective sets of randomly generated parameters) - the randomized pulse test and the randomized display element test. More generally, two separate liveness tests of different types can be used, one of which may not be randomized.
  • one of the tests may involve monitoring movements of a mobile device 104 as recorded using one or more sensors of the user device 104 (camera, gyroscope, other accelerometer, GPS etc.).
  • human-induced motion is expected at certain times (for instance, when certain actions are performed by the user device) and the absence of this can be used as an indication that the device is not being used by a living being.
  • a living being has a number of distinct characteristics arising from their ongoing biological processes, the sum total of which constitutes life. The techniques presented above are based in particular on visual characteristics that are attributable to life, such as eye movement and pupil contraction.
  • Other characteristics attributable to life include the ability to provide a thumb or finger print, which can also be used as a basis for a liveness detection test (note that in this case what is being tested is not the identity attached to the finger print, i.e. a match to a known finger print pattern is not being sought - it is simply the ability of a human to provide a humanoid finger or thumb print at a certain point in time that is being used as an indicator of life).
  • By combining liveness detection tests which are based on different ones of these life characteristics, as in the third embodiment, a greater range of life-like characteristics is tested, thereby enabling deductions to be made with greater certainty.
  • the two liveness tests are performed by a single user device 104
  • the two tests could be performed by multiple, collocated devices available to the user - for instance, one test could be performed by a user's laptop and the other by their smartphone.
  • Where signatures of the kind described above are used, both parameter sets are still sent to each device in this case, so that each device can generate the signature from both parameter sets.
  • Figure 15 illustrates a liveness 'transaction' system and describes a 'transactional' liveness implementation of the embodiment described above with reference to figure 11.
  • the liveness system of figure 15 is shown using two data checking components.
  • the liveness system of figure 15 differentiates trust on the transmission and submission mechanisms. Any additional elements may be added by extension.
  • first random test parameters are obtained, such as the ET random parameters.
  • second random test parameters are obtained, such as PD random parameters.
  • at Step 3, random destination addresses are selected.
  • a signature S_Q is generated over the total and stored in a cache C_L.
  • C_L can cache additional data related to processing.
  • at Step 4, the parameters and destination addresses are encrypted with a device-specific key.
  • the signature can be attached and then transmitted to the remote device.
  • test results are delivered as per figure 11, with S_Q inside the message encrypted with a server key.
  • a separate signature S_R is generated from the combination of test results with S_Q and appended as cleartext to both messages M_PD and M_ET.
  • S_R is used to marry up the multiple parts of the transaction response, which are decrypted in the liveness server and routed to their respective processing services with S_Q attached.
  • results are returned to the liveness server with S_Q attached, for correlation with the original liveness query.
  • the result of gauging liveness is produced.
  • the transaction may be tracked in progress by S_Q, the signature of the liveness query sent to the device, with S_R providing the integrity of the response.
  • S_Q need not be encrypted with the Liveness Server key as the submission mechanism already specifies secured connections.
  • S_R may be encrypted during submission with the Liveness Server key and only viewable in the Liveness Server. S_R confirms that the data submission has concluded without corruption.
  • the 'living being' designation may be desired to apply to a non-living entity such as a synthetic being designed to imitate a human living being. In some situations it may thus be desirable to allow the synthetic entity to be seen by the system as a living being. This could be achieved by altering a tolerance or degree of completion for the liveness test so that it can be completed successfully.
  • the test for a living being is altered to test for a live authorized user device or an overall liveness status.
  • the device being 'live' in the sense that the authorized user device is active and providing input/output in real time - that is to say, live in much the same way a television broadcast is considered to be live.
  • the random time period used to separate two successive capture events in the device capture of biometric liveness data in embodiments may also define a timeout which can be used as a finite bound on the processing time allowed to perform other measurements or computations during liveness data capture.
  • a succession of notional timestamps can be generated and each individual captured data item may be accordingly tagged with the associated notional timestamp as well as an actual timestamp taken to the greatest accuracy of the capturing device. These timestamps may be included in the datasets transmitted to the server.
  • On receipt, the server computes the correct timestamps based upon its copy of the time series generation function. If the timestamps do not match, this may provide an indication that the device has used compromised software, or else that the data stream has been artificially created. In either case the stream can be rejected and the capturing device unregistered from the liveness detection system (a minimal sketch of this check is given after this list).
  • Random numbers, defined by the randomized output of pulses of light output from the authorized user device, may be used semantically as a transaction identifier.
  • the entire liveness transaction may be individually identified using the random number generated from or directly provided by the sequence of randomized light pulses.
  • the transaction may be uniquely identified with minimal numerical crossover between identifiers, even across a subsequently produced and very large set of identifiers.
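The notional-timestamp check described a few items above can be summarised as follows. This is a minimal sketch only: the shared time-series generation function is assumed here to be a keyed hash seeded identically on the capturing device and the server, which is an illustrative choice rather than anything specified in this description.

```python
import hashlib
import struct

def notional_timestamps(shared_seed: bytes, count: int):
    """Deterministic series of notional timestamps (in ms) that both the capturing
    device and the server can regenerate from the same seed; the concrete
    generation function used here is an assumption for illustration."""
    times, t = [], 0
    for i in range(count):
        digest = hashlib.sha256(shared_seed + struct.pack(">I", i)).digest()
        t += 200 + int.from_bytes(digest[:2], "big") % 1800   # gaps of 0.2 s to 2.0 s
        times.append(t)
    return times

def verify_capture_tags(shared_seed: bytes, tagged_items) -> bool:
    """Each captured data item carries the notional timestamp the device assigned
    to it; the server recomputes the series and rejects the stream on any mismatch
    (e.g. compromised software or an artificially created data stream)."""
    expected = notional_timestamps(shared_seed, len(tagged_items))
    return all(item["notional_ts"] == exp for item, exp in zip(tagged_items, expected))
```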

Abstract

In a liveness detection system, a first set of one or more parameters of a first liveness test is selected at random. The first parameter set is transmitted to a user device available to an entity, thereby causing the user device to perform the first liveness test according to the first parameter set. Results of the first liveness test performed at the user device according to the first parameter set are received from the user device. Results of a second liveness test pertaining to the entity are received. The liveness detection system determines whether the entity is a living being using the results of the liveness tests, the results of the first liveness test being so used by comparing them with the first parameter set. The method also comprises transmitting to the entity, from a source address of the liveness detection system, an identifier of at least one destination address of the liveness detection system different than the source address; and determining whether the results of at least one of the tests were transmitted to the at least one destination address.

Description

LIVENESS DETECTION
Technical Field
The present invention is in the field of liveness detection, and has particular applications in the context of network security to prevent spoofing attacks based on entities masquerading as humans.
Background
In the context of network security, a spoofing attack refers to a technique whereby an unauthorized human or software entity masquerades as an authorized entity, thereby gaining an illegitimate advantage. A particular example is an unauthorized entity masquerading as a particular user so as to gain improper access to the user's personal information held in a notionally secure data store, launch an attack on a notionally secure system by masquerading as a system administrator, or gain some other form of access to a notionally secure system which they can then exploit to their benefit.
"Liveness detection" refers to techniques of detecting whether an entity, which may exhibit what are ostensibly human characteristics, is actually a real, living being or is a non-living entity masquerading as such. One example of liveness detection is the well-known CAPTCHA test; or to give it its full name "Completely Automated Public Turing test to tell Computers and Humans Apart". The test is based on a challenge-response paradigm. In the broadest sense, a system presents an entity with a test that is designed to be trivial for a human but difficult for robot software. A typical implementation is requiring an entity to interpret a word or phrase embodied in an image or audio file. This is an easy task for a human to interpret, but it is a harder task for robot software to interpret the
word/image as it is in a non-text format. Variations of this technique include distorting the word or phrase, with the intention of making it even less susceptible to interpretation by software. Another example of liveness detection is in the context of a system that is notionally secured based on biometrics (e.g. facial, fingerprint, or voice verification). Such a system may require a user wishing to gain access to the system to present one of their biometric identifiers, i.e. distinguishing human features (e.g. their face, fingerprint, or voice), to the system using a biometric sensor (e.g. camera; fingerprint sensor; microphone). The presented biometric identifier is compared with biometric data of users who are authorized to access the system, and access is granted to the presenting user only if the biometric identifier matches the biometric data of one of the authorized users.
Such systems can be spoofed by presenting fake biometric samples to the biometric sensor, such as pre-captured or synthesized image/speech data, physical
photographs, or even physical, three dimensional models of human features, such as accurate face or finger models. In this context, a robust liveness detection technique needs to be able to reliably distinguish between a real biometric identifier, i.e. captured directly from a living being who wishes to access the system, and a fake biometric identifier, i.e. one that has been pre-captured or synthesised. To date, research into more advanced liveness detection based on biometric data has mostly focussed on machine learning techniques. Machine learning techniques tend to be relatively expensive to implement (in terms of processing resources), and require some form of offline and/or online model training.
Summary
The inventors of the present invention have recognized that physiological responses to randomized outputs (such as randomized visual or audible outputs), as exhibited by visible human features (such as the eyes or mouth), provide an excellent basis for liveness detection, as such reactions are very difficult for non-living entities to replicate accurately.
According to a first aspect of the present invention, a computer-implemented
liveness detection method comprises implementing, by a liveness detection system, the following steps. A first set of one or more parameters of a first liveness test is selected at random. The first parameter set is transmitted to a user device available to an entity, thereby causing the user device to perform the first liveness test according to the first parameter set. Results of the first liveness test performed at the user device according to the first parameter set are received from the user device. Results of a second liveness test pertaining to the entity are received. The liveness detection system determines whether the entity is a living being using the results of the liveness tests, the results of the first liveness test being so used by comparing them with the first parameter set.
In embodiments, the method may comprise implementing, by the liveness detection system, steps of: selecting at random a second set of one or more parameters of the second liveness test; and transmitting the second parameter set to the or another user device available to the entity, thereby causing that user device to perform the second liveness test according to the second parameter set, wherein the results of the second liveness test performed at that user device according to the second parameter set are received from that user device and used in the determining step by comparing them with the second parameter set.
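By way of illustration only, the overall flow of the first aspect might be sketched as follows; the transport helpers (send/receive) and the judging functions are hypothetical placeholders, and the concrete parameter shown (a random pulse delay) is just one example of a randomly selected test parameter.

```python
import secrets

def run_liveness_check(device, judge_first, judge_second) -> bool:
    """Sketch of the first-aspect flow: select random parameters for the first
    test, have the user device run it, then judge the returned results."""
    # Select at random the first parameter set, e.g. a random delay in ms
    # before a light pulse is emitted by the user device.
    first_params = {"pulse_delay_ms": 500 + secrets.randbelow(2000)}

    # Cause the user device to perform the first liveness test.
    device.send(first_params)

    # Receive the results of both liveness tests pertaining to the entity.
    first_results = device.receive("first_test_results")
    second_results = device.receive("second_test_results")

    # The first test is judged by comparing its results against the randomly
    # selected parameters; the second test is judged by its own criteria.
    return judge_first(first_results, first_params) and judge_second(second_results)
```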
The results of at least one of the tests that are received at the liveness detection system may have been generated by capturing a moving image of the entity.
For example, the results of the at least one test as received at the liveness detection system comprise information that has been extracted from the moving image.
Alternatively, the results of that test that are received at the liveness detection system may comprise the moving image, and the method may further comprise processing, by the liveness detection system, the moving image to extract information from the moving image. In either case, the extracted information may be used in the determining step and describe at least one of:
• changes in the pupil size of at least one eye of the entity over time;
• changes in an iris pattern of at least one eye of the entity over time;
• eye movements exhibited by at least one eye of the entity;
• lip movements exhibited by lips of the entity.
One of the tests may be performed by emitting at least one light pulse at a
randomized timing that is defined by the parameter set of that test; wherein the results of that test convey changes over time in the pupil size and/or in an iris pattern of at least one eye of the entity, and those results are compared with that parameter set to determine whether the changes in the pupil size and/or the iris pattern match the randomized timing.
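A minimal sketch of such a comparison is given below, assuming the pupil diameter has already been estimated for each frame; the latency window and constriction ratio are illustrative values and are not taken from this description.

```python
def pupil_reacts_to_pulse(diameters, frame_times, pulse_time,
                          min_latency_s=0.2, max_latency_s=0.6,
                          min_constriction=0.15):
    """Check that a clear pupil constriction begins within a plausible latency
    window after the randomly timed light pulse (illustrative thresholds)."""
    before = [d for d, t in zip(diameters, frame_times) if t <= pulse_time]
    baseline = max(before) if before else diameters[0]
    for d, t in zip(diameters, frame_times):
        if pulse_time + min_latency_s <= t <= pulse_time + max_latency_s:
            if d <= baseline * (1.0 - min_constriction):
                return True   # constriction of at least 15% within the window
    return False
```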
Alternatively or in addition, one of the tests may be performed by displaying at least one display element at a randomized display location that is defined by the parameter set of that test; wherein the results of that test convey a response of the entity to the at least one display element as displayed in that test, and those results are compared with that parameter set to determine whether the response to the display element matches the at least one randomized display location.
Alternatively or in addition, one of the tests may be performed by displaying a randomly selected display element that is defined by the parameter set of that test; wherein the results of that test convey a response of the entity to the randomly selected display element, and those results are compared with that parameter set to determine whether the response of the entity matches the at least one randomly selected display element.
The second test may be performed by the or another user device monitoring movements of that user device using at least one sensor of that user device.
The method may comprise, by the liveness detection system: transmitting to the entity, from a source address of the liveness detection system, an identifier of at least one destination address (e.g. at least one URI) of the liveness detection system different than the source address; and determining whether the results of at least one of the tests were transmitted to the at least one destination address.
The at least one destination address may be randomly selected by the liveness detection system. The method may comprise granting the entity access to a remote computer system only if it is determined that it is a living being and the results of the at least one of the tests were transmitted to the at least one destination address.
The method may comprise, by the liveness detection system: transmitting to the entity, from the source address of the liveness detection system, a first and a second identifier of a first and a second destination address of the liveness detection system respectively, the first and second destination addresses being different from the source address and from each other; determining whether the results of the second test were received at the first destination address; and determining whether the results of the first test were received at the second destination address.
For example, the liveness detection system may comprise: liveness control server logic; first liveness processing server logic for processing the results of the first liveness test, the first liveness processing server logic having a plurality of addresses including the first destination address; and second liveness processing server logic for processing the results of the second liveness test, the second liveness processing logic having a plurality of addresses including the second destination address.
The results of the second test may be received at the first liveness processing server, the results of the first liveness test may be received at the second liveness processing server, and the method may comprise:
• the first liveness processing server providing the results of the second
liveness test to the liveness control server;
• the second liveness processing server providing the results of the first
liveness test to the liveness control server; and
• the liveness control server providing the results of the first test to the first liveness processing server and the results of the second test to the second liveness processing server only if: the results of the second test were received at the first destination address of the first liveness processing server, and the results of the first test were received at the second destination address of the second liveness processing server. For example, the results of the first and second tests may be received in a first message and a second message respectively, each message comprising a signature expected to have been generated, for each message, from both parameter sets; the liveness control server may compare both signatures with the first and second parameter sets and provide the results of the first test to the first liveness processing server and the results of the second test to the second liveness
processing server only if: the second message was received at the first destination address of the first liveness processing server, the first message was received at the second destination address of the second liveness processing server, and both signatures match the parameter sets.
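A minimal sketch of the control-server check is given below. It assumes the signature is an HMAC computed by the device over both serialized parameter sets using a device-specific key, and that each received message records the URI at which it arrived; the description does not mandate a particular signature scheme, so these are illustrative assumptions.

```python
import hashlib
import hmac
import json

def expected_signature(device_key: bytes, first_params: dict, second_params: dict) -> bytes:
    """Signature the device is expected to attach to each message, computed over
    both parameter sets (HMAC-SHA256 is an illustrative choice of scheme)."""
    payload = json.dumps([first_params, second_params], sort_keys=True).encode()
    return hmac.new(device_key, payload, hashlib.sha256).digest()

def allow_result_swap(device_key, first_params, second_params,
                      first_msg, second_msg,
                      first_dest_uri, second_dest_uri) -> bool:
    """The liveness control server forwards each set of results to its processing
    server only if each message arrived at the destination URI the device was told
    to use, and both attached signatures match the issued parameter sets."""
    expected = expected_signature(device_key, first_params, second_params)
    return (second_msg["received_at"] == first_dest_uri and    # second-test results at first destination
            first_msg["received_at"] == second_dest_uri and    # first-test results at second destination
            hmac.compare_digest(first_msg["signature"], expected) and
            hmac.compare_digest(second_msg["signature"], expected))
```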
The method may comprise detecting when a timeout condition occurs, the timeout condition caused by an unacceptable delay in receiving the results relative to a timing of the transmitting step, wherein the entity is refused access to a remote computer system in response to the timeout condition occurring.
The method may comprise granting the entity access to a remote computer system only if the entity is determined to be a living being.
The first and second tests may be performed at the same time as one another.
According to a second aspect of the present invention, a liveness detection system comprises: a set of one or more processing units, the set configured to perform operations of: selecting at random a first set of one or more parameters of a first liveness test; transmitting, to a user device available to an entity, the first parameter set, thereby causing the user device to perform the first liveness test according to the first parameter set; receiving from the user device results of the first liveness test performed at the user device according to the first parameter set; receiving results of a second liveness test pertaining to the entity; and determining whether the entity is a living being using the results of the liveness tests, the results of the first liveness test being so used by comparing them with the first parameter set.
According to a third aspect of the present invention, a computer-implemented liveness detection method is implemented by a liveness detection system. The liveness detection system comprises computer storage storing a shared secret known only to the liveness detection system and one or more authorized user devices. The method comprises implementing by the liveness detection system the following steps. A set of one or more parameters of a liveness test is selected at random which, when combined with the shared secret, define expected outputs that should be provided in the liveness test. The parameter set is transmitted to a user device, thereby causing the user device to perform the liveness test according to the parameter set, whereby the user device can only provide the expected outputs therein if it also has access to its own version of the shared secret. Results of the liveness test performed at the user device according to the first parameter set are received from the user device. The parameter set and the shared secret stored at the liveness detection system are used at the liveness detection system to determine the expected outputs. The results of the liveness test are compared with the determined expected outputs to determine whether the behaviour of an entity that was subject to the liveness test performed at the user device is an expected reaction to the expected outputs, thereby determining from the entity's behaviour both whether the entity is a living being and whether the user device is one of the authorized user device(s).
In embodiments, the shared secret may define a restricted subset of a set of available display locations, wherein the parameter set defines one or more available display locations selected at random from the restricted subset, and wherein the expected outputs are provided by displaying one or more display elements at the one or more randomly selected available display locations on a display of the user device.
The behaviour may be eye movements exhibited by at least one eye of the entity during the displaying of the one or more display elements at the user device and conveyed by the received results, the expected reaction being an expected movement of the eye, whereby it is determined both whether the entity is a living being and whether the user device is one of the authorized user device(s) from the entity's eye movements.
The shared secret may for example define an elliptic curve.
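For illustration, the combination of a shared curve and randomly selected one-dimensional parameters might look as follows; the parameterisation of the ellipse by centre, semi-axes and rotation, and the specific values, are assumptions made for this sketch rather than details taken from the description.

```python
import math
import random

# Shared secret: a user-defined ellipse, parameterised by a small number of values
# (centre, semi-axes, rotation). Both the device and the server hold a copy.
CURVE = {"cx": 160.0, "cy": 240.0, "a": 120.0, "b": 60.0, "theta": 0.3}

def point_on_curve(t: float, curve=CURVE) -> tuple:
    """Map a one-dimensional parameter t in [0, 1) (a 'length along the curve')
    to a two-dimensional display location on the ellipse."""
    angle = 2 * math.pi * t
    x = curve["a"] * math.cos(angle)
    y = curve["b"] * math.sin(angle)
    ct, st = math.cos(curve["theta"]), math.sin(curve["theta"])
    return (curve["cx"] + x * ct - y * st, curve["cy"] + x * st + y * ct)

# The system selects the random one-dimensional parameters of the test...
et_params = [random.random() for _ in range(3)]
# ...and only a device that also knows the curve can recover the same
# display locations that the liveness detection system expects.
expected_locations = [point_on_curve(t) for t in et_params]
```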
According to a fourth aspect of the present invention, a liveness detection system comprises: computer storage storing a shared secret known only to the liveness detection system and one or more authorized user devices; and a set of one or more processing units, the set configured to perform operations of: selecting at random a set of one or more parameters of a liveness test which, when combined with the shared secret, define expected outputs that should be provided in the liveness test; transmitting the parameter set to a user device, thereby causing the user device to perform the liveness test according to the parameter set, whereby the user device can only provide the expected outputs therein if it also has access to its own version of the shared secret; receiving from the user device results of the liveness test performed at the user device according to the first parameter set; using the parameter set and the shared secret stored at the liveness detection system to determine the expected outputs; and comparing the results of the liveness test with the determined expected outputs to determine whether the behaviour of an entity that was subject to the liveness test performed at the user device is an expected reaction to the expected outputs, thereby determining from the entity's behaviour both whether the entity is a living being and whether the user device is one of the authorized user device(s).
According to a fifth aspect of the present invention, a liveness detection system comprises a controller, a video input, a feature recognition module, and a liveness detection module. The controller is configured to control an output device to provide randomized outputs to an entity over an interval of time. The video input is configured to receive a moving image of the entity captured by a camera over the interval of time. The feature recognition module is configured to process the moving image to detect at least one human feature of the entity. The liveness detection module is configured to compare with the randomized outputs a behaviour exhibited by the detected human feature over the interval of time to determine whether the behaviour is an expected reaction to the randomized outputs, thereby determining whether the entity is a living being.
In embodiments, the human feature that the feature recognition module is configured to detect may be an eye of the entity.
For example, providing the randomized outputs may comprise controlling the output device to emit at least one light pulse having a randomized timing within the moving image, and the expected reaction may be an expected pupillary response to the at least one light pulse. E.g. providing the randomized outputs may comprise controlling the output device to emit at least two light pulses having a randomized separation in time from one another, and the expected reaction may be an expected pupillary response to the at least two light pulses. The output device may be a camera flash or a display.
The liveness detection system may comprise a velocity measurement module configured to compare frames of the moving image to one another so as to generate a velocity distribution of the eye, the velocity distribution representing the rate of change of the diameter of the pupil at different times, said comparison by the liveness detection module comprising comparing the velocity distribution with the expected response. For example, said comparison by the liveness detection module may comprise comparing the velocity distribution with a probability distribution, wherein the probability distribution represents the expected pupillary response.
Alternatively or in addition, said comparison by the liveness detection module may comprise: determining a first time, wherein the first time corresponds to a local maximum of the velocity distribution; determining a second time, wherein the second time corresponds to a local minimum of the velocity distribution, the local minimum occurring immediately before or immediately after the local maximum; and determining a difference between the first and second times and comparing the difference to a threshold. For example, respective differences may be determined between the first time and two second times, one corresponding to the local minimum immediately before the local maximum and one corresponding to the local minimum occurring immediately after the local maximum, and each may be compared to a respective threshold.
The entity may be determined to be a living being only if each of the two differences is below its respective threshold, and the velocity distribution matches the probability distribution. The output device may be a display.
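A simplified sketch of this timing check follows, assuming the pupil diameter has been estimated per frame and approximating the velocity distribution with finite differences; the thresholds are illustrative placeholders and the separate comparison against a probability distribution is omitted here.

```python
import numpy as np

def passes_velocity_timing_check(diameters, frame_times,
                                 rise_threshold_s=0.5, fall_threshold_s=1.5):
    """Locate the local maximum of the pupil-diameter velocity (fastest change)
    and the minima either side of it, and require the time differences to fall
    below the (illustrative) thresholds."""
    times = np.asarray(frame_times, dtype=float)
    velocity = np.abs(np.gradient(np.asarray(diameters, dtype=float), times))

    peak = int(np.argmax(velocity))                   # first time: local maximum
    before = int(np.argmin(velocity[:peak + 1]))      # minimum at or before the peak
    after = peak + int(np.argmin(velocity[peak:]))    # minimum at or after the peak

    return (times[peak] - times[before] <= rise_threshold_s and
            times[after] - times[peak] <= fall_threshold_s)
```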
Providing the randomized outputs may comprise controlling the display to display a display element at a random location of the display, and the expected reaction may be an expected movement of the eye.
The liveness detection system may comprise: a spatial windowing module
configured to identify, for each of a plurality of frames of the moving image, an iris area, the iris area corresponding to the iris of the eye in the frame; and an analysis module configured to, for each of a plurality of regions of the iris area, generate a histogram of pixel values within that region for use in tracking movements of the eye, the liveness detection module being configured to perform said comparison by comparing the histograms with the expected movement.
For example, the liveness detection module may be configured to perform said comparison by comparing the histograms with a probability density function representing the expected movement.
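A minimal sketch of this histogram-based tracking is given below, assuming the iris window has already been located and cropped to a grayscale array and using the extreme two-bin (dark/light) quantisation described earlier; the 3x3 grid and the darkness threshold are illustrative choices.

```python
import numpy as np

def block_histograms(iris_window: np.ndarray, grid=3, dark_threshold=60):
    """Split a grayscale iris window into grid x grid blocks and compute a
    two-bin (dark, light) histogram for each block."""
    h, w = iris_window.shape
    bh, bw = h // grid, w // grid
    histograms = []
    for row in range(grid):
        for col in range(grid):
            block = iris_window[row * bh:(row + 1) * bh, col * bw:(col + 1) * bw]
            dark = int(np.count_nonzero(block < dark_threshold))
            histograms.append((dark, block.size - dark))   # (dark count, light count)
    return histograms

def pupil_block(histograms) -> int:
    """The block dominated by dark pixels is taken to contain the pupil;
    comparing this index across frames gives the eye movement, which can then
    be checked against the expected (randomised) display locations."""
    return max(range(len(histograms)), key=lambda i: histograms[i][0])
```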
Alternatively or in addition, the liveness detection system may comprise: a spatial windowing module configured, for each of a plurality of frames of the moving image, to divide at least a portion of that frame into a plurality of blocks, each block formed of one or more respective sub-blocks, each sub-block formed of one or more respective pixels; and an analysis module configured to assign to each block a respective block value based on its one or more respective sub-blocks, the liveness detection module being configured to perform said comparison by comparing the block values with the expected movement.
For example, each sub-block may be formed of multiple pixels, and/or each block may be formed of multiple sub-blocks.
The analysis module may be configured to assign to each sub-block a binary value by detecting whether or not at least a predetermined proportion of its respective pixels have intensities below an intensity threshold, the block value of each block being assigned by combining the binary values assigned to its respective sub-blocks.
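A minimal sketch of this block-value assignment follows, assuming eight sub-blocks per block whose binary values are concatenated into a single byte; the sub-block grid, the intensity threshold and the 50% proportion are illustrative choices rather than values fixed by this description.

```python
import numpy as np

def sub_block_bit(sub_block: np.ndarray, intensity_threshold=60, proportion=0.5) -> int:
    """A sub-block is assigned '1' if at least the given proportion of its pixels
    are darker than the intensity threshold, else '0'."""
    return int(np.count_nonzero(sub_block < intensity_threshold) >= proportion * sub_block.size)

def block_value(block: np.ndarray, grid=(2, 4)) -> int:
    """Concatenate the binary values of the block's sub-blocks (here 2x4 = 8 bits)
    into a single block value; changes in this value between frames signal a
    moving dark edge such as the pupil boundary."""
    rows, cols = grid
    h, w = block.shape
    sh, sw = h // rows, w // cols
    value = 0
    for r in range(rows):
        for c in range(cols):
            bit = sub_block_bit(block[r * sh:(r + 1) * sh, c * sw:(c + 1) * sw])
            value = (value << 1) | bit
    return value
```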
The pixel intensities may be determined by converting the plurality of frames from a colour format into a grayscale format. Alternatively or in addition, providing the randomized outputs may further comprise accessing user-created data, held in a first memory local to the output device, which defines a restricted subset of locations on the display, the random location being selected at random from the restricted subset, wherein the system is also configured to compare the behaviour exhibited by the eye with a version of the user-created data held in a second memory remote from the output device. For example, the user-created data may define a two-dimensional curve, the restricted subset being the set of points on the curve.
The first memory and the output device may be integrated in a user device.
Where the human feature is an eye, the behaviour that is compared with the randomized outputs may be at least one of: changes in the size of the pupil of the eye over time; changes in an iris pattern of the eye over time; and eye movements exhibited by the eye.
Alternatively or in addition, providing the randomized outputs may comprise controlling the output device to output at least one randomly selected word; the human feature that the feature recognition module is configured to detect may be a mouth of the entity, and the expected response is the user speaking the word, the movements of the mouth being compared to the random word using a lip reading algorithm.
In any of the above examples, the liveness detection system may comprise an access module configured to grant the entity access to a remote computer system only if they are determined to be a living being.
The liveness detection module may be configured to output at least one of: a confidence value which conveys a probability that the entity is a living being, and a binary classification of the entity as either living or non-living.
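For example, where two such confidence values are produced (as in the third embodiment), they might be combined and thresholded along the following lines; the weight and threshold are purely illustrative.

```python
def classify_liveness(p_first: float, p_second: float,
                      weight_first: float = 0.5, threshold: float = 0.8):
    """Combine per-test confidence values into an aggregate probability by
    weighted averaging and derive a binary living/non-living classification."""
    aggregate = weight_first * p_first + (1.0 - weight_first) * p_second
    return aggregate, aggregate >= threshold
```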
According to a sixth aspect of the present invention, a computer-implemented liveness detection method comprises: controlling an output device to provide randomized outputs to an entity over an interval of time; receiving a moving image of the entity captured by a camera over the interval of time; processing the moving image to detect at least one human feature of the entity; and comparing with the randomized outputs a behaviour exhibited by the detected human feature over the interval of time to determine whether the behaviour is an expected reaction to the randomized outputs, thereby determining whether the entity is a living being.
Any of the features of any one of the above aspects or any embodiment thereof may be implemented in embodiments of any of the other aspects. Any of the methods disclosed herein may be implemented by logic (e.g. software modules) of a corresponding system. Similarly, any of the system functionality disclosed herein may be implemented as steps of a corresponding method.
According to another aspect of the present invention, a computer program product comprises code stored on a computer-readable storage medium and configured when executed to implement any of the method steps or system functionality disclosed herein.
Brief Description of Figures
To aid understanding of the present invention, and to show how the same may be carried into effect, reference is made by way of example to the following figures, in which:
Figure 1 shows a block diagram of a computer system;
Figures 2A, 2B and 2C show various functional modules of a liveness detection system in a first embodiment of the present invention; Figure 2D shows a flow chart for a liveness detection method in the first
embodiment;
Figure 3 illustrates some of the principles of an image stabilization technique; Figure 4 demonstrates a pupil's response to a light pulse stimulus during a liveness detection process;
Figures 5A-5C are graphs showing how the pupillary area of an eye responds to a light pulse stimulus;
Figure 5D is a graph showing how the pupillary area responds to two light pulses in relatively quick succession;
Figures 6A and 6B show various functional modules of a liveness detection system in a second embodiment of the present invention;
Figure 6C shows a flow chart for a liveness detection method in the second embodiment; Figure 7A illustrates a display element exhibiting randomized motion;
Figure 7B illustrates movements of an eye when tracking a visible element;
Figures 8 and 9 illustrate some of the principles of an eye tracking technique; Figures 10A and 10B illustrate a process by which histograms describing movements of an eye can be generated;
Figure 11 shows a signalling diagram for a liveness detection technique according to a third embodiment;
Figures 12A and 12B illustrate some principles of a liveness detection technique that is based in part on a shared secret between a user device and a server;
Figure 13 illustrates how an eye movement is manifest in a sequence of grayscale video frame images;
Figures 14A and 14B illustrate a novel motion binary pattern technique.
Figure 15 shows a signalling diagram for a liveness 'transactional' detection technique similar to that of figure 11.
Detailed Description of Preferred Embodiments
The preferred embodiments of the present invention that are described below are implemented based on a comparison of biometric data with probability density functions that have been generated from closed-form equations - no machine learning is required.
Figure 1 shows a block diagram of a computer system, which comprises a user device 104 available to a user 102; a computer network 118; and a remote computer system 130, i.e. remote from the user device 104. The user device 104 and remote system 130 are both connected to the network 118, so that data can be transmitted and received between the user device 104 and the remote system 130.
The user device 104 is a computer device which can take a number of forms, such as a mobile device (smartphone, tablet etc.), laptop or desktop computer etc. The user device 104 comprises a display 106; a camera 108 and camera flash 110; a network interface 116; and a processor 112, formed of one or more processing units (e.g. CPUs), to which each of the aforementioned components of the user device 104 is connected. The processor 112 is configured to execute code, which includes a liveness detection application ("app") 114. When executed on the processor 112, the liveness detection app 114 can control the display 106, camera 108 and flash 110, and can transmit and receive data to and from the network 118 via the network interface 116.
The camera 108 is capable of capturing a moving image i.e. video formed of a temporal sequence of frames to be played out in quick succession so as to replicate continuous movement, that is outputted as a video signal from the camera 108. Each frame is formed of a 2-dimensional array of pixels (i.e. image samples). For example, each pixel may comprise a three-dimensional vector defining the chrominance and luminance of that pixel in the frame.
The camera 108 is located so that the user 102 can easily capture a moving image of their face with the camera 108. For example, the camera 108 may be a front-facing camera integrated in a smartphone, tablet or laptop computer screen, or an external webcam mounted on a laptop or desktop display screen.
The flash 110 is controllable to emit relatively high intensity light. Its primary function is to provide a quick burst of illumination to illuminate a scene as the camera 108 captures an image of the scene, though some modern user devices such as smartphones and tablets also provide for other uses of the camera flash 110 e.g. to provide continuous illumination in a torch mode.
The display 106 outputs information to the user 102 in visual form, and may for example be a display screen. In some user devices, the display screen may incorporate a touch screen so that it also functions as an input device for receiving inputs from the user 102.
The remote system 130 comprises at least one processor 122 and a network interface 126 via which the processor 122 of the remote system is connected to the network 118. The processor 122 and network interface 126 constitute a server 120. The processor is configured to execute control code 124 ("back-end software"), which cooperates with the liveness detection app 114 on the user device 104 to grant the user device 104 access to the remote system 130, provided certain criteria are met. For example, access to the remote system 130 using the user device 104 may be conditional on the user 102 successfully completing a validation process.
The remote system 130 may for example comprise a secure data store 132, which holds (say) the user's personal data. In order to keep the user's data secure, the back-end software 124 makes retrieval of the user's personal data from the database 132 using the user device 104 conditional on successful validation of the user 102.
Embodiments of the present invention can be implemented as part of the validation process to provide a validation process that includes a liveness detection element. That is, access to the remote system 130 may be conditional on the user 102 passing a liveness detection test to demonstrate that they are indeed a living being. The validation process can also comprise other elements, e.g. based on one or more credentials, such as a username and password, so that the user 102 is required not only to demonstrate that they are what they say they are (i.e. a living being) but also that they are who they say they are (e.g. a particular individual) - note however that it is the former that is the focus of the present disclosure, and the liveness detection techniques can be implemented separately and independently from any identity check or without considering identity at all.

Figure 2A shows a liveness detection system 200a in a first embodiment of the present invention. The liveness detection system 200a comprises the following functional modules: a liveness detection controller 218 connected to control the camera 108 and flash 110 (or alternatively the display 106); an image stabilizer 204 having an input connected to receive a video signal from the camera 108; a corner detector 202 having an input connected to receive the video signal and an output connected to the image stabilizer 204; an iris detector 205 having an input connected to receive a stabilized version of the image signal from the image stabilizer 204; a diameter estimator 206 having an input connected to an output of the iris detector 205 and an output; first, second and third time differential modules 208a, 208b, 208c, each having a respective input connected to the output of the diameter estimation module 206; first, second and third accumulators 210a, 210b, 210c having respective inputs connected to the outputs of the first, second and third time differential modules 208a, 208b, 208c respectively; a first liveness detection module 212a having first, second and third inputs connected to outputs of the first, second and third accumulators 210a, 210b, 210c respectively; and a randomized generator 219 which generates randomized (e.g. random or pseudo-random) data Rn, and outputs the randomized data Rn to both the liveness detection controller 218 and the liveness detection module 212a. The modules 208a,..., 210c constitute a velocity measurement module 213.
Figure 2B shows additional details of the liveness detection module 212a. The liveness detection module 212a comprises first, second and third comparison modules 231a, 231b, 231c having inputs connected to the outputs of the first, second and third accumulators 210a, 210b, 210c respectively; and a decision module 238 connected to receive inputs from each of the comparison modules 231a, 231b, 231c.
Figure 2C shows how each of the comparison modules 231a, 231b, 231c (for which the general reference sign 231 is used) comprises a distribution fitting module 232, a global maximum estimation module 234, and a global minimum estimation module 236. The decision module 238 has inputs connected to outputs of the modules 232, 234, 236, and an additional input connected to receive the randomized data Rn. The randomized data Rn is in the form of one or more randomly generated
parameters of the liveness detection process of the first embodiment, referred to as a pupil dilation ("PD") parameter set in the context of this embodiment.
The functional modules of the liveness detection system 200a are software modules, representing functionality implemented by executing the liveness detection app 114 on the user device 104, or by executing the back-end software 124 on the server 120, or a combination of both. That is, the liveness detection system 200a may be localized at a single computer device, or distributed across multiple computer devices. The liveness detection system outputs a binary classification of the user 102, classifying the user 102 as either living or non-living, which is generated by the liveness detection module 212a based on an analysis of a moving image of the user's face captured by the camera 108.
The liveness detection system 200a of the first embodiment implements a technique for anti-spoofing based on pupillary light reflex. The technique will now be
described with reference to figure 2D, which is a flow chart for the method.
Before commencing the technique, the liveness detection app 114 outputs an instruction to the user 102 that they should look at the camera 108, so that their face is within the camera's field of view. For example, the app 114 may display a preview of the video captured by the camera, with instructions as to how the user should correctly position their face within the camera's field of view.
In a liveness test performed according to Rn, the liveness detection controller 218 controls the camera 108 and camera flash 110 (or the brightness level of the display 106) of the user device 104 to perform the following operations. The camera flash 110 (or display 106) emits randomly modulated light pulses with a frequency of more than 0.33 Hz (~1 pulse every 3 seconds). The camera 108 starts recording video frames the moment that the flash 110 (or display 106) starts emitting the light pulses.
Each video frame comprises a high-resolution image of at least one of the user's eyes (right or left).
The recording continues for about three seconds in total, so as to capture a three second moving image of the user's face i.e. three seconds worth of video frames (typically between about 60 and 90 video frames for a conventional smartphone or tablet).
The light pulses are modulated based on the PD parameter set Rn, as generated by the randomized generator 219, in the following manner. At least two light pulses are emitted within the three second window - one at the start of the interval when recording commences, and at least one more whilst the recording is in progress. The two light pulses are separated in time by a randomly chosen time interval δt that is defined by the PD parameter set Rn. In some implementations, three or four (or more) light pulses may be used, all having random temporal separations relative to one another.
The intensity of each of the later light pulse(s) is greater than that of the light pulse(s) preceding it. If light pulses of the same intensity were used each time, the pupillary response would diminish with each pulse due to the eye becoming accustomed to the light level of the pulses. Increasing the intensity of each pulse ensures a measurable physiological reaction by the pupil to each light pulse.
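By way of illustration, a pulse schedule defined by a PD parameter set of this kind could be generated as in the following Python sketch. The function name, default window length and intensity step are illustrative assumptions rather than values taken from the embodiments; the sketch simply captures the two properties described above, namely randomly chosen temporal separations and a pulse intensity that increases from pulse to pulse.

```python
import random

def generate_pulse_schedule(num_pulses=2, window_s=3.0,
                            min_separation_s=0.5,
                            base_intensity=0.4, intensity_step=0.2):
    """Hypothetical sketch of a PD parameter set: pulse times and intensities.

    The first pulse is emitted at t=0 (when recording starts); each later
    pulse follows after a randomly chosen separation, and each pulse is
    brighter than the one before it so that the pupil keeps producing a
    measurable contraction."""
    times = [0.0]
    for _ in range(num_pulses - 1):
        latest = max(min_separation_s, window_s - times[-1] - 0.1)
        times.append(times[-1] + random.uniform(min_separation_s, latest))
    intensities = [min(1.0, base_intensity + i * intensity_step)
                   for i in range(num_pulses)]
    return list(zip(times, intensities))

# Example: two pulses, the second at a random offset (the interval δt) after the first.
schedule = generate_pulse_schedule()
```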
Every video frame that is recorded is timestamped i.e. associated with a time value defining when it was captured relative to the other frames. This enables the behaviour and position of the user's iris to be known precisely for each desired time interval. The notation F_t is used to represent a video frame having timestamp t hereinbelow and in the figures. The following steps are then performed for each video frame F_t, for one of the user's eyes (or for each eye separately).
At step S202, corner detection techniques are used to detect two reference points of the eye in the frame F_t - shown in figure 3 and labelled "A" and "B" - corresponding to the corners of the eye. Image stabilization is used to place the points A and B on a reference plane P common to all frames i.e. a rotational transformation is applied to the frame F_t, as necessary, so that the points A and B in all of the stabilized (i.e. rotated) versions of the video frames lie in the same reference plane P. In this example, the plane P is the horizontal plane in the coordinate system of the frames, meaning that the points A and B lie at the same vertical position in all of the stabilized versions of the frames. This enables the movements and the size of the pupil to be isolated, as it removes the effects caused by any rotation of the user's head as a whole during the capturing of the video. The notation SF_t is used to represent the stabilized version of the frame F_t.
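A minimal sketch of this stabilization step, assuming the corner points A and B have already been detected and using OpenCV for the rotation, might look as follows; the exact sign convention depends on the image coordinate system and the detector used.

```python
import numpy as np
import cv2

def stabilize_frame(frame, corner_a, corner_b):
    """Rotate the frame so that the line through the detected eye corners
    A and B lies in the common reference plane P (here, horizontal),
    removing the effect of head roll.  Sketch only."""
    (ax, ay), (bx, by) = corner_a, corner_b
    angle = np.degrees(np.arctan2(by - ay, bx - ax))   # tilt of the A-B line
    centre = ((ax + bx) / 2.0, (ay + by) / 2.0)
    rotation = cv2.getRotationMatrix2D(centre, angle, 1.0)
    h, w = frame.shape[:2]
    return cv2.warpAffine(frame, rotation, (w, h))     # stabilized frame SF_t
```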
At step S204 the iris is detected and isolated in the stabilized video frame SF_t, using machine learning and blob detection techniques. The application of such techniques to iris detection is known in the art.
The diameter of the iris remains substantially constant - any changes in the diameter of the iris in the video can be assumed to be caused by movement of the user's head. These could be accounted for e.g. by applying a scaling transformation to the video frames to keep the iris diameter constant in the stabilized frames, though in practice this may be unnecessary as the iris diameter will remain substantially constant in the video provided the user keeps their head still during the recording. By contrast, the diameter of the pupil changes in response to the light pulses - this is a physiological response to the light pulses, and is used as the basis for liveness detection in this first embodiment.
At step S205, the diameter "D_t" of the pupil in the frame SF_t is estimated in pixels.
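The embodiment uses machine learning and blob detection for this step; as a simplified, hedged stand-in, the diameter could be estimated from the largest dark blob in the stabilized eye region, for example:

```python
import cv2

def estimate_pupil_diameter(stabilized_eye_gray, dark_threshold=50):
    """Simplified sketch: estimate the pupil diameter D_t (in pixels) by
    thresholding the dark pupil and measuring the largest resulting blob.
    The threshold value is an illustrative assumption."""
    _, mask = cv2.threshold(stabilized_eye_gray, dark_threshold, 255,
                            cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    pupil = max(contours, key=cv2.contourArea)      # largest dark blob
    (_, _), radius = cv2.minEnclosingCircle(pupil)
    return 2.0 * radius                             # D_t in pixels
```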
Figure 5A shows a graph illustrating how the diameter of the pupillary area is expected to change over time in response to a light pulse stimulus at time T1. The graph tracks the change of the pupillary area of the eye after a light stimulus of medium/high intensity is applied to the eye.
Immediately following the stimulus the pupil rapidly contracts (i.e. its diameter decreases) until reaching a maximum contraction (minimum diameter) at time T2, after which it gradually dilates (i.e. its diameter increases) back towards its original size. At time T3, approximately 1 second after the stimulus time T1, there is a noticeable inflection in the response curve i.e. a relatively sudden decrease in the rate of pupil dilation. This is called the "dilation break". Figure 5B shows the rate of change in the pupil diameter over the same time interval i.e. the velocity of contraction (positive velocity) or dilation (negative velocity). The pupil diameter exhibits rapid, essentially random fluctuations. Nevertheless, the velocity response has an overall structure over larger time scales that is still evident.
Figure 5C shows a smoothed version of the velocity curve of figure 5B, in which the rapid fluctuations have been averaged out by taking a windowed average of the velocity curve with a window large enough to eliminate the fluctuations but small enough to preserve the overall structure.
As can be seen in figure 5C, at time T2 the velocity is zero. Between T1 and T2, the velocity reaches its local peak value (i.e. local maximum) at a time g_max. The smoothed velocity curve has local minima immediately to the left and right of time g_max, at times g_minL and g_minR respectively. These are immediate in the sense of being closest in time i.e. such that there are no other local minima between g_max and g_minL or between g_max and g_minR.
The time g_minL is near to the time T1 that the stimulus is applied. The time g_minR is after the time T2 (at which the pupil stops contracting and starts dilating) but before T3 (the dilation break, at which the pupil dilation slows suddenly). That is, g_minR occurs in the well-defined temporal range between T2 and T3.
The physiological response by a real, human pupil to the stimulus is such that g_max, g_minL and g_minR are expected to satisfy a certain relationship - specifically that g_max - g_minL is no more than a first known value Δt1 and g_minR - g_max is no more than a second known value Δt2. The second interval Δt2 is of the order of a second, whereas the first interval Δt1 is at least an order of magnitude lower. At the times g_max, g_minL and g_minR, the acceleration of the pupil is zero.
As mentioned above, at least two light pulses, having a random temporal separation, are emitted whilst the video is recorded. Figure 5D shows the pupillary response to the two light pulses separated in time by an interval δt. In response to the first pulse, the pupillary area traces the response curve of figure 5A, until the second pulse is applied causing the eye to retrace the response curve of figure 5A a second time. The intensity of the second pulse is greater than that of the first pulse by an amount such that the second pulse causes substantially the same level of contraction as the first pulse. That is, the curve of figure 5D corresponds to two instances of the curve of figure 5A, separated in time by δt.
Figures 5A-5D are provided simply to aid illustration - as will be appreciated, they are highly schematic and not to scale. Velocity curves computed from measurements of real human eyes may exhibit more complex structures, but are nevertheless expected to satisfy the aforementioned relationship.
A differential dD of the pupil diameter D_t is estimated on different time intervals by:
• Comparing the pupil diameter D_t at time t with the pupil diameter D_(t-1) at time t-1, to compute a difference "dD1_t" between D_t and D_(t-1) - e.g. dD1_t = D_t - D_(t-1) (S206a);
• Comparing the pupil diameter D_t at time t with the pupil diameter D_(t-3) at time t-3, to compute a difference "dD3_t" between D_t and D_(t-3) (S206b);
• Comparing the pupil diameter D_t at time t with the pupil diameter D_(t-6) at time t-6, to compute a difference "dD6_t" between D_t and D_(t-6) (S206c).
Figure 4 illustrates an example of a differential dD between time t and time t+n (n=1, 3 or 6). At steps S208a, S208b, S208c respectively, each of the diameter differentials dD_t is accumulated over time to form a respective velocity distribution in the form of a time series of differential values:
(..., dD_T, dD_(T+1), dD_(T+2), ...)
to which a moving average is applied in order to smooth it. The smoothed version of each time series is denoted TS1, TS3 and TS6 respectively, below and in the figures, and describes the rate at which the size of the pupil is changing at different points in time (i.e. the pupil's velocity). The liveness detection module 212a analyzes each time series TS1, TS3, TS6 in order to identify whether it fits closely to a Weibull probability density function (PDF), sometimes referred to as a Fréchet PDF. The analysis is performed by computing a fit measure for the time series.
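The differencing and smoothing performed by the modules 208a-210c can be sketched as follows; the moving-average window length is an illustrative assumption.

```python
import numpy as np

def velocity_time_series(diameters, lag, window=5):
    """Form the differential series dD_t = D_t - D_(t-lag) from the per-frame
    pupil diameters and smooth it with a moving average.  Sketch only."""
    d = np.asarray(diameters, dtype=float)
    diffs = d[lag:] - d[:-lag]                        # dD_t for each frame with a valid lag
    kernel = np.ones(window) / window
    return np.convolve(diffs, kernel, mode="valid")   # smoothed series TS

# The three series analysed by the liveness detection module 212a:
# TS1, TS3, TS6 = (velocity_time_series(D, n) for n in (1, 3, 6))
```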
The Weibull PDF represents the expected pupillary response to the two light pulses as illustrated in figure 5D, and has predetermined coefficients, set to match the expected behaviour of human eyes in response to the two light pulses, as illustrated in figures 5A-5D. The behaviour of human eyes is sufficiently predictable across the human population that most human pupils will exhibit a physiological response to each light pulse that fits the Weibull PDF to within a certain margin of error. No machine learning techniques are needed to set the parameters - the Weibull PDF can be computed efficiently (i.e. using minimal processing resources) based on closed-form equations for a given value of δt.
An example of a suitable fit measure is a weighted sum of squared errors R2, defined as:
R2 = Σ_(o ∈ TS) (o - e_o)^2 / σ^2
where o is an element of the smoothed time series TS (the summation being over all elements in TS), e_o is the value of o predicted by the PDF, and σ^2 is the variance of the time series TS. Three R2 metrics are computed separately - one for each smoothed time series TS1, TS3, TS6.
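A direct sketch of this fit measure, with the Weibull predictions supplied as an array of expected values, is:

```python
import numpy as np

def weighted_squared_error(ts, predicted):
    """Sum of squared errors between the smoothed time series TS and the
    values predicted by the PDF, weighted by the variance of TS."""
    ts = np.asarray(ts, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return np.sum((ts - predicted) ** 2) / np.var(ts)
```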
For each of the at least two light pulses, a local maximum g_max of each time series is computed as:
g_max = arg max_t (dD_t)
That is, g_max is the time t at which the rate of contraction in response to the applicable light pulse is greatest (see figure 5C, and the accompanying text above). The times g_minL and g_minR immediately before and after g_max are also computed for each light pulse and for each time series TS1, TS3, TS6.
To determine whether the user is a living being or not, the decision module 238 performs the following operations for each time series TS1, TS3, TS6. The weighted error measure R2 of that time series is compared with a threshold.
For each light pulse, a time difference g_max - g_minL between g_minL and g_max is computed, and compared to a first time threshold dt_threshold1. A time difference g_minR - g_max between g_max and g_minR is also computed and compared to dt_threshold2. The first and second time thresholds dt_threshold1, dt_threshold2 are set to match the expected time differences Δt1 and Δt2 respectively (see text above accompanying figure 5C).
The separation in time between the two response curves is measured for each time series, for example by measuring the separation in time between the times of peak contraction at which dD_t = 0, i.e. corresponding to times t1 and t2 in figure 5D. For each additional light pulse used in excess of two, an equivalent temporal separation is measured and compared to the known (random) separation of that light pulse from one of the other light pulses.
If and only if:
• R2 is below the threshold (indicating a good fit of the time series TS to the Weibull PDF) for each time series;
• both of the time differences are within their respective time thresholds dt_threshold1, dt_threshold2 for each time series and each light pulse; and
• the measured separation between times t1 and t2 matches the random time interval δt i.e. t2 - t1 = δt to within a predetermined margin of error for each time series (and for each δt in the case of three or more light pulses)
then the decision module 238 concludes that the user 102 is alive. That is, if and only if all of these criteria are fulfilled does the liveness detection system 200a conclude that the user 102 is a living being. In the event that an entity masquerading as a living being assumes the role of the user 102, such as a photograph or detailed model of the user 102, it will not exhibit the necessary pupil response to satisfy these criteria, so the system will correctly identify it as non-living. Whilst the above uses a Weibull PDF, more generally, any suitable extreme value theory probability distribution function can be used in place of the Weibull PDF to model the expected response of the human pupil of figure 5A and thereby achieve the same effect.
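Collecting the criteria above, the per-time-series decision can be sketched as a simple conjunction; the threshold arguments are placeholders for the tuned values dt_threshold1, dt_threshold2 and the R2 threshold discussed above.

```python
def classify_living(fit_r2, r2_threshold,
                    dt_left, dt_right, dt_threshold1, dt_threshold2,
                    measured_separation, expected_separation, margin):
    """Per-time-series decision sketch: living only if the fit measure, the
    g_max/g_min timing relationships and the measured pulse separation all
    satisfy their respective criteria."""
    good_fit = fit_r2 < r2_threshold
    good_timing = (dt_left <= dt_threshold1) and (dt_right <= dt_threshold2)
    good_separation = abs(measured_separation - expected_separation) <= margin
    return good_fit and good_timing and good_separation
```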
The random separation δt between the two light pulses is of the order of one second, corresponding to low frequency modulations. To provide additional robustness, randomized high frequency modulations of the light pulse can also be introduced. In this case, the high frequency modulations are compared with reflections from the eye and a match between the reflections and the high-frequency modulations is also required for the entity to be identified as living.
The technique of the first embodiment can also be implemented using a single light pulse, at a random time relative to the start of the video. The pupillary response in the video is compared with the random timing to check whether it matches.
A liveness detection technique which uses modulated illumination is disclosed in published UK patent application GB2501362. GB2501362 relates to an
authentication process, in which a code is sent from a server to a user-device equipped with a source of illumination and a camera capable of capturing video imagery of an online user. The user device modulates the source of illumination in accordance with the code, and at the same time captures video imagery of the user.
In GB2501362, the pattern of illumination on the user's face is extracted and analyzed to deduce the code used to control it. The extracted code is then compared to the transmitted code. However, GB2501362 fails to recognize the possibility of using an expected physiological, human response to randomized light pulse illumination as a basis for liveness detection.

Figure 6A shows a block diagram of a liveness detection system 200b in a second embodiment. The system 200b implements a technique for anti-spoofing based on tracking the iris movement by presenting elements at random positions of the screen. In the second embodiment, in a liveness test performed according to Rn, the
liveness detection controller controls the display 106 and the camera 108 of the user device. In the second embodiment, the liveness detection controller uses
randomized data Rn generated by the randomized generator 219 to display randomized display elements at randomized locations on the display 106 in a
liveness detection test. The randomized data Rn is in the form of one or more parameters that define the display locations, referred to as an eye tracking ("ET") parameter set in the context of the second embodiment. The corner detector 202, image stabilizer 204 and iris detector 205 are connected as in the first embodiment, and perform the same operations. The system 200b comprises the following functional modules, in addition: a spatial windowing module 207 having an input connected to an output of the iris detection module 205; a pattern analysis module 209 having a first input connected to an output of the spatial windowing module 207 and a second input connected to receive the stabilized video frames from the image stabilizer 204; and a second liveness detection module 212b.
Figure 6B shows additional details of the pattern analysis and liveness detection modules 209, 212b. The pattern analysis module comprises a plurality of histogram determination modules h1,...,h9 (nine in this example), each of which is connected to receive the current stabilized video frame SF_t and has a respective output connected to a respective first input of the liveness detection module 212b.
Returning to figure 6A, the liveness detection module 212b has a second input connected to receive the randomized data Rn, and outputs a binary classification of an entity subject to the test (the user 102 in this example) as living or non-living, as in the first embodiment.
The liveness detection technique implemented by the system 200b of the second embodiment will now be described with reference to figure 6C, which is a flow chart for the method. The method is based on tracking the iris movement by presenting elements at random positions of the screen. A device displays a 'following element' on its screen that moves to predetermined positions (randomly assigned) that correspond to a block of a square grid of predefined size. These positions are not set by the user and are intended to guide the user's gaze so that their eye movements can be tracked. The user is requested to track the random movements with their eyes. During the whole process the device is recording the eye movements of the user.
The steps of the technique are the following: At step S602, the liveness detection controller 218 controls the display 106 of the user device 104 to display a display element on its screen which moves between random display locations, defined by the randomized data Rn.
This is illustrated in figure 7A, which shows a display element moving from a randomly chosen location in the bottom block ("9") of the grid to the top middle block ("2") of the grid. The possible display locations correspond to the blocks
(equivalently referred to herein as "regions" or "sectors") of a 3x3 grid defined in relation to the display 106, which are predetermined by the system 200b (e.g. at the user device 104) but not by the user 102. The user 102 is requested to track the random movements with their eyes. A high-resolution moving image of the user's face is captured by the camera 108 as the user follows the moving display element (S604).
Every video frame that is recorded is timestamped in order to know precisely the behaviour and position of the iris for each time interval, exactly as in the method of the first embodiment.
The following operations are performed for each frame F_t of the moving image, for at least one of the user's eyes (or for each eye separately).
At step S606, corner detection and image stabilization algorithms are applied to the frame F_t in order to place the points A and B on the same plane P so that the movements and the size of the iris can be isolated. Step S606 corresponds exactly to step S202 of figure 2D, and the description applies equally in this instance. The iris in the frame F_t is detected using machine learning and blob detection techniques (S608, corresponding exactly to step S204 of figure 2D).
As the display element E moves across the display, the eye follows it causing discernible motion of the pupil relative to the plane P. This is illustrated in figure 7B (note the eye movement illustrated in figure 7B does not correspond to the display element movement shown in figure 7A).
At step S610, a window region around the iris ("iris window") is identified by the spatial windowing module 207, based on the iris detection of step S608. The window W is shown in figure 8. The isolated iris window is divided into a region of 3x3 blocks, labelled "1" through "9" in figure 8, corresponding to the 3x3 blocks of the grid defined in relation to the display. That is, the window W is divided into N blocks, where N=9 in this example.
At step S612, for each block, a respective histogram is generated based on pixel values. Figure 10A illustrates the technique used to generate the respective histogram. Figure 10A shows an exemplary block b formed of an array of pixels. Each of the pixels can take one of three values, represented by different shading (note this is an extremely simplified example presented to aid understanding).
A histogram H(b) for the block b is generated. The histogram H(b) has a bin for each possible pixel value (so three bins in this extremely simplified example), and that bin defines the number of pixels in the block b having that value (i.e. the count for that bin).
In reality, there may be thousands of different pixel values, though in some cases the range may be reduced using suitable quantization i.e. so that each bin of the histogram corresponds to a range of multiple pixel values - in the preferred technique described below, extreme quantization is applied whereby each pixel is quantized to one of two values representing light and dark. An individual histogram (H1,...,H9) is generated in this way for each of the nine blocks of the iris window, as illustrated in figure 10B. The set of nine histograms is denoted H. The histogram for a block b dominated by dark pixels - as occurs when the pupil is in that block in the video - is measurably different from the histogram of a block b dominated by the lighter colours of the iris and/or sclera of the eye. Thus, as the eye moves between different blocks b of the window W (see figure 9), the histograms H change accordingly, in a predictable manner. This allows the movement to be examined without having to rely on machine learning techniques.
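A sketch of this per-block histogram computation over the iris window W, assuming an 8-bit grayscale window and a configurable number of bins, is:

```python
import numpy as np

def block_histograms(iris_window, grid=3, bins=2, value_range=(0, 256)):
    """Split the iris window W into grid x grid blocks and build a histogram
    of (quantized) pixel values for each block - the set H = {H1, ..., H9}
    whose changes are compared against the display element movements."""
    h, w = iris_window.shape[:2]
    bh, bw = h // grid, w // grid
    histograms = []
    for r in range(grid):
        for c in range(grid):
            block = iris_window[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            hist, _ = np.histogram(block, bins=bins, range=value_range)
            histograms.append(hist)
    return histograms
```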
Whenever a movement of the element on the display 106 occurs, a change in the histograms H is expected to occur. Thus, for each movement of the display element on the display 106, the change in the histogram of the iris movement is compared with the change in the location of the display element in order to evaluate if the iris moved to the correct block i.e. as would be expected if the eye were a real human eye tracking the display element. If after a predetermined number of movements the system identified that the user didn't follow the element correctly, the user would be classified as trying to spoof the system.
Figures 10A and 10B illustrate a situation in which pixel values are used to generate histograms for blocks b directly. However, in a preferred technique, the blocks b are first divided into sub-blocks, and block values are assigned based on the sub-blocks. In this preferred technique, before generating the histograms for the frame F_t, the frame F_t is converted to grayscale i.e. each pixel is converted to a single value carrying only intensity information.
As shown in figure 13, binary thresholding is applied to the grayscale image, such that if the value of a pixel is less than a predetermined threshold then its value is set equal to black (value "1"); otherwise set it equal to white (value "0"). This allows the pupil to be isolated completely. The thresholding removes the iris texture (in this example, the movement of the pupil alone is used to track eye movements) Blob detection of high circularity is then applied to the quantized image in order to identify and extract the pupil of the eye, as follows. The location of the pupil is determined by a novel type of Local Binary Pattern, referred to herein as a motion binary pattern (MBP). The motion binary pattern is constructed with the following process:
As shown in figure 14A, a region split is established, whereby each block b is divided into a predetermined number of smaller blocks "sb" ("sub-blocks") arranged as a square - 8x8 sub-blocks sb per block b in this example, so M=64 sub-blocks sb per block b in total. Each sub-block sb comprises one or more pixels - multiple pixels in this example.
If a sub-block sb has a minimum of a third (about 33%) of its pixels equal to black (value "1") then a value of "1" is assigned to the sub-block; otherwise it is assigned a value of "0". In this manner, a sequence "SEQ(b)" of M binary values is assigned to each block b - one binary value per sub-block. In other words, each block b (e.g. each of the 3x3 blocks) has M sub-blocks and each sub-block is composed of square regions of pixels. The value 1 or 0 is assigned to each sub-block (based on the >=33% thresholding).
This allows a block to be represented by a single number ("block value") by concatenating all the binary values of its constituent sub-blocks. For 8x8 (=64) sub-blocks for each block, the single number is in the range of 0 to 2^64. The whole iris window W is represented by N such values - one for each block (where N is the number of blocks e.g. N=9).
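A sketch of the motion binary pattern computation for one block of the binarized iris window (dark pupil pixels assumed to carry the value 1, as described above) is given below; the helper name is illustrative.

```python
import numpy as np

def motion_binary_pattern(block, sub_grid=8, dark_fraction=1.0 / 3.0):
    """Divide a binarized block into sub_grid x sub_grid sub-blocks, assign
    each sub-block a 1 if at least about a third of its pixels are dark,
    then concatenate the bits into a single block value.  Sketch only."""
    block = np.asarray(block)
    h, w = block.shape
    sh, sw = h // sub_grid, w // sub_grid
    value = 0
    for r in range(sub_grid):
        for c in range(sub_grid):
            sb = block[r * sh:(r + 1) * sh, c * sw:(c + 1) * sw]
            bit = 1 if sb.mean() >= dark_fraction else 0
            value = (value << 1) | bit
    return value   # in the range 0 .. 2**(sub_grid * sub_grid) - 1
```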
As illustrated in figure 14B, a pupil movement results in a change of the value of each block within the expected number of sequences. Figure 14B illustrates the change of value of an MBP block b when detecting an edge in motion (the "1" value sub-blocks in figure 14B correspond to the edge region of the dark pupil).
By analysing the sequences of processed eye movements based on the MBP approach and comparing them to the expected sequence of MBP values according to the predefined 'following elements' it is possible for the system to determine if the user is a live person. For a given set of random locations, a probability density function modelling the expected movements of the eye can be cheaply generated, again using closed-form equations (without any need for machine learning) and compared with the observed histograms.
The ordering and labelling of the bins is immaterial, provided whichever convention is adopted is applied consistently.
To further increase the level of robustness of this process, in some cases the display element is a randomly selected word. The word changes when the display element moves. The user is also required to read the word as the test progresses. The moving image is processed to identify the user's lips, and lip-reading techniques are used to determine whether the user is speaking the correct word. In other embodiments, randomly selected words are displayed at the same location on the display, and lip reading alone is used for liveness detection (i.e. with no eye tracking).
Preferably, the randomized locations at which the display element is displayed are selected based on the randomized data Rn and, in addition, based on at least one shared secret between the user device 104 and the remote system 130. The shared secret can for example be a user-defined shape, such as an elliptic curve. An elliptic curve requires minimal storage overhead, as it can be parameterised by a small number of parameters. A twisted Edwards curve could also be used, either as an alternative to the elliptic curve or in combination with an elliptic curve. That is to say, any number or combination of the two types of curve, or any other cryptographically secure curve, could be used to form the shared secret.
An exemplary elliptic curve Cv is shown in figure 12A. The user 102 defines the curve at the user device 104, for example by tracing it on the device's touchscreen or using a mouse/trackpad etc. Geometric data of the curve is both stored securely at the user device 104, encrypted based on a code (e.g. PIN) inputted by the user 102, and transmitted to the remote system 130 via a secure channel for storage thereat. Thus the geometry of the curve is stored both at the user device and the remote system 130. In order to change the curve, the user 102 must input the code to decrypt it. Every time the curve is changed, the version at the remote system 130 is updated to reflect the changes. The ET parameter set defines the point on an ellipse in terms of one-dimensional coordinates, defining a length along the ellipse. For example, in a quantized system, each ellipse constitutes a finite set S = {n | n = 1,...,N} of N points in two-dimensional space. Each randomly selected parameter can be represented by a single
parameter defining an index of a point in S.
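As a hedged sketch only, assuming the curve is stored as a parameterised ellipse quantized to N points and the ET parameter set is used to seed the selection, both ends could derive the same points Pt as follows; the hashing scheme and parameter names are illustrative assumptions, not taken from the embodiments.

```python
import hashlib
import math

def select_curve_points(et_seed, ellipse, n_points=64, count=3):
    """Deterministically select display positions from the shared-secret
    curve.  Both the user device and the remote system hold the ellipse
    parameters (centre cx, cy, semi-axes a, b, rotation theta) and the
    randomized data Rn, so both can reconstruct the same points Pt without
    the points themselves ever being transmitted.  Sketch only."""
    cx, cy, a, b, theta = ellipse
    points = []
    for i in range(count):
        digest = hashlib.sha256(et_seed + i.to_bytes(4, "big")).digest()
        idx = int.from_bytes(digest[:4], "big") % n_points    # index into S
        phi = 2 * math.pi * idx / n_points
        x = cx + a * math.cos(phi) * math.cos(theta) - b * math.sin(phi) * math.sin(theta)
        y = cy + a * math.cos(phi) * math.sin(theta) + b * math.sin(phi) * math.cos(theta)
        points.append((x, y))
    return points
```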
During the liveness detection process, the display element is displayed at display locations corresponding to randomly selected points Pt on the curve Cv, selected by the user device 104. The curve Cv itself is not displayed. When this technique is used, the user device 104 communicates the eye images to the remote system 130, or information about the movements of the eyes in response to the random display element derived from the eye images. The user device 104 does not transmit any other information about the points Pt that it has selected - these are conveyed to the remote system 130 only through the movements of the user's eyes.
The ET parameters Rn determine which points on the curve will be selected in a deterministic manner i.e. if the ET parameters Rn and the curve Cv are known, it is always possible to know with certainty which points the user device 104 will select.
Thus, because a copy of the curve Cv is stored at the remote system 130, and because the remote system 130 has access to the randomized data Rn, the remote system can reconstruct the points Pt as selected by the user device based on its own copy of the curve.
This provides an additional layer of robustness for the following reasons: if the user device 104 uses an incorrect curve i.e. one that does not match the version stored at the remote system 130, the movements of the user's eyes will not be those expected by the server. Thus, for the liveness detection technique to succeed, the user device 104 must know the shared secret in the form of the curve Cv. This prevents a device which does not have access to the securely-held shared secret from being used to access e.g. the database 132 in the remote system 130. In other words, based on its knowledge of the shared secret, the remote system knows which points on the curve Cv the user device should have selected given its own knowledge of the shared secret. Should the wrong points be selected as part of an attempted spoofing attack, the attack will fail as a consequence.
Both of the above described techniques consider movement of the pupil in response to certain stimuli. Alternatively or in addition, the changes in the pattern of the iris can be used to the same end. The diameter of the iris is constant, however the structure of the iris exhibits intricate variations (iris pattern) that are visible as colour patterns. As the eye reacts to the pulse or display element stimulus, these patterns will change. This can be measured, for example, by identifying and tracking distinct points on the iris in response to the relevant stimulus. A noise filtering algorithm may be applied to the image, and the tracking based on the noise-filtered version. A set of differential equations is applied to select visible dark and/or light spots in the noise-filtered image for tracking. Again, the detected movements are compared to expected data that is generated from closed-form equations without the need for machine learning.
Optionally, as illustrated in figure 12B, the randomized data may also define a transformation of the curve e.g. a scaling and/or rotation of the curve over time, so that a second point Pt2 is a point on a transformed version Cv' of the curve relative to the version Cv on which a first point Pt1 is selected.
As an alternative to eye tracking, the user may indicate the location of the display element as they perceive it using some other input device of their user device 104, such as a touchscreen, trackpad, mouse etc. In this case, the user selects (e.g. touches or clicks on) the point on the screen where the display element is displayed, and their inputs are compared to the expected display location(s).
In any of the above embodiments, the binary classification outputted by the liveness detection system (e.g. 200a, 200b) may, for example, be conveyed to an access control module 214 of the server 120, so that the access control module 214 can decide whether or not to grant the user 102 access to the remote system 130 based on the classification. For example, access may be granted only if the user 102 is identified as a living being by the liveness detection system.
In either of the above embodiments, the liveness detection system 200a/200b may, instead of generating a binary classification of the user 102 as living/non-living, generate a confidence value denoting the system's confidence that the user is living or non-living e.g. a probability that the user 102 is living or a probability that they are non-living. In this case, the access controller 214 receives the probability, and can perform its own classification.
Figure 11 illustrates a liveness detection technique in a third embodiment, which combines the techniques of the first and second embodiments.
Three separate servers 120a, 120b, and 120c of the remote system 130 are shown - a liveness (control) server 120a, and two liveness processing servers: a pupil dilation server 120b, and an eye tracking server 120c.
The liveness server 120a coordinates the liveness detection technique of the third embodiment. The pupil dilation server 120b implements the liveness check of the first embodiment, based on pupil dilation. The eye tracking server 120c implements the liveness check of the second embodiment, based on eye tracking. At steps S1102a and S1102b, the liveness server 120a requests a PD parameter set and an ET parameter set from the pupil dilation server 120b and the eye tracking server 120c respectively.
The PD parameter set is for implementing the process of the first embodiment, i.e. based on pupillary response to pulsing, and defines one (or more) randomly selected temporal separation(s) between two (or more) light pulses (or one light pulse at a random time in the video). The ET parameter set is of the kind used in the second embodiment, i.e. based on display elements at random display locations, and when combined with the user-selected curve Cv defines a set of spatial points selected at random, at which the display element is displayed. In the third embodiment, importantly, the randomness of the process is generated server-side.
After receiving the PD parameters from the pupil dilation server 120b (S1104a) and the ET parameters from the eye tracking server 120c (S1104b), the liveness server
120a transmits the PD and ET parameters to the user device 104.
At step S1107, the user device 104 uses the PD and ET parameters to instigate the liveness detection processes of the first and second embodiments respectively by performing the randomized pulse test according to the PD parameters and the randomized display element test according to the ET parameters i.e. emitting light pulses at random interval(s) based on the PD set and displaying display element(s) at random locations selected on the user's curve Cv based on the ET parameter set. This may be triggered by the liveness server 120a requesting a liveness check from the user device 104, or the user device requesting a liveness detection check from the liveness server 120a. The PD and ET sets are transmitted from the liveness server 120a to the user device 104 at step S1107 in an instigation message 1101.
The instigation message also comprises a network address of the pupil dilation server 120b and a network address of the eye tracking server 120c. Each network address defines a respective network endpoint, and may for example be a URI (Uniform Resource Identifier). These network addresses are different from the source network address of the instigation message 1101 (i.e. the address of the liveness server 120a that transmits the message 1101).
The URIs used are uniquely allocated to the user device 104, and each constitutes a shared secret between the user device 104 and the remote system 130.
The two processes are linked, in that the randomized display element of the second process is displayed at the randomized locations defined by the ET set within a predetermined time interval commencing with the first light pulse of the first process. That is, the processes are coordinated so that the display element is displayed to the user at the randomized locations at a time when their eyes are still reacting to the light pulse i.e. while they are still temporarily slightly stunned by the pulse. The movements of the eye when tracking the display element are measurably different when the eye is in this stunned state (as compared with an un-stunned eye), and these differences are predictable across the population, which is exploited as part of the liveness detection procedure of the third embodiment. At least one moving image is captured over a time interval that spans both the pulsing and the displaying of the display element, which constitutes the data that forms the basis of the first and second liveness detection processes. Three points on the user's curve Cv are used, with the display element moving between these three points during the capture process.
At steps S1108a and S1108b, the user device 104 transmits information collected at step S1107 to both the pupil dilation server 120b and the eye tracking server 120c, in at least one first message 1102a and at least one second message 1102b
respectively.
In some cases, no or minimal image processing is performed at the user device 104, and the moving image(s), or a minimally processed version thereof, is transmitted to the servers 120b, 120c for processing at the remote system 130 in the first and second messages 1102a, 1102b respectively. In these cases, the remote system 130 performs the majority of the steps of the liveness detection process of the first embodiment, and in particular computes the changes in the pupil diameter over time; the remote system 130 also performs the majority of the steps of the liveness detection process of the second embodiment, and in particular computes the histograms representing the blocks of the iris window over time.
In other cases, the user device performs the majority of this processing. In particular, the user device computes the changes in the pupil diameter and the histograms, which it transmits to the pupil dilation server 120b and eye tracking server 120c in the first and second messages 1102a, 1102b respectively.
More generally, the processing can be distributed between the user device 104 and servers 120b, 120c in numerous different ways. At least one secure execution environment is provided on the user device 104, in which code and data loaded inside the secure execution environment is integrity protected. To the extent that liveness detection processing is performed at the user device 104, it is performed within the secure execution environment.
The user device 104 applies a signature to both the first and second messages 1102a, 1102b. For each message 1102a, 1102b, the signature is generated by the user device based on both the PD and ET parameter sets. The first and second messages 1102a, 1102b are transmitted to the URIs of the eye tracking server 120c and pupil dilation server 120b respectively, as indicated in the instigation message 1101 - not the other way round. That is, the first message 1102a, containing the results of the randomized light pulse test, is transmitted to the eye tracking server 120c; likewise, the second message 1102b, containing the results of the randomized display element test, is transmitted to the pupil dilation server 120b. In other words, each message is transmitted to a server which is not its ultimate intended destination.
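The embodiments state only that the signature is generated from both parameter sets; one plausible realisation, shown purely as a hedged sketch, is an HMAC over the two parameter sets and the message payload, keyed with a device key.

```python
import hmac
import hashlib

def sign_results(device_key, pd_params, et_params, payload):
    """Sketch of the signature applied to each results message: an HMAC over
    the PD set, the ET set and the message payload (all byte strings).  The
    liveness server, holding its own copies of both parameter sets, can
    recompute and compare this value."""
    return hmac.new(device_key, pd_params + et_params + payload,
                    hashlib.sha256).digest()
```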
Note that the functionality of the servers 120a-120c need not be distributed across multiple computer devices (though that is not excluded). That is, their function may be implemented by a single device or by multiple devices, but within separate secure execution environments - for example by different processes in separate secure execution environments on the same device, or even different threads of the same program in separate secure execution environments. Regardless of how the functionality is implemented at the hardware level, a key aspect is that the three servers 120a-120c constitute three separate network endpoints of the network 118 i.e.:
• the endpoint from which the instigation message 1101 is transmitted to the user device 104;
• the endpoint to which the first message 1102a is transmitted, as indicated in the instigation message 1101 (which is a shared secret between the user device 104 and the remote system 130);
• the endpoint to which the second message 1102b is transmitted, as also indicated in the instigation message 1101 (and which is also a shared secret between the user device 104 and the remote system 130).
In the broadest sense, what is meant by having three separate servers 120a-120c within the remote system 130 is that the remote system 130 is configured to provide at least three separate network endpoints, e.g. as defined by three different URIs or other endpoint identifiers, and comprises associated logic for each network endpoint for effecting communications with the user device 104 via that endpoint. In other words, the servers 120a-120c are entities that are logically distinct from one another, each in the form of a respective set of code executed in a separate, secure
environment provided by the back-end system 130. They represent separate network endpoints in the sense that the three URIs are different from one another within the URI space (even if they ultimately resolve to the same IP address and even the same port number of the back-end system 130, which may or may not be the case).
The contents of the first message 1102a is communicated (S1110a) from the eye tracking server 120c to the liveness server 120a along with the signature of the first message 1102a and the URI at which it was received.
Similarly, the contents of the second message 1102b is communicated (S1110b) from the pupil dilation server 120b to the liveness server 120a along with the signature of the second message 1102b and the URI at which it was received.
The liveness server 120a has access to both the PD and ET parameter sets by virtue of steps S1104a and S1104b respectively. It compares both sets with each of the signatures attached to the first and second messages 1102a, 1102b (recall that each signature was generated by the user device 104 using both sets).
The liveness server also has access to the URIs that it supplied to the user device 104 in the instigation message 1101, and compares these with the URIs that the first and second messages 1102a, 1102b were actually sent to. If either of the URIs actually used does not match the one that should have been used, or if either of the signatures does not match the parameter sets, this is communicated to the access controller 214, thereby causing the user 102 to be refused access to the remote system 130 e.g. to the database 132. For example, this can be achieved by automatically classifying the user as non-living to the access controller 214 - even though the non-matching URI(s) and/or non-matching signature(s) are not directly indicative of this.
If both signatures and both URIs do match, the liveness server 120a provides (S1111a) the PD results (i.e. the contents of the first message 1102a, as provided by the eye tracking server 120c in step S1110a) to the pupil dilation server 120b and provides (S1111b) the ET results (i.e. the contents of the second message 1102b, as provided by the pupil dilation server 120b in step S1110b) to the eye tracking server 120c.
The pupil dilation server 120b performs the liveness detection technique of the first embodiment for each eye separately, as described in detail above with reference to figures 2A-2D, based on a comparison of the contents of the first message 1 102a with the randomly generated PD parameter set so as to generate e.g. a probability that the user 102 is alive.
Similarly, the eye tracking server 120c performs the liveness detection technique of the second embodiment for each eye, as described in detail above with reference to figures 6A-6C, based on a comparison of the contents of the second message 1102b with the randomly generated ET parameter set so as to generate e.g. a probability that the user is alive. Here, the second process detects when the movement exhibited by the eye is not consistent with the fact that the eye has recently been exposed to the medium-to-high intensity light pulse of the first embodiment (even if the movements themselves are consistent with the randomized locations in general terms). As will be apparent, this can be achieved by suitable tuning of the coefficients of the PDF used in the second process as part of normal design procedure. Where the movements are not consistent, this reduces the probability determined by the system that the user 102 is human (equivalently increases the probability that they are not).

The probabilities generated by the first and second processes are combined into an aggregate probability (e.g. by averaging, such as weighted averaging), which is communicated to the access controller 214 at step S1112, or which is used to generate a binary classification of the user 102 as living/non-living, by comparing the aggregate probability with a threshold, that is communicated to the access controller 214 at step S1112. The access controller 214 then decides whether or not to grant access to the remote system 130 e.g. to the database 132 based on this information.
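The aggregation and thresholding step can be sketched as follows; the weighting and the decision threshold are illustrative values rather than ones specified in the embodiments.

```python
def aggregate_liveness(p_pupil_dilation, p_eye_tracking,
                       w_pupil=0.5, threshold=0.5):
    """Combine the per-test probabilities that the user is alive into an
    aggregate (here a weighted average) and derive the binary living /
    non-living classification passed to the access controller 214."""
    aggregate = (w_pupil * p_pupil_dilation
                 + (1.0 - w_pupil) * p_eye_tracking)
    return aggregate, aggregate >= threshold
```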
The messages are "swapped" between the servers 120b, 120c (in steps S1110- S1111 ) via the same liveness server 120a within the confines of the back-end system 130, and the liveness server 120a only allows the swap to proceed (in step S111 ) if both signatures and both URIs are correct. This makes it much harder for a man-in-the-middle attack to take place. Preferably, all communication between the user device 106 and servers 120a
(liveness), 120b (pupil dilation), 120c (eye tracking) is via secure channels. This is particularly the case where the shared secrets are based on a method with well-known properties (pupil dilation/endpoints) as opposed to a "private" method. Where the particular properties are either unique or only known to a small number of actors (ellipse), this in itself can most likely provide sufficient security without the need for secure channels.
The secure channels (or non-secure channels as applicable) between the user device 104 and the different servers 120a-120c need not be via the same network (though they are in the above example).
An additional check can also be imposed by the system, which is that the time interval commencing with the transmission of the instigation message and ending with the receipt of the first and second messages (whichever is received latest) is less than a predetermined time interval (e.g. 3 to 10 seconds long). If the time exceeds this, the user 102 is refused access regardless. An alternative timing window can be used, for example starting with the transmission of the initial message 1101 and ending with the liveness server 120a outputting the classification/aggregate confidence value at step S1112.

The liveness detection techniques presented herein can, for example, be used as part of an enrolment procedure for a digital identity system. For example, the
Applicant's co-pending US Patent Applications 14/622527, 14/622709, 14/822549, 14/622737, 14/622740 - incorporated herein by reference - describe a digital identity system, in which a user can, for example, create a profile of their digital identity (referred to therein as a "uPass") based on an identity document, such as a passport, and a self-captured image of their face ("selfie"). The liveness detection process of the third embodiment can be incorporated into the uPass enrolment procedure when the user submits their selfie. In this case, the access controller 214 is implemented by the uPass enrolment module, and a user is only permitted to enrol and thereby create a uPass profile(s) if they are determined to be a living being with sufficient confidence.

As described above, in the third embodiment two separate liveness tests are performed. In the above example, both tests are randomized (i.e. performed according to separate respective sets of randomly generated parameters) - the randomized pulse test and the randomized display element test. More generally, two separate liveness tests of different types can be used, one of which may not be randomized. For example, one of the tests may involve monitoring movements of a mobile device 104 as recorded using one or more sensors of the user device 104 (camera, gyroscope, other accelerometer, GPS etc.). For example, when a device that is known to be a mobile device is used, human-induced motion is expected at certain times (for instance, when certain actions are performed by the user device) and the absence of this can be used as an indication that the device is not being used by a living being.

A living being has a number of distinct characteristics arising from their ongoing biological processes, the sum total of which constitutes life. The techniques presented above are based in particular on visual characteristics that are attributable to life, such as eye movement and pupil contraction. Other characteristics
attributable to life include the ability to provide a thumb or finger print, which can also be used as a basis for a liveness detection test (note in this case what is being tested is not the identity attached to the finger print i.e. a match to a known finger print pattern is not being sought - it is simply the ability of a human to provide a humanoid finger or thumb print at a certain point in time that is being used as an indicator of life). By performing multiple, different liveness tests which are based on different ones of these life characteristics, as in the third embodiment, a greater range of life-like characteristics is tested thereby enabling deductions to be made with greater certainty.
Further, whilst in the above example the two liveness tests are performed by a single user device 104, the two tests could be performed by multiple, collocated devices available to the user - for instance, one test could be performed by a user's laptop and the other by their smartphone. Where signatures of the kind described above are used, both parameter sets are still sent to each device in this case, so that each device can generate the signature from both parameter sets.
Figure 15 illustrates a liveness 'transaction' system and describes a 'transactional' liveness implementation of the embodiment described above with reference to figure 11. The liveness system of figure 15 is shown using two data checking components. Thus the liveness system of figure 15 differentiates trust on the transmission and submission mechanisms. Any additional elements may be added by extension.
Figure 15 is thus similar to figure which is described above.
At step 1, first random test parameters are obtained, such as ET random parameters.
At step 2, second random test parameters are obtained, such as PD random parameters.
At step 3, random destination addresses are selected. A signature SQ is generated over the totality of the parameters and destination addresses and stored in a cache CL. CL can cache additional data related to the processing.
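A minimal Python sketch of step 3 follows; the HMAC construction, the SERVER_KEY name and the dictionary used for the cache CL are assumptions made purely for illustration.

# Illustrative sketch of step 3: generate SQ over the ET parameters, the PD
# parameters and the randomly selected destination addresses, then store the
# transaction in the cache CL keyed by SQ.
import hashlib
import hmac
import json
import os

SERVER_KEY = os.urandom(32)   # hypothetical liveness server signing key
CL = {}                       # cache CL, keyed by SQ

def generate_sq(et_params: dict, pd_params: dict, destinations: list) -> str:
    payload = json.dumps([et_params, pd_params, destinations], sort_keys=True).encode()
    sq = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    CL[sq] = {"et": et_params, "pd": pd_params, "dest": destinations}
    return sq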
At step 4, the parameters and destination addresses are encrypted with a device-specific key. The signature can be attached and then transmitted to the remote device. At steps 5 and 6, test results are delivered as per Figure 11, with SQ inside the message encrypted with a server key. A separate signature SR is generated from the combination of the test results with SQ and appended as cleartext to both messages MPD and MET.
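A sketch of the device-bound message of step 4 is given below, assuming AES-GCM as the device-specific encryption; the patent text does not prescribe a particular cipher, and the message layout and field names are illustrative only.

# Illustrative sketch of step 4: encrypt the parameter sets and destination
# addresses with a device-specific key and attach SQ before transmission to
# the remote device. AES-GCM and the JSON layout are assumptions.
import json
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def build_device_message(device_key: bytes, et_params: dict, pd_params: dict,
                         destinations: list, sq: str) -> bytes:
    # device_key is assumed to be a 128-, 192- or 256-bit device-specific key.
    plaintext = json.dumps({"et": et_params, "pd": pd_params,
                            "dest": destinations, "sq": sq}).encode()
    nonce = os.urandom(12)                        # fresh nonce per message
    ciphertext = AESGCM(device_key).encrypt(nonce, plaintext, None)
    return nonce + ciphertext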
At step 7, SR is used to marry up the multiple parts of the transaction response, which are decrypted in the liveness server and routed to their respective processing services with SQ attached. At step 8, results are returned to the liveness server with SQ attached for reconciliation.
At step 9, the result of gauging liveness is produced. The transaction may be tracked in progress by SQ, the signature of the liveness query sent to the device, with SR providing the integrity of the response. SQ need not be encrypted with the Liveness Server key as the submission mechanism already specifies secured connections. SR may be encrypted during submission with the Liveness Server key and viewable only in the Liveness Server. SR confirms that the data submission has concluded without corruption.
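As an illustration of the reconciliation performed in steps 7 to 9, the following sketch marries up the two decrypted result messages via their cleartext SR and checks SQ against the cache CL; the message structure is an assumption, and the actual gauging of liveness from the reconciled results is not shown.

# Illustrative sketch of steps 7-9: the two result messages MET and MPD are
# married up by their cleartext SR, and SQ must match a liveness query cached
# in CL. Only then are the cached parameter sets released for gauging liveness.
def reconcile(m_et: dict, m_pd: dict, cl: dict):
    if m_et["sr"] != m_pd["sr"]:
        return None                 # the parts do not belong to one response
    sq = m_et["sq"]
    if m_pd["sq"] != sq or sq not in cl:
        return None                 # unknown or mismatched liveness query
    return cl[sq]                   # parameter sets used to gauge liveness at step 9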
In some embodiments the 'living being' designation may be desired to apply to a non-living entity, such as a synthetic being designed to imitate a human living being. In some situations it may thus be desirable to allow the synthetic entity to be seen by the system as a living being. This could be achieved by altering a tolerance or degree of completion for the liveness test so that it is completed successfully.
For example, the test for a living being may be altered to test for a live authorized user device or an overall liveness status, the device being 'live' in the sense that the authorized user device is active and providing input/output in real time; that is to say, live in much the same way that a television broadcast is considered to be live. In embodiments there may be a second timeout window in which near-end data is expected to be captured.
Therefore, the random time period used to separate two successive capture events in the device capture of biometric liveness data in embodiments (i.e. between the camera flash pulses used in the pupil detection mechanism) may also define a timeout which can be used as a finite bound on the processing time allowed to perform other measurements or computations during liveness data capture. Using software implemented in the device with an iterative time series generation function shared with software implemented in the server, a succession of notional timestamps can be generated and each individual captured data item may be accordingly tagged with the associated notional timestamp as well as an actual timestamp taken to the greatest accuracy of the capturing device. These timestamps may be included in the datasets transmitted to the server.
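By way of example only, the sketch below shows one possible form of such a shared iterative time series generation function; the HMAC-driven step sizes, the field names and the millisecond granularity are assumptions chosen so that the device and the server derive an identical sequence from a shared seed.

# Illustrative sketch: an iterative time series generation function shared by
# the device and the server. Each captured data item is tagged with a notional
# timestamp from the shared sequence and with an actual device timestamp.
import hashlib
import hmac
import time

def notional_timestamps(shared_seed: bytes, start_ms: int, count: int):
    state = shared_seed
    t = start_ms
    for _ in range(count):
        state = hmac.new(shared_seed, state, hashlib.sha256).digest()
        t += 50 + int.from_bytes(state[:2], "big") % 450   # 50-499 ms per step
        yield t

def tag_capture(item: dict, notional_ts: int) -> dict:
    item["notional_ts"] = notional_ts
    item["actual_ts"] = time.time_ns() // 1_000_000        # device's best accuracy
    return item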
On receipt, the server computes the correct timestamps based upon its copy of the time series generation function. If the timestamps do not match, this may provide an indication that the device has used compromised software or that the data stream has been artificially created. In either case the data can be rejected and the capturing device unregistered from the liveness detection system.
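A corresponding server-side check might look as follows (assuming the notional_timestamps generator sketched above); any mismatch causes the dataset to be rejected, after which the capturing device can be unregistered.

# Illustrative sketch: the server replays the shared time series generation
# function and rejects the dataset if any notional timestamp tag does not match.
def verify_notional_timestamps(dataset: list, shared_seed: bytes, start_ms: int) -> bool:
    expected = notional_timestamps(shared_seed, start_ms, len(dataset))
    return all(item["notional_ts"] == exp for item, exp in zip(dataset, expected))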
Random numbers defined by the randomized output of pulses of light, output from the authorized user device, may be used semantically as a transaction identifier. The entire liveness transaction may be individually identified using the random number generated from, or directly provided by, the sequence of randomized light pulses. Thus the transaction may be uniquely identified, with minimal numerical crossover within a subsequently produced and very large set of identifiers. Whilst the above has been described with reference to specific embodiments, these are exemplary and other variations may be apparent to the skilled person. The scope is not limited by the described embodiments but only by the following claims.

Claims

1. A computer-implemented liveness detection method implemented by a liveness detection system, wherein the liveness detection system comprises computer storage storing a shared secret known only to the liveness detection system and one or more authorized user devices, the method comprising
implementing by the liveness detection system the following steps:
selecting at random a set of one or more parameters of a liveness test which, when combined with the shared secret, define expected outputs that should be provided in the liveness test;
transmitting the parameter set to a user device, thereby causing the user device to perform the liveness test according to the parameter set, whereby the user device can only provide the expected outputs therein if it also has access to its own version of the shared secret;
receiving from the user device results of the liveness test performed at the user device according to the parameter set;
using the parameter set and the shared secret stored at the liveness detection system to determine the expected outputs; and
comparing the results of the liveness test with the determined expected outputs to determine whether the behaviour of an entity that was subject to the liveness test performed at the user device is an expected reaction to the expected outputs, thereby determining from the entity's behaviour both whether the entity is live and whether the user device is one of the authorized user device(s).
2. A method according to claim 1, wherein the shared secret defines a restricted subset of a set of available display locations, wherein the parameter set defines one or more available display locations selected at random from the restricted subset, and wherein the expected outputs are provided by displaying one or more display elements at the one or more randomly selected available display locations on a display of the user device.
3. A method according to claim 2, wherein the behaviour is eye movements exhibited by at least one eye of the entity during the displaying of the one or more display elements at the user device and conveyed by the received results, the expected reaction being an expected movement of the eye, whereby it is determined both whether the entity is a living being and whether the user device is one of the authorized user device(s) from the entity's eye movements.
4. A method according to claim 2, wherein the behaviour is touch movements exhibited by the entity by touching the display during the displaying of the one or more display elements at the user device and conveyed by the received results, the expected reaction being an expected touch movement, whereby it is determined both whether the entity is live and whether the user device is one of the authorized user device(s) from the entity's touch movements.
5. A method according to claim 1, wherein the liveness test comprises controlling the user device to emit at least one light pulse having a randomized timing within the moving image, and the expected reaction is an expected pupillary response to the at least one light pulse.
6. A method according to claim 5, wherein the randomized timing of the light pulse provides a random number for semantic use.
7. A method according to claim 6, wherein the semantic use of the random number is as a transaction identifier (ID).
8. A method according to claim 2, wherein the shared secret comprises at least one parameter of one or more cryptographically secure curves.
9. A method according to claim 8, wherein the one or more cryptographically secure curves comprise one or more elliptic curves and/or one or more twisted Edwards curves.
10. A liveness detection system comprising:
computer storage storing a shared secret known only to the liveness detection system and one or more authorized user devices; and
a set of one or more processing units, the set configured to perform
operations of: selecting at random a set of one or more parameters of a liveness test which, when combined with the shared secret, define expected outputs that should be provided in the liveness test;
transmitting the parameter set to a user device, thereby causing the user device to perform the liveness test according to the parameter set, whereby the user device can only provide the expected outputs therein if it also has access to its own version of the shared secret;
receiving from the user device results of the liveness test performed at the user device according to the parameter set;
using the parameter set and the shared secret stored at the liveness detection system to determine the expected outputs; and
comparing the results of the liveness test with the determined expected outputs to determine whether the behaviour of an entity that was subject to the liveness test performed at the user device is an expected reaction to the expected outputs, thereby determining from the entity's behaviour both whether the entity is live and whether the user device is one of the authorized user device(s).
11. A computer-implemented liveness detection method comprising
implementing, by a liveness detection system, the following steps:
selecting at random a first set of one or more parameters of a first liveness test;
transmitting, to a user device available to an entity, the first parameter set, thereby causing the user device to perform the first liveness test according to the first parameter set;
transmitting to the user device, from a source address of the liveness detection system, an identifier of at least one destination address of the liveness detection system different than the source address;
receiving from the user device results of the first liveness test performed at the user device according to the first parameter set;
receiving results of a second liveness test pertaining to the entity; and determining an overall liveness status using the results of the liveness tests, the results of the first liveness test being so used by comparing them with the first parameter set; and
determining whether the results of at least one of the tests were transmitted to the at least one destination address.
12. A method according to claim 11, comprising implementing, by the liveness detection system, steps of:
selecting at random a second set of one or more parameters of the second liveness test;
transmitting the second parameter set to the or another user device available to the entity, thereby causing that user device to perform the second liveness test according to the second parameter set, wherein the results of the second liveness test performed at that user device according to the second parameter set are received from that user device and used in the determining step by comparing them with the second parameter set.
13. A method according to claim 12, comprising generating at the liveness detection system a signature based on at least said first and second sets of one or more parameters, and transmitting said signature with said first and second sets of one or more parameters.
14. A method according to any one of claims 11 to 13, wherein the at least one destination address is randomly selected by the liveness detection system.
15. A method according to any one of claims 11 to 14, wherein the at least one destination address is at least one URL.
16. A method according to any one of claims 11 to 15, comprising granting the entity access to a remote computer system only if it is determined that the entity is live and the results of the at least one of the tests have been transmitted to the at least one destination address.
17. A method according to any one of claims 11 to 16, comprising, by the liveness detection system:
transmitting to the user device, from the source address of the liveness detection system, a first and a second identifier of a first and a second destination address of the liveness detection system respectively, the first and second destination addresses being different from the source address and from each other; determining whether the results of the second test were received at the first destination address; and
determining whether the results of the first test were received at the second destination address.
18. A method according to claim 17, comprising generating at the liveness detection system a signature based on at least said first and second identifiers, and transmitting said signature with said first and second identifiers.
19. A method according to claim 18 as dependent on claim 13, wherein a single signature is based on at least said first and second sets of one or more parameters, and said first and second identifiers.
20. A method according to any one of claims 11 to 19, comprising encrypting data transmitted to said user device with a device-specific key.
21. A method according to any one of claims 17 to 20, wherein the liveness detection system comprises:
liveness control server logic;
first liveness processing server logic for processing the results of the first liveness test, the first liveness processing server logic having a plurality of addresses including the first destination address, and
second liveness processing server logic for processing the results of the second liveness test, the second liveness processing server logic having a plurality of addresses including the second destination address;
wherein the results of the second test are received at the first liveness processing server, and the results of the first liveness test are received at the second liveness processing server, and the method comprises:
the first liveness processing server providing the results of the second liveness test to the liveness control server;
the second liveness processing server providing the results of the first liveness test to the liveness control server; and the liveness control server providing the results of the first test to the first liveness processing server and the results of the second test to the second liveness processing server only if: the results of the second test were received at the first destination address of the first liveness processing server, and the results of the first test were received at the second destination address of the second liveness processing server.
22. A method according to claims 12 and 21, wherein the results of the first and second tests are received in a first message and a second message respectively, each message comprising a signature expected to have been generated, for each message, from both parameter sets;
wherein the liveness control server compares both signatures with the first and second parameter sets and provides the results of the first test to the first liveness processing server and the results of the second test to the second liveness processing server only if: the second message was received at the first destination address of the first liveness processing server, the first message was received at the second destination address of the second liveness processing server, and both signatures match the parameter sets.
23. A method according to any one of claims 11 to 22, comprising detecting when a timeout condition occurs, the timeout condition caused by an unacceptable delay in receiving the results relative to a timing of the transmitting step, wherein the entity is refused access to a remote computer system in response to the timeout condition occurring.
24. A method according to claim 5, comprising detecting when a timeout condition occurs, the timeout condition being defined based on the timing of said at least one light pulse, wherein the entity is refused access to a remote computer system in response to the timeout condition occurring and/or wherein further processing of said liveness detection method is ceased in response to the timeout condition occurring.
25. A method according to any preceding claim wherein said user device or devices are registered with said liveness detection system.
26. A computer system configured to implement the method of any preceding claim.
27. A computer program product comprising code stored on a computer readable storage medium and configured when executed to implement the method or system of any preceding claim.
PCT/EP2016/069084 2015-08-10 2016-08-10 Liveness detecton WO2017025575A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US14/822,803 US9794260B2 (en) 2015-08-10 2015-08-10 Liveness detection
US14/822,804 2015-08-10
US14/822,803 2015-08-10
US14/822,804 US20170046583A1 (en) 2015-08-10 2015-08-10 Liveness detection

Publications (1)

Publication Number Publication Date
WO2017025575A1 true WO2017025575A1 (en) 2017-02-16

Family

ID=56684660

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2016/069084 WO2017025575A1 (en) 2015-08-10 2016-08-10 Liveness detecton

Country Status (1)

Country Link
WO (1) WO2017025575A1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020172419A1 (en) * 2001-05-15 2002-11-21 Qian Lin Image enhancement using face detection
EP2680191A2 (en) * 2012-06-26 2014-01-01 Google Inc. Facial recognition

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ESA HOHTOLA ET AL: "ACTA UNIVERSITATIS OULUENSIS: SOFTWARE-BASED COUNTERMEASURES TO 2D FACIAL SPOOFING ATTACKS", 18 June 2015 (2015-06-18), pages 57, XP055280495, Retrieved from the Internet <URL:http://jultika.oulu.fi/files/isbn9789526208732.pdf> *
GRAGNANIELLO DIEGO ET AL: "An Investigation of Local Descriptors for Biometric Spoofing Detection", IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, IEEE, PISCATAWAY, NJ, US, vol. 10, no. 4, 1 April 2015 (2015-04-01), pages 849 - 863, XP011576305, ISSN: 1556-6013, [retrieved on 20150319], DOI: 10.1109/TIFS.2015.2404294 *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3373202A1 (en) * 2017-03-07 2018-09-12 Eyn Limited Verification method and system
US10853677B2 (en) 2017-03-07 2020-12-01 Eyn Limited Verification method and system
US11176392B2 (en) 2017-03-27 2021-11-16 Samsung Electronics Co., Ltd. Liveness test method and apparatus
US20180276488A1 (en) * 2017-03-27 2018-09-27 Samsung Electronics Co., Ltd. Liveness test method and apparatus
CN108664880A (en) * 2017-03-27 2018-10-16 三星电子株式会社 Activity test method and equipment
EP3382598A3 (en) * 2017-03-27 2018-12-19 Samsung Electronics Co., Ltd. Liveness test method and apparatus
CN108664880B (en) * 2017-03-27 2023-09-05 三星电子株式会社 Activity test method and apparatus
US11721131B2 (en) 2017-03-27 2023-08-08 Samsung Electronics Co., Ltd. Liveness test method and apparatus
CN111357013A (en) * 2017-11-28 2020-06-30 指纹卡有限公司 Biometric imaging system and method of determining a characteristic of a biometric object using a biometric imaging system
US11068733B2 (en) 2017-11-28 2021-07-20 Fingerprint Cards Ab Biometric imaging system and method of determining properties of a biometric object using the biometric imaging system
WO2019108110A1 (en) * 2017-11-28 2019-06-06 Fingerprint Cards Ab Biometric imaging system and method of determining properties of a biometric object using the biometric imaging system
CN108183736B (en) * 2017-12-28 2021-02-23 北京邮电大学 Transmitter codeword selection method and device based on machine learning and transmitter
CN108183736A (en) * 2017-12-28 2018-06-19 北京邮电大学 Transmitter code word selection method, device and transmitter based on machine learning
FR3133245A1 (en) * 2022-03-03 2023-09-08 Commissariat à l'Energie Atomique et aux Energies Alternatives Secure image capture system

Similar Documents

Publication Publication Date Title
US10305908B2 (en) Liveness detection
EP3332403B1 (en) Liveness detection
US20170046583A1 (en) Liveness detection
US10546183B2 (en) Liveness detection
US11551482B2 (en) Facial recognition-based authentication
Eberz et al. Evaluating behavioral biometrics for continuous authentication: Challenges and metrics
US10157273B2 (en) Eye movement based knowledge demonstration
WO2017025575A1 (en) Liveness detecton
Hadid Face biometrics under spoofing attacks: Vulnerabilities, countermeasures, open issues, and research directions
US9690998B2 (en) Facial spoofing detection in image based biometrics
EP3588366A1 (en) Living body detection method, apparatus, system and non-transitory computer-readable recording medium
US20160148066A1 (en) Detection of spoofing attacks for video-based authentication
US11763604B2 (en) Liveness detection based on reflections analysis
US20240037995A1 (en) Detecting wrapped attacks on face recognition
CN109271771A (en) Account information method for retrieving, device, computer equipment
Neal et al. Presentation attacks in mobile and continuous behavioral biometric systems
KR20230139622A (en) Device and method to authorize user based on video data
WO2024049662A1 (en) Verification of liveness data for identity proofing
JP2018195141A (en) Visual-line-based entrance management system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16751288

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16751288

Country of ref document: EP

Kind code of ref document: A1