GB2567798A - Verification method and system

Verification method and system

Info

Publication number
GB2567798A
Authority
GB
United Kingdom
Prior art keywords
biometric feature
data
signal
presented
reflected signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1713469.3A
Other versions
GB201713469D0 (en)
Inventor
Sheikh Faridul Hasan
Ben Arbia Mohamed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Eyn Ltd
Original Assignee
Eyn Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Eyn Ltd
Priority to GB1713469.3A
Publication of GB201713469D0
Priority to EP18190068.9A
Priority to US16/108,183
Publication of GB2567798A
Status: Withdrawn


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 - Spoof detection, e.g. liveness detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 - Spoof detection, e.g. liveness detection
    • G06V40/45 - Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Collating Specific Patterns (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Whether a biometric feature of a live human is present is determined using a camera (figure 1, 106), capturing 202 visual data of a presented biometric feature, and transmitting 204 a signal towards the biometric feature. A sensor (figure 1, 104) is used for capturing 206 data related to a reflected signal from the presented biometric feature; and it is determined 208 whether the visual data and the reflected signal data relate to a biometric feature having realistic dimensions, thereby to determine 210 that a live biometric feature is present. Verification of a user may be performed on a smartphone (figure 1, 100) using a microphone to receive audio data. An ultrasound signal may be used for range finding; the microphone picks up reflected signals, which are analysed to ensure that, for example, a picture is not being used in place of a real face.

Description

VERIFICATION METHOD AND SYSTEM
The present invention relates to a method of verifying that (or determining whether) a biometric feature of a live human is present. More particularly, the present invention relates to a method of verifying that (or determining whether) a biometric feature of a live human is present for use as part of a biometric recognition system and/or method. The invention extends to a corresponding apparatus and system.
Biometric authentication, identification, and verification systems are used in a variety of applications (including surveillance, access control, gaming and virtual reality, and driver monitoring systems) as a way of verifying the identity of a user.
Biometric systems typically involve enrolling an authorised user’s biometric feature(s) (e.g. the user’s face, fingerprints, teeth, or iris) in a database and, at a later time, automatically matching a biometric feature presented to the system against one or more entries in the database based on a calculated index of similarity.
Such systems may be vulnerable to ‘spoof’ or ‘presentation’ attacks, in which an attacker claims an authorised user’s identity by presenting a falsified biometric feature of the authorised user to the system, for example by use of a mask, a photograph, a video, or a virtual reality representation of the authorised user’s biometric feature. This may mean that otherwise accurate biometric systems suffer from security risks.
Existing techniques for mitigating the risks of presentation attacks often require the cooperation and/or knowledge of the user/attacker (as in the case of ‘challenge-response’ tests). Once an attacker has knowledge of the required response, such techniques may be relatively easily overcome (i.e. any system incorporating them may be easy to ‘spoof’). Furthermore, many existing techniques require specialised hardware, which may reduce their utility.
Aspects and embodiments of the present invention are set out in the appended claims. These and other aspects and embodiments of the invention are also described herein.
According to at least one aspect described herein, there is provided a method for determining whether a biometric feature of a live human is present, comprising: using a camera, capturing visual data of a presented biometric feature; transmitting a signal towards the biometric feature; using a sensor, capturing data related to a reflected signal from the presented biometric feature; and determining whether the visual data and the reflected signal data relate to a biometric feature having realistic dimensions thereby to determine that a live biometric feature is present.
By determining whether the data relates to a biometric feature having realistic dimensions, a live biometric feature can be determined to be present without the need for any active response from a user.
Optionally, determining whether the visual data and the reflected signal data relate to a biometric feature having realistic dimensions comprises determining whether the visual data and the reflected signal data in combination relate to a biometric feature having realistic dimensions. The visual data may relate to an angular size of the presented biometric feature and the reflected signal data may relate to a distance of the presented biometric feature from the sensor and/or the shape of a presented biometric feature.
Determining whether the visual data and the reflected signal data relate to a biometric feature having realistic dimensions may comprise comparing the visual data and the reflected signal data against a model related to realistic dimensions of a biometric feature. The model may relate to a ratio of angular size and distance from the sensor for a live biometric feature. The method may further comprise collecting data for use in the model, wherein the data for use in the model may comprise visual data and reflected signal data of a biometric feature of a live human and visual data and reflected signal data of a falsified biometric feature of a live human. The model may be a trained classifier, which may be trained based on presented biometric features of live humans and presented falsified biometric features of live humans. Optionally, the model comprises a convolutional neural network.
Optionally, data related to the presented biometric feature is transmitted for remote processing. Optionally, transmitting a signal comprises transmitting a signal in accordance with a predetermined pattern. The pattern may be formed from at least one pulse and at least one pause, and may be configured such that at least one pulse in the reflected signal is received during the at least one pause in the transmitted signal. The pattern may be selected (optionally, randomly) from a plurality of patterns.
Transmitting a signal may comprise using a single transmitter. Optionally, a single sensor is used to capture data related to a reflected signal.
Optionally, the biometric feature is one or more of: a face; a hand; a palm; a thumb; and one or more fingers. The method may further comprise, using a screen, presenting the captured visual data to the presented biometric feature. A live human may be instructed (using the screen) to locate the biometric feature at a particular position relative to the camera and/or the sensor and/or to perform a particular gesture with the biometric feature.
The sensor may be a microphone, and the signal may be a sound wave (preferably, an ultrasound wave). The frequency of the ultrasound wave may be randomly selected within a predetermined range.
The method may form part of a multi-modal method for determining whether a biometric feature of a live human is present.
According to at least one aspect described herein, there is provided a method for determining whether a biometric feature of a live human is present, comprising: using a camera, capturing visual data of a presented biometric feature, wherein the visual data relates to an angular size of the presented biometric feature; transmitting a signal towards the biometric feature; using a sensor, capturing data related to a reflected signal from the presented biometric feature, wherein the reflected signal data relates to a distance of the presented biometric feature from the sensor; and comparing the visual data and the reflected signal data against a model related to possible angular sizes and possible distances from the sensor for a live biometric feature thereby to determine that a live biometric feature is present.
According to at least one aspect described herein, there is provided a method of verifying the identity of a user, comprising performing a method as described herein; and verifying the identity of the user by comparing biometric information of the user (which optionally comprises information related to the user’s biometric feature(s)) against a database of biometric information of verified users.
According to at least one aspect described herein, there is provided apparatus for determining whether a biometric feature of a live human is present, comprising: a camera for capturing visual data of a presented biometric feature; a module adapted to transmit a signal towards the biometric feature; a sensor for capturing data related to a reflected signal from the presented biometric feature; and a module adapted to determine whether the visual data and the reflected signal data relate to a biometric feature having realistic dimensions thereby to determine that a live biometric feature is present.
The module adapted to transmit a signal may be a loudspeaker; and the signal may be an ultrasound signal. The sensor for capturing data related to a reflected signal may be a microphone. The apparatus may be in the form of one or more of: a smartphone; a laptop computer; a desktop computer; a tablet computer; an automated passport control gate; and an entry system.
According to at least one aspect described herein, there is provided a system for determining whether a biometric feature of a live human is present, comprising: a user device, comprising: a camera for capturing visual data of a presented biometric feature; a module adapted to transmit a signal towards the biometric feature; and a sensor for capturing data related to a reflected signal from the presented biometric feature; and a remote determination module adapted to determine whether the visual data and the reflected signal data relate to a biometric feature having realistic dimensions thereby to determine that a live biometric feature is present.
The invention extends to methods, systems and apparatus substantially as herein described and/or as illustrated with reference to the accompanying figures.
The invention also provides a computer program or a computer program product for carrying out any of the methods described herein, and/or for embodying any of the apparatus features described herein, and a computer readable medium having stored thereon a program for carrying out any of the methods described herein and/or for embodying any of the apparatus features described herein.
The invention also provides a signal embodying a computer program or a computer program product for carrying out any of the methods described herein, and/or for embodying any of the apparatus features described herein, a method of transmitting such a signal, and a computer product having an operating system which supports a computer program for carrying out the methods described herein and/or for embodying any of the apparatus features described herein.
Any feature in one aspect of the invention may be applied to other aspects of the invention, in any appropriate combination. In particular, method aspects may be applied to apparatus aspects, and vice versa. As used herein, means plus function features may be expressed alternatively in terms of their corresponding structure, such as a suitably programmed processor and associated memory.
Furthermore, features implemented in hardware may generally be implemented in software, and vice versa. Any reference to software and hardware features herein should be construed accordingly.
As used herein, the term ‘biometric feature’ preferably connotes a part or characteristic of a human body which can be used to identify a particular human.
As used herein, the term ‘live human’ preferably connotes a living human being (i.e. not a recording or any other kind of indirect representation of a living human).
As used herein, the term ‘head’ preferably connotes a human head, including the face and hair. As used herein, the term ‘face’ is to be preferably understood to be interchangeable with the term ‘head’.
As used herein, the term ‘loudspeaker’ preferably connotes any electroacoustic transducer for transmitting sound waves. As used herein, the term ‘microphone’ preferably connotes any electroacoustic transducer for receiving sound waves.
As used herein, the term ‘audio’ preferably connotes sound, including both audible frequencies and ultrasound frequencies.
As used herein, the term ‘ultrasound’ preferably connotes sound having a frequency above 18 kHz (which is barely perceptible, or not perceptible, for the majority of humans), more preferably between 18 kHz and 22 kHz, or alternatively between 20 kHz and 30 MHz.
As used herein, the term ‘realistic’ preferably connotes that an article (which may or may not be real) has characteristics corresponding to a real article.
As used herein, the term ‘dimension’ preferably connotes a measurable characteristic of an article (such as the size of an article) or a measurable characteristic of a relationship between an article and another article (such as the distance between two articles).
As used herein, the term ‘angular size’ preferably connotes one or more apparent dimensions of an object from a given point of view, preferably one or more apparent dimensions of an object within a field of view of a camera.
It should also be appreciated that particular combinations of the various features described and defined in any aspects of the invention can be implemented and/or supplied and/or used independently.
The invention will now be described, purely by way of example, with reference to the accompanying drawings, in which:
Figure 1 is a schematic depiction of a typical portable user device in the form of a smartphone;
Figure 2 is a flowchart which illustrates the main steps of a method for determining whether a biometric feature of a live human is present;
Figure 3 is an image showing the pattern of the transmitted signal and the signals included in the audio data received by the microphone;
Figure 4 is a schematic diagram of a software architecture (including memory) of a user device adapted to implement the method;
Figure 5 is an image showing a method of training a binary classifier for use in the method;
Figure 6a is a schematic depiction of the user device performing the method on a live human face;
Figure 6b is a schematic depiction of the user device performing the method on a falsified human face presented on another user device;
Figure 7 is an image showing a sequence of gestures that a user may perform to verify themselves; and
Figure 8 is a schematic image of a tablet computer and loudspeaker and microphone array for implementing the method.
Specific Description
Figure 1 is a schematic depiction of a typical portable user device 100 in the form of a smartphone. As is well known, the user device 100 comprises a screen 102 and a loudspeaker 108 for providing information to a user, as well as a number of sensors arranged around the screen for receiving inputs from a user. In particular, the sensors include a front-facing camera 106 (i.e. a camera which faces towards the user as the user views the screen 102 - a further rear-facing camera (not shown) may also be provided) for receiving visual data, particularly visual data relating to the user, a microphone 104 for receiving audio data, particularly from the user, and one or more buttons 110 (or similar input device) for receiving a physical input from the user.
The present invention provides a method 200 for determining whether a biometric feature of a live human is present (i.e. whether a part of a purported user presented to the sensors of the user device is actually a part of a real human) which is suited to be implemented using the user device 100. The method 200 may find particular use as an initial stage in biometric recognition systems in order to defend such systems against presentation attacks.
Figure 2 is a flow diagram which illustrates the main steps of the method 200 for determining whether a biometric feature of a live human is present. As mentioned, in an embodiment, the method 200 is implemented on a user device 100 in the form of a smartphone, although it will be appreciated that other implementations are of course possible.
In a first step 202, the camera 106 of the user device is used to capture visual data of a (purported) biometric feature which is presented to the camera 106 and screen 102. The biometric feature may be a real biometric feature of a live human, or a falsified or fraudulent biometric feature, such as a printed photograph (or other image), a photograph or video displayed on the screen of an external display device (such as another user device), or a fake 3D recreation of a biometric feature (such as a mask of a face, or a silicone mould of a fingerprint). In an embodiment, the biometric feature is a human head, although it will be appreciated that other implementations are of course possible. Specifically, the visual data is a photograph or image of the presented biometric feature.
In a second step 204, a signal is transmitted towards the presented biometric feature for the purposes of range finding (i.e. measuring a distance between the user device and the presented biometric feature). The signal is in the form of an ultrasound signal (i.e. a sound wave having a frequency above 18 kHz, such that it is barely perceptible or not perceptible to most humans), which is transmitted from the loudspeaker 108 of the user device. The signal is transmitted in accordance with a predetermined pattern, which is formed from a series of pulses (or bursts) and pauses.
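Purely by way of illustration, such a pulse-and-pause pattern could be generated along the following lines (a minimal Python sketch; the sample rate, burst durations, fade ramp and helper name are assumptions made for illustration and are not taken from the application):

    import numpy as np

    def make_pulse_pattern(freq_hz=20_000, pulse_s=0.01, pause_s=0.01,
                           n_pulses=5, fs=48_000):
        # Build a pulse train: n_pulses sine bursts at an ultrasound
        # frequency, each followed by a silent pause of equal length.
        t = np.arange(int(pulse_s * fs)) / fs
        pulse = np.sin(2 * np.pi * freq_hz * t)
        # Short fade in/out so the burst edges do not produce audible
        # clicks when played through the loudspeaker.
        ramp = np.minimum(np.arange(len(pulse)) / (0.001 * fs), 1.0)
        pulse *= ramp * ramp[::-1]
        pause = np.zeros(int(pause_s * fs))
        return np.tile(np.concatenate([pulse, pause]), n_pulses)

    signal = make_pulse_pattern()  # samples to be played back at 48 kHz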
The ultrasound signal is reflected off objects in the vicinity, including the presented head. In the vast majority of uses, the only object of significance in the vicinity of the loudspeaker 108 is the presented head. As will be appreciated, the loudspeaker 108 is located in the user device 100 such that it is directed towards the presented head, which may improve the transmission of the ultrasound signal towards the presented head.
In a third step 206, the ultrasound signals are detected using the microphone 104 of the user device 100. The microphone 104 is turned on throughout the time period over which the ultrasound signal is transmitted (and optionally for a further time period once the transmission of the ultrasound signal has ceased). Audio data including the detected ultrasound signals is produced by the microphone 104.
Figure 3 is an image showing the pattern of the transmitted signal and the signals included in the audio data received by the microphone 104. The pattern of the pulses and pauses is selected such that reflected pulses are received at the microphone 104 during a pause in the transmitted signal pattern.
The transmission of an ultrasound pulse 302 of duration n commences at time t. A first peak 306 in the received signal at the microphone 104 occurs while the ultrasound pulse is transmitted, corresponding to the transmitted signal received directly (i.e. with minimal reflections from other objects). A pause 304 of duration p follows the pulse. In the pause period, secondary peaks 308, 310, 312 are detected in the received signal. The secondary peaks correspond to the different objects from which reflected ultrasound signals are received. Since, in general terms, the presented head is the largest and most prominent object in the vicinity of the user device 100, the largest secondary peak 310 corresponds to the reflected signal from the presented head. A delay d between the first peak 306 and the largest secondary peak 310 corresponds to the distance between the object and the microphone. At time t+n+p, another pulse is transmitted, and another peak corresponding to the first peak 306 is received. The pause time p is calibrated the first time the method 200 is used, based on the distance between the microphone and the loudspeaker, to improve the accuracy of range estimation. The pulses and pauses continue until the transmitted ultrasound signal is terminated. Since only the largest secondary peak 310 corresponds to the presented head, the other secondary peaks may be removed (for example by applying a threshold to the audio data) so as to avoid interference in later analysis.
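As a rough sketch of how the delay d might be converted into a distance (assuming sound travels at about 343 m/s in air and using SciPy peak detection; the threshold and function name are illustrative):

    import numpy as np
    from scipy.signal import find_peaks

    SPEED_OF_SOUND = 343.0  # m/s in air at around 20 degrees C (assumed)

    def estimate_distance(audio, fs):
        # The first envelope peak is the direct-path signal (cf. peak 306);
        # the largest later peak is the head reflection (cf. peak 310);
        # smaller peaks (cf. 308, 312) come from other nearby objects.
        envelope = np.abs(audio)
        peaks, _ = find_peaks(envelope, height=0.05 * envelope.max())
        direct, reflections = peaks[0], peaks[1:]
        strongest = reflections[np.argmax(envelope[reflections])]
        delay_s = (strongest - direct) / fs  # the delay d, in seconds
        # The pulse travels to the head and back, so halve the round trip.
        return SPEED_OF_SOUND * delay_s / 2.0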
Optionally, the frequency of the ultrasound pulse 302 is chosen at random (within a predetermined range), in order to avoid interference with similar nearby devices 100 implementing the method 200 and to increase the security of the method 200. As mentioned, signals corresponding to other frequencies may be removed (for example, if the loudspeaker generates a 21.75 kHz signal, all frequencies other than 21.75 kHz are filtered out before the reflected secondary peaks are analysed).
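A minimal sketch of this frequency selection and filtering, assuming a standard Butterworth band-pass filter (the filter order and the ±200 Hz bandwidth below are assumptions; a figure of ±200 Hz for reflections is given later in the description):

    import random
    from scipy.signal import butter, sosfiltfilt

    def random_ultrasound_freq(lo_hz=18_000, hi_hz=22_000):
        # Choose the pulse frequency at random within the supported range.
        return random.uniform(lo_hz, hi_hz)

    def isolate_own_reflections(audio, fs, freq_hz, half_width_hz=200):
        # Band-pass the received audio around the transmitted frequency
        # so that only reflections of this device's own pulse remain.
        sos = butter(4, [freq_hz - half_width_hz, freq_hz + half_width_hz],
                     btype='bandpass', fs=fs, output='sos')
        return sosfiltfilt(sos, audio)

Filtering forwards and backwards (sosfiltfilt) keeps the filtered peaks time-aligned with the raw signal, which matters when the peak delays are subsequently used for range estimation.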
The amplitude of the secondary peaks can be used to infer information about the shape of the object from which the ultrasound signals are reflected. For example, a large amplitude may indicate that the received signal has been reflected from an object having a large flat surface (such that the reflected signal is not very dispersed), such as a user device, a display, or a piece of paper or card (such as a physical photograph). In contrast, signals reflected from a partially curved object or an object having various protrusions and indentations (such as a human head) have smaller amplitudes and longer durations.
Optionally, once the transmission of the signal has stopped, the camera 106 and the microphone 104 may be switched off in order to save on battery power and data storage requirements of the user device 100.
In a fourth step 208, the visual data and the audio data are compared against a model. Such comparison may be performed using a processor of the user device 100, or via an external server (in which case the visual data and the audio data, or a processed version thereof, are transmitted from the user device 100 to the external server).
The model used in the comparison step 208 is a binary classifier, which is arranged to receive the visual data and the audio data and produce an output indicating that the presented biometric feature is either verified or is not verified. The model is trained to recognise features in the visual data and the audio data which are indicative of a live human face being present, in particular whether the audio data and the visual data correspond in such a way that a combination of features in the audio data and the visual data are indicative of a live human face being present.
In particular, the model relates to acceptable dimensions of the presented biometric feature (i.e. dimensions that correspond to ‘realistic’ dimensions of a live biometric feature), where the dimensions include the angular size of the presented biometric feature (i.e. the apparent size of the presented biometric feature within the ‘frame’ or field of view of the camera, as determined via the visual data), the distance from the user device 100 (as determined via the delay d between the first peak 306 and the largest secondary peak 310 in the audio data) and the shape of the presented biometric feature (as determined via the amplitude of the secondary peaks in the audio data). Optionally, further dimensions are used in the model - for example, dimensions related to the position of the presented biometric feature within the field of view of the camera.
As will be appreciated, the method 200 thereby exploits the inability of falsified 2D representations to correctly reproduce the dimensions of a real biometric feature, as the angular size and the distance from the user device 100 will generally be out of realistic proportion. For small representations, such as a falsified biometric representation displayed on the screen of a user device, the falsified representation will always be detected as having too small an angular size to be a real biometric feature, given the detected distance from the user device. Large representations provided on, for example, a large display may have realistic proportions as judged from the angular size and the distance from the user device 100, but the reflected ultrasound signal will show peaks with very high amplitudes. This allows the falsified representation to be detected, even if the proportions of the angular size and distance from the user device are realistic.
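The underlying geometric consistency check can be sketched as follows (the plausible head-width bounds are assumptions chosen for illustration; in the application the acceptable proportions are learned by the model rather than hard-coded):

    import math

    def implied_width_m(angular_size_rad, distance_m):
        # Physical width implied by an apparent (angular) size at a range.
        return 2.0 * distance_m * math.tan(angular_size_rad / 2.0)

    def plausibly_live(angular_size_rad, distance_m,
                       min_width_m=0.12, max_width_m=0.22):
        # A real adult head is roughly 0.12-0.22 m wide (assumed bounds);
        # a phone-sized replay implies far too small a physical width
        # for its measured distance.
        return min_width_m <= implied_width_m(angular_size_rad,
                                              distance_m) <= max_width_m

For instance, a face subtending 20 degrees at 0.4 m implies a width of about 0.14 m (plausible), whereas a face replayed on a phone screen subtending 8 degrees at the same distance implies about 0.06 m (implausible).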
Before comparison against the model, the visual data and audio data are pre-processed using scaling methods. This may improve the model’s ability to recognise particular features in the data.
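A sketch of such pre-processing, assuming simple zero-mean/unit-variance scaling of the image and peak normalisation of the audio (the application does not specify the exact scaling methods):

    import numpy as np

    def preprocess(image, audio):
        # Scale both inputs into fixed ranges before classification:
        # zero-mean/unit-variance pixels, peak-normalised audio.
        img = (image - image.mean()) / (image.std() + 1e-8)
        aud = audio / (np.abs(audio).max() + 1e-8)
        return img.astype(np.float32), aud.astype(np.float32)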
In a fifth step 210, an output is produced by the model in dependence on the results of the comparison in the fourth step 208. The output may take the form of a message indicating that a real human (or a real human head) has been verified or has not been verified.
Referring to Figure 4, a schematic diagram of the software architecture 150 (including memory) of the user device 100 adapted to implement the method 200 is shown. As illustrated, this includes a control module 152 for controlling the camera 106 and the microphone 104, a stimulus module 154 (also controlled by the control module 152) for generating a pattern and for controlling the loudspeaker 108 to transmit the ultrasound signal including the pattern, a data store provided in communication with the stimulus module 154, and a comparison module 156 for receiving visual data 180 from the camera 106 and audio data 190 from the microphone 104. Optionally, the visual data 180 and audio data 190 (or alternatively processed forms of said data produced by the comparison module 156) are saved into the data store once received.
The particular frequency of the ultrasound signal used depends on the hardware of the user device 100. A built-in loudspeaker of a smartphone can typically generate, and a built-in microphone typically detect, ultrasound frequencies between 18 kHz and 22 kHz. As such, where the user device 100 is a smartphone, frequencies between 18 kHz and 22 kHz may be used.
The control module 152 is arranged to implement the visual data capturing step 202 of the method by turning the camera on in response to a signal from a processing component of the user device 100 (or alternatively an external signal), where the signal indicates that there is a need to determine whether a biometric feature of a live human is present. The control module 152 is also arranged to implement the signal transmission step 204 in part, in that it directs the stimulus module 154 to generate and transmit a signal. Similarly, the control module is also arranged to implement the reflected signal capturing step 206 in part, in that it turns the microphone 104 on (optionally, at the same time as the camera is turned on).
The stimulus module 154 selects a pattern for the signal on the basis of data relating to a plurality of patterns in the data store. The plurality of patterns may differ in the duration and/or number of the pulses and/or the pauses in a single pattern. Furthermore, a variety of different types of signal may be used as the ‘pulse’ in the pattern; options include a sine sweep (for example, from around 18 kHz to 22 kHz), a pulsed ultrasound tone (for example, of around 20 kHz), or white noise. The stimulus module 154 may select the pattern in accordance with a cycle of various predetermined patterns, but preferably instead selects the pattern randomly (which may provide added security). In an alternative, the pattern is generated dynamically based on a plurality of variables (such as pause duration, pulse duration, and pulse signal type), which are determined dynamically or randomly. In all cases, the pattern of the signal (as well as the power characteristics of the signal) is selected such that reflected pulses from objects within a predetermined frequency and distance range are received at the microphone 104 during a pause of the pattern. The reflected frequency range is typically within ±200 Hz of the original ultrasound signal. For example, if the original ultrasound pulse has a frequency of 19.5 kHz and a duration of 100 ns, the reflected ultrasound is expected to have a frequency of between 19.3 kHz and 19.7 kHz and a duration of 100 ns. All other noise is removed using a bandpass filter.
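The pattern selection might look like the following sketch (the pattern bank and the variable ranges are invented for illustration):

    import random

    # Illustrative pattern bank; durations in seconds, pulse types as
    # named above (sine sweep, pulsed tone, white noise).
    PATTERNS = [
        {'type': 'sweep', 'f_hz': (18_000, 22_000), 'pulse_s': 0.02, 'pause_s': 0.02},
        {'type': 'tone',  'f_hz': 20_000,           'pulse_s': 0.01, 'pause_s': 0.03},
        {'type': 'noise', 'f_hz': None,             'pulse_s': 0.01, 'pause_s': 0.02},
    ]

    def select_pattern(dynamic=False):
        # Either pick a stored pattern at random, or synthesise one
        # dynamically from randomised pulse/pause variables.
        if not dynamic:
            return random.choice(PATTERNS)
        return {'type': random.choice(['sweep', 'tone', 'noise']),
                'f_hz': random.uniform(18_000, 22_000),
                'pulse_s': random.uniform(0.005, 0.05),
                'pause_s': random.uniform(0.01, 0.05)}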
It will be appreciated that transmitting signals according to a pattern (which consequently appears in, or otherwise affects, the received reflected signals) adds a further layer of security, as it protects against attacks in which an attacker transmits a recorded ultrasound pattern. An attacker will not be able to produce a signal having a pattern corresponding to the transmitted pattern unless they are able to predict the pattern of the transmitted ultrasound signal.
The comparison module 156 comprises the previously described model, which may optionally be stored in the data store. As mentioned, in an alternative, the comparison may be performed using an external server, where the binary classifier is provided on the external server. In such a case, the extracted signals may be transmitted to the external server from the comparison module 156 over a data connection, and an output may be transmitted back to the user device once the comparison has been performed.
Figure 5 is an image showing a method of training a binary classifier 500 for use in the method 200. In use, the classifier receives the audio data and the visual data as an input and produces a binary output based on its training. The classifier is trained based on the audio data and the visual data alone, using feature learning with a convolutional neural network. This allows the classifier to learn important features, which are then detected in use. The classifier 500 is trained on the basis of original data 502 from presented live human faces as well as ‘attack’ data 504 from falsified human faces.
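As a toy illustration of training such a classifier (using PyTorch; the two-branch architecture, the input shapes, and the three-value echo feature vector are assumptions made for this sketch, since the application specifies only that a convolutional neural network with feature learning is used):

    import torch
    import torch.nn as nn

    class LivenessNet(nn.Module):
        # Toy two-branch model: a small CNN over the face image plus an
        # MLP over echo-derived features, fused into one live/spoof logit.
        def __init__(self):
            super().__init__()
            self.vision = nn.Sequential(
                nn.Conv2d(1, 8, 3, stride=2), nn.ReLU(),
                nn.Conv2d(8, 16, 3, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
            self.audio = nn.Sequential(nn.Linear(3, 16), nn.ReLU())
            self.head = nn.Linear(16 + 16, 1)

        def forward(self, img, echo):
            return self.head(torch.cat([self.vision(img),
                                        self.audio(echo)], dim=1))

    model = LivenessNet()
    loss_fn = nn.BCEWithLogitsLoss()
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

    # One illustrative training step on a dummy batch:
    # label 1 = original/live data (502), label 0 = attack data (504).
    img = torch.randn(4, 1, 64, 64)
    echo = torch.randn(4, 3)
    labels = torch.tensor([[1.0], [0.0], [1.0], [0.0]])
    loss = loss_fn(model(img, echo), labels)
    loss.backward()
    optimiser.step()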
Figure 6a is a schematic depiction of the user device 100 performing the method 200 on a live human face 300. As shown, the ultrasound signal transmitted by the loudspeaker 108 and received via the microphone 104 allows a distance a between the face 300 and the user device 100 to be determined. The camera 106 allows determination of the angular size b of the face 300.
The screen 102 of the user device 100 presents an image 112 of the presented head (where the image data is acquired via the camera) - this may assist a human attempting to verify themselves to the system to hold the user device at an appropriate orientation and distance relative to their presented face (in particular so that the entirety of the presented face is in the frame of the camera).
Figure 6b is a schematic depiction of the user device 100 performing the method 200 on a falsified human face, which is presented on another user device 100. The ultrasound signal is transmitted and received as before and visual data is received via the camera 106, but the angular size b is out of proportion with the distance a from the user device 100 - so the presented face is determined to be falsified.
It will be appreciated that the method 200 may act as a first stage of a broader facial or biometric verification method, where the first step (i.e. the described method 200) determines whether a live human is present. Subsequent steps can then determine whether the live human is a verified user.
Similarly, it will be appreciated that the method 200 can form one part of a multi-modal method for determining whether a live human is present, where other techniques are used as further inputs to develop a confidence ‘score’ that the presented feature is in fact a feature of a live human.
Alternatives and Extensions
As an alternative to capturing video via the camera, a plurality of photographs may instead be captured and used in subsequent analysis.
Optionally, a multi-class classifier may be used instead of a binary classifier.
Optionally, an alternative ultrasound transducer is used to transmit and/or receive ultrasound signals, rather than the loudspeaker 108 and/or microphone 104 of the user device 100. The transducer may, for example, be provided as part of separate apparatus.
Although the invention has principally been defined with reference to the transmitted signal being an ultrasound signal, it will be appreciated that a variety of alternative signals may be used, where the signal is used for distance ranging. For example, a laser or infrared signal may be used.
Although the invention has principally been defined with reference to the relevant biometric feature being a human head or face, it will be appreciated that a variety of biometric features can be used, such as a hand, palm, or finger.
Figure 7 is an image showing a sequence of gestures that a user may perform to verify themselves. The method 200 may be repeated for various biometric features, where the user is requested to present various biometric features in a gesture. The user may be requested to perform a sequence of gestures with the biometric features - each gesture acts to change the effective shape, distance from the user device, and angular size of the presented biometric feature (as the feature appears to the sensors of the user device), and so provides additional layers of security without requiring much input from a user.
The method 200 may be implemented on any kind of portable user device 100 having a screen and a camera, such as a smartphone, a laptop computer, a desktop computer, or a tablet computer. Alternatively, the method 200 may be implemented using a static device, such as those that might be included as part of or in association with entry systems, doors, automated passport control gates, or any other kind of system or device (static or otherwise) implementing a facial recognition system.
Any device or apparatus implementing the described method 200 may comprise an NFC (Near Field Communication) reader adapted to read an RFID (Radio Frequency IDentification) chip provided as part of an identity-certifying document (such as a passport, national ID card, or corporate employee badge) or another NFC-capable device, which may allow data provided in the RFID chip via NFC to be compared to a face of the user that is verified using the method 200 (as well as optionally allowing comparison between the data in the RFID chip and any photograph provided as part of the document).
Figure 8 is a schematic image of a tablet computer 800 and a loudspeaker and microphone array 802 for implementing the method 200. In one particular use case, a series of tablet computers 800 implementing the method 200 may be installed at an electronic border control (or as part of another access control system). A user may stand in front of the tablet computer 800 and present their passport (allowing the NFC chip of the passport to be scanned, and the photograph information to be compared against a photograph taken via a camera of the tablet computer). In this scenario, to determine whether the user’s face is a live biometric feature, a commercially available loudspeaker and microphone array 802 may be provided in communication with the tablet computer 800, where the ultrasound frequency range for such an array 802 may be between 20 kHz and 30 MHz. The use of a loudspeaker and microphone array 802 may allow for improved accuracy.
It will be appreciated that alternative components to a screen may be used for presenting the stimulus, such as a flat surface on to which the stimulus is projected.
It will be understood that the invention has been described above purely by way of example, and modifications of detail can be made within the scope of the invention.
Each feature disclosed in the description, and (where appropriate) the claims and drawings may be provided independently or in any appropriate combination.
Reference numerals appearing in the claims are by way of illustration only and shall have no limiting effect on the scope of the claims.

Claims (36)

1. A method for determining whether a biometric feature of a live human is present, comprising:
using a camera, capturing visual data of a presented biometric feature; transmitting a signal towards the biometric feature;
using a sensor, capturing data related to a reflected signal from the presented biometric feature; and determining whether the visual data and the reflected signal data relate to a biometric feature having realistic dimensions thereby to determine that a live biometric feature is present.
2. A method according to Claim 1, wherein determining whether the visual data and the reflected signal data relate to a biometric feature having realistic dimensions comprises determining whether the visual data and the reflected signal data in combination relate to a biometric feature having realistic dimensions.
3. A method according to any preceding claim, wherein the visual data relates to an angular size of the presented biometric feature.
4. A method according to any preceding claim, wherein the reflected signal data relates to a distance of the presented biometric feature from the sensor.
5. A method according to any preceding claim, wherein the reflected signal data relates to the shape of a presented biometric feature.
6. A method according to any preceding claim, wherein determining whether the visual data and the reflected signal data relate to a biometric feature having realistic dimensions comprises comparing the visual data and the reflected signal data against a model related to realistic dimensions of a biometric feature.
7. A method according to Claim 6, wherein the model relates to a ratio of angular size and distance from the sensor for a live biometric feature.
8. A method according to Claim 6 or 7, further comprising collecting data for use in the model, the data for use in the model comprising visual data and reflected signal data of a biometric feature of a live human and visual data and reflected signal data of a falsified biometric feature of a live human.
9. A method according to any of Claims 6 to 8, wherein the model is a trained classifier.
10. A method according to Claim 9, further comprising training the model based on presented biometric features of live humans and presented falsified biometric features of live humans.
11. A method according to any of Claims 6 to 10, wherein the model comprises a convolutional neural network.
12. A method according to any preceding claim, further comprising transmitting data related to the presented biometric feature for remote processing.
13. A method according to any preceding claim, wherein transmitting a signal comprises transmitting a signal in accordance with a predetermined pattern.
14. A method according to Claim 13, wherein the pattern is formed from at least one pulse and at least one pause.
15. A method according to Claim 14, wherein the pattern is configured such that at least one pulse in the reflected signal is received during the at least one pause in the transmitted signal.
16. A method according to any of Claims 13 to 15, further comprising selecting a pattern from a plurality of patterns.
17. A method according to any of Claims 13 to 16, wherein selecting a pattern comprises randomly selecting a pattern.
18. A method according to any preceding claim, wherein transmitting a signal comprises using a single transmitter.
19. A method according to any preceding claim, wherein a single sensor is used to capture data related to a reflected signal.
20. A method according to any preceding claim, wherein the biometric feature is one or more of: a face; a hand; a palm; a thumb; and one or more fingers.
21. A method according to any preceding claim, further comprising, using a screen, presenting the captured visual data to the presented biometric feature.
22. A method according to any preceding claim, further comprising, using a screen, instructing a live human to locate the biometric feature at a particular position relative to the camera and/or the sensor.
23. A method according to any preceding claim, further comprising, using a screen, instructing a live human to perform a particular gesture with the biometric feature.
24. A method according to any preceding claim, wherein the sensor is a microphone.
25. A method according to Claim 24, wherein the signal comprises a sound wave.
26. A method according to Claim 25, wherein the signal comprises an ultrasound wave.
27. A method according to Claim 26, wherein the frequency of the ultrasound wave is randomly selected within a predetermined range.
28. A method according to any preceding claim, wherein the method forms part of a multi-modal method for determining whether a biometric feature of a live human is present.
29. A method of verifying the identity of a user, comprising performing the method of any of Claims 1 to 28; and verifying the identity of the user by comparing biometric information of the user against a database of biometric information of verified users.
30. A computer program product comprising software code adapted to carry out the method of any of Claims 1 to 29.
31. A client or user device in the form of a telecommunications device or handset such as a smartphone or tablet adapted to execute the computer program product of Claim 30.
32. Apparatus for determining whether a biometric feature of a live human is present, comprising:
a camera for capturing visual data of a presented biometric feature; a module adapted to transmit a signal towards the biometric feature;
a sensor for capturing data related to a reflected signal from the presented biometric feature; and a module adapted to determine whether the visual data and the reflected signal data relate to a biometric feature having realistic dimensions thereby to determine that a live biometric feature is present.
33. Apparatus according to Claim 32, wherein the module adapted to transmit a signal is a loudspeaker; and the signal is an ultrasound signal.
34. Apparatus according to Claim 32 or 33, wherein the sensor for capturing data related to a reflected signal is a microphone.
35. Apparatus according to any of Claims 32 to 34, wherein the apparatus is in the form of one or more of: a smartphone; a laptop computer; a desktop computer; a tablet computer; an automated passport control gate; and an entry system.
36. A system for determining whether a biometric feature of a live human is present, comprising:
a user device, comprising:
a camera for capturing visual data of a presented biometric feature; a module adapted to transmit a signal towards the biometric feature; and a sensor for capturing data related to a reflected signal from the presented biometric feature; and a remote determination module adapted to determine whether the visual data and the reflected signal data relate to a biometric feature having realistic dimensions thereby to determine that a live biometric feature is present.
GB1713469.3A 2017-08-22 2017-08-22 Verification method and system Withdrawn GB2567798A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
GB1713469.3A GB2567798A (en) 2017-08-22 2017-08-22 Verification method and system
EP18190068.9A EP3447684A1 (en) 2017-08-22 2018-08-21 Verification method and system
US16/108,183 US11308340B2 (en) 2017-08-22 2018-08-22 Verification method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1713469.3A GB2567798A (en) 2017-08-22 2017-08-22 Verification method and system

Publications (2)

Publication Number Publication Date
GB201713469D0 GB201713469D0 (en) 2017-10-04
GB2567798A true GB2567798A (en) 2019-05-01

Family

ID=59996575

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1713469.3A Withdrawn GB2567798A (en) 2017-08-22 2017-08-22 Verification method and system

Country Status (1)

Country Link
GB (1) GB2567798A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170206413A1 (en) * 2005-11-11 2017-07-20 Eyelock Llc Methods for performing biometric recognition of a human eye and corroboration of same
US20170143241A1 (en) * 2011-12-30 2017-05-25 Theodore Dean McBain System, method and device for confirmation of an operator's health condition and alive status
WO2016076912A1 (en) * 2014-11-13 2016-05-19 Intel Corporation Spoofing detection in image biometrics
WO2016204968A1 (en) * 2015-06-16 2016-12-22 EyeVerify Inc. Systems and methods for spoof detection and liveness analysis
WO2017025573A1 (en) * 2015-08-10 2017-02-16 Yoti Ltd Liveness detection
US20170124394A1 (en) * 2015-11-02 2017-05-04 Fotonation Limited Iris liveness detection for mobile devices
CN107066983A (en) * 2017-04-20 2017-08-18 腾讯科技(上海)有限公司 A kind of auth method and device

Also Published As

Publication number Publication date
GB201713469D0 (en) 2017-10-04

Similar Documents

Publication Publication Date Title
US11308340B2 (en) Verification method and system
US10546183B2 (en) Liveness detection
US11551482B2 (en) Facial recognition-based authentication
EP2883189B1 (en) Spoof detection for biometric authentication
US10652749B2 (en) Spoof detection using proximity sensors
CA3080399A1 (en) System and method associated with user authentication based on an acoustic-based echo-signature
EP3332403B1 (en) Liveness detection
US11210376B2 (en) Systems and methods for biometric user authentication
EP2434372A2 (en) Controlled access to functionality of a wireless device
CA3030015A1 (en) Spoofing attack detection during live image capture
WO2014025445A1 (en) Texture features for biometric authentication
US11030292B2 (en) Authentication using sound based monitor detection
Zhou et al. Multi-modal face authentication using deep visual and acoustic features
GB2567798A (en) Verification method and system
GB2570620A (en) Verification method and system
Rathore et al. Scanning the voice of your fingerprint with everyday surfaces

Legal Events

Date Code Title Description
732E Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977)

Free format text: REGISTERED BETWEEN 20220310 AND 20220316

WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)