GB2586149A - Behavioural biometric control policy application - Google Patents

Behavioural biometric control policy application

Info

Publication number
GB2586149A
GB2586149A
Authority
GB
United Kingdom
Prior art keywords
user
computing device
control policy
current user
identity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1911314.1A
Other versions
GB201911314D0 (en)
Inventor
Ducatel Gery
Gelardi Gabriele
Fiddler Andy
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
British Telecommunications PLC
Original Assignee
British Telecommunications PLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by British Telecommunications PLC
Priority to GB1911314.1A
Publication of GB201911314D0
Publication of GB2586149A
Status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31 User authentication
    • G06F 21/316 User authentication by observing the pattern of computer usage, e.g. typical user behaviour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31 User authentication
    • G06F 21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60 Protecting data
    • G06F 21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60 Protecting data
    • G06F 21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F 21/6209 Protecting access to data via a platform, e.g. using keys or access control rules to a single file or object, e.g. in a secure envelope, encrypted and accessed using a key, or with access control rules appended to the object itself
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2221/00 Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 2221/21 Indexing scheme relating to G06F21/00 and subgroups addressing additional information or applications relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 2221/2149 Restricted operating environment
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/50 Maintenance of biometric data or enrolment thereof
    • G06V 40/53 Measures to keep reference information secret, e.g. cancellable biometrics

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Psychiatry (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioethics (AREA)
  • Collating Specific Patterns (AREA)

Abstract

A method comprises monitoring, at a computing device, behavioural biometrics of a current user 306. A control policy is then caused to be applied to the computing device 310 based on an identity of the current user that is determined from the behavioural biometrics 308. The control policy can be applied either by the computing device itself, or by another computing device (figure 2, 210) that is connected to the original computing device via a network. The identity may be determined using a trained machine learning model, or by comparing a secret credential, recreated from shares of a secret sharing scheme, with the original credential of each of a set of possible users. If the current user is not part of the set of possible users, a default control policy can be applied. The method may also authenticate an initial user 302 and apply a control policy for that user 304.

Description

Behavioural Biometric Control Policy Application
Field of the invention
The present invention relates to the application of control policies in respect of computing devices. In particular, the present invention makes use of behavioural biometrics to cause a control policy to be applied in respect of the computing device that is relevant to a current user of the device.
Background of the invention
An increasingly wide range of content and functionality is accessible from computing devices such as laptops, tablet computers, smartphones and the like. Whilst this is generally considered to be desirable and beneficial, there are situations where it is commonly desired to limit access to at least some of this content and functionality.
One such situation is the home environment, in which parents commonly wish to restrict their children's access to some kinds of content and functionality that they may consider to be unsuitable for their children. It is also common for parents to wish to apply different restrictions for different children. For example, some parents may be willing to allow their older children to access some kinds of content and functionality that they would be less willing to allow their younger children to access.
In order to control access to content and functionality that is accessible using a computing device, it is common to utilise so-called control policies which define the restrictions that are to be applied for each user. These control policies are typically combined with the use of an explicit authentication mechanism. The explicit authentication mechanism is typically based on either a knowledge factor (such as a secret pattern, password or number) or a physiological biometric (such as a fingerprint scan or facial recognition). The computing device will not permit a user to use the device (which may be referred to as the device being in a "locked" state) until he or she has been successfully authenticated (which may be referred to as "unlocking" the computing device), at which point the computing device will apply the relevant control policy for that user (who successfully authenticated themselves) and permit the device to be used (which may be referred to as the device being in an "unlocked" state). Once the user finishes using the computing device, they perform a "logout" action to return the device to a locked state, thereby requiring another user to successfully authenticate themselves before the device can be used.
Summary of the invention
One problem that is encountered with this approach to applying control policies in respect of computing devices is that it is common for users to share (or at least have access to) each other's devices. Therefore, although one user may have authenticated themselves to the computing device (who may be referred to herein as the initial user or the authenticated user), a different user may gain access to the device following the authentication without first being authenticated to the computing device. In such circumstances, it is desirable that the control policy that is being applied to the device is changed to one that is suitable for the actual (i.e. current) user of the device, rather than one that is suitable for the initial (authenticated) user, as this may be inappropriate for the current user.
One simplistic approach to solving this problem would be to seek to ensure that the device is automatically locked before it is passed to another user. Whilst this might help prevent situations where a user has accidentally accessed a device after it has been unlocked by a different user, it will be appreciated that there are many situations in which it is desirable for a user to access a computing device whilst it is operating under the account of another user.
In some cases, for example, computing devices may be configured to only allow certain users to unlock the device (by authenticating themselves to the device), even though other users may then be permitted to use the device. For example, a computing device in a home environment may be configured to only allow parents to unlock the device, but may subsequently allow the device to be used by children in the family. By configuring the device in this way, the children need to obtain the cooperation of a parent to use the device, thereby ensuring that the parents are able to maintain an awareness and control over the device's usage.
In other cases, for example, a computing device may be a personal computing device that is ordinarily used by a single user. Accordingly, the device may only allow that user (its owner) to unlock the device in order to protect the confidentiality of any data that is stored on the device. However, the user may still occasionally hand their personal device over to another user (who would not be able to unlock the device themselves) to carry out some activity. For example, in a home environment a parent may own a smartphone that is configured such that they are the only user who can unlock it. However, the parent may nonetheless occasionally hand their phone over to a child, for example, to allow the child to play a game so as to occupy them on a car journey.
In yet other cases, even when the user to whom a device is being handed may be able to unlock the device under their own user account, it may still be desirable to allow them to access the device under the account of a different user. For example, some applications or content that is to be shared may only be accessible through the account of one of the users. For example, a user may hand another user their computing device to allow the other user to read an interesting passage of an e-book that is only accessible via that user's account. As a further example, a user may hand another user their computing device in order for the other user to show them how to carry out certain activities on the computing device.
Of course, other situations in which it is desirable for a user to access a computing device under the account of another user will be apparent to the skilled person.
Accordingly, it would be desirable to provide a mechanism by which control policies are applied to a computing device in a way that ensures that an appropriate control policy is in place for a current user of the device even when the device changes hands following the authentication of a user.
In a first aspect of the present invention, there is provided a computer implemented method for applying a control policy in respect of a computing device, the method comprising, at the computing device: monitoring one or more behavioural biometrics of a current user of the computing device; and causing a control policy to be applied in respect of the computing device based on an identity of the current user that is determined based on the one or more behavioural biometrics.
The method may further comprise determining, at the computing device, an identity of the current user of the computing device based on the one or more behavioural biometrics.
Determining the identity of the current user may comprise using a trained machine learning model to classify the behavioural biometrics as belonging to one of a set of possible users.
Determining the identity of the current user may comprise: accessing identification data for a set of possible users, the identification data indicating, for each user, how a respective set of shares of a secret sharing scheme that represent a secret credential for that user can be generated from the behavioural biometrics; determining whether one or more of the set of possible users is the current user by: generating the respective set of shares for that possible user from the behavioural biometrics of the current user, as indicated by the identification data; recreating a copy of the secret credential for that user based on the generated set of shares, in accordance with the secret sharing scheme; and determining whether the recreated copy of the secret credential for that user is correct; and in response to determining that the recreated copy of the secret credential for one of the set of possible users is correct, determining that the identity of the current user is the identity of that user.
Determining the identity of the current user may comprise using the behavioural biometrics to perform a user profile match.
Determining the identity of the current user may further comprise identifying the current user as an unidentified user that is not in the set of possible users, wherein the control policy that is applied is a default control policy for unidentified users.
The method may further comprise: authenticating an initial user of the computing device to determine an identity of the initial user; causing an initial control policy to be applied in respect of the computing device based on the identity of the initial user, wherein the identity of the current user is determined subsequent to the authentication of the initial user and the device is configured to enable the current user to use the device in accordance with the identity of the initial user subject to the control policy.
The control policy may be applied, at least in part, by the computing device.
The control policy may be applied, at least in part, by another computing device that is associated with a network via which the device communicates and the method may further comprise communicating with the other computing device to cause it to apply the control policy.
The control policy may comprise either, or both, of: an access control policy that determines what functionality of the computing device is usable by the current user; and a content control policy that determines what types of content may be accessed by the current user.

Through the use of behavioural biometrics, the present invention enables the identity of the current user of the device to be determined separately from any explicit authentication mechanism that may be employed by the computing device. As will be discussed in more detail, behavioural biometrics are related to (relatively) invariant features of a particular user's behaviour as they carry out various activities. This means that a user's behavioural biometrics can be collected as they carry out these activities, without requiring any explicit input from the user. As a result, they can be monitored and continually, regularly or repeatedly evaluated to determine the identity of the user that is currently using the device. This means that an appropriate control policy can be applied in respect of the computing device based on the identity of the current user, even when a different user may have authenticated themselves to the device before the device is passed to the current user. Indeed, the device may even be passed around between multiple different users, with an appropriate control policy being applied for each user whenever they are the current user of the device.
In a second aspect of the present invention, there is provided, a computer implemented method for applying a control policy in respect of a computing device, the method comprising: receiving, from the computing device, data identifying a current user of the computing device, the data indicating one or more behavioural biometrics of the current user; determining an identity of the current user based on the one or more behavioural biometrics; and causing a control policy to be applied in respect of the computing device based on the identity of the current user.
Determining the identity of the current user may comprise using a trained machine learning model to classify the behavioural biometrics as belonging to one of a set of possible users.
Determining the identity of the current user may comprise: accessing identification data for a set of possible users, the identification data indicating, for each user, how a respective set of shares of a secret sharing scheme that represent a secret credential for that user can be generated from the behavioural biometrics; determining whether one or more of the set of possible users is the current user by: generating the respective set of shares for that possible user from the behavioural biometrics of the current user, as indicated by the identification data; recreating a copy of the secret credential for that user based on the generated set of shares, in accordance with the secret sharing scheme; and determining whether the recreated copy of the secret credential for that user is correct; and in response to determining that the recreated copy of the secret credential for one of the set of possible users is correct, determining that the identity of the current user is the identity of that user.
Determining the identity of the current user may comprise using the behavioural biometrics to perform a user profile match.
Determining the identity of the current user may comprise identifying the current user as an unidentified user that is not in the set of possible users, wherein the control policy that is applied is a default control policy for unidentified users.
The control policy may be applied, at least partially, by the computing device and the method may further comprise providing, to the computing device, the control policy that is to be applied by the computing device.
The control policy may be applied, at least partially, by one or more computing devices that are associated with a network via which the device communicates and the method may further comprise communicating with those computing devices to cause the control policy to be applied.
The control policy may comprise either, or both, of: an access control policy that determines what functionality of the computing device is usable by the current user; and a content control policy that determines what types of content may be accessed by the current user.
In a third aspect of the present invention, there is provided a computer system comprising a processor and a memory storing computer program code for performing the method of the first or second aspects set out above.
In a fourth aspect of the present invention, there is provided a computer program which, when executed by one or more processors, is arranged to carry out the method of the first or second aspects set out above.
Brief description of the drawings
Embodiments of the present invention will now be described by way of example only, with reference to the accompanying drawings, in which:
Figure 1 is a block diagram of a computing device (or computer system or device) that is suitable for the operation of embodiments of the present invention;
Figure 2 is a block diagram of an exemplary network arrangement within which embodiments of the invention may operate; and
Figure 3 is a flowchart that schematically illustrates a computer implemented method for applying a control policy in respect of a computing device in accordance with embodiments of the present invention.
Detailed description of embodiments of the invention
Figure 1 is a block diagram of a computing device (or computer system or device) 100 that is suitable for the operation of embodiments of the present invention. The device 100 comprises a memory (or storage) 102, a processor 104 and one or more input/output (I/O) interfaces 106, which are all communicatively linked over one or more communication buses 108.
The memory (or storage or storage medium) 102 can be any volatile read/write memory device such as random access memory (RAM) or a non-volatile memory device such as a hard disk drive, magnetic disc, optical disc, ROM and so on. The memory 102 can be formed as a hierarchy of a plurality of different memory devices, including both volatile and non-volatile memory devices, with the different memory devices in the hierarchy providing differing capacities and response times, as is well known in the art.
The processor 104 may be any processing unit, such as a central processing unit (CPU), which is suitable for executing one or more computer programs (or software or instructions or code). These computer programs may be stored in the memory 102. During operation of the system, the computer programs may be provided from the memory 102 to the processor 104 via the one or more buses 108 for execution. One or more of the stored computer programs are computer programs which, when executed by the processor 104, cause the processor 104 to carry out a method according to an embodiment of the invention (and accordingly configure the system 100 to be a system according to an embodiment of the invention). The processor 104 may comprise multiple processing cores, either contained in a single chip or distributed across multiple chips (i.e. the processor 104 may be a multiprocessor), as is known in the art.
The one or more input/output (I/O) interfaces 106 provide interfaces to devices 110a-110k for the input or output of data, or for both the input and output of data. The devices that are connected to the system 100 via the interfaces 106 may include one or more devices that are intended to either obtain input from a user or provide input to a user, or both. For example, a touchscreen 110a may be connected to the system 100 to provide information to the user via images output to the touchscreen's display and allow the user to provide input by touching or swiping different points on the touchscreen 110a. However, in alternative embodiments, the touchscreen may be replaced by, or augmented with, one or more of: a keyboard, a mouse, a number pad and a non-touchscreen display. The devices 110 that are attached to the system 100 via the I/O interfaces may further include one or more sensors that provide an input based on sensed parameters of the physical environment in which the system 100 is operating. For example, the devices 110 may include one or more of: a camera 110b, a microphone 110c, a fingerprint scanner 110d, a GPS sensor 110e, a light sensor 110f, a temperature sensor 110g, an accelerometer 110h, a gyroscope 110i, a gravity sensor 110j and a magnetometer 110k. Any other sensor may be used instead or in addition, as will be appreciated by those skilled in the art. The one or more input/output (I/O) interfaces 106 may further include one or more network interfaces to enable the computer system 100 to communicate with other computer systems via one or more networks 112. As will be appreciated, any suitable type of network 112 may be utilised by computer system 100 to communicate with other computer systems, including communication via both wired and wireless media, such as, for example, Bluetooth, WiFi or mobile communications networks.
It will be appreciated that the architecture of the system 100 illustrated in figure 1 and described above is merely exemplary and that other computer systems 100 with different architectures (such as those having fewer components, additional components and/or alternative components to those shown in figure 1) may be used in embodiments of the invention. As examples, the computer system 100 could comprise one or more of: a personal computer; a laptop; a tablet; a mobile telephone (or smartphone); an Internet of Things (IoT) device; and a server. The devices 110 that interface with the computer system 100 may vary considerably depending on the nature of the computer system 100 and may include devices not explicitly mentioned above, as would be apparent to the skilled person.
Figure 2 is a block diagram of an exemplary network arrangement 200 within which embodiments of the invention may operate. In this arrangement, there is an end-user computer device 202, which is a computer system 100 as discussed above. The end-user computer device 202 may be accessed by two different users, a first user 204a and a second user 204b. The computer device 202 is configured to access the Internet 206 via broadband network 208. A control policy server 210 is communicatively coupled to the broadband network and provides a centralised point for defining control policies that should be applied.
It will, however, be appreciated that the network arrangement 200 is merely exemplary and that there are a wide range of different network arrangements in which embodiments of the invention may operate. For example, the numbers of computing devices 202 and users 204 may differ from those shown in figure 2. Furthermore, in some arrangements some of the computing devices 202 may be configured to access the Internet 206 via more than one intermediate network. For example, in some arrangements, one or more of the computing devices 202 may be configured to access the Internet 206 via a mobile network in addition or as an alternative to the broadband network 208. In such arrangements, the control policy server 210 may be communicatively coupled to several, or all, of the intermediate networks. Additionally, it will be appreciated that additional or alternative elements may be included in the arrangements, such as other elements of network infrastructure, as will be known to the skilled person. Indeed, in some embodiments of the invention, the computing device 202 may be a standalone computing device that is not connected to the Internet 206 or any other networks, such as broadband network 208. In such embodiments, the policies that are to be applied may be specified locally on the device 202 itself and stored in (and subsequently retrieved from) a local memory 102, without the involvement of a control policy server 210.
The arrangement 200 illustrated in figure 2 will now be discussed further in conjunction with figure 3, which is a flowchart that schematically illustrates a computer implemented method 300 for applying a control policy in respect of a computing device 202 in accordance with embodiments of the present invention.
In some embodiments, the control policy that is to be applied to the computing device takes the form of an access control policy. That is to say, a control policy that determines what functionality of the computing device can be used. Such control policies may determine which software functionality (such as particular applications) may be used. They may additionally or alternatively determine which hardware functionality (such as the use of particular interfaces, such as a camera or NFC interface) may be used.
It will be appreciated that there are many different ways in which a control policy can indicate what functionality is permitted and what functionality is restricted for a particular user. Generally, the control policy may either define the functionality that a particular user is allowed to access, with all other functionality being restricted (a so-called "whitelist" approach), or, alternatively, may define the functionality that is restricted for a particular user, with all other functionality being permitted (a so-called "blacklist" approach).
The functionality may be referred to explicitly. For example, a control policy may identify individual applications or hardware features that a user is (or is not) allowed to use. In some embodiments, software functionality may be grouped into particular categories. The control policy may therefore refer to categories of software functionality (e.g. applications) that the user is (or is not) allowed to use. The categories that a particular item of software functionality falls into may be stored in a database and looked-up as needed. This can help to reduce the frequency with which a control policy needs updating because any new software applications can be categorised and handled accordingly. For example, the software functionality might be categorised into groups of applications that are considered to be functionally similar, such as "gaming" applications, "social media" applications or "gambling" applications. The software functionality might additionally or alternatively be categorised into groups of applications that are appropriate (or, alternatively, are inappropriate) for different age groups.
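As an illustration of the whitelist and blacklist approaches just described, the following Python sketch evaluates a category-based access control policy. The category database, policy structure and application identifiers are invented for the example and are not taken from the patent:

```python
# Hypothetical mapping from application identifier to category, as might be
# held in the category database mentioned above.
APP_CATEGORIES = {
    "com.example.chess": "gaming",
    "com.example.chat": "social media",
    "com.example.poker": "gambling",
}

def is_app_allowed(app_id: str, policy: dict) -> bool:
    """Return True if the control policy permits the given application."""
    category = APP_CATEGORIES.get(app_id, "uncategorised")
    if policy["mode"] == "whitelist":
        # Whitelist: only listed categories are permitted.
        return category in policy["categories"]
    # Blacklist: listed categories are restricted; everything else is permitted.
    return category not in policy["categories"]

# Example: a blacklist policy a parent might specify for a child.
child_policy = {"mode": "blacklist", "categories": {"gambling", "social media"}}
assert is_app_allowed("com.example.chess", child_policy)
assert not is_app_allowed("com.example.poker", child_policy)
```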
In some embodiments, the control policy that is to be applied to the computing device takes the form of a content control policy. That is to say, a control policy that determines what types of content (such as information and/or multimedia) may be accessed by a user. As for the access control policies, either a whitelist or a blacklist approach may be taken. Similarly, the content control policy may explicitly identify content (e.g. particular films, books or webpages) and/or sources of content (such as publishers or websites) that may or may not be accessed by a particular user. Again, content and/or sources of content may be categorised and the content control policy may refer to particular categories of content that are or are not allowed for the user.
In other embodiments, the control policy that is to be applied to the computing device may be a hybrid control policy that determines both what functionality and what content may be accessed by a particular user. That is to say, the hybrid control policy includes elements of both access control policies and content control policies.

Due to the wide range of content and functionality that is accessible via various networks (especially those connected to the Internet 206), as well as the relative lack of control that users have over the types of content and functionality that are available over such networks, preferred embodiments of the invention relate to the application of control policies that define limitations on the computing device's use of a network with which it is configured to communicate. That is to say, the control policy places limitations on the functionality of the device to communicate with that network and/or the content that may be accessed via that network. However, of course, in other embodiments, the control policies may additionally or alternatively be used to limit functionality or content that is available locally on the device 202 itself.
In some embodiments, the method 300 starts at an optional operation 302, in which the method 300 authenticates an initial user 204 of the device 202. This authentication serves to unlock the device to enable it to be used and establishes an identity of the user that unlocked the device (the initial user). After a successful authentication, the device 202 unlocks itself and operates according to a user account associated with the authenticated user (for example, by allowing access to that user's data that is stored on the device). Of course, in embodiments where no such locking mechanism is employed, no authentication is required in order to use the device 202. Accordingly, in such embodiments, the method 300 starts instead at an operation 306.

In order to authenticate themselves as an authorised user of the device 202 at operation 302, the user must interact with the device in accordance with an authentication mechanism which verifies their identity. Any suitable authentication mechanism may be used. For example, various explicit authentication mechanisms may be used, such as mechanisms which are based on knowledge factors, like passwords, secret patterns or numbers, or that are based on physiological biometrics, such as fingerprints or facial recognition. The outcome of the authentication is a verification of the identity of a user who completed the authentication (that is, the identity of an initial user of the device).
In some embodiments, the authentication may be performed entirely locally by the computing device 202 itself. Of course, in other embodiments, the authentication may be performed through communication with another computer system 100, such as an authentication server.
In the exemplary arrangement 200 illustrated in figure 2, for example, the first user 204a may be registered as an authorised user of the computing device 202. The device 202 may therefore require a successful authentication from the first user 204a before the device can be used. This can help to prevent access to the device by anyone who is not authorised to use the device, unless it is first unlocked by the authorised user (in this case the first user 204a). For example, in the arrangement 200 shown in figure 2, the second user 204b may not be able to unlock the device 202 even though they may be able to physically access the device 202 and may be permitted to use the device 202 once it is unlocked (as illustrated by the dashed line in figure 2). Of course, even though this example only discusses one user 204a who can unlock the device and one user 204b who cannot unlock the device but may be permitted to use the device once it is unlocked, it will be appreciated that, in some embodiments, multiple users may unlock the device or multiple users may be permitted to use the device once it is unlocked, or both.
Having identified the initial (authenticated) user of the device, the method 300 proceeds to an optional operation 304. At optional operation 304, the method 300 causes an initial control policy to be applied in respect of the computing device 202. The initial control policy that is applied is based on the identity of the initial user that was determined from the authentication that was carried out at operation 302.
In some embodiments, the initial control policy is applied locally by the device 202 itself. That is to say, the device 202 is configured to receive and enforce an initial control policy by restricting access to various content or functionality (or both), as dictated by the initial control policy.
In other embodiments, the initial control policy is applied by the computer systems 100 in the networks via which the device communicates, such as broadband network 208. That is to say, various computer systems 100 in the network 208 may be involved in restricting access to content or functionality for computing device 202, including, for example, DNS servers, routing devices and/or firewall devices, as will be known to those skilled in the art.
These computer systems 100 may include computer systems 100 that are deployed locally (i.e. geographically proximal) to the computing device 202, such as a home router or wi-fi access point that is used to access the broadband network 208, or may be more centrally located in the network.
In some embodiments, the computing device 202 may communicate with the control policy server 210 in order to apply an appropriate control policy. For example, the computing device 202 (or, in some embodiments, the authentication server) may indicate the identity of the initial user 204a to the control policy server 210. The control policy server may then provide a control policy for the computing device 202 to apply locally or may communicate with various computer systems 100 in the network 208 to cause them to apply the control policy in respect of the computing device 202 (or both). Through the use of a control policy server 210, the ease of maintaining the control policies for users can be improved, especially as the number of different computing devices 202 that each user 204 may use increases, as it provides a central location in which to specify the control policies. Furthermore, the use of a control policy server 210 can help to ensure that the control policy that is applied is the most up-to-date one. However, the use of a control policy server 210 is not essential and, in other embodiments, such as when computing device 202 is a standalone computing device, the control policies may be specified locally at the computing device 202 and then stored in (and subsequently retrieved from) the local memory 102 of the computing device 202.
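One way such a device-to-server exchange could look is sketched below. The endpoint path, payload fields and JSON response shape are assumptions made for illustration; the patent does not define a protocol:

```python
import json
import urllib.request

def fetch_policy(server_url: str, device_id: str, user_id: str) -> dict:
    """Ask a control policy server for the policy the device should apply locally.

    The /policy endpoint and the request/response shape are hypothetical.
    """
    payload = json.dumps({"device": device_id, "user": user_id}).encode()
    request = urllib.request.Request(
        server_url + "/policy",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# Example call (against a hypothetical server):
# policy = fetch_policy("https://policy.example.net", "device-202", "user-204a")
```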
Having caused an initial control policy to be applied in respect of the computing device 202 at optional operation 304, the method proceeds to operation 306. As discussed above, in embodiments where no authentication is required, the method 300 starts at operation 306.
At operation 306, the method 300 monitors one or more behavioural biometrics of a current user 204 of the device.
As discussed above, behavioural biometrics are measurements of some aspect of a user's behaviour during their normal activities (either when actively interacting with the device or when carrying out other activities that do not involve interacting with the device but which can be detected when the device is being carried by the user). As will be appreciated, there are a wide range of behavioural biometrics which can be detected through the various sensors that the computing device 202 has access to. For example, touchscreen interactions, including gestures such as swipes, strikes, pinches, scrolls and/or taps, may be sensed via a touchscreen 110a of the device 100. The data provided by the touchscreen 110a may therefore yield various features that can help to distinguish a particular user from other users. Such features are considered to be behavioural biometrics. For example, the pressure applied, stroke length and/or duration of any touchscreen interactions may be measured and are likely to be different for different users, yet consistent for a particular user. Other sensors may yield other behavioural biometrics; for example, sensors such as an accelerometer 110h, gyroscope 110i, gravity sensor 110j and/or magnetometer 110k can be used to determine other distinguishing features of a particular user, such as their gait, or the way in which they hold their phone (e.g. a typical device orientation). A camera 110b of the device may also provide behavioural biometrics, for example associated with movements and facial expressions of a user as they interact with the device. As a further example, tapping or typing patterns on a keyboard (either virtual or physical) may be monitored and behavioural biometrics relating to these patterns (which may be referred to as keystroke dynamics) can be used. Similarly, the semantic content of data entered into the computing device 202 (whether by virtual or physical keyboard, voice, or in any other way) may be analysed to determine linguistic behavioural biometrics relating to patterns in the language that is used by the user to express themselves (for example, frequencies of use of different words). All these features are considered to be behavioural biometrics. In general, any form of suitable behavioural biometric that can distinguish one user from another (either alone or in combination with other behavioural biometrics) and which may be sensed by the computing device 202 may be monitored.
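By way of illustration, a few of the touchscreen-derived features mentioned above (pressure, stroke duration and stroke length) might be summarised over a window of events as in the following sketch. The event field names are assumptions, not a sensor API defined by the patent:

```python
import statistics

def touch_features(events: list[dict]) -> dict:
    """Summarise a window of touch events into candidate biometric features."""
    pressures = [e["pressure"] for e in events]
    durations = [e["end_ms"] - e["start_ms"] for e in events]
    lengths = [e["stroke_length_px"] for e in events]
    return {
        "mean_pressure": statistics.mean(pressures),
        "mean_duration_ms": statistics.mean(durations),
        "mean_stroke_length_px": statistics.mean(lengths),
    }

# Two invented touch events from the same user.
events = [
    {"pressure": 0.42, "start_ms": 0, "end_ms": 130, "stroke_length_px": 220},
    {"pressure": 0.47, "start_ms": 900, "end_ms": 1010, "stroke_length_px": 180},
]
print(touch_features(events))
```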
As will be appreciated, it is necessary to generate the measurements of the behavioural biometrics in a manner which yields relatively repeatable results and yet still provides some utility for distinguishing particular users from other users (when multiple behavioural biometrics are combined). The skilled person would be readily familiar with techniques for doing this. For example, the granularity (or accuracy) with which each of the behavioural biometrics is measured may be lowered to ensure that repeated measurements are likely to provide the same result at the level of granularity that is chosen. Similarly, measurements may be classified into broader categories that the measurements belong to and each such category may be associated with a particular value. Additionally, normalisation techniques may be used to normalise the data that is provided by the sensors. For example, multiple measurements of a particular feature may be averaged to provide an average measurement for that feature (such as an average speed of touch, or an average length of stroke and so on). Similarly, data from other sensors may be used to normalise the data that is read from another sensor (e.g. data from a gravity sensor 110j may be used to normalise data from an accelerometer 110h so that it is relative to a "real world" coordinate system rather than being relative to the computing device 100). The skilled person would be readily familiar with these, as well as other, techniques that may be used to ensure that the measurements of the behavioural biometrics are captured in a manner which is repeatable.
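A minimal sketch of the granularity-lowering technique described above: quantising raw measurements into coarse buckets so that two slightly different readings from the same user map to the same value. The bucket size is an illustrative assumption:

```python
def quantise(value: float, bucket_size: float) -> int:
    """Map a raw measurement to a coarse bucket index for repeatability."""
    return int(value // bucket_size)

# Two slightly different pressure readings from the same user land in the
# same bucket, giving a repeatable feature value.
assert quantise(0.42, 0.1) == quantise(0.47, 0.1)  # both map to bucket 4
```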
Having gathered measurements of the current user's behavioural biometrics, the method 300 proceeds to an operation 308. Of course, although operation 306 is illustrated in figure 3 as being a discrete operation, it will be appreciated that the behavioural biometrics may be continuously monitored to ensure that a current set of measurements is available. Operation 308 may then be performed at any point in time based on the current set of measurements obtained from monitoring the current user's behavioural biometrics. In some embodiments, the monitoring performed at operation 306 may further comprise monitoring the behavioural biometrics to detect that the current user has changed. In such embodiments, the method 300 may wait until a change in current user 204 has been detected before proceeding to operation 308 (or, in other words, operation 308 may be performed in response to detecting a change in the current user 204 from the behavioural biometrics).
At operation 308, the method 300 determines an identity of the current user 204 based on the one or more behavioural biometrics. That is to say, the most recent measurements of the behavioural biometrics that have been obtained from the monitoring of those behavioural biometrics at operation 306 are used at operation 308 to determine the identity of the current user 204 of the computing device 202.
As discussed above, the current user 204b of the device 202 may differ from the user 204a that completed the authentication to access the device 202 (the authenticated user). Similarly, the set of users 204 that are permitted to access the device 202 once it is unlocked may differ from a set of users 204 that can unlock the device 202 by authenticating themselves. Accordingly, at operation 308, the method 300 seeks to identify which one of a set of possible users 204 (i.e. those users 204 that are permitted to access the device 202 once it is unlocked) is currently using the computing device 202 based on their behavioural biometrics.
In some embodiments, in order to identify which of the set of possible users 204 is the current user 204, the method 300 may utilise a machine learning model which has been trained to classify the behavioural biometrics (or features of those behavioural biometrics) as belonging to a particular one of the possible users. The skilled person would be familiar with various machine learning classification algorithms that can be used to learn such a model, such as, for example, Linear Classifiers, Logistic Regression, Naive Bayes Classifier, Nearest Neighbour, Support Vector Machines, Decision Trees, Boosted Trees, Random Forests, or Neural Networks. Any suitable machine learning classification algorithm may be used to train a model for this purpose, as would be known to the skilled person. This model may be learnt at the device itself. For example, the device may utilise a training period in which each possible user of the device is identified prior to using the device for a period of time that enables the device to learn the distinguishing characteristics of that user's behavioural biometrics. However, in other embodiments, the model may be provided to the device, for example by the authentication server 218 based on data that is already known about each of the possible users' behavioural biometrics. In some embodiments, the machine learning model may be further trained to classify a user as being an unidentified user (that is, to determine that the current user is not one of the set of possible users).
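As a concrete (non-authoritative) example of this classification approach, the sketch below trains a scikit-learn random forest on labelled behavioural biometric feature vectors gathered during a training period, then classifies the current user's measurements. The feature values, user labels and choice of model are all invented for illustration:

```python
from sklearn.ensemble import RandomForestClassifier

# Each row: [mean_pressure, mean_duration_ms, mean_stroke_length_px],
# gathered during a training period in which the user is known.
X_train = [
    [0.42, 120.0, 210.0],  # samples from user "alice"
    [0.45, 115.0, 205.0],
    [0.71, 60.0, 90.0],    # samples from user "bob"
    [0.68, 65.0, 95.0],
]
y_train = ["alice", "alice", "bob", "bob"]

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# Classify the current user's most recent measurements.
current = [[0.70, 62.0, 92.0]]
print(model.predict(current))  # expected: ['bob']
```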
In other embodiments, a secret credential is generated from the monitored behavioural biometrics and used to identify the current user 204. A suitable technique for generating a secret credential from a user's behavioural biometrics is discussed in the applicant's UK patent application number 1910169.0 (the entirety of which is hereby incorporated by reference). This technique for generating a secret credential from a user's behavioural biometrics is also discussed herein under Appendix A below. Accordingly, in some embodiments, at operation 308, the method 300 tests which of the set of possible users is currently using the device by recreating the secret credential for each of the users based on the current user's behavioural biometrics. The current user is then identified as being the user for which the recreated secret credential matches the original secret credential. In other words, the method 300 assumes that the current user is each of the set of possible users in turn and recreates the secret credential for that user based on the current user's behavioural biometrics; if the assumption is correct (that is, the current user is that particular user), the recreated secret credential will match that user's secret credential (i.e. the recreated secret credential will be the correct secret credential for that user), otherwise it will not. Accordingly, if the generated secret credential matches the secret credential of one of the set of possible users, the user associated with that secret credential is identified as being the current user of the device. Of course, if none of the secret credentials match those of any of the possible users, the current user can be determined to be an unidentified user.
It will be appreciated that the method 300 need not necessarily generate the secret credential for each of the possible users and may instead test each user in turn until the user has been identified (i.e. when the generated secret credential matches the original secret credential for a particular user). Of course, it will be appreciated that any other techniques for identifying a user from their behavioural biometrics may be used instead. As a further example, user profile matching may be performed to identify a user whose behavioural biometrics match those that have been measured. That is to say, each user's behavioural biometrics may be stored and a lookup may be performed to identify which of the users have a set of behavioural biometrics that are the same as those of the current user.
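The test-each-user-in-turn strategy could be sketched as follows. The recreate_credential and verify callables are hypothetical stand-ins for the share-generation and credential-checking steps summarised in Appendix A, and the toy "credentials" below are simply quantised feature tuples:

```python
def identify_current_user(biometrics, identification_data, recreate_credential, verify):
    """Return the identity of the matching user, or None for an unidentified user."""
    for user, share_recipe in identification_data.items():
        # Assume the current user is `user`: generate that user's shares from
        # the current biometrics and recreate a candidate secret credential.
        candidate = recreate_credential(biometrics, share_recipe)
        if verify(user, candidate):
            return user
    return None  # unidentified user: fall back to the default control policy

# Toy stand-ins for the secret-sharing machinery.
recipes = {"alice": 0.1, "bob": 0.05}   # per-user quantisation bucket sizes
stored = {"alice": (4,), "bob": (14,)}  # original secret credentials
recreate = lambda bio, bucket: tuple(int(v // bucket) for v in bio)
verify = lambda user, candidate: stored[user] == candidate

print(identify_current_user([0.72], recipes, recreate, verify))  # prints: bob
```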
In some embodiments, the determination of the current user's identity is performed entirely locally by the computing device 202 itself. However, in other embodiments, the determination of the current user's identity is performed collaboratively with another computing device, such as the control policy server 210. In such embodiments, data identifying the current user of the computing device is provided to the other computing device. In some cases, this data may simply be the behavioural biometrics that have been measured for the current user (or features derived from those behavioural biometrics). In such cases, the other computer system uses those behavioural biometrics to identify the user, for example, by using the techniques discussed above involving the use of trained machine learning classification models or secret credentials generated from the behavioural biometrics.
Having determined the identity of the current user 204 of the computing device 202 at operation 308, the method 300 proceeds to an operation 310.
At operation 310, the method 300 causes a control policy to be applied in respect of the computing device 202 based on the identity of the current user 204. That is to say, an appropriate control policy is selected and applied for the identified user. For example, where the current user is a child, this could be a control policy that a parent has specified for that child.
In some embodiments, the selection of an appropriate control policy for the identified user is performed locally by the computing device 202, such as when the computing device 202 is a standalone computing device. In such embodiments, the computing device 202 retrieves a control policy that has been stored in its memory 102 for that user.
In other embodiments, the selection of an appropriate control policy for the identified user is performed by another computer system 100, such as the control policy server 210. In such embodiments, the computing device 202 provides data identifying the current user 204 of the computing device 202. In embodiments where the identity of the current user 204b is determined locally by the computing device 202, this data may simply be an identifier for the identified user, such as a username or user number. In other embodiments, where the identity of the current user is determined by the other computer system 100 (as part of operation 308), the other computer system 100 may simply use the identity that it has already determined for the user without receiving any further data from the computing device 202 (other than the data that was provided at operation 308 to allow the user's identity to be determined by the other computer system 100). In either case, the other computer system 100 uses the identity of the current user 204 to determine an appropriate control policy to be applied.
In some embodiments, the control policy is, at least partially, applied by the computing device 202. Where the appropriate control policy for the current user is determined by another computer system, the other computer system 100 provides the control policy (or the relevant portion of the control policy that is to be enforced by the computing device 202) to the computing device 202.
In some embodiments, the control policy is, at least partially, applied by other computer systems 100 associated with a network via which the device communicates (such as being enforced by routers, firewalls and/or DNS servers in the network). Where the appropriate control policy is determined by the computing device 202 itself, the computing device 202 communicates with those other computer systems 100 to cause them to apply the control policy in respect of the computing device 202. Where the appropriate control policy is determined by another computer system, that computer system may communicate with those other computer systems 100 to bring the control policy into effect in respect of the computing device 202. In some embodiments, the control policy is enforced by both the computing device 202 itself and other computer systems 100 associated with a network via which the device communicates, such as broadband network 208.
In some embodiments, where the current user 204 is determined to be an unidentified user (that is, someone who is not one of the known users of a device), an appropriate action may be taken. For example, the device may be locked such that it is no longer operating under the authenticated user's account and requires re-authentication before it can be used.
In other examples, a default control policy may be applied. This default control policy may allow the current user 204 of the device 202 to continue operating the device 202, but may impose restrictions on the functionality or content that can be used. This default control policy may, for example, be determined to include all restrictions that are specified for each of the possible users. In this way, a failsafe mechanism can be provided in the event that identification of the user fails for some reason (such as, for example, a child passing their parent's device 202 to a friend who is not known to and therefore not identifiable by the device 202).
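A minimal sketch of this failsafe, assuming a policy is represented simply as a set of restricted categories per user; the default policy for an unidentified user is then the union of every known user's restrictions, i.e. the most restrictive combination:

```python
def default_policy(user_policies: dict[str, set[str]]) -> set[str]:
    """Union of all users' restricted categories: the most restrictive set."""
    restricted: set[str] = set()
    for categories in user_policies.values():
        restricted |= categories
    return restricted

# Invented per-user restriction sets.
policies = {
    "child_a": {"gambling", "social media"},
    "child_b": {"gambling", "gaming"},
}
print(default_policy(policies))  # {'gambling', 'social media', 'gaming'}
```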
At operation 312, the method 300 determines whether to repeat operations 306, 308 and 310. In particular, in some embodiments, the method 300 repeatedly performs these operations in order to detect further changes in the current user of the device 202 and change the control policy that is applied in respect of the device 202 appropriately. By repeating or iteratively performing operations 306, 308 and 310, the method 300 can ensure that an appropriate control policy is being enforced in respect of the device 202 regardless of the number of times the current user of the device changes. When the monitoring is no longer required, such as, for example, when the device is locked such that it requires re-authentication to be accessed, the method 300 ends.
Insofar as embodiments of the invention described are implementable, at least in part, using a software-controlled programmable processing device, such as a microprocessor, digital signal processor or other processing device, data processing apparatus or system, it will be appreciated that a computer program for configuring a programmable device, apparatus or system to implement the foregoing described methods is envisaged as an aspect of the present invention. The computer program may be embodied as source code or undergo compilation for implementation on a processing device, apparatus or system or may be embodied as object code, for example. Suitably, the computer program is stored on a carrier medium in machine or device readable form, for example in solid-state memory, magnetic memory such as disk or tape, optically or magneto-optically readable memory such as compact disk or digital versatile disk etc., and the processing device utilises the program or a part thereof to configure it for operation. The computer program may be supplied from a remote source embodied in a communications medium such as an electronic signal, radio frequency carrier wave or optical carrier wave. Such carrier media are also envisaged as aspects of the present invention.

It will be understood by those skilled in the art that, although the present invention has been described in relation to the above described example embodiments, the invention is not limited thereto and that there are many possible variations and modifications which fall within the scope of the invention. The scope of the present invention includes any novel features or combination of features disclosed herein. The applicant hereby gives notice that new claims may be formulated to such features or combination of features during prosecution of this application or of any such further applications derived therefrom. In particular, with reference to the appended claims, features from dependent claims may be combined with those of the independent claims and features from respective independent claims may be combined in any appropriate manner and not merely in the specific combinations enumerated in the claims.
Whilst the examples that have been discussed herein illustrate the possible uses of the invention within a home environment, it will be appreciated that this invention can equally be applied to many other types of environments. For example, similar concerns exist in a workplace environment where certain users may be allowed to access certain content or functionality that should not be accessible to other users that may gain access to computing devices used by such users. For example, a user that is a member of a human resources department may have access to confidential employee data that a user who is not a member of the human resources department should not have access to.
In summary, there is provided a computer implemented method for applying a control policy in respect of a computing device. The method comprises monitoring, at the computing device, one or more behavioural biometrics of a current user of the computing device. The method further comprises causing a control policy to be applied in respect of the computing device based on an identity of the current user that is determined based on the one or more behavioural biometrics. The identity of the user can be determined either by the computing device itself, or, alternatively, by another computing device to which data indicating the behavioural biometrics is provided by the computing device.
Appendix A - Generating a Secret Credential from a User's Behavioural Biometrics

This appendix summarises the technique for generating a secret credential from a user's behavioural biometrics that was originally discussed in the applicant's UK patent application number 1910169.0, as it relates to the invention. In particular, that application discusses how secret sharing schemes can be used to represent a secret credential for a user in terms of a set of shares.
As will be known by those skilled in the art, secret sharing schemes (which may also be referred to as secret splitting schemes) refer to methods for distributing a secret S amongst a number n of secret shares S1, ..., Sn (also referred to herein as shares) in such a way that the secret S can be computed (or 'recreated') from the combination of any t or more secret shares Sn, but cannot be recreated from any combination of t-1 or fewer shares Sn. Since the secret S can be recreated from the shares Sn, it can be considered that the set of secret shares S1, ..., Sn collectively form a secure representation of the secret. The number t of shares Sn that are required to recreate the secret S may be referred to as the threshold of the secret sharing scheme. There are many known secret sharing schemes and any suitable secret sharing scheme may be used in embodiments of the invention.
One example of a secret sharing scheme that may be used with embodiments of the invention is Shamir's secret sharing scheme, the details of which were initially set out in the paper "How to Share a Secret" by Adi Shamir, published in Communications of the ACM, Volume 22, Number 11, November 1979. As will be known, this scheme makes use of the idea that a polynomial of degree t-1 is only completely defined by a set of at least t points that lie on that polynomial or, in other words, fewer than t points are insufficient to define a unique polynomial of degree t-1. Accordingly, the method used by Shamir's secret sharing scheme involves defining a polynomial of degree t-1, i.e. f(x) = a0 + a1*x + a2*x^2 + ... + a(t-1)*x^(t-1), with the first coefficient a0 being set as the secret S, and the remaining coefficients a1, a2, ..., a(t-1) being picked at random. Having defined a specific polynomial in which the secret S is embedded as the first coefficient a0, any n points on the polynomial are then found. These n points form the secret shares Sn. These secret shares Sn represent the secret S because, when at least t of the secret shares Sn are known, t points lying on a polynomial in which the secret S is embedded are known. From these t points, it is possible to recreate the unique polynomial of degree t-1 and thereby recover the secret S, which is embedded as the first coefficient a0 of the polynomial.
However, other secret sharing schemes, such as Blakley's secret sharing scheme (details of which were initially set out in the paper "Safeguarding cryptographic keys" by G. R. Blakley, published in Managing Requirements Knowledge, International Workshop on (AFIPS), 48, 313-317, 1979) may be used instead.
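Purely by way of illustration, the following Python sketch implements a toy t-of-n version of Shamir's scheme over a prime field in the manner just described. It is not the applicant's implementation; the prime P and the parameters in the closing example are arbitrary assumptions.

import random

P = 2**127 - 1  # an arbitrary Mersenne prime, large enough for a toy secret

def split_secret(secret, n, t):
    """Embed the secret as coefficient a0 of a random polynomial of degree t-1;
    the n shares are the points (x, f(x)) for x = 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(a * pow(x, i, P) for i, a in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover_secret(shares):
    """Lagrange interpolation evaluated at x = 0 recovers a0, i.e. the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P       # contribution of (0 - xj)
                den = den * (xi - xj) % P   # contribution of (xi - xj)
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # inverse via Fermat
    return secret

shares = split_secret(123456789, n=5, t=3)
assert recover_secret(shares[:3]) == 123456789   # any 3 of the 5 shares suffice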
To set up a secret credential for a user, a set of shares is generated representing the secret credential. These shares are generated based on the typical behavioural biometrics for the user to which the secret credential relates (conceptually, this may be considered as specifying the shares in terms of the user's typical behavioural biometrics). For example, when using Shamir's secret sharing scheme, a polynomial of degree t-1 may be specified in which the secret credential S forms the first coefficient and the other coefficients are picked at random, i.e. f(x) = S + a1*x + a2*x^2 + ... + a(t-1)*x^(t-1). Next, a number n of points p1, ..., pn on the polynomial are determined by evaluating f(1), ..., f(n) respectively. These points form the n shares sn of the secret credential S (i.e. sn = pn).

Therefore, in order to define any of the shares sn in terms of one or more typical measurements of a user's behavioural biometrics Bm = {b1, ..., bm}, one can specify any suitable method of combining the behavioural biometrics in a way that results in the value of the share sn. For example, an equation can be specified in which each of the behavioural biometrics Bm is multiplied by a respective coefficient from Cm = {c1, ..., cm}, and the resulting value is then modified by a residual value r, as follows: sn = c1*b1 + c2*b2 + ... + cm*bm + r. The share sn may therefore be considered to be (i.e. may be specified in terms of) the coefficients Cm, the measurements of the user's behavioural biometrics Bm and a residual value r. Therefore, a computing device will be able to generate (or recreate) the share sn given only the coefficients Cm and the residual value r, by taking its own measurements of the behavioural biometrics when the authenticated user is using the device. This is because the measurements of the behavioural biometrics will normally correspond to the typical measurements Bm of the user's behavioural biometrics when the authenticated user is using the device (and are unlikely to correspond if a different user is using the device).

The information describing how to recreate each of the set of shares from a respective one or more of the user's behavioural biometrics is then stored (this information may be referred to herein as identification data). This data includes an indication of how each of the shares sn is to be generated from the behavioural biometrics. For example, for each of the shares, a collection of coefficients Cm and a residual value r may be included in the identification data. These coefficients and residual value allow the respective share sn to be generated according to the following equation (in the manner that has already been discussed): sn = c1*b1 + ... + cm*bm + r. This data can then be used by a computing device 202 to recreate the shares sn by combining the provided coefficients Cm for each share sn with measured values of the behavioural biometrics Bm (and/or copies of the other factors Fn provided by the user). This can be done by taking measurements of the user's behavioural biometrics and then using the provided coefficients and residual values for each secret share, together with these measurements, to reproduce each of the shares. These shares can then be combined according to the secret sharing scheme to recreate the secret credential. For example, when Shamir's secret sharing scheme is used, each of the shares sn represents a point on a polynomial of degree t-1 (where t is the threshold of the secret sharing scheme).
Lagrange basis polynomials can then be used to determine the coefficients of the polynomial of degree t-1 that is defined by those points. The secret credential S is then identified as the first coefficient of this polynomial.
When the user of the device is the user to which the secret credential S belongs, the measurements of the user's behavioural biometrics will match those used to generate the shares, meaning that the correct set of shares is obtained (and therefore the secret credential will be correctly obtained according to the secret sharing algorithm). Of course it will be appreciated that the generation of each share can be made dependent on different ones or subsets of the behavioural biometrics (for example, by setting any of the coefficients Cm to zero in order to exclude any particular behavioural biometric). Since this data enables the secret credential to be generated whenever the user is using the device (i.e. from measurements of their behavioural biometrics), there is no need for the secret credential to be stored on the device.
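Again purely as an illustrative sketch, the following Python functions show one way the identification data (a collection of coefficients plus a residual per share) could be produced at enrolment and later used to regenerate the shares from fresh measurements. It assumes quantised, integer-valued biometric measurements and shares produced at x = 1, ..., n (as in the Shamir sketch above); both function names are hypothetical. Setting a coefficient to zero excludes the corresponding biometric from that share, as just noted.

import random

def make_identification_data(target_shares, typical_biometrics, p):
    """At enrolment: for each share value, pick random coefficients and choose
    the residual r so that sum(c_i * b_i) + r reproduces that share from the
    user's typical measurements. Only (coefficients, residual) pairs are stored."""
    ident = []
    for _x, share_value in target_shares:          # shares assumed at x = 1..n, in order
        coeffs = [random.randrange(p) for _ in typical_biometrics]
        r = (share_value - sum(c * b for c, b in zip(coeffs, typical_biometrics))) % p
        ident.append((coeffs, r))
    return ident

def regenerate_shares(ident, measured_biometrics, p):
    """At use time: recombine fresh measurements with the stored coefficients
    and residuals; matching measurements yield the original share values."""
    return [
        (x, (sum(c * b for c, b in zip(coeffs, measured_biometrics)) + r) % p)
        for x, (coeffs, r) in enumerate(ident, start=1)
    ]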
In order to ensure reliable recreation of the secret credential, it is desirable that the behavioural biometrics upon which the shares are generated are reliably present whenever the authorised user is using the computing device.
One way of achieving this is through the selection of appropriate behavioural biometrics. In particular, those behavioural biometrics that are most reliably present for a particular user may be chosen for use in generating the shares. Similarly, the measurements of these behavioural biometrics may be normalised through various operations, or else taken at a certain level of granularity (or resolution), to improve the probability that the same value will be obtained whenever the user is operating the computing device. Such techniques will be known to the skilled person.

However, through the use of the secret sharing scheme, there is provided an additional mechanism to improve the reliability with which the secret credential can be used. Specifically, through the specification of a suitable threshold t for the secret sharing scheme which is less than the total number of shares Sn that are generated (i.e. when t < n), the authentication mechanism can be made resilient to the legitimate absence of some of the behavioural biometric factors (i.e. resilient to situations where the measurements of those behavioural biometric factors do not correspond to the typical measurements for the user). As discussed above, this means that only a subset of the set of shares needs to be correctly generated in order for the secret credential to be recreated. For example, some behavioural biometrics might only be present when the user is performing a particular activity (e.g. a user's gait can only be relied upon when the user is walking). However, such behavioural biometrics may be very distinctive for the user when they are carrying out that particular activity. Therefore, rather than ignore such behavioural biometrics completely, an appropriate threshold t can be set, meaning that only t of the total n shares need to be recreated. Accordingly, by ensuring an appropriate distribution of the use of the behavioural biometric measurements to generate each share (e.g. by using a single different behavioural biometric measurement in the generation of each share), the mechanism can ensure that the secret credential S can be recreated, even if some of the behavioural biometrics are missing, provided that a sufficient number have been reliably reproduced to enable the generation of the threshold number t of shares. As an example, such a system may be designed by identifying a minimum number of the behavioural biometrics that have been chosen for authenticating the user which are reliably present at any given time, and setting the threshold t of the secret sharing scheme to that number.
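As a minimal sketch of the granularity point above (the bucket width being an arbitrary assumption, not a value from the application), a raw measurement can be coarsened so that repeat measurements by the same user map to the same integer:

def quantise(measurement: float, bucket_width: float = 0.25) -> int:
    # Map a raw measurement onto a coarse integer grid so that small, natural
    # variations in the same user's behaviour do not change the resulting value.
    return round(measurement / bucket_width)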
Accordingly, multiple copies S' of the secret credential S may be recreated for a particular user based on different combinations of the shares that have been generated based on the behavioural biometrics. These may then be tested (for example, by attempting to authenticate or identify the user based on the copy of the credential) until a correct copy is identified (for example, when the authentication or identification is successful), or until all possible combinations of the generated shares (in sets that meet or exceed the threshold t in size) have been tried (or a predetermined number of attempts have failed).
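The following sketch (again illustrative only, reusing recover_secret from the Shamir sketch above) tries combinations of t regenerated shares until a copy of the credential verifies or the attempt budget is exhausted; verify_credential is a hypothetical check supplied by the caller, not something defined by the application.

from itertools import combinations

def recreate_with_threshold(shares, t, verify_credential, max_attempts=100):
    attempts = 0
    for subset in combinations(shares, t):        # candidate sets of exactly t shares
        candidate = recover_secret(list(subset))  # a copy S' of the credential
        attempts += 1
        if verify_credential(candidate):
            return candidate                      # correct copy identified
        if attempts >= max_attempts:
            break                                 # predetermined number of attempts failed
    return None                                   # no correct copy could be recreated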
The secret credential that is generated by the present invention may be used for many different purposes, including to identify or authenticate (or both) a user in embodiments of the present invention, by comparing a copy of the secret credential that is generated based on measurements of a current user's behavioural biometrics with the actual secret credential for a user. If the two match, it is highly likely that the current user is the user to whom the secret credential belongs, and the current user may therefore be identified as being that user.
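Finally, as a hedged sketch of this comparison step (assuming credential copies have been serialised to bytes; the enrolled mapping and function name are placeholders), the recreated copy can be compared against each enrolled user's credential, with an unidentified user falling through to None:

import hmac
from typing import Dict, Optional

def identify_user(candidate_credential: bytes, enrolled: Dict[str, bytes]) -> Optional[str]:
    """enrolled maps a user identifier to that user's stored secret credential.
    hmac.compare_digest provides a timing-safe equality check."""
    for user_id, credential in enrolled.items():
        if hmac.compare_digest(candidate_credential, credential):
            return user_id   # highly likely to be this user
    return None              # unidentified user, e.g. subject to a default control policy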

Claims (15)

  1. A computer implemented method for applying a control policy in respect of a computing device, the method comprising, at the computing device: monitoring one or more behavioural biometrics of a current user of the computing device; and causing a control policy to be applied in respect of the computing device based on an identity of the current user that is determined based on the one or more behavioural biometrics.
  2. The method of claim 1, further comprising, at the computing device: determining an identity of the current user of the computing device based on the one or more behavioural biometrics.
  3. The method of claim 2, wherein determining the identity of the current user comprises using a trained machine learning model to classify the behavioural biometrics as belonging to one of a set of possible users.
  4. The method of claim 2, wherein determining the identity of the current user comprises: accessing identification data for a set of possible users, the identification data indicating, for each user, how a respective set of shares of a secret sharing scheme that represent a secret credential for that user can be generated from the behavioural biometrics; determining whether one or more of the set of possible users is the current user by: generating the respective set of shares for that possible user from the behavioural biometrics of the current user, as indicated by the identification data; recreating a copy of the secret credential for that user based on the generated set of shares, in accordance with the secret sharing scheme; and determining whether the recreated copy of the secret credential for that user is correct; and in response to determining that the recreated copy of the secret credential for one of the set of possible users is correct, determining that the identity of the current user is the identity of that user.
  5. The method of any one of claims 2 to 4, wherein determining the identity of the current user further comprises identifying the current user as an unidentified user that is not in the set of possible users, and wherein the control policy that is applied is a default control policy for unidentified users.
  6. The method of any one of claims 1 to 5, wherein the method further comprises: authenticating an initial user of the computing device to determine an identity of the initial user; and causing an initial control policy to be applied in respect of the computing device based on the identity of the initial user, wherein the identity of the current user is determined subsequent to the authentication of the initial user and the device is configured to enable the current user to use the device in accordance with the identity of the initial user subject to the control policy.
  7. The method of any one of claims 1 to 6, wherein the control policy is applied, at least in part, by the computing device.
  8. The method of any one of claims 1 to 6, wherein the control policy is applied, at least in part, by another computing device that is associated with a network via which the device communicates, and the method further comprises communicating with the other computing device to cause it to apply the control policy.
  9. A computer implemented method for applying a control policy in respect of a computing device, the method comprising: receiving, from the computing device, data identifying a current user of the computing device, the data indicating one or more behavioural biometrics of the current user; determining an identity of the current user based on the one or more behavioural biometrics; and causing a control policy to be applied in respect of the computing device based on the identity of the current user.
  10. The method of claim 9, wherein determining the identity of the current user comprises using a trained machine learning model to classify the behavioural biometrics as belonging to one of a set of possible users.
  11. The method of claim 9, wherein determining the identity of the current user comprises: accessing identification data for a set of possible users, the identification data indicating, for each user, how a respective set of shares of a secret sharing scheme that represent a secret credential for that user can be generated from the behavioural biometrics; determining whether one or more of the set of possible users is the current user by: generating the respective set of shares for that possible user from the behavioural biometrics of the current user, as indicated by the identification data; recreating a copy of the secret credential for that user based on the generated set of shares, in accordance with the secret sharing scheme; and determining whether the recreated copy of the secret credential for that user is correct; and in response to determining that the recreated copy of the secret credential for one of the set of possible users is correct, determining that the identity of the current user is the identity of that user.
  12. The method of any one of claims 9 to 11, wherein determining the identity of the current user comprises identifying the current user as an unidentified user that is not in the set of possible users, and wherein the control policy that is applied is a default control policy for unidentified users.
  13. The method of any one of claims 9 to 12, wherein the control policy is, at least partially, applied by the computing device and the method further comprises providing, to the computing device, the control policy that is to be applied by the computing device.
  14. A computer system comprising a processor and a memory storing computer program code for performing the steps of any one of claims 1 to 13.
  15. A computer program which, when executed by one or more processors, is arranged to carry out a method according to any one of claims 1 to 13.
GB1911314.1A 2019-08-07 2019-08-07 Behavioural biometric control policy application Withdrawn GB2586149A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1911314.1A GB2586149A (en) 2019-08-07 2019-08-07 Behavioural biometric control policy application


Publications (2)

Publication Number Publication Date
GB201911314D0 GB201911314D0 (en) 2019-09-18
GB2586149A 2021-02-10

Family

ID=67990840

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1911314.1A Withdrawn GB2586149A (en) 2019-08-07 2019-08-07 Behavioural blometric control policy application

Country Status (1)

Country Link
GB (1) GB2586149A (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114745305B (en) * 2022-06-15 2022-09-09 中邮消费金融有限公司 Dynamic interaction method and system based on user behavior recognition


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130097416A1 (en) * 2011-10-18 2013-04-18 Google Inc. Dynamic profile switching
US20150113631A1 (en) * 2013-10-23 2015-04-23 Anna Lerner Techniques for identifying a change in users
US20180077154A1 (en) * 2014-12-23 2018-03-15 Intel Corporation User profile selection using contextual authentication

Also Published As

Publication number Publication date
GB201911314D0 (en) 2019-09-18

Similar Documents

Publication Publication Date Title
US11238349B2 (en) Conditional behavioural biometrics
Rasmussen et al. Authentication Using Pulse-Response Biometrics
US20220261466A1 (en) User authentication based on behavioral biometrics
US11663850B2 (en) Method and system to prevent identity theft for fingerprint recognition enabled touch screen devices
US20190220592A1 (en) User authentication
Buriro et al. Evaluation of motion-based touch-typing biometrics for online banking
Mahadi et al. A survey of machine learning techniques for behavioral-based biometric user authentication
Kaczmarek et al. Assentication: user de-authentication and lunchtime attack mitigation with seated posture biometric
Mansour et al. A context-aware multimodal biometric authentication for cloud-empowered systems
Shila et al. CASTRA: Seamless and unobtrusive authentication of users to diverse mobile services
US20220164422A1 (en) Access control classifier training
Jeong et al. Examining the current status and emerging trends in continuous authentication technologies through citation network analysis
Yang et al. TKCA: a timely keystroke-based continuous user authentication with short keystroke sequence in uncontrolled settings
US11824866B2 (en) Peripheral landscape and context monitoring for user-identify verification
Yang et al. Retraining and dynamic privilege for implicit authentication systems
Banirostam et al. Functional control of users by biometric behavior features in cloud computing
GB2586149A (en) Behavioural biometric control policy application
Saini et al. Authenticating mobile phone users based on their typing position using keystroke dynamics
Tiwari et al. Emerging Biometric Modalities and Integration Challenges
Buriro et al. Evaluation of motion-based touch-typing biometrics in online financial environments
US20220376902A1 (en) Resource access control
JP2018136625A (en) Identification apparatus, identification method and identification program
US20220156351A1 (en) Access control
GB2585837A (en) User authentication based on behavioural biometrics
Aljohani et al. Authentication Based on Touch Patterns Using an Artificial Immune System

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)