GB2503417A - Controlling access according to both access code and user's action in entering the code - Google Patents


Info

Publication number
GB2503417A
GB2503417A (application GB201207126A)
Authority
GB
United Kingdom
Prior art keywords
user
code
action
expected
access
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB201207126A
Other versions
GB201207126D0 (en)
Inventor
Farad Azima
Keith Edward Mayes
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nearfield Communications Ltd
Original Assignee
Nearfield Communications Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nearfield Communications Ltd filed Critical Nearfield Communications Ltd
Priority to GB201207126A
Publication of GB201207126D0
Publication of GB2503417A
Status: Withdrawn

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 - Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 - User authentication
    • G06F21/36 - User authentication by graphic or iconic representation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 - Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 - User authentication

Abstract

A plurality of selectable fields is presented to a user and the user's selection of one or more of the fields is captured. In one embodiment, a representation of a key entry device 10 is displayed comprising said selectable fields 12. An action of the user selecting the fields is also captured, preferably visually with an image capture device such as a camera 16, and typically defining the manner in which the fields are selected. The captured user action might be, for example, whether the left or right hand is used to enter a selection, or a particular gesture, motion or physical characteristic such as the user's body or limb geometry. The captured selection and captured action are compared respectively with an expected access code, perhaps a password or PIN, and an expected user action. The user is thus authenticated and access is permitted in response to a successful result of the comparison. Indicators 14 (e.g. L and R) may be displayed, associated with each of the selectable fields, indicating a required user action for selecting the field.

Description

Security systems
FIELD OF THE INVENTION
The invention relates to security systems for controlling access / authenticating users using access codes.
BACKGROUND TO THE INVENTION
Keypads are widely used to provide a secure way of authenticating a user. They are used in many systems, such as cash machines, retail payment terminals, unlocking devices such as computers, smart-phones, tablets and the like, and controlling entry to security safes and doors.
In a conventional access controlled system, the user is prompted for an access code / personal identification number (PIN) and enters this via a physical keypad. The physical keypad, often a numerical keypad, provides a fixed key layout, meaning that there is a risk onlookers could observe a user entering the PIN code. The system may also be vulnerable to key logger devices or malware installed on a device to track a user's PIN code.
A dynamic keypad may be used in which the keypad is presented on a touchscreen.
The key layout may be changeable, making it much harder for an onlooker to steal the access code, but again this may still be vulnerable to key logger devices.
The invention described herein seeks to address these challenges and provides improved methods and apparatus for controlling access.
SUMMARY OF THE INVENTION
According to a first aspect of the invention there is provided a method of controlling access using an access code, the method comprising: presenting a plurality of selectable fields to a user; capturing a selection of one or more of the selectable fields selected by a user; capturing an action of the user selecting the one or more of the selectable fields, the action defining a manner in which the user selected the one or more of the selectable fields; reading an expected access code; comparing the captured selection of the one or more of the selectable fields with the expected access code; processing the captured action of the user to compare the action of the user selecting the one or more of the selectable fields with an expected user action for the selected one or more of the selectable fields; and responsive to comparing the captured selection with the expected access code and comparing the captured action with the expected user action, determining if access is permitted.
The selectable fields may be an on screen representation of a key entry device, or may be a physical key entry device for example to allow a user selection to be captured.
With the former a touch-screen interface may be overlaid over the screen, for example, or alternatively an image capture device may be used to visually capture the selection of fields by a user. Capturing a selection may also comprise reading a key press of a key or a touch-screen display. Such fields may include alphanumeric values, such as numbers (e.g. a conventional PIN code) and characters.
In addition to capturing the selection, the action of selection is also captured, meaning that the manner in which the selection is made is captured. The manner in which the user selects the one or more selectable fields may comprise a movement of the user selecting the field, for example detecting a hand (or finger) performing the selection.
This may include identifying whether a user uses their left or right hand to make the selection.
An expected access code is used to determine if the user has entered the correct access / PIN code. This may be stored in a data store locally on the device being used for access code entry, stored remotely on a data server / database (e.g. cloud storage), or may alternatively be held on an item associated with the user, such as a cash card, identity card or the like. The access/PIN code and the matching functionality may be protected in accordance with information security best practices in order to retain their confidentiality and integrity. For example, a PIN may be cryptographically protected and/or only accessible within a hardware security module such as a smart card chip, e.g. as in "match-on-chip" solutions.
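By way of illustration only, one common way of cryptographically protecting a stored code is a salted hash; the sketch below shows this approach with illustrative parameters. It is one possible protection, not the mechanism prescribed here, which may instead rely on a hardware security module / match-on-chip comparison.

```python
import hashlib
import hmac
import os

# Illustrative sketch: store a salted hash of the expected access code rather
# than the plain PIN, and compare candidates in constant time. Parameter
# choices (hash, iteration count) are assumptions for the example only.

def protect_pin(pin, salt=None):
    """Return (salt, digest) suitable for storing instead of the raw PIN."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)
    return salt, digest

def verify_pin(entered, salt, stored_digest):
    """Constant-time comparison of an entered PIN against the stored digest."""
    candidate = hashlib.pbkdf2_hmac("sha256", entered.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored_digest)

salt, digest = protect_pin("1659")
print(verify_pin("1659", salt, digest))  # True
print(verify_pin("1234", salt, digest))  # False
```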
The captured action(s) are also processed to determine if each selection was made according to an expected action. One or more actions may be required. For example, each selection may have an independent action associated with it, meaning that the action is checked for each selected field. Such action capturing overcomes problems posed by key-logging since, in addition to access code entry, the action of a user entering the code is also part of the access control / authentication process.
The processing required to determine a selection and manner in which a user makes a selection may be provided locally on a device used by the user or may alternatively be implemented in a remote computer system configured to perform such processing, such as a cloud computing platform. A data store storing expected actions / access codes may also be stored locally or remotely, such as on a cloud computing platform.
Entering exactly the correct code may not be necessary, i.e. an exact match may not be needed, although an exact match is one option. Determining if access is permitted may comprise determining a confidence level indicative of an authenticity of the user from the comparing of the captured selection with the expected access code and comparing the captured action with the expected user action. For example, if a user gets three out of four digits of an access code correct, but one wrong, the additional action data may still give at least the same confidence as four correct access code digits. Responsive to the confidence level meeting a required confidence level access may then be permitted. If the confidence level is not met, the method may further comprise requiring the process to be started again, indicating to the user that the first attempt has failed.
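By way of illustration only, a minimal sketch of such a confidence-based decision is given below, assuming equal weighting of access code digits and captured actions and an illustrative required confidence level; the function and parameter names are not taken from any particular implementation.

```python
# Minimal sketch of a confidence-based access decision: score each access-code
# element and each captured action, then combine into a single confidence
# value compared against a required level. Weights and threshold are assumed.

def access_confidence(entered_code, expected_code, entered_actions, expected_actions,
                      code_weight=1.0, action_weight=1.0):
    """Combine per-digit and per-action matches into a 0..1 confidence value."""
    code_score = sum(1.0 for got, want in zip(entered_code, expected_code) if got == want)
    action_score = sum(1.0 for got, want in zip(entered_actions, expected_actions) if got == want)
    max_score = code_weight * len(expected_code) + action_weight * len(expected_actions)
    return (code_weight * code_score + action_weight * action_score) / max_score

def is_access_permitted(entered_code, expected_code, entered_actions, expected_actions,
                        required_confidence=0.9):
    return access_confidence(entered_code, expected_code,
                             entered_actions, expected_actions) >= required_confidence

# Example: three of four digits correct, but all four actions correct.
print(access_confidence("1639", "1659", "RLRL", "RLRL"))  # 0.875
```

In this sketch, three correct digits combined with four correct actions gives a confidence of 0.875, which a policy might or might not accept depending on the required level.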
Determining if access is permitted may comprise determining if the captured selection matches the expected access code and the captured action matches the expected user action. Responsive to both matching, access may then be permitted, (the confidence level may require an exact match). The required confidence level may also be variable and changed dynamically, depending on a deemed level of security required.
The expected user action assigned to the one or more selectable fields may be generated for each access code entry by a user. In other words, the expected user action may not be predetermined by a user and may be generated for each access code entry event by a user. The advantage of this is that a user then does not need to memorise an action associated with each, one, or more of the fields of their access code, but still means that the action forms part of the controlling access/authentication process to overcome any problems of key-logging.
The plurality of selectable fields may be arrangeable randomly which may help to overcome the problem of onlookers viewing a user's access code and provides improved security and flexibility.
Each of the one or more of the selectable fields may have an expected user action associated with the respective selectable field, meaning that separate, independent actions may be required for each field of the access code.
The expected action may be tied to a particular access code field and may have been preselected by a user (when setting their access code for example) and memorised.
This means that the expected action may then need to be stored, for example in the data store along with the access code. Such expected actions may be selected from a plurality of available actions which a system implementing such a method has been configured to identify. This may include being able to differentiate between left or right hand selection, or identifying particular fingers for example. Such available actions may be stored in a further data store, either locally or remotely.
The expected user action for the one or more of the selectable fields may be randomly selected for each of the one or more of the selectable fields. In other words, when the expected user action is generated for each access code entry by a user prior to entry, random generation of the expected actions avoids a third party predicting what actions are required. Such random generation may be random selection from a range of available actions that can be captured and determined.
The method may further comprise presenting an indicator of the expected user action for the one or more of the selectable fields to the user. This may be done by presenting an indicator associated with each of the fields, for example next to each entry field, or only one indicator may be presented at a time, to be used for the next selection. This may be used, for example, when the expected user action(s) assigned to the selectable fields are generated for each access code entry event - it will be appreciated that this then provides information to the user on the manner of access code entry. If an onscreen representation of a keypad is used the indicator may appear on this screen, for example; if a physical key entry device is used with no display, a separate display may be used, or other forms of indicator, such as LEDs, may be used. An audio indicator may also be used, for example spoken words, different tones for left or right hand selection, or tones produced from different speakers, or a combination of one or more of these.
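A minimal sketch of generating the expected actions and associated indicators for a single access code entry event follows; it assumes just two available actions ("L" and "R"), uses an unpredictable random source, and all names are illustrative only.

```python
import secrets

# Hypothetical sketch: assign a freshly randomised expected action to each
# selectable field for one access-code entry event, and build the indicator
# text presented alongside each field. The action alphabet here is just
# left/right hand ("L"/"R"); a real system might support more actions.

AVAILABLE_ACTIONS = ["L", "R"]          # plurality of available actions
FIELDS = list("0123456789ABCDEF")       # selectable fields, e.g. a 4x4 grid

def generate_expected_actions(fields=FIELDS, actions=AVAILABLE_ACTIONS):
    """Randomly select an expected action for every selectable field."""
    return {field: secrets.choice(actions) for field in fields}

def indicator_for(field, expected_actions):
    """Indicator presented to the user next to a field, e.g. '7 (R)'."""
    return f"{field} ({expected_actions[field]})"

expected = generate_expected_actions()
print([indicator_for(f, expected) for f in FIELDS[:4]])
```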
The process of capturing an action may further comprise capturing a sequence of actions for each element of the user access code, and wherein the processing the captured action comprises processing the sequence of actions to compare the sequence of actions of the user selecting the one or more of the selectable fields with an expected sequence of user actions for the selected one or more of the selectable fields. In other words, each selectable field may have an action associated with it meaning that an access code of, for example, four digits, may mean that four actions are required, one for each field of the access code.
Processing the capture of the user action may comprise identifying if the user selected each of the one or more of the selectable fields with a left hand or right hand, i.e. left or right hand selection. This is an example of the manner in which the selection was made. Comparing with the expected user action may then further comprise determining, for each selected field, if the field was selected with the expected left hand or right hand, in order to determine if the correct action/manner of entry of the access code was used.
Capturing an action may further comprise capturing a gesture of the user selecting the one or more of the selectable fields. This gesture may include a motion of the user performing the selection action in order to identify, for example how a user's arm and/or hand moves in the process of making a selection. This may further or alternatively include capturing how a movement occurs over time, and/or the speed of movement.
Capturing an action may further comprise capturing a physical characteristic of the user, such as body or limb geometry. These provide an increased level of security and may allow identification of a unique user, meaning that even if the access code and any actions were known to a third party, then access may still be prevented by recognising that a different person is performing the access code entry and action. Furthermore, it will be appreciated that this introduces no further stage of authentication unlike many conventional access control processes -a user continues to enter their access code with the appropriate action.
Capturing a selection may alternatively comprise identifying a position of a selection in a 3-dimensional space, if a display is presented in such a 3D space using 3D projection / holographic projection (in such an embodiment, no physical screen may be needed), through to the use of Virtual Reality (VR) headsets. This leads to a selection in free space.
According to a second aspect of the invention there is provided an access code controlled computing system, the computing system comprising: an image capture device, a user display, a processor and memory, the memory storing: code to receive a user input of a selection of one or more of the selectable fields selected by a user; code to capture, with the image capturing device, an action of the user selecting the one or more of the selectable fields, the action defining a manner in which the user selects the one or more of the selectable fields; code to read an expected access code from a data store; code to compare the captured selection of the one or more of said selectable fields with the expected access code; code to process the captured action of the user to compare the action of the user selecting the one or more of the selectable fields with an expected user action for the selected one or more of the selectable fields; and code responsive to the comparing the captured selection with the expected access code and the comparing the captured action with the expected user action to determine if access is permitted.
The access code controlled computer system may be used to control access to the computer system, for example providing the controlled access via a logon screen, or provided as a secure screensaver. The computing system may be, for example, a mobile phone, smartphone or tablet computer. It may alternatively be a desktop or laptop computer, virtual reality system or games console. The image capture device used on the access controlled computer system may comprise a multi-sensor device, such as a Kinect™-type device configured to provide depth and spatial recognition. The capture device may then capture a plurality of parameters such as image data, depth data and spatial data. Such a combination of parameters may be particularly useful for identifying the manner in which a selectable field is selected by a user, and for identifying a position of a user's hand in space to enable capturing when a field is selected.
The user display may be anything that a user can use to visualise what is to be selected, and may be in 3D space using 3D projection / holographic projection (in such an embodiment, no physical screen may be needed), through the use of Virtual Reality (VR) headsets, conventional displays and the like.
The data store may be one of many variants, including, for example, a database (preferably secure), flash storage, hard disk storage, on-chip storage, smart card chip storage etc. The computing system may not require exactly the correct code, i.e. an exact match may not be needed, although an exact match is one option. In the computing system the code to determine if access is permitted may comprise code to determine a confidence level indicative of an authenticity of the user from the code to compare the captured selection with the expected access code and the code to process the captured action data. The computing system may further comprise code responsive to the confidence level meeting a required confidence level to permit access.
Alternatively, or additionally when operating in a different mode (for example, a more secure mode), the code to determine if access is permitted may comprise code to determine if the captured selection matches the expected access code, and the code to process the captured action data may comprise code to determine if the captured action matches the expected user action. The computing system may further comprise code responsive to the captured selection matching the expected access code and the captured action matching the expected user action to permit access.
The computing system may further comprise code to present the selectable fields to a user on the user display, for example a desktop or laptop computer screen or smartphone/tablet display. In the computing system the code to present the plurality of selectable fields on the user display may comprise code to randomly arrange the plurality of selectable fields on the display, to overcome the problem of onlookers identifying the access code entered by a user by the position of their hands.
The computing system may further comprise code to assign the expected user action to the one or more selectable fields for each access code entry by a user. In other words, the expected user action may be dynamically generated (and not memorised by a user) for each access code entry event - the entry action then changes each time a user enters their access code.
The computing system may further comprise code to select an expected user action for the one or more of the selectable fields from a plurality of available actions. This may comprise for example, having two available actions, e.g. "left or right hand action" to choose from. The system may also be updated over time to incorporate additional actions. Such actions may be randomly selected.
The computing system may further comprise code to present an indicator of the expected user action for each of the one or more of the selectable fields on the user display, displaying actions required for several selectable fields at the same time, or alternatively displaying only a select one or more at a time, replacing these on the display with new actions for the next entry fields of an access code as the user types in their access code one field at a time.
The code to process the capture of the user action may further comprise code to identify if the user selected each of the one or more of the selectable fields with a left hand or right hand. The computer system may then comprise further code to compare the action to an expected left or right hand action of selecting the fields.
The code to receive a user input of a selection of one or more of the selectable fields in the computing system may comprise code to capture the input selection of the one or more of the selectable fields using the image capture device. In other words, no key entry device may be necessary and the user may select fields on a screen representation of a keypad, for example, with image processing used to identify which virtual 'key' / field was selected by a user, in addition to capturing the action of selecting.
According to a further aspect of the invention there is provided a method of controlling access using an access code, the method comprising: displaying a representation of a key entry device to a user, the representation of a key entry device comprising a plurality of selectable fields, and displaying indicators associated with each of the fields, the indicators indicating a required user action for selecting the field; capturing an access code entered by the user; monitoring the access code entry by the user to visually capture a series of actions performed by the user entering the access code; determining if the access code matches an expected access code; processing the captured series of actions to determine if the series of actions matches an expected series of actions for the expected access code; and responsive to the access code matching the expected access code and the series of actions matching the expected series of actions, permitting access.
The key entry device may be displayed on a screen, for example, with indicators identifying an action for selecting a particular field of the key entry device representation on the screen. Each field / item in a grid representation of the entry device may, for example, show an "L" or "R" indicating that this particular field should be selected with a user's left or right hand respectively. The representation may be generated in a random arrangement to avoid onlookers identifying where a user is pressing.
The access code entered by the user is captured. This may be done visually or via another means, for example via a touch-screen interface.
The method may comprise generating an expected user action for the one or more selectable fields for each access code entry by a user. In other words, prior to a user commencing entry of an access code (e.g. on detecting pressing of a 'log-on' button), the required actions may be generated, for example randomly, assigning the actions from a plurality of available actions (e.g. randomly choosing "left" or "right" hand press for each field).
According to a still further aspect of the invention there is provided an access code entry device for authenticating a user, the entry device comprising: an input device for access code entry; an image capture device arranged to capture actions of a user operating the input device; a processor and memory, the memory storing: code to receive a user input from the input device of an access code entered by the user; code to capture with the image capturing device an action of the user entering the access code on the input device; code to read an expected access code from a data store; code to determine if the access code entered by the user matches the expected access code; code to process the captured action of the user to determine if the action of the user entering the access code matches an expected user action for the access code; and code responsive to the user entered access code matching the expected access code and the captured action matching the expected user action to authenticate the user.
The method of controlling access using an access code may be further used to authenticate a user using an access code. To improve authentication and identify a particular user and mitigate against an unauthorised party becoming aware of a user's access code, a user may be further authenticated by capturing a gesture (for example a motion of the user) and/or a physical characteristic (for example body or limb geometry), as the user selects one or more of the selectable fields.
A method of controlling access using an access code controlled entry system is also described using the features of first aspect of the invention.
In any of the above aspects, one or more components of the method/system may be implemented in attack resistant hardware / security modules or trusted platform type modules, such as smart card chips, in order to prevent a third party obtaining access to parts of the system.
The invention further provides processor control code to implement the above-described methods, in particular on a data carrier such as a disk, CD- or DVD-ROM, programmed memory such as read-only memory (firmware), or on a data carrier such as an optical or electrical signal carrier. Code (and/or data) to implement embodiments of the invention may comprise source, object or executable code in a conventional programming language (interpreted or compiled) such as C, or assembly code, code for setting up or controlling an ASIC (Application Specific Integrated Circuit) or FPGA (Field Programmable Gate Array), or code for a hardware description language such as Verilog™ or VHDL (Very high speed integrated circuit Hardware Description Language). As the skilled person will appreciate, such code and/or data may be distributed between a plurality of coupled components in communication with one another.
Features from the above described aspects and embodiments of the invention may be combined in different permutations.
BRIEF DESCRIPTION OF THE DRAWINGS
For a better understanding of the invention and to show how it may be carried into effect reference shall now be made, by way of example only, to the accompanying drawings in which: Figure 1a shows an example key entry device representation for the visual locker system; Figure 1b shows the key entry device representation of Figure 1a provided on a display device; Figure 2 shows an example of the authorisation process; and Figure 3 shows an example of a system to implement the visual locker system.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
We describe a "visual locker" system that provides a means to control user access / authenticate a user to devices such as a computing device (e.g. PC, tablet, smartphone), and to authenticate a user to, for example, a cash (ATM) machine, via optical means.
Data Entry - the dynamic challenge mechanism
The invention herein described provides a method and apparatus for authenticating a user to a device via a combination of optical recognition and access code entry. The existing problems are addressed by introducing a natural "something you are" to the authentication process in combination with access code (PIN) entry (the latter providing the conventional "something you know" code) without overcomplicating the user experience. Herein, when we refer to devices, we include electronic devices such as desktop / laptop computers, tablets, smart-phones, door/safe security entry systems, cash (ATM) machines, alarm systems and the like, but it will be appreciated that the invention is not limited to only these devices, and they are listed by way of example only; similarly a PIN is only an example of "something you know".
Referring to Figure 1a, this shows an example of the keypad 10 and key grid 12 presented to a user to provide the "visual locker" entry system. In this example the access code comprises numbers and letters (alphanumeric values), but it will be appreciated that many other forms of codes may be used for access, including symbols and images, as well as multiple different alphabets.
Figure 1b shows the same keypad arrangement 12 presented on a display of a smartphone 20 incorporating a camera 16. In other embodiments, however, it will be appreciated that the camera could be a webcam for a desktop / laptop computer, a motion/positional tracking device (such as an Xbox Kinect™ type of device), a tablet computer, or a discrete device attached to a security door/safe in addition to the keypad.
The visual locker system may be provided as an integrated element in a security system, or may, depending on the platform, be an application installable on a device such as a smartphone, tablet or computer.
To authenticate/enable access, the user is presented with a dynamic keypad having the location of the entry values randomly presented/arranged for each access code attempt. In Figures 1a and 1b, a 4x4 grid layout is used, providing entry values 0-9 and A-F. These are randomly populated on the key matrix 12. Associated with each entry value is an indicator of the action a user must perform to select the particular key/virtual key (e.g. on a touch-screen). This action is captured by camera 16 to analyse the manner in which a user has pressed a key. In this example a user must select a key with either their left or right hand according to the indication on the key matrix 12 ("L" and "R" are shown to indicate a left or right hand entry requirement).
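By way of example only, the following sketch shows how such a dynamic challenge grid might be generated, assuming a 4x4 grid of values 0-9 and A-F with a left/right hand indicator per cell; the layout and helper names are assumptions rather than a definitive implementation.

```python
import secrets

# Illustrative sketch of the dynamic challenge in Figure 1a: the entry values
# 0-9 and A-F in a random 4x4 arrangement, each cell carrying a randomly
# assigned left/right hand indicator. A fresh grid is built for each attempt.

_rng = secrets.SystemRandom()   # unpredictable source for the challenge

def build_challenge_grid(rows=4, cols=4):
    values = list("0123456789ABCDEF")[: rows * cols]
    _rng.shuffle(values)                              # random key positions
    cells = [(v, _rng.choice("LR")) for v in values]  # value + required hand
    return [cells[r * cols:(r + 1) * cols] for r in range(rows)]

for row in build_challenge_grid():
    print("  ".join(f"{value}{hand}" for value, hand in row))
# e.g.  7L  0R  CL  3R ... a different arrangement on every access attempt
```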
A variable or random positioning of each entry value is used to reduce the chance of an attacker/onlooker simply observing one valid entry and then copying it. Combined with this, tracking the movement of how a user selects/presses a key (the action) increases the difficulty of an attacker guessing the authentication movement, i.e. if the physical entry has two variations then an n digit PIN has 2^n times as many entry combinations.
This means that, for example, a 4 digit PIN with left or right hand entry would have sixteen times as many combinations, equivalent to an extra PIN digit in the example of Fig. 1a/b - i.e. in authentication terms, adding movement (and movement style/characteristic) aspects can be considered as giving us a longer PIN (e.g. extra digits).
The combination of access code and manner of selection means that matching does not need to be an exact yes/no process. Matching need not be perfect, but may give enough confidence to make a decision on the authenticity of a user. Whilst a rigid combination of matching access code and the manner of selection may be used for higher security, the alternative is to trade higher security for more usability. For example, if a user gets three PIN digits correct and one wrong, then the added "digits" from the movement may still give us at least the same confidence as four correct PIN digits.
Furthermore, in some variants, the length of access code may be varied, as well as the amount/type of action or movement (and dynamic aspects) needed to authenticate a user, depending on the circumstances. For example, in public use, a mixed movement/PIN dynamic challenge may be used. At home, where a user would not be observed, the system may then rely on more static information, or more movement if that were easier for the user. A user could, for example, perform their favourite wave at the computer.
Depending on the scenario, each alphanumeric character of a user's access code may have a fixed action (e.g. "L" or "R") assigned to it for that particular user.
With a cash machine for example, a user could be identified from their cash card, meaning that the PIN code and action could be stored on the card. In such an example, the action does not need to be indicated for each key - instead the user may also be required to memorise the key pressing action for each element of the access code. For example, a conventional PIN code of "1-6-5-9" may then be adapted to "1R-6L-5R-9L", meaning that "1" must be entered with the right hand; "6" with the left hand; "5" with the right hand; then "9" with the left hand. Again this overcomes the problem of key logging malware tracking key presses, although the risk of observation makes a dynamic challenge a useful variant from a security perspective depending on the intended context of use. The action may alternatively be randomly or dynamically assigned to each entry field, as has been done in Figure 1a, meaning that a user does not need to remember the necessary action to use to select a field/element within the grid. Instead, a user follows the on screen guidance assigned to each key as shown in Figures 1a and 1b. Tracking the action by a user provides an additional security layer. The actions may also be static for an access code entry or may change each time a key is pressed (or based on elapsed time), meaning that the "L/R" (left or right hand) entry for each key may change each time an element/alphanumeric character of the access code is entered. This means that with a PIN of "1-0-6-6" for example, the first time a user presses "6", there may be a requirement to press "6" with the left hand; for the second press of "6" the indicator on the grid associated with "6" may then have changed to "R", meaning that the visual locker may then require "6" to be pressed with the right hand.
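A minimal sketch of checking an entry against such a stored per-element code (e.g. "1R-6L-5R-9L") follows; the storage format, parsing and helper names are assumptions for illustration only.

```python
# Hedged sketch: verify an entry against a stored per-digit action code such
# as "1R-6L-5R-9L" (digit "1" entered with the right hand, and so on).

def parse_expected(code="1R-6L-5R-9L"):
    """Split a stored code into (digit, hand) pairs, e.g. [('1', 'R'), ...]."""
    return [(item[:-1], item[-1]) for item in code.split("-")]

def check_entry(pressed_keys, observed_hands, expected_code="1R-6L-5R-9L"):
    """True only if every digit matches and was entered with the expected hand."""
    expected = parse_expected(expected_code)
    if len(pressed_keys) != len(expected):
        return False
    return all(key == digit and hand == required
               for (key, hand), (digit, required)
               in zip(zip(pressed_keys, observed_hands), expected))

print(check_entry(["1", "6", "5", "9"], ["R", "L", "R", "L"]))  # True
print(check_entry(["1", "6", "5", "9"], ["L", "L", "R", "L"]))  # False: wrong hand on "1"
```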
The system is useable with a variety of different entry methods, including physical key entry devices (keypads, keyboards and the like), touch-screens and non-touch activated surfaces:
* With a physical keyboard, an on screen representation of the keypad may be shown (for example a 4x4 grid/matrix), with the onscreen representation showing an assignment of each key to a particular number (the keypad itself may be blank).
* Alternatively, each key may have its own inbuilt display (or there may be a shared display), or indicator lights for example, which may light up "left" or "right" to indicate the entry action.
* A virtual keyboard may be used on a touch-screen device for example, allowing a user to interact directly with the screen, such as is shown in Figure 1b. In variants the screen may instead be a representation in 3D space, rendering images in 3D by a variety of different technologies (3D displays with a user wearing 3D glasses, for example).
* Access code entry may alternatively be captured by the camera, with no physical button key press. A keypad may be displayed on a screen, for example, and the camera then monitors where on the screen a user presses (thus the screen need not be a touch-screen) - this further eliminates any risk of key logging.
* In another variant, the grid may be presented on a computer screen and a user may use a conventional computer keyboard or keypad to enter their access code.
In the above the camera is used to capture images/video and to monitor the manner in which an access code is entered. This may include monitoring the movement of a user (e.g. identifying movement of the left or right hand entering an access code), or, as described below, a user gesture including body or limb geometry, style of movement etc. Such movement may be detected, for example, by capturing a stream of images and identifying when a key is pressed. Processing a captured image at the point of a key press allows use of the left or right hand to be identified. In more advanced systems, for example using a Microsoft™ Kinect™ type device, a user may be required to use particular fingers for key entry, and the system would instruct the user, for example, to use "left thumb, right index finger" etc.
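The following simplified sketch illustrates one way the left/right hand decision might be made at the moment of a key press, assuming a hypothetical upstream hand detector that reports image coordinates for the left and right hands in the frame where the press occurred; it is not a description of any particular image processing library.

```python
import math

# Simplified sketch of the camera-side check: given the pressed key's position
# and the detected positions of the user's left and right hands in the same
# frame, take the hand nearest the key as the hand used for the press.

def classify_pressing_hand(key_position, left_hand_position, right_hand_position):
    """Return 'L' or 'R' depending on which detected hand is nearer the key."""
    def distance(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    d_left = distance(key_position, left_hand_position)
    d_right = distance(key_position, right_hand_position)
    return "L" if d_left < d_right else "R"

# Example frame: the key at (120, 300) was pressed while the left hand was
# detected at (118, 290) and the right hand at (420, 310).
print(classify_pressing_hand((120, 300), (118, 290), (420, 310)))  # 'L'
```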
It will be appreciated that the grid (or matrix) may be of different sizes, with blank keys, be randomly distributed (a non-uniform grid) or be missing one or more keys. A common variant would be a 3x4 numeric keypad with keys showing numbers 0-9 only. "Enter" and "Cancel" keys may fill the unused spaces on the grid.
In some variants, when there is primarily a desire to avoid key-logger malware/devices and where onlookers may be unlikely, such as in a user's home, it may not be necessary to randomly vary the keyboard layout, instead relying on monitoring a user's action in combination with access code entry.
Other ways of indicating the selection action to the user are possible, not just presenting an "L"/"R" notation - a more user-friendly interface may be used, for example, with icons and hand pictures. Indicators could also be via audio or tactile feedback.
Processing and analysing
The dynamic challenge mechanism uses image processing/motion detection software to monitor the camera video stream and determine that the correct access code was entered in combination with the correct action (e.g. left/right hand). Image processing may be used to detect which key was pressed, or this may be performed in any conventional manner (using a conventional keypad for example).
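As an illustration only, resolving which key was pressed from a detected press position might look like the sketch below; the grid geometry and names are invented for the example and are not part of the described system.

```python
# Illustrative sketch: map a detected press position (in screen coordinates)
# to the key of the currently displayed grid, assuming the grid occupies a
# known rectangle of the display. Geometry values are invented assumptions.

GRID_ORIGIN = (40, 100)     # top-left corner of the grid on screen (pixels)
CELL_SIZE = (80, 80)        # width and height of one grid cell (pixels)

def key_at(press_xy, layout):
    """layout is the current 4x4 arrangement, e.g. [['7','0','C','3'], ...]."""
    col = (press_xy[0] - GRID_ORIGIN[0]) // CELL_SIZE[0]
    row = (press_xy[1] - GRID_ORIGIN[1]) // CELL_SIZE[1]
    if 0 <= row < len(layout) and 0 <= col < len(layout[0]):
        return layout[row][col]
    return None     # press fell outside the keypad area

layout = ["7 0 C 3".split(), "A 9 1 F".split(),
          "5 B 2 8".split(), "D 4 6 E".split()]
print(key_at((135, 115), layout))   # column 1, row 0 -> '0'
```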
Learning
The visual/motion detection software may also be made more sophisticated to cope with the prospect of an attacker discovering the user's access code. This may be particularly useful for situations where a user has a pre-stored set of actions (e.g. "1R-6L-5R-9L") tied to their access code, although it is still useful when dynamically generating the selection action.
When the legitimate user first enters their access code, the system may be set into a training mode. In this mode, the system not only considers whether an access code and action (e.g. entering digits of a code with a left or right hand) is correct, but may also analyse the detailed visual information and motion characteristics of the user, in other words analysing the user's gesture. Training, which can be repeated to improve confidence, captures and models the relevant characteristics of motion of a user's gesture (the "motion characteristic"). This may include, for example, monitoring user traits including body or limb geometry, style of movement etc.; in other words, not just that a button is pressed with a user's left or right hand, but also how an appendage of a user moves to achieve the pressing of a button with their left or right hand. This additional motion characteristic data provides gesture information, giving detailed motion characteristics about the user during an authentication challenge, meaning the system can now check a combination of the access code and action with further monitoring and tracking of visual user information to check the action is applied in the correct way. Such information can then be used to compute a confidence level that the registered/correct user is in fact present and entering the access code. This means that even if an unauthorised party becomes aware of the access code, irrespective of whether the action(s) associated with the access code are randomly generated or fixed, access by an unauthorised user may still be prevented.
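Purely by way of illustration, the sketch below models a trained motion characteristic as per-feature means and standard deviations gathered during training and scores a new entry by its distance from that model; the features, scoring rule and thresholds are assumptions, not the actual algorithm used by the system.

```python
from statistics import mean, stdev

# Rough sketch of the training idea: summarise a user's motion characteristic
# as per-feature means and standard deviations from registration samples, then
# score a new entry by how far its features fall from that model.

def train_motion_model(training_samples):
    """training_samples: list of feature vectors captured during registration."""
    features = list(zip(*training_samples))
    return [(mean(f), stdev(f) or 1e-6) for f in features]

def motion_confidence(sample, model, tolerance=3.0):
    """Map the average z-score of the sample to a 0..1 confidence value."""
    z_scores = [abs(value - mu) / sigma for value, (mu, sigma) in zip(sample, model)]
    avg_z = sum(z_scores) / len(z_scores)
    return max(0.0, 1.0 - avg_z / tolerance)

# Hypothetical features: (movement duration in s, peak hand speed, approach angle)
model = train_motion_model([(0.42, 1.1, 30.0), (0.45, 1.0, 28.0), (0.40, 1.2, 31.0)])
print(motion_confidence((0.43, 1.1, 29.5), model))   # close to 1.0 for the trained user
print(motion_confidence((0.90, 2.5, 75.0), model))   # much lower for a different person
```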
Visual Locker Dashboard
Referring now to Figure 2, this shows an example representation 30 of how the visual locker may authenticate/approve access. This may work invisibly/automatically (i.e. a "behind the scenes" representation of the criteria to be assessed for authentication), only prompting the user when authentication is needed. Alternatively the representation may be displayed to a user as part of the access code entry process and/or during any training/registration phase.
Notifiers (32,34,36,38,40,42,44), shown as traffic light lamps, show which components of the authentication are correct. Green indicates true/pass, red indicates false/fail and amber indicates uncertain.
* The "User Present" notifier 32 detects that any person is in front of camera, which may be used to activate the authentication/access control process. Indicator 34 turns green when a person is detected.
* The "Correct Password" notifier 34 indicates that the correct password/access code has been entered (e.g. 1-6-5-9). Indicator 34 turns green when the correct access code is entered.
* If the correct password has been entered, "Motion Response" notifier 36 indicates if the correct action (i.e. correct hand used to enter a value/character from the access code) was used.
* If the system also monitors more detailed characteristics including gestures or physical traits, the "Motion Characteristics" indicator 38 is used to identify if the visual data and movement were consistent with the user's gesture/physical traits.
* "Lock state" indicator 40 changes from closed to open if all authentication requirements are met, i.e. with all lights to left green. The visual locker then deems the user authenticated and may allow access. For a computer, tablet or smart phone the effect is then to hide or close the lock screen and return the user to the operating system desktop / application) * "Just Left" notifier 42 means a person has moved away and it triggers a count down timeout 46. This lamp is temporal and will go green then amber then red at timeout.
* "Just Back" indicator 44 means a person has returned. If Timeout has not expired it is reset; lamp is temporal.
* If "Timeout" 46 expires before a person returns then the visual locker returns to a locked state and all lamps to the left of lock state indicator 40 go red, meaning that authentication is again required. A Timeout of 0' triggers a locked state as soon as person departs.
* The "Virtual PIN" button 47 triggers a keypad challenge. This may be used to provide more training data for the detection algorithm, allowing a user's gestures / characteristic motion /physical traits to be further monitored when they complete their unlock sequence. This may also be used to allow a user to change their access code and any fixed action associated with the access code (if dynamic actions are not used). Indicator 46 may be used to provide a guidance of one or more measures, including pin/action strength (e.g. if may be preferable to avoid access code /action combinations of "1-1-1-1, L-L-L-L') and may also be used to indicate the level of learning / confidence in identifying a users characteristic motion.
* Exit" button 49 closes the application, which may put the device back into a sleep mode, blank the screen, or return back to a default state.
System structure
Referring now to Figure 3, this shows an example of the components forming a user authentication system. In this example, the system provides a 2D Visual Locker system with a 2D keypad representation. Other examples using 3D and/or holographic projection are also possible variants.
* A user 51 sits in front of a computer 53, with a camera 52 positioned above the screen 54, and wishes to log in to their computer. The user may also be presented with a user ID field so that they can enter their username. A web-cam 52 positioned on top of the computer (or integrated into the bezel of the screen for example) is used as the image capture device.
* To start a dynamic challenge the challenge generator 58 creates a keypad grid, based on the access code size, the number of possible motions (e.g. left/right hand) and a random number, and randomly assigns actions to each item field in the grid. The random generator 60 represents a source of a random number, used to ensure that any challenges (grids and L/R request combinations) created by the Challenge Generator 58 are not correlated or predictable. Such a random number is referred to as a "nonce" in information security terms. A good quality random number generator will be based on a physical phenomenon such as thermal noise and comply with information security best practice such as specified in "NIST SP 800-90 Recommendation for Random Number Generation, March 2007".
* The grid 12 (as shown in Figure 1a for example) is then displayed on the screen 54.
* The user 51 enters their access code on the computer keyboard, ensuring that each key is pressed with the appropriate action (e.g. using their left or right hand) according to the onscreen indicators as depicted in Figure 1a. In a first variant the user may touch the screen to enter the PIN code, the camera processing the captured images to determine where a user has touched the screen. In a second variant a touch-screen may be used.
* The user response is captured via the camera 52 and processed by the image analyser 56. The image analyser is configured to identify the acceptable actions for data entry (e.g. Left/Right hand press) and optionally also the user's gesture or physical traits. The Visual/Motion Store 62 may store data, for example training data and data relating to one or more users, allowing user actions to be identified.
Actions or movements may form part of an action/movement alphabet stored in the visual/motion store 62. Multiple movement alphabets may also be used.
* The analyser 56 sends a stream of data (e.g. grid co-ordinates) to the PIN Checker 74 to check the entered access code using the PIN store 76.
* The analyser sends a stream of actions (e.g. L,R,L,R) to the Motion Response checker 72 that, with knowledge of the challenge, and the checked access code from the PIN checker 74, can check the validity of the action sequence used when entering the access code.
* The analyser 56 sends data to the Visual/Motion Checker 68 that, with knowledge of learnt user characteristics from the Visual/Motion Store 62, can compute a confidence that the registered user is present.
* The outputs from the checkers 74, 72, 68 are combined by the results combiner 70 and, depending on the policy control 66 adopted, a decision is made and appropriate output actions 78 triggered. (The policy control module 66 is connected to other modules in the system, but connections are omitted for clarity.) The policy control 66 includes parameters that are defined/controlled by the policy owner. This could depend on the circumstance/deployment, and could be the user, the provider of the authentication system or the accessed service provider, or some combination of these and other parties. The policy may be considered as a weighted combination of the outputs from 68, 72 and 74 and the combinations and thresholds required to trigger the available action set; a minimal illustrative sketch of such a weighted combination is given below, after the component description. For example a policy might require the correct access code/PIN and action, but not need the correct personalised motion/gesture. Another example might need the correct motion and motion check, but not care about the PIN itself. Another example could tolerate imperfect results from the inputs, yet taken in combination acquire sufficient confidence to trigger an action.
* In learning mode, outputs from the Image Analyser 56 are further processed by the Learning handler 64. If the result from the results combiner 70 indicates success, the contents of the Visual/Motion Store 62 are updated by the learning handler 64.
It will be appreciated that for a completely new user, the visual/motion checker 68 may not yet have any data in the visual/motion store relating to user gestures / characteristic motion, meaning that only the motion response checker 72 and PIN checker 74 may need to be considered.
The above components may be implemented by one device, or features may be provided on different communicating devices. For example a multi-sensor device may perform image capture, monitor user selections, and also process this data, delivering it to the computer with a "yes/no" authentication.
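As an illustration of the weighted-combination policy referred to above, the following sketch combines scores from the PIN checker 74, motion response checker 72 and visual/motion checker 68 under two example policies; the weights and thresholds are invented for illustration and are not prescribed values.

```python
# Illustrative sketch of the results combiner 70 / policy control 66: a policy
# is treated as a weighted combination of the outputs of the PIN checker 74,
# motion response checker 72 and visual/motion checker 68, with a threshold
# that triggers the unlock action. All numbers are assumptions.

PUBLIC_POLICY = {"pin": 0.4, "motion_response": 0.4, "visual_motion": 0.2, "threshold": 0.95}
HOME_POLICY   = {"pin": 0.7, "motion_response": 0.0, "visual_motion": 0.3, "threshold": 0.8}

def combine_results(pin_score, motion_response_score, visual_motion_score, policy):
    """Each score is in 0..1 (1.0 = perfect match / full confidence)."""
    total = (policy["pin"] * pin_score
             + policy["motion_response"] * motion_response_score
             + policy["visual_motion"] * visual_motion_score)
    return total >= policy["threshold"]

# Correct PIN and actions but weak gesture match: passes the home policy only.
print(combine_results(1.0, 1.0, 0.5, PUBLIC_POLICY))  # False
print(combine_results(1.0, 1.0, 0.5, HOME_POLICY))    # True
```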
No doubt many other effective alternatives will occur to the skilled person. It will be understood that the invention is not limited to the described embodiments and encompasses modifications apparent to those skilled in the art lying within the spirit and scope of the claims appended hereto.
Throughout the description and claims of this specification, the words "comprise" and "contain" and variations of the words, for example "comprising" and "comprises", mean "including but not limited to", and are not intended to (and do not) exclude other moieties, additives, components, integers or steps.
Throughout the description and claims, the singular encompasses the plural (for example a processor may mean multiple processors) unless the context otherwise requires. In particular, where the indefinite article is used, the specification is to be understood as contemplating plurality as well as singularity, unless the context requires otherwise.
Features, integers, characteristics or groups described in conjunction with a particular aspect, embodiment or example, of the invention are to be understood to be applicable to any other aspect, embodiment or example described herein unless incompatible therewith.

Claims (40)

CLAIMS:
1. A method of controlling access using an access code, the method comprising: presenting a plurality of selectable fields to a user; capturing a selection of one or more of said selectable fields selected by a user; capturing an action of said user selecting said one or more of said selectable fields, said action defining a manner in which said user selected said one or more of said selectable fields; reading an expected access code; comparing said captured selection of said one or more of said selectable fields with said expected access code; processing said captured action of said user to compare said action of said user selecting said one or more of said selectable fields with an expected user action for said selected one or more of said selectable fields; and responsive to said comparing said captured selection with said expected access code and said comparing said captured action with said expected user action, determining if access is permitted.
2. A method as claimed in claim 1, wherein said determining if access is permitted comprises determining a confidence level indicative of an authenticity of said user from said comparing of said captured selection with said expected access code and said comparing of said captured action with said expected user action, and responsive to said confidence level meeting a required confidence level, permitting access.
3. A method as claimed in claim 1 or 2, wherein said determining if access is permitted comprises determining if said captured selection matches said expected access code and said captured action matches said expected user action, and responsive to said captured selection matching said expected access code and said captured action matching said expected user action, permitting access.
4. A method as claimed in claim 1, 2 or 3, wherein said expected user action for said one or more selectable fields is generated for each access code entry.
5. A method as claimed in any preceding claim, wherein said plurality of selectable fields are arrangeable randomly.
6. A method as claimed in any preceding claim, wherein each of said one or more of said selectable fields has an expected user action associated with said respective selectable field.
7. A method as claimed in any preceding claim, further comprising selecting said expected user action for said one or more of said selectable fields from a plurality of available actions.
8. A method as claimed in claim 7, wherein said expected user action for said one or more of said selectable fields is randomly selected for each of said one or more of said selectable fields.
9. A method as claimed in any preceding claim, further comprising presenting an indicator of said expected user action for said one or more of said selectable fields to said user.
10. A method as claimed in any preceding claim, wherein said capturing an action further comprises capturing a sequence of actions for each element of said user access code, and wherein said processing said captured action comprises processing said sequence of actions to compare said sequence of actions of said user selecting said one or more of said selectable fields with an expected sequence of user actions for said selected one or more of said selectable fields.
11. A method as claimed in any preceding claim, wherein said processing said capture of said user action comprises identifying if said user selected each of said one or more of said selectable fields with a left hand or right hand, and wherein said comparing said user action of selecting said one or more of said selectable fields with an expected user action comprises determining for each said selected fields if said field was selected with an expected left hand or right hand.
12. A method as claimed in claim 1, further comprising reading one or more of said expected access code and said expected user action from a data store.
13. A method as claimed in any preceding claim, wherein said capturing an action further comprises capturing a gesture of said user selecting said one or more of said selectable fields.
14. A method as claimed in claim 13, wherein said gesture comprises a motion of said user.
15. A method as claimed in any preceding claim, wherein said capturing an action further comprises capturing a physical characteristic of said user.
16. A method as claimed in claim 15, wherein said physical characteristic comprises one or more of body geometry and limb geometry.
17. A method as claimed in any preceding claim, wherein said capturing said selection of one or more of said selectable fields comprises visually capturing said selection.
18. A method as claimed in any preceding claim, wherein said capturing said selection of one or more of said selectable fields comprises reading a key press of a key or a touch-screen display.
19. A method as claimed in any one of claims 1 to 17, wherein said capturing said selection of one or more of said selectable fields comprises identifying a position of a selection in a 3-dimensional space.
20. A method as claimed in any preceding claim, further comprising learning said physical characteristic of said user.
21. A carrier carrying computer program code configured to, when running, implement the method of any one of claims 1 to 20.
  22. 22. An access code controlled computing system, the computing system comprising: an image capture device, a user display, a processor and memory, said memory storing: code to receive a user input of a selection of one or more of said selectablefields selected by a user;code to capture, with said image capturing device, an action of said user selecting said one or more of said selectable fields, said action defining a manner in which said user selects said one or more of said selectable fields; code to read an expected access code from a data store; code to compare said captured selection of said one or more of said selectablefields with said expected access code;code to process said captured action of said user to compare said action of said user selecting said one or more of said selectable fields with an expected user action for said selected one or more of said selectable fields; and code responsive to said comparing said captured selection with said expected access code and said comparing said captured action with said expected user action to determine if access is permitted.
  23. 23. A computing system as claimed in claim 22, wherein said code to determine if access is permitted comprises code to determine a confidence level indicative of an authenticity of said user from said code to compare said captured selection with said expected access code and said code to process said captured action data, and further comprising code responsive to said confidence level meeting a required confidence level to permit access.
  24. 24. A computing system as claimed in claim 22 or 23, wherein said code to determine if access is permitted comprises code to determine if said captured selection matches said expected access code and said code to process said captured action data comprises code to determine if said captured action matches said expected user action, and further comprising code responsive to said captured selection matching said expected access code and said captured action matching said expected user action to permit access.
  25. 25. A computing system as claimed in claim 22, 23, or 24, further comprising code to present said selectable fields to a user on said user display.
26. A computing system as claimed in any one of claims 22 to 25, further comprising code to assign said expected user action to said one or more selectable fields for each access code entry by a user.
27. A computing system as claimed in any one of claims 22 to 26, wherein said code to present said plurality of selectable fields on said user display comprises code to randomly arrange said plurality of selectable fields.
28. A computing system as claimed in any one of claims 22 to 27, further comprising code to select an expected user action for said one or more of said selectable fields from a plurality of available actions.
29. A computing system as claimed in claim 28, wherein said code to select an expected user action comprises code to randomly select an expected user action from said plurality of available actions.
30. A computing system as claimed in any one of claims 22 to 29, further comprising code to present an indicator of said expected user action for each of said one or more of said selectable fields on said user display.
31. A computing system as claimed in any one of claims 22 to 30, wherein said code to process said captured action of said user comprises code to identify if said user selected each of said one or more of said selectable fields with a left hand or right hand, and wherein said code to compare said user action of selecting said one or more of said selectable fields with an expected user action comprises code to determine for each of said selected fields if said field was selected with an expected left hand or right hand.
32. A computing system as claimed in any one of claims 22 to 31, wherein said code to receive a user input of a selection of one or more of said selectable fields selected by a user comprises code to capture said input selection of said one or more of said selectable fields using said image capture device.
33. A computing device as claimed in any one of claims 22 to 32, wherein said computing device is a mobile phone, smartphone or tablet computer.
34. A computing device as claimed in any one of claims 22 to 33, wherein said computing device is a desktop or laptop computer.
35. A computing device as claimed in any one of claims 22 to 34, wherein said image capture device comprises a multi-sensor device configured to capture a plurality of parameters, in particular image data and depth data.
36. A method of controlling access using an access code, the method comprising: displaying a representation of a key entry device to a user, said representation of a key entry device comprising a plurality of selectable fields, and displaying indicators associated with each of said fields, said indicators indicating a required user action for selecting said field; capturing an access code entered by said user; monitoring said access code entry by said user to visually capture a series of actions performed by said user entering said access code; determining if said access code matches an expected access code; processing said captured series of actions to determine if said series of actions matches an expected series of actions for said expected access code; and responsive to said access code matching said expected access code and said series of actions matching said expected series of actions, permitting access.
37. A method as claimed in claim 36, wherein said expected user action for said one or more selectable fields is generated for each access code entry by a user.
38. A method as claimed in claim 36 or 37, wherein said displaying said representation of said key entry device comprises randomly positioning each of said selectable fields on said representation of said key entry device.
39. A method as claimed in any one of claims 36 to 38, wherein said access code is visually captured.
40. An access code entry device for authenticating a user, the entry device comprising: an input device for access code entry; an image capture device arranged to capture actions of a user operating said input device; a processor and memory, said memory storing: code to receive a user input from said input device of an access code entered by said user; code to capture with said image capture device an action of said user entering said access code on said input device; code to read an expected access code from a data store; code to determine if said access code entered by said user matches said expected access code; code to process said captured action of said user to determine if said action of said user entering said access code matches an expected user action for said access code; and code responsive to said user entered access code matching said expected access code and said captured action matching said expected user action to authenticate said user.
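
By way of a minimal, illustrative Python sketch (not part of the claims; all names below are assumptions made for the example), the randomised key-entry display of claims 27 to 30 and 38 can be pictured as follows: the selectable fields are randomly arranged and each field is assigned a randomly selected expected user action, here drawn from a two-action set of left hand (L) and right hand (R), matching the indicators of claim 30.

    import random

    ACTIONS = ["L", "R"]  # assumed action set: left-hand or right-hand selection

    def generate_keypad(fields="0123456789"):
        # Randomly arrange the plurality of selectable fields (claims 27, 38)
        layout = random.sample(list(fields), k=len(fields))
        # Randomly select an expected user action for each field (claims 28, 29)
        expected_actions = {field: random.choice(ACTIONS) for field in layout}
        return layout, expected_actions

    layout, expected_actions = generate_keypad()
    # The display would show each field with its indicator (claim 30),
    # e.g. "7 (L)" meaning field 7 is to be selected with the left hand
    print("  ".join(f"{f} ({expected_actions[f]})" for f in layout))

Because both the layout and the per-field actions can be regenerated for each access code entry (claims 26 and 37), an observer who records one entry learns neither the key positions nor the action sequence required at the next entry.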
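
A similar illustrative sketch of the comparison and decision logic of claims 22 to 24 and 36, assuming a separate image-processing stage has already classified the action used for each key press (for example which hand was used); treating the fraction of presses performed with the expected action as the confidence level of claim 23 is an assumption for this sketch, not a requirement of the claims.

    def authenticate(entered_code, captured_actions,
                     expected_code, expected_actions,
                     required_confidence=1.0):
        # Compare the captured selection with the expected access code
        code_matches = (entered_code == expected_code)
        if not code_matches or len(captured_actions) != len(expected_actions):
            return False
        # Compare each captured action with the expected user action and
        # derive a confidence level indicative of the user's authenticity
        hits = sum(1 for got, want in zip(captured_actions, expected_actions)
                   if got == want)
        confidence = hits / len(expected_actions)
        # Permit access only if the required confidence level is met
        return confidence >= required_confidence

    # Example: PIN 2468 entered with the indicated hands
    print(authenticate("2468", ["L", "R", "R", "L"],
                       "2468", ["L", "R", "R", "L"]))   # True
    print(authenticate("2468", ["L", "L", "R", "L"],
                       "2468", ["L", "R", "R", "L"]))   # False at confidence 1.0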
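
Claims 31 and 35 leave open how the left-hand/right-hand classification is derived from the captured image (and optionally depth) data. The deliberately naive heuristic sketched below, assumed purely for illustration, classifies a press by where the detected fingertip lies relative to the user's torso centre in a mirrored front-camera view; a practical system would instead rely on body or limb geometry as contemplated by claims 15 and 16.

    def classify_hand(fingertip_x, torso_centre_x, mirrored=True):
        # In a mirrored (front-camera) view, a fingertip appearing to the
        # left of the torso centre usually belongs to the user's right hand
        left_of_centre = fingertip_x < torso_centre_x
        if mirrored:
            return "R" if left_of_centre else "L"
        return "L" if left_of_centre else "R"

    # A key press detected at x = 120 with the torso centred at x = 320
    print(classify_hand(120, 320))  # "R"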
GB201207126A 2012-04-24 2012-04-24 Controlling access according to both access code and user's action in entering the code Withdrawn GB2503417A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB201207126A GB2503417A (en) 2012-04-24 2012-04-24 Controlling access according to both access code and user's action in entering the code

Publications (2)

Publication Number Publication Date
GB201207126D0 GB201207126D0 (en) 2012-06-06
GB2503417A true GB2503417A (en) 2014-01-01

Family

ID=46261750

Family Applications (1)

Application Number Title Priority Date Filing Date
GB201207126A Withdrawn GB2503417A (en) 2012-04-24 2012-04-24 Controlling access according to both access code and user's action in entering the code

Country Status (1)

Country Link
GB (1) GB2503417A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040111646A1 (en) * 2002-12-10 2004-06-10 International Business Machines Corporation Password that associates screen position information with sequentially entered characters
WO2008008473A2 (en) * 2006-07-11 2008-01-17 Agent Science Technologies, Inc. Behaviormetrics application system for electronic transaction authorization
US20090102603A1 (en) * 2007-10-19 2009-04-23 Fein Gene S Method and apparatus for providing authentication with a user interface system
US20100180336A1 (en) * 2009-01-13 2010-07-15 Nolan Jones System and Method for Authenticating a User Using a Graphical Password
US20100333198A1 (en) * 2008-03-04 2010-12-30 Kana Mikake Authentication method and input device
AU2011202415B1 (en) * 2011-05-24 2012-04-12 Microsoft Technology Licensing, Llc Picture gesture authentication

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016036294A1 (en) * 2014-09-05 2016-03-10 Telefonaktiebolaget L M Ericsson (Publ) Device and method for authenticating a user
CN106605395A (en) * 2014-09-05 2017-04-26 瑞典爱立信有限公司 Device and method for authenticating a user

Similar Documents

Publication Publication Date Title
US9817964B2 (en) Methods and apparatus to facilitate secure screen input
KR101132368B1 (en) System for safely inputting password using shift value of password input and method thereof
US20120110663A1 (en) Apparatus and method for inputting user password
CN109891418A (en) Method for protecting the transaction executed from non-security terminal
CN103034798B (en) A kind of generation method and device of random cipher
KR20160149187A (en) Password verifying device and method
US8869261B1 (en) Securing access to touch-screen devices
GB2502773A (en) User authentication by inputting code on a randomly generated display
US20130104227A1 (en) Advanced authentication technology for computing devices
CN108027854A (en) Multi-user's strong authentication token
US9710627B2 (en) Computer implemented security method and system
KR20150084678A (en) Method of inputting confidential data on a terminal
Chabbi et al. Dynamic array PIN: A novel approach to secure NFC electronic payment between ATM and smartphone
US11132432B2 (en) Tactile challenge-response testing for electronic devices
EP3189642A1 (en) Device and method for authenticating a user
US11423183B2 (en) Thermal imaging protection
GB2503417A (en) Controlling access according to both access code and user's action in entering the code
Adithya et al. Security enhancement in automated teller machine
CN109891821A (en) Method for executing sensitive operation with using non-security terminal security
KR20180048423A (en) Method for securing a transaction performed from a non-secure terminal
KR20170002169A (en) User authentication system using user authentication pattern and method for defining user authentication pattern
Ling et al. You cannot sense my pins: A side-channel attack deterrent solution based on haptic feedback on touch-enabled devices
KR20090043091A (en) Method and apparatus for user certification
TWI550431B (en) Authority management device
CN114730336A (en) Improved system and method for secure data entry and authentication

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)