GB2421093A - Trusted user interface - Google Patents

Trusted user interface

Info

Publication number
GB2421093A
GB2421093A (application GB0426818A)
Authority
GB
United Kingdom
Prior art keywords
user
indicator
trusted
private
computing device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB0426818A
Other versions
GB0426818D0 (en)
Inventor
Tim Band
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Symbian Software Ltd
Original Assignee
Symbian Software Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Symbian Software Ltd filed Critical Symbian Software Ltd
Priority to GB0426818A priority Critical patent/GB2421093A/en
Publication of GB0426818D0 publication Critical patent/GB0426818D0/en
Publication of GB2421093A publication Critical patent/GB2421093A/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/70 Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
    • G06F21/82 Protecting input, output or interconnection devices

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A trusted user interface for operating an interactive computing device with secure storage, which is accessible only to trusted system components, in which a personalised indicator is requested from a user, which is then saved in secure storage. Thereafter, when the trusted system components take control of the screen or other output mechanism, the displaying or rendering of this personalised indicator to the user serves as an indication that the user can trust the currently running application.

Description

User Interface for a Computing Device

The present invention relates to a user interface (UI) for a computing device, and in particular to a trusted user interface for an interactive computing device which requires no dedicated hardware: the user interface forms an element of a platform security architecture for the computing device. The present invention also provides a method of implementing a trusted UI for an interactive computing device which requires no dedicated hardware.
In the context of the present invention, the term interactive computing device should be construed to include any device which includes both a screen or other output mechanism for displaying information and a keyboard, keypad, button array, touchscreen or any other mechanism for user input. Hence, the term interactive computing device is intended to include devices such as desktop computers, laptop computers, Personal Digital Assistants (PDAs), mobile telephones, smartphones, digital cameras for still and video photography, and digital music players, as well as many other industrial and domestic devices with such output display and user input means, ranging from cash machines (ATMs) to transport devices with interactive displays, such as motor vehicles, trains, boats, and aircraft of all kinds.
Interactive computing devices are commonly used to store sensitive personal data. However, where such devices are programmable, for example by including a capability to be loaded with additional software, there is a risk that mischievous or malicious software, commonly known as malware, could inadvertently be loaded onto the device, putting the integrity and confidentiality of that sensitive data at risk.
Where interactive computing devices are connected to wider computing networks, this risk becomes much greater, as it is well-known that such networks (which include, but are not limited to, the internet) are one of the main methods by which viruses, trojans, worms, and other malware can propagate from device to device. As well as causing increased scope for infection, connected devices also offer increased scope for the theft of personal data, and such theft can be conducted in such a way that it could well be undetectable by the device user.
Additionally, the fact that network connected devices can be used to conduct financial transactions which depend on the confidentiality of personal data used for authentication (such as PIN numbers) makes the possibility of theft of such data extremely serious, because a person who illegitimately acquires this otherwise secure personal data could thereby gain access to the bank and credit card accounts of the device user.
The threat to secure personal data such as passwords and PIN numbers does not primarily arise from the actual theft of such data when stored on the device. Such data can be acquired by a third party when it is input into the interactive computing device by the user, because the user, at the time of entering the secure data, erroneously believes that he/she is operating in a secure environment; in fact this is not the case. This type of unauthorised acquisition is quite common and is usually effected by malware that the user has unknowingly loaded onto the device. Among the main methods used by malware to obtain such information are so-called 'spoofing' and 'phishing' attacks. In these types of attack, the appearance of an application or internet web site that is well-known, and thereby trusted by the user, is replicated on the device user interface. The user is thereby tricked into divulging authentication information to the spoof application or interface, which can be used subsequently by a rogue person seeking to steal the user's identity and masquerade as the user. It is now reported that such fraudulent use of what should be secure and very private personal information is costing the banking institutions in excess of a billion pounds per year. Hence, such unauthorised acquisition of secure personal data is a very serious problem.
A key weapon in defending against such attacks is for an interactive computing device to possess a trusted user interface: this is a user interface which offers a virtual guarantee to the user that the application being run is exactly what it claims to be, and that any data which the user enters or requests will not be made available to any untrusted application.
The security threats which a trusted user interface guards against include those where a rogue application either: 1. attempts to mimic the interface of a genuine application in an attempt to fool a user into divulging sensitive personal data; or 2. attempts to capture sensitive personal data from either the screen of a device (by inspecting the areas of memory associated with the display) or from user data input (e.g. by capturing user key selections).
It is known that a general-purpose interactive computing device can be adapted to permit the provision of a trusted UI in software. Patent application GB 0312202.5, entitled "Trusted User Interface For A Secure Mobile Wireless Device", discloses a mobile wireless device programmed with software which provides a trusted user interface for the device by allowing the content of a secure screen memory to be modifiable only by authorised applications.
Normally, the entire screen memory address is public information within the computing device, making the entire screen memory fully available to any application on the device. Thus, even sensitive dialogs use screen memory which can in theory be accessed by malicious software, enabling that malicious code to grab PIN data etc., or corrupt a user interface which is trusted by a user. But, with the trusted user interface described in GB 0312202.5, the screen memory is partitioned into secure and non-secure frame buffers. Unauthorised applications are prevented from accessing the data displayed from the secure frame buffer because such applications are able to access only non-secure screen buffer memory. Hence, malicious applications, which are not authorised, cannot retrieve data from a trusted dialog and thus cannot compromise any data forming part of that trusted dialog. Furthermore, because the trusted user interface may be implemented through software, it can require no new hardware per se on the device - the only requirement is that components (e.g. a software window server; a video chip etc.) can access content selectively from different parts of screen memory: i.e. the secure and non-secure frame buffers.
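The partitioning described above can be sketched as follows. This is an illustrative model only: the class and method names are invented for the example and are not part of any real window-server API.

```python
class FrameBufferManager:
    """Illustrative model of screen memory partitioned into secure and
    non-secure frame buffers (names invented; not the Symbian API)."""

    def __init__(self, size=640 * 480):
        self.non_secure = bytearray(size)  # address known to all applications
        self._secure = bytearray(size)     # reachable via trusted components only

    def buffer_for(self, trusted: bool) -> bytearray:
        # Untrusted code is only ever handed the non-secure buffer, so it can
        # neither read a PIN dialog rendered into the secure buffer nor
        # overwrite it to fake a trusted dialog.
        return self._secure if trusted else self.non_secure
```

In such a scheme, the window server would render trusted dialogs only into the buffer returned for trusted callers, while ordinary applications continue to draw into the non-secure buffer.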
GB 0312202.5 describes two embodiments for confirming to a user that a trusted UI is active:
* Trusted hardware indicator: a 'trusted' LED which is illuminated when a trusted UI interaction occurs. This indicator can be accessed through a device driver dedicated to the window server. The window server switches this LED on when it receives a genuine request to display a trusted dialog within a trusted UI session. At the end of this trusted session, the window server switches this LED off.
* Trusted software indicator: a particular symbol or logo shown on trusted dialogs in a specific part of the screen not accessible to third parties. The trusted software indicator requires that access to video RAM by general user-mode code be denied for the specific part of the screen used to display it.
Other operating systems, such as Linux, make use of the concept of a Secure Attention Key. This is a specific key or key combination which is reserved for starting a trusted UI session. This dedicated key or key combination is protected by means of special hardware, with the result that malicious applications are not able to detect the activation of this key or key combination.
Therefore, a user can have confidence that a session initiated in this way is genuine.
Although the solutions disclosed above implement a trusted UI in software, they all require, either directly or indirectly, dedicated visual or input hardware.
GB 0312202.5 specifically discloses that "In order to protect the user against fake trusted dialog attacks, a LED or reserved screen space must be used." Clearly, there are drawbacks to the solutions described above, including: * The inclusion of a dedicated LED increases device manufacturing costs. Furthermore, if the LED fails or is damaged, verification of the trusted UI also fails.
* Reserving a specific area of the device display screen inevitably reduces the area of the screen available for all other applications.
* Requiring the user to press a special button to continue a transaction is an awkward intrusion into the interaction between the software and the user. Additionally, the requirement for a dedicated key has both manufacturing and design costs, especially on smaller computing devices such as mobile telephones, where the number of user controls needs to be kept to a minimum.
There is, therefore, no currently available method of providing an indicator of a trusted UI entirely in software without the need to provide some form of dedicated hardware for this task.
It is therefore an object of the present invention to provide a method of implementing a Trusted UI which requires no dedicated visual hardware.
According to a first aspect of the present invention there is provided a method of operating a computing device comprising a. an input mechanism; b. an output mechanism; and c. storage means which can be secured and made accessible only to trusted system components; the method comprising providing one or more private indicators for storage in the storage means; causing the trusted system components to secure the output and input mechanisms; and displaying at least one of the private indicators for indicating that an application or service running on the device can be trusted with sensitive or private data.
According to a second aspect of the present invention there is provided a computing device capable of operating in accordance with a method of the first aspect.
According to a third aspect of the present invention there is provided an operating system for causing a computing device to operate in accordance with a method according to the first aspect.
An embodiment of the present invention will now be described by way of further example only.
The present invention is predicated on the basis that appropriate software security measures are known to be in place elsewhere in the system on which the trusted UI is to be implemented. A preferred implementation of the present invention utilises the platform security model disclosed in GB patent application 0312191.0, entitled "Secure Mobile Wireless Device", and GB patent application 0312190.2, entitled "Mobile Wireless Device With Protected File System".
The platform security model described in these applications was originally developed for the Symbian OS™ operating system available from Symbian Software Ltd, of London, United Kingdom. The aim of this model is to defend the resources and data belonging to the operating system and its users against illegal, malicious or badly implemented programs. It achieves this by restricting access rights to sensitive system resources by means of capabilities granted to all items of executable code; this aspect of the platform security model is described in GB 0312191.0. GB 0312190.2 describes how data storage on the system is partitioned.
Some parts of the file system are open to all applications, but each executable process has a private area which it, and it alone, is able to access. Other portions of the file system cannot be accessed by any user applications at all, being reserved for the Trusted Computing Base (TCB): a small number of components with powerful capabilities that can be relied on not to be subverted, and which therefore serve to guarantee the integrity of the device.
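A minimal sketch of such a partitioned ("caged") file system might look like the following. The path layout, class names and the single `TCB` capability are invented for illustration and simplify the capability model described above.

```python
class AccessDenied(Exception):
    """Raised when a process lacks the rights to touch a path."""


class Process:
    def __init__(self, name, capabilities=()):
        self.name = name
        self.capabilities = set(capabilities)


class CagedFileSystem:
    """Open area readable by all; /private/<name> only by its owning
    process; /tcb only by processes holding the TCB capability."""

    def __init__(self):
        self._files = {}

    def write(self, proc, path, data):
        self._check(proc, path)
        self._files[path] = data

    def read(self, proc, path):
        self._check(proc, path)
        return self._files[path]

    def _check(self, proc, path):
        if "TCB" in proc.capabilities:
            return  # the Trusted Computing Base may access everything
        if path.startswith("/tcb/"):
            raise AccessDenied(path)
        if path.startswith("/private/"):
            owner = path.split("/")[2]
            if owner != proc.name:
                raise AccessDenied(path)
```

Under this model, an indicator stored under `/tcb/` is simply unreadable by any ordinary application, however the application was installed.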
The perception behind the present invention is that a combination of restricted access to the system resources required for input and output (such as keypad and screen, or any other methods, which should be construed to cover the input and output hardware together with any memory areas or buffers associated with them) with guaranteed secure data storage (e.g. a protected file system) enables a trusted UI to be implemented without the necessity for any dedicated visual hardware on the device.
The mechanisms for implementing such a trusted UI are as follows: At some known and secure point, the interactive computing device requests that the user choose (and preferably personalise) a private indicator which will be used to indicate the presence of a trusted UI.
* A suitable known and secure point would be during first boot sequence, when no third party user application has been placed on the device and when none of its possible communication links have yet been activated.
* Though the exact nature of this indicator will vary from device to device, a graphical indicator is to be preferred because the relative complexity of such indicators renders them inherently more secure against duplication.
* However, if the device is not capable of displaying graphics, a textual indicator (such as a passphrase or a unique number) could alternatively be used.
Whatever indicator is used, it should preferably be personalised to the individual user. Examples of how such personalisation can be achieved include: * A device with a graphics editing capability could allow the user to sign or deface a default image.
* A device capable of displaying graphics but with no editing capability could allow the user to choose one graphic image from a sufficiently numerous permutation or combination of images.
* A device either including a digital camera, or able to link to one, could allow the user to generate or submit a completely unique image of the user's personal choice.
* A text-only device could request the user to input a unique message in the same way that text-based cryptography requires a unique passphrase to be chosen.
Once chosen by the user, this personalised indicator is stored in a secure location only accessible to the TCB. The storage of the user specific indicator in this manner guarantees that a malicious or mischievous user program cannot find out what manner of indicator is being used to indicate the trusted UI by locating and searching the location where the indicator data is stored.
Hence, when this image is displayed thereafter, the user knows that a trusted connection is being provided to the TCB.
At this point in the user experience, the TCB will have locked all access by untrusted software to any sensitive hardware, including any associated memory buffers used for user input and output (such as screen or keyboard buffers). This is what a trusted UI means. The fact that sensitive hardware includes all hardware and areas of memory associated with (for example) a screen display means that no malicious or mischievous user program can find out what manner of indicator is being used to indicate the trusted UI while the indicator is in use.
Because the indicator of the Trusted UI is both personalised and not discoverable (either when it is in use or when it is not in use) attempts to access sensitive user data or services by masquerading as a trusted UI (known variously as spoofing or phishing attacks) are not practical. Malicious or mischievous programs have no way of knowing, and no way of determining, what the indicator for the Trusted UI might be.
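The complete trusted dialog sequence described above can be sketched as follows. The `TCB`, `Screen` and `Keypad` classes are simplified stand-ins invented for this example, not a real device API.

```python
class TCB:
    """Stand-in for the Trusted Computing Base: holds the secure store
    and mediates exclusive access to I/O hardware."""

    def __init__(self, indicator):
        self.secure_store = {"indicator": indicator}  # TCB-only storage
        self.locked = []

    def lock(self, *devices):
        self.locked = list(devices)  # untrusted code now has no I/O access

    def unlock(self, *devices):
        self.locked = []             # normal operation restored


class Screen:
    def __init__(self):
        self.shown = []

    def render(self, item):
        self.shown.append(item)


class Keypad:
    def __init__(self, pending):
        self.pending = pending

    def read_secret(self):
        return self.pending


def run_trusted_dialog(tcb, screen, keypad, prompt):
    tcb.lock(screen, keypad)
    try:
        # Rendering the private indicator proves to the user that the TCB,
        # not a spoofed application, currently controls the screen.
        screen.render(tcb.secure_store["indicator"])
        screen.render(prompt)
        return keypad.read_secret()  # e.g. a PIN, invisible to other code
    finally:
        tcb.unlock(screen, keypad)
```

Note that the indicator is read from TCB-only storage and rendered only while the I/O hardware is locked, so it is never observable by untrusted code, whether in use or at rest.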
By adapting the above methodology, it is also possible for more than one indicator to be generated by the user; these could be used for competing applications or services which cannot be trusted not to abuse each other's data. Such dedicated indicators could be used either singly or in combination with a generalised indicator. Clearly this is more flexible than any single hardware mechanism.
An example of how this may be achieved in practice would be for an initial screen saying "choose your trusted buddy" to be displayed when the user switches on a device. The screen can be arranged also to show small pictures named "frog", "gorilla", "sheep", "rabbit" and so on. Then, assuming the user chooses "gorilla", the next screen can show different coloured gorillas.
Having chosen a blue gorilla, the next screen would ask the user to name the gorilla. In this example, the name 'Chaz' is selected for the gorilla. Then, the next screen can display "Remember: Only trust Chaz the Blue Gorilla with sensitive information!" Thereafter, the display of Chaz the Blue Gorilla in a UI would indicate to the user that the UI in question can be trusted. Likewise, the absence of Chaz the Blue Gorilla in a displayed UI would serve as a warning to the user that the UI cannot be trusted.
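The "trusted buddy" selection flow above can be sketched as follows, with each callback standing in for one of the successive selection screens (the option lists and helper names are invented for the example):

```python
def choose_trusted_buddy(pick_animal, pick_colour, pick_name):
    """Sketch of the first-boot 'trusted buddy' dialog sequence; each
    pick_* callback stands in for one selection screen."""
    animal = pick_animal(["frog", "gorilla", "sheep", "rabbit"])
    colour = pick_colour(["red", "blue", "green", "yellow"])
    name = pick_name()
    # The finished indicator would be written once to TCB-only storage
    # and never exposed to ordinary applications thereafter.
    return f"{name} the {colour.title()} {animal.title()}"
```

For the example in the text, choosing "gorilla", then "blue", then the name "Chaz" yields the indicator "Chaz the Blue Gorilla".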
It can be seen from the above embodiments that, in contrast to previous methods, this invention requires no dedicated hardware at all. While the implementation described above is for the Symbian OS™ operating system, those skilled in the art will readily recognise that it is applicable to any interactive computing device which has suitable security provisions for data storage and for its display.
The present invention provides many advantages over the known methods for implementing a trusted user interface, including: * Because this method of indicating a trusted UI is a software-only solution, it is cheaper to implement than alternative methods requiring special hardware, such as an LED that lights when the trusted UI is active.
* It is more reliable than alternative methods requiring an LED, as there is no need to rely on a potentially fragile hardware indicator.
* Because the trusted UI indicator does not require a dedicated area of the screen set aside for its use, there is more screen area available for application programs and user data.
* It is known that personalising complex technological devices serves to reassure users that they are in control; the act of personalising the indicator of the trusted UI will therefore reconcile a user to the sometimes onerous security requirements of such devices.
Hence, the present invention provides a method of operating an interactive computing device with secure storage, which is accessible only to trusted system components, in which a personalised indicator is requested from a user, which is then saved in secure storage. Thereafter, when the trusted system components take control of the screen or other output mechanism, the displaying or rendering of this personalised indicator to the user serves as an indication that the user can trust the currently running application.
Although the present invention has been described with reference to particular embodiments, it will be appreciated that modifications may be effected whilst remaining within the scope of the present invention as defined by the appended claims.

Claims (13)

  1. A method of operating a computing device comprising a. an input mechanism; b. an output mechanism; and c. storage means which can be secured and made accessible only to trusted system components; the method comprising providing one or more private indicators for storage in the storage means; causing the trusted system components to secure the output and input mechanisms, and displaying at least one of the private indicators for indicating that an application or service running on the device can be trusted with sensitive or private data.
  2. A method according to claim 1 comprising providing a private indicator respective to each application or service requiring sensitive or private data.
  3. A method according to claim 1 comprising providing a private indicator which is common to more than one application or service requiring sensitive or private data.
  4. A method according to claim 1 comprising providing a general private indicator for all applications or services for use in combination with one or more private indicators respective to specific applications or services.
  5. A method according to any one of the preceding claims in which the format for a private indicator is requested from the user.
  6. A method according to claim 5 in which the private indicator is requested from the user when the computing device is switched on for the very first time.
  7. A method according to claim 5 in which the private indicator is requested from the user when an application or service is used for the very first time.
  8. A method according to claim 7 in which a single general private indicator used for all applications or services is displayed when a private indicator is requested from the user when an application or service is used for the very first time.
  9. A method according to any one of the preceding claims in which the user is enabled by the trusted system components to alter the current indicator in a secure manner.
  10. A method according to any one of the preceding claims in which the indicator can be personalised by the owner or user.
  11. A method according to any one of the preceding claims in which the indicator comprises at least one of an icon or other graphic, a photographic or other image, a text string, an audio file, or any other entity which can be perceived by the user.
  12. A computing device capable of operating in accordance with a method as claimed in any one of claims 1 to 11.
  13. An operating system for causing a computing device to operate in accordance with a method as claimed in any one of claims 1 to 11.
GB0426818A 2004-12-07 2004-12-07 Trusted user interface Withdrawn GB2421093A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB0426818A GB2421093A (en) 2004-12-07 2004-12-07 Trusted user interface


Publications (2)

Publication Number Publication Date
GB0426818D0 GB0426818D0 (en) 2005-01-12
GB2421093A true GB2421093A (en) 2006-06-14

Family

ID=34073307

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0426818A Withdrawn GB2421093A (en) 2004-12-07 2004-12-07 Trusted user interface

Country Status (1)

Country Link
GB (1) GB2421093A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1994001821A1 (en) * 1992-07-10 1994-01-20 Secure Computing Corporation Trusted path subsystem for workstations
EP0647895A1 (en) * 1993-10-04 1995-04-12 Addison M. Fischer Method for preventing inadvertent betrayal of stored digital secrets by a trustee
EP1056014A1 (en) * 1999-05-28 2000-11-29 Hewlett-Packard Company System for providing a trustworthy user interface


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Compartmented Mode Workstation: Prototype Highlights", J. Berger, J. Picciotto, J. Woodward, P. Cummings, 1990, downloaded from http://ieeexplore.ieee.org website, 11th March 2005 *
"The Trusted Path between SMITE and the User", S. Wiseman, P. Terry, A. Wood, C. Harrold, 1988, downloaded from http://ieeexplore.ieee.org/iel2/201/427/00008107.pdf?arnumber=8107, 11th March 2005. *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7913292B2 (en) 2006-10-18 2011-03-22 Microsoft Corporation Identification and visualization of trusted user interface objects
WO2009059935A1 (en) 2007-11-06 2009-05-14 Giesecke & Devrient Gmbh Data processing device and method for operating a data processing device
US8938780B2 (en) 2012-03-27 2015-01-20 Telefonaktiebolaget L M Ericsson (Publ) Display authentication
CN107077565A (en) * 2015-11-25 2017-08-18 华为技术有限公司 The collocation method and equipment of a kind of safe configured information
EP3370449A4 (en) * 2015-11-25 2018-09-05 Huawei Technologies Co., Ltd. Method and device for configuring security indication information
CN107077565B (en) * 2015-11-25 2019-11-26 华为技术有限公司 A kind of configuration method and equipment of safety instruction information
US11100227B2 (en) 2015-11-25 2021-08-24 Huawei Technologies Co., Ltd. Security indication information configuration method and device

Also Published As

Publication number Publication date
GB0426818D0 (en) 2005-01-12


Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)