WO2020071976A1 - Method and computer operating system for impeding side channel attacks - Google Patents

Method and computer operating system for impeding side channel attacks

Info

Publication number
WO2020071976A1
Authority
WO
WIPO (PCT)
Prior art keywords
operating system
software component
computer operating
timing information
computer
Prior art date
Application number
PCT/SE2018/051027
Other languages
French (fr)
Inventor
Christian Olrog
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telefonaktiebolaget Lm Ericsson (Publ) filed Critical Telefonaktiebolaget Lm Ericsson (Publ)
Priority to PCT/SE2018/051027 priority Critical patent/WO2020071976A1/en
Publication of WO2020071976A1 publication Critical patent/WO2020071976A1/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/55Detecting local intrusion or implementing counter-measures
    • G06F21/556Detecting local intrusion or implementing counter-measures involving covert channels, i.e. data leakage between processes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/52Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity ; Preventing unwanted data erasure; Buffer overflow
    • G06F21/53Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity ; Preventing unwanted data erasure; Buffer overflow by executing in a restricted environment, e.g. sandbox or secure virtual machine
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/70Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
    • G06F21/71Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure computing or processing of information
    • G06F21/75Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure computing or processing of information by inhibiting the analysis of circuitry or operation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/10Network architectures or network communication protocols for network security for controlling access to devices or network resources
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/002Countermeasures against attacks on cryptographic mechanisms
    • H04L9/005Countermeasures against attacks on cryptographic mechanisms for timing attacks

Definitions

  • the present disclosure relates generally to a method and a computer operating system, for impeding or obstructing a side channel attack from a software component running in a computer and being controlled by the computer operating system.
  • an unreliable and potentially malicious software component, commonly referred to as “malware”, running in a computer may be able to derive information from the computer simply by detecting certain parameters produced or released when the computer performs various operations such as computational, retrieving and processing operations and tasks. Such detectable parameters may include timing information, power consumption, electromagnetic leaks, and even heat and sound generated by the hardware on which operations are performed. This type of extra information “leaking” from the computer’s operating system can be exploited by the malware to extract certain characteristics of the executed operations. The malware may then be able to extract or “read” potentially secret and/or sensitive information by meticulously analyzing the extracted characteristics of the executed computer operations.
  • a certain computer operation may always take a predictable amount of time to execute, consume a predictable amount of power, or emit a predictable amount of heat or electromagnetic radiation. Malware may thus be able to identify the computer operation by measuring one or more of the above parameters, and thereby recognize the operation and even some sensitive information processed or produced in the operation.
  • the above extraction of sensitive information, by measuring and analyzing parameters produced and released when the computer is operating, is commonly referred to as “side channel attacks”. This term implies that the parameters provide a detectable side channel from which various information can be extracted without having to read any actual signals communicated in the computer during the operation, which may be protected anyway by encryption or the like.
  • timing based side channel attacks on the computer, made by malware running in the computer, are difficult to prevent: the malware can easily measure the time it takes to get a response to various requests, and can also read timing information in the response returned from the computer operating system.
  • the malware may for example repeat the same request over and over so as to extract information from the timing of the responses from the computer operating system. It is thus a problem that a timing based side channel attack from malware is very difficult to foresee, detect and prevent.
  • a method is performed in a computer operating system, for impeding a side channel attack by a software component controlled by the computer operating system.
  • the computer operating system detects that the software component is potentially capable of executing a side channel attack based on timing of operations in the computer operating system.
  • the computer operating system then limits access to timing information of said operations for the software component, and provides timing information to the software component according to said limited access to timing information.
  • a computer operating system is arranged to impede a side channel attack by a software component controlled by the computer operating system.
  • the computer operating system is configured to detect that the software component is potentially capable of executing a side channel attack based on timing of operations in the computer operating system.
  • the computer operating system is further configured to limit access to timing information of said operations for the software component, and to provide timing information to the software component according to said limited access to timing information.
  • a computer program is also provided comprising instructions which, when executed on at least one processor in the computer operating system, cause the at least one processor to carry out the method described above.
  • a carrier is also provided which contains the above computer program, wherein the carrier is one of an electronic signal, an optical signal, a radio signal, or a computer readable storage medium.
  • Fig. 1 is a schematic block diagram illustrating how a software component obtains timing information from an operating system of a computer, according to the prior art.
  • Fig. 2 is a flow chart illustrating a procedure in a computer operating system, according to some example embodiments.
  • Fig. 3 is a schematic block diagram illustrating an example of how the solution may be used in a computer, according to further example embodiments.
  • Fig. 4 is a schematic block diagram illustrating another example of how the solution may be used in a computer where virtual machines are employed, according to further example embodiments.
  • Fig. 5 is a flow chart illustrating an example of how a computer operating system may operate in more detail, according to further example embodiments.
  • Fig. 6 is a block diagram illustrating how a computer operating system may be structured, according to further example embodiments.

Detailed description
  • a solution is provided to impede or even prevent a side channel attack by a potentially malicious software component that is running in a computer and being controlled by an operating system of the computer. This can be achieved by first detecting that the software component is potentially capable of executing a side channel attack, e.g. by assigning a trust classification to the software component based on its signature or identity and/or its estimated “reputation”, and then checking a predefined limitation policy for the assigned trust classification from a policy database or the like. Access to timing information of computer operations is then limited for the software component according to the limitation policy that was found valid for the software component according to said classification.
  • the software component’s access to timing information may for example be limited by reducing the resolution of timing information when provided to the software component. Thereby, the software component will not be able to extract any accurate and “true” timing information from the timing information provided with limited access, and the software component is thus hindered from extracting sensitive information from the provided timing information.
  • a software component with “dubious intentions” running in a computer may issue a certain request several times, and each time get a response with timing information from the computer’s operating system.
  • the timing information provides an indication of the computer operation that the operating system performs upon receiving the request.
  • the malicious software component is then able to extract or estimate potentially sensitive and/or secret information from the timing of the received responses, as explained above.
  • a computer 100 comprises an operating system 102 which is responsible for controlling various computer programs and software running in the computer 100, including responding to requests with timing information attached in the responses; this timing information may thus constitute footprints of computer operations performed when receiving the requests.
  • the term “software component” is used herein to generally denote a computer executable software program or some part thereof.
  • the timing information is generated and provided from the operating system 102 by “timing functions” 102A therein, which typically operate by responding with a timestamp or the like upon request from a software component.
  • a potentially malicious software component running in the computer is denoted 104; it repeatedly sends requests for operations where a side channel may exist, followed by a request for time information, to the operating system 102, which returns responses with timing information, e.g. in the form of timestamps.
  • the software component analyzes the responses and their timing information and basically extracts “hidden” information from the operating system, based on the timing information that can be extracted from the work requested, hence performing a side channel attack.
  • Embodiments herein have been devised to impede such a side channel attack by any potentially malicious software component.
  • the term “potentially malicious” is used herein to indicate that the software component is not (yet) proven to be trustworthy, in the sense that it cannot be safely established that the software component will not attempt a side channel attack based on timing information. It is therefore said that the non-trusted software component, which is subject to the embodiments herein, is potentially capable of executing a side channel attack based on timing of operations in the computer operating system. This may be the case e.g. if the software component is unknown, or known to have a “bad” reputation by being somehow involved in previously noted attacks.
  • the “computer operating system” may in practice be implemented as software providing resource sharing and isolation.
  • the software components may comprise virtual machines, VMs, in which case the computer operating system may be a hypervisor or the like controlling the virtual machines and providing virtual processing resources to support operations by the virtual machines.
  • an example of how the solution may be employed in terms of actions performed by a computer operating system is illustrated by the flow chart in Fig. 2. Some examples of how the computer operating system may be configured in practice will be described later below with reference to Figs 3, 4 and 6.
  • a first optional action 200 indicates that the computer operating system may initially receive, detect and/or identify a new software component to be executed in the computer, which software component is thus subject to the procedures described herein.
  • the computer operating system detects that the software component is potentially capable of executing a side channel attack based on timing of operations in the computer operating system. As indicated above, this may be detected when finding that the software component is not explicitly known to be trustworthy, or is known to be more or less unreliable, e.g. by being registered as having a bad reputation.
  • Software components with bad reputation may be registered in a reputation database or the like, which is accessible by the computer operating system herein and also by other computer operating systems. Information regarding trust of software components may thus be maintained in such a reputation database, which could be consulted once the new software component has been identified.
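The reputation lookup described above can be sketched in Python as follows; the dict-backed store, the score scale and the threshold are illustrative assumptions, not part of the disclosure:

```python
import hashlib

class ReputationDatabase:
    """Hypothetical shared store mapping a component fingerprint to a
    reputation score (0..10 here, an assumed scale)."""

    def __init__(self):
        self._scores = {}

    def register(self, fingerprint, score):
        # Any computer operating system instance may record its estimate.
        self._scores[fingerprint] = score

    def lookup(self, fingerprint):
        # A completely unknown component gets the lowest reputation.
        return self._scores.get(fingerprint, 0)

def is_potentially_malicious(db, binary, threshold=5):
    """Treat the component as potentially capable of a timing-based
    side channel attack unless its reputation clears the threshold."""
    fingerprint = hashlib.sha256(binary).hexdigest()
    return db.lookup(fingerprint) < threshold
```

An unknown binary is thus not trusted by default, while a component certified as trustworthy by other operating system instances can be.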
  • the computer operating system then limits access to timing information of said operations for the software component, in another action 204.
  • access to timing information may be limited by reducing resolution of the timing provided in responses to the software component, so that the exact detailed true timing is obscured to the software component. Limiting access to timing information can be done in different ways which will be described in more detail later below.
  • a final action 206 illustrates that the computer operating system provides timing information to the software component according to said limited access to timing information, e.g. in a response to a request issued by the software component.
  • the detecting action 202 may comprise assigning a trust classification to the software component, where the trust classification indicates to what extent the software component can be trusted, i.e. basically a level or “degree” of trust.
  • the trust classification may be assigned in this embodiment based on at least one of: A) A signature or identity of the software component. For example, if the signature or identity is already known to the computer operating system to be acceptable, the classification may be set to indicate that the software component can be trusted.
  • the trust classification may be set to a value between 1 and “N”, where 1 denotes lack of trust and N denotes full trust. Any number of levels could be used in such a classification and this embodiment is not limited in this respect.
  • once a trust classification is assigned, another example embodiment may be that the computer operating system performs the limiting action 204 based on a predefined limitation policy for the assigned trust classification.
  • a limitation policy may have been defined for each classification level, for example dictating that a software component with trust classification 1 should be given maximum limitation of timing access, while a software component with trust classification N should be given full access to timing information, i.e. no limitation at all.
  • the levels between 1 and N may be given correspondingly varying degrees of limitation of timing access. It is also possible to use only two classes, trusted or not trusted.
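As a sketch of such a per-level policy, assuming N = 4 trust levels and expressing each limitation policy simply as the number of least significant timestamp bits to clear (both values are assumptions for illustration):

```python
N = 4  # assumed number of trust levels; 1 = no trust, N = full trust

def limitation_policy(trust_class, max_bits=12):
    """Map a trust classification to n, the number of least significant
    bits to zero out: maximum limitation for class 1, no limitation for
    class N, and correspondingly varying degrees in between."""
    if not 1 <= trust_class <= N:
        raise ValueError("trust classification out of range")
    return round(max_bits * (N - trust_class) / (N - 1))
```

With only two classes, the same function degenerates to full limitation for "not trusted" and none for "trusted".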
  • the estimated reputation of the software component may be obtained from a database where reputations regarding trust of software components are maintained.
  • a reputation database may be centralized in the sense that it is accessible for a number of computer operating systems and any computer operating system instance may register a software component and an estimated level of trust, i.e. reputation, of the component in the reputation database.
  • any other computer operating system may check the reputation of a newly detected software component as taught by this embodiment. Even if a newly detected software component is totally unknown to a computer operating system, it may still have been used in other computer operating systems and be deemed trustworthy based on a good reputation in the reputation database as certified by one or more other computer operating systems.
  • the action 204 of limiting the access to timing information may comprise reducing resolution of the timing information provided to the software component.
  • timing information with said reduced resolution may be returned to the software component in response to a time information request from the software component.
  • reducing resolution of the returned timing information may comprise filtering the true timing information by setting the least significant n bits of the true timing information to zero in the returned timing information, the n bits thus being at the end of the complete timing information.
  • the number n may be freely chosen to determine the degree of resolution, i.e. the higher the value of n, the lower the resolution. For example, if a true timestamp is a binary number with 20 bits, the last 4 bits, being the 4 least significant bits, may be set to zero to obscure the true timestamp.
  • An illustrative example may be as follows:
  • a true timestamp of 11010010110001101011 is filtered by forming a “manipulated” timestamp of 11010010110001100000, where the last 4 bits, being the least significant bits, are set to zero.
  • the value of n may be dictated by the trust classification, i.e. a low level of trust implies a high value of n, and vice versa.
  • reducing resolution of the returned timing information may alternatively comprise manipulating the true timing information by adding a random value to the true timing information in the returned timing information. Thereby, the true timing information will likewise be obscured in the returned timing information.
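Both resolution-reducing techniques above can be sketched as small helper functions; the jitter bound below is an assumed parameter:

```python
import random

def mask_timestamp(ts, n):
    """Reduce resolution by setting the n least significant bits to zero."""
    return ts & ~((1 << n) - 1)

def jitter_timestamp(ts, n, rng):
    """Obscure the true value instead by adding a bounded random offset
    (the bound 2**n - 1 is an illustrative choice)."""
    return ts + rng.randrange(1 << n)
```

With n = 4, mask_timestamp reproduces the illustrative example in the text: 0b11010010110001101011 becomes 0b11010010110001100000.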
  • said limiting may comprise scheduling of processing resources for the software component. Scheduling processing resources may be referred to as CPU scheduling, which is a mechanism by which an operating system basically determines which application threads should be run and in which order. The processing resources comprise one or more Central Processing Units, CPUs.
  • limiting the access to timing information in action 204 may comprise providing CPU accounting with randomized information for a task performed for the software component.
  • the true CPU accounting may otherwise provide timing information that could be used by the software component.
  • CPU accounting is basically an operating system functionality for keeping track of how much time an application thread has been executing on a CPU, as opposed to waiting to execute.
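A minimal sketch of randomized CPU accounting, assuming a symmetric spread of ±5% (the amount of randomness is not specified in the text):

```python
import random

def reported_cpu_time(true_ns, rng, spread=0.05):
    """Report CPU accounting with a specified randomness added, so that
    a loop-based timer asking how long it actually ran on a CPU cannot
    recover the exact execution time."""
    noise = rng.uniform(-spread, spread)
    return max(0, int(true_ns * (1.0 + noise)))
```

A software component comparing repeated runs of the same tight loop would then see accounting values that vary even when the true execution time does not.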
  • the computer operating system 302 is comprised in a computer 300 where a software component 304 is running as controlled by the computer operating system 302.
  • the computer operating system 302 is shown to comprise a timing functions block 302A, a CPU scheduler 302B, a set of CPUs 302C, a policy database 302D and a heuristics analyzer block 302E.
  • the computer operating system 302 is also able to consult a reputation database 306 to obtain a reputation of the software component 304, if needed.
  • the software component 304 may or may not require access to precise timing information in order to operate optimally.
  • the software component 304 may be delivered as part of the computer operating system 302, or it may be installed by a user or installed as part of some malicious activity.
  • the software component 304 may request CPU resources from the scheduler 302B, e.g. based on a scheduling priority such as realtime or background.
  • the software component 304 may also request an absolute time or some relative timing information from any of the available timing functions 302A.
  • the term“timing information” is used to represent both absolute time and relative timing information.
  • the CPU scheduler 302B assigns CPU resources to the software component 304, typically in intervals of time, also referred to as“time slices”.
  • the term CPU resources may refer to one or more individual CPUs.
  • the timing functions 302A will respond with timing information to the software component 304, taking into account the trust classification that the requesting software component has been assigned.
  • the timing functions 302A will query the policy database 302D where the identified software component 304 can be mapped to a trust class and where each trust class is associated to a limitation policy.
  • the heuristics analyzer 302E can take the software component’s behavior into account as well as user interaction, e.g. a user message “please allow this software” may be received via a command line or a Graphical User Interface, GUI, prompt. The heuristics analyzer 302E may then further report its findings to the reputation database 306, where the reputation for the identified software component 304 can be updated accordingly.
  • the reputation database 306 gathers information from any reputation providers and carries out some kind of algorithm to weigh reputation updates.
  • the reputation database 306 may thus provide a reputation per identified software component and recommend a specific trust class regarding access to precise timing, depending on the reputation.
  • the CPUs 302C may be instructed by the timing functions 302A to rapidly alter their base frequency, e.g. to make it very difficult to accurately assess the time spent on executing when trying to perform a side channel attack by producing an internal timing function in the software component.
  • the CPU scheduler 302B may be instructed by the timing functions 302A to apply random length time slices as well as to partially randomize process accounting, in order to “mislead” the software component about how long and what speed the CPU has actually been executing. Additionally, the timing functions 302A may reduce the time accuracy when responding to requests from the software component 304.
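The random length time slices mentioned above can be sketched as follows; the base slice length and the randomization range are assumptions:

```python
import random

def next_time_slice(base_us, rng, max_extra_us=500):
    """Grant base_us plus a random extension instead of a fixed-length
    slice, so that the elapsed time per slice no longer forms a reliable
    clock for the scheduled software component."""
    return base_us + rng.randrange(max_extra_us + 1)
```

Each scheduling decision thus yields a slice somewhere in the assumed range, misleading any component that counts slices to estimate elapsed time.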
  • another example of how the above-described computer operating system may be arranged will now be described with reference to Fig. 4, where the computer operating system is implemented as a hypervisor 402 comprised in a computer 400 where a software component is running as a virtual machine, VM, 404 controlled by the hypervisor 402.
  • a software component in the form of a virtual machine 404 is in many cases impossible to identify directly, since all software components of virtual machines reside inside a virtual hard disk which is more or less a “black box”.
  • if a trusted “software agent” or the like can be inserted into the virtual machine, and such an agent can perform identification of the software component, the situation will be similar to Fig. 3. Otherwise, some kind of “best effort” identification may be possible using in-memory scanning, where some parts of a memory containing trusted code can be identified. As a result, timing requests coming from such parts of the memory with trusted code can be allowed access to accurate and true timing information, whereas other parts can only be allowed limited access to timing information with reduced accuracy/resolution.
  • a more detailed example of how the above-described computer operating system may act in order to impede a side channel attack by a software component controlled by the computer operating system will now be described with reference to the flow chart in Fig. 5. It is assumed that the computer operating system resides in a computer and that the software component is to be executed in the computer. For example, the computer operating system may be either of the above-described computer operating system 302 and hypervisor 402, without limitation to these examples.
  • the computer operating system initially receives, detects and/or identifies a new software component to be executed in the computer, in an action 500.
  • the computer operating system obtains information about the software component’s signature, identity and/or reputation, of which the latter may be accessible from a reputation database.
  • the computer operating system may detect that the software component is potentially capable of executing a side channel attack based on timing of operations in the computer operating system, as described above for action 202.
  • the computer operating system then assigns a trust classification to the software component based on the obtained information, in an action 504.
  • the trust classification basically indicates whether the software component can be considered reliable or not with respect to attacks.
  • the trust classification may indicate a level of trust, in a simple example comprising just two levels: trusted or not trusted. Assigning a trust classification has been described in more detail above.
  • it is then checked whether the trust classification indicates that the software component is deemed reliable and can be fully trusted or not. It is assumed that the trust classification can indicate either that the software component can be fully trusted, corresponding to the highest level of trust, or that the software component cannot be completely trusted, i.e. when the trust classification is at any level below the highest level of trust. If fully trusted in action 506, another action 508 illustrates that the computer operating system provides timing information, e.g. timestamps or similar, to the software component with unlimited access to timing information. In other words, the computer operating system provides true and accurate timing information which has not been intentionally obscured or manipulated.
  • if not fully trusted, the computer operating system fetches a limitation policy that is valid for the trust classification, in another action 510.
  • predefined limitation policies for different trust classifications may be accessible from a policy database 302D or 402D. Some examples of limitation policies have been presented above. In general, the limitation policies may dictate varying degrees of limitation of timing access.
  • the computer operating system limits access to timing information for the software component according to the limitation policy fetched in action 510. This action corresponds to action 204 above.
  • the computer operating system in this example filters the true timing information by setting the n least significant bits of the true timing information to zero, in another action 516, so as to reduce resolution and obscure the true and accurate timing information.
  • a final action 518 illustrates that the computer operating system provides timing information with the reduced resolution to the software component, in response to the request of action 514. Action 518 corresponds to action 206 above.
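Putting the Fig. 5 flow together, a condensed sketch (the two-level classification and the policy table are simplifications of the text):

```python
TRUSTED = "trusted"
UNTRUSTED = "untrusted"

# Assumed policy database: trust class -> number of LSBs to zero out.
POLICY_DB = {UNTRUSTED: 6}

def provide_timing(true_ts, trust_class):
    """Actions 506-518: a fully trusted component gets the true
    timestamp; others get a reduced-resolution timestamp according to
    the fetched limitation policy."""
    if trust_class == TRUSTED:
        return true_ts                      # action 508: unlimited access
    n = POLICY_DB[trust_class]              # action 510: fetch policy
    return true_ts & ~((1 << n) - 1)        # actions 516/518: filter, respond
```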
  • the computer operating system is able to control resolution of timing information for its users and software components, depending on their reliability.
  • the computer operating system is able to dynamically change resolution of timing information for any user and software component.
  • the computer operating system is able to read and apply a predefined limitation policy to determine the appropriate resolution of timing information.
  • the computer operating system is able to prevent the use of “homegrown” timers (such as loops or similar) by randomizing CPU accounting and/or controlling resource scheduling by adding random time intervals.
  • Randomizing CPU accounting may include using a function that adds randomness to the normally very exact CPU accounting functionality. In essence, a homegrown timer based on a tight loop with a counter may ask the operating system for how long it was actually running and use that to deduce timing information. By adding a specified randomness to the value reported in CPU accounting, it will be more difficult to determine the actual time spent on executing a computer operation. Normally, scheduling is applied to be as consistent as possible - e.g.
  • the computer operating system is able to control resolution of time for its hardware components where supported by hardware.
  • the computer operating system is able to manage old hardware by memory mapping timer registers so they generate a page fault and can be emulated.
  • memory mapping is one way by which hardware can be isolated from a running application; when the application tries to read/write from/to a memory address it believes is connected to the timer chip, a modern CPU’s memory management unit (MMU) will intercept the request and try to decode and translate the address specified.
  • the translation table used by the MMU can include instructions to generate a software “fault”, which allows an operating system to intercept the request and “emulate” the actual hardware in software. This could allow the software to provide a filter to the timing hardware and return timer data that has been manipulated based on the running application.
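A software-level sketch of that emulation path: a read of the (hypothetical) memory-mapped timer address is trapped and answered by a handler that filters the raw value for untrusted readers. The address, the stand-in counter and the 8-bit filter are all assumptions:

```python
TIMER_REG_ADDR = 0xFEED0000  # hypothetical memory-mapped timer register

def raw_timer():
    # Stand-in for the real hardware counter value.
    return 123_456_789

def fault_handler(addr, trusted):
    """Invoked when the MMU-generated fault traps a read of addr;
    emulates the timer hardware in software."""
    if addr != TIMER_REG_ADDR:
        raise LookupError("address not mapped to emulated hardware")
    value = raw_timer()
    # Untrusted applications receive manipulated timer data
    # (here: 8 least significant bits cleared).
    return value if trusted else value & ~0xFF
```

The application believes it read the timer chip directly, while the operating system decides, per trust classification, how accurate the returned data is.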
  • the computer operating system is able to add varying degrees of randomness to the provided timing information.
  • the computer operating system is able to control and continuously vary actual CPU frequency while reporting a different CPU frequency to a running application of the software component.
  • Modern CPUs do not run at a fixed frequency.
  • the CPU frequency can also be adjusted based on thermal levels in the CPU, or based on whether a power cord is attached or not. If the software reduces the frequency for a CPU which is executing an application of the untrusted software component, it should also hide the reduced frequency, so that if the application queries for the current frequency it should get something randomized (e.g. based on the original value but with a randomness factor added) as response.
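That frequency-hiding behaviour might be sketched as below, returning a value derived from the original frequency with a randomness factor added, rather than the true reduced frequency; the spread is an assumed parameter:

```python
import random

def reported_frequency(original_mhz, rng, spread_mhz=200):
    """Answer a frequency query from an untrusted application with a
    randomized value based on the original (pre-reduction) frequency,
    hiding the actual current frequency."""
    return original_mhz + rng.randrange(-spread_mhz, spread_mhz + 1)
```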
  • the computer operating system is able to detect suspicious behavior of a software component.
  • the solution supports fingerprinting of code fragments in a memory (such as a VM memory) and sharing with a remote reputation database. Fingerprinting is the generic term for making an immutable identifier for a piece of hardware or software. In this case it is about being able to identify that an application executing on one machine is identical to an application executing on another machine (so that trust or reputation can be conveyed between machines).
  • the solution supports fingerprinting of code based on binary on disk.
  • Fingerprinting can be done in memory after the application of the software component has been read from disk and put in memory by the operating system loader, and potentially having been altered by the application itself. Fingerprinting can also be performed directly on the binary file(s) containing the application of the software component. The latter is far simpler but may not work in all scenarios - e.g. when the file resides inside a virtual machine. The memory fingerprinting may still be possible to do from outside the virtual machine.
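A hash over the bytes is one common way to realize such an immutable identifier; the sketch below assumes SHA-256 as the digest (the disclosure does not prescribe a particular algorithm) and shows both the on-disk and the in-memory variant:

```python
import hashlib

def fingerprint_file(path):
    """Fingerprint a binary on disk: a SHA-256 digest of the file
    contents serves as an immutable identifier that can be shared
    with a reputation database."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def fingerprint_memory(pages):
    """Fingerprint loaded code pages (in-memory variant). Matching it
    against the on-disk digest only works if loader relocations and
    self-modifications are absent or masked out, which is why the
    on-disk variant is simpler."""
    h = hashlib.sha256()
    for page in pages:
        h.update(page)
    return h.hexdigest()
```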
  • the solution supports reporting of suspicious or long streaks of unsuspicious behavior to the reputation database.
  • the solution supports manual requests to give a binary better timer access - e.g. if a problem is detected. For example, a small menu option by the clock or a command line can give status information about running processes such as “this application seems to be using precision timer information but has not been granted access to use it” along with an option to manually override and say “this
  • the solution supports mapping of a code fragment in memory to a likely binary in the case of VMs (where the VM is not fully trusted, an end user can still report) and the subsequent request to give the binary better timer access if a problem is detected.
  • This relates to fingerprinting. For example, if a binary has a “good” reputation or has been manually set to “allow full timer access”, there may be a problem with virtual machines in that the host computer and operating system do not really know what the guest operating system inside the guest virtual machine is loading. This means that the operating system cannot directly map from a file on disk onto an
  • the host operating system may be able to connect a piece of memory inside the VM to a binary file based on portions of them having the same
  • the solution supports downloading indications of how many users have observed problems combined with reported unsuspicious activity for a specific binary or code fragment. This is the connection to the reputation database. Fingerprints come along with statistics of how code associated with those fingerprints may have: 1) behaved heuristically, 2) been configured by users, and/or 3) been reported by users to not work with certain settings.
  • the solution supports dynamically altering the limitation policy if a suspicious activity is detected from a software component.
  • the solution also supports applying a default limitation policy with low resolution, e.g. for any unknown or unidentified software component.
  • the solution supports alerting a trusted user if a potentially unreliable software component is requesting timing information and resolution of the timing information is lowered by limitation policy.
  • the block diagram in Fig. 6 illustrates a detailed but non-limiting example of how a computer operating system 600 may be structured to bring about the above- described solution and embodiments thereof.
  • the computer operating system 600 may be configured to operate according to any of the examples and embodiments for employing the solution as described herein, where appropriate and as follows.
  • the computer operating system 600 is shown to comprise a processor P and a memory M, said memory comprising instructions executable by said processor P whereby the computer operating system 600 can act as described herein.
  • the computer operating system 600 also comprises a communication circuit C with suitable equipment for receiving requests and transmitting responses in the manner described herein.
  • the computer operating system 600 corresponds to the computer operating systems 302 and 402 in Figs 3 and 4, respectively.
  • the communication circuit C may be configured for communication with software components and with a reputation database as described herein, using suitable protocols and messages, while the embodiments herein are not limited to using any specific types of messages or protocols for such communication.
  • the computer operating system 600 comprises means configured or arranged to basically perform at least some of the actions in Figs 2 and 5, and more or less as described above in various examples and embodiments.
  • the computer operating system 600 is arranged or configured to impede a side channel attack by a software component 602 controlled by the computer operating system, as follows.
  • the computer operating system 600 is configured to detect that the software component is potentially capable of executing a side channel attack based on timing of operations in the computer operating system. This operation may be performed by a detecting module 600A in the computer operating system 600, e.g. in the manner described above for action 202.
  • the detecting module 600A could alternatively be named an identifying module or discovering module.
  • the computer operating system 600 is further configured to limit access to timing information of said operations for the software component. This operation may be performed by a controlling module 600B in the computer operating system 600, e.g. as described above for action 204.
  • the controlling module 600B could alternatively be named a limiting module or timing access module.
  • the computer operating system 600 is also configured to provide timing information to the software component according to said limited access to timing information.
  • This operation may be performed by a providing module 600C in the computer operating system 600, basically as described above for action 206.
  • the providing module 600C could alternatively be named a response module or information module.
  • Fig. 6 illustrates various functional modules or units in the computer operating system 600, and the skilled person is able to implement these functional modules in practice using suitable software and hardware.
  • the solution is generally not limited to the shown structures of the computer operating system 600, and the functional modules or units 600A-C therein may be
  • the processor P may comprise a single Central Processing Unit (CPU), or could comprise two or more processing units such as CPUs.
  • the processor P may include a general purpose microprocessor, an instruction set processor and/or related chip sets and/or a special purpose microprocessor such as an Application Specific Integrated Circuit (ASIC).
  • ASIC Application Specific Integrated Circuit
  • the processor P may also comprise a storage for caching purposes.
  • Each computer program may be carried by a computer program product in the computer operating system 600 in the form of a memory having a computer readable medium and being connected to the processor P.
  • the computer program product or memory in the computer operating system 600 may thus comprise a computer readable medium on which the computer program is stored e.g. in the form of computer program modules or the like.
  • the memory may be a flash memory, a Random-Access Memory (RAM), a Read-Only Memory (ROM), an Electrically Erasable Programmable ROM (EEPROM) or Hard Drive storage (HDD), and the program modules could in alternative embodiments be distributed on different computer program products in the form of memories within the computer operating system 600.
  • the solution described herein may thus be implemented in the computer operating system 600 by a computer program comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the actions according to any of the above embodiments and examples, where appropriate.
  • the solution may also be implemented in a carrier containing the above computer program, wherein the carrier is one of an electronic signal, an optical signal, a radio signal, or a computer readable storage product or computer program product.

Abstract

A method and a computer operating system (302) for impeding a side channel attack by a software component (304) which is controlled by the computer operating system when running in a computer (300). When detecting that the software component is potentially capable of executing a side channel attack based on timing of operations in the computer operating system, the computer operating system (302) limits access to timing information of said operations for the software component. Timing information is then provided to the software component according to said limited access to timing information.

Description

METHOD AND COMPUTER OPERATING SYSTEM FOR IMPEDING SIDE
CHANNEL ATTACKS
Technical field
The present disclosure relates generally to a method and a computer operating system, for impeding or obstructing a side channel attack from a software component running in a computer and being controlled by the computer operating system.
Background
In the field of computers and software, it is well-known that an unreliable and potentially malicious software component, commonly referred to as “malware”, running in a computer may be able to derive information from the computer simply by detecting certain parameters produced or released when the computer performs various operations such as computational, retrieving and processing operations and tasks. Such detectable parameters may include timing information, power consumption, electromagnetic leaks, and even heat and sound generation from hardware on which operations are performed. This type of extra information “leaking” from the computer’s operating system can be exploited by the malware to extract certain characteristics of the executed operations. The malware may then be able to extract or “read” potentially secret and/or sensitive information by meticulously analyzing the extracted characteristics of the executed computer operations.
For example, a certain computer operation may always take a predictable amount of time to execute, or consume a predictable amount of power, or emit a
predictable amount of heat, radiation and/or sound, thus effectively leaving traces or “footprints” that can be detected and analyzed without having to read the actual information which may be protected as such. In other words, the malware may be able to identify the computer operation by measuring one or more of the above parameters and thus recognize the operation accordingly and even some sensitive information processed or produced in the operation. The above extracting of sensitive information by measuring and analyzing of parameters produced and released when the computer is operating is commonly referred to as a “side channel attack”. This term implies that the parameters provide a detectable side channel from which various information can be extracted without having to read any actual signals communicated in the computer during the operation, which may be protected anyway by encryption or the like.
In particular, timing based side channel attacks made by malware running in the computer are difficult to prevent: the malware can easily measure the time it takes to get a response to various requests, and can also read timing information in the responses returned from the computer operating system. The malware may for example repeat the same request over and over so as to extract information from the timing of the responses from the computer operating system. It is thus a problem that a timing based side channel attack from malware is very difficult to foresee, detect and prevent.
Summary
It is an object of embodiments described herein to address at least some of the problems and issues outlined above. It is possible to achieve this object and others by using a method and a computer operating system as defined in the attached independent claims.
According to one aspect, a method is performed in a computer operating system, for impeding a side channel attack by a software component controlled by the computer operating system. In this method, the computer operating system detects that the software component is potentially capable of executing a side channel attack based on timing of operations in the computer operating system.
As a result, the computer operating system then limits access to timing information of said operations for the software component, and provides timing information to the software component according to said limited access to timing information.
According to another aspect, a computer operating system is arranged to impede a side channel attack by a software component controlled by the computer operating system. The computer operating system is configured to detect that the software component is potentially capable of executing a side channel attack based on timing of operations in the computer operating system. The computer operating system is further configured to limit access to timing information of said operations for the software component, and to provide timing information to the software component according to said limited access to timing information.
When using either of the above method and computer operating system, it is an advantage that it becomes more difficult or even impossible for the software component to execute a successful timing based side channel attack, since the software component does not have access to the true and accurate timing information, which has been obscured by the above limited access.
The above method and computer operating system may be configured and implemented according to different optional embodiments to accomplish further features and benefits, to be described below.
A computer program is also provided comprising instructions which, when executed on at least one processor in the computer operating system, cause the at least one processor to carry out the method described above. A carrier is also provided which contains the above computer program, wherein the carrier is one of an electronic signal, an optical signal, a radio signal, or a computer readable storage medium. Brief description of drawings
The solution will now be described in more detail by means of exemplary embodiments and with reference to the accompanying drawings, in which:
Fig. 1 is a schematic block diagram illustrating how a software component obtains timing information from an operating system of a computer, according to the prior art.
Fig. 2 is a flow chart illustrating a procedure in a computer operating system, according to some example embodiments. Fig. 3 is a schematic block diagram illustrating an example of how the solution may be used in a computer, according to further example embodiments.
Fig. 4 is a schematic block diagram illustrating another example of how the solution may be used in a computer where virtual machines are employed, according to further example embodiments.
Fig. 5 is a flow chart illustrating an example of how a computer operating system may operate in more detail, according to further example embodiments.
Fig. 6 is a block diagram illustrating how a computer operating system may be structured, according to further example embodiments. Detailed description
Briefly described, a solution is provided to impede or even prevent a side channel attack by a potentially malicious software component that is running in a computer and being controlled by an operating system of the computer. This can be achieved by first detecting that the software component is potentially capable of executing a side channel attack, e.g. by assigning a trust classification to the software component based on its signature or identity and/or its estimated “reputation”, and then checking a predefined limitation policy for the assigned trust classification from a policy database or the like. Access to timing information of computer operations is then limited for the software component according to the limitation policy that was found valid for the software component according to said classification.
The software component’s access to timing information may for example be limited by reducing resolution of timing information when provided to the software component. Thereby, the software component will not be able to extract any accurate and“true” timing information from the timing information provided with limited access, and the software component is thus hindered from extracting sensitive information from the provided timing information.
It was mentioned above that a software component with “dubious intentions” running in a computer may issue a certain request several times and each time get a response with timing information from the computer’s operating system. The timing information provides an indication of the computer operation that the operating system performs upon receiving the request. The malicious software component is then able to extract or estimate potentially sensitive and/or secret information from the timing of the received responses, as explained above.
This behavior is illustrated in Fig. 1 where a computer 100 comprises an operating system 102 which is responsible for controlling various computer programs and software running in the computer 100, including responding to requests with timing information attached in the responses; such timing information may thus constitute footprints of the computer operations performed when receiving the requests. The term “software component” is used herein to generally denote a computer executable software program or some part thereof. The timing information is generated and provided from the operating system 102 by “timing functions” 102A therein, which typically operate by responding with a timestamp or the like upon request from a software component.
A potentially malicious software component running in the computer is denoted 104; it repeatedly sends requests for operations where a side channel may exist, each followed by a request for time information, to the operating system 102, which returns responses with timing information, e.g. in the form of timestamps. The software component analyzes the responses and their timing information and basically extracts “hidden” information from the operating system based on the timing information that can be extracted from the work requested, hence performing a side channel attack. Embodiments herein have been devised to impede such a side channel attack by any potentially malicious software component.
The term “potentially malicious” is used herein to indicate that the software component is not (yet) proven to be trustworthy, in the sense that it cannot be safely established that the software component will not attempt a side channel attack based on timing information. It is therefore said that the non-trusted software component, which is subject to the embodiments herein, is potentially capable of executing a side channel attack based on timing of operations in the computer operating system. This may be the case e.g. if the software component is unknown, or known to have a “bad” reputation by being somehow involved in previously noted attacks.
The solution will be described herein with reference to functionality in a computer operating system which is basically responsible for controlling various software components running in a computer and for scheduling and providing processing resources, commonly referred to as Central Processing Units, CPUs, for execution of computer operations required by the software components. In this disclosure, the “computer operating system” may in practice be implemented as software providing resource sharing and isolation. For example, if the software components comprise virtual machines, VMs, the computer operating system may be a hypervisor or the like controlling the virtual machines and providing virtual processing resources to support operations by the virtual machines.
An example of how the solution may be employed in terms of actions performed by a computer operating system is illustrated by the flow chart in Fig. 2. Some examples of how the computer operating system may be configured in practice will be described later below with reference to Figs 3, 4 and 6.
The actions in Fig. 2 are thus performed by the computer operating system for impeding a side channel attack by a software component controlled by the computer operating system in a computer. A first optional action 200 indicates that the computer operating system may initially receive, detect and/or identify a new software component to be executed in the computer, which software component is thus subject to the procedures described herein.
In another action 202, the computer operating system detects that the software component is potentially capable of executing a side channel attack based on timing of operations in the computer operating system. As indicated above, this may be detected when finding that the software component is not explicitly known to be trustworthy, or is known to be more or less unreliable, e.g. by being registered as having a bad reputation. Software components with bad reputation may be registered in a reputation database or the like which is accessible by the computer operating system herein and also by other computer operating systems. Information regarding trust of software components may thus be maintained in such a reputation database, which could thus be consulted once the new software component has been identified.
Having detected in action 202 basically that the software component is potentially malicious, implying that a timing based side channel attack cannot safely be ruled out or dismissed, the computer operating system limits access to timing
information of said operations for the software component, in another action 204. In this action, access to timing information may be limited by reducing the resolution of the timing provided in responses to the software component, so that the exact true timing is obscured from the software component. Limiting access to timing information can be done in different ways which will be described in more detail later below.
A final action 206 illustrates that the computer operating system provides timing information to the software component according to said limited access to timing information, e.g. in a response to a request issued by the software component. Thereby, the attack risks associated with providing detailed and accurate timing information to the potentially malicious software component can be avoided. It is thus an advantage of the above procedure that it becomes significantly more difficult, or even impossible, for the software component to execute a successful side channel attack based on timing of operations in the computer operating system, since the software component does not have access to the true and accurate timing information which has thus been obscured by the above limited access.
Some examples of embodiments that may be employed in the above procedure in Fig. 2 will now be described. In one example embodiment, the detecting action 202 may comprise assigning a trust classification to the software component, where the trust classification indicates to what extent the software component can be trusted, i.e. basically a level or “degree” of trust. The trust classification may be assigned in this embodiment based on at least one of: A) A signature or identity of the software component. For example, if the signature or identity is already known to the computer operating system to be acceptable, the classification may be set to indicate that the software component can be trusted.
B) An estimated reputation of the software component, which may be
obtained from a reputation database based on the software component’s signature or identity.
For example, the trust classification may be set to a value between 1 and “N” where 1 denotes lack of trust and N denotes full trust. Any number of levels could be used in such a classification and this embodiment is not limited in this respect.
If a trust classification is assigned, another example embodiment may be that the computer operating system performs the limiting action 204 based on a predefined limitation policy for the assigned trust classification.
In this embodiment, a limitation policy may have been defined for each
classification level, for example dictating that a software component with trust classification 1 should be given maximum limitation of timing access while on the other hand a software component with trust classification N should be given full access to timing information, i.e. no limitation at all. The levels between 1 and N may be given correspondingly varying degrees of limitation of timing access. It is also possible to use only two classes, trusted or not trusted.
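As a purely illustrative (non-normative) encoding of such a limitation policy, each trust class could map directly to how coarse the provided timing must be, here expressed as the number of least significant timer bits to zero out (the table values and the fallback rule are assumptions for the sketch):

```python
N = 4  # illustrative number of trust classes

# Trust class -> number of least significant timestamp bits to zero.
# Class 1 (no trust) gets the strongest limitation; class N gets full access.
LIMITATION_POLICY = {1: 12, 2: 8, 3: 4, N: 0}

def policy_for(trust_class):
    """Look up how many timer bits to mask for a given trust class;
    unknown or unclassified components fall back to the strictest policy."""
    return LIMITATION_POLICY.get(trust_class, LIMITATION_POLICY[1])
```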
In another example embodiment, the estimated reputation of the software component may be obtained from a database where reputations regarding trust of software components are maintained. Such a reputation database may be centralized in the sense that it is accessible for a number of computer operating systems and any computer operating system instance may register a software component and an estimated level of trust, i.e. reputation, of the component in the reputation database. Thereby, any other computer operating system may check the reputation of a newly detected software component as taught by this embodiment. Even if a newly detected software component is totally unknown to a computer operating system, it may still have been used in other computer operating systems and be deemed trustworthy based on a good reputation in the reputation database as certified by one or more other computer operating systems.
In another example embodiment, the action 204 of limiting the access to timing information may comprise reducing resolution of the timing information provided to the software component. In another example embodiment, timing information with said reduced resolution may be returned to the software component in response to a time information request from the software component.
The resolution of the timing information may be reduced in several different ways. According to one example embodiment, reducing resolution of the returned timing information may comprise filtering a true timing information by setting the least significant n bits of the true timing information to zero in the returned timing information, the n bits thus being at the end of the complete timing information.
The number n may be freely chosen to determine the degree of resolution, i.e. the higher value of n the lower resolution. For example, if a true timestamp is a binary number with 20 bits, the last 4 bits, being the 4 least significant bits, may be set to zero to obscure the true timestamp. An illustrative example may be as follows:
A true timestamp of 11010010110001101011 is filtered by forming a “manipulated” timestamp of 11010010110001100000, where the 4 least significant bits are set to zero. In further examples, the value of n may be dictated by the trust classification, i.e. a low level of trust implies a high value of n, and vice versa.
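The filtering described above amounts to a simple bit mask; this hypothetical helper reproduces the 20-bit example with n = 4:

```python
def reduce_resolution(true_timestamp, n):
    """Zero the n least significant bits of the true timestamp before it
    is returned, coarsening the resolution the requester can observe."""
    return true_timestamp & ~((1 << n) - 1)

# Reproduces the example: the last 4 bits of the true timestamp are zeroed.
masked = reduce_resolution(0b11010010110001101011, 4)
assert masked == 0b11010010110001100000
```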
The resolution of the timing information may be reduced in another way, as follows. In another example embodiment, reducing resolution of the returned timing information may comprise manipulating a true timing information by adding a random value to the true timing information in the returned timing information. Thereby, the true timing information will be likewise obscured in the returned timing information. In another example embodiment, said limiting may comprise scheduling
processing resources to the software component based on random time intervals. The true time intervals of scheduling processing resources may otherwise provide timing information that could be used by the software component for extracting sensitive information based on the true scheduling time intervals. Scheduling processing resources may be referred to as CPU scheduling, and it is a mechanism by which an operating system basically determines which application threads should run, and in which order.
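A minimal sketch of such randomized scheduling follows; the quantum length, randomization range and function name are illustrative assumptions, not taken from this disclosure:

```python
import random

def next_time_slice_ms(trusted, base_ms=10, rng=random.Random()):
    """Pick the length of the next scheduling quantum: trusted components
    get the normal, consistent time slice, while untrusted ones get a
    randomized length so that slice boundaries leak no stable timing signal."""
    if trusted:
        return base_ms
    return rng.randint(base_ms // 2, base_ms * 2)
```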
When using the latter embodiment, another example embodiment may be that said processing resources comprise one or more Central Processing Units,
CPUs. In another example embodiment, limiting the access to timing information in action 204 may comprise providing CPU accounting with randomized information for a task performed for the software component. The true CPU accounting may otherwise provide timing information that could be used by the software
component for extracting sensitive information based on the true CPU accounting. CPU accounting is basically an operating system functionality for keeping track of how much time an application thread has been executing on a CPU, as opposed to waiting to execute.
An example of how the above-described computer operating system may be arranged in practice will now be described with reference to Fig. 3 where the computer operating system 302 is comprised in a computer 300 where a software component 304 is running as controlled by the computer operating system 302. The computer operating system 302 is shown to comprise a timing functions block 302A, a CPU scheduler 302B, a set of CPUs 302C, a policy database 302D and a heuristics analyzer block 302E. The computer operating system 302 is also able to consult a reputation database 306 to obtain a reputation of the software component 304, if needed.
The software component 304 may or may not require access to precise timing information in order to operate optimally. For example, the software component 304 may be delivered as part of the computer operating system 302, or it may be installed by a user or installed as part of some malicious activity. The software component 304 may request CPU resources from the scheduler 302B, e.g. based on a scheduling priority such as realtime or background. The software component 304 may also request an absolute time or some relative timing information from any of the available timing functions 302A. Throughout this description, the term“timing information” is used to represent both absolute time and relative timing information.
The CPU scheduler 302B assigns CPU resources to the software component 304, typically in intervals of time, also referred to as“time slices”. The term CPU resources may refer to one or more individual CPUs. It is assumed that the computer operating system 302 assigns a trust classification to the software component 304 in the manner described above. The timing functions 302A will respond with timing information to the software component 304, taking into account the trust classification that the requesting software component has been assigned to. The timing functions 302A will query the policy database 302D where the identified software component 304 can be mapped to a trust class and where each trust class is associated to a limitation policy.
The heuristics analyzer 302E can take the software component’s behavior into account as well as user interaction, e.g. a user message “please allow this software” may be received via a command line or a Graphical User Interface, GUI, prompt. The heuristics analyzer 302E may then further report its findings to the reputation database 306 where the reputation for the identified software
component 304 can be updated accordingly.
The reputation database 306 gathers information from any reputation providers and carries out some kind of algorithm to weigh reputation updates. The reputation database 306 may thus provide a reputation per identified software component and recommend a specific trust class regarding access to precise timing, depending on the reputation.
If the trust class of the software component 304 indicates a “low” trust, the CPUs 302C may be instructed by the timing functions 302A to rapidly alter their base frequency, e.g. to make it very difficult for the software component to accurately assess the time spent on executing, whether while trying to perform a side channel attack or while trying to produce an internal timing function. Furthermore, the CPU scheduler 302B may be instructed by the timing functions 302A to apply random-length time slices as well as to partially randomize process accounting, in order to “mislead” the software component about how long, and at what speed, the CPU has actually been executing. Additionally, the timing functions 302A may reduce the time accuracy when responding to requests from the software component 304.
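A random-length time slice of the kind the CPU scheduler 302B might be instructed to apply could be drawn as in the following sketch; the base quantum and the half-to-double bounds are illustrative assumptions, not values from the source:

```python
import random

def next_time_slice_us(base_us=1000, rng=random):
    """Draw each scheduling quantum from an interval around a base value
    instead of using a fixed length, so that an untrusted component
    cannot simply count time slices to reconstruct elapsed time.
    The bounds (half to double the base quantum) are illustrative."""
    return rng.randint(base_us // 2, base_us * 2)
```

Each call yields a different quantum, while the long-run average CPU share of the component remains roughly unchanged.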
Another example of how the above-described computer operating system may be arranged will now be described with reference to Fig. 4, where the computer operating system is implemented as a hypervisor 402 comprised in a computer 400 in which a software component is running as a virtual machine, VM, 404 controlled by the hypervisor 402. This example is similar to the example shown in Fig. 3.
A software component in the form of a virtual machine 404 is in many cases impossible to identify directly, since all software components of virtual machines reside inside a virtual hard disk which is more or less a “black box”.
If a trusted “software agent” or the like can be inserted into the virtual machine, and such an agent can perform identification of the software component, the situation will be similar to Fig. 3. Otherwise, some kind of “best effort” identification may be possible using in-memory scanning, where some parts of a memory containing trusted code can be identified. As a result, timing requests coming from such parts of the memory with trusted code can be allowed access to accurate and true timing information, whereas other parts can only be allowed limited access to timing information with reduced accuracy/resolution.
A more detailed example of how the above-described computer operating system may act in order to impede a side channel attack by a software component controlled by the computer operating system, will now be described with reference to the flow chart in Fig. 5. It is assumed that the computer operating system resides in a computer and that the software component is to be executed in the computer. For example, the computer operating system may be either of the above-described computer operating system 302 and hypervisor 402, without limitation to these examples.
In the same manner as action 200 above, the computer operating system initially receives, detects and/or identifies a new software component to be executed in the computer, in an action 500. In a next action 502, the computer operating system obtains information about the software component’s signature, identity and/or reputation, of which the latter may be accessible from a reputation database. In this action, the computer operating system may detect that the software component is potentially capable of executing a side channel attack based on timing of operations in the computer operating system, as described above for action 202.
The computer operating system then assigns a trust classification to the software component based on the obtained information, in an action 504. The trust classification basically indicates whether the software component can be considered reliable or not with respect to attacks. The trust classification may indicate a level of trust, in a simple example comprising just two levels: trusted or not trusted. Assigning a trust classification has been described in more detail above.
It is then determined, in an action 506, whether the trust classification indicates that the software component is deemed reliable and can be fully trusted or not. It is assumed that the trust classification can indicate either that the software component can be fully trusted, corresponding to the highest level of trust, or that the software component cannot be completely trusted, i.e. the trust classification is at any level below the highest level of trust. If the software component is fully trusted in action 506, another action 508 illustrates that the computer operating system provides timing information, e.g. timestamps or similar, to the software component with unlimited access to timing information. In other words, the computer operating system provides true and accurate timing information which has not been intentionally obscured or manipulated. On the other hand, if the trust classification indicates in action 506 that the software component is not reliable and cannot be fully trusted, the computer operating system fetches a limitation policy that is valid for the trust classification, in another action 510. As stated above, predefined limitation policies for different trust classifications may be accessible from a policy database 302D or 402D. Some examples of limitation policies have been presented above. In general, the limitation policies may dictate varying degrees of limitation of timing access. In another action 512, the computer operating system then limits access to timing information for the software component according to the limitation policy fetched in action 510. This action corresponds to action 204 above.
Once a request for timing information is received from the software component, as indicated by an action 514, the computer operating system in this example filters the true timing information by setting the n least significant bits of the true timing information to zero, in another action 516, so as to reduce resolution and obscure the true and accurate timing information. A final action 518 illustrates that the computer operating system provides timing information with the reduced resolution to the software component in response to the request of action 514. Action 518 corresponds to action 206 above.
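The filtering step of action 516, setting the n least significant bits of the true timing information to zero, can be expressed as a simple bit mask. A minimal sketch, using nanoseconds as an illustrative unit:

```python
def reduce_resolution(true_time_ns, n):
    """Zero the n least significant bits of a timestamp before it is
    returned, so the requesting component only sees time quantized
    to steps of 2**n units."""
    return true_time_ns & ~((1 << n) - 1)
```

With n = 10 the returned value is quantized to steps of 1024 units; for example, `reduce_resolution(1234567, 10)` returns 1233920.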
When using one or more of the above-described embodiments, it is an advantage that it is more difficult, or even impossible, for the software component to execute a useful side channel attack since its access to true and accurate timing information has been limited. Some further features and advantages will now be discussed.
The computer operating system is able to control resolution of timing information for its users and software components, depending on their reliability.
The computer operating system is able to dynamically change resolution of timing information for any user and software component.
The computer operating system is able to read and apply a predefined limitation policy to determine the appropriate resolution of time.

The computer operating system is able to prevent the use of “homegrown” timers (such as loops or similar) by randomizing CPU accounting and/or controlling resource scheduling by adding random time intervals. Randomizing CPU accounting may include using a function that adds randomness to the normally very exact CPU accounting functionality. In essence, a homegrown timer based on a tight loop with a counter may ask the operating system how long it was actually running and use that to deduce timing information. By adding a specified randomness to the value reported in CPU accounting, it becomes more difficult to determine the actual time spent on executing a computer operation. Normally, scheduling is applied to be as consistent as possible, e.g. to ensure that music playback does not stutter or that background images of a game move evenly. By doing the opposite, e.g. letting application threads run on the CPU for uneven amounts of time and altering their priority in queues, for software components which are not trusted, it becomes harder for them to keep track of true time.
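Adding a specified randomness to the value reported in CPU accounting, so that a tight-loop “homegrown” timer cannot recover precise execution time from it, might be sketched like this; the jitter bound of ±10% is an illustrative assumption:

```python
import random

def randomized_cpu_accounting(actual_runtime_ns, jitter=0.1, rng=random):
    """Report how long a task actually ran, perturbed by bounded random
    noise, so the reported value cannot be used to deduce exact timing.
    The true runtime stays within +/- jitter of the reported value."""
    factor = 1.0 + rng.uniform(-jitter, jitter)
    return int(actual_runtime_ns * factor)
```

A loop-based timer dividing its counter by this reported runtime would obtain a rate that varies from query to query, defeating calibration.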
The computer operating system is able to control resolution of time for its hardware components where supported by hardware.
The computer operating system is able to manage old hardware by memory mapping timer registers so that they generate a page fault and can be emulated. There are many possible sources of timing information in a Personal Computer, PC, and legacy hardware includes timer chips that can be programmed and queried to keep track of time. Memory mapping is one way by which hardware can be isolated from a running application; when the application tries to read/write from/to a memory address it believes is connected to the timer chip, a modern CPU’s memory management unit, MMU, will intercept the request and try to decode and translate the address specified. The translation table used by the MMU can include instructions to generate a software “fault”, which allows an operating system to intercept the request and “emulate” the actual hardware in software. This could allow the operating system to apply a filter to the timing hardware and return timer data that has been manipulated based on the running application.
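The fault-and-emulate mechanism can be modelled in miniature: a read that would hit the timer register is intercepted, and the operating system returns filtered data instead of the raw hardware value. This is a toy sketch in which the interception is modelled as an ordinary method call rather than a real MMU fault, and all names are hypothetical:

```python
class EmulatedTimerRegister:
    """Stand-in for a memory-mapped legacy timer register whose reads
    generate a fault and are emulated in software, as described above."""

    def __init__(self, hardware_read, filter_fn):
        self._hardware_read = hardware_read  # the real timer chip's value
        self._filter = filter_fn             # per-application timing policy

    def read(self):
        # In reality control arrives here via an MMU-generated page fault;
        # in this sketch the trap is simply a method call.
        return self._filter(self._hardware_read())
```

For an untrusted application the filter could, for instance, zero the low bits of the counter, while a trusted application would be given an identity filter and see the raw value.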
The computer operating system is able to add varying degrees of randomness to the provided timing information. The computer operating system is able to control and continuously vary the actual CPU frequency while reporting a different CPU frequency to a running application of the software component. Modern CPUs do not run at a fixed frequency. To save energy, there is functionality in the CPU as well as in the operating system that can slow down the CPU frequency. The CPU frequency can also be adjusted based on thermal levels in the CPU or based on whether a power cord is attached or not. If the operating system reduces the frequency of a CPU which is executing an application of the untrusted software component, it should also hide the reduced frequency, so that if the application queries for the current frequency it gets a randomized value (e.g. based on the original value but with a randomness factor added) as response.
The computer operating system is able to detect suspicious behavior of a software component.
The solution supports fingerprinting of code fragments in a memory (such as a VM memory) and sharing with a remote reputation database. Fingerprinting is the generic term for making an immutable identifier for a piece of hardware or software. In this case it is about being able to identify that an application executing on one machine is identical to an application executing on another machine (so that trust or reputation can be conveyed between machines). The solution supports fingerprinting of code based on binary on disk.
Fingerprinting can be done in memory, after the application of the software component has been read from disk and put in memory by the operating system loader and potentially been altered by the application itself. Fingerprinting can also be performed directly on the binary file(s) containing the application of the software component. The latter is far simpler but may not work in all scenarios, e.g. when the file resides inside a virtual machine. The memory fingerprinting may still be possible to do from outside the virtual machine.
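Fingerprinting a binary, the simpler of the two approaches, might be sketched as follows. The choice of SHA-256 as the immutable identifier is an assumption for illustration; the source does not name a particular hash:

```python
import hashlib

def fingerprint(code_bytes):
    """Produce an immutable identifier for a piece of code (a binary
    read from disk, or a region of memory) so that identical code can
    be recognized across machines and looked up in a reputation
    database. SHA-256 is one reasonable choice of hash."""
    return hashlib.sha256(code_bytes).hexdigest()
```

Identical code yields the identical fingerprint on any machine, which is what allows trust or reputation to be conveyed between machines.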
The solution supports reporting of suspicious behavior, or long streaks of unsuspicious behavior, to the reputation database. The solution supports manual requests to give a binary better timer access, e.g. if a problem is detected. For example, a small menu option by the clock or a command line can give status information about running processes, such as “this application seems to be using precision timer information but has not been granted access to use it”, along with an option to manually override and say “this application should have full/limited/no timer access”.
The solution supports mapping of a code fragment in memory to a likely binary in the case of VMs (where the VM is not fully trusted, an end user can still report), and the subsequent request to give a binary better timer access if a problem is detected. This relates to fingerprinting. For example, if a binary has a “good” reputation or has been manually set to “allow full timer access”, there may be a problem with virtual machines in that the host computer and operating system do not really know what the guest operating system inside the guest virtual machine is loading. This means that the operating system cannot directly map from a file on disk onto an application running inside the virtual machine. But using fingerprinting mechanisms, the host operating system may be able to connect a piece of memory inside the VM to a binary file based on portions of them having the same fingerprint.
The solution supports downloading indications of how many users have observed problems, combined with reported unsuspicious activity, for a specific binary or code fragment. This is the connection to the reputation database: fingerprints are stored along with statistics of how code associated with those fingerprints may have 1) behaved heuristically, 2) been configured by users, and/or 3) been reported by users to not work with certain settings. The solution supports dynamically altering the limitation policy if a suspicious activity is detected from a software component.
The solution also supports applying a default limitation policy with low resolution, e.g. for any unknown or unidentified software component. The solution supports alerting a trusted user if a potentially unreliable software component is requesting timing information and the resolution of the timing information is lowered by the limitation policy.
The block diagram in Fig. 6 illustrates a detailed but non-limiting example of how a computer operating system 600 may be structured to bring about the above-described solution and embodiments thereof. The computer operating system 600 may be configured to operate according to any of the examples and embodiments for employing the solution as described herein, where appropriate and as follows. The computer operating system 600 is shown to comprise a processor P and a memory M, said memory comprising instructions executable by said processor P whereby the computer operating system 600 can act as described herein. The computer operating system 600 also comprises a communication circuit C with suitable equipment for receiving requests and transmitting responses in the manner described herein.
The computer operating system 600 corresponds to the computer operating systems 302 and 402 in Figs 3 and 4, respectively. The communication circuit C may be configured for communication with software components and with a reputation database as described herein, using suitable protocols and messages, while the embodiments herein are not limited to using any specific types of messages or protocols for such communication.
The computer operating system 600 comprises means configured or arranged to basically perform at least some of the actions in Figs 2 and 5, and more or less as described above in various examples and embodiments. In Fig. 6, the computer operating system 600 is arranged or configured to impede a side channel attack by a software component 602 controlled by the computer operating system, as follows.
The computer operating system 600 is configured to detect that the software component is potentially capable of executing a side channel attack based on timing of operations in the computer operating system. This operation may be performed by a detecting module 600A in the computer operating system 600, e.g. in the manner described above for action 202. The detecting module 600A could alternatively be named an identifying module or discovering module.
The computer operating system 600 is further configured to limit access to timing information of said operations for the software component. This operation may be performed by a controlling module 600B in the computer operating system 600, e.g. as described above for action 204. The controlling module 600B could alternatively be named a limiting module or timing access module.
The computer operating system 600 is also configured to provide timing information to the software component according to said limited access to timing information. This operation may be performed by a providing module 600C in the computer operating system 600, basically as described above for action 206. The providing module 600C could alternatively be named a response module or information module.
It should be noted that Fig. 6 illustrates various functional modules or units in the computer operating system 600, and the skilled person is able to implement these functional modules in practice using suitable software and hardware. Thus, the solution is generally not limited to the shown structures of the computer operating system 600, and the functional modules or units 600A-C therein may be configured to operate according to any of the features and embodiments described in this disclosure, where appropriate.
The functional modules or units 600A-C described above could thus be implemented in the computer operating system 600 by means of hardware and program modules of a computer program comprising code means which, when run by the processor P, cause the computer operating system 600 to perform at least some of the above-described actions and procedures.
In Fig. 6, the processor P may comprise a single Central Processing Unit (CPU), or could comprise two or more processing units such as CPUs. For example, the processor P may include a general purpose microprocessor, an instruction set processor and/or related chip sets and/or a special purpose microprocessor such as an Application Specific Integrated Circuit (ASIC). The processor P may also comprise a storage for caching purposes.
Each computer program may be carried by a computer program product in the computer operating system 600 in the form of a memory having a computer readable medium and being connected to the processor P. The computer program product or memory in the computer operating system 600 may thus comprise a computer readable medium on which the computer program is stored e.g. in the form of computer program modules or the like. For example, the memory may be a flash memory, a Random-Access Memory (RAM), a Read-Only Memory (ROM), an Electrically Erasable Programmable ROM (EEPROM) or Hard Drive storage (HDD), and the program modules could in alternative embodiments be distributed on different computer program products in the form of memories within the computer operating system 600.
The solution described herein may thus be implemented in the computer operating system 600 by a computer program comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the actions according to any of the above embodiments and examples, where appropriate. The solution may also be implemented in a carrier containing the above computer program, wherein the carrier is one of an electronic signal, an optical signal, a radio signal, or a computer readable storage product or computer program product.
While the solution has been described with reference to specific exemplifying embodiments, the description is generally only intended to illustrate the inventive concept and should not be taken as limiting the scope of the solution. For example, the terms “computer operating system”, “side channel attack”, “software component”, “timing information”, “trust classification”, “processor scheduler”, and “limitation policy” have been used throughout this disclosure, although any other corresponding entities, functions, and/or parameters could also be used having the features and characteristics described here. The solution is defined by the appended claims.

Claims

1. A method performed by a computer operating system (302) for impeding a side channel attack by a software component (304) controlled by the computer operating system, the method comprising:
- detecting (202) that the software component is potentially capable of executing a side channel attack based on timing of operations in the computer operating system,
- limiting (204) access to timing information of said operations for the software component, and
- providing (206) timing information to the software component according to said limited access to timing information.
2. A method according to claim 1, wherein said detecting comprises assigning a trust classification to the software component based on at least one of its signature or identity and an estimated reputation of the software component.
3. A method according to claim 2, wherein said limiting is performed based on a predefined limitation policy (302D) for the assigned trust classification.
4. A method according to claim 2 or 3, wherein the estimated reputation of the software component is obtained from a database (306) where reputations regarding trust of software components are maintained.
5. A method according to any of claims 1-4, wherein said limiting comprises reducing resolution of the timing information provided to the software component.
6. A method according to claim 5, wherein timing information with said reduced resolution is returned to the software component in response to a time information request from the software component.
7. A method according to claim 6, wherein reducing resolution of the returned timing information comprises filtering a true timing information by setting the least significant n bits of the true timing information to zero in the returned timing information.
8. A method according to claim 6, wherein reducing resolution of the returned timing information comprises manipulating a true timing information by adding a random value to the true timing information in the returned timing information.
9. A method according to any of claims 1-8, wherein said limiting comprises scheduling processing resources to the software component based on random time intervals.
10. A method according to claim 9, wherein said processing resources comprise one or more Central Processing Units, CPUs (302C).
11. A method according to any of claims 1-10, wherein said limiting comprises providing CPU accounting with randomized information for a task performed for the software component.
12. A computer operating system (600) arranged to impede a side channel attack by a software component (602) controlled by the computer operating system, wherein the computer operating system is configured to:
- detect (600A) that the software component is potentially capable of executing a side channel attack based on timing of operations in the computer operating system,
- limit (600B) access to timing information of said operations for the software component, and
- provide (600C) timing information to the software component according to said limited access to timing information.
13. A computer operating system (600) according to claim 12, wherein the computer operating system is configured to perform said detecting by assigning a trust classification to the software component based on at least one of its signature or identity and an estimated reputation of the software component.
14. A computer operating system (600) according to claim 13, wherein the computer operating system is configured to perform said limiting based on a predefined limitation policy for the assigned trust classification.
15. A computer operating system (600) according to claim 13 or 14, wherein the computer operating system is configured to obtain the estimated reputation of the software component from a database where reputations regarding trust of software components are maintained.
16. A computer operating system (600) according to any of claims 12-15, wherein the computer operating system is configured to perform said limiting by reducing resolution of the timing information provided to the software component.
17. A computer operating system (600) according to claim 16, wherein the computer operating system is configured to return timing information with said reduced resolution to the software component in response to a time information request from the software component.
18. A computer operating system (600) according to claim 17, wherein the computer operating system is configured to reduce resolution of the returned timing information by filtering a true timing information by setting the least significant n bits of the true timing information to zero in the returned timing information.
19. A computer operating system (600) according to claim 17, wherein the computer operating system is configured to reduce resolution of the returned timing information by adjusting a true timing information by adding a random value to the true timing information in the returned timing information.
20. A computer operating system (600) according to any of claims 12-19, wherein the computer operating system is configured to perform said limiting by scheduling processing resources to the software component based on random time intervals.
21. A computer operating system (600) according to claim 20, wherein said processing resources comprise one or more Central Processing Units, CPUs.
22. A computer operating system (600) according to any of claims 12-21, wherein said limiting comprises providing CPU accounting with randomized information for a task performed for the software component.
23. A computer program comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the method according to any one of claims 1-11.
24. A carrier containing the computer program of claim 23, wherein the carrier is one of an electronic signal, optical signal, radio signal, or computer readable storage medium.
PCT/SE2018/051027 2018-10-05 2018-10-05 Method and computer operating system for impeding side channel attacks WO2020071976A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/SE2018/051027 WO2020071976A1 (en) 2018-10-05 2018-10-05 Method and computer operating system for impeding side channel attacks


Publications (1)

Publication Number Publication Date
WO2020071976A1

Family

ID=63858019


Country Status (1)

Country Link
WO (1) WO2020071976A1 (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014065801A1 (en) * 2012-10-25 2014-05-01 Empire Technology Development Llc Secure system time reporting
US20150082434A1 (en) * 2012-03-07 2015-03-19 The Trustees Of Columbia University In The City Of New York Systems and methods to counter side channels attacks
US20150350239A1 (en) * 2013-12-12 2015-12-03 Empire Technology Development Llc Randomization of processor subunit timing to enhance security
US9754103B1 (en) * 2014-10-08 2017-09-05 Amazon Technologies, Inc. Micro-architecturally delayed timer


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114268479A (en) * 2021-12-14 2022-04-01 北京奕斯伟计算技术有限公司 Processing method and device for defending channel attack on shared storage side and electronic equipment
CN114268479B (en) * 2021-12-14 2023-08-18 北京奕斯伟计算技术股份有限公司 Processing method and device for defending shared storage side channel attack and electronic equipment


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18786440

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18786440

Country of ref document: EP

Kind code of ref document: A1