WO2022128142A1 - Apparatus and method for managing access to data memory by executable codes based on execution context - Google Patents


Info

Publication number
WO2022128142A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
memory
kernel
access
processor
Prior art date
Application number
PCT/EP2020/087352
Other languages
French (fr)
Inventor
Igor STOPPA
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to PCT/EP2020/087352
Priority to CN202080107892.9A (CN116635855A)
Publication of WO2022128142A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/52Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity ; Preventing unwanted data erasure; Buffer overflow
    • G06F21/53Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity ; Preventing unwanted data erasure; Buffer overflow by executing in a restricted environment, e.g. sandbox or secure virtual machine
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/55Detecting local intrusion or implementing counter-measures
    • G06F21/552Detecting local intrusion or implementing counter-measures involving long-term monitoring or reporting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/55Detecting local intrusion or implementing counter-measures
    • G06F21/554Detecting local intrusion or implementing counter-measures involving event detection and direct action
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/62Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6281Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database at program execution time, where the protection is within the operating system
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/70Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
    • G06F21/71Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure computing or processing of information
    • G06F21/74Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure computing or processing of information operating in dual or compartmented mode, i.e. at least one secure mode

Definitions

  • the disclosure relates generally to computing systems; more specifically, the disclosure relates to an apparatus and a method for managing access to a data memory of a computing system for protection against unwarranted executable codes.
  • the disclosure also relates to a non-transitory computer readable media for performing the aforesaid method.
  • Computing apparatus, including microprocessor systems and microcontroller systems, operate by retaining most of their transient state in a memory.
  • computer applications typically need to allocate memory and store data within a computing apparatus on which they are hosted.
  • User applications are typically supported by an operating system (OS) and need to request the OS to allocate various types of memory on their behalf.
  • Data stored in certain types of memory of a given system often remains unchanged for long periods of time and may be of high importance to the security of the given system. These data can become plausible targets for hackers and computer malware. Unauthorized modification can lead to system down time or loss of monetary value.
  • the Linux® kernel typically runs at the EL1 exception level.
  • At the EL1 exception level, all data is theoretically accessible to any function, regardless of whether or not the function has a legitimate reason to access the data. Certain data holds particular relevance, either with regard to protecting the system itself, or purely as information that might be valuable for an attacker to exfiltrate. It is therefore highly desirable to limit access to the data exclusively to those pieces of code that are specifically supposed or required to access the data.
  • a defence mechanism known in the prior art against such malicious attacks on data stored in memory is to deploy one or more Memory Management Units (MMUs).
  • a given MMU can limit access to certain memory regions, thereby trying to prevent an attack (as previously described).
  • a program e.g. an operating system or a hypervisor
  • the CPU may configure the MMU to circumscribe sets of addresses accessible by programs running on the CPU.
  • the MMU can be reprogrammed, since an attacker that has gained the capability of accessing (e.g. writing) the memory can use the same capability to re-program or disable a barrier established by the MMU.
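  • The reprogramming weakness described above can be illustrated with a minimal user-space analogue in C. Here POSIX mprotect() stands in for direct MMU programming (an assumption made purely for illustration; the disclosure itself concerns kernel-level page tables, not this API):

```c
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Returns 1 if the write barrier could be lifted again by code running
 * with the same privileges that established it, 0 on any failure. */
int wp_barrier_demo(void) {
    long page = sysconf(_SC_PAGESIZE);
    char *buf = aligned_alloc((size_t)page, (size_t)page);
    if (!buf) return 0;
    strcpy(buf, "secret");

    /* Establish the "barrier": the page becomes read-only. */
    if (mprotect(buf, (size_t)page, PROT_READ) != 0) return 0;

    /* An attacker with in-process code execution simply re-programs it. */
    if (mprotect(buf, (size_t)page, PROT_READ | PROT_WRITE) != 0) return 0;
    buf[0] = 'S';                       /* the write succeeds again */
    int ok = (buf[0] == 'S');
    free(buf);
    return ok;
}
```

The same capability that erected the barrier suffices to dismantle it, which is why the disclosure moves the enforcement into a higher-privilege execution environment.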
  • the TEE may require a separate implementation of some functionality, which might be already available in the kernel. This is often the case, due to licensing, when the TEE is for example either fully proprietary or has a licence which is not compatible with the kernel.
  • multiple operations within the same kernel context that make use of such secret data located within the TEE may require either a specialized TEE serialization API (which is usually not available) or multiple TEE invocations, which may cause additional overhead due to the repeated transitions between different exception levels.
  • the disclosure seeks to provide an apparatus and a method for managing access to data memory by executable codes based on execution context.
  • An aim of the disclosure is to provide a solution that at least partially overcomes the problems encountered in the prior art, and to provide an apparatus that is able to make enhancements to a kernel, to utilize a higher-privilege execution environment, such as a trusted execution environment (TEE) or a hypervisor (EL2 in ARM parlance), for granting to, or removing from, the kernel access to memory pages of a data context, and to leverage support for isolation of user-space memory mapping between cores and threads where available, for example on the x86_64 architecture.
  • the disclosure also seeks to provide a solution to the existing drawbacks of high execution overhead and a need to replicate the kernel in the TEE, as in known techniques.
  • the disclosure provides an apparatus comprising a processor coupled to a data memory, wherein the processor and the data memory are configured to implement a kernel in which an operating system is executed.
  • the kernel is configured to execute a memory manager that determines access that the kernel has to the data memory.
  • the processor is configured to provide a higher-privilege execution environment that is managed by the memory manager that controls access that one or more executable codes have to one or more portions of the data memory.
  • the kernel is configured to support a plurality of data contexts that are accessible to the one or more executable codes, while denying the one or more executable codes access to data contexts that are unrelated to the one or more executable codes.
  • the disclosure provides a method for (namely, a method of) operating an apparatus comprising a processor coupled to a data memory, wherein the processor and the data memory are configured to implement a kernel in which an operating system is executed.
  • the method includes:
  • the disclosure provides a non-transitory computer readable media having stored thereon program instructions that when executed by a processor cause the processor to perform the method.
  • the apparatus and method of the disclosure provide code reusability by performing the needed operations for protecting the data in a secured way within the kernel, instead of doing so within the higher-privilege execution environment, such as a trusted execution environment (TEE), so that there is no need to replicate required functionality of the kernel inside the TEE. Furthermore, the apparatus and method of the disclosure reduce execution overhead by granting or revoking the kernel's access to secret data in the data memory.
  • the kernel enters or exits one or more critical sections, and only at those stages is the kernel allowed to access the data, so that the overhead becomes tied to entering and exiting the one or more critical sections, instead of being proportional to the number of operations on data within the one or more critical sections.
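  • A hedged sketch of this cost model in C: tee_grant_access() and tee_revoke_access() are hypothetical stand-ins for the real privilege transitions into the higher-privilege execution environment; they are not APIs from the disclosure.

```c
#include <stdint.h>

static int tee_calls;                         /* counts privilege transitions */

static void tee_grant_access(int ctx)  { (void)ctx; tee_calls++; }
static void tee_revoke_access(int ctx) { (void)ctx; tee_calls++; }

/* Perform n data operations inside a single critical section and return
 * how many TEE transitions were needed: always 2, independent of n. */
int critical_section_demo(int n) {
    uint32_t data[4] = {0};
    tee_calls = 0;
    tee_grant_access(1);              /* enter: kernel is granted access    */
    for (int i = 0; i < n; i++)
        data[i % 4] += 1;             /* many operations, no TEE involvement */
    tee_revoke_access(1);             /* exit: access is revoked again      */

    /* Sanity check: all n increments actually landed in the data. */
    if ((int)(data[0] + data[1] + data[2] + data[3]) != n) return -1;
    return tee_calls;
}
```

Whether the section performs 100 or 1000 operations, the number of exception-level transitions stays at two, which is the overhead reduction claimed above.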
  • the memory manager accesses an in-kernel memory management library for implementing data segregation of data stored in the data memory, wherein data stored in the data memory that belongs to a given data context is placed in corresponding data memory pages which are exclusively reserved for the given data context, and wherein other data contexts have other corresponding data pages that are non-overlapping with the data pages of the given data context.
  • the segregation of data stored in the data memory helps in modifying the data context without modifying or even accessing the other data contexts.
  • By leveraging these non-overlapping properties, it is possible to have multiple data contexts, each accessible only to the code that is meant to deal with it, while preventing access by unrelated code, even while the aforesaid data context is being accessed by its legitimate user. Moreover, all of this may be done primarily from within a kernel exception level, keeping the involvement of the higher-privilege execution environment at a minimum.
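  • The page-exclusive placement can be sketched as follows. The types, names and pool sizes below are illustrative assumptions, not structures from the disclosure; the point is only that objects from different contexts never share a page:

```c
#include <stdint.h>
#include <stddef.h>

#define PAGE_SIZE 4096u

/* Each data context draws objects only from pages reserved for that
 * context, so contexts are "orthogonal at page level". */
typedef struct {
    uint8_t pool[4 * PAGE_SIZE];   /* pages exclusively owned by this context */
    size_t  used;
} data_context;

void *ctx_alloc(data_context *ctx, size_t size) {
    if (ctx->used + size > sizeof ctx->pool) return NULL;
    void *p = ctx->pool + ctx->used;
    ctx->used += size;
    return p;
}

/* 1 if address p falls inside pages owned by ctx, else 0. */
int ctx_owns(const data_context *ctx, const void *p) {
    uintptr_t a = (uintptr_t)p, lo = (uintptr_t)ctx->pool;
    return a >= lo && a < lo + sizeof ctx->pool;
}

/* Demo: allocations from two contexts land in disjoint page ranges. */
int segregation_demo(void) {
    static data_context a, b;
    void *pa = ctx_alloc(&a, 64);
    void *pb = ctx_alloc(&b, 64);
    return pa && pb && ctx_owns(&a, pa) && !ctx_owns(&a, pb) && !ctx_owns(&b, pa);
}
```

Because no page ever holds data from two contexts, page-level protection changes for one context cannot affect another.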
  • the memory manager segregates data in the data memory into data that is at least one of selectively write protectable and selectively read protectable.
  • Such selective segregation allows different access levels (namely, types) to be provided to different data depending on, for example, the sensitivity of the data or the like.
  • the higher-privilege execution environment when in operation dynamically changes, over time, the availability of certain data memory pages between a plurality of data contexts provided by the apparatus, to selectively allow or deny access of the kernel to certain sets of data memory pages, as a function of whether or not a given executable code is active at a given time in the apparatus.
  • the higher-privilege execution environment can selectively allow or deny access to a certain set of pages, based on them being associated to the kernel context which is currently active. If a certain executable code does not have a need to read or write certain data, such data can be kept inaccessible to such executable code.
  • the apparatus is configured to use a separate memory map for segregating each corresponding data context.
  • the kernel has a primary memory map, wherein all readable data and executable code are recorded into the primary memory map.
  • the primary memory map operates at the kernel level, and the data mapped into the primary memory map may be data requiring only write protection; thus the data in the primary map is not affected by the higher-privilege execution environment.
  • the apparatus is configured to provide an isolation of user space memory mapping between CPU cores of the processor and their associated hardware threads to assist the memory manager to manage access of the executable codes to data contexts.
  • the user space memory map is local to the CPU core of the processor which needs access to the data. This prevents executable codes which are being executed in other CPU cores of the processor, possibly a compromised core, from accessing the data.
  • the higher-privilege execution environment includes one or more executable tools that are useable to validate a status of the kernel, and to refuse requests coming from the kernel when security of the kernel becomes compromised.
  • the higher-privilege execution environment may determine whether or not the kernel is compromised, and may deny access to data to such compromised kernel, and thereby the higher-privilege execution environment (such as TEE) can prevent exploitation of data by the compromised kernel.
  • the apparatus is configured to compute a hash of critical data of the kernel, wherein the apparatus determines that the security of the kernel has been compromised when hashes of the critical data are mutually inconsistent.
  • the hash may be computed either periodically or just-in-time, in an event-driven fashion.
  • Such periodical and/or event-driven checking determines whether or not the kernel is compromised, thereby preventing possible data exploitation by a compromised kernel at an early stage.
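  • A minimal sketch of such an integrity check: hash the critical data once to obtain a baseline, re-hash later (periodically or event-driven), and flag the kernel as compromised when the hashes disagree. FNV-1a is used here purely as a simple stand-in; the disclosure does not specify a hash algorithm, and a real deployment would use a cryptographic hash:

```c
#include <stdint.h>
#include <stddef.h>

/* FNV-1a, a simple non-cryptographic hash (illustration only). */
uint64_t fnv1a(const void *data, size_t len) {
    const uint8_t *p = data;
    uint64_t h = 0xcbf29ce484222325ULL;        /* FNV offset basis */
    for (size_t i = 0; i < len; i++) {
        h ^= p[i];
        h *= 0x100000001b3ULL;                 /* FNV prime */
    }
    return h;
}

/* Returns 1 when the stored baseline no longer matches the data. */
int integrity_compromised(const void *data, size_t len, uint64_t baseline) {
    return fnv1a(data, len) != baseline;
}

/* Demo: tampering with the critical data is detected. */
int integrity_demo(void) {
    uint8_t critical[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    uint64_t baseline = fnv1a(critical, sizeof critical);
    if (integrity_compromised(critical, sizeof critical, baseline)) return 0;
    critical[3] = 0x99;                        /* simulated tampering */
    return integrity_compromised(critical, sizeof critical, baseline);
}
```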
  • the apparatus uses a hypervisor to manage the higher-privilege execution environment.
  • the hypervisor creates and manages multiple process spaces, and thus can isolate a process, for example a process associated with an operating system, in a separate process space, to enable the higher-privilege execution environment, such as a TEE, to provide different access levels to various executable codes for one or more portions of the data memory.
  • the hypervisor may further enhance the security by further preventing the kernel gaining access to the TEE unnecessarily.
  • the kernel is based on Linux®.
  • FIG. 1 is a schematic illustration of an apparatus for managing access to a data memory, in accordance with an implementation of the present disclosure
  • FIG. 2 is a schematic illustration of a trusted execution environment utilizing segregated data contexts, in accordance with an implementation of the disclosure
  • FIG. 3 is a schematic illustration of a memory map providing a mapping scheme for kernel data, in accordance with an implementation of the disclosure.
  • FIG. 4 is a flowchart listing steps involved in a method for managing access to a data memory, in accordance with an implementation of the disclosure.
  • an underlined number is employed to represent an item over which the underlined number is positioned or an item to which the underlined number is adjacent.
  • a non-underlined number relates to an item identified by a line linking the nonunderlined number to the item.
  • the non-underlined number is used to identify a general item at which the arrow is pointing.
  • references in this specification to “one embodiment” or “an embodiment” mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure.
  • appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
  • the terms “a” and “an” herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced items.
  • various features are described which may be exhibited by some embodiments and not by others.
  • various requirements are described which may be requirements for some embodiments but not for other embodiments.
  • the disclosed implementations may be implemented, in some cases, in hardware, firmware, software, or any combination thereof.
  • the disclosed implementations may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors.
  • a machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or other media device).
  • FIG. 1 is a schematic illustration of an apparatus 100 for protection of a data memory 102 therein, in accordance with an embodiment of the present disclosure.
  • the apparatus 100 may be employed in a variety of computing devices such as laptops, computers, smartphones, palmtops, tablets, and the like.
  • the apparatus 100 can also be implemented as an industrial sensor, an actuator, an Internet of Things (IoT) device, a network apparatus, a wearable terminal device, a drone, a device integrated into an automobile, a television, an embedded terminal device, a cloud device, etc.
  • the terms “apparatus,” “computing device” and “computing system” have been interchangeably used without any limitations.
  • the apparatus 100 comprises a processor 104, also referred to as Central Processing Unit (CPU).
  • the apparatus 100 further comprises an operating system 106, a memory manager 108 and a hypervisor 110.
  • the processor 104 provides a higher-privilege execution environment 112 that is managed by the memory manager 108.
  • the term “managed” may be interpreted to mean that the higher-privilege execution environment 112 may be “steered” or “influenced” by the memory manager 108, such that the higher-privilege execution environment 112 retains a certain level of independence, to vet and possibly reject inconsistent/incorrect requests.
  • the data memory 102 in the apparatus 100, provides a user space memory map 114 (hereinafter, sometimes referred to as memory map 114).
  • the processor 104 may have a plurality of CPU cores (hereinafter, sometimes referred to as “cores”).
  • The processor 104 is shown to include four CPU cores, namely a first core 116, a second core 118, a third core 120 and a fourth core 122. It may be appreciated that the number of cores shown is exemplary only and shall not be construed as limiting the disclosure in any manner.
  • the processor 104 is in communication with various elements, including the data memory 102, in the apparatus 100 through a first communication link 124 and a second communication link 126.
  • the term “data memory” refers to any appropriate type of computer memory capable of storing and retrieving computer program instructions or data.
  • the data memory 102 may be one of, or a combination of, various types of volatile and non-volatile computer memory such as for example read only memory (ROM), random access memory (RAM), cache memory, magnetic or optical disk, or other types of computer operable memory capable of retaining computer program instructions and data.
  • the data memory 102 is configured to store software program instructions or software programs along with any associated data as may be useful for the apparatus 100.
  • the software programs stored in the data memory 102 may be organized into various software modules or components which may be referred to using terms based on the type or functionality provided by each software component.
  • the software components may include an operating system (OS), a hypervisor, a device or other hardware drivers, and/or various types of user applications such as a media player, an electronic mail application, a banking application, etc.
  • processor may refer to any type of computational element, such as but not limited to, a microprocessor, a microcontroller, a complex instruction set computing (CISC) microprocessor, a reduced instruction set (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, or any other type of processor or processing circuit, and refer to any device or portion of a device that processes electronic data, e.g., from registers and/or memory to transform that electronic data into other electronic data that, e.g., may be stored in registers and/or memory.
  • processor is intended to include multi-core processors that may comprise two or more independent processors (referred to as “cores”) that may execute instructions contemporaneously.
  • the processor 104 may be a single processor with a single core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiple cores, or any combination thereof.
  • the processor 104 is in data communication with the data memory 102.
  • the processor 104 is configured to read non-transient program instructions from the data memory 102 and perform examples of the methods and processes disclosed herein.
  • the software components from the data memory 102 may be executed separately or in combination by the processor 104 within collections of computing resources referred to as processes (or user spaces).
  • the term "process” refers to the collection of computing resources accessible to one or more software programs executing code therein.
  • a "process” is an execution context that is managed by an operating system. The operating system, among other things, controls an execution of various processes.
  • Each process, or the user space, is maintained separately by the processor 104 and includes a collection of computing resources.
  • the collection of computing resources associated with a process are accessible to software programs executing within the process and may include resources such as a virtual memory space and/or hardware component(s).
  • the processor 104 is configured to separate, and when required isolate each process from other processes such that code executing in one process may be prevented from accessing or modifying the computing resources associated with a different process.
  • the processor 104 and the data memory 102 are configured to implement a kernel in which one or more processes of the operating system 106 are executed.
  • operating system refers to a system software that provides interface between the user and the hardware.
  • An operating system (OS) is a type or category of software program designed to abstract the underlying computer resources and provide services to ensure applications are running properly. Any suitable software program, such as a Linux™ OS, Windows™ OS, Android™, iOS, or another operating system or applications framework, is appropriate for use as the kernel or OS kernel.
  • An OS may be implemented as a single software program or it may be implemented with a central application to handle the basic abstraction and services with a collection of additional utilities and extensions to provide a larger set of functionalities.
  • kernel relates to a central application portion of the operating system. The kernel is adapted to execute at an intermediate privilege level and to manage the lifecycle of, and allocate resources for, the user spaces/processes.
  • the kernel is based on Linux®. It may be appreciated that the term “Linux” as used in the present disclosure is intended to mean, unless the context suggests otherwise, any Linux-based operating system employing a Linux, Unix, or Unix-like kernel. It may be understood that such a kernel also covers Android™-based phones, as long as the Android™ OS uses the Linux kernel.
  • the kernel is configured to execute the memory manager 108 that determines access that the kernel has to the data memory 102.
  • the memory manager 108 is a Memory Management Unit (MMU) that is implemented for protection of the data memory 102.
  • the memory manager 108 has its primary function as a translating element, which converts memory addresses of one or more virtual address spaces used by running software to one or more physical address spaces, representing the actual arrangement of data in the data memory 102.
  • the virtual address space is a set of virtual addresses made available for an executable code, that maps to the physical address space with a corresponding set of virtual addresses.
  • the translation function of the memory manager 108 is performed primarily by using a set of address translation tables.
  • the address translation tables may include a plurality of data memory pages (hereinafter, sometimes referred to as “memory pages” or simply “pages” and discussed later in more detail with reference to FIG. 2).
  • Such memory pages may be contiguous blocks in the virtual memory and may be represented as a single unit in the page translation table.
  • the size of the memory page depends on the architecture of the processor 104. Traditionally, the minimum granularity of a memory page is 4096 bytes, i.e. 4 KiB.
  • the address translation tables may help in locating the corresponding physical page frame which backs a memory page in the physical memory. Moreover, the address translation tables may also determine whether a page frame is not available, as with on-demand paging.
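  • The arithmetic behind this translation can be shown with a toy, single-level table (a real MMU such as the memory manager 108 walks multi-level tables; the table size and layout below are assumptions for illustration). A virtual address is split into a virtual page number and an offset, the page number is looked up, and the result is recombined with the offset:

```c
#include <stdint.h>

#define PAGE_SHIFT 12                    /* 4096-byte pages, as above */
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define NPAGES     16                    /* toy table size            */

/* Translate a virtual address through a single-level table mapping
 * virtual page numbers to physical frame numbers. */
uint64_t translate(const uint64_t table[NPAGES], uint64_t vaddr) {
    uint64_t vpn    = vaddr >> PAGE_SHIFT;      /* virtual page number */
    uint64_t offset = vaddr & (PAGE_SIZE - 1);  /* offset within page  */
    if (vpn >= NPAGES) return UINT64_MAX;       /* fault: no mapping   */
    return (table[vpn] << PAGE_SHIFT) | offset; /* physical address    */
}

/* Demo: vpn 1 mapped to frame 7, so 0x1234 translates to 0x7234. */
int translate_demo(void) {
    uint64_t table[NPAGES] = {0};
    table[1] = 7;
    return translate(table, 0x1234) == 0x7234;
}
```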
  • the memory manager 108 may be configured to enforce certain attributes on the memory pages. Such attributes are, for example, “read only,” “write only” or “executable but not modifiable”.
  • the memory manager 108 may have an internal cache known as a translation look-aside buffer (TLB) (not shown), which stores the results of the most recent translations on a faster, lower latency memory.
  • the processor 104 is configured to provide the higher-privilege execution environment 112 that is managed by the memory manager 108 that controls access that one or more executable codes have to one or more portions of the data memory 102.
  • the higher-privilege execution environment 112 may be a trusted execution environment (TEE) or a hypervisor (EL2 in ARM parlance), as known in the art.
  • the terms “higher-privilege execution environment” and “trusted execution environment” have been interchangeably used, which generally refers to an environment comprising trusted program code that is isolated from other code located outside of the trusted execution environment and to which security policies are applied to provide secure execution of the program code.
  • the TEE 112 may represent a secured and isolated environment for the execution of the user applications.
  • the TEE 112 in a microprocessor system, such as the apparatus 100, is a way for the processor 104 therein to provide an additional, hardened, execution context, which is separated from the main environment and is expected to not be easily attackable, even after the primary environment has been compromised.
  • One such example of the TEE 112 is ARM TrustZone™, which is a system-wide approach to embedded security for ARM Cortex-based processor systems.
  • the TEE 112 works by creating two environments that run simultaneously on a single core of the processor 104. Of the two environments, one may be a “non-secure” environment and the other may be a “secure” environment.
  • the TEE 112 may provide a switch mechanism to switch between the two environments. All code, data and user applications that need to be protected may be operated under the aforesaid secure environment, whereas the aforesaid non-secure environment may be the main or primary environment and may include all the code, data and user applications which may either not require or not afford such high protection. This typically relies on some specific hardware feature that is directly under the control of the TEE 112, as opposed to the primary environment.
  • the TEE 112 may also have the ability to transfer (“steal”) memory pages from the main environment, so that those memory pages may be neither read nor written.
  • the TEE 112, by transferring memory pages from the data memory 102, allows for the exchange of data between the secure and non-secure environments without having to replicate it, a so-called “zero copy” approach.
  • the kernel is further configured to support a plurality of data contexts that are accessible to the one or more executable codes, while denying the one or more executable codes access to data contexts that are unrelated to the one or more executable codes.
  • the executable codes may need to access certain data, and the data may belong to certain entities.
  • FIG. 2 is a schematic illustration of the TEE 112 implemented for segregating data, in accordance with an embodiment of the present disclosure.
  • the TEE 112 comprises a plurality of data contexts, with each data context comprising a plurality of memory pages.
  • the TEE 112 is shown to include two exemplary data contexts, namely a first data context 204 and a second data context 206, with the first data context 204 allocated two exemplary memory pages 208 and 210 of the data memory 102 and the second data context 206 allocated two exemplary memory pages 212 and 214 of the data memory 102.
  • the kernel may give access to the specific data context(s) required by the executable codes, and the other data contexts would be hidden.
  • the executable code may need to access the first data context 204.
  • the kernel may give the access for the first data context 204 to the executable code.
  • the second data context 206 may be hidden from the executable code.
  • the second data context 206 may be protected from the executable code.
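The grant/hide behaviour described in the bullets above can be sketched as a small simulation (all class and function names here are hypothetical illustrations, not part of the disclosed apparatus; the real mechanism is enforced by the TEE through hardware privilege levels, not by software checks):

```python
class DataContext:
    """A named set of memory pages, as in FIG. 2 (e.g. pages 208 and 210)."""
    def __init__(self, name, pages):
        self.name = name
        self.pages = set(pages)

class Kernel:
    """Gives an executable code access only to the data contexts it was
    granted; every other context stays hidden from that code."""
    def __init__(self, contexts):
        self._contexts = {c.name: c for c in contexts}
        self._grants = {}   # executable code id -> set of granted context names

    def grant(self, code, context_name):
        self._grants.setdefault(code, set()).add(context_name)

    def read(self, code, context_name, page):
        if context_name not in self._grants.get(code, set()):
            # the context is hidden: the code cannot even see it
            raise PermissionError(f"{context_name} is hidden from {code}")
        return page in self._contexts[context_name].pages

first = DataContext("first", {208, 210})
second = DataContext("second", {212, 214})
kernel = Kernel([first, second])
kernel.grant("code_a", "first")   # code_a may use the first context only
```

Here `code_a` can read pages of the first data context 204, while any attempt to touch the second data context 206 fails, mirroring the hide/protect behaviour described above.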
  • the data memory 102 is shown to include ‘n’ number of data contexts, namely a first data context 128, a second data context 130 and an nth data context 132, where n may be any positive integer.
  • the first data context 128 and the second data context 130 are managed by the TEE 112 and the nth data context 132 is not managed by the TEE 112.
  • the first data context 128 and the second data context 130 may work in the secured environment and the nth data context 132 may work in the non-secure environment.
  • first data context 128 and the second data context 130 may be secured and hidden from the user applications, other than the executable code which may be provided specific access thereto by the TEE 112.
  • the executable code may place a request to the TEE 112, and the TEE 112 may then selectively allow or deny the access.
  • the memory manager 108 accesses an in-kernel memory management library 134 for implementing data segregation of data stored in the data memory 102, wherein data stored in the data memory 102 that belongs to a given data context is placed in corresponding data memory pages which are exclusively reserved for the given data context, and wherein other data contexts have other corresponding data pages that are non-overlapping with the data pages of the given data context.
  • the in-kernel memory management library 134 may store information on how to segregate data according to data context.
  • the in-kernel memory management library 134 is a "prmem."
  • the in-kernel memory management library 134 allows kernel data to be organized by affinity (i.e. by data context).
  • the in-kernel memory management library 134 ensures that the memory properties associated with a certain context, which have page-level granularity, may not interfere with the properties of another context, by ensuring that the contexts are orthogonal to each other, at page level. Therefore, the in-kernel memory management library 134 provides both full write protection for constant data and controlled means of altering data which might be a target for an attack and should be kept un-writable by ordinary memory write operations.
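A minimal sketch of the page-level orthogonality described above — each data context receives an exclusive, non-overlapping set of pages (names such as `ContextAllocator` are hypothetical; the actual prmem library operates on kernel page structures):

```python
PAGE_SIZE = 4096  # illustrative page size

class ContextAllocator:
    """Hands out pages so that each data context owns an exclusive,
    non-overlapping set of pages (page-level orthogonality)."""
    def __init__(self, total_pages):
        self._free = list(range(total_pages))
        self._owner = {}            # page number -> owning context name

    def alloc_page(self, context):
        page = self._free.pop(0)    # the page is reserved exclusively
        self._owner[page] = context
        return page

    def owner(self, page):
        return self._owner.get(page)

alloc = ContextAllocator(total_pages=8)
a_pages = {alloc.alloc_page("ctx_a") for _ in range(2)}
b_pages = {alloc.alloc_page("ctx_b") for _ in range(2)}
```

Because pages are never shared between contexts, toggling the memory properties of `ctx_a`'s pages cannot interfere with those of `ctx_b`.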
  • the memory manager 108 segregates data in the data memory 102 into data that is at least one of selectively write protectable and selectively read protectable.
  • the in-kernel memory management library 134 allows data in the data memory 102 to be segregated.
  • such segregated data may further be organized into the write protectable and the read protectable by the memory manager 108.
  • the write protectable data may be the data that may not be modified or overwritten without permission but could be read by the executable code having access thereto; and the read protectable data may be the data that could only be read by the executable code having access thereto when its associated use case is active.
  • the memory manager 108 ensures that the data that is write protectable may be grouped together and the data that is read protectable may be grouped together, so that they do not overlap. This allows selective access to be provided to the memory pages of one data context without needing to provide access to the other data context.
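The grouping of write-protectable and read-protectable data into non-overlapping sets might be sketched as follows (a simplified illustration with hypothetical names, not the disclosed implementation):

```python
from collections import defaultdict

def group_by_protection(objects):
    """Place write-protectable and read-protectable objects into separate,
    non-overlapping groups, so each group's pages can be toggled
    independently of the other."""
    groups = defaultdict(list)
    for name, protection in objects:
        assert protection in ("write_protect", "read_protect")
        groups[protection].append(name)
    return dict(groups)

groups = group_by_protection([
    ("config_table", "write_protect"),  # readable by all, never writable
    ("session_key", "read_protect"),    # readable only while its use case is active
])
```

Keeping the two groups on disjoint sets of pages is what lets the memory manager grant read access to one group without exposing the other.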
  • the trusted execution environment 112 when in operation temporally dynamically changes availability of certain data memory pages between a plurality of data contexts provided by the apparatus 100, to selectively allow or deny access of the kernel to certain sets of data memory pages, as a function of whether or not a given executable code is active at a given time in the apparatus 100.
  • the TEE 112 may toggle the availability of certain (set of) data memory pages between different exception levels namely, kernel level and the level of the TEE itself in order to selectively allow/deny access to the certain set of data memory pages.
  • exception levels also known as the privilege levels
  • EL0 exception level 0
  • EL1 exception level 1
  • EL2 exception level 2
  • EL3 exception level 3
  • all the user applications have EL0 access.
  • the kernel may run at EL1, the hypervisor 110 may run at EL2 and the firmware may run at EL3.
  • the executable codes that are executing at one exception level may not have access to data (in the data memory 102) being accessed by other executable codes executing at the same exception level or at higher exception level. However, if an executable code is executing at a higher exception level, such executable code may have access to the data being accessed by the lower exception levels.
  • the kernel is executed at EL1 and the TEE 112 at EL3. Hence, all the data protected at the level of the TEE 112 may not be accessible by the kernel.
  • the kernel may request the TEE 112 to change the accessibility of that specific data context to the kernel level.
  • the TEE 112 can toggle the data contexts between two exception levels. The toggling between the two exception levels is based on which executable code associated with the kernel context is currently active. If certain executable code does not have a need to read/write certain data, the aforesaid data may be kept inaccessible to such executable code, without the executable code incurring any problem/penalty. For example, with reference to FIG. 2, if the first data context 204 is needed to be accessed by the executable code, the TEE 112 may assign the first data context 204 to EL1, while the second data context 206 may remain at the TEE level (EL3). Furthermore, if required, rather than providing access to the data contexts fully, only selective memory pages of the data context may be assigned.
  • the kernel may be given access to the memory page 208 of the first data context 204 and the memory page 214 of the second data context 206, while the memory page 210 of the first data context 204 and the memory page 212 of the second data context 206 may be hidden from the kernel.
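The toggling of individual pages between the kernel level (EL1) and the TEE level (EL3) can be illustrated with the following sketch, using the page numbers of FIG. 2 (hypothetical names; the actual toggling is performed by the TEE through hardware privilege levels):

```python
class TEE:
    """Toggles ownership of individual pages between the TEE level (EL3)
    and the kernel level (EL1); pages parked at EL3 are invisible to the
    kernel."""
    def __init__(self, pages):
        self._level = {p: "EL3" for p in pages}  # everything starts hidden

    def assign_to_kernel(self, page):
        self._level[page] = "EL1"

    def reclaim(self, page):
        self._level[page] = "EL3"

    def kernel_can_access(self, page):
        return self._level[page] == "EL1"

tee = TEE(pages=[208, 210, 212, 214])
tee.assign_to_kernel(208)   # memory page 208 of the first data context
tee.assign_to_kernel(214)   # memory page 214 of the second data context
```

Pages 210 and 212 remain at EL3 and are therefore hidden from the kernel, exactly the mixed, page-granular assignment described in the bullet above.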
  • the processor 104 may have multiple cores 116, 118, 120, 122 with each core adapted to work on one hardware thread at any given instant of time. It may be appreciated by a person skilled in the art that in spite of dynamically changing availability of certain data memory pages, the data memory 102 may still be vulnerable to attacks. For example, when certain data context is being accessed by one core (say, the first core 116) executing the legitimate code, it might be possible for another, rogue core (say, the third core 120) to access the very same data. This would be possible because typically, within the kernel, all of the data is accessible to every code and core.
  • FIG. 3 is an exemplary schematic illustration of a memory map 300 implementing different mappings for the kernel data, in accordance with an embodiment of the present disclosure.
  • the memory map 300 for implementing different mapping of the kernel data, may include a plurality of physical pages stored in the data memory 302, a primary memory map 304, a user space memory map 306 (also referred to as “secondary memory map 306”) and a barrier 308 provided by the TEE (such as, the TEE 112).
  • the data requiring only write protection is mapped in the primary memory map 304.
  • Such data is not affected by the barrier 308 provided by the TEE. Furthermore, the data requiring read protection is mapped in the user space memory map 306. The mapping is exclusively a core-local mapping, that is, it is accessible exclusively to the core which has created it. The accessibility of such data is controlled by the TEE, timewise, so that it can be read only when its associated use case is active.
  • the mapping for all data for the executable codes is recorded in the primary memory map 304 from the plurality of physical pages stored in the data memory 302.
  • the data may be replicated in a context specific copy associated to the certain data context.
  • the replicated data may also contain the mapping for the associated data.
  • the mapping mechanism which allows multiple user space processes to be mapped to the same values of address space on multiple cores, without such mappings overlapping, may also be exploited here. Such mechanism ensures that the user space mapping will be exclusively accessible to its local core, and it may therefore also prevent unauthorised access to the protected data context from compromised cores. It may be appreciated that the readable data and executable code may be the data requiring only write protection, and are thus mapped into the primary memory map 304.
  • the executable code requests access to the memory map 300 for the write protectable data
  • the data is mapped in the primary memory map 304.
  • the primary mapping is at EL1, hence the data mapped into the primary memory map 304 may be accessed by all the cores, but is write-protected, and thus cannot be tampered with.
  • the apparatus 100 is configured to provide an isolation of user space memory mapping between CPU cores 116, 118, 120, 122 of the processor 104 and their associated hardware threads to assist the memory manager 108 to manage access of the executable codes to data contexts.
  • the hardware thread may be a single line of instruction to be executed, with each user application generally having multiple threads.
  • the data memory 302 includes a plurality of data contexts.
  • the write protectable data may be mapped from the data memory 302 into the primary memory map 304.
  • the data stored in a page 310 may be mapped to a page 314 of the primary memory map 304 (as represented by links 312).
  • the read protectable data may be mapped from the data memory 302 into the secondary memory map 306 (if the TEE provides an access to the executable code).
  • the data stored in a page 316 is mapped to a page 320 of the secondary memory map 306 (as represented by link 318).
  • the secondary memory map 306 is local to the core which needs the access to the data.
  • the barrier 308 provided by the TEE (such as, the TEE 112) isolates the data in the primary memory map 304 and the secondary memory map 306.
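The core-local secondary mapping can be sketched as follows — a read-protected page resolves only on the core that created the mapping, so a compromised core obtains no translation at all (hypothetical names; real secondary mappings are per-core page tables):

```python
class SecondaryMaps:
    """Per-core secondary mappings: a core can resolve only the
    read-protected pages mapped into its own local map; other cores
    get no translation for them."""
    def __init__(self):
        self._maps = {}   # core id -> {virtual page: physical page}

    def map_local(self, core, vpage, ppage):
        self._maps.setdefault(core, {})[vpage] = ppage

    def resolve(self, core, vpage):
        # None means "not mapped on this core" (the data stays invisible)
        return self._maps.get(core, {}).get(vpage)

smap = SecondaryMaps()
# page 316 of the data memory mapped as page 320, locally on core 0
smap.map_local(core=0, vpage=320, ppage=316)
```

Core 0 can resolve page 320 to physical page 316, while any other core (for example, a rogue third core) receives no mapping, mirroring the barrier 308 and the isolation of FIG. 3.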
  • the trusted execution environment 112 includes one or more executable tools that are useable to validate a status of the kernel, and to refuse requests coming from the kernel when security of the kernel becomes compromised.
  • the kernel is the core of the operating system 106.
  • if the kernel is compromised, the data protected by the operating system 106 is also at risk, i.e. any data in the data memory 102, when accessed by the kernel, may become available to the malicious program.
  • the TEE 112 can block requests from the kernel if determined to be compromised, and thereby prevent exploitation of data in the data memory 102.
  • the aforesaid executable tools useable to validate a status of the kernel may be hash functions, as known in the art.
  • the apparatus 100 is configured to compute a hash of critical data of the kernel, wherein the apparatus 100 determines that the security of the kernel has been compromised when hashes of the critical data are mutually inconsistent.
  • the critical data of the kernel may be the data that must not be modified by the malicious programs.
  • the data to be protected can be interpreted to comprise data, for example, relating to a transient state in the apparatus (e.g. some important data in the Random Access Memory (RAM), Cross-point, or Flash, etc.).
  • the data to be protected may be system-level data, for example, the data relating to the operating system.
  • the data to be protected may be application data relating to the operating system and the application software.
  • the technique used for checking integrity of the kernel may utilize a hash.
  • the hash may be a code in the form of a string of numbers.
  • the hash of the critical data may be consistent for the kernel.
  • the hash of critical data may be compared with the known hash value for the same. If it matches, such data may be considered to be safe; otherwise, such data may be considered as compromised.
  • the hash may be computed periodically.
  • the hash may be computed just-in-time, in an event-driven fashion. Such periodical and/or event-driven checking ensures that the compromised kernel is detected sooner, and then the TEE 112 may block requests from such compromised kernel in early stages and prevent possible data exploitation by compromised kernel (as discussed above).
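The periodic or event-driven hash check described above can be sketched as follows, using SHA-256 as one possible hash function (the `TEEGate` name and the choice of SHA-256 are assumptions for illustration; the disclosure only requires that hashes of the critical data be mutually consistent):

```python
import hashlib

def kernel_hash(critical_data: bytes) -> str:
    """Hash of the kernel's critical data; recomputed periodically or in an
    event-driven fashion."""
    return hashlib.sha256(critical_data).hexdigest()

class TEEGate:
    """Refuses kernel requests once the recomputed hash no longer matches
    the reference taken while the kernel was known to be healthy."""
    def __init__(self, reference_hash: str):
        self._reference = reference_hash

    def allow_request(self, critical_data: bytes) -> bool:
        return kernel_hash(critical_data) == self._reference

critical = b"syscall_table_v1"           # stand-in for critical kernel data
gate = TEEGate(kernel_hash(critical))    # reference taken at a trusted moment
```

If a malicious program later modifies the critical data, the recomputed hash diverges from the reference and the gate refuses the request, blocking the compromised kernel at an early stage.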
  • the apparatus 100 uses the hypervisor 110 to manage the trusted execution environment 112.
  • the hypervisor 110 may, generally, create and manage a special type of process space referred to as a virtual machine, which is a type of process space adapted to emulate a physical computer and the hypervisor 110 is typically configured to execute multiple virtual machines on a single computing device, such as the apparatus 100.
  • the hypervisor 110 is usually a small piece of code, and thus may be stored in the non-volatile memory as the firmware.
  • the exception level of the hypervisor 110 is EL2, which is greater than the exception level of the kernel.
  • the hypervisor 110 may be completely hidden from the kernel and the user space. That is, the kernel and the user may not even know the existence of the hypervisor 110.
  • the hypervisor 110 may further enhance the security of the data memory 102 by further preventing the kernel gaining access to the TEE 112 unnecessarily.
  • the apparatus of the present disclosure provides protection to the data memory 102 in the apparatus 100. This is achieved by managing access to the data memory 102 for protection against unwarranted executable codes.
  • the present disclosure provides enhancements to the in-kernel memory management library 134 (for example, prmem) to introduce the "read protected" property, utilize the trusted execution environment (TEE) 112 for granting to, or removing from, the kernel access to whole sets of memory pages, and leverage support for isolation of the user space memory map 114 between the CPU cores 116, 118, 120, 122 of the processor 104 and hardware threads.
  • TEE trusted execution environment
  • the implementation of the concept of data segregation is achieved, so that data which belongs to a specific context is placed within certain memory pages, which will be exclusively reserved for such context; and different contexts would have other non-overlapping sets of pages.
  • the purpose of the in-kernel memory management library 134 is to ensure that the memory properties associated to a certain context, which have page-level granularity, will not interfere with the properties of another context, by ensuring that each context is orthogonal to each other, at page level.
  • since the TEE 112 can toggle the availability of certain (sets of) pages between different exception levels (namely the kernel level and the level of the TEE itself), it can now be used, with the segregation performed through the in-kernel memory management library 134, to selectively allow/deny access to a certain set of pages, based on them being associated with the kernel context which is currently active. If certain code does not have a need to read/write certain data, the aforesaid data can be kept inaccessible to such code, without the code incurring any problem/penalty, assuming that the code has not been hijacked and is not behaving abnormally; in that case, it is desirable to interfere with the abnormal behaviour.
  • since the additional mapping is modelled after the mapping performed for the user space process, it is possible to have as many additional mappings as needed.
  • Each secondary mapping may contain exclusively those pages relevant for its associated use case. This ensures that, should a use case somehow be compromised, the others will still be out of reach.
  • the TEE affects the memory availability to the kernel exception level by using page granularity
  • the TEE based protection may also support having a multitude of orthogonal secondary mappings. A similar mechanism may also be transposed to isolate part of the data of a user-space process from the user space code which is not supposed to access it.
  • the kernel would provide the enforcing backend residing in a more privileged exception level, while it would be necessary to port “prmem” to user space, so that it could be used, instead of the traditional “vmalloc”.
  • FIG. 4 is a flowchart 400 of a method for (namely, a method of) operating an apparatus (such as, the apparatus 100) comprising the processor coupled to the data memory, wherein the processor and the data memory are configured to implement a kernel in which an operating system is executed.
  • the method comprises configuring the kernel to execute a memory manager that determines access that one or more executable codes have to the data memory.
  • the method comprises configuring the processor to provide a trusted execution environment that is managed by the memory manager for the one or more executable codes to access one or more portions of the data memory.
  • the method comprises arranging for the kernel to support a plurality of data contexts which are accessible to the one or more executable codes, while denying the one or more executable codes access to data contexts that are unrelated to the one or more executable codes.
  • the present disclosure also provides a non-transitory computer readable media having stored thereon program instructions that when executed by a processor cause the processor to perform the method.
  • the various embodiments and variants disclosed above apply mutatis mutandis to the present non-transitory computer readable media.
  • the apparatus and method of the disclosure may be implemented in a number of applications for the read protection of selected data.
  • the apparatus and method protect the data in two ways: firstly, exfiltration of secrets which might be of use to the attacker may be averted, and secondly, additional hardening for the write protected data is provided by hiding the memory content. Thus, it becomes even harder for an attack attempting to modify the kernel data, because the data may not be easily located.
  • the apparatus and the method of the disclosure may be suitable for protection of encryption keys, media access control (MAC) addresses, separate wallets for similar, but orthogonal use cases and the likes.
  • the encryption keys are sets of random, unpredictable and unique strings used to encrypt information
  • the media access control address (MAC) is a unique string assigned to a network interface controller (NIC).
  • the apparatus and the method of the disclosure may be applied to any device whose memory manager is based, even loosely, on the architecture described in the disclosure.
  • This means that the application of the concepts listed in the disclosure is not specific to any particular type of processor.
  • it may be applied to x86/x86_64, ARM/ARM64, RISC-V and the likes.
  • the only requirement is that there must be some additional mode such as the TEE or the hypervisor with higher privileges.
  • the apparatus and the method of the disclosure for the protection of data memory is advantageous both performance wise and code maintenance wise.
  • the disclosure provides an improvement quantifiable based on the amount of individual TEE invocations replaced by the single context switch. From the perspective of hardening existing kernel code, it is desirable to minimize the extent of changes required; as in case of upgrading the baseline, the amount of changes that must be migrated is small.
  • even if the TEE code has a license which is incompatible with the kernel license, it is not necessary to create a "clean-room" re-implementation of the handling of secrets, which would risk introducing a new set of defects.
  • the same code may be used across platforms which may or may not have the capability of handling secrets, streamlining the release management.

Abstract

Disclosed is an apparatus and a method for protection of a data memory. The apparatus comprises a processor coupled to the data memory. The processor and the data memory are configured to implement a kernel in which an operating system is executed. The kernel is configured to execute a memory manager that determines access that the kernel has to the data memory. The processor is configured to provide a higher-privilege execution environment that is managed by the memory manager that controls access that one or more executable codes have to one or more portions of the data memory. The kernel is further configured to support a plurality of data contexts that are accessible to the one or more executable codes, while denying the one or more executable codes access to data contexts that are unrelated to the one or more executable codes.

Description

APPARATUS AND METHOD FOR MANAGING ACCESS TO DATA MEMORY BY
EXECUTABLE CODES BASED ON EXECUTION CONTEXT
TECHNICAL FIELD
The disclosure relates generally to computing systems; more specifically, the disclosure relates to an apparatus and a method for managing access to a data memory of a computing system for protection against unwarranted executable codes. The disclosure also relates to a non-transitory computer readable media for performing the aforesaid method.
BACKGROUND
Computing apparatus including microprocessor systems or microcontroller systems operate by retaining most of their transient state in a memory. Moreover, computer applications typically need to allocate memory and store data within a computing apparatus on which they are hosted. User applications are typically supported by an operating system (OS) and need to request the OS to allocate various types of memory on their behalf. Data stored in certain types of memory of a given system often remains unchanged for long periods of time and may be of high importance to the security of the given system. These data can become tempting targets for hackers and computer malware. Unauthorized modification can lead to system down time or loss of monetary value.
For example, referring to an ARM64 architecture (but this could be applied in a similar fashion also to Intel x86_64 and other architectures), the Linux® kernel typically runs at the EL1 exception level. Within this EL1 exception level, all data is theoretically accessible to any function, regardless of whether or not the function has a legitimate reason to access the data. Certain data holds particular relevance, either with regard to protecting the system itself, or purely as information that might be valuable for an attacker to exfiltrate. It is therefore highly desirable to limit access to the data, exclusively to those pieces of code that are specifically supposed or required to access the data. A defence mechanism known in the prior art against such malicious attacks on data stored in memory is to deploy one or more Memory Management Units (MMUs). A given MMU can limit access to certain memory regions, thereby trying to prevent an attack (as previously described). When a program (e.g. an operating system or a hypervisor) is being executed on a Central Processing Unit (CPU), the CPU may configure the MMU to circumscribe sets of addresses accessible by programs running on the CPU. However, the MMU can be reprogrammed, since an attacker that has gained the capability of accessing (e.g. writing) the memory can use the same capability to re-program or disable a barrier established by the MMU.
An additional known approach has been to keep some data “secret” by implementing a trusted execution environment (TEE) for management of such secret data, so that such secret data never leave the TEE once loaded. The TEE exposes a set of application programming interfaces (APIs) through which the kernel may interact with such secret data when needed (for example, for both signing and verifying the signature of a data buffer). However, the aforesaid approach has two major disadvantages. Firstly, the TEE may require a separate implementation of some functionality, which might be already available in the kernel. This is often the case, due to licensing, when the TEE is for example either fully proprietary or has a licence which is not compatible with the kernel. Secondly, multiple operations, within the same kernel context while making use of such secret data located within the TEE, may require either a specialized TEE serialization API (which is usually not the case), or multiple TEE invocations, which may cause additional overhead, due to the repeated transitions between different exception levels.
Furthermore, there is another possible vulnerability which might be open to attacks in modern multi-core CPU based computing systems. When a certain data context is being accessed by a CPU core executing a legitimate code, it might be possible for another compromised CPU core, to access the same data. This is possible because, typically within the kernel data or execution context, all data is accessible to every code being executed in any of the cores in the multi-core CPU.
Therefore, in light of the foregoing discussion, there exists a need to overcome the aforementioned drawbacks associated with the apparatus and the method for protecting data memory of computing systems, without an overhead that is so large as to make the solution unsuitable for real-life situations.
SUMMARY
The disclosure seeks to provide an apparatus and a method for managing access to data memory by executable codes based on execution context. An aim of the disclosure is to provide a solution that overcomes at least partially the problems encountered in prior art, and to provide apparatus that is able to make enhancements to a kernel, utilize a higher-privilege execution environment, such as either a trusted execution environment (TEE) or a hypervisor (EL2 in ARM parlance) for granting to, or removing from, the kernel access to memory pages of a data context and leverage support for isolation of user space memory mapping between cores and threads where available, for example with a x86_64 architecture. The disclosure also seeks to provide a solution to the existing drawbacks of high execution overhead and a need to replicate the kernel in the TEE, as in known techniques.
The object of the disclosure is achieved by the solutions provided in the enclosed independent claims. Advantageous implementations of the present disclosure are further defined in the dependent claims.
In an aspect, the disclosure provides an apparatus comprising a processor coupled to a data memory, wherein the processor and the data memory are configured to implement a kernel in which an operating system is executed. The kernel is configured to execute a memory manager that determines access that the kernel has to the data memory. Moreover, the processor is configured to provide a higher-privilege execution environment that is managed by the memory manager that controls access that one or more executable codes have to one or more portions of the data memory. Furthermore, the kernel is configured to support a plurality of data contexts that are accessible to the one or more executable codes, while denying the one or more executable codes access to data contexts that are unrelated to the one or more executable codes.
In another aspect, the disclosure provides a method for (namely, a method of) operating an apparatus comprising a processor coupled to a data memory, wherein the processor and the data memory are configured to implement a kernel in which an operating system is executed. The method includes:
(i) configuring the kernel to execute a memory manager that determines access that one or more executable codes have to the data memory, (ii) configuring the processor to provide a higher-privilege execution environment that is managed by the memory manager for the one or more executable codes to access one or more portions of the data memory, and
(iii) arranging for the kernel to support a plurality of data contexts which are accessible to the one or more executable codes, while denying the one or more executable codes access to data contexts that are unrelated to the one or more executable codes.
In yet another aspect, the disclosure provides a non-transitory computer readable media having stored thereon program instructions that when executed by a processor cause the processor to perform the method.
The apparatus and method of the disclosure provide code reusability by performing the needed operation for protecting the data in a secured way within the kernel, instead of doing it within the higher-privilege execution environment, such as a trusted execution environment (TEE), so that there is no need to replicate required functionality of the kernel inside the TEE. Furthermore, the apparatus and method of the disclosure reduce execution overhead by granting or revoking the kernel's access to secret data of the data memory. Herein, whenever the kernel enters or exits one or more critical sections, only at those stages is access granted or revoked, so that the overhead becomes tied to entering or exiting the one or more critical sections, instead of being proportional to the number of operations on data within the one or more critical sections.
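The claimed reduction in overhead can be illustrated by counting exception-level transitions under the two schemes (a simplified model with hypothetical names, ignoring the cost of the operations themselves):

```python
class TransitionCounter:
    """Counts exception-level transitions under the two schemes discussed:
    one TEE invocation per data operation, versus one grant/revoke pair
    per critical section containing many operations."""

    def per_operation(self, n_ops):
        # prior-art scheme: every operation crosses into the TEE
        return n_ops

    def per_critical_section(self, n_ops):
        # disclosed scheme: one transition on entry, one on exit,
        # regardless of how many operations happen inside
        return 2

tc = TransitionCounter()
```

For a critical section containing 100 operations on secret data, the per-operation scheme costs 100 transitions while the critical-section scheme costs 2, which is the improvement quantified later as "the amount of individual TEE invocations replaced by the single context switch".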
In an implementation form, the memory manager accesses an in-kernel memory management library for implementing data segregation of data stored in the data memory, wherein data stored in the data memory that belongs to a given data context is placed in corresponding data memory pages which are exclusively reserved for the given data context, and wherein other data contexts have other corresponding data pages that are non-overlapping with the data pages of the given data context.
The segregation of data stored in the data memory helps in modifying one data context without modifying or even accessing the other data contexts. By leveraging this segregation, it is possible to have multiple data contexts, which are accessible only to the code that is meant to deal with each of them respectively, while preventing access by unrelated code, even while the aforesaid data context is being accessed by its legitimate user. Moreover, all of this may be done primarily from within a kernel exception level, keeping the involvement of the higher-privilege execution environment at a minimum.
In an implementation form, the memory manager segregates data in the data memory into data that is at least one of selectively write protectable and selectively read protectable.
Such selective segregation allows different access levels (namely, types) to be provided for different data depending on, for example, the sensitivity of the data or the like.
In an implementation form, the higher-privilege execution environment when in operation temporally dynamically changes availability of certain data memory pages between a plurality of data contexts provided by the apparatus, to selectively allow or deny access of the kernel to certain sets of data memory pages, as a function of whether or not a given executable code is active at a given time in the apparatus.
With the segregation, the higher-privilege execution environment can selectively allow or deny access to a certain set of pages, based on them being associated to the kernel context which is currently active. If a certain executable code does not have a need to read or write certain data, such data can be kept inaccessible to such executable code.
In an implementation form, the apparatus is configured to use a separate memory map for segregating each corresponding data context.
This allows to associate specific data context to their legitimate users (execution context), without the executable code incurring any problem/penalty.
In an implementation form, the kernel has a primary memory map, wherein all readable data and executable code are recorded into the primary memory map.
Herein, the primary memory map operates at the kernel level and the data mapped into the primary memory map may be data requiring only write protection; thus, the data in the primary map is not affected by the higher-privilege execution environment.
In an implementation form, the apparatus is configured to provide an isolation of user space memory mapping between CPU cores of the processor and their associated hardware threads to assist the memory manager to manage access of the executable codes to data contexts. The user space memory map is local to the CPU core of the processor which needs access to the data. This prevents executable codes which are being executed in other CPU cores of the processor, possibly a compromised core, from accessing the data.
In an implementation form, the higher-privilege execution environment includes one or more executable tools that are useable to validate a status of the kernel, and to refuse requests coming from the kernel when security of the kernel becomes compromised.
Herein, the higher-privilege execution environment (such as TEE) may determine whether or not the kernel is compromised, and may deny access to data to such compromised kernel, and thereby the higher-privilege execution environment (such as TEE) can prevent exploitation of data by the compromised kernel.
In an implementation form, the apparatus is configured to compute a hash of critical data of the kernel, wherein the apparatus determines that the security of the kernel has been compromised when hashes of the critical data are mutually inconsistent.
Herein, the hash may be computed either periodically or just-in-time, in an event-driven fashion. Such periodical and/or event-driven checking determines whether or not the kernel is compromised, thereby preventing possible data exploitation by a compromised kernel at an early stage.
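Purely by way of a non-limiting illustration, the integrity check described above may be sketched in C as follows. The FNV-1a hash and all names below are illustrative stand-ins for whatever hash and interfaces the higher-privilege execution environment would actually employ:

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* Illustrative hash of critical kernel data: hash the data at two points
 * in time (or from two replicas) and treat a mismatch as compromise.
 * FNV-1a is used here only as a stand-in hash function. */
static uint64_t fnv1a(const unsigned char *data, size_t len)
{
    uint64_t h = 0xcbf29ce484222325ull;   /* FNV offset basis */
    for (size_t i = 0; i < len; i++) {
        h ^= data[i];
        h *= 0x100000001b3ull;            /* FNV prime */
    }
    return h;
}

/* Returns true when the stored reference hash no longer matches,
 * i.e. the hashes of the critical data are mutually inconsistent. */
static bool kernel_compromised(const unsigned char *critical, size_t len,
                               uint64_t reference)
{
    return fnv1a(critical, len) != reference;
}
```

Either a periodic check or an event-driven check may call such a routine; the detection logic is the same in both cases.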
In an implementation form, the apparatus uses a hypervisor to manage the higher-privilege execution environment.
The hypervisor creates and manages multiple process spaces, and thus can isolate a process, for example processes associated with an operating system, in a separate process space to enable the higher-privilege execution environment, such as TEE to provide different access levels to various executable codes to one or more portions of the data memory. Thus, the hypervisor may further enhance the security by further preventing the kernel gaining access to the TEE unnecessarily.
In an implementation form, the kernel is based on Linux®.
It will be appreciated that all devices, elements, circuitry, units and means described in the present application could be implemented in any form of hardware elements. All steps which are performed by the various entities described in the present application as well as the functionalities described to be performed by the various entities are intended to mean that the respective entity is adapted to or configured to perform the respective steps and functionalities. Even if, in the following description of specific embodiments, a specific functionality or step to be performed by external entities is not reflected in the description of a specific detailed element of that entity which performs that specific step or functionality, it should be clear for a skilled person that these methods and functionalities can be implemented in respective hardware elements. It will be appreciated that features of the present disclosure are susceptible to being combined in various combinations without departing from the scope of the present disclosure as defined by the appended claims.
Additional aspects, advantages, features and objects of the present disclosure would be made apparent from the drawings and the detailed description of the illustrative implementations construed in conjunction with the appended claims that follow.
BRIEF DESCRIPTION OF THE DRAWINGS
The summary above, as well as the following detailed description of illustrative examples, is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the present disclosure, exemplary constructions of the disclosure are shown in the drawings. However, the present disclosure is not limited to specific methods and instrumentalities disclosed herein. Moreover, those in the art will understand that the drawings are not to scale. Wherever possible, like elements have been indicated by identical numbers.
Examples of the present disclosure will now be described, by way of example only, with reference to the following diagrams wherein:
FIG. 1 is a schematic illustration of an apparatus for managing access to a data memory, in accordance with an implementation of the present disclosure;
FIG. 2 is a schematic illustration of a trusted execution environment utilizing segregated data contexts, in accordance with an implementation of the disclosure;
FIG. 3 is a schematic illustration of a memory map providing a mapping scheme for kernel data, in accordance with an implementation of the disclosure; and

FIG. 4 is a flowchart listing steps involved in a method for managing access to a data memory, in accordance with an implementation of the disclosure.
In the accompanying drawings, an underlined number is employed to represent an item over which the underlined number is positioned or an item to which the underlined number is adjacent. A non-underlined number relates to an item identified by a line linking the nonunderlined number to the item. When a number is non-underlined and accompanied by an associated arrow, the non-underlined number is used to identify a general item at which the arrow is pointing.
DETAILED DESCRIPTION OF EMBODIMENTS
The following detailed description illustrates implementations of the present disclosure and ways in which they can be implemented. Although some modes of carrying out the disclosure have been disclosed, those skilled in the art would recognize that other implementations for carrying out or practicing the present disclosure are also possible.
Reference in this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Further, the terms “a” and “an” herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced items. Moreover, various features are described which may be exhibited by some embodiments and not by others. Similarly, various requirements are described which may be requirements for some embodiments but not for other embodiments.
The disclosed implementations may be implemented, in some cases, in hardware, firmware, software, or any combination thereof. The disclosed implementations may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors. A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or other media device).
In the drawings, some structural or method features may be shown in specific arrangements and/or orderings. However, it should be appreciated that such specific arrangements and/or orderings may not be required. Rather, in some implementations, such features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or method feature in a particular figure is not meant to imply that such feature is required in all implementations and, in some implementations, may not be included or may be combined with other features.
FIG. 1 is a schematic illustration of an apparatus 100 for protection of a data memory 102 therein, in accordance with an embodiment of the present disclosure. The apparatus 100, as disclosed herein, may be employed in a variety of computing devices such as laptops, computers, smartphones, palmtops, tablets, and the like. The apparatus 100 can also be implemented as an industrial sensor, an actuator, an Internet of Things (IoT) device, a network apparatus, a wearable terminal device, a drone, a device integrated into an automobile, a television, an embedded terminal device, a cloud device, etc. Hereinafter, the terms “apparatus,” “computing device” and “computing system” have been used interchangeably without any limitations.
As illustrated in FIG. 1, the apparatus 100 comprises a processor 104, also referred to as Central Processing Unit (CPU). The apparatus 100 further comprises an operating system 106, a memory manager 108 and a hypervisor 110. In the apparatus 100, the processor 104 provides a higher-privilege execution environment 112 that is managed by the memory manager 108. As used herein, the term “managed” may be interpreted to mean that the higher-privilege execution environment 112 may be “steered” or “influenced” by the memory manager 108, such that the higher-privilege execution environment 112 retains a certain level of independence, to vet and possibly reject inconsistent/incorrect requests. Furthermore, the data memory 102, in the apparatus 100, provides a user space memory map 114 (hereinafter, sometimes referred to as memory map 114). The processor 104 may have a plurality of CPU cores (hereinafter, sometimes referred to as “cores”). For example, in the illustrated example of FIG. 1, the processor 104 is shown to include four CPU cores, namely a first core 116, a second core 118, a third core 120 and a fourth core 122. It may be appreciated that the number of cores shown are exemplary only and shall not be construed as limiting to the disclosure in any manner. As illustrated, the processor 104 is in communication with various elements, including the data memory 102, in the apparatus 100 through a first communication link 124 and a second communication link 126.
As used herein, the term “data memory” refers to any appropriate type of computer memory capable of storing and retrieving computer program instructions or data. The data memory 102 may be one of, or a combination of, various types of volatile and non-volatile computer memory such as for example read only memory (ROM), random access memory (RAM), cache memory, magnetic or optical disk, or other types of computer operable memory capable of retaining computer program instructions and data. The data memory 102 is configured to store software program instructions or software programs along with any associated data as may be useful for the apparatus 100. The software programs stored in the data memory 102 may be organized into various software modules or components which may be referred to using terms based on the type or functionality provided by each software component. For example, the software components may include an operating system (OS), a hypervisor, a device or other hardware drivers, and/or various types of user applications such as a media player, an electronic mail application, a banking application, etc.
Furthermore, the term “processor” may refer to any type of computational element, such as, but not limited to, a microprocessor, a microcontroller, a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, or any other type of processor or processing circuit, and refer to any device or portion of a device that processes electronic data, e.g., from registers and/or memory to transform that electronic data into other electronic data that, e.g., may be stored in registers and/or memory. The term “processor” is intended to include multi-core processors that may comprise two or more independent processors (referred to as “cores”) that may execute instructions contemporaneously. Although FIG. 1 shows the processor 104 with multiple cores 116, 118, 120, 122, the processor 104 may be a single processor with a single core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiple cores, or any combination thereof.
Furthermore, as illustrated in FIG. 1, the processor 104 is in data communication with the data memory 102. In embodiments (for example, implementations) of the disclosure, the processor 104 is configured to read non-transient program instructions from the data memory 102 and perform examples of the methods and processes disclosed herein. The software components from the data memory 102 may be executed separately or in combination by the processor 104 within collections of computing resources referred to as processes (or user spaces). As used herein, the term "process" refers to the collection of computing resources accessible to one or more software programs executing code therein. In general, a "process" is an execution context that is managed by an operating system. The operating system, among other things, controls an execution of various processes. Each process, or the user space, is maintained separately by the processor 104 and includes a collection of computing resources. The collection of computing resources associated with a process are accessible to software programs executing within the process and may include resources such as a virtual memory space and/or hardware component(s). The processor 104 is configured to separate, and when required isolate each process from other processes such that code executing in one process may be prevented from accessing or modifying the computing resources associated with a different process.
Herein, the processor 104 and the data memory 102 are configured to implement a kernel in which one or more processes of the operating system 106 are executed. As used herein, the term “operating system” refers to system software that provides an interface between the user and the hardware. An operating system (OS) is a type or category of software program designed to abstract the underlying computer resources and provide services to ensure applications are running properly. Any suitable software program, such as a Linux™ OS, Windows™ OS, Android™, iOS, or other operating systems or applications framework, is appropriate for use as the kernel or OS kernel. An OS may be implemented as a single software program or it may be implemented with a central application to handle the basic abstraction and services with a collection of additional utilities and extensions to provide a larger set of functionalities. Herein, the term “kernel” relates to the central application portion of the operating system. The kernel is adapted to execute at an intermediate privilege level and to manage the lifecycle of, and allocate resources for, the user spaces/processes.
In one or more embodiments of the present disclosure, the kernel is based on Linux®. It may be appreciated that the term “Linux” as used in the present disclosure is intended to mean, unless the context suggests otherwise, any Linux-based operating system employing a Linux, or Unix, or a Unix-like kernel. It may be understood that such a kernel also covers Android™ based phones, as long as the Android™ OS uses the Linux kernel.
The kernel is configured to execute the memory manager 108 that determines access that the kernel has to the data memory 102. In an embodiment, the memory manager 108 is a Memory Management Unit (MMU) that is implemented for protection of the data memory 102. The primary function of the memory manager 108 is translation: it converts memory addresses of one or more virtual address spaces used by running software to one or more physical address spaces, representing the actual arrangement of data in the data memory 102. Herein, the virtual address space is a set of virtual addresses made available for an executable code, which maps to the physical address space with a corresponding set of physical addresses. The translation function of the memory manager 108 is performed primarily by using a set of address translation tables. The address translation tables may include a plurality of data memory pages (hereinafter, sometimes referred to as “memory pages” or simply “pages” and discussed later in more detail with reference to FIG. 2).
Such memory pages may be contiguous blocks in the virtual memory and may be represented as a single unit in the page translation table. The size of the memory page depends on the architecture of the processor 104. Traditionally, the minimum granularity of the memory page is 4096 bytes, i.e. 4 KB. For a given virtual memory address, the address translation tables may help in locating the corresponding physical page frame which backs it in the physical memory. Moreover, the address translation tables may also determine if the page frame is not available, as with on-demand paging. Besides providing translation, the memory manager 108 may be configured to enforce certain attributes on the memory pages. Such attributes are, for example, “read only,” “write only” or “executable but not modifiable”.
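Purely by way of a non-limiting illustration, the translation and attribute enforcement described above may be sketched in C as follows; the table size, field names and attribute bits below are illustrative assumptions, not an actual MMU implementation:

```c
#include <stdint.h>
#include <stdbool.h>
#include <assert.h>

#define PAGE_SIZE   4096u          /* the 4 KB minimum granularity noted above */
#define PAGE_SHIFT  12
#define NUM_PAGES   16             /* toy translation table: 16 virtual pages */

/* Illustrative per-page attributes, mirroring "read only" / "write only" /
 * "executable but not modifiable". */
enum page_attr { ATTR_R = 1, ATTR_W = 2, ATTR_X = 4 };

struct pte {
    bool     present;              /* backed by a physical frame? */
    uint32_t frame;                /* physical frame number */
    unsigned attrs;                /* ORed page_attr bits */
};

static struct pte page_table[NUM_PAGES];

/* Map a virtual page to a physical frame with the given attributes. */
static void map_page(uint32_t vpage, uint32_t frame, unsigned attrs)
{
    page_table[vpage] = (struct pte){ true, frame, attrs };
}

/* Translate a virtual address; returns -1 on a fault, i.e. when the page
 * is not present (as with on-demand paging) or the requested access is
 * not permitted by the enforced page attributes. */
static int64_t translate(uint32_t vaddr, unsigned access)
{
    uint32_t vpage  = vaddr >> PAGE_SHIFT;
    uint32_t offset = vaddr & (PAGE_SIZE - 1);

    if (vpage >= NUM_PAGES || !page_table[vpage].present)
        return -1;
    if ((page_table[vpage].attrs & access) != access)
        return -1;                 /* attribute violation, e.g. write to RO */
    return ((int64_t)page_table[vpage].frame << PAGE_SHIFT) | offset;
}
```

A write attempt against a page mapped read-only then faults at translation time, which is the mechanism the enforcement of attributes relies on.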
The content of the address translation tables is controlled by the operating system 106. Since the virtual address space may be quite large and there may be a large number of virtual address spaces (albeit, for a certain hardware thread, only one virtual address space may be active at any time), the page tables may be only partially populated, just enough to provide translation support for those locations which are actually in use. To speed up the translation, in some implementations, the memory manager 108 may have an internal cache known as a translation look-aside buffer (TLB) (not shown), which stores the results of the most recent translations on a faster, lower latency memory. When the virtual address is to be mapped to the physical address in the data memory 102, the TLB may be searched first. In case a match is found, the corresponding physical address is returned. However, if no match is found, the address translation table may be searched, and the corresponding mapping of the physical address found therein may then be, optionally, stored in the TLB. The processor 104 is configured to provide the higher-privilege execution environment 112 that is managed by the memory manager 108 that controls access that one or more executable codes have to one or more portions of the data memory 102. In the present implementations, the higher-privilege execution environment 112 may be a trusted execution environment (TEE) or a hypervisor (EL2 in ARM parlance), as known in the art. Since they are mostly equivalent, from the perspective of regulating kernel access to the data being protected, the following disclosure will focus on the TEE, with the understanding that a similar method can be used also when a hypervisor (or any other higher-privilege context) is used in place of the TEE.
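The TLB fast path described above (search the cache first, fall back to a table walk, then refill the cache) may be sketched, purely as a non-limiting illustration with invented names and a toy identity-plus-offset table, as:

```c
#include <stdint.h>
#include <assert.h>

#define TLB_ENTRIES 4

/* A minimal direct-mapped TLB in front of a toy translation table. */
struct tlb_entry { uint32_t vpage, frame; int valid; };

static struct tlb_entry tlb[TLB_ENTRIES];
static int tlb_hits, tlb_misses;

/* Toy "address translation table" walk: identity-plus-offset mapping,
 * standing in for a real multi-level page table walk. */
static uint32_t table_walk(uint32_t vpage) { return vpage + 100; }

static uint32_t lookup(uint32_t vpage)
{
    int slot = vpage % TLB_ENTRIES;
    if (tlb[slot].valid && tlb[slot].vpage == vpage) {
        tlb_hits++;
        return tlb[slot].frame;             /* fast path: cached translation */
    }
    tlb_misses++;
    uint32_t frame = table_walk(vpage);     /* slow path: walk the tables */
    tlb[slot] = (struct tlb_entry){ vpage, frame, 1 };  /* refill the TLB */
    return frame;
}
```

The second lookup of the same virtual page is then served from the faster, lower latency cache rather than from the tables.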
Hereinafter, the terms “higher-privilege execution environment” and “trusted execution environment” have been interchangeably used, which generally refers to an environment comprising trusted program code that is isolated from other code located outside of the trusted execution environment and to which security policies are applied to provide secure execution of the program code. The TEE 112 may represent a secured and isolated environment for the execution of the user applications. The TEE 112 in a microprocessor system, such as the apparatus 100, is a way for the processor 104 therein to provide an additional, hardened, execution context, which is separated from the main environment and is expected to not be easily attackable, even after the primary environment has been compromised. One such example of the TEE 112 is an ARM TrustZone™, which is a system- wide approach to embedded security option for the ARM Cortex-based processor systems.
Typically, the TEE 112 works by creating two environments to run simultaneously on a single core of the processor 104. Of the two environments, one may be a “non-secure” environment and the other may be a “secure” environment. The TEE 112 may provide a switch mechanism to switch between the two environments. All codes, data and the user applications that need to be protected may be operated under the aforesaid secure environment, whereas the aforesaid non-secure environment may be the main environment or the primary environment and may include all the codes, data and the user applications which may either not require or not afford such high protection. This typically relies on some specific hardware feature that is directly under the control of the TEE 112, as opposed to the primary environment. The TEE 112 may also have the ability to transfer (“steal”) memory pages from the main environment, so that those memory pages may be neither read nor written. In the present implementation, the TEE 112, by transferring memory pages from the data memory 102, allows for the exchange of data between the secure and non-secure environments without having to replicate it, a so-called “zero copy” approach. The kernel is further configured to support a plurality of data contexts that are accessible to the one or more executable codes, while denying the one or more executable codes access to data contexts that are unrelated to the one or more executable codes. As discussed, the executable codes may need to access certain data, and the data may have certain attributes. Herein, all data having similar attributes, with respect to a particular executable code, may be grouped together and may be referred to as the data context. FIG. 2 is a schematic illustration of the TEE 112 implemented for segregating data, in accordance with an embodiment of the present disclosure.
Herein, the TEE 112 comprises a plurality of data contexts, with each data context comprising a plurality of memory pages. In the exemplary illustration of FIG. 2, the TEE 112 is shown to include two exemplary data contexts, namely a first data context 204 and a second data context 206, with the first data context 204 allocated two exemplary memory pages 208 and 210 of the data memory 102 and the second data context 206 allocated two exemplary memory pages 212 and 214 of the data memory 102. It may be appreciated that the given number of data contexts in the TEE 112 and the given number of memory pages in each of the data contexts are for illustration purposes only, and the actual numbers may be much more without any limitations. The kernel may give access to the specific data context(s) required by the executable codes, and the other data contexts would be hidden. For example, the executable code may need to access the first data context 204. Accordingly, the kernel may give the access for the first data context 204 to the executable code. However, the second data context 206 may be hidden from the executable code. Hence, the second data context 206 may be protected from the executable code.
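Purely as a non-limiting illustration of the access model of FIG. 2, each data context may be modelled as an exclusive set of pages, with an executable code granted access to one context while the pages of unrelated contexts stay hidden; the page numbers follow the figure and the function names are invented:

```c
#include <stdbool.h>

#define NUM_CONTEXTS  2
#define PAGES_PER_CTX 2

/* Each data context owns an exclusive, non-overlapping set of pages. */
struct data_context { int pages[PAGES_PER_CTX]; };

static const struct data_context contexts[NUM_CONTEXTS] = {
    { { 208, 210 } },   /* first data context 204  */
    { { 212, 214 } },   /* second data context 206 */
};

/* The context currently granted to the running executable code. */
static int active_ctx = 0;

/* Access succeeds only if the page belongs to the active context;
 * pages of unrelated contexts remain hidden from the code. */
static bool can_access(int page)
{
    for (int i = 0; i < PAGES_PER_CTX; i++)
        if (contexts[active_ctx].pages[i] == page)
            return true;
    return false;
}
```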
Referring back to FIG. 1, the data memory 102 is shown to include ‘n’ number of data contexts, namely a first data context 128, a second data context 130 and an nth data context 132, where n may be any positive integer. In the illustrated example of FIG. 1, the first data context 128 and the second data context 130 are managed by the TEE 112 and the nth data context 132 is not managed by the TEE 112. Hence, it may be understood that the first data context 128 and the second data context 130 may work in the secure environment and the nth data context 132 may work in the non-secure environment. That is, the first data context 128 and the second data context 130 may be secured and hidden from the user applications, other than the executable code which may be provided specific access thereto by the TEE 112. In case the executable code needs to access the first data context 128 and/or the second data context 130, the executable code may place a request to the TEE 112, and the TEE 112 may then selectively allow or deny the access. Again referring to FIG. 1, the memory manager 108 accesses an in-kernel memory management library 134 for implementing data segregation of data stored in the data memory 102, wherein data stored in the data memory 102 that belongs to a given data context is placed in corresponding data memory pages which are exclusively reserved for the given data context, and wherein other data contexts have other corresponding data pages that are non-overlapping with the data pages of the given data context. The in-kernel memory management library 134 may store information on how to segregate data according to data context. In an embodiment, the in-kernel memory management library 134 is “prmem.” The in-kernel memory management library 134 makes it possible to organize kernel data by affinity (i.e. by use case and by various other low-level properties) into pages, so that data belonging to the same use case will not be mixed with data belonging to other use cases.
Such an arrangement makes it possible to alter the properties of the data associated with a certain use case without interfering with others. Herein, the data which belongs to the given context may be placed within certain memory pages, which may be exclusively reserved for such context. The other data contexts, that is, the data having different contexts, may have other, non-overlapping, sets of pages. Therefore, the in-kernel memory management library 134 ensures that the memory properties associated with a certain context, which have page-level granularity, may not interfere with the properties of another context, by ensuring that the contexts are orthogonal to one another, at page level. Therefore, the in-kernel memory management library 134 provides both full write protection for constant data and controlled means of altering data which might be a target for an attack and should be kept un-writable by ordinary memory write operations.
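The affinity-based placement may be sketched, purely as a non-limiting illustration in the spirit of such an in-kernel library (the pool structure, single-page pools and function names are illustrative assumptions, not the actual prmem interface), as an allocator that serves each context only from pages reserved for that context:

```c
#include <stddef.h>

#define PAGE_SIZE 4096u

/* Each context draws allocations only from pages exclusively reserved
 * for it, so attributes applied at page granularity never spill into
 * another context's data. One page per context keeps the toy simple. */
struct ctx_pool {
    unsigned char page[PAGE_SIZE];   /* the context's reserved page */
    size_t used;
};

static struct ctx_pool pool_a, pool_b;   /* two segregated data contexts */

static void *ctx_alloc(struct ctx_pool *pool, size_t size)
{
    if (pool->used + size > PAGE_SIZE)
        return NULL;                 /* pool exhausted: a real library would
                                        reserve a further page for this context */
    void *p = pool->page + pool->used;
    pool->used += size;
    return p;                        /* never overlaps another context's pages */
}
```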
Optionally, the memory manager 108 segregates data in the data memory 102 into data that is at least one of selectively write protectable and selectively read protectable. As discussed, the in-kernel memory management library 134 allows data in the data memory 102 to be segregated. In the present embodiments, such segregated data may further be organized into the write protectable and the read protectable by the memory manager 108. Herein, the write protectable data may be the data that may not be modified or overwritten without permission but could be read by the executable code having access thereto; and the read protectable data may be the data that could only be read by the executable code having access thereto when its associated use case is active. The memory manager 108 ensures that the data that is write protectable may be grouped together and the data that is read protectable may be grouped together, so that they do not overlap. This allows selective access to be provided to the memory pages of one data context without needing to provide access to the other data context. Optionally, the trusted execution environment 112 when in operation temporally dynamically changes availability of certain data memory pages between a plurality of data contexts provided by the apparatus 100, to selectively allow or deny access of the kernel to certain sets of data memory pages, as a function of whether or not a given executable code is active at a given time in the apparatus 100. That is, after the segregation is done, the TEE 112 may toggle the availability of certain (sets of) data memory pages between different exception levels, namely the kernel level and the level of the TEE itself, in order to selectively allow/deny access to the certain set of data memory pages.
Typically, for the ARM architecture, in the Linux® OS, there may be four exception levels (also known as privilege levels): exception level 0 (EL0), exception level 1 (EL1), exception level 2 (EL2) and exception level 3 (EL3), with EL0 being the least privileged exception level. Typically, all the user applications have EL0 access. The kernel may run at EL1, the hypervisor 110 may run at EL2 and the firmware may run at EL3. The executable codes that are executing at one exception level may not have access to data (in the data memory 102) being accessed by other executable codes executing at the same exception level or at a higher exception level. However, if an executable code is executing at a higher exception level, such executable code may have access to the data being accessed by the lower exception levels. For instance, the kernel is executed at EL1 and the TEE 112 at EL3. Hence, all the data protected at the level of the TEE 112 may not be accessible by the kernel. In order to access the data context in the level of the TEE 112, the kernel may request the TEE 112 to change the accessibility of that specific data context to the kernel level. It may be appreciated that although in the present disclosure the term “exception level” has been used to describe the level of privilege, as is generally done in the context of the ARM architecture in the art, the aforesaid term may generally refer to an “execution context”, which generally entails anything a processor needs to define the environment to execute instructions therein. Moreover, it may be contemplated that other processor architectures may potentially implement a different number of privilege levels without any limitations.
As discussed, the TEE 112 can toggle the data contexts between two exception levels. The toggling between the two exception levels is based on which executable code associated with the kernel context is currently active. If certain executable code does not have a need to read/write certain data, the aforesaid data may be kept inaccessible to such executable code, without the executable code incurring any problem/penalty. For example, with reference to FIG. 2, if the first data context 204 is needed to be accessed by the executable code, the TEE 112 may assign the first data context 204 to EL1, while the second data context 206 may remain in the TEE level (EL3). Furthermore, if required, rather than providing access to the data contexts fully, only selective memory pages of the data context may be assigned. For example, the kernel may be given access to the memory page 208 of the first data context 204 and the memory page 214 of the second data context 206, while the memory page 210 of the first data context 204 and the memory page 212 of the second data context 206 may be hidden from the kernel.
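Purely as a non-limiting illustration, the TEE-side toggle described above may be sketched as follows; the page numbers follow the FIG. 2 example, and the enum names and functions are illustrative assumptions, not an actual TEE interface:

```c
#include <stdbool.h>

/* A page is readable by the kernel (EL1) only while the TEE has assigned
 * it there; otherwise it stays at the TEE level (EL3). */
enum level { EL1_KERNEL, EL3_TEE };

struct page_state { int page; enum level owner; };

static struct page_state pages[] = {
    { 208, EL3_TEE }, { 210, EL3_TEE },   /* first data context 204  */
    { 212, EL3_TEE }, { 214, EL3_TEE },   /* second data context 206 */
};
#define NPAGES (int)(sizeof pages / sizeof pages[0])

/* The TEE grants or revokes kernel access to one page. */
static void tee_assign(int page, enum level to)
{
    for (int i = 0; i < NPAGES; i++)
        if (pages[i].page == page)
            pages[i].owner = to;
}

static bool kernel_can_read(int page)
{
    for (int i = 0; i < NPAGES; i++)
        if (pages[i].page == page)
            return pages[i].owner == EL1_KERNEL;
    return false;
}
```

Granting pages 208 and 214 while leaving 210 and 212 at the TEE level reproduces the selective, per-page assignment of the example above.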
As discussed with reference to FIG. 1, the processor 104 may have multiple cores 116, 118, 120, 122 with each core adapted to work on one hardware thread at any given instant of time. It may be appreciated by a person skilled in the art that in spite of dynamically changing availability of certain data memory pages, the data memory 102 may still be vulnerable to attacks. For example, when certain data context is being accessed by one core (say, the first core 116) executing the legitimate code, it might be possible for another, rogue core (say, the third core 120) to access the very same data. This would be possible because typically, within the kernel, all of the data is accessible to every code and core.
Optionally, the apparatus 100 is configured to use a separate memory map for segregating each corresponding data context. That is, in order to enhance protection of the data memory 102, a separate memory map for each data context may be used. FIG. 3 is an exemplary schematic illustration of a memory map 300 implementing different mappings for the kernel data, in accordance with an embodiment of the present disclosure. The memory map 300, for implementing different mapping of the kernel data, may include a plurality of physical pages stored in the data memory 302, a primary memory map 304, a user space memory map 306 (also referred to as “secondary memory map 306”) and a barrier 308 provided by the TEE (such as the TEE 112). Herein, the data requiring only write protection is mapped in the primary memory map 304. Such data is not affected by the barrier 308 provided by the TEE. Furthermore, the data requiring read protection is mapped in the user space memory map 306. The mapping is exclusively a core-local mapping; that is, it is accessible exclusively to the core which created it. The accessibility of such data is controlled by the TEE, timewise, so that it can be read only when its associated use case is active.
In the present embodiment, the mappings for all data for the executable codes are recorded in the primary memory map 304 from the plurality of physical pages stored in the data memory 302. The data may be replicated in a context-specific copy associated to the certain data context. The replicated data may also contain the mapping for the associated data. The mapping mechanism which allows multiple user space processes to be mapped to the same values of address space on multiple cores, without such mappings overlapping, may also be exploited here. Such mechanism ensures that the user space mapping will be exclusively accessible to its local core, and it may therefore also prevent unauthorised access to the protected data context from compromised cores. It may be appreciated that the readable data and executable code may be the data requiring only write protection, and are thus mapped into the primary memory map 304. Hence, when the executable code requests access to the memory map 300 for the write-protectable data, the data is mapped in the primary memory map 304. Herein, the primary mapping is at EL1; hence the data mapped into the primary memory map 304 may be accessed by all the cores, but is write-protected, and thus cannot be tampered with.
Optionally, the apparatus 100 is configured to provide an isolation of user space memory mapping between CPU cores 116, 118, 120, 122 of the processor 104 and their associated hardware threads to assist the memory manager 108 in managing access of the executable codes to data contexts. Herein, a hardware thread may be a single sequence of instructions to be executed, with each user application generally having multiple threads. Referring back to FIG. 3, as illustrated, the data memory 302 includes a plurality of data contexts. As illustrated, the write-protectable data may be mapped from the data memory 302 into the primary memory map 304. For example, the data stored in a page 310 may be mapped to a page 314 of the primary memory map 304 (as represented by links 312). Furthermore, the read-protectable data may be mapped from the data memory 302 into the secondary memory map 306 (if the TEE provides access to the executable code). For example, the data stored in a page 316 is mapped to a page 320 of the secondary memory map 306 (as represented by link 318). Herein, the secondary memory map 306 is local to the core which needs access to the data. The barrier 308 provided by the TEE (such as, the TEE 112) isolates the data in the primary memory map 304 from that in the secondary memory map 306.
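The dual-mapping scheme of FIG. 3 can be sketched as a small access-control model. This is a minimal sketch under stated assumptions, not the actual implementation: the struct and function names are hypothetical, and the real mechanism operates through per-core page tables rather than flags. It captures the two rules described above: primary-mapped data is readable by every core but write-protected, while secondary-mapped data is accessible only to the core that created the mapping.

```c
/* Toy model of the primary (shared, write-protected) vs. secondary
 * (core-local, read-protected) mapping scheme. Names are illustrative. */
#include <assert.h>
#include <stdbool.h>

struct mapping {
    bool in_primary;   /* true: visible to every core, write-protected */
    int  owner_core;   /* valid only for secondary (core-local) mappings */
};

/* Write-protectable data goes into the primary map, shared by all cores. */
static struct mapping map_write_protected(void)
{
    return (struct mapping){ .in_primary = true, .owner_core = -1 };
}

/* Read-protectable data goes into a secondary map, local to one core. */
static struct mapping map_read_protected(int creating_core)
{
    return (struct mapping){ .in_primary = false, .owner_core = creating_core };
}

static bool can_read(const struct mapping *m, int core)
{
    /* Primary map: readable everywhere. Secondary: only the owner core. */
    return m->in_primary || m->owner_core == core;
}

static bool can_write(const struct mapping *m, int core)
{
    /* Primary-mapped data is write-protected for everyone. */
    return !m->in_primary && m->owner_core == core;
}
```

In this model, a rogue core (as in the example of the third core 120 above) simply has no mapping for another core's read-protected data, so the unauthorised access fails by construction rather than by a runtime check.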
Optionally, the trusted execution environment 112 includes one or more executable tools that are useable to validate a status of the kernel, and to refuse requests coming from the kernel when security of the kernel becomes compromised. As elucidated in the foregoing, the kernel is the core of the operating system 106. There is a risk, however: since the kernel is large and complex, it exposes a larger attack surface and thus poses a greater security risk than other software components. Once the kernel is compromised, the data protected by the operating system 106 is also at risk, i.e. any data in the data memory 102 accessed by the kernel may become available to the malicious program. By validating the status of the kernel, i.e. checking whether the kernel has been compromised or not, the TEE 112 can block requests from the kernel if it is determined to be compromised, and thereby prevent exploitation of data in the data memory 102. In one or more examples, the aforesaid executable tools useable to validate a status of the kernel may be hash functions, as known in the art.
Optionally, the apparatus 100 is configured to compute a hash of critical data of the kernel, wherein the apparatus 100 determines that the security of the kernel has been compromised when hashes of the critical data are mutually inconsistent. Herein, the critical data of the kernel may be the data that must not be modified by the malicious programs. Furthermore, the data to be protected can be interpreted to comprise data, for example, relating to a transient state in the apparatus (e.g. some important data in the Random Access Memory (RAM), Cross-point, or Flash, etc.). For a microprocessor system, the data to be protected may be system-level data, for example, the data relating to the operating system. For a microcontroller, the data to be protected may be application data relating to the operating system and the application software.
As mentioned, the technique used for checking integrity of the kernel may utilize a hash. Typically, the hash may be a code in the form of a string of numbers. The hash of the critical data may be consistent for the kernel. However, if the kernel is compromised, that is, if the kernel is attacked by the malicious programs, the hash of the critical data may change. In order to check whether the kernel is compromised, the hash of the critical data may be compared with the known hash value for the same. If it matches, such data may be considered safe; otherwise, such data may be considered compromised. In an example implementation form, the hash may be computed periodically. In another example implementation form, the hash may be computed just-in-time, in an event-driven fashion. Such periodical and/or event-driven checking ensures that a compromised kernel is detected sooner, so that the TEE 112 may block requests from such a compromised kernel at an early stage and prevent possible data exploitation by the compromised kernel (as discussed above).
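The hash-based check described above can be sketched as follows. The disclosure does not specify which hash function is used; FNV-1a stands in here purely for illustration (a production integrity check would use a cryptographic hash), and the record_baseline/integrity_ok names are assumptions of this sketch.

```c
/* Minimal sketch of hash-based integrity checking of critical kernel data.
 * FNV-1a is a placeholder for the real (cryptographic) hash; names are
 * illustrative. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

static uint64_t fnv1a(const void *data, size_t len)
{
    const uint8_t *p = data;
    uint64_t h = 0xcbf29ce484222325ULL;       /* FNV offset basis */
    for (size_t i = 0; i < len; i++) {
        h ^= p[i];
        h *= 0x100000001b3ULL;                /* FNV prime */
    }
    return h;
}

struct integrity_record {
    uint64_t baseline;   /* hash taken while the data was known-good */
};

static void record_baseline(struct integrity_record *r,
                            const void *data, size_t len)
{
    r->baseline = fnv1a(data, len);
}

/* Returns true if the data still matches the known-good hash; a mismatch
 * would cause the TEE to refuse further requests from the kernel. */
static bool integrity_ok(const struct integrity_record *r,
                         const void *data, size_t len)
{
    return fnv1a(data, len) == r->baseline;
}
```

Whether invoked periodically or in an event-driven fashion, the check is the same comparison: recompute the hash of the critical data and compare it against the baseline recorded while the kernel was known to be uncompromised.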
Optionally, the apparatus 100 uses the hypervisor 110 to manage the trusted execution environment 112. The hypervisor 110 may, generally, create and manage a special type of process space referred to as a virtual machine, which is a type of process space adapted to emulate a physical computer; the hypervisor 110 is typically configured to execute multiple virtual machines on a single computing device, such as the apparatus 100. The hypervisor 110 is usually a small piece of code, and thus may be stored in the non-volatile memory as firmware. As aforementioned, typically in ARM64 architecture, the exception level of the hypervisor 110 is EL2, which is greater than the exception level of the kernel. Hence, the hypervisor 110 may be completely hidden from the kernel and the user space. That is, the kernel and the user may not even know of the existence of the hypervisor 110. Thus, the hypervisor 110 may further enhance the security of the data memory 102 by preventing the kernel from gaining access to the TEE 112 unnecessarily.
The apparatus of the present disclosure provides protection to the data memory 102 in the apparatus 100. This is achieved by managing access to the data memory 102 for protection against unwarranted executable codes. The present disclosure provides enhancements to the in-kernel memory management library 134 (for example, prmem) to introduce the “read protected” property, utilizes the trusted execution environment (TEE) 112 for granting/removing the kernel's access to whole sets of memory pages, and leverages support for isolation of the user space memory map 114 between the CPU cores 116, 118, 120, 122 of the processor 104 and hardware threads. By leveraging their overlapping effects, it is possible to have multiple data contexts, which are accessible only to the executable code that is meant to deal with each of them respectively, while preventing access by unrelated executable code, even while said data context is being accessed by its legitimate user. All of this can be done primarily from within the kernel exception level, keeping the involvement of the TEE 112 at a minimum and thus reducing the overheads.
As discussed, with the in-kernel memory management library 134, the concept of data segregation is implemented, so that data which belongs to a specific context is placed within certain memory pages, which are exclusively reserved for that context; different contexts have other, non-overlapping sets of pages. The purpose of the in-kernel memory management library 134 is to ensure that the memory properties associated to a certain context, which have page-level granularity, will not interfere with the properties of another context, by ensuring that each context is orthogonal to every other context, at page level.
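The page-level segregation described above can be sketched with a toy allocator. This is not the actual prmem implementation; the pool layout, ctx_grab_page and ctx_alloc names, and the toy 64-byte page size are all assumptions of this sketch. The point it demonstrates is the invariant: every page has exactly one owning context, so allocations from different contexts can never land on the same page.

```c
/* Toy page-granular allocator in the spirit of the in-kernel memory
 * management library: each context draws from its own exclusive pages.
 * Names and sizes are illustrative. */
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE  64      /* toy page size for the sketch */
#define NPOOL      8

static unsigned char pool[NPOOL][PAGE_SIZE];
static int page_owner[NPOOL];           /* 0 = free, otherwise context id */

/* Reserve a whole free page for a context; returns page index or -1. */
static int ctx_grab_page(int ctx)
{
    for (int i = 0; i < NPOOL; i++) {
        if (page_owner[i] == 0) {
            page_owner[i] = ctx;
            return i;
        }
    }
    return -1;
}

struct ctx_pool { int ctx; int page; size_t used; };

/* Bump-allocate within the context's current page, grabbing a fresh
 * exclusive page whenever the current one is absent or full. */
static void *ctx_alloc(struct ctx_pool *cp, size_t size)
{
    if (cp->page < 0 || cp->used + size > PAGE_SIZE) {
        cp->page = ctx_grab_page(cp->ctx);
        cp->used = 0;
        if (cp->page < 0)
            return NULL;
    }
    void *p = &pool[cp->page][cp->used];
    cp->used += size;
    return p;
}

/* Which page does an allocation live on? (used to verify segregation) */
static int page_of(const void *p)
{
    return (int)(((const unsigned char *)p - &pool[0][0]) / PAGE_SIZE);
}
```

Because ownership is per page, page-granular protections (write-protect, read-protect, TEE toggling) applied to one context's pages can never affect another context's data — the orthogonality property the library is meant to guarantee.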
Furthermore, since the TEE 112 can toggle the availability of a certain (set of) pages between different exception levels (namely, the kernel level and the level of the TEE itself), it can now be used, with the segregation performed through the in-kernel memory management library 134, to selectively allow/deny access to a certain set of pages, based on them being associated to the kernel context which is currently active. If certain code does not need to read/write certain data, the aforesaid data can be kept inaccessible to such code, without the code incurring any problem/penalty, assuming that the code has not been hijacked and is not behaving abnormally, in which case it is desirable to interfere with the abnormal behaviour.
Furthermore, the use of a separate memory map for each data context enables multiple user space processes to be mapped to the same values of address space on multiple cores, without such mappings overlapping. Such a mechanism ensures that the user space mapping will be exclusively accessible to its local core, and it can therefore also prevent unauthorised access to the protected data context from compromised cores.
In the disclosure, since the additional mapping is modelled after the mapping performed for a user space process, it is possible to have as many additional mappings as needed. Each secondary mapping may contain exclusively those pages relevant for its associated use case. This ensures that, should a use case somehow be compromised, the others remain out of reach. Furthermore, since the TEE affects the memory availability to the kernel exception level at page granularity, the TEE-based protection may also support having a multitude of orthogonal secondary mappings. A similar mechanism may also be transposed to isolate part of the data of a user-space process from the user space code which is not supposed to access it. In this example case, the kernel would provide the enforcing backend residing in a more privileged exception level, while it would be necessary to port “prmem” to user space, so that it could be used instead of the traditional “vmalloc”.
FIG. 4 is a flowchart 400 of a method for (namely, a method of) operating an apparatus (such as, the apparatus 100) comprising the processor coupled to the data memory, wherein the processor and the data memory are configured to implement a kernel in which an operating system is executed. The various embodiments and variants disclosed above apply mutatis mutandis to the present method. At a step 402, the method comprises configuring the kernel to execute a memory manager that determines access that one or more executable codes have to the data memory. At a step 404, the method comprises configuring the processor to provide a trusted execution environment that is managed by the memory manager for the one or more executable codes to access one or more portions of the data memory. At a step 406, the method comprises arranging for the kernel to support a plurality of data contexts which are accessible to the one or more executable codes, while denying the one or more executable codes access to data contexts that are unrelated to the one or more executable codes. The present disclosure also provides a non-transitory computer readable media having stored thereon program instructions that when executed by a processor cause the processor to perform the method. The various embodiments and variants disclosed above apply mutatis mutandis to the present non-transitory computer readable media.
The apparatus and method of the disclosure may be implemented in a number of applications for the read protection of selected data. The apparatus and method protect the data in two ways: firstly, exfiltration of secrets which might be of use to the attacker may be averted, and secondly, additional hardening for the write-protected data is provided by hiding the memory content. Thus, it becomes even harder for an attacker attempting to modify the kernel data, because the data may not be easily located. The apparatus and the method of the disclosure may be suitable for protection of encryption keys, media access control (MAC) addresses, separate wallets for similar but orthogonal use cases, and the like. Herein, the encryption keys are sets of random, unpredictable and unique strings used to encrypt information, and the media access control (MAC) address is a unique string assigned to a network interface controller (NIC).
It may be noted that the apparatus and the method of the disclosure may be applied to any device that has a memory manager based, even loosely, on the architecture described in the disclosure. This means that the application of the concepts listed in the disclosure is not specific to any particular type of processor. For example, they may be applied to x86/x86_64, ARM/ARM64, RISC-V and the like. The only requirement is that there must be some additional mode, such as the TEE or the hypervisor, with higher privileges.
The apparatus and the method of the disclosure for the protection of data memory are advantageous both performance-wise and code-maintenance-wise. Compared to the standard implementation where secrets are handled in the TEE, the disclosure provides an improvement quantifiable based on the number of individual TEE invocations replaced by the single context switch. From the perspective of hardening existing kernel code, it is desirable to minimize the extent of changes required; in case of upgrading the baseline, the amount of changes that must be migrated is then small. Secondly, compared to the case where the whole handling of the secrets must be moved from the kernel to the TEE, it is not necessary to rework the code being replicated to make it compatible with the TEE environment, which is likely to be different. In the very likely case that the TEE code has a license which is incompatible with the kernel license, it is not necessary to create a “clean-room” re-implementation of the handling of secrets, which risks introducing a new set of defects. The same code may be used across platforms which may or may not have the capability of handling secrets, streamlining the release management.
Modifications to embodiments of the present disclosure described in the foregoing are possible without departing from the scope of the present disclosure as defined by the accompanying claims. Expressions such as "including", "comprising", "incorporating", "have", "is" used to describe and claim the present disclosure are intended to be construed in a non-exclusive manner, namely allowing for items, components or elements not explicitly described also to be present. Reference to the singular is also to be construed to relate to the plural. The word "exemplary" is used herein to mean "serving as an example, instance or illustration". Any embodiment described as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments and/or to exclude the incorporation of features from other embodiments. The word "optionally" is used herein to mean "is provided in some embodiments and not provided in other embodiments". It is appreciated that certain features of the present disclosure, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable combination or as suitable in any other described embodiment of the disclosure.

Claims

1. An apparatus (100) comprising a processor (104) coupled to a data memory (102, 302), wherein the processor and the data memory are configured to implement a kernel in which an operating system (106) is executed,
- wherein the kernel is configured to execute a memory manager (108) that determines access that the kernel has to the data memory,
- wherein the processor is configured to provide a higher-privilege execution environment (112) that is managed by the memory manager that controls access that one or more executable codes have to one or more portions of the data memory, and
- wherein the kernel is configured to support a plurality of data contexts (128, 130, 132, 204, 206) that are accessible to the one or more executable codes, while denying the one or more executable codes access to data contexts that are unrelated to the one or more executable codes.
2. An apparatus (100) of claim 1, wherein the memory manager (108) accesses an in-kernel memory management library (134) for implementing data segregation of data stored in the data memory, wherein data stored in the data memory that belongs to a given data context is placed in corresponding data memory pages which are exclusively reserved for the given data context, and wherein other data contexts have other corresponding data pages that are non-overlapping with the data pages of the given data context.
3. An apparatus (100) of claim 2, wherein the memory manager (108) segregates data in the data memory (102) into data that is at least one of selectively write protectable and selectively read protectable.
4. An apparatus (100) of claim 2 or 3, wherein the higher-privilege execution environment (112) when in operation temporally dynamically changes availability of certain data memory pages between a plurality of data contexts provided by the apparatus, to selectively allow or deny access of the kernel to certain sets of data memory pages, as a function of whether or not a given executable code is active at a given time in the apparatus.
5. An apparatus (100) of any one of claims 2 to 4, wherein the apparatus is configured to use a separate memory map for segregating each corresponding data context.
6. An apparatus (100) of any one of claims 2 to 5, wherein the kernel has a primary memory map (304), wherein all readable data and executable code are recorded into the primary memory map (304).
7. An apparatus (100) of any one of the preceding claims, wherein the apparatus (100) is configured to provide an isolation of user space memory mapping between CPU cores (116, 118, 120, 122) of the processor (104) and their associated hardware threads to assist the memory manager (108) to manage access of the executable codes to data contexts (128, 130, 132, 204, 206).
8. An apparatus (100) of claim 1, wherein the higher-privilege execution environment (112) includes one or more executable tools that are useable to validate a status of the kernel, and to refuse requests coming from the kernel when security of the kernel becomes compromised.
9. An apparatus (100) of claim 8, wherein the apparatus is configured to compute a hash of critical data of the kernel, wherein the apparatus determines that the security of the kernel has been compromised when hashes of the critical data are mutually inconsistent.
10. An apparatus (100) of claim 9, wherein the apparatus uses a hypervisor (110) to manage the higher-privilege execution environment (112).
11. An apparatus (100) of any one of the preceding claims, wherein the kernel is Linux®.
12. A method (400) for operating an apparatus comprising a processor coupled to a data memory, wherein the processor and the data memory are configured to implement a kernel in which an operating system is executed, wherein the method includes:
(i) configuring the kernel to execute a memory manager that determines access that one or more executable codes have to the data memory,
(ii) configuring the processor to provide a higher-privilege execution environment that is managed by the memory manager for the one or more executable codes to access one or more portions of the data memory, and
(iii) arranging for the kernel to support a plurality of data contexts which are accessible to the one or more executable codes, while denying the one or more executable codes access to data contexts that are unrelated to the one or more executable codes.
13. A non-transitory computer readable media having stored thereon program instructions that when executed by a processor cause the processor to perform the method according to claim 12.
PCT/EP2020/087352 2020-12-20 2020-12-20 Apparatus and method for managing access to data memory by executable codes based on execution context WO2022128142A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/EP2020/087352 WO2022128142A1 (en) 2020-12-20 2020-12-20 Apparatus and method for managing access to data memory by executable codes based on execution context
CN202080107892.9A CN116635855A (en) 2020-12-20 2020-12-20 Apparatus and method for managing access of executable code to data memory based on execution context

Publications (1)

Publication Number Publication Date
WO2022128142A1 true WO2022128142A1 (en) 2022-06-23

Family

ID=74175798

Country Status (2)

Country Link
CN (1) CN116635855A (en)
WO (1) WO2022128142A1 (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3726390A1 (en) * 2018-02-02 2020-10-21 Huawei Technologies Co., Ltd. Method and device for protecting kernel integrity

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
PETER A LOSCOCCO ET AL: "Linux kernel integrity measurement using contextual inspection", CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY. STC'07, PROCEEDINGS OF THE 2007 ACM WORKSHOP ON SCALABLE TRUSTED COMPUTING. ALEXANDRIA, VIRGINIA, USA, NOVEMBER 2, 2007, ACM, 2 PENN PLAZA, SUITE 701 NEW YORK NY 10121-0701 USA, 2 November 2007 (2007-11-02), pages 21 - 29, XP058345570, ISBN: 978-1-59593-888-6, DOI: 10.1145/1314354.1314362 *
ZHANG ZHANGKAI ET AL: "H-Securebox: A Hardened Memory Data Protection Framework on ARM Devices", 2018 IEEE THIRD INTERNATIONAL CONFERENCE ON DATA SCIENCE IN CYBERSPACE (DSC), IEEE, 18 June 2018 (2018-06-18), pages 325 - 332, XP033375373, DOI: 10.1109/DSC.2018.00053 *

Also Published As

Publication number Publication date
CN116635855A (en) 2023-08-22

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20838992

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 202080107892.9

Country of ref document: CN

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20838992

Country of ref document: EP

Kind code of ref document: A1