CN111984374B - Method for managing secure memory, system, apparatus and storage medium therefor - Google Patents

Info

Publication number
CN111984374B
CN111984374B (application CN202010843623.0A)
Authority
CN
China
Prior art keywords
memory
page
secure
secure memory
free space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010843623.0A
Other languages
Chinese (zh)
Other versions
CN111984374A (en)
Inventor
姜新 (Jiang Xin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Haiguang Information Technology Co Ltd
Original Assignee
Haiguang Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Haiguang Information Technology Co Ltd filed Critical Haiguang Information Technology Co Ltd
Priority to CN202010843623.0A
Publication of CN111984374A
Application granted
Publication of CN111984374B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 2009/45583 Memory management, e.g. access or allocation
    • G06F 2009/45587 Isolation or security of virtual machine instances

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention provides a method for managing secure memory, and a system, apparatus, and storage medium therefor. The method is performed by a secure processor that is distinct and separate from the CPU included in a host; the host includes secure memory, and a virtual machine runs on the host. The method comprises: receiving a message containing page-fault information from the CPU; allocating secure memory for a first memory page according to the page-fault information in the received message; and pre-allocating secure memory for one or more second memory pages according to the page-fault information in the received message.

Description

Method for managing secure memory, system, apparatus and storage medium therefor
Technical Field
The present invention relates to virtual machines, and more particularly, to a method for managing secure memory, and a system, apparatus, and storage medium therefor.
Background
The memory management strategy of a host operating system kernel is to maximize memory utilization: the memory of an ordinary virtual machine is managed by the host OS kernel, and actual physical memory is allocated to a virtual machine memory page only when a nested page fault occurs on that page. This memory management runs entirely inside the Central Processing Unit (CPU) and involves no communication with a secure processor, making it an efficient memory management method.
Disclosure of Invention
The memory of a memory-isolated secure virtual machine is managed by the secure processor. Specifically, for a host that supports memory-isolated secure virtual machines, the host reserves a region of memory during boot as a Secure Memory Region (SMR) for running the processes of secure virtual machines. The host CPU cannot access this reserved memory; it is managed by the secure processor. Because the secure processor is a device independent of the CPU, the time overhead of its communication with the host CPU is not negligible. Under such a system architecture, the memory-access bottleneck is the communication between the host CPU and the secure processor.
When a nested page fault occurs while a memory-isolated secure virtual machine is running, the host CPU forwards the page-fault information to the secure processor, which builds the nested page table and allocates secure memory for the virtual machine memory page involved in the fault. If the secure processor allocated secure memory in advance for all memory pages of the virtual machine, the pages never accessed by the CPU would go unused, wasting precious secure memory and reducing its utilization. If instead the secure processor adopted the same memory management scheme as the host CPU, allocating secure memory only for the virtual machine page on which a fault occurred, it would likely communicate with the host CPU frequently, increasing communication overhead and degrading the performance of virtual machine accesses to secure memory.
Therefore, a method for managing secure memory is needed that properly balances access performance against secure-memory utilization.
One aspect of the present invention discloses a method for managing secure memory by a secure processor, wherein the secure processor is distinct and separate from the CPU included in a host, the host includes the secure memory, and a virtual machine runs on the host. The method comprises: receiving a message containing page-fault information from the CPU; allocating secure memory for a first memory page according to the page-fault information in the received message; and pre-allocating secure memory for one or more second memory pages according to the page-fault information in the received message.
Further, a method according to an embodiment of the present disclosure, wherein the page fault information includes a page address and a size of the first memory page.
Furthermore, according to a method of an embodiment of the present disclosure, the page fault information further includes a page address of one or more second memory pages.
Further, a method according to an embodiment of the present disclosure, wherein the first memory page is a memory page on which a page fault has occurred, and the second memory page is a memory page on which no page fault has occurred.
Furthermore, a method according to an embodiment of the present disclosure, wherein the address of the secure memory allocated for the first memory page and the addresses of the secure memory allocated for the second memory pages form a contiguous range of addresses.
Furthermore, a method according to an embodiment of the present disclosure, wherein the address of the secure memory allocated for the first memory page lies at the start, at any middle position, or at the end of the contiguous range of addresses.
Further, a method in accordance with an embodiment of the present disclosure, wherein the first memory page is associated with one or more second memory pages.
Further, a method according to an embodiment of the present disclosure, wherein page addresses of the first memory page and the one or more second memory pages are consecutive.
Further, a method in accordance with an embodiment of the present disclosure, wherein the contents of the first memory page and the one or more second memory pages are associated.
Furthermore, a method according to an embodiment of the present disclosure further includes reclaiming the allocated portion of the secure memory according to free space of the secure memory.
Furthermore, according to a method of an embodiment of the present disclosure, reclaiming the allocated portion of the secure memory according to the free space of the secure memory includes: if the free space of the secure memory is less than or equal to a first threshold, checking and reclaiming the allocated portion of the secure memory; if the free space of the secure memory is greater than the first threshold, checking and reclaiming the allocated portion of the secure memory when the page-fault count satisfies a condition associated with a recycling period.
Further, a method according to an embodiment of the present disclosure, wherein after checking and reclaiming the allocated portion of secure memory: if the free space of the secure memory is greater than the first threshold, secure memory is allocated to the one or more second memory pages; or, if the free space of the secure memory is less than or equal to the first threshold, secure memory is not allocated to the one or more second memory pages.
In addition, according to the method of the embodiment of the present disclosure, when the free space of the secure memory is greater than the first threshold and less than or equal to a second threshold, the recycling period is set to a first recycling period; when the free space of the secure memory is greater than the second threshold and less than or equal to a third threshold, the recycling period is set to a second recycling period; and when the free space of the secure memory is greater than the third threshold, the recycling period is set to a third recycling period, wherein the third threshold is greater than the second threshold, and the second threshold is greater than the first threshold.
Further, a method according to an embodiment of the disclosure, wherein the third recycling period is greater than the second recycling period, the second recycling period being greater than the first recycling period.
Further, a method according to an embodiment of the present disclosure, wherein the page-fault count satisfies the condition associated with the recycling period when the page-fault count is an integer multiple of the recycling period.
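The threshold-and-period reclamation policy described in the preceding paragraphs can be sketched as follows. This is an illustrative model only: the function names (`recycle_period`, `should_reclaim`) and the concrete threshold/period values in the assertions are assumptions, since the patent does not prescribe specific values.

```python
def recycle_period(free, t2, t3, p1, p2, p3):
    """Pick a recycling period from free-space thresholds, where t2 < t3.

    More free secure memory means reclamation can run less often, so the
    periods satisfy p1 < p2 < p3 (first < second < third recycling period).
    """
    if free <= t2:      # free space in (first threshold, second threshold]
        return p1
    if free <= t3:      # free space in (second threshold, third threshold]
        return p2
    return p3           # free space above the third threshold

def should_reclaim(free, fault_count, t1, period):
    """Decide whether to check and reclaim allocated secure memory now."""
    # Low on free space: check and reclaim immediately.
    if free <= t1:
        return True
    # Otherwise reclaim only when the page-fault count hits a multiple
    # of the recycling period.
    return fault_count > 0 and fault_count % period == 0
```

With example thresholds t1=10, t2=40, t3=80 and periods 4, 16, 64, a free-space level of 50 pages selects the second period (16), and reclamation then runs on every 16th page fault.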
Another aspect of the present invention discloses a system for managing secure memory, the system comprising: a host comprising a CPU and secure memory, wherein a virtual machine runs on the host; and a secure processor distinct and separate from the CPU and configured to: receive a message containing page-fault information from the CPU; allocate secure memory for a first memory page according to the page-fault information in the received message; and pre-allocate secure memory for one or more second memory pages according to the page-fault information in the received message.
Yet another aspect of the invention discloses a secure processor for managing secure memory, the secure processor being distinct and separate from a CPU on a host and configured to implement any of the methods described above.
Yet another aspect of the present invention discloses a non-transitory computer-readable recording medium having recorded thereon program code for dynamically pre-allocating secure memory of a virtual machine, the program code, when executed by a computer, performing any one of the methods described above.
In the invention, when the virtual machine takes a nested page fault, the host CPU sends the secure processor a range of memory page addresses; the secure processor then allocates secure memory for the current faulting address and pre-allocates secure memory for the designated virtual machine RAM interval. This reduces the communication overhead between the CPU and the secure processor and improves the virtual machine's secure-memory read/write performance. At the same time, the secure processor periodically reclaims allocated secure pages that the CPU has not accessed. This avoids wasting secure memory, maximizes secure-memory utilization, and effectively balances access performance against utilization.
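The allocate-plus-pre-allocate behavior summarized above can be sketched as a toy model. This is not the patented implementation: the class and method names are hypothetical, the nested page table is modeled as a plain dictionary mapping GPA to HPA, and allocation draws frames from a simple free list.

```python
PAGE_SIZE = 0x1000  # 4 KB pages

class SecureMemoryManager:
    """Toy model of the secure processor's allocator (hypothetical names)."""

    def __init__(self, free_frames):
        self.free = list(free_frames)  # free secure-memory frames (HPAs)
        self.nested_pt = {}            # stands in for the nested page table: GPA -> HPA
        self.fault_count = 0

    def handle_nested_page_fault(self, fault_gpa, ram_start, ram_end):
        """Map the faulting page, then pre-allocate the rest of [ram_start, ram_end]."""
        self.fault_count += 1
        for gpa in range(ram_start, ram_end + PAGE_SIZE, PAGE_SIZE):
            if gpa not in self.nested_pt and self.free:
                self.nested_pt[gpa] = self.free.pop(0)
        return self.nested_pt[fault_gpa]

mgr = SecureMemoryManager(free_frames=[0x100000 + i * PAGE_SIZE for i in range(8)])
hpa = mgr.handle_nested_page_fault(0x2000, ram_start=0x1000, ram_end=0x4000)
# One fault mapped the faulting page (0x2000) and pre-allocated the other
# three pages of the interval, so a later access to e.g. 0x3000 needs no
# further CPU-to-secure-processor round trip.
```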
It is to be understood that both the foregoing general description and the following detailed description are exemplary and are intended to provide further explanation of the claimed technology.
Drawings
Fig. 1 shows a schematic diagram of a system architecture according to an embodiment of the invention.
FIG. 2 shows a flow diagram of a method for allocating secure memory by a secure processor, according to an embodiment of the invention.
FIG. 3 shows a schematic diagram of a process of translating virtual machine physical addresses, according to an embodiment of the invention.
FIG. 4 shows another schematic diagram of a process of translating virtual machine physical addresses according to an embodiment of the invention.
Fig. 5A-5D show schematic diagrams of data structures for the processes of fig. 3 and/or 4, according to embodiments of the invention.
FIG. 6 illustrates a flow diagram of a method for pre-allocating secure memory by a secure processor, according to an embodiment of the invention.
FIG. 7 is a diagram illustrating physical addresses of a segment of secure memory allocated according to an embodiment of the invention.
FIG. 8 illustrates another flow diagram of a method for pre-allocating secure memory by a secure processor, according to an embodiment of the invention.
FIG. 9 illustrates yet another flow diagram of a method for pre-allocating secure memory by a secure processor, in accordance with an embodiment of the present invention.
FIG. 10 illustrates a schematic diagram of opportunities for checking and reclaiming allocated pages according to an embodiment of the invention.
FIG. 11 sets forth a further flow chart of a method for pre-allocating secure memory by a secure processor according to embodiments of the present invention.
FIG. 12 shows a schematic diagram for setting a recycling period, according to an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only some embodiments of the present disclosure, not all embodiments. The components of the embodiments of the present disclosure, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present disclosure, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
Fig. 1 shows a schematic diagram of a system architecture 100 according to an embodiment of the invention.
Referring to FIG. 1, a system architecture 100 is a general system architecture for implementing the present invention. The system architecture 100 may be a computer-based system architecture and may include a host (not shown). For example, the host may be a mainframe computer, personal computer, industrial computer, or other computer capable of being configured to run a virtual machine.
The host may include a memory and a Central Processing Unit (CPU), wherein the memory may be divided into a normal memory 102A for running host processes and a secure memory 102B for running a virtual machine Operating System (OS) and its processes. In some embodiments, the secure memory 102B may be initialized during the boot process of the host, for example as a memory region logically isolated from the normal memory 102A within the original memory region and initialized by the host at boot time. However, the invention is not limited thereto, and the secure memory 102B may instead be a memory region physically separated from the normal memory 102A. For example, secure memory 102B may be used for certain functions, such as functions with special security and/or encryption requirements. For example, the secure memory 102B may be used for the execution of a virtual machine (also referred to as a "secure virtual machine"), such as the execution of a virtual machine OS and its processes.
The host may also include a secure processor 101 and a Central Processing Unit (CPU) 103.
CPU 103 may include a Memory Management Unit (MMU) (not shown) for managing the allocation of memory (e.g., normal memory 102A); this management does not involve the secure processor 101, and the CPU may, for example, be configured to have no access to the secure memory 102B. CPU 103 may manage communication between host OS 104 and its processes 105A-105N and memory (e.g., normal memory 102A). CPU 103 may also manage communication to the secure processor 101.
The secure processor 101 may be used to manage allocation of the secure memory 102B to run the virtual machine OSs 106A-106N and their processes, e.g., respective virtual machine processes (not shown) running on respective virtual machine OSs 106A-106N. The secure processor 101 is physically independent from the CPU 103, and therefore, the time overhead of communication between the secure processor 101 and the CPU 103 cannot be ignored. In many cases, this time overhead is a bottleneck to memory access performance.
In some embodiments, when the host OS needs to run host processes 105A-105N, the CPU may maintain page tables for each process 105A-105N. The page table may record a page Address (e.g., referred to as a "Virtual Address (VA)") of one or more memory pages corresponding to a process when the process is loaded, and a Physical Address (PA) of a Physical memory storing a Physical resource corresponding to the page Address. In other words, the page table may record the mapping relationship between the virtual address and the physical address, for example, in the form of a table or a function.
The MMU may be configured to translate virtual addresses to physical addresses based on page tables (e.g., CR3/nCR3 pointing page tables/nested page tables) to thereby call memory pages of processes that need to be run onto memory, forming an executable program. After the program in the memory is run, the program is released and then the memory can be recycled, so that the utilization rate of the memory space can be improved. In some embodiments, the page table may be a single level page table or a multi-level page table or a nested page table.
In system architecture 100 according to embodiments of the present invention, CR3 is a register within CPU 103 or the virtual machine that holds the base address of a page table, and nCR3 is a register within secure processor 101 that holds the base address of a nested page table. The CR3 within CPU 103 points to a page table used to translate Host Virtual Addresses (HVAs) to Host Physical Addresses (HPAs), while the CR3 within the virtual machine points to a page table used to translate virtual machine virtual addresses (GVAs) to virtual machine physical addresses (GPAs). The nested page table pointed to by nCR3 is used to translate GPAs to HPAs.
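The two-stage translation just described (guest CR3 for GVA to GPA, then nCR3 for GPA to HPA) can be illustrated with a toy model in which each table is a page-granular dictionary; real hardware walks multi-level tables instead, and all names and addresses here are assumptions.

```python
# Stage 1 table: maintained by the virtual machine via its CR3 (GVA -> GPA).
guest_page_table = {0x7000: 0x2000}
# Stage 2 table: maintained by the secure processor via nCR3 (GPA -> HPA).
nested_page_table = {0x2000: 0x9000}

def translate(gva):
    """Compose the two translation stages for one page-granular address."""
    gpa = guest_page_table[gva]    # stage 1: walk the guest's page table
    hpa = nested_page_table[gpa]   # stage 2: walk the nested page table
    return hpa

assert translate(0x7000) == 0x9000
```

A missing entry in `nested_page_table` here would raise a `KeyError`, which plays the role of the nested page fault discussed below.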
In system architecture 100 according to embodiments of the present invention, virtual machine processes (not shown) of virtual machine OSs 106A-106N need to be pushed to secure memory 102B to run. Similar to running host processes 105A-105N, when running a virtual machine process, the virtual machine virtual address (GVA) is translated by the virtual machine through the CR3 thereon to a virtual machine physical address (GPA). The secure processor 101 then builds and maintains nested page tables (via the nCR3) corresponding to the virtual machine processes to translate virtual machine physical addresses (GPA) to Host Physical Addresses (HPA) to complete the mapping of virtual machine resources to host resources.
Memory management using page tables/nested page tables tends to suffer from problems such as page faults (also known as page-fault interrupts or page-fault exceptions). Generally, a page fault means that the corresponding physical address (e.g., HPA or GPA) cannot be found, using an HVA, GVA, or GPA, in the page table/nested page table pointed to by CR3/nCR3. In other words, when the MMU within CPU 103 translates the HVA, GVA, or GPA based on the page/nested page tables, it finds no physical page corresponding to it. At this point, the physical address (e.g., HPA or GPA) of a corresponding physical page needs to be added to the page table/nested page table so that the process can run smoothly. Thus, one way to address a page fault is to allocate a physical address (e.g., HPA or GPA) to the memory page on which the page fault occurred.
According to an embodiment of the present invention, when a host process fails in a page fault, the page table pointed to by the host's CR3 is updated by the CPU 103. When a page fault occurs on the virtual machine: if the page fault is a page fault that occurred while translating GVAs to GPAs, the page table pointed to by the CR3 of the virtual machine is updated by the virtual machine; if the page fault is a nested page fault that occurs when translating GPAs into HPAs, the CPU 103 informs the secure processor 101 of the information about the occurrence of the nested page fault so that the secure processor 101 updates the nested page table pointed to by the nCR3, e.g., by allocating secure memory for the virtual machine memory page in which the nested page fault occurred to eliminate the nested page fault. According to an embodiment of the invention, a secure processor allocates secure memory by modifying or updating a Page Table Entry (PTE) of a nested Page Table.
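The fault-routing rule in the paragraph above can be sketched as a small dispatcher. The function name and the origin labels are hypothetical; each branch simply names the component responsible for updating the corresponding table.

```python
def route_page_fault(origin):
    """Return which component handles a fault from the given translation stage."""
    if origin == "host":
        # Host process fault: CPU 103 updates the host's CR3 page table.
        return "CPU 103 updates the page table pointed to by the host CR3"
    if origin == "guest_gva_to_gpa":
        # Fault while translating GVA -> GPA: handled inside the VM.
        return "the virtual machine updates the page table pointed to by its CR3"
    if origin == "nested_gpa_to_hpa":
        # Nested fault while translating GPA -> HPA: CPU must notify the
        # secure processor, which modifies the nested page table's PTEs.
        return ("CPU 103 notifies secure processor 101, which updates the "
                "nested page table pointed to by nCR3 (its PTEs)")
    raise ValueError(f"unknown fault origin: {origin}")
```

Only the third case crosses the CPU/secure-processor boundary, which is why the text below restricts "page fault" to nested page faults.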
However, when virtual machine processes take nested page faults frequently, the time overhead of communication between the secure processor 101 and the CPU 103 constrains memory-access performance. In the following, unless otherwise indicated, "page fault" refers to a "nested page fault" requiring communication between the CPU and the secure processor as described above.
Fig. 2 shows a flow diagram of a method for allocating secure memory by the secure processor 201, according to an embodiment of the invention.
Referring to fig. 2, the host CPU 200 first learns that a page fault has occurred in step S200, and packages information related to the page fault into a message, for example, a message having a certain format for transmission. Then, the secure processor 201 receives the message from the CPU 200 in step S201, and allocates the secure memory to the memory page according to the information about the page fault included in the message in the subsequent step S202.
In some embodiments, the page fault information may include information about a memory page in which the page fault occurred (hereinafter referred to as "first memory page"). In some embodiments, the information related to the first memory page includes at least a page address (GPA) and a page size thereof. The invention does not limit the unit and magnitude of the page size, and does not further limit the content of the related information of the first memory page.
In some embodiments, the page fault information may further include information about other memory pages (hereinafter referred to as "second memory pages") different from the memory page in which the page fault occurred. For example, the information about the second memory page may include at least its page address and page size. For example, the number of second memory pages may be one or more.
It should be noted that "first memory page" and "second memory page" merely distinguish two distinct memory pages in the description; the distinction may be, as described above, whether a page fault has occurred. This distinction is merely exemplary, however, and the present invention is not limited thereto.
In some implementations, the second memory page is defined as being associated with the first memory page.
In some examples, the second memory page may be physically associated with the first memory page. For example, the GPA of the second memory page is contiguous with the GPA of the first memory page. If the second memory page is a single memory page, its GPA may come immediately before or after the GPA of the first memory page: if the GPA of the first memory page is add, the GPA of the second memory page may be add+1 or add-1, where add may be an 8-bit or 16-bit address represented in binary, which the present invention does not limit. The second memory page may also be a plurality of memory pages, in which case the GPA of the first memory page, together with the GPAs of the second memory pages, forms a contiguous segment of physical addresses (memory space). For example, the contiguous segment formed may be represented as [ram_start, ram_end]. In this case, if the address of the first memory page is denoted GPA1, then the addresses in [ram_start, ram_end] other than GPA1 are termed second memory pages. Furthermore, GPA1 may equal ram_start, may equal ram_end, or may be any address within [ram_start, ram_end] (inclusive), e.g., the middle address of the range (if [ram_start, ram_end] contains an odd number of addresses). However, the invention is not limited in this regard.
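A small illustrative helper for the [ram_start, ram_end] example above: given the faulting page GPA1 anywhere inside the contiguous interval, the second memory pages are all the other page addresses of that interval. Page-granular addresses and the helper name are assumptions.

```python
PAGE = 0x1000  # 4 KB page granularity

def second_pages(gpa1, ram_start, ram_end):
    """All page addresses of [ram_start, ram_end] except the faulting GPA1."""
    assert ram_start <= gpa1 <= ram_end
    return [a for a in range(ram_start, ram_end + PAGE, PAGE) if a != gpa1]

# GPA1 may sit at the start, in the middle, or at the end of the interval:
assert second_pages(0x1000, 0x1000, 0x3000) == [0x2000, 0x3000]
assert second_pages(0x2000, 0x1000, 0x3000) == [0x1000, 0x3000]
assert second_pages(0x3000, 0x1000, 0x3000) == [0x1000, 0x2000]
```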
In some examples, the second memory page may be associated with the first memory page in content. For example, the second memory page may belong to the same process as the first memory page, or to a related process. For example, after the process (or portion of a process) corresponding to the first memory page finishes running, the process (or portion) that needs to run next corresponds to the second memory page. In this regard, any suitable modification may be made to the present invention to determine the second memory pages associated with the first memory page. Moreover, by taking content association into account, the choice of second memory pages may be more scattered (more discrete, i.e., without requiring the addresses of the one or more second memory pages to be contiguous). For example, a weight or score may be assigned to each memory page according to the relevance of its content, so that the weight or score can be considered when selecting the addresses of second memory pages: a second memory page with the same or a similar weight or score as the first memory page may be selected, or the second memory page with the highest weight or score may be selected. A mechanism for updating or modifying the weights or scores may also be implemented alongside the weight/score mechanism. However, the present invention is not limited thereto.
In this regard, considering that selecting the second memory page and notifying the second memory page also requires communication overhead (e.g., possibly overhead on the CPU), in the following embodiments according to the present invention, the association logic of the second memory page with the first memory page is described as an association of physical addresses, e.g., the physical addresses of the first memory page and the second memory page are consecutive as described above. However, the present invention is not limited thereto.
According to the embodiment of the invention, the security processor can respond to the page fault, not only allocate the security memory to the memory page with the page fault, but also pre-allocate the security memory to the memory page without the page fault. The scheme for allocating the secure memory can more flexibly schedule the process on the virtual machine, reduce the probability of page fault exception, reduce the communication between the secure processor 201 and the CPU 200, reduce the communication overhead and improve the memory access performance.
Referring next to fig. 3, 4 and 5A-5D, for purposes of better understanding the present invention, a schematic diagram of the process of translating virtual machine physical addresses (GPA) by the MMU through the nCR3 maintained by the secure processor and the associated data structures are shown and briefly described. Of course, these descriptions are merely exemplary, and the specific translation process and data structure of the nested page tables may vary depending on the specific implementation of the nested page tables, and the present invention is not limited thereto.
In some embodiments, the nested page tables may consist of 3-level or 4-level page tables, with correspondingly different page sizes. For example, in 64-bit long mode: the page size may be 4 KB, corresponding to a 4-level page table (as shown in FIG. 3); alternatively, the page size may be 2 MB, corresponding to a 3-level page table (as shown in FIG. 4).
FIG. 3 shows a schematic diagram of a process of translating virtual machine physical addresses (GPAs), according to an embodiment of the invention.
As shown in fig. 3, the 4-Level Page Table may include a 4-Level Page Map (PML4) Table, a Page Directory Pointer (PDP) Table, a Page Directory (PD) Table, a Page Table (PT), and a physical Page. The PML4 Table consists of PML4 Table entries (PML4 Table Entry, PML4E), the PDP Table consists of PDP Table entries (PDP Table Entry, PDPE), the PD Table consists of PD Table entries (PD Table Entry, PDE), and the PT consists of PT entries (PT Entry, PTE).
As shown in FIG. 3, in this example, the virtual machine physical address (GPA) is 64 bits, comprising a sign extension (bits 63-48), a 4-level page map (PML4) offset (bits 47-39), a page directory pointer offset (bits 38-30), a page directory offset (bits 29-21), a page table offset (bits 20-12), and a physical page offset (bits 11-0).
Referring to FIG. 3, when a GPA of a process generates a page fault, corresponding page fault information is sent to the secure processor, and the secure processor typically performs the following steps:
1. The nested page table root directory, the PML4 table, is found through nCR3, which contains the 4-level page map base address (Base Address).
2. Using bits 47-39 of the GPA as an index, look up the page table entry PML4E in the PML4 table, and continue to the PDP table according to the PML4E found. If the PDP table does not exist, allocate a 4k page in the secure memory to serve as the PDP table; otherwise, proceed to the next step.
3. Using bits 38-30 of the GPA as an index, look up the page table entry PDPE in the PDP table, and continue to the PD table according to the PDPE found. If the PD table does not exist, allocate a 4k page in the secure memory as the PD table; otherwise, proceed to the next step.
4. Using bits 29-21 of the GPA as an index, look up the page table entry PDE in the PD table, and continue to the PT according to the PDE found. If the PT does not exist, allocate a 4k page in the secure memory as the PT; otherwise, proceed to the next step.
5. Using bits 20-12 of the GPA as an index, look up the page table entry PTE in the PT, allocate a 4k secure memory page, and fill the physical address of the allocated secure memory page (as shown in FIG. 3) into the PTE, i.e., update the PTE.
Thus, by having the MMU consult the nested page table, the CPU obtains a secure memory page (i.e., an HPA), and then uses bits 11-0 of the GPA as the page offset (i.e., the offset within the page) to obtain the final physical address.
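The field extraction performed in the steps above can be sketched in a few lines. The helper names below are illustrative; the actual walk is performed by the MMU and secure processor, not by software like this:

```python
def gpa_indices_4k(gpa: int) -> dict:
    """Split a guest physical address (GPA) into the index fields used
    by the 4-level nested page table walk with 4k pages."""
    return {
        "pml4":   (gpa >> 39) & 0x1FF,  # bits 47-39: index into PML4 table
        "pdp":    (gpa >> 30) & 0x1FF,  # bits 38-30: index into PDP table
        "pd":     (gpa >> 21) & 0x1FF,  # bits 29-21: index into PD table
        "pt":     (gpa >> 12) & 0x1FF,  # bits 20-12: index into PT
        "offset": gpa & 0xFFF,          # bits 11-0: offset within the 4k page
    }

def final_hpa_4k(page_base: int, gpa: int) -> int:
    """Combine the secure memory page base taken from the PTE with the
    in-page offset of the GPA to form the final physical address."""
    return page_base | (gpa & 0xFFF)

gpa = (3 << 39) | (2 << 30) | (1 << 21) | (7 << 12) | 0x2A
print(gpa_indices_4k(gpa))
```

Each index is 9 bits wide (mask `0x1FF`), so every table level holds 512 entries.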
FIG. 4 shows another schematic diagram of a process to translate virtual machine physical addresses (GPAs), according to an embodiment of the invention.
As shown in fig. 4, the 3-level page table may include a 4-level page map (PML4) table, a Page Directory Pointer (PDP) table, a Page Directory (PD) table, and a physical page. Wherein, the PML4 table is composed of PML4 table entries (PML4E), the PDP table is composed of PDP table entries (PDPE), and the PD table is composed of PD table entries (PDE).
As shown in FIG. 4, in this example, the virtual machine physical address (GPA) is 64 bits, comprising a sign extension (bits 63-48), a 4-level page map (PML4) offset (bits 47-39), a page directory pointer offset (bits 38-30), a page directory offset (bits 29-21), and a physical page offset (bits 20-0).
Referring to FIG. 4, when a GPA of a process generates a page fault, corresponding page fault information is sent to the secure processor, and the secure processor typically performs the following steps:
1. The nested page table root directory, the PML4 table, is found through nCR3, which contains the 4-level page map base address.
2. Using bits 47-39 of the GPA as an index, look up the page table entry PML4E in the PML4 table, and continue to the PDP table according to the PML4E found. If the PDP table does not exist, allocate a 4k page in the secure memory to serve as the PDP table; otherwise, proceed to the next step.
3. Using bits 38-30 of the GPA as an index, look up the page table entry PDPE in the PDP table, and continue to the PD table according to the PDPE found. If the PD table does not exist, allocate a 4k page in the secure memory as the PD table; otherwise, proceed to the next step.
4. Using bits 29-21 of the GPA as an index, look up the page table entry PDE in the PD table, allocate a 2M secure memory page, and fill the physical address of the allocated secure memory page (as shown in FIG. 4) into the PDE, i.e., update the PDE.
Thus, by having the MMU consult the nested page table, the CPU obtains a secure memory page (i.e., an HPA), and then uses bits 20-0 of the GPA as the page offset (i.e., the offset within the page) to obtain the final physical address.
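The 2M-page decomposition described above drops the PT level and widens the in-page offset to bits 20-0. A brief illustrative sketch:

```python
def gpa_indices_2m(gpa: int) -> dict:
    """Split a GPA into the index fields used by the 3-level nested page
    table walk with 2M pages; the PT level is absent and bits 20-0 form
    the in-page offset."""
    return {
        "pml4":   (gpa >> 39) & 0x1FF,   # bits 47-39: index into PML4 table
        "pdp":    (gpa >> 30) & 0x1FF,   # bits 38-30: index into PDP table
        "pd":     (gpa >> 21) & 0x1FF,   # bits 29-21: index into PD table
        "offset": gpa & 0x1FFFFF,        # bits 20-0: offset within the 2M page
    }

print(gpa_indices_2m((1 << 39) | (5 << 21) | 0x1234))
```

Since 21 offset bits cover exactly 2 MiB, the PDE here points directly at a 2M secure memory page rather than at a PT.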
As shown in FIGS. 3 and 4 above and FIGS. 5A-5D below, each page table entry may include at least a base address (e.g., bits 51-12); the remaining non-base-address bits may relate to attributes, such as an R/W (Read/Write) bit, a U/S (User/Supervisor) bit, and a PCD (Page Cache Disable) bit, among others.
Fig. 5A-5D show schematic diagrams of data structures for the processes of fig. 3 and/or 4, according to embodiments of the invention.
FIG. 5A shows a 4k-level page map entry (PML4E) in long mode. A typical PML4E may include an NX (No-eXecute) bit (indicating whether the page is non-executable), available bits (reserved for use by programs), the page directory pointer base address, MBZ (Must Be Zero) bits (reserved), IGN (IGNore) bits (ignorable), an A (Access) bit (indicating whether the page has been accessed), a PCD (Page Cache Disable) bit (indicating whether caching is disabled), a PWT (Page Write-Through) bit (indicating whether data written to the cache is also written to memory), a U/S bit (indicating user/supervisor privilege), an R/W bit (indicating read/write permission), a P (Present) bit (indicating whether the page table entry is valid for address translation), etc.
However, this reflects only the architectural definition; a given processor implementation may support fewer bits, i.e., the invention is not limited thereto.
FIG. 5B shows a 4k Page Directory Pointer Entry (PDPE) in long mode.
A typical PDPE may include an NX (No-eXecute) bit (indicating whether the page is non-executable), available bits (reserved for use by programs), the page directory base address, IGN (IGNore) bits (ignorable), 0 bits (reserved), an A (Access) bit (indicating whether the page has been accessed), a PCD (Page Cache Disable) bit (indicating whether caching is disabled), a PWT (Page Write-Through) bit (indicating whether data written to the cache is also written to memory), a U/S bit (indicating user/supervisor privilege), an R/W bit (indicating read/write permission), a P (Present) bit (indicating whether the page table entry is valid for address translation), and so forth.
However, this reflects only the architectural definition; a given processor implementation may support fewer bits, i.e., the invention is not limited thereto.
FIG. 5C shows a 4k Page Directory Entry (PDE) in long mode.
A typical PDE may include an NX (No-eXecute) bit (indicating whether the page is non-executable), available bits (reserved for use by programs), the page table base address, IGN (IGNore) bits (ignorable), 0 bits (reserved), an A (Access) bit (indicating whether the page has been accessed), a PCD (Page Cache Disable) bit (indicating whether caching is disabled), a PWT (Page Write-Through) bit (indicating whether data written to the cache is also written to memory), a U/S bit (indicating user/supervisor privilege), an R/W bit (indicating read/write permission), a P (Present) bit (indicating whether the page table entry is valid for address translation), and so forth.
However, this reflects only the architectural definition; a given processor implementation may support fewer bits, i.e., the invention is not limited thereto.
FIG. 5D shows a 4k Page Table Entry (PTE) in long mode.
A typical PTE may include an NX (No-eXecute) bit (indicating whether the page is non-executable), available bits (reserved for use by programs), the physical page base address, a G (Global) bit (indicating whether the page's translation is always kept in the TLB (Translation Lookaside Buffer)), a PAT (Page Attribute Table) bit (setting memory attributes at page-level granularity), a D (Dirty) bit (indicating whether the page pointed to by the entry has been written), an A (Access) bit (indicating whether the page has been accessed), a PCD (Page Cache Disable) bit (indicating whether caching is disabled), a PWT (Page Write-Through) bit (indicating whether data written to the cache is also written to memory), a U/S bit (indicating user/supervisor privilege), an R/W bit (indicating read/write permission), a P (Present) bit (indicating whether the page table entry is valid for address translation), etc.
However, this reflects only the architectural definition; a given processor implementation may support fewer bits, i.e., the invention is not limited thereto.
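The entry layouts of FIGS. 5A-5D can be summarized with a few flag masks. The sketch below uses the conventional long-mode bit positions; consult the processor manual for the authoritative layout of a given implementation:

```python
# Flag bit positions in a long-mode page table entry, per the layout
# sketched in FIGS. 5A-5D (illustrative; not a full entry definition).
P    = 1 << 0    # Present: entry valid for address translation
RW   = 1 << 1    # Read/Write permission
US   = 1 << 2    # User/Supervisor privilege
PWT  = 1 << 3    # Page Write-Through
PCD  = 1 << 4    # Page Cache Disable
A    = 1 << 5    # Accessed
D    = 1 << 6    # Dirty (PTE only)
NX   = 1 << 63   # No-eXecute
BASE_MASK = 0x000FFFFFFFFFF000   # bits 51-12: base address

def make_entry(base: int, flags: int) -> int:
    """Build an entry from a 4k-aligned base address and flag bits."""
    return (base & BASE_MASK) | flags

def entry_base(entry: int) -> int:
    """Recover the base address (bits 51-12) from an entry."""
    return entry & BASE_MASK

pte = make_entry(0x0012_3000, P | RW)   # present, writable 4k page
```

Updating a PTE, as in step 5 of the walk, amounts to writing `make_entry(allocated_page, P | RW | ...)` into the indexed slot.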
Returning to fig. 2, in conjunction with the various embodiments of fig. 3, 4, and 5A-5D, the method of fig. 2 may be further implemented for allocating secure memory by a secure processor after receiving a message containing page fault information.
FIG. 6 illustrates a flow diagram of a method for pre-allocating secure memory by a secure processor, according to an embodiment of the invention.
After the secure processor receives the message containing the page fault information from the CPU, the following steps may be implemented:
In step S601, the secure processor may receive a message containing page fault information; for example, the page fault information may be extracted from the message.
In step S602, the secure processor may allocate secure memory for the first memory page according to the page fault information, i.e., for the memory page in which the page fault occurred.
In step S603, the secure processor may pre-allocate secure memory for one or more second memory pages according to the page fault information, e.g., for subsequent running of the process.
In some embodiments, the physical addresses of the secure memory allocated by the secure processor for the first memory page and the one or more second memory pages may be contiguous. FIG. 7 is a diagram illustrating the physical addresses of a segment of secure memory allocated according to an embodiment of the invention. As shown in FIG. 7, a segment of the secure memory address space from smr_start to smr_end contains 4 allocated secure memory pages: the secure memory allocated for the first memory page, i.e., allocated normally (e.g., in response to a page fault), corresponds to the second physical address in the segment, and the secure memory pre-allocated for the 3 second memory pages corresponds to the first, third, and fourth physical addresses in the segment. Keeping the physical addresses of the allocated secure memory contiguous can, to some extent, improve response speed; however, the present invention is not limited thereto. For example, if the remaining free space is itself discrete/discontinuous, the physical addresses of the secure memory allocated for the memory pages may also be discrete/discontinuous.
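The preference for contiguous physical addresses, with a discontiguous fallback, can be sketched as follows (the sorted free-frame list and the fallback policy are assumptions of this sketch, not the patented method itself):

```python
def pick_frames(free_frames, count):
    """Prefer `count` physically contiguous frames from the free list for
    the first memory page plus the pre-allocated second pages; fall back
    to discontiguous frames when no contiguous run is available."""
    frames = sorted(free_frames)
    for i in range(len(frames) - count + 1):
        window = frames[i:i + count]
        if window[-1] - window[0] == count - 1:   # consecutive frame numbers
            return window
    return frames[:count]   # free space itself is discontinuous

# 1 faulting (first) page + 3 pre-allocated (second) pages -> 4 frames
print(pick_frames([5, 7, 8, 9, 10, 20], 4))   # -> [7, 8, 9, 10]
```

When no run of 4 consecutive frames exists, the function simply returns the 4 lowest free frames, mirroring the "discrete/discontinuous" case described above.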
FIG. 8 illustrates another flow diagram of a method for pre-allocating secure memory by a secure processor, according to an embodiment of the invention.
After the secure processor receives the message containing the page fault information from the CPU, the following steps may be implemented:
In step S801, the secure processor may receive a message containing page fault information; for example, the page fault information may be extracted from the message.
In step S802, the secure processor may allocate secure memory for the first memory page according to the page fault information, i.e., for the memory page in which the page fault occurred.
In step S803, the secure processor may reclaim a portion of the allocated secure memory according to the free space of the secure memory. In other words, before considering pre-allocating secure memory for memory pages in which no page fault has occurred (e.g., the second memory pages), the secure processor may, for example, check whether the free space of the secure memory meets the conditions for pre-allocation, and reclaim allocated secure memory as appropriate so that sufficient free secure memory is available for pre-allocating the second memory pages. In some examples, the secure processor may reclaim secure memory that was previously allocated but is rarely used, e.g., by selecting the memory to reclaim through a Least Frequently Used (LFU) algorithm. In some examples, the secure processor may instead select the memory to reclaim through a Least Recently Used (LRU) algorithm. However, the present invention is not limited thereto. For example, the secure processor may reclaim allocated secure memory by appropriately modifying the corresponding entries in the nested page table pointed to by nCR3.
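As one concrete reading of the LRU option above, allocated pages can be kept in last-use order and the oldest handed back first. The in-memory bookkeeping shown here is an assumption of the sketch; a real secure processor would also update the corresponding nested page table entries:

```python
from collections import OrderedDict

class LruReclaimer:
    """Track allocated secure memory pages in last-use order and reclaim
    the least recently used pages first."""

    def __init__(self):
        self._pages = OrderedDict()   # page address -> None, oldest first

    def on_access(self, page: int) -> None:
        """Record an allocation or access, making `page` the most recent."""
        self._pages[page] = None
        self._pages.move_to_end(page)

    def reclaim(self, count: int) -> list:
        """Pop up to `count` least recently used pages for reclamation."""
        victims = []
        while self._pages and len(victims) < count:
            page, _ = self._pages.popitem(last=False)
            victims.append(page)
        return victims

r = LruReclaimer()
for p in (0x1000, 0x2000, 0x3000):
    r.on_access(p)
r.on_access(0x1000)        # 0x1000 becomes the most recently used
print(r.reclaim(2))        # least recently used pages come back first
```

An LFU variant would instead keep a use counter per page and evict the pages with the smallest counts.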
In step S804, the secure processor may pre-allocate secure memory for one or more second memory pages according to the page fault information, e.g., for subsequent running of the process. Because step S803 has been executed, the secure processor can better guarantee the pre-allocation of the second memory pages; in other words, the success rate of pre-allocation is increased.
FIG. 9 illustrates yet another flow diagram of a method for pre-allocating secure memory by a secure processor, in accordance with an embodiment of the present invention.
After the secure processor receives the message containing the page fault information from the CPU, the following steps may be implemented:
In step S901, the secure processor may allocate secure memory for the first memory page according to the page fault information, i.e., for the memory page in which the page fault occurred.
In step S902, the secure processor may check the free space of the secure memory, i.e., the portion of the secure memory that has not been allocated to the memory page, to determine whether the free space is greater than a first threshold (e.g., expressed as a percentage).
If the free space is greater than the first threshold, the secure processor may check in step S903 whether the page fault count satisfies a predetermined condition, which is described in more detail with reference to FIG. 10. If the secure processor determines that the page fault count satisfies the predetermined condition, the process may proceed to step S904 to check and reclaim part, or even all, of the allocated secure memory, although the present invention is not limited thereto. Conversely, if the secure processor determines that the page fault count does not satisfy the predetermined condition, the process may proceed to step S905.
If it is determined in step S902 that the free space is less than or equal to the first threshold, the secure processor may proceed to step S904, checking and reclaiming a portion of the allocated secure memory.
In step S905, after checking and reclaiming a portion of the allocated secure memory, the secure processor may check the free space of the secure memory again to determine whether it is greater than the first threshold (e.g., expressed as a percentage). If the free space is greater than the first threshold, secure memory is pre-allocated for one or more second memory pages according to the page fault information, i.e., pre-allocation is performed. Otherwise, if the free space is less than or equal to the first threshold, the method ends. In other words, when the free space is still insufficient for pre-allocation after a portion of the secure memory has been checked and reclaimed, pre-allocation is abandoned and secure memory is allocated only to the memory page in which the page fault occurred (i.e., the first memory page).
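The S902-S905 decision flow can be condensed into a small function. The fractional free-space threshold, the `reclaimable` model of how much space a check-and-reclaim pass frees, and the integer-multiple form of the predetermined condition are illustrative assumptions of this sketch:

```python
def after_fault_flow(free_space, threshold, fault_count, period, reclaimable=0):
    """Sketch of the FIG. 9 decision flow, run after the normal allocation
    for the faulting page. Returns the list of actions taken."""
    actions = []
    if free_space > threshold:                 # S902: enough free space?
        if fault_count % period == 0:          # S903: predetermined condition
            actions.append("reclaim")          # S904: check and reclaim
            free_space += reclaimable
    else:
        actions.append("reclaim")              # S904: check and reclaim
        free_space += reclaimable
    if free_space > threshold:                 # S905: re-check free space
        actions.append("preallocate")          # pre-allocate second pages
    return actions

print(after_fault_flow(0.10, 0.20, fault_count=8, period=4, reclaimable=0.05))
# -> ['reclaim'] : still too little free space, so pre-allocation is abandoned
```

Note how the re-check in S905 uses the post-reclamation free space, so reclamation can turn an otherwise abandoned pre-allocation into a successful one.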
It should be noted that the first threshold used in S902 and the first threshold used in S905 are shown as the same, but this is merely illustrative, and the present invention is not limited thereto. Further, the first threshold may be set empirically, and may also be associated with the size of the second memory pages to be pre-allocated, so that the memory reclamation operation helps ensure that the free space is sufficient for pre-allocation. However, the present invention is not limited thereto.
FIG. 10 illustrates a schematic diagram of opportunities for checking and reclaiming allocated pages according to an embodiment of the invention.
As shown in fig. 10, t(m), t(…), t(m+n) schematically indicate respective times at which page faults occur. In some embodiments, when a page fault occurs at t(m), the secure processor may check and reclaim the physical addresses of the allocated secure memory, e.g., as shown in fig. 7. At each time between t(m) and t(m+n), for example at the time shown as t(…), although the secure processor is notified of the page fault, it neither checks and reclaims the allocated secure memory nor performs pre-allocation, but only performs normal allocation. When a page fault occurs again at t(m+n), the secure processor may again check and reclaim the physical addresses of the allocated secure memory, e.g., as shown in fig. 7.
In some embodiments, it may be checked whether the time interval tn between t(m) and t(m+n), or the count difference n, satisfies a predetermined condition. In other words, the secure processor checks and reclaims the allocated pages when the time interval tn between t(m) and t(m+n), or the count difference n, satisfies the predetermined condition. In this way, the allocated pages are checked and reclaimed at a controlled frequency, so that checking and reclamation do not occur so often as to reduce the response speed (e.g., average response time) for page faults.
In some examples, a timer may be set for tn, or a threshold may be set for n. When the timer for tn expires or n reaches its threshold, the secure processor may check and reclaim allocated pages. Both the timer for tn and the threshold for n may be configurable, including dynamically configurable. In this way, a dynamic balance of system performance among factors (e.g., page fault response time, secure processor communication overhead, etc.) can be achieved. Herein, the timing for checking and reclaiming allocated pages is illustrated only by the page fault count (i.e., n). However, the present invention is not limited thereto.
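Counting page faults up to a configurable threshold n, as illustrated in FIG. 10, might look like the following (names are illustrative):

```python
class ReclaimTrigger:
    """Fire a check-and-reclaim pass every n-th page fault, as in FIG. 10
    (reclaim at t(m), then not again until t(m+n)). The threshold n is a
    plain attribute, so it can be reconfigured dynamically."""

    def __init__(self, n: int):
        self.n = n
        self._count = 0

    def on_fault(self) -> bool:
        """Called on each page fault; returns True when reclamation is due."""
        self._count += 1
        if self._count >= self.n:
            self._count = 0
            return True
        return False

trig = ReclaimTrigger(n=3)
print([trig.on_fault() for _ in range(7)])
# -> [False, False, True, False, False, True, False]
```

A timer-based variant for tn would replace the counter with a deadline check; the control structure is otherwise the same.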
FIG. 11 illustrates a further flowchart of a method for pre-allocating secure memory by a secure processor, further supplementing the method of FIG. 10, in accordance with an embodiment of the present invention.
After the secure processor receives the message containing the page fault information from the CPU, the following steps may be implemented:
In step S1101, the secure processor may allocate secure memory for the first memory page according to the page fault information, i.e., for the memory page in which the page fault occurred.
In step S1102, the secure processor may check the free space of the secure memory, i.e., the portion of the secure memory not yet allocated to memory pages, to determine whether the free space is greater than a first threshold, which may be expressed, for example, as a percentage or a numerical value (denoted here as low); however, the invention is not limited thereto.
If it is determined in step S1102 that the free space is greater than the first threshold (low), the secure processor may set a reclamation period, e.g., a reclamation period measured in the page fault count n. Specifically, the secure processor sets the reclamation period to a first reclamation period (denoted as N_low) in step S1103. Conversely, if it is determined in step S1102 that the free space is less than or equal to the first threshold (low), the process proceeds to step S1109, where a portion of the allocated secure memory is checked and reclaimed.
If the reclamation period has been set to the first reclamation period (N_low) in step S1103, the secure processor may further determine in step S1104 whether the free space is greater than a second threshold (denoted as mid); likewise, the representation of the second threshold is not limited thereto. If it is determined in step S1104 that the free space is greater than the second threshold (mid), the reclamation period may be set to a second reclamation period (denoted as N_mid) in step S1105. Conversely, if it is determined in step S1104 that the free space is less than or equal to the second threshold (mid), the process proceeds to step S1108 to determine whether the page fault count satisfies the predetermined condition.
If the reclamation period has been set to the second reclamation period (N_mid) in step S1105, the secure processor may further determine in step S1106 whether the free space is greater than a third threshold (denoted as high); likewise, the representation of the third threshold is not limited thereto. If it is determined in step S1106 that the free space is greater than the third threshold (high), the reclamation period may be set to a third reclamation period (denoted as N_high) in step S1107. Conversely, if it is determined in step S1106 that the free space is less than or equal to the third threshold (high), the process proceeds to step S1108 to determine whether the page fault count satisfies the predetermined condition.
Of course, the third threshold is greater than the second threshold, which is greater than the first threshold.
As shown in fig. 11, steps S1102 to S1107 set (or do not set) the reclamation period according to the size of the free space, so that stepwise adjustment of the reclamation period can track continuous changes in the free space. However, the present invention is not limited thereto. For example, more or fewer than three thresholds and comparisons may be used, and the above steps may also be implemented by a lookup table or the like instead of the repeated comparisons, judgments, and settings of steps S1102 to S1107.
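The threshold ladder of steps S1102-S1107 amounts to a small mapping from free space to reclamation period. The threshold and period values used in the example call below are illustrative:

```python
def select_reclaim_period(free_space, low, mid, high, n_low, n_mid, n_high):
    """Map the current free space to a reclamation period following steps
    S1102-S1107. Returns None when free space <= low, in which case
    pre-allocation is skipped and reclamation is performed immediately
    (step S1109)."""
    if free_space <= low:
        return None        # S1109: reclaim now, no period applies
    if free_space <= mid:
        return n_low       # low  < free space <= mid
    if free_space <= high:
        return n_mid       # mid  < free space <= high
    return n_high          # free space > high

# Example with illustrative percentage thresholds and fault-count periods
print(select_reclaim_period(0.55, low=0.2, mid=0.5, high=0.8,
                            n_low=4, n_mid=16, n_high=64))   # -> 16
```

With N_low < N_mid < N_high, larger free space maps to a longer period, i.e., to less frequent reclamation, matching the ordering described for FIG. 12.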
Referring to fig. 9 and 11, steps S903 to S906 are similar to steps S1108 to S1111, and description thereof is omitted.
FIG. 12 shows a schematic diagram for setting a recycling period, according to an embodiment of the invention.
As shown in fig. 12, the minimum value of the SMR capacity is 0, and the maximum value is max. FIG. 12 also shows three thresholds low, mid, and high, corresponding respectively to the first, second, and third thresholds mentioned in FIG. 11. Similar to the method of FIG. 11: when the free SMR capacity is between 0 and low, pre-allocation is stopped; when it is between low and mid, the secure memory is checked and reclaimed with reclamation period N_low; when it is between mid and high, with reclamation period N_mid; and when it is between high and max, with reclamation period N_high. In some embodiments, the magnitude relationship of N_low, N_mid, and N_high may be N_low < N_mid < N_high, i.e., corresponding to the magnitude relationship between low, mid, and high.
As described hereinbefore, fig. 12 shows only one example of threshold settings for the reclamation period; the settings of the respective thresholds and periods of the present invention are not limited thereto. In this way, more reasonable memory reclamation can be achieved: when the free space is large, the reclamation period is increased, so that the reclamation frequency decreases, saving the system resources and time spent on reclamation operations; and when the free space is small, the reclamation period is decreased, so that the reclamation frequency increases, improving the probability that pre-allocation succeeds.
According to the various embodiments of the present invention, the secure processor may respond to a page fault by not only allocating secure memory for the memory page in which the page fault occurred, but also pre-allocating secure memory for memory pages in which no page fault has occurred. This allocation scheme allows processes on the virtual machine to be scheduled more flexibly, reduces the probability of page fault exceptions, reduces communication between the secure processor and the CPU, lowers communication overhead, and improves memory access performance. Moreover, allocated memory can be reclaimed according to predetermined rules to ensure that pre-allocation can proceed smoothly.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other.
In several embodiments provided herein, it will be understood that each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk. It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present disclosure and is not intended to limit the present disclosure, and various modifications and changes may be made to the present disclosure by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The above description is only for the specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present disclosure, and all the changes or substitutions should be covered within the scope of the present disclosure. Therefore, the protection scope of the present disclosure should be subject to the protection scope of the appended claims and their equivalents.

Claims (17)

1. A method of managing secure memory by a secure processor, wherein the secure processor is distinct and separate from a CPU comprised by a host, the host comprising secure memory, and a virtual machine running on the host, the method comprising:
receiving a message containing page fault information from the CPU;
allocating secure memory for the first memory page according to the page fault information in the received message; and
pre-allocating secure memory for one or more second memory pages according to the page fault information in the received message,
wherein the method further comprises reclaiming a portion of the allocated secure memory according to the free space of the secure memory, wherein a reclamation period is set according to the size of the free space of the secure memory.
2. The method of claim 1, wherein the page fault information includes a page address and a size of the first memory page.
3. The method of claim 2, wherein the page fault information further includes a page address of the one or more second memory pages.
4. The method according to any of claims 1-3, wherein the first memory page is a memory page in which a page fault has occurred, and the second memory page is a memory page in which no page fault has occurred.
5. The method according to any of claims 1-3, wherein the address of the secure memory allocated for the first memory page and the address of the secure memory allocated for the second memory page are a contiguous segment of addresses.
6. The method according to claim 5, wherein the address of the secure memory allocated for the first memory page is at the beginning, any middle position, or the end of the contiguous segment of addresses.
7. The method according to any of claims 1-3, wherein a first memory page is associated with the one or more second memory pages.
8. The method of claim 7, wherein the page addresses of the first memory page and the one or more second memory pages are contiguous.
9. The method of claim 7, wherein the contents of a first memory page and the one or more second memory pages are associated.
10. The method of claim 1, wherein reclaiming the allocated portion of the secure memory according to the free space of the secure memory comprises:
if the free space of the secure memory is less than or equal to a first threshold, checking and reclaiming the allocated portion of the secure memory; and
if the free space of the secure memory is greater than the first threshold, checking and reclaiming the allocated portion of the secure memory when the count of page faults satisfies a condition associated with a reclamation period.
11. The method of claim 10, wherein after checking and reclaiming the portion of allocated secure memory:
if the free space of the secure memory is larger than a first threshold value, allocating the secure memory to the one or more second memory pages; or
If the free space of the secure memory is less than or equal to the first threshold, the secure memory is not allocated to the one or more second memory pages.
12. The method of claim 10, wherein,
when the free space of the secure memory is larger than a first threshold value and smaller than or equal to a second threshold value, setting the recovery period as a first recovery period;
when the free space of the secure memory is larger than the second threshold value and smaller than or equal to a third threshold value, setting the recycling period as a second recycling period,
setting the recycle cycle as a third recycle cycle when the free space of the secure memory is larger than a third threshold value,
wherein the third threshold is greater than the second threshold, which is greater than the first threshold.
13. The method of claim 12, wherein the third recovery period is greater than the second recovery period, which is greater than the first recovery period.
14. The method of claim 10, wherein the count of page faults satisfying the condition associated with a recycle period is when the count of page faults is an integer multiple of the recycle period.
15. A system for managing secure memory, the system comprising:
a host comprising a CPU and secure memory, wherein a virtual machine runs on the host;
a secure processor distinct and separate from the CPU and configured to implement the method of any of claims 1-14.
16. A secure processor for managing secure memory, the secure processor being distinct and separate from a CPU on a host and configured to implement the method of any of claims 1-14.
17. A non-transitory computer-readable recording medium having recorded thereon program code for dynamically pre-allocating secure memory of a virtual machine, the program code, when executed by a computer, performing the method of any one of claims 1-14.
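The tiered reclamation policy of claims 10-14 can be sketched in a few lines of code. This is an illustrative reading of the claims only, not an implementation from the patent: the threshold values `T1 < T2 < T3` and the period values `P1 < P2 < P3`, as well as all function names, are hypothetical.

```python
# Hypothetical free-space thresholds (in pages) per claim 12, T1 < T2 < T3.
T1, T2, T3 = 64, 128, 256
# Hypothetical reclamation periods per claim 13, P1 < P2 < P3
# (larger free space -> longer period between reclamation checks).
P1, P2, P3 = 4, 8, 16


def reclamation_period(free_pages):
    """Claims 12-13: pick the reclamation period from the current free space.

    Only meaningful when free_pages > T1; below T1, claim 10 reclaims
    immediately regardless of the period.
    """
    if free_pages <= T2:
        return P1
    if free_pages <= T3:
        return P2
    return P3


def should_reclaim(free_pages, fault_count):
    """Claims 10 and 14: decide whether to check and reclaim allocated
    secure memory when a page fault arrives."""
    if free_pages <= T1:
        return True  # claim 10: free space at or below first threshold
    # claim 14: reclaim when the page-fault count is an integer
    # multiple of the current reclamation period.
    return fault_count % reclamation_period(free_pages) == 0
```

Under this reading, a nearly full secure memory is checked on every fault, while a mostly empty one is checked only every `P3` faults, which matches the claimed trade-off between reclamation overhead and allocation headroom.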
CN202010843623.0A 2020-08-20 2020-08-20 Method for managing secure memory, system, apparatus and storage medium therefor Active CN111984374B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010843623.0A CN111984374B (en) 2020-08-20 2020-08-20 Method for managing secure memory, system, apparatus and storage medium therefor

Publications (2)

Publication Number Publication Date
CN111984374A CN111984374A (en) 2020-11-24
CN111984374B true CN111984374B (en) 2021-07-23

Family

ID=73444184

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010843623.0A Active CN111984374B (en) 2020-08-20 2020-08-20 Method for managing secure memory, system, apparatus and storage medium therefor

Country Status (1)

Country Link
CN (1) CN111984374B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113342711B (en) * 2021-06-28 2024-02-09 海光信息技术股份有限公司 Page table updating method and device and related equipment
CN114201752B (en) * 2021-11-29 2022-10-18 海光信息技术股份有限公司 Page table management method and device for security isolation virtual machine and related equipment
CN115269188A (en) * 2022-07-28 2022-11-01 江苏安超云软件有限公司 Virtual machine intelligent memory recovery method and device, electronic equipment and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101158924A (en) * 2007-11-27 2008-04-09 北京大学 Dynamic EMS memory mappings method of virtual machine manager
CN101620573A (en) * 2009-07-03 2010-01-06 中国人民解放军国防科学技术大学 Virtualization method of memory management unit of X86 system structure
CN101739346A (en) * 2009-12-04 2010-06-16 北京工业大学 Method for carrying out centralized control on internal memory of safety control module
CN106445835A (en) * 2015-08-10 2017-02-22 北京忆恒创源科技有限公司 Memory allocation method and apparatus
CN109725983A (en) * 2018-11-22 2019-05-07 海光信息技术有限公司 A kind of method for interchanging data, device, relevant device and system
CN109766164A (en) * 2018-11-22 2019-05-17 海光信息技术有限公司 A kind of access control method, EMS memory management process and relevant apparatus
CN109828827A (en) * 2018-11-22 2019-05-31 海光信息技术有限公司 A kind of detection method, device and relevant device
CN110955495A (en) * 2019-11-26 2020-04-03 网易(杭州)网络有限公司 Management method, device and storage medium of virtualized memory
CN111427804A (en) * 2020-03-12 2020-07-17 深圳震有科技股份有限公司 Method for reducing missing page interruption times, storage medium and intelligent terminal

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6038571A (en) * 1996-01-31 2000-03-14 Kabushiki Kaisha Toshiba Resource management method and apparatus for information processing system of multitasking facility
US7613921B2 (en) * 2005-05-13 2009-11-03 Intel Corporation Method and apparatus for remotely provisioning software-based security coprocessors
US8874961B2 (en) * 2010-03-22 2014-10-28 Infosys Limited Method and system for automatic failover of distributed query processing using distributed shared memory
CN102662864B (en) * 2012-03-29 2015-07-08 华为技术有限公司 Processing method, device and system of missing page abnormality
CN103607480B (en) * 2013-11-14 2017-02-08 华为技术有限公司 Method and apparatus for memory resource management, and single board
CN105988876B (en) * 2015-03-27 2019-09-17 杭州迪普科技股份有限公司 Memory allocation method and device
CN107704321A (en) * 2017-09-30 2018-02-16 北京元心科技有限公司 Memory allocation method and device and terminal equipment



Legal Events

Date Code Title Description
PB01 Publication
CB02 Change of applicant information

Address after: 300384 industrial incubation-3-8, North 2-204, No. 18, Haitai West Road, Huayuan Industrial Zone, Tianjin

Applicant after: Haiguang Information Technology Co., Ltd

Address before: 300384 industrial incubation-3-8, North 2-204, No. 18, Haitai West Road, Huayuan Industrial Zone, Tianjin

Applicant before: HAIGUANG INFORMATION TECHNOLOGY Co.,Ltd.

SE01 Entry into force of request for substantive examination
GR01 Patent grant