CN111966468A - Method, system, secure processor and storage medium for pass-through device


Info

Publication number
CN111966468A
Authority
CN
China
Prior art keywords
page
memory
secure
processor
virtual machine
Prior art date
Legal status
Granted
Application number
CN202010884939.4A
Other languages
Chinese (zh)
Other versions
CN111966468B (en)
Inventor
姜新
Current Assignee
Haiguang Information Technology Co Ltd
Original Assignee
Haiguang Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Haiguang Information Technology Co Ltd filed Critical Haiguang Information Technology Co Ltd
Priority to CN202010884939.4A
Publication of CN111966468A
Application granted
Publication of CN111966468B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 2009/45583 Memory management, e.g. access or allocation
    • G06F 2009/45587 Isolation or security of virtual machine instances

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Storage Device Security (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

A method, system, secure processor, and storage medium for a pass-through device, the method comprising: receiving a page fault address sent by a host in response to a nested page fault exception occurring in a secure virtual machine, and determining, by the secure processor, whether the page fault address belongs to the device memory of the pass-through device; and if the page fault address belongs to the device memory, setting, by the secure processor, the page fault address in the nested page table to be read and written in an unencrypted manner. The method implements the device pass-through function for the virtual machine, effectively avoids excessive intervention by the host operating system, and improves the security of the virtual machine.

Description

Method, system, secure processor and storage medium for pass-through device
Technical Field
Embodiments of the present disclosure relate to a design method of a virtual machine, and more particularly, to a method, system, secure processor, and non-transitory storage medium for pass-through devices.
Background
A Virtual Machine (VM) is a complete computer system that is simulated by software, has full hardware functionality, and runs in a completely isolated environment.
Virtual machines typically support real physical devices through either a para-virtualization (Virtio) scheme or a device pass-through (IOMMU) scheme. Para-virtualization transfers data through memory shared between the virtual machine and the host, and the Virtual Machine Monitor (VMM) then sends the data to the physical device. Device pass-through, by contrast, is generally implemented with an Input/Output Memory Management Unit (IOMMU). The IOMMU is a memory management unit (MMU) that connects a Direct Memory Access (DMA)-capable I/O bus to main memory and translates the virtual addresses accessed by a device into physical addresses. With device pass-through, a physical device in the host is assigned directly to the guest through the IOMMU and is used exclusively by that virtual machine, so the virtual machine can interact with the physical device directly through the IOMMU.
Disclosure of Invention
In the device pass-through scheme, taking a PCI (Peripheral Component Interconnect) device as an example, the host BIOS maps the device memory into the system address space through the PCI configuration space. Because this configuration must be completed by the host, the virtual machine cannot reconfigure the PCI space of its pass-through device. When the secure virtual machine issues a read or write request to the device memory, a nested page fault exception is generated, and the host notifies the secure processor to establish a nested page table for the secure virtual machine. The host operating system could determine, from the mapping of the system address space, whether the faulting address belongs to the device memory, and then notify the secure processor to handle the encryption flag bit of the nested page table accordingly.
In order to ensure security, a memory-isolated secure virtual machine must run on a host with memory encryption enabled, which requires the secure processor to set an appropriate encryption flag bit for the system address corresponding to each page fault exception. In addition, to improve the performance of virtual machine devices, the secure virtual machine needs to support device pass-through. When the virtual machine accesses the device memory, the secure processor must establish a nested page table entry for the device memory, map the device memory into the virtual machine address space, and clear the corresponding encryption flag bit. If the host operating system were the one to determine whether the address of the current nested page fault belongs to the device memory and to clear the encryption flag bit, there would be a security risk, because the host is not trusted and may be attacked.
To solve the above technical problem, an aspect of the embodiments of the present disclosure provides a method for a pass-through device, the method including: receiving a page fault address sent by a host in response to a nested page fault exception occurring in a secure virtual machine, and determining, by the secure processor, whether the page fault address belongs to the device memory of the pass-through device; and if the page fault address belongs to the device memory, setting, by the secure processor, the page fault address in the nested page table to be read and written in an unencrypted manner.
For example, in a method provided by an embodiment of the present disclosure, determining, by the secure processor, whether the page fault address belongs to the device memory of the pass-through device includes: determining, by the secure processor, whether the page fault address is in a device memory linked list, where the device memory linked list is maintained by the secure processor.
For example, in a method provided in accordance with an embodiment of the present disclosure, causing, by the secure processor, the page fault address in the nested page table to be set to read and write in an unencrypted manner comprises: mapping, by the secure processor, the page fault address in the nested page table, and setting the encryption flag bit of the mapped page fault address in the nested page table to non-encrypted, or clearing the encryption flag bit.
For example, according to the method provided by an embodiment of the present disclosure, the device memory linked list is maintained by the secure processor through steps including: in response to the secure virtual machine initializing the pass-through device, receiving, by the secure processor, the device memory information of the pass-through device sent by the secure virtual machine, and adding the device memory information of the pass-through device to the device memory linked list; and traversing the nested page table by the secure processor, and if the device memory information already has a mapping relation in the nested page table, setting the mapped address in the nested page table to be read and written in an unencrypted manner.
For example, according to the method provided by an embodiment of the present disclosure, the device memory linked list is further maintained by the secure processor through steps including: in response to the exit of a driver of the pass-through device, receiving, by the secure processor, the device memory information of the to-be-exited pass-through device sent by the secure virtual machine; and removing, by the secure processor, the device memory information of the to-be-exited pass-through device from the device memory linked list, traversing the nested page table, and clearing the mapping relations related to that device memory information.
For example, a method is provided according to an embodiment of the present disclosure, wherein the secure processor is physically separated from a CPU comprised by a host, the host comprises a secure memory, and a secure virtual machine is run on the secure memory, wherein the secure virtual machine sends the device memory information through a secure call.
For example, a method is provided in accordance with an embodiment of the present disclosure, wherein during a boot process of a host, a nested page table is established by a secure processor for a secure virtual machine such that a device configuration space of the secure virtual machine maps to a host system address space.
For example, according to the method provided by an embodiment of the present disclosure, the host only forwards messages between the secure virtual machine and the secure processor, and does not participate in the secure processor's determination of whether the address belongs to the device memory according to the device memory linked list.
For example, a method provided in accordance with an embodiment of the present disclosure further includes: if the page fault address does not belong to the device memory, the secure processor causes the page fault address in the nested page table to be set to read and write in an encrypted manner.
Yet another aspect of embodiments of the present disclosure provides a system for pass-through devices, comprising: a host comprising a CPU and a secure memory, wherein a secure virtual machine is run on the secure memory; a pass-through device; a secure processor physically separate from the CPU and configured to perform one or more of the methods described above.
Another aspect of embodiments of the present disclosure provides a secure processor for a pass-through device that is physically separate from a CPU on a host and configured to perform one or more of the above-described methods.
Yet another aspect of embodiments of the present disclosure provides a computer-readable non-transitory storage medium having instructions stored thereon, the instructions being executable by a processor to perform one or more of the methods described above.
According to the embodiments of the present disclosure, the secure virtual machine determines which address spaces belong to the device memory and notifies the secure processor to handle the encryption flag bits of the nested page table accordingly, so that the host operating system is bypassed, information leakage from the virtual machine is reduced, and security is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings of the embodiments of the present disclosure will be briefly described below. It is to be expressly understood that the drawings in the following description are directed to only some embodiments of the disclosure and are not intended as limitations of the disclosure.
Fig. 1 shows a system architecture diagram for pass-through devices according to an exemplary embodiment of the present disclosure.
FIG. 2 illustrates a schematic diagram of translating virtual machine physical addresses, according to an embodiment of the disclosure.
FIG. 3 illustrates another schematic diagram of translating virtual machine physical addresses according to an embodiment of the disclosure.
Fig. 4A-4D illustrate data structure diagrams according to embodiments of the present disclosure.
Fig. 5 shows a flow diagram for a pass-through device according to an exemplary embodiment of the present disclosure.
Fig. 6 shows another flow diagram for a pass-through device according to an exemplary embodiment of the present disclosure.
Fig. 7 shows a schematic diagram of a host boot process for a pass-through device according to an exemplary embodiment of the present disclosure.
Fig. 8 shows a schematic diagram of a system for pass-through devices according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings of the embodiments of the present disclosure. It is to be understood that the described embodiments are only a few embodiments of the present disclosure, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the described embodiments of the disclosure without any inventive step, are within the scope of protection of the disclosure.
Hereinafter, exemplary embodiments of the present disclosure will be described in detail with reference to the accompanying drawings.
Fig. 1 shows a schematic diagram of a system architecture 100 for pass-through devices according to an exemplary embodiment of the present disclosure.
Referring to fig. 1, a system architecture 100 according to an exemplary embodiment of the present disclosure is a general system architecture for implementing the present disclosure. The system architecture 100 may be embodied as or included in devices such as smart phones, tablets, desktop computers, Personal Computers (PCs), and other electronic devices capable of device pass-through. Fig. 1 shows only the elements that are germane to the present disclosure. However, the system architecture 100 according to the embodiments of the present disclosure is not limited thereto, but may also include other additional units, such as a storage unit, an application, and other suitable units, or may omit part of the units.
The system architecture 100 may include a host (not shown). For example, the host may be a mainframe computer, personal computer, industrial computer, or other computer capable of being configured to run a virtual machine.
The host may include a memory and a Central Processing Unit (CPU), wherein the memory may be divided into a normal memory 102A for running host processes and a secure memory 102B for running a virtual machine Operating System (OS) and its processes. In some embodiments, the secure memory 102B may be initialized during the boot process of the host, for example as a region of the original memory that is logically isolated from the normal memory 102A and initialized by the host at boot time. However, the present disclosure is not limited thereto, and the secure memory 102B may also be a memory region physically isolated from the normal memory 102A. For example, the secure memory 102B may be used for certain functions, such as functions having particular security and/or encryption requirements. For example, the secure memory 102B may be used for running virtual machines, including the virtual machine OS and its processes.
In light of the above description, in the following, unless explicitly indicated otherwise, the terms "virtual machine", "secure virtual machine", or "memory-isolated secure virtual machine" referred to in embodiments of the present disclosure each refer to a virtual machine running on secure memory 102B that is isolated from normal memory 102A.
Commonly used virtual machines share memory with the host (including host processes), so when the host is attacked, the security of such virtual machines is reduced. The secure virtual machine according to the embodiments of the present disclosure, however, runs on the secure memory; in other words, the secure virtual machine maintains its own secure memory that the host cannot use, which reduces leakage of virtual machine information and improves the security of the virtual machine.
In the embodiment of the present disclosure, the host may further include a secure processor 101, a Central Processing Unit (CPU) 103, and a host operating system (host OS) 104; however, embodiments are not limited thereto, and the secure processor 101 may instead be provided physically separate from the host.
The CPU 103 may include a Memory Management Unit (MMU) (not shown) for managing the allocation of memory (e.g., the normal memory 102A). The CPU 103 may be kept away from the secure memory; for example, the CPU may be set to have no access to the secure memory 102B. The CPU 103 may manage communications between the host OS 104 and its processes 105A-105N and the memory (e.g., the normal memory 102A), and may also manage communication to the secure processor 101.
According to an embodiment of the present disclosure, the secure processor 101 is isolated from the CPU103 in hardware. In some cases, there is a possibility that the host may be attacked, and thus, the secure processor 101 may improve the security of the virtual machine reading and writing the secure memory.
In some embodiments, CPU103 may maintain page tables for each process 105A-105N when host OS 104 needs to run host processes 105A-105N. The page table may record a page Address (e.g., referred to as a "Virtual Address (VA)") of one or more memory pages corresponding to a process when the process is loaded, and a Physical Address (PA) of a Physical memory storing a Physical resource corresponding to the page Address. In other words, the page table may record the mapping relationship between the virtual address and the physical address, for example, in the form of a table or a function.
The MMU may be configured to translate virtual addresses into physical addresses based on the page tables (e.g., the page tables/nested page tables pointed to by CR3/nCR3), so as to load the memory pages of the process to be run into memory and form an executable program. After the program in memory finishes running, it is released and the memory can be reclaimed, which improves the utilization of the memory space. In some embodiments, the page table may be a single-level page table, a multi-level page table, or a nested page table. Generally, the structures of page tables and nested page tables are similar, typically three-level or four-level, but the disclosure is not so limited and the page tables and nested page tables may have more or fewer levels as needed.
In the system architecture 100 according to embodiments of the present disclosure, CR3 is a register within the CPU 103 or the secure virtual machine that points to a page table, and nCR3 is a register within the secure processor 101 that points to a nested page table. The CR3 within the CPU 103 points to a page table used to translate Host Virtual Addresses (HVAs) into Host Physical Addresses (HPAs), while the CR3 within the virtual machine points to a page table used to translate virtual machine virtual addresses (GVAs) into virtual machine physical addresses (GPAs). The nested page table pointed to by nCR3 is used to translate virtual machine physical addresses (GPAs) into Host Physical Addresses (HPAs).
In the system architecture 100 according to embodiments of the present disclosure, the virtual machine processes (not shown) of the virtual machine OSs 106A-106N need to be loaded into the secure memory 102B to run. Similar to running the host processes 105A-105N, when a virtual machine process runs, the virtual machine translates the virtual machine virtual address (GVA) into a virtual machine physical address (GPA) through its own CR3. The secure processor 101 then builds and maintains the nested page tables (via nCR3) corresponding to the virtual machine process, to translate the virtual machine physical address (GPA) into the Host Physical Address (HPA) and thus complete the mapping of virtual machine resources to host resources.
Memory management using page tables/nested page tables tends to suffer from problems such as page fault exceptions (also known as page faults or page fault interrupts or page fault errors). Generally, a page fault exception refers to the inability to find the corresponding physical address (e.g., HPA or GPA) in the page table/nested page table pointed to by CR3/nCR3 using HVAs, GVAs or GPA. In other words, when the MMU within CPU103 translates the HVA, GVA, or GPA based on page/nested page tables, it finds no physical page corresponding thereto. At this time, the physical address (e.g., HPA or GPA) of the corresponding physical page needs to be added to the page table/nested page table to make the process run smoothly. Thus, one way to resolve a page fault exception is to allocate a physical address (e.g., HPA or GPA) for the memory page in which the page fault exception occurred.
According to an embodiment of the present disclosure, when a page fault exception occurs to a host process, the page table pointed to by the CR3 of the host is updated by the CPU 103. When the page fault exception occurs on the secure virtual machine: if the page fault exception is a page fault exception that occurred when translating GVA to GPA, the secure virtual machine updates the page table pointed to by the CR3 of the virtual machine; if the page fault exception is a nested page fault exception that occurs when translating GPAs into HPAs, the CPU103 notifies the secure processor 101 of the information about the occurrence of the nested page fault exception so that the secure processor 101 updates the nested page table pointed to by the nCR3, e.g., by allocating secure memory to the virtual machine memory page in which the nested page fault exception occurred to eliminate the nested page fault exception. According to the embodiment of the disclosure, the secure processor allocates the secure memory by modifying or updating a Page Table Entry (PTE) of the nested Page Table, so as to realize translation/mapping from the GPA to the HPA of the system physical memory.
Referring next to fig. 2, 3 and 4A-4D, for purposes of better understanding the present disclosure, a schematic diagram of the process of translating virtual machine physical addresses (GPA) by an MMU through nCR3 maintained by a secure processor and associated data structures are shown and briefly described. Of course, the present disclosure is not so limited and these relative descriptions are merely exemplary and the specific translation process and data structures of the nested page tables may vary depending on the specific implementation of the nested page tables.
In some embodiments, the nested page tables may consist of 3-level or 4-level page tables. For example, in 64-bit long mode: the page size may be 4k, corresponding to a 4-level page table (as shown in FIG. 2); alternatively, the page size may be 2M, corresponding to a 3-level page table (as shown in FIG. 3).
FIG. 2 illustrates a process diagram for translating virtual machine physical addresses (GPAs), according to an embodiment of the disclosure.
As shown in fig. 2, the 4-Level Page Table may include a 4-Level Page Map (PML4) Table, a Page Directory Pointer (PDP) Table, a Page Directory (PD) Table, a Page Table (PT), and a physical Page. The PML4 Table consists of PML4 Table entries (PML4 Table Entry, PML4E), the PDP Table consists of PDP Table entries (PDP Table Entry, PDPE), the PD Table consists of PD Table entries (PD Table Entry, PDE), and the PT consists of PT entries (PT Entry, PTE).
As shown in FIG. 2, in this example, the virtual machine physical address (GPA) is 64 bits, which include a sign extension (bits 63-48), a 4-level page map (PML4) offset (bits 47-39), a page directory pointer offset (bits 38-30), a page directory offset (bits 29-21), a page table offset (bits 20-12), and a physical page offset (bits 11-0).
Referring to FIG. 2, when a GPA of a process generates a nested page fault exception, corresponding page fault exception information is sent to the secure processor, which typically implements the following steps:
1. The nested page table root directory, the PML4 table, is found through nCR3, which contains the 4-level page map base address (Base Address).
2. Taking bits 47-39 of GPA as an index, look up the page table entry PML4E in the PML4 table, and proceed to look up the PDP table according to the found PML 4E. If the PDP table does not exist, 4k pages are allocated in the secure memory to serve as the PDP table, otherwise, the next step is carried out.
3. Taking 38-30 bits of GPA as an index, looking up a page table entry PDPE in a PDP table, and continuing to look up the PD table according to the found PDPE. And if the PD table does not exist, allocating 4k pages in the secure memory as the PD table, and otherwise, carrying out the next step.
4. Taking bits 29-21 of GPA as an index, look up page table entry PDE in PD table, and continue to look up PT according to PDE found. If the PT does not exist, 4k pages are allocated in the secure memory as the PT, and otherwise, the next step is carried out.
5. Taking bits 20-12 of the GPA as an index, look up the page table entry PTE in the PT, allocate a 4k secure memory page at the same time, and fill the physical address of the allocated secure memory page (as shown in FIG. 2) into the PTE, i.e., update the PTE.
Thus, the CPU obtains the secure memory page (i.e., HPA) by the MMU consulting the nested page table, and then takes the 11-0 bits (bit0 through bit11) of the GPA as the page offset (i.e., offset within the page) to get the final physical address.
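As a concrete illustration of the five steps above, the following C sketch walks (and builds on demand) a 4-level nested page table for a 4k page. It is a minimal sketch under stated assumptions, not the patented implementation: alloc_secure_page(), phys_addr_of(), and the directly addressable table memory are hypothetical names introduced only for readability.

```c
#include <stddef.h>
#include <stdint.h>

#define PRESENT   0x1ULL
#define ADDR_MASK 0x000FFFFFFFFFF000ULL          /* bits 51-12: base address field */

typedef uint64_t pt_entry_t;

extern pt_entry_t *alloc_secure_page(void);      /* hypothetical: returns a zeroed 4k secure page */
extern uint64_t    phys_addr_of(void *table);    /* hypothetical: table pointer -> physical address */

/* Extract the four table indices from a guest physical address (GPA). */
static inline size_t pml4_idx(uint64_t gpa) { return (gpa >> 39) & 0x1FF; }  /* bits 47-39 */
static inline size_t pdp_idx (uint64_t gpa) { return (gpa >> 30) & 0x1FF; }  /* bits 38-30 */
static inline size_t pd_idx  (uint64_t gpa) { return (gpa >> 21) & 0x1FF; }  /* bits 29-21 */
static inline size_t pt_idx  (uint64_t gpa) { return (gpa >> 12) & 0x1FF; }  /* bits 20-12 */

/* Follow the entry at table[idx] if present; otherwise allocate the next-level table. */
static pt_entry_t *next_level(pt_entry_t *table, size_t idx)
{
    if (!(table[idx] & PRESENT)) {
        pt_entry_t *child = alloc_secure_page();
        table[idx] = phys_addr_of(child) | PRESENT;
    }
    /* For brevity, physical addresses are assumed to be directly addressable here. */
    return (pt_entry_t *)(uintptr_t)(table[idx] & ADDR_MASK);
}

/* Resolve a nested page fault for a 4k page: fill the PTE with the newly allocated page. */
void handle_nested_fault_4k(pt_entry_t *pml4, uint64_t gpa, uint64_t new_page_hpa)
{
    pt_entry_t *pdp = next_level(pml4, pml4_idx(gpa));
    pt_entry_t *pd  = next_level(pdp,  pdp_idx(gpa));
    pt_entry_t *pt  = next_level(pd,   pd_idx(gpa));
    pt[pt_idx(gpa)] = (new_page_hpa & ADDR_MASK) | PRESENT;   /* update the PTE */
    /* The final HPA is then (PTE base address) + (gpa & 0xFFF), the in-page offset. */
}
```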
FIG. 3 illustrates another schematic diagram of a process to translate virtual machine physical addresses (GPAs) according to an embodiment of the disclosure.
As shown in fig. 3, the 3-level page table may include a 4-level page map (PML4) table, a Page Directory Pointer (PDP) table, a Page Directory (PD) table, and a physical page. Wherein, the PML4 table is composed of PML4 table entries (PML4E), the PDP table is composed of PDP table entries (PDPE), and the PD table is composed of PD table entries (PDE).
As shown in FIG. 3, in this example, the virtual machine physical address (GPA) is 64 bits, which include a sign extension (bits 63-48), a 4-level page map (PML4) offset (bits 47-39), a page directory pointer offset (bits 38-30), a page directory offset (bits 29-21), and a physical page offset (bits 20-0).
Referring to FIG. 3, when a GPA of a process generates a nested page fault exception, corresponding page fault exception information is sent to the secure processor, which typically implements the following steps:
1. the nested page table root directory PML4 table is found by nCR3, which includes a 4-level page mapping base address.
2. Taking bits 47-39 of GPA as an index, look up the page table entry PML4E in the PML4 table, and proceed to look up the PDP table according to the found PML 4E. If the PDP table does not exist, 4k pages are allocated in the secure memory to serve as the PDP table, otherwise, the next step is carried out.
3. Taking 38-30 bits of GPA as an index, looking up a page table entry PDPE in a PDP table, and continuing to look up the PD table according to the found PDPE. And if the PD table does not exist, allocating 4k pages in the secure memory as the PD table, and otherwise, carrying out the next step.
4. Taking bits 29-21 of GPA as an index, look up the page table entry PDE in the PD table, allocate the 2M secure memory page at the same time, and fill the physical address of the allocated secure memory page (as shown in fig. 3) into the PDE, i.e. update the PDE.
Thus, the CPU obtains the secure memory page (i.e., HPA) by the MMU consulting the nested page table, and then takes the 20-0 bits (bit0 through bit20) of the GPA as the page offset (i.e., offset within the page) to get the final physical address.
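For comparison, the 2M (3-level) case differs from the 4k sketch above only in the final step: the walk stops at the PD table, whose entry (PDE) points directly at a 2M secure page, and bits 20-0 of the GPA become the in-page offset. The sketch below reuses the hypothetical helpers from the 4k sketch and is likewise only an illustration.

```c
/* Sketch of the 3-level (2M page) walk; helpers and constants are from the 4k sketch above. */
void handle_nested_fault_2m(pt_entry_t *pml4, uint64_t gpa, uint64_t new_page_hpa)
{
    pt_entry_t *pdp = next_level(pml4, pml4_idx(gpa));
    pt_entry_t *pd  = next_level(pdp,  pdp_idx(gpa));
    pd[pd_idx(gpa)] = (new_page_hpa & ~0x1FFFFFULL) | PRESENT;  /* update the PDE */
    /* The final HPA is then (PDE base address) + (gpa & 0x1FFFFF), the 21-bit in-page offset. */
}
```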
As shown in FIGS. 2 and 3 above and FIGS. 4A-4D below, each page table entry includes at least a base address (e.g., bits 51-12), and the remaining non-address bits are attribute bits, such as the R/W (Read/Write) bit, the U/S (User/Supervisor) bit, and the PCD (Page Cache Disable) bit.
4A-4D illustrate schematics of the process data structures of FIG. 2 and/or FIG. 3, such as page table entry structures in nested page tables, according to embodiments of the present disclosure.
FIG. 4A shows a 4k 4-level page map entry (PML4E) in long mode. A typical PML4E may include an NX (no-execute) bit (indicating whether the page is non-executable), available bits (reserved for use by programs), the page directory pointer base address, MBZ (Must Be Zero) bits (reserved), IGN (ignore) bits (which may be ignored), an A (Accessed) bit (indicating whether the page has been accessed), a PCD (Page Cache Disable) bit (indicating whether caching is disabled), a PWT (Page Write-Through) bit (indicating whether data written to the cache is also written to memory), a U/S bit (indicating user/supervisor privilege), an R/W bit (indicating read/write permission), a P (Present) bit (indicating whether the page table entry is valid for address translation), and the like.
However, this is merely an architectural limitation, and a given processor implementation may support fewer bits, i.e., the disclosure is not so limited.
FIG. 4B shows a 4k Page Directory Pointer Entry (PDPE) in long mode.
A typical PDPE may include an NX (execute disabled) bit (to indicate whether the page is not executable), an available bit (reserved for use by the program), a page directory base, an IGN (ignore) bit (negligible), a 0 bit (for reservation), an a (access) bit (to indicate whether the page is accessed), a PCD (page cache disable) bit (to indicate whether the cache is closed), a PWT (page write through) bit (to indicate whether data is written to the cache while it is also being written to memory), a U/S bit (to indicate user authority/super authority), an R/W bit (to indicate read and write status), a P (present) bit (to indicate whether the page table entry is valid for address translation), and so forth.
However, this is merely an architectural limitation, and a given processor implementation may support fewer bits, i.e., the disclosure is not so limited.
FIG. 4C shows a 4k Page Directory Entry (PDE) in long mode.
A typical PDE may include an NX (execute disable) bit (to indicate whether the page is not executable), an available bit (reserved for use by the program), a page table base, an IGN (ignore) bit (negligible), a 0 bit (for reservation), an a (access) bit (to indicate whether the page is accessed), a PCD (page cache disable) bit (to indicate whether the cache is closed), a PWT (page write through) bit (to indicate whether data is written to the cache while it is also being written to memory), a U/S bit (to indicate user authority/super authority), an R/W bit (to indicate read and write status), a P (present) bit (to indicate whether the page table entry is valid for address translation), and so forth.
However, this is merely an architectural limitation, and a given processor implementation may support fewer bits, i.e., the disclosure is not so limited.
FIG. 4D shows a 4k Page Table Entry (PTE) in long mode.
A typical PTE may include an NX (no-execute) bit (indicating whether the page is non-executable), available bits (reserved for use by programs), the physical page base address, a G (Global) bit (indicating whether the translation is always kept in the TLB (translation lookaside buffer)), a PAT (Page Attribute Table) bit (setting memory attributes at page-level granularity), a D (Dirty) bit (indicating whether the page pointed to by the entry has been written), an A (Accessed) bit (indicating whether the page has been accessed), a PCD (Page Cache Disable) bit (indicating whether caching is disabled), a PWT (Page Write-Through) bit (indicating whether data written to the cache is also written to memory), a U/S bit (indicating user/supervisor privilege), an R/W bit (indicating read/write permission), a P (Present) bit (indicating whether the page table entry is valid for address translation), and the like.
However, this is merely an architectural limitation, and a given processor implementation may support fewer bits, i.e., the disclosure is not so limited.
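As a concrete reference for the attribute bits listed for FIGS. 4A-4D, the following definitions give the conventional long-mode bit positions of a 4k PTE. This is a sketch based on the standard x86-64 layout and is not asserted to be the exact layout used by the disclosed implementation.

```c
#define PTE_P    (1ULL << 0)   /* Present: entry is valid for address translation */
#define PTE_RW   (1ULL << 1)   /* Read/Write */
#define PTE_US   (1ULL << 2)   /* User/Supervisor */
#define PTE_PWT  (1ULL << 3)   /* Page Write-Through */
#define PTE_PCD  (1ULL << 4)   /* Page Cache Disable */
#define PTE_A    (1ULL << 5)   /* Accessed */
#define PTE_D    (1ULL << 6)   /* Dirty */
#define PTE_PAT  (1ULL << 7)   /* Page Attribute Table */
#define PTE_G    (1ULL << 8)   /* Global: translation kept in the TLB across switches */
#define PTE_NX   (1ULL << 63)  /* No-execute */

/* Example check: is this page table entry usable for translation? */
static inline int pte_is_present(unsigned long long pte) { return (pte & PTE_P) != 0; }
```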
In accordance with embodiments of the present disclosure, a system architecture for a pass-through device and a process of translating/mapping virtual machine physical addresses (GPAs) to Host Physical Addresses (HPAs) upon generation of a nested page fault exception have been described above in connection with the accompanying drawings. Embodiments of a method for a feed-through device will be described in detail below on this basis with reference to the drawings.
In the embodiments of the present disclosure, the nested page tables of the memory-isolated secure virtual machine are established and maintained by the secure processor. In the device pass-through scheme, the device memory is mapped into the system address space by the host, and unlike the system physical memory, its content does not need encryption protection; however, the secure processor cannot tell whether a page fault address sent by the host belongs to the system physical memory or to the device memory. If the encryption flag bit is set in the nested page table entry that maps the device memory, the virtual machine cannot access the device memory, which causes device initialization to fail.
To overcome the above problem, in the embodiments of the present disclosure, for device pass-through, the secure virtual machine sends device memory information to the secure processor (e.g., through a secure call) during device driver initialization, and the secure processor is responsible for establishing and maintaining a device memory linked list. When a nested page fault exception occurs in the secure virtual machine, the secure processor determines, according to the device memory linked list, whether the currently faulting memory is device memory, and clears the encryption flag bit if so. Based on the embodiments of the present disclosure, the device pass-through function of a memory-isolated secure virtual machine can be realized, the mapping relation and encryption flag of the pass-through device memory in the nested page table of the secure virtual machine are determined by the virtual machine itself, excessive intervention of the host is effectively avoided, and the security of the virtual machine is improved.
Fig. 5 shows a flow diagram for a pass-through device according to an exemplary embodiment of the present disclosure. An exemplary flow of a method for pass-through devices according to embodiments of the present disclosure may be performed by the system architecture 100 described with reference to fig. 1. A method for a pass-through device will be described below with reference to the accompanying drawings.
As shown in fig. 5, an exemplary flowchart of a method for pass-through devices according to an embodiment of the present disclosure includes the steps of:
in step S502, a page fault address sent by the host in response to the occurrence of the nested page fault exception is received, and the security processor determines whether the page fault address belongs to the device memory of the pass-through device.
In step S504, if the page fault address belongs to the device memory, the secure processor sets the page fault address in the nested page table to be read or written in an unencrypted manner.
In some embodiments, determining, by the secure processor, whether the page fault address belongs to the device memory of the pass-through device includes: determining, by the secure processor, whether the page fault address is in a device memory linked list, where the device memory linked list is maintained by the secure processor.
In some embodiments, causing, by the secure processor, the page fault address in the nested page table to be set to read and write in an unencrypted manner comprises: mapping, by the secure processor, the page fault address in the nested page table, and setting the encryption flag bit of the mapped page fault address in the nested page table to non-encrypted, or clearing the encryption flag bit.
In some embodiments, the device memory linked list is maintained by the secure processor through steps including: in response to the secure virtual machine initializing the pass-through device, receiving, by the secure processor, the device memory information of the pass-through device sent by the secure virtual machine, and adding the device memory information of the pass-through device to the device memory linked list; and traversing the nested page table by the secure processor, and if the device memory information already has a mapping relation in the nested page table, setting the mapped address in the nested page table to be read and written in an unencrypted manner.
In some embodiments, the device memory linked list is further maintained by the secure processor through steps including: in response to the exit of a driver of the pass-through device, receiving, by the secure processor, the device memory information of the to-be-exited pass-through device sent by the secure virtual machine; and removing, by the secure processor, the device memory information of the to-be-exited pass-through device from the device memory linked list, traversing the nested page table, and clearing the mapping relations related to that device memory information. A minimal sketch of such a linked list follows this paragraph.
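The following C sketch shows one possible shape of the device memory linked list and the add/remove/lookup operations described above. The structure and names are illustrative assumptions rather than the patented data layout; the comments mark where the nested page table traversal described in the text would take place.

```c
#include <stdint.h>
#include <stdlib.h>

struct dev_mem_range {
    uint64_t base;                 /* start of the pass-through device memory region */
    uint64_t size;                 /* length of the region */
    struct dev_mem_range *next;
};

static struct dev_mem_range *dev_mem_list;   /* head of the device memory linked list */

/* Called when the secure VM initializes the pass-through device driver. */
void dev_mem_add(uint64_t base, uint64_t size)
{
    struct dev_mem_range *r = malloc(sizeof(*r));
    if (!r)
        return;
    r->base = base;
    r->size = size;
    r->next = dev_mem_list;
    dev_mem_list = r;
    /* The secure processor would additionally traverse the nested page table here and
     * set any entry already mapping [base, base + size) to unencrypted read/write. */
}

/* Called when the pass-through device driver exits. */
void dev_mem_remove(uint64_t base)
{
    for (struct dev_mem_range **pp = &dev_mem_list; *pp; pp = &(*pp)->next) {
        if ((*pp)->base == base) {
            struct dev_mem_range *victim = *pp;
            *pp = victim->next;
            free(victim);
            /* The related nested page table mappings would be cleared here. */
            return;
        }
    }
}

/* Used by the fault handler: does the faulting address fall inside device memory? */
int dev_mem_contains(uint64_t addr)
{
    for (struct dev_mem_range *r = dev_mem_list; r; r = r->next)
        if (addr >= r->base && addr < r->base + r->size)
            return 1;
    return 0;
}
```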
In some embodiments, the secure processor is physically separate from a CPU comprised by the host, the host comprises secure memory, and the secure virtual machine is run on the secure memory, wherein the secure virtual machine sends the device memory information through a secure call.
In some embodiments, during boot-up of the host, a nested page table is established by the secure processor for the secure virtual machine such that a device configuration space of the secure virtual machine maps to a host system address space.
In some embodiments, the host only forwards messages for the secure virtual machine and the secure processor, and does not participate in the operation of the secure processor determining whether the address belongs to the device memory according to the device memory linked list.
In some embodiments, for example, if the page fault address does not belong to the device memory, the secure processor causes the page fault address in the nested page table to be set to read and write in an encrypted manner.
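Putting steps S502/S504 together, the decision made by the secure processor on each forwarded nested page fault can be sketched as follows. This is a minimal illustration: map_in_nested_page_table() stands for the table walk shown earlier, dev_mem_contains() is the linked-list lookup above, and the use of bit 47 as the encryption flag follows the bit 47 example given later in this description; none of these names are asserted to be the actual implementation.

```c
#include <stdint.h>

#define C_BIT (1ULL << 47)   /* encryption flag bit used in this example */

extern uint64_t *map_in_nested_page_table(uint64_t fault_gpa);  /* hypothetical: walk/build, return the PTE */
extern int dev_mem_contains(uint64_t addr);                     /* from the linked-list sketch */

/* Handle a nested page fault address forwarded by the host. */
void on_nested_page_fault(uint64_t fault_gpa)
{
    uint64_t *pte = map_in_nested_page_table(fault_gpa);
    if (dev_mem_contains(fault_gpa))
        *pte &= ~C_BIT;   /* device memory: read and write in an unencrypted manner */
    else
        *pte |= C_BIT;    /* ordinary secure memory: read and write in an encrypted manner */
}
```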
According to the embodiments of the present disclosure, the secure virtual machine can decide the attribute of an address space (whether encryption is needed) according to its own requirements, and the whole process involves only the secure virtual machine and the secure processor. The host operating system only forwards messages and does not take part in the logical decisions of this process, which minimizes information leakage from the virtual machine and ensures security. The above embodiments of the present disclosure and additional aspects thereof are described in more detail below in conjunction with FIGS. 6-7.
Fig. 6 shows another flow diagram for a pass-through device according to an exemplary embodiment of the present disclosure.
At step S605, the host starts.
The host startup process will be described in more detail below in conjunction with fig. 7.
Fig. 7 shows a schematic diagram of a host boot process for a pass-through device according to an exemplary embodiment of the present disclosure.
As shown in FIG. 7, in some embodiments, for a physical device, for example and without limitation a PCI physical device 720, the BIOS of the host initializes a corresponding PCI configuration space for the PCI physical device 720. The PCI configuration space typically stores basic information such as the vendor, the Interrupt ReQuest (IRQ) number, and the start addresses and sizes that define the device's memory space and I/O space. The PCI controller 725 maps the PCI physical device 720 into the host system address space 715 through a Base Address Register (BAR). The secure processor may establish, for the secure virtual machine 705, a nested page table 710 that maps the PCI configuration space of the secure virtual machine (i.e., the segment of the secure virtual machine's address space that belongs to the device configuration space) to the portion of the host system address space 715 that the host has already mapped to the PCI device memory, e.g., the PCI device memory address space 730 in the host system address space 715 in FIG. 7, so that the secure virtual machine 705 can directly access the memory of the PCI physical device 720. A sketch of how a BAR yields the device memory base and size is given below.
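For illustration, the sketch below shows how a memory BAR in the PCI configuration space yields the device memory base address and size, using the legacy 0xCF8/0xCFC configuration mechanism and the usual write-all-ones sizing trick. It assumes a 32-bit memory BAR and assumed port I/O helpers inl()/outl(); real firmware may instead use ECAM/MMIO access, and this sketch is not taken from the patent text.

```c
#include <stdint.h>

extern uint32_t inl(uint16_t port);               /* assumed port I/O helpers */
extern void     outl(uint16_t port, uint32_t val);

#define PCI_CONFIG_ADDR 0xCF8
#define PCI_CONFIG_DATA 0xCFC
#define PCI_BAR0_OFFSET 0x10

static uint32_t pci_cfg_read(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off)
{
    uint32_t addr = 0x80000000u | ((uint32_t)bus << 16) | ((uint32_t)dev << 11) |
                    ((uint32_t)fn << 8) | (off & 0xFC);
    outl(PCI_CONFIG_ADDR, addr);
    return inl(PCI_CONFIG_DATA);
}

static void pci_cfg_write(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off, uint32_t val)
{
    uint32_t addr = 0x80000000u | ((uint32_t)bus << 16) | ((uint32_t)dev << 11) |
                    ((uint32_t)fn << 8) | (off & 0xFC);
    outl(PCI_CONFIG_ADDR, addr);
    outl(PCI_CONFIG_DATA, val);
}

/* Read BAR0 and determine the memory region it describes (base and size). */
void pci_bar0_mem_region(uint8_t bus, uint8_t dev, uint8_t fn,
                         uint32_t *base, uint32_t *size)
{
    uint32_t bar = pci_cfg_read(bus, dev, fn, PCI_BAR0_OFFSET);
    pci_cfg_write(bus, dev, fn, PCI_BAR0_OFFSET, 0xFFFFFFFFu);  /* write all ones */
    uint32_t mask = pci_cfg_read(bus, dev, fn, PCI_BAR0_OFFSET);
    pci_cfg_write(bus, dev, fn, PCI_BAR0_OFFSET, bar);          /* restore original value */
    *base = bar & ~0xFu;                                        /* low 4 bits of a memory BAR are flags */
    *size = ~(mask & ~0xFu) + 1u;                               /* standard BAR sizing */
}
```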
It should be noted that the memory in which the secure virtual machine runs (e.g., the secure memory 102B in FIG. 1) is encrypted with the secure virtual machine's own key, so the host and other virtual machines cannot obtain the secure virtual machine's information, which ensures its security. Once the secure virtual machine is started, its memory works in an encrypted state. Setting the encryption flag bit of a page table entry in the nested page table causes the CPU to read/write the corresponding memory in an encrypted manner, whereas clearing the encryption flag bit causes the CPU to read/write it in an unencrypted manner. Of course, the encryption flag bit is merely one example of indicating whether the address pointed to by a page table entry is read/written in an encrypted manner, and is not a limitation.
The encryption flag bit may be part of the address; for example, bit 47 of a 64-bit address may indicate encryption, as shown in FIGS. 4A-4D. Thus, when the CPU accesses the memory and finds that the encryption flag at bit 47 is 1, it accesses the memory in an encrypted manner. The embodiment is not limited to this; encrypted access could instead be indicated by an encryption flag bit of 0.
As for the device memory of the pass-through device, its contents are generally stored in clear text, so if the virtual machine reads and writes the device memory in an encrypted manner, wrong data will be read and written. Therefore, when the secure virtual machine accesses the device memory, the encryption flag bit of the addresses through which it accesses the device memory must be set so that they are read and written in an unencrypted manner, in order to read and write correct data.
Returning to FIG. 6, at step S610, the host starts the secure virtual machine.
During the process of the host starting the secure virtual machine, in a scheme supporting device pass-through, the CPU supports the IOMMU, which can translate between the device physical address of the pass-through device and the virtual machine physical address; the device is connected to the CPU supporting the IOMMU, and the secure virtual machine can access the device directly through the IOMMU configuration. The VMM of the secure virtual machine maps the virtual machine address (GPA) of the pass-through device to the host system address space (HPA) through the IOMMU, thereby realizing pass-through between the physical device and the secure virtual machine.
After step S610, at step S615, the secure virtual machine is started. In the starting process of the secure virtual machine, the BIOS of the secure virtual machine initializes the through device, establishes a memory mapping table of the secure virtual machine and calls a device initialization program.
For example, in some embodiments, at step S620, the secure virtual machine loads a driver of the pass-through device to initialize the pass-through device. In the process of initializing the pass-through device, the secure virtual machine obtains a configuration space of the pass-through device in the secure virtual machine. When the configuration space is the device memory attribute, the secure virtual machine sends device memory information to the secure processor, where the device memory information includes an address range belonging to the device memory. In some embodiments, the secure virtual machine may send the device memory information to the secure processor through a secure call.
For example, in some embodiments, in step S625, the secure virtual machine may send the device memory information to the host through a security call. Subsequently, in step S630, the host may send the received device memory information to the security processor.
In some embodiments, the secure call may be implemented by encrypting the device memory information with a key known to the secure virtual machine and the secure processor but unknown to the host. However, the manner of implementing the secure call is not limited thereto; any encryption scheme that the secure processor can decrypt but the host cannot may be used.
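A secure call of this kind can be pictured as an encrypted envelope that the host forwards without being able to read or alter. The sketch below assumes a shared key and a hypothetical authenticated-encryption helper aead_seal(); the message layout and names are illustrative only and not part of the disclosed interface.

```c
#include <stddef.h>
#include <stdint.h>

struct dev_mem_info {
    uint64_t base;   /* device memory base address */
    uint64_t size;   /* device memory size */
    uint32_t op;     /* e.g. 0 = add on driver init, 1 = remove on driver exit */
};

/* Hypothetical authenticated encryption with a key the host does not know. */
extern int aead_seal(const uint8_t *key, const void *plain, size_t plain_len,
                     uint8_t *cipher, size_t *cipher_len);

/* Build the encrypted secure-call payload that the guest hands to the host for forwarding. */
int build_secure_call(const uint8_t *shared_key, const struct dev_mem_info *info,
                      uint8_t *out, size_t *out_len)
{
    return aead_seal(shared_key, info, sizeof(*info), out, out_len);
}
```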
According to the embodiment of the disclosure, a safe information transmission channel can be established between the safe virtual machine and the safe processor through safe calling, the safe virtual machine can send the device memory information to the safe processor, and the host only forwards the device memory information, so that the leakage and tampering of the device memory information are effectively reduced, and the safety is ensured.
In step S635, the secure processor maintains the device memory linked list; and if the device memory has already been mapped, the secure processor sets the corresponding memory encryption flag bit to unencrypted.
For example, in some embodiments, the security processor parses the security call, fetches the device memory information, and adds to the device memory linked list. In an alternative embodiment, the nested page tables may be traversed simultaneously, and if the mapping relationship already exists in the nested page tables in the device memory, the address in the nested page tables where the mapping relationship exists is set to be read and written in an unencrypted manner, so that the secure virtual machine may directly access the device memory through the nested page tables.
According to the embodiments of the present disclosure, maintaining the device memory linked list enables the secure processor to determine whether a page fault address sent by the host belongs to the device memory, without needing to obtain that judgment from the host or having the host decide whether the page fault address belongs to the device memory, thereby improving security.
After initializing the pass-through device process, in some embodiments, in response to the secure virtual machine reading and writing an address in a nested page table of the secure virtual machine, the secure processor determines whether the address belongs to the device memory according to the device memory linked list.
For example, in step S640, the secure virtual machine may read and write an address in a nested page table of the secure virtual machine.
The address in the nested page table of the secure virtual machine is the physical address of the secure virtual machine, which may belong to either a memory address or a device address. Thus, the secure virtual machine may read from or write to device memory, e.g., the secure virtual machine may access device memory by accessing an address in a nested page table of the secure virtual machine, and possibly generate a nested page fault exception.
As described above, the nested page fault exception is generated, for example, when the page fault address generating the nested page fault exception cannot find the corresponding physical address HPA, or when the access authority is insufficient (such as writing data to a read-only page), or the like, and the nested page fault exception can be recognized by MMU hardware (memory management unit) in the CPU.
Subsequently, in step S645, the host operating system shares the page fault exception information to the secure processor.
The page fault exception information includes a page fault address where a nested page fault exception occurs. The security processor can receive a page fault address sent by the host in response to the nesting page fault exception of the security virtual machine, and after receiving the page fault address, judge whether the page fault address belongs to the device memory according to the device memory linked list. For example, the security processor may determine whether the page missing address belongs to the device memory by determining whether the page missing address is in the device memory linked list, but the example embodiments are not limited thereto, and may also determine whether the page missing address belongs to the device memory by other methods.
Further, in some embodiments, in step S650, if the page fault address belongs to the device memory, the secure processor sets the page fault address in the nested page table to be read and written in an unencrypted manner.
For example, in accordance with the process of translating/mapping virtual machine physical addresses (GPA) to Host Physical Addresses (HPA) of the present disclosure, a secure processor may map the missing page address in a nested page table to map the missing page address to device memory such that device memory has a mapping relationship in the nested page table and the mapped missing page address in the nested page table is set to be read and written in an unencrypted manner. For example, the encryption flag of the missing page address mapped in the nested page table may be cleared, for example, bit47 in fig. 4A to 4D may be cleared, so as to read and write the device memory in an unencrypted manner, thereby implementing reading and writing of the device memory by the secure virtual machine.
In some embodiments, if the missing page address does not belong to the device memory, the secure processor may also map the missing page address in the nested page table, e.g., the missing page address may be mapped to a memory address, and the secure processor may set the encryption flag of the mapped missing page address in the nested page table to be read and written in an encrypted manner, e.g., the secure processor may not modify the encryption flag of the mapped missing page address in the nested page table.
In addition, during the driver exit process of the pass-through device, in response to the exit of the driver of the pass-through device, the secure processor receives the device memory information of the to-be-exited pass-through device sent by the secure virtual machine; the secure processor then removes the device memory information of the to-be-exited pass-through device from the device memory linked list, traverses the nested page table, and clears the mapping relations related to that device memory information.
For example, in some embodiments, in step S655, the secure virtual machine may initiate a pass-through device driver exit.
And then, the safety virtual machine sends the device memory information of the straight-through device to be released to the safety processor. For example, the secure virtual machine may send device memory information to the secure processor via the secure call described above.
For example, in step S660, the secure virtual machine may send the device memory information to the host through a secure call.
Subsequently, in step S665, the host may send the received device memory information to the secure processor.
Further, in step S670, the secure processor maintains the device memory linked list; for example, the secure processor may remove the device memory information of the pass-through device from the device memory linked list, and may at the same time traverse the nested page table to clear the mapping relations of that memory information, so as to free the memory space for use by other processes.
According to the method of the embodiments of the present disclosure, the secure virtual machine can decide the attribute of an address space (for example, whether encryption is needed) according to its own requirements; the whole process involves only the secure virtual machine and the secure processor, while the host operating system only forwards the related information and does not take part in the logical decisions of the process, which minimizes information leakage from the secure virtual machine and ensures its security.
Embodiments of the present disclosure also provide a system for pass-through devices. Fig. 8 illustrates a system 800 for pass-through devices according to an embodiment of the disclosure, the system 800 may be a more detailed architecture of the system architecture 100 of fig. 1 and additional aspects thereof. The same components in fig. 8 as in fig. 1 may perform the same or similar functions and are not described in detail herein. The system 800 shows only the main elements for description, it being understood that the system 800 may include more or less elements.
In an embodiment of the present disclosure, system 800 may include software 830. Software 830 may illustratively include host process 805, virtual machine OS 806, and host OS 804. However, embodiments are not so limited, and software 830 may include more or fewer components.
In an embodiment of the present disclosure, the System 800 may further include a SoC (System on Chip) 840. SoC 840 may illustratively include CPU 803 (e.g., a CPU core), memory controller 807, IOMMU 815, PCI controller 820, and secure processor 801. However, embodiments are not so limited, and SoC 840 may include more or fewer components. For example, secure processor 801 may be provided physically separate from SoC 840. In other embodiments, such as embodiments where the CPU includes other components in addition to the CPU core, the SoC may also be embodied in the CPU, i.e., be the CPU SoC.
In the embodiment of the present disclosure, the system 800 may further include a normal memory 802A, a secure memory 802B, a normal memory 802N, and a pass-through device 825. Embodiments are not so limited, however, and system 800 may include more or fewer components.
Secure memory 802B in Fig. 8 may be the same as secure memory 102B in Fig. 1. In some embodiments, memory controller 807 may be used to run virtual machine OS 806 on secure memory 802B; in other words, the secure virtual machine has its own secure memory 802B. In addition, memory controller 807 may be used to run host process 805 on normal memory 802A. The secure processor 801 in Fig. 8 may be the same as the secure processor 101 in Fig. 1. In some embodiments, the secure processor 801 is physically separate from the CPU 803 and is configured to implement the methods of the embodiments of the present disclosure, which are not described again here.
In some embodiments, pass-through device 825 may be the same as the PCI device of fig. 7, or may include other peripheral devices. The pass-through device 825 may include a DMA (Direct Memory Access) 826 and a configuration space 827. Typically, pass-through device 825 is coupled to the SoC (or CPU SoC) via PCI controller 820, IOMMU 815 is integrated within the SoC, and secure virtual machines can interact directly with pass-through device 825 via IOMMU 815.
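Purely to illustrate the component relationships just described (and not any actual driver or firmware data structure), the composition of Fig. 8 might be modeled along the following lines; all type and field names are illustrative assumptions.

    #include <stdint.h>

    struct cpu;                /* CPU 803 (CPU core)     */
    struct memory_controller;  /* memory controller 807  */
    struct iommu;              /* IOMMU 815              */
    struct pci_controller;     /* PCI controller 820     */
    struct secure_processor;   /* secure processor 801   */

    struct dma_engine   { uint64_t copy_gpa; };   /* DMA 826: copies data to/from a guest physical address */
    struct config_space { uint8_t regs[256]; };   /* configuration space 827 (PCI-style header)            */

    struct pass_through_device {                  /* pass-through device 825, attached via the PCI controller */
        struct dma_engine   dma;
        struct config_space cfg;
    };

    struct soc {                                  /* SoC 840 (or CPU SoC) */
        struct cpu               *cpu_core;       /* runs host processes and the secure VM                   */
        struct memory_controller *mem_ctrl;       /* places the secure VM in secure memory 802B              */
        struct iommu             *iommu;          /* translates device-issued GPAs to HPAs                   */
        struct pci_controller    *pci_ctrl;       /* attaches the pass-through device                        */
        struct secure_processor  *sec_proc;       /* maintains the nested page tables and device memory list */
    };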
In some embodiments, when the page-fault address in the nested page table is set to be read and written in an unencrypted manner, the secure virtual machine may interact with the pass-through device through IOMMU 815. The flow of interaction may include the following steps; a sketch of the secure-processor side of this flow is given after the list.
1. The secure VM configures the device through the device configuration space, for example by setting the memory copy address (a GPA) for the DMA, and declares that GPA to the secure processor as shared memory (i.e., unencrypted normal memory) through a secure call.
2. The secure processor checks the nested page table; if the GPA is mapped to secure memory in the nested page table, the secure memory is released, i.e., the PTE corresponding to the GPA is cleared. If the GPA already maps to normal memory, no processing is needed.
3. If the GPA has no mapping in the nested page table, a nested page fault is triggered and the host requests the secure processor to establish a nested page table entry for the GPA. The secure processor determines that the GPA corresponds to normal memory, the host sends the normal-memory address corresponding to the GPA (the normal-memory address recorded in the IOMMU page table) to the secure processor, and the secure processor establishes the mapping from the GPA to that normal memory.
4. From the device's point of view, the GPA is used directly for the DMA operation; the IOMMU is responsible for translating the GPA into the normal-memory HPA and completing the DMA copy.
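On the secure-processor side, steps 2 and 3 above might look roughly like the following sketch; the entry layout, the assumed encryption-flag position, and the helper names (npt_lookup, is_secure_memory_hpa, release_secure_page) are hypothetical and introduced only for illustration.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define NPT_ENC_BIT   (1ULL << 47)                     /* assumed encryption flag position */
    #define NPT_PRESENT   (1ULL << 0)                      /* assumed "entry valid" bit        */
    #define NPT_ADDR_MASK ((NPT_ENC_BIT - 1) & ~0xFFFULL)  /* assumed HPA field: bits 12..46   */

    typedef uint64_t npt_entry_t;

    extern npt_entry_t *npt_lookup(uint64_t gpa);          /* hypothetical: NPT entry, NULL if unmapped */
    extern bool is_secure_memory_hpa(uint64_t hpa);        /* hypothetical: is this HPA secure memory?  */
    extern void release_secure_page(uint64_t hpa);         /* hypothetical: return page to secure pool  */

    /* Step 2: the VM has declared `gpa` as shared (unencrypted) DMA memory via a secure call. */
    void mark_gpa_shared(uint64_t gpa)
    {
        npt_entry_t *pte = npt_lookup(gpa);
        if (pte != NULL && (*pte & NPT_PRESENT)) {
            uint64_t hpa = *pte & NPT_ADDR_MASK;
            if (is_secure_memory_hpa(hpa)) {
                release_secure_page(hpa);   /* free the secure page ...                        */
                *pte = 0;                   /* ... and clear the PTE so the next access faults */
            }
            /* If the GPA already maps to normal memory, nothing needs to be done. */
        }
    }

    /* Step 3: on the subsequent nested page fault, the host supplies the normal-memory HPA
     * taken from the IOMMU page table and the secure processor installs an unencrypted mapping. */
    void map_gpa_to_shared_memory(uint64_t gpa, uint64_t normal_hpa)
    {
        npt_entry_t *pte = npt_lookup(gpa);
        if (pte != NULL)
            *pte = (normal_hpa & NPT_ADDR_MASK) | NPT_PRESENT;   /* encryption flag left clear */
    }

Step 4 needs no secure-processor code: once the mapping exists, the IOMMU translates the GPA issued by the device into the normal-memory HPA on every DMA access.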
However, the way the secure virtual machine (VM) interacts with the pass-through device is not limited thereto; the secure virtual machine may interact with the pass-through device in other suitable manners.
It is noted that in some embodiments, one or more components of system 800 may be embodied in a host (not shown), or system 800 may include a host. For example, similar to system architecture 100 of FIG. 1, system 800 may include a host, which may include memory (e.g., secure memory 802B) and a Central Processing Unit (CPU). However, in some embodiments, some components in system 800 may not be embodied in a host. For example, pass-through device 825 and/or secure processor 801 may not be embodied in a host. That is, pass-through device 825 and/or secure processor 801 may be provided physically separate from the host. Thus, FIG. 8 is merely one example, but not limiting, of a system for pass-through devices.
According to the system of the embodiments of the present disclosure, device pass-through for the virtual machine can be realized while excessive intervention by the host operating system is effectively avoided, and the security of the virtual machine is improved.
Embodiments of the present disclosure also provide a secure processor for a pass-through device, the secure processor being physically separate from a CPU on a host and configured to perform a method according to embodiments of the present disclosure. The secure processor described here may be the same as the secure processor 101 in the system architecture 100 of Fig. 1 and/or the secure processor 801 in Fig. 8 and, for brevity, is not described again.
Embodiments of the present disclosure also provide a computer-readable non-transitory storage medium having stored thereon instructions executable by a hardware processor to perform a method according to embodiments of the present disclosure.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other.
In several embodiments provided herein, it will be understood that each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block/step may occur out of the order noted in the figures. For example, two blocks/steps in succession may, in fact, be executed substantially concurrently, or the blocks/steps may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block/step of the block diagrams and/or flowchart illustration, and combinations of blocks/steps in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present disclosure and is not intended to limit the present disclosure, and various modifications and changes may be made to the present disclosure by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The above description is only for the specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present disclosure, and all the changes or substitutions should be covered within the scope of the present disclosure. Therefore, the protection scope of the present disclosure should be subject to the protection scope of the appended claims and their equivalents.

Claims (18)

1. A method for pass-through devices, the method comprising:
receiving, by a secure processor, a page-fault address sent by a host in response to the occurrence of a nested page fault of a secure virtual machine, and determining, by the secure processor, whether the page-fault address belongs to a device memory of a pass-through device; and
if the page-fault address belongs to the device memory, setting, by the secure processor, the page-fault address in a nested page table to be read and written in an unencrypted manner.
2. The method of claim 1, wherein determining, by the secure processor, whether the page-fault address belongs to a device memory of a pass-through device comprises:
determining, by the secure processor, whether the page-fault address is in a device memory linked list, wherein the device memory linked list is maintained by the secure processor.
3. The method of claim 1, wherein setting, by the secure processor, the page-fault address in the nested page table to be read and written in an unencrypted manner comprises:
mapping, by the secure processor, the page-fault address in the nested page table, and clearing an encryption flag bit of the mapped page-fault address in the nested page table so that it is not encrypted.
4. The method of claim 2, wherein the device memory linked list is maintained by the secure processor by steps comprising:
in response to initialization of the pass-through device by the secure virtual machine, receiving, by the secure processor, device memory information of the pass-through device sent by the secure virtual machine, and adding the device memory information of the pass-through device to the device memory linked list; and
traversing, by the secure processor, the nested page table, and if the device memory information has a mapping relationship in the nested page table, setting the mapped address in the nested page table to be read and written in an unencrypted manner.
5. The method of claim 2, wherein the device memory linked list is further maintained by the secure processor by steps comprising:
in response to exit of a driver of the pass-through device, receiving, by the secure processor, device memory information of the pass-through device to be exited, sent by the secure virtual machine; and
removing, by the secure processor, the device memory information of the to-be-exited pass-through device from the device memory linked list, traversing the nested page table, and clearing the mapping relationships related to the device memory information of the to-be-exited pass-through device.
6. The method of claim 4 or 5, wherein the secure processor is physically separate from a CPU included in the host, the host comprises secure memory, and the secure virtual machine runs on the secure memory, wherein the secure virtual machine sends the device memory information through a secure call.
7. The method of claim 1, further comprising:
establishing, by the secure processor, the nested page table for the secure virtual machine during a boot process of the host, such that a device configuration space of the secure virtual machine maps to a host system address space.
8. The method of claim 1, wherein the host only forwards messages between the secure virtual machine and the secure processor and does not participate in the secure processor's determination of whether the page-fault address belongs to the device memory of the pass-through device.
9. The method of claim 1, further comprising:
if the page-fault address does not belong to the device memory, setting, by the secure processor, the page-fault address in the nested page table to be read and written in an encrypted manner.
10. A system for pass-through devices, comprising:
a host comprising a CPU and a secure memory, wherein a secure virtual machine runs on the secure memory;
a pass-through device;
a secure processor physically separate from the CPU and configured to:
receive a page-fault address sent by the host in response to the occurrence of a nested page fault of the secure virtual machine, and determine whether the page-fault address belongs to a device memory of the pass-through device; and
if the page-fault address belongs to the device memory, set the page-fault address in a nested page table to be read and written in an unencrypted manner.
11. The system of claim 10, wherein determining, by the secure processor, whether the page-fault address belongs to a device memory of a pass-through device comprises:
determining, by the secure processor, whether the page-fault address is in a device memory linked list, wherein the device memory linked list is maintained by the secure processor.
12. The system of claim 10, wherein setting, by the secure processor, the page-fault address in the nested page table to be read and written in an unencrypted manner comprises:
mapping, by the secure processor, the page-fault address in the nested page table, and clearing an encryption flag bit of the mapped page-fault address in the nested page table so that it is not encrypted.
13. The system of claim 11, wherein the device memory linked list is maintained by the secure processor by steps comprising:
in response to initialization of the pass-through device by the secure virtual machine, receiving, by the secure processor, device memory information of the pass-through device sent by the secure virtual machine, and adding the device memory information of the pass-through device to the device memory linked list; and
traversing, by the secure processor, the nested page table, and if the device memory information has a mapping relationship in the nested page table, setting the mapped address in the nested page table to be read and written in an unencrypted manner.
14. The system of claim 11, wherein the device memory linked list is further maintained by the secure processor by steps comprising:
in response to exit of a driver of the pass-through device, receiving, by the secure processor, device memory information of the pass-through device to be exited, sent by the secure virtual machine; and
removing, by the secure processor, the device memory information of the to-be-exited pass-through device from the device memory linked list, traversing the nested page table, and clearing the mapping relationships related to the device memory information of the to-be-exited pass-through device.
15. The system of claim 13 or 14, wherein the secure virtual machine transmits the device memory information through a secure call.
16. The system of claim 10, wherein the secure processor is further configured to:
if the page-fault address does not belong to the device memory, set the page-fault address in the nested page table to be read and written in an encrypted manner.
17. A secure processor for a pass-through device, the secure processor being physically separate from a CPU on a host and configured to implement the method of any of claims 1-9.
18. A computer-readable non-transitory storage medium having instructions stored thereon, the instructions being executable by a processor to perform the method of any one of claims 1-9.