EP3436938A1 - High density virtual machine container with copy-on-DMA write - Google Patents

High density virtual machine container with copy-on-DMA write

Info

Publication number
EP3436938A1
Authority
EP
European Patent Office
Prior art keywords
dma
memory page
control signal
memory
map control
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP16895985.6A
Other languages
German (de)
English (en)
Inventor
Kun TIAN
Yao Zu Dong
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Intel Corp
Original Assignee
Intel Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intel Corp filed Critical Intel Corp
Publication of EP3436938A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45583Memory management, e.g. access or allocation

Definitions

  • Embodiments described herein generally relate to virtual machines and virtual machines running one or more containers.
  • System virtualization for a data center may include nodes or servers of the data center being configured to host virtual machines (VMs) .
  • VMs in relation to each other, may provide strong, isolated execution environments for executing applications associated with providing network services.
  • Each VM may run an operating system (OS) for different clients that may be securely isolated from other VMs.
  • each VM may have its own OS kernel in addition to an application execution environment.
  • Containers can provide multiple execution environments for applications with a somewhat lessened isolation as compared to VM execution environments.
  • a container may maintain some isolation via separate namespace for process identifiers (PIDs) , interprocess communication (IPC) , storage, etc.
  • the app spaces of each container can be isolated from each other.
  • the operating system is shared among the containers.
  • FIG. 1 illustrates an example first system.
  • FIG. 2 illustrates an example copy-on-direct memory access-map scheme.
  • FIG. 3 illustrates a first example scheme to send a DMA-map control signal.
  • FIG. 4 illustrates a second example scheme to send a DMA-map control signal.
  • FIG. 5 illustrates a third example scheme to send a DMA-map control signal.
  • FIG. 6 illustrates a first example technique
  • FIG. 7 illustrates a second example technique.
  • FIG. 8 illustrates an example block diagram for an apparatus.
  • FIG. 9 illustrates an example storage medium according to an embodiment.
  • FIG. 10 illustrates a device according to an embodiment.
  • the present disclosure provides for concurrently operating multiple container instances on a host. Each container instance can be operated within an individual virtual machine. Accordingly, the present disclosure provides isolation of the operating system between containers in addition to isolation between the app spaces.
  • a virtual machine (VM) or multiple VMs can be cloned or copied from an original VM to provide further isolation between containers.
  • the newly created VMs are created with memory allocations (e.g., extended page tables (EPT), or the like) identical to those of the original VM.
  • the EPT tables of each VM can point to the same set of memory pages.
  • the EPT table entries for the newly created VMs are marked as read only.
  • the host system can allocate new memory pages and update the EPT tables as needed to preserve system resources.
  • the present disclosure provides an overall system resource footprint consistent with the requirements from active containers within each VM.
  • containers can be implemented natively or within a VM.
  • VM containers provide better isolation as compared to native containers.
  • native containers can typically be provisioned in a higher density and with greater performance than VM containers.
  • memory copy-on-write can be implemented to increase VM container density and reduce provisioning time, while I/O passthrough can improve overall VM container performance.
  • conventionally, however, memory copy-on-write and I/O passthrough techniques cannot be combined.
  • One method for allocating new memory pages and updating the EPT tables is referred to as memory copy-on-write.
  • the host allocates a new memory page and updates an EPT table entry when a VM attempts to write to a memory page marked as read-only.
  • a CPU write page fault will be triggered due to the read only entry in the EPT table and the host will be prompted to allocate a new memory page and update the EPT table for the new VM.
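  • For illustration only, the CPU-write-fault-driven copy-on-write flow can be sketched in C as below. The structures and names (struct ept_entry, handle_cpu_write_fault, the refcount field) are simplified assumptions made for the example and are not structures described above: when a VM writes to a shared, read-only page, a private copy is allocated and the entry is repointed and made writable.

```c
/*
 * Minimal copy-on-write sketch; NOT the implementation described above.
 * Pages, EPT entries and the fault handler are simplified stand-ins.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096

struct page {
    unsigned char data[PAGE_SIZE];
    int refcount;               /* how many EPT entries share this page */
};

struct ept_entry {
    struct page *page;          /* host page backing one guest page */
    int writable;               /* read-only while shared */
};

/* Called when a guest CPU write hits a read-only EPT entry. */
static void handle_cpu_write_fault(struct ept_entry *e)
{
    if (e->writable)
        return;                         /* spurious fault */
    if (e->page->refcount == 1) {
        e->writable = 1;                /* sole owner: just drop protection */
        return;
    }
    /* Shared page: allocate a private copy for the faulting VM. */
    struct page *copy = calloc(1, sizeof(*copy));
    memcpy(copy->data, e->page->data, PAGE_SIZE);
    copy->refcount = 1;
    e->page->refcount--;                /* original VM keeps the old page */
    e->page = copy;
    e->writable = 1;
}

int main(void)
{
    struct page *shared = calloc(1, sizeof(*shared));
    shared->refcount = 2;
    struct ept_entry original = { shared, 0 };  /* entry of VM 120   */
    struct ept_entry clone    = { shared, 0 };  /* entry of VM 120-1 */

    handle_cpu_write_fault(&clone);             /* clone writes -> copy */
    printf("clone got private page: %s\n",
           clone.page != original.page ? "yes" : "no");
    free(clone.page);
    free(shared);
    return 0;
}
```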
  • Another technique is I/O passthrough, which allows a VM to directly operate an assigned device through an input-output memory management unit (IOMMU).
  • When I/O passthrough is enabled, write operations from the assigned device also need to be captured; otherwise the whole copy-on-write system is broken. Accordingly, either I/O passthrough cannot be used (resulting in a decrease in performance) when using a copy-on-write system, or copy-on-write must be disabled (resulting in a decrease in density and an increase in provisioning time) when using I/O passthrough.
  • the present disclosure provides for allocating new memory pages and updating EPT/IOMMU tables opportunistically, based on a predicted DMA map request, before direct memory access (DMA) buffers are used by assigned I/O devices.
  • the host system can apply various heuristics to DMA map requests from the various VMs operating on the host.
  • the present disclosure provides for opportunistically allocating memory pages and updating EPT/IOMMU tables to provide increased I/O performance versus a pure copy-on-write system.
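  • As a minimal sketch of that idea, assuming per-guest-frame arrays standing in for the EPT and IOMMU tables and made-up names (gfn, break_sharing, on_dma_map_request), the C example below unshares every page of a requested DMA range and updates both the CPU-side and device-side mappings before any device write occurs. It is illustrative only, not code from the disclosure.

```c
/*
 * Sketch of opportunistic "copy-on-DMA-map": when a DMA map request is
 * seen (or predicted), private pages are installed in both the EPT and
 * the IOMMU table *before* the assigned device issues DMA, so no device
 * write ever lands on a shared read-only page.  Simplified model.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096
#define GUEST_PAGES 8

struct page { unsigned char data[PAGE_SIZE]; int refcount; };

/* One backing page per guest frame number, seen both by the CPU (EPT)
 * and by the assigned device (IOMMU) for the cloned VM. */
static struct page *ept[GUEST_PAGES];    /* CPU view of the clone    */
static struct page *iommu[GUEST_PAGES];  /* device view of the clone */
static int writable[GUEST_PAGES];

static void break_sharing(unsigned gfn)
{
    struct page *old = ept[gfn];
    if (writable[gfn] || old->refcount == 1) {
        writable[gfn] = 1;
        return;
    }
    struct page *copy = calloc(1, sizeof(*copy));
    memcpy(copy->data, old->data, PAGE_SIZE);
    copy->refcount = 1;
    old->refcount--;
    ept[gfn] = copy;      /* CPU mapping updated ...             */
    iommu[gfn] = copy;    /* ... and device mapping kept in sync */
    writable[gfn] = 1;
}

/* DMA-map control: the guest is about to use [gfn, gfn+count) as a
 * DMA buffer, so unshare those pages now, off the DMA fast path. */
static void on_dma_map_request(unsigned gfn, unsigned count)
{
    for (unsigned i = 0; i < count && gfn + i < GUEST_PAGES; i++)
        break_sharing(gfn + i);
}

int main(void)
{
    for (unsigned i = 0; i < GUEST_PAGES; i++) {
        ept[i] = iommu[i] = calloc(1, sizeof(struct page));
        ept[i]->refcount = 2;            /* shared with the original VM */
    }
    on_dma_map_request(2, 3);            /* guest maps pages 2..4 for DMA */
    printf("page 3 unshared: %s\n", ept[3]->refcount == 1 ? "yes" : "no");
    return 0;
}
```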
  • FIG. 1 illustrates an example system 100.
  • system 100 includes a host node 101.
  • Host node 101 may be a node or server capable of hosting at least one virtual machine (VM), such as VM 120.
  • the system 100 can be split into a physical layer 102 and a virtual layer 103; where the physical layer includes the host node 101 and the virtual layer includes the hosted VMs.
  • Hosting may include providing composed physical resources (not shown) such as processors, memory, storage or network resources maintained at or accessible to host node 101.
  • host node 101 may be a node/server in a data center having a plurality of interconnected nodes/servers that may be arranged to provide Infrastructure as a Service (IaaS) , Platform as a Service (PaaS) or Software as a Service (SaaS) services for one or more clients or consumers of these types of cloud-based services.
  • the host node 101 may have a host operating system (OS) kernel 110.
  • Host OS kernel 110 may be arranged to implement a virtual machine manager (VMM 112) .
  • VMM 112 may be configured to operate as a KVM or hypervisor (so-called type-2 model) to manage various operations and/or configurations for VMs hosted by host node 101.
  • the VMM 112 may be implemented below host OS kernel 110 (so-called type-1 model), which is not shown in this figure but to which this disclosure is equally applicable.
  • Guest OS kernel 121 may support an executing environment for a single VM 120.
  • VM 120 may be arranged to run at least one set of containers that includes container 122 and container 124.
  • Container 122 may be arranged to execute one or more applications (App (s) ) 123 and container 124 may be arranged to execute one or more App (s) 125.
  • containers 122 and 124 are depicted running Apps 123 and 123-1 (partially obscured by App 123) and Apps 125 and 125-1 (partially obscured by App 125), respectively.
  • the host node 101 may also provision resources (e.g., network resources such as network input/output devices, memory, network ports, etc. ) to support a virtual switch 150 capable of routing input/output packets to individual VMs and/or containers.
  • virtual switch 150 may route network connections through virtual switch 126 at VM 120 to enable containers 122 and 124 to receive or transmit packets associated with executing respective App (s) 123 and 125.
  • VM 120 may also include a container manager 128 to facilitate management or control of containers 122 or 124.
  • the host node 101 may include logic and/or features (e.g., VMM 112) to receive a request to change an operating characteristic for at least one of containers 122 or 124 and to cause these containers to become isolated from each other (e.g., for increased security) .
  • Host node 101 (and particularly VMM 112) can clone VM 120 to result in multiple VMs hosted on the host node 101.
  • VM 120 can be cloned to form VM 120-1, VM 120-2, and VM 120-3.
  • VMs 120-1, 120-2, and 120-3 may be separate instances of VM 120.
  • VMs 120, 120-1, 120-2, and 120-3 may be arranged to run ones of the containers previously operated on VM 120.
  • VM 120 is arranged to run container 122 and App 123
  • VM 120-1 is arranged to run container 122-1 and App 123-1
  • VM 120-2 is arranged to run container 124 and App 125
  • VM 120-3 is arranged to run container 124-1 and App 125-1.
  • the number of containers and Apps is depicted in a quantity chosen to facilitate understanding and is not limiting.
  • the host node 101 can host and clone any number of VMs while the VMs can run any number and combination of containers and Apps.
  • each of the VMs (e.g., VM 120, VM 120-1, VM 120-2, VM 120-3, or the like) can include a number of assigned device (s) (e.g., refer to FIGS. 3-5).
  • logic and/or features of host node 101 may implement a copy-on-DMA-map (CDMAM) technique, mechanism, and/or operation to cause VMs 120-1, 120-2, and 120-3 to initially share the same memory pages as VM 120 to reduce provisioning time and to reduce resource usage to that needed by actively running containers.
  • the CDMAM mechanism may be implemented responsive to execution of App 123-1 by container 122-1 at VM 120-1 leading to a modification of or an attempt to write to cloned memory pages, either from CPU or from assigned device. Additionally, the CDMAM mechanism may be implemented responsive to execution of App 125 by container 124 at VM 120-2 leading to a modification of or an attempt to write to cloned memory pages. Additionally, the CDMAM mechanism may be implemented responsive to execution of App 125-1 by container 124-1 at VM 120-3 leading to a modification of or an attempt to write to cloned memory pages.
  • the host node can allocate and provision new memory pages as needed during operation.
  • the VMM 112 can opportunistically allocate and provision new memory pages based on DMA map requests occurring within each cloned VM.
  • the VMM 112 can opportunistically allocate and provision new memory pages based on CPU write page faults. Accordingly, the VMM 112 can opportunistically allocate and provision new memory pages prior to DMA memory access triggered by the cloned VMs.
  • container density can be transitioned from a coarse-grained density to a fine-grained density while providing passthrough memory access.
  • some examples can provide for an increase in I/O performance by moving some memory overhead to non-performance-critical paths.
  • FIG. 2 illustrates an example copy-on-DMA-map (CDMAM) scheme 200.
  • CDMAM scheme 200 may be executed by logic and/or features of host node 101, such as, for example, VMM 112 to cause the cloned VMs to use different allocated memory for running their respective containers and Apps.
  • the CDMAM scheme 200 may include largely preserving the memory footprint, or memory allocated, at the initial time that cloned VMs 120-1, 120-2, and 120-3 are run.
  • the memory footprint or memory allocations are cloned for memory (e.g., memory pages) originally allocated or provisioned to VM 120.
  • VMM 112 may cause these containers to use different memory than the originally allocated memory to VM 120.
  • the CDMAM scheme 200 can be implemented to allocate and provision new memory pages for any of the cloned VMs. However, for purposes of clarity, the CDMAM scheme 200 is depicted allocating and provisioning a new memory page for the VM 120-1. Examples, however, are not limited in this respect.
  • the VMM 112 can include a copy-on-DMA-map-manager (cDMAmm) 130.
  • the cDMAmm 130 allocates and provisions new memory pages to be used by cloned VM 120-1. Said differently, the cDMAmm 130 generates updated memory allocation 203, which includes original memory pages 205 and newly allocated memory page (s) 215.
  • the cDMAmm 130 can include an allocation agent 132 and optionally a DMA buffer pool 134.
  • the allocation agent 132 can allocate and provision new memory pages as described herein.
  • the allocation agent 132 can implement the CDMAM scheme 200 to generate updated memory allocation 203.
  • the CDMAM scheme 200 may include use of a multi-level page table, such as, an extended page table (PT) , or an IOMMU PT.
  • a two-level page table for PT 1 may include page directory entry (PDE) table 210 and page table entry (PTE) tables 212 and 214.
  • the VMM 112 can copy the PT from VM 120 to generate PT 1 for use by VM 120-1.
  • VMs 120 and 120-1 may initially share memory page (s) 205 including P1, P2 and P3. These memory pages may make use of PT 1 and PT 2 for memory addressing. For example, the VM 120 can use PT 1 for memory addressing while VM 120-1 can use PT 2 for memory addressing. These shared memory pages 205 however, are marked as read only in both PT 1 and PT 2.
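  • The cloning step can be pictured with the short C sketch below, which deep-copies a hypothetical two-level PDE/PTE structure while leaving the data pages shared and marking every cloned leaf entry read-only. The types and field names are illustrative assumptions, not the actual layout of PT 1 and PT 2.

```c
/*
 * Sketch of cloning a two-level page table (PDE -> PTE -> page) so that
 * the cloned VM initially points at the SAME pages as the original VM,
 * with every leaf entry marked read-only.  Illustrative only.
 */
#include <stdio.h>
#include <stdlib.h>

#define ENTRIES 4

struct pte       { void *page; int writable; };
struct pte_table { struct pte e[ENTRIES]; };
struct pde_table { struct pte_table *pt[ENTRIES]; };

/* Deep-copy the directory and leaf tables, but NOT the data pages:
 * both VMs keep referencing the same physical pages, read-only. */
static struct pde_table *clone_readonly(const struct pde_table *src)
{
    struct pde_table *dst = calloc(1, sizeof(*dst));
    for (int d = 0; d < ENTRIES; d++) {
        if (!src->pt[d])
            continue;
        dst->pt[d] = calloc(1, sizeof(*dst->pt[d]));
        for (int t = 0; t < ENTRIES; t++) {
            dst->pt[d]->e[t].page     = src->pt[d]->e[t].page;
            dst->pt[d]->e[t].writable = 0;   /* shared => read-only */
        }
    }
    return dst;
}

int main(void)
{
    static char p1[4096], p2[4096];               /* data pages        */
    struct pte_table leaf = { { { p1, 1 }, { p2, 1 } } };
    struct pde_table pt1  = { { &leaf } };        /* PT of VM 120      */
    struct pde_table *pt2 = clone_readonly(&pt1); /* PT of VM 120-1    */

    printf("same page shared: %s, clone writable: %d\n",
           pt2->pt[0]->e[1].page == (void *)p2 ? "yes" : "no",
           pt2->pt[0]->e[1].writable);
    return 0;
}
```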
  • allocation agent 132 may duplicate data contents of memory page P2 to memory page P2’ and update the corresponding EPT/IOMMU entries (e.g., PT 1, PT 2, or the like) to reference memory page P2’ .
  • P2’ may be part of one or more different allocated memory page (s) 215 for use by VM 120-1 to run container 122-1.
  • allocation agent 132 may create PT 1’ that includes PDE table 220 and PTE table 222 for memory address mapping to memory page (s) 215.
  • the allocation agent can allocate new memory pages (e.g., memory page (s) 215, or the like) and update the PT (e.g., PT 1, PT 2, or the like) based on receiving a DMA-map control signal 201.
  • the DMA-map control signal 201 can include an indication of the memory address for the DMA map request.
  • the DMA-map control signal 201 can be received from the VM 120-1 responsive to a DMA map request.
  • the VM 120-1 may access memory (e.g., on host node 101, or the like) using DMA and/or CPU write.
  • the VM 120-1 can access memory addressed by PT 1 through DMA (e.g., via an IOMMU PT, or the like) .
  • VM 120-1 can access memory addressed by PT 1 through a CPU write (e.g., via an extended PT, or the like).
  • the DMA needs to be mapped within the VM 120-1 and/or the CPU write instruction needs to be issued.
  • a device (refer to FIGS. 3-5) of the VM 120-1 can have memory mapped via a DMA buffer to access the memory.
  • the present disclosure provides that the VMM 112 receives DMA-map control signal 201 while VM 120-1 maps and sets the DMA buffer. This is described in greater detail below, particularly, with respect to FIGS. 3-5.
  • the allocation agent 132 can allocate and provision new memory pages each time a DMA-map control signal 201 is received. In some examples, multiple DMA-map control signals can be pooled in DMA buffer pool 134. Subsequently, allocation agent 132 can allocate and provision memory pages speculatively and/or opportunistically as described herein.
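  • A possible shape for that choice between per-signal handling and pooling is sketched below in C. The cdmamm structure, dma_map_signal type and process_pool routine are invented for the example, and allocate_and_provision is only a stand-in for the page copy and table update described above.

```c
/*
 * Sketch of a copy-on-DMA-map manager that either acts on each DMA-map
 * control signal immediately or pools signals and processes them in
 * batches.  Names are made up for illustration.
 */
#include <stdio.h>

#define POOL_SIZE 16

struct dma_map_signal { unsigned gfn; unsigned count; };

struct cdmamm {
    int pooled;                                 /* 0: allocate on arrival */
    struct dma_map_signal pool[POOL_SIZE];
    int npool;
};

static void allocate_and_provision(unsigned gfn, unsigned count)
{
    /* Stand-in for the real work: unshare pages, update EPT/IOMMU PT. */
    printf("unsharing gfn %u..%u\n", gfn, gfn + count - 1);
}

static void receive_signal(struct cdmamm *m, struct dma_map_signal s)
{
    if (!m->pooled) {                    /* per-signal allocation       */
        allocate_and_provision(s.gfn, s.count);
    } else if (m->npool < POOL_SIZE) {   /* defer and batch later       */
        m->pool[m->npool++] = s;
    }
}

static void process_pool(struct cdmamm *m)      /* e.g. on a timer tick */
{
    for (int i = 0; i < m->npool; i++)
        allocate_and_provision(m->pool[i].gfn, m->pool[i].count);
    m->npool = 0;
}

int main(void)
{
    struct cdmamm immediate = { .pooled = 0 };
    struct cdmamm batched   = { .pooled = 1 };

    receive_signal(&immediate, (struct dma_map_signal){ 2, 3 });
    receive_signal(&batched,   (struct dma_map_signal){ 8, 1 });
    receive_signal(&batched,   (struct dma_map_signal){ 9, 2 });
    process_pool(&batched);              /* deferred, opportunistic pass */
    return 0;
}
```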
  • FIGS. 3-5 illustrate example DMA-map control signal schemes 300, 400, and 500, respectively.
  • the DMA-map control signal schemes can be implemented by any of the VMs (e.g., VMs 120, 120-1, 120-2, and 120-3), and particularly the cloned VMs (VMs 120-1, 120-2, and 120-3), to generate DMA-map control signal 201 and send DMA-map control signal 201 to the VMM 112.
  • Schemes 300, 400, and 500 are depicted and described with respect to cloned VM 120-1 for purposes of clarity only and not to be limiting.
  • the schemes can be implemented by any logic and/or features of the system 100 to provide a DMA-map control signal to opportunistically allocate and provision new memory pages as described herein.
  • the scheme 300 can be implemented by VM 120-1 and the VMM 112 to receive the DMA-map control signal 201 in a guest agnostic manner, that is, automatically without intervention by the VM 120-1.
  • the system 100 can include an input-output memory management unit (IOMMU) to provide DMA mapping and routing features for the VMs (e.g., VM 120-1).
  • the guest OS kernel 121-1 can include an IOMMU driver 330 as depicted in this figure.
  • the container VM 120-1 can be configured to access assigned device 310 with a corresponding device driver 311.
  • the VMM 112 can include cDMAmm 130 as well as a vIOMMU 320.
  • the vIOMMU 320 can be an emulated version of the IOMMU driver 330 implemented in the guest OS kernel 121-1.
  • the scheme 300 can include process blocks 3.1 to 3.6.
  • the device driver 311 can send a DMA buffer mapping signal to the IOMMU driver 330, the DMA buffer mapping signal to include an indication to map and/or allocate a DMA buffer for the assigned device 310.
  • the IOMMU driver 330 configures IOMMU hardware (e.g., DMA logic and/or features) to allow direct access to the memory (e.g., memory pages 215, or the like) from the assigned device 310. Additionally, as the VMM 112 emulates the IOMMU driver 330 as vIOMMU driver 320, vIOMMU driver 320 can receive an indication of the IOMMU entries and/or the DMA buffer mapping allowed by the IOMMU.
  • the vIOMMU 320 can send an indication of the DMA buffer address to the cDMAmm 130.
  • the cDMAmm 130 can allocate and provision new memory pages as needed based on the DMA buffer address.
  • the cDMAmm 130 can implement the copy-on-DMA-write scheme 200 depicted in FIG. 2 to generate the updated memory allocation 203.
  • the device driver 311 can set the DMA buffer to provide DMA features for the assigned device 310.
  • the assigned device 310 can implement a DMA process to access the memory page without fault.
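  • The guest agnostic call chain of blocks 3.1 to 3.6 can be approximated by the toy C program below, in which the emulated vIOMMU observes the guest's IOMMU programming and forwards the buffer address to the cDMAmm before the device is allowed to DMA. All function names are hypothetical and the chain is collapsed into plain function calls.

```c
/*
 * Toy model of the guest-agnostic notification path (scheme 300):
 * guest device driver -> guest IOMMU driver -> emulated vIOMMU in the
 * VMM -> copy-on-DMA-map manager.  Illustrative only.
 */
#include <stdio.h>

/* 3.3/3.4: cDMAmm allocates/provisions pages for the buffer range. */
static void cdmamm_on_dma_map(unsigned long gpa, unsigned long len)
{
    printf("cDMAmm: unshare pages backing gpa 0x%lx (+%lu bytes)\n", gpa, len);
}

/* 3.2: the vIOMMU emulation observes the guest's IOMMU entries. */
static void viommu_map(unsigned long gpa, unsigned long len)
{
    printf("vIOMMU: guest installed DMA mapping at 0x%lx\n", gpa);
    cdmamm_on_dma_map(gpa, len);        /* DMA-map control signal 201 */
}

/* 3.1: the guest IOMMU driver, as seen through emulation. */
static void guest_iommu_driver_map(unsigned long gpa, unsigned long len)
{
    viommu_map(gpa, len);
}

/* 3.5/3.6: the device driver sets the buffer and the device DMAs. */
int main(void)
{
    unsigned long dma_buf_gpa = 0x40000000UL;
    guest_iommu_driver_map(dma_buf_gpa, 4096);   /* map DMA buffer */
    printf("device: DMA to 0x%lx proceeds without fault\n", dma_buf_gpa);
    return 0;
}
```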
  • the scheme 400 can be implemented by VM 120-1 and the VMM 112 to receive the DMA-map control signal 201 from the guest OS using a partially virtualized DMA driver.
  • the system 100 can include a front-end DMA driver 430 to provide DMA mapping and routing features for the VMs (e.g., VM 120-1) .
  • the guest OS kernel 121-1 can include front-end DMA driver 430 as depicted in this figure.
  • the VM 120-1 can be configured to access assigned device 310 with a corresponding device driver 311.
  • the VMM 112 can include cDMAmm 130 as well as a back-end DMA driver 440.
  • the back-end DMA driver 440 can be operably coupled to the front-end DMA driver 430 to receive indication of DMA buffer mappings.
  • the scheme 400 can include process blocks 4.1 to 4.6.
  • the device driver 311 can send a DMA buffer mapping signal to the front-end DMA driver 430, the DMA buffer mapping signal to include an indication to map and/or allocate a DMA buffer for the assigned device 310.
  • the front-end DMA driver 430 can send an indication of the DMA buffer address to the back-end DMA driver 440.
  • the back-end DMA driver 440 can send an indication of the DMA buffer address to the cDMAmm 130.
  • the cDMAmm 130 can allocate and provision new memory pages as needed based on the DMA buffer address.
  • the cDMAmm 130 can implement the copy-on-DMA-write scheme 200 depicted in FIG. 2 to generate the updated memory allocation 203.
  • the device driver 311 can set the DMA buffer to provide DMA features for the assigned device 310.
  • the assigned device 310 can implement a DMA process to access the memory page without fault.
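  • A minimal sketch of this para-virtualized path, with the front-end/back-end channel reduced to a plain function pointer and all names invented for the example, might look as follows in C.

```c
/*
 * Toy model of the para-virtualized notification path (scheme 400):
 * a front-end driver in the guest reaches a back-end driver in the VMM
 * through an explicit channel, here modelled as a function pointer.
 */
#include <stdio.h>

struct dma_map_msg { unsigned long gpa; unsigned long len; };

/* Back end (in the VMM): forwards the buffer address to the cDMAmm. */
static void backend_dma_map(struct dma_map_msg m)
{
    printf("back-end: forwarding 0x%lx/%lu to cDMAmm\n", m.gpa, m.len);
}

/* The channel the front end uses to reach the back end. */
static void (*notify_host)(struct dma_map_msg) = backend_dma_map;

/* Front end (in the guest kernel): called by the device driver. */
static void frontend_dma_map(unsigned long gpa, unsigned long len)
{
    notify_host((struct dma_map_msg){ gpa, len });  /* blocks 4.2/4.3 */
}

int main(void)
{
    frontend_dma_map(0x40001000UL, 8192);  /* 4.1: driver maps a DMA buffer */
    printf("device driver: DMA buffer set, DMA proceeds without fault\n");
    return 0;
}
```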
  • the scheme 500 can be implemented by VM 120-1 and the VMM 112 to receive the DMA-map control signal 201 directly from the device driver 511.
  • the system 100 can include a DMA driver 530 to provide DMA mapping and routing features for the VMs (e.g., VM 120-1).
  • the guest OS kernel 121-1 can include DMA driver 530 as depicted in this figure.
  • the VM 120-1 can be configured to access assigned device 310 with a corresponding device driver 511.
  • the VMM 112 can include cDMAmm 130.
  • the device driver 511 can include logic and/or features to send DMA-map control signal 201 directly to the cDMAmm 130.
  • the device driver 511 may be configured to send DMA-map control signal 201 directly to the cDMAmm 130 using VMM 112 specific features, or the like. In some examples, the device driver 511 can be configured to send DMA-map control signal 201 to the cDMAmm 130 in a VMM agnostic manner. In particular, the device driver 511 can be configured to trigger a CPU write to the DMA buffer before configuring the DMA buffer, to utilize conventional copy-on-write communication paths for conveying the DMA-map control signal 201.
  • the scheme 500 can include process blocks 5.1 to 5.5.
  • the device driver 511 can send a DMA buffer mapping signal to the DMA driver 530, the DMA buffer mapping signal to include an indication to map and/or allocate a DMA buffer for the assigned device 310.
  • the device driver 511 can send the DMA-map control signal 201 to the cDMAmm 130.
  • the device driver 511 can send an indication of the DMA buffer address to the cDMAmm 130.
  • the cDMAmm 130 can allocate and provision new memory pages as needed based on the DMA buffer address.
  • the cDMAmm 130 can implement the copy-on-DMA-write scheme 200 depicted in FIG. 2 to generate the updated memory allocation 203.
  • the device driver 511 can set the DMA buffer to provide DMA features for the assigned device 310.
  • the assigned device 310 can implement a DMA process to access the memory page without fault.
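  • The VMM agnostic variant mentioned above can be illustrated with the small C sketch below, in which the guest device driver touches one byte per page of the DMA buffer with an ordinary CPU write before programming the device, so the conventional copy-on-write fault path has already unshared the pages by the time DMA starts. The touch pattern and names are assumptions made for the example.

```c
/*
 * Sketch of the driver-side "touch before DMA" trick: a plain CPU write
 * to each page of the buffer takes the normal copy-on-write fault path,
 * so the host unshares the pages before the device is programmed.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Touch one byte per page so each page takes a write fault (if any). */
static void touch_dma_buffer(volatile unsigned char *buf, size_t len)
{
    for (size_t off = 0; off < len; off += PAGE_SIZE)
        buf[off] = buf[off];            /* read-modify-write, data unchanged */
}

int main(void)
{
    size_t len = 4 * PAGE_SIZE;
    unsigned char *dma_buf = malloc(len);
    memset(dma_buf, 0, len);

    touch_dma_buffer(dma_buf, len);     /* trigger CoW before mapping */
    printf("buffer touched; device can now be programmed for DMA\n");
    free(dma_buf);
    return 0;
}
```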
  • FIGS. 6-7 illustrate techniques 600 and 700, respectively.
  • the techniques 600 and 700 can be implemented by the system 100 of FIG. 1.
  • the techniques 600 and 700 can be implemented by the VMM 112 and the cloned VMs (e.g., VMs 120-1, 120-2, 120-3, or the like) .
  • the system 100 can implement technique 600 to allocate and provision new memory pages upon receipt of a DMA-map control signal; or the system 100 can implement technique 700 to cache DMA-map control signals in a pool and allocate and provision new memory pages periodically.
  • the techniques 600 and 700 are described with respect to the system 100 of FIG. 1. However, the techniques can be implemented by systems having different components or configurations than depicted and described for system 100.
  • the technique 600 can begin at process 6.1.
  • at process 6.1, Map DMA Buffer, the VM 120-1 can map a DMA buffer to provide DMA for a device.
  • the VM 120-1 can map a portion of memory in the host node 101 (e.g., the memory page 205, or the like) to provide DMA features for the VM 120-1 to access the memory page.
  • the VM 120-1 can send a DMA-map control signal 201 to the VMM 112.
  • the VM 120-1 can implement portions of any of the schemes 300, 400, and/or 500 to send a DMA-map control signal to the VMM 112, and specifically, to the cDMAmm 130.
  • the VMM 112 can allocate and provision a memory page based on the DMA-map control signal 201.
  • the cDMAmm 130 of the VMM 112 can allocate a new memory page (e.g., the memory page 215, or the like) and provision an EPT table for the VM 120-1 (e.g., EPT 2, or the like) to reference the memory page 215.
  • the allocation agent 132 can generate the updated memory allocation 203.
  • the VM 120-1 can access the allocated and provisioned memory page (e.g., the memory page 215, or the like) using DMA.
  • the technique 700 can begin at process 7.1.
  • the VM 120-1 can map a DMA buffer to provide DMA for a device.
  • the VM 120-1 can map a portion of memory in the host node 101 (e.g., the memory page 205, or the like) to provide DMA features for the VM 120-1 to access the memory page.
  • the VM 120-1 can send a DMA-map control signal 201 to the VMM 112.
  • the VM 120-1 can implement portions of any of the schemes 300, 400, and/or 500 to send a DMA-map control signal to the VMM 112, and specifically, to the DMA buffer pool 134.
  • the VMM 112 can allocate and provision a memory page based on the DMA-map control signal (s) 201 pooled in the DMA buffer pool 134.
  • the cDMAmm 130 of the VMM 112 can allocate a new memory page (e.g., the memory page 215, or the like) and provision an EPT table for the VM 120-1 (e.g., EPT 2, or the like) to reference the memory page 215.
  • the allocation agent 132 can generate the updated memory allocation 203 based on the pooled DMA-map control signal (s) 201.
  • the VM 120-1 can access the allocated and provisioned memory page (e.g., the memory page 215, or the like) using DMA.
  • the cDMAmm 130 can opportunistically allocate and/or provision memory pages based on a number of DMA-map control signal (s) 201, such as those pooled in the DMA buffer pool 134.
  • the allocation agent 132 of the cDMAmm 130 can allocate and provision a memory page indicated in the DMA-map control signal 201 and can also allocate and provision a memory page adjacent to the memory page indicated in the DMA-map control signal 201.
  • the allocation agent 132 can allocate and provision memory pages based on historical (e.g., prior, or the like) DMA-map control signals 201.
  • the allocation agent 132 can allocate and provision memory pages for a cloned VM (e.g., the cloned VM 120-2) based on allocated and provisioned memory pages for a similarly cloned VM (e.g., the cloned VM 120-1, or the like).
  • the allocation agent 132 can allocate and provision memory pages using idle pathways.
  • the allocation agent 132 can opportunistically allocate and provision memory pages not currently mapped for DMA.
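  • Two of these heuristics (adjacent-page allocation and replaying previously seen DMA-map control signals) are sketched below in C. The policy, the history array, and all names are illustrative guesses rather than behavior required by the examples above.

```c
/*
 * Two simple heuristics a copy-on-DMA-map manager could apply when a
 * DMA-map control signal arrives: also unshare the page adjacent to the
 * requested one, and replay guest frame numbers seen in earlier signals.
 * Illustrative only; no particular policy is prescribed above.
 */
#include <stdio.h>

#define HISTORY 8

static unsigned history[HISTORY];
static int nhistory;

static void unshare(unsigned gfn)      /* stand-in for page copy + PT update */
{
    printf("opportunistically unsharing gfn %u\n", gfn);
}

static void on_dma_map_signal(unsigned gfn)
{
    unshare(gfn);                        /* the page actually requested   */
    unshare(gfn + 1);                    /* heuristic 1: adjacent page    */

    for (int i = 0; i < nhistory; i++)   /* heuristic 2: pages other      */
        if (history[i] != gfn)           /* signals mapped in the past    */
            unshare(history[i]);

    if (nhistory < HISTORY)
        history[nhistory++] = gfn;
}

int main(void)
{
    on_dma_map_signal(10);               /* first clone maps gfn 10          */
    on_dma_map_signal(42);               /* later signal replays gfn 10 too  */
    return 0;
}
```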
  • FIG. 8 illustrates an example block diagram for an apparatus 800. Although apparatus 800 is depicted having a limited number of elements in a certain topology, it may be appreciated that the apparatus 800 can include more or fewer elements in alternate topologies as desired for a given implementation.
  • apparatus 800 may be supported by circuitry 820 maintained at a host node/server arranged or provisioned to host a plurality of VMs.
  • Circuitry 820 may be arranged to execute one or more software or firmware implemented modules or components 822-a.
  • the examples presented are not limited in this context and the different variables used throughout may represent the same or different integer values.
  • these “components” may be software/firmware stored in computer-readable media, and although the components are shown in this figure as discrete boxes, this does not limit these components to storage in distinct computer-readable media components (e.g., a separate memory, etc. ) .
  • circuitry 820 may include a processor or processor circuitry to implement logic and/or features that may include one or more components arranged to facilitate cloning of a VM running sets of containers or migration of VMs/containers within or between host nodes/servers.
  • circuitry 820 may be part of circuitry at a host node/server (e.g., host node 101) that may include processing cores or elements.
  • the circuitry 820 may be part of the VMs executing on the host node (e.g., VM 120, cloned VMs 120-1, 120-2, 120-3, etc.).
  • the circuitry including one or more processing cores can be any of various commercially available processors, including without limitation application, embedded and secure processors; IBM Cell processors; Core (2), Core i3, Core i5, Core i7 and Xeon processors; and similar processors.
  • circuitry 820 may also include an application specific integrated circuit (ASIC) and at least some components 822-a may be implemented as hardware elements of the ASIC.
  • apparatus 800 may be part of a node configured to host a first VM arranged to run at least one set of containers that includes a first container and a second container that are separately arranged to execute respective first and second applications.
  • apparatus 800 may include a clone component 822-1.
  • Clone component 822-1 may be executed by circuitry 820 to clone the first VM to result in a second VM arranged to at least temporarily run the first and second containers concurrent with the first and second containers arranged to run at the first VM.
  • the cloning of the first VM may be responsive to an isolation request received via isolation request 805.
  • apparatus 800 may also include a DMA-map receiving component 822-2.
  • DMA-map receiving component 822-2 may be executed by circuitry 820 to receive a DMA-map control signal 810 including an indication to map memory for DMA for a VM.
  • the DMA-map receiving component 822-2 can receive the DMA-map control signal 201.
  • the apparatus 800 may also include DMA buffer pool component 822-3.
  • DMA buffer pool component 822-3 may be executed by circuitry 820 to pool DMA-map control signal (s) 810 for processing.
  • apparatus 800 may include memory allocation component 822-4.
  • Memory allocation component 822-4 may be executed by circuitry 820 to implement a copy-on-DMA-map mechanism 830.
  • the memory allocation component 822-4 can, responsive to the DMA-map control signal 810, allocate and/or provision new memory pages for the cloned VM.
  • the memory allocation component 822-4 can allocate memory pages 215 and provision memory pages 215 for use by cloned VM 120-2.
  • the copy-on-DMA-map mechanism 830 may be similar to the CDMAM scheme 200 described above.
  • FIG. 9 illustrates an example storage medium 900.
  • the storage medium 900 may comprise an article of manufacture.
  • storage medium 900 may include any non-transitory computer readable medium or machine readable medium, such as an optical, magnetic or semiconductor storage.
  • Storage medium 900 may store various types of computer executable instructions 902, such as instructions to implement techniques 600 and/or 700.
  • Examples of a computer readable or machine readable storage medium may include any tangible media capable of storing electronic data, including volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or re-writeable memory, and so forth.
  • Examples of computer executable instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, object-oriented code, visual code, and the like. The examples are not limited in this context.
  • FIG. 10 illustrates an example computing platform 1000.
  • computing platform 1000 may include a processing component 1040, other platform components 1050 or a communications interface 1060.
  • computing platform 1000 may be implemented in a node/server.
  • the node/server may be capable of coupling through a network to other nodes/servers and may be part of data center including a plurality of network connected nodes/servers arranged to host VMs arranged to run containers separately arranged to execute one or more applications.
  • processing component 1040 may execute processing operations or logic for apparatus 800 and/or storage medium 900.
  • Processing component 1040 may include various hardware elements, software elements, or a combination of both. Examples of hardware elements may include devices, logic devices, components, processors, microprocessors, circuits, processor circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth) , integrated circuits, application specific integrated circuits (ASIC) , programmable logic devices (PLD) , digital signal processors (DSP) , field programmable gate array (FPGA) , memory units, logic gates, registers, semiconductor device, chips, microchips, chip sets, and so forth.
  • Examples of software elements may include software components, programs, applications, computer programs, application programs, device drivers, system programs, software development programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (API) , instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an example is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds and other design or performance constraints, as desired for a given example.
  • platform components 1050 may include common computing elements, such as one or more processors, multi-core processors, co-processors, memory units, chipsets, controllers, peripherals, interfaces, oscillators, timing devices, video cards, audio cards, multimedia input/output (I/O) components (e.g., digital displays) , power supplies, and so forth.
  • Examples of memory units may include without limitation various types of computer readable and machine readable storage media in the form of one or more higher speed memory units, such as read-only memory (ROM) , random-access memory (RAM) , dynamic RAM (DRAM) , Double-Data-Rate DRAM (DDRAM) , synchronous DRAM (SDRAM) , static RAM (SRAM) , programmable ROM (PROM) , erasable programmable ROM (EPROM) , electrically erasable programmable ROM (EEPROM) , flash memory, polymer memory such as ferroelectric polymer memory, ovonic memory, phase change or ferroelectric memory, silicon-oxide-nitride-oxide-silicon (SONOS) memory, magnetic or optical cards, an array of devices such as Redundant Array of Independent Disks (RAID) drives, solid state memory devices (e.g., USB memory) , solid state drives (SSD) and any other type of storage media suitable for storing information.
  • communications interface 1060 may include logic and/or features to support a communication interface.
  • communications interface 1060 may include one or more communication interfaces that operate according to various communication protocols or standards to communicate over direct or network communication links or channels.
  • Direct communications may occur via use of communication protocols or standards described in one or more industry standards (including progenies and variants) such as those associated with the PCIe specification.
  • Network communications may occur via use of communication protocols or standards such as those described in one or more Ethernet standards promulgated by IEEE.
  • one such Ethernet standard may include IEEE 802.3.
  • Network communication may also occur according to one or more OpenFlow specifications such as the OpenFlow Hardware Abstraction API Specification.
  • computing platform 1000 may be implemented in a server/node in a data center. Accordingly, functions and/or specific configurations of computing platform 1000 described herein, may be included or omitted in various embodiments of computing platform 1000, as suitably desired for a server/node.
  • computing platform 1000 may be implemented using any combination of discrete circuitry, application specific integrated circuits (ASICs) , logic gates and/or single chip architectures. Further, the features of computing platform 1000 may be implemented using microcontrollers, programmable logic arrays and/or microprocessors or any combination of the foregoing where suitably appropriate. It is noted that hardware, firmware and/or software elements may be collectively or individually referred to herein as “logic” or “circuit. ”
  • the exemplary computing platform 1000 shown in the block diagram of FIG. 10 may represent one functionally descriptive example of many potential implementations. Accordingly, division, omission or inclusion of block functions depicted in the accompanying figures does not infer that the hardware components, circuits, software and/or elements for implementing these functions would necessarily be divided, omitted, or included in embodiments.
  • IP cores may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.
  • hardware elements may include devices, components, processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth) , integrated circuits, application specific integrated circuits (ASIC) , programmable logic devices (PLD) , digital signal processors (DSP) , field programmable gate array (FPGA) , memory units, logic gates, registers, semiconductor device, chips, microchips, chip sets, and so forth.
  • software elements may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (API) , instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an example is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds and other design or performance constraints, as desired for a given implementation.
  • a computer-readable medium may include a non-transitory storage medium to store logic.
  • the non-transitory storage medium may include one or more types of computer-readable storage media capable of storing electronic data, including volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or re-writeable memory, and so forth.
  • the logic may include various software elements, such as software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, API, instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof.
  • a computer-readable medium may include a non-transitory storage medium to store or maintain instructions that when executed by a machine, computing device or system, cause the machine, computing device or system to perform methods and/or operations in accordance with the described examples.
  • the instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, and the like.
  • the instructions may be implemented according to a predefined computer language, manner or syntax, for instructing a machine, computing device or system to perform a certain function.
  • the instructions may be implemented using any suitable high-level, low-level, object-oriented, visual, compiled and/or interpreted programming language.
  • Some examples may be described using the terms "coupled" and "connected" along with their derivatives. These terms are not necessarily intended as synonyms for each other. For example, descriptions using the terms "connected" and/or "coupled" may indicate that two or more elements are in direct physical or electrical contact with each other. The term "coupled, " however, may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other.
  • Example 1 An apparatus comprising: circuitry at a node to host a first virtual machine (VM) arranged to execute at least one set of containers that includes a first container and a second container that are separately arranged to execute respective first and second applications; a clone component for execution by the circuitry to clone the first VM to result in a second VM arranged to execute at least the second container with the first container arranged to execute in the first VM; a receiving component for execution by the circuitry to receive a direct memory access (DMA) map control signal from the second VM, the DMA-map control signal to include an indication to map a memory page to a DMA buffer; and a memory allocation component for execution by the circuitry to implement a copy-on-DMA-map (CODMAM) operation to cause the second VM to use different allocated memory to execute the second container responsive to the DMA-map control signal.
  • Example 2 The apparatus of example 1, the DMA-map control signal to be received responsive to the second application executed by the second container mapping the memory page to the DMA buffer.
  • Example 3 The apparatus of example 2, the DMA-map control signal a first DMA-map control signal and the memory page a second memory page, the apparatus comprising a DMA buffer pooling component for execution by the circuitry to pool the first DMA-map control signal with at least a second DMA-map control signal, the second DMA-map control signal to include an indication to map a second memory page to the DMA buffer.
  • Example 4 The apparatus of example 3, the second DMA-map control signal to be received responsive to the second application executed by the second container mapping the second memory page to the DMA buffer.
  • Example 5 The apparatus of example 3, comprising a virtual machine manager (VMM) capable of managing the first and second VMs, the VMM comprising the receiving component, the DMA buffer pooling component, and the memory allocation component.
  • Example 6 The apparatus of example 3, the receiving component a virtual input-output memory management unit (IOMMU) , the virtual IOMMU to emulate an IOMMU of the second VM.
  • Example 7 The apparatus of example 6, the vIOMMU to receive DMA buffer mapping entries from the IOMMU.
  • Example 8 The apparatus of example 3, the receiving component a back-end DMA driver, the back-end DMA driver to receive the DMA-map control signal from a front-end DMA driver of the second VM.
  • Example 9 The apparatus of example 8, the back-end DMA driver to receive DMA buffer mapping entries from the front-end DMA driver.
  • Example 10 The apparatus of example 3, the CODMAM operation to allocate a first memory page, copy contents from a second memory page to the first memory page, and provision the second VM to use the first memory page, the first VM provisioned to use the second memory page.
  • Example 11 The apparatus of example 10, the memory allocation component to allocate a third memory page, copy contents from a fourth memory page to the third memory page, and provision the second VM to use the third memory page, the first VM provisioned to use the second memory page and the fourth memory page adjacent to the second memory page.
  • Example 12 The apparatus of example 1, comprising a digital display coupled to the circuitry to present a user interface view.
  • Example 13 A method comprising: cloning, by circuitry at a node, a first virtual machine (VM) arranged to execute at least one set of containers that includes a first container and a second container that are separately arranged to execute respective first and second applications, the cloning to result in a second VM arranged to execute at least the second container with the first container arranged to execute in the first VM; receiving a direct memory access (DMA) map control signal from the second VM, the DMA-map control signal to include an indication to map a memory page to a DMA buffer; and applying a copy-on-DMA-map (CODMAM) operation to cause the second VM to use different allocated memory to execute the second container responsive to the DMA-map control signal.
  • Example 14 The method of example 13, receiving the DMA-map control signal responsive to the second application executed by the second container mapping the memory page to the DMA buffer.
  • Example 15 The method of example 14, the DMA-map control signal a first DMA-map control signal and the memory page a second memory page, the method comprising pooling the first DMA-map control signal with at least a second DMA-map control signal, the second DMA-map control signal to include an indication to map a second memory page to the DMA buffer.
  • Example 16 The method of example 15, receiving the second DMA-map control responsive to the second application executed by the second container mapping the second memory page to the DMA buffer.
  • Example 17 The method of example 15, comprising managing, via a virtual machine manager (VMM) , the first and second VMs.
  • Example 18 The method of example 15, receiving the DMA-map control signal at a virtual input-output memory management unit (IOMMU) , the virtual IOMMU to emulate an IOMMU of the second VM.
  • Example 19 The method of example 18, comprising receiving, at the vIOMMU, DMA buffer mapping entries from the IOMMU.
  • Example 20 The method of example 15, receiving, at a back-end DMA driver, the DMA-map control signal from a front-end DMA driver of the second VM.
  • Example 21 The method of example 20, receiving, at the back-end DMA driver, DMA buffer mapping entries from the front-end DMA driver.
  • Example 22 The method of example 15, comprising: allocating a first memory page; copying contents from a second memory page to the first memory page; and provisioning the second VM to use the first memory page, the first VM provisioned to use the second memory page.
  • Example 23 The method of example 22, comprising: allocating a third memory page; copying contents from a fourth memory page to the third memory page; and provisioning the second VM to use the third memory page, the first VM provisioned to use the second memory page and the fourth memory page adjacent to the second memory page.
  • Example 24 The method of example 13, comprising presenting a user interface view on a digital display coupled to the circuitry.
  • Example 25 At least one machine readable medium comprising a plurality of instructions that in response to being executed by system at a server cause the system to carry out a method according to any one of examples 13 to 24.
  • Example 26 An apparatus comprising means for performing the methods of any one of examples 13 to 24.
  • Example 27 At least one machine readable medium comprising a plurality of instructions that in response to being executed by a system at a node cause the system to: clone, by circuitry at a node, a first virtual machine (VM) arranged to execute at least one set of containers that includes a first container and a second container that are separately arranged to execute respective first and second applications, the cloning to result in a second VM arranged to execute at least the second container with the first container arranged to execute in the first VM; receive a direct memory access (DMA) map control signal from the second VM, the DMA-map control signal to include an indication to map a memory page to a DMA buffer; and apply a copy-on-DMA-map (CODMAM) operation to cause the second VM to use different allocated memory to execute the second container responsive to the DMA-map control signal.
  • Example 28 The at least one machine readable medium of example 27, the instructions to further cause the system to receive the DMA-map control signal responsive to the second application executed by the second container mapping the memory page to the DMA buffer.
  • Example 29 The at least one machine readable medium of example 28, the DMA-map control signal a first DMA-map control signal and the memory page a second memory page, the instructions to further cause the system to pool the first DMA-map control signal with at least a second DMA-map control signal, the second DMA-map control signal to include an indication to map a second memory page to the DMA buffer.
  • Example 30 The at least one machine readable medium of example 29, the instructions to further cause the system to receive the second DMA-map control responsive to the second application executed by the second container mapping the second memory page to the DMA buffer.
  • Example 31 The at least one machine readable medium of example 30, the instructions to further cause the system to manage, via a virtual machine manager (VMM) , the first and second VMs.
  • Example 32 The at least one machine readable medium of example 31, the instructions to further cause the system to receive the DMA-map control signal at a virtual input-output memory management unit (IOMMU) , the virtual IOMMU to emulate an IOMMU of the second VM.
  • Example 33 The at least one machine readable medium of example 32, the instructions to further cause the system to receive, at the vIOMMU, DMA buffer mapping entries from the IOMMU.
  • Example 34 The at least one machine readable medium of example 31, the instructions to further cause the system to receive, at a back-end DMA driver, the DMA-map control signal from a front-end DMA driver of the second VM.
  • Example 35 The at least one machine readable medium of example 34, the instructions to further cause the system to receive, at the back-end DMA driver, DMA buffer mapping entries from the front-end DMA driver.
  • Example 36 The at least one machine readable medium of example 27, the instructions to further cause the system to: allocate a first memory page; copy contents from a second memory page to the first memory page; and provision the second VM to use the first memory page, the first VM provisioned to use the second memory page.
  • Example 37 The at least one machine readable medium of example 27, the instructions to further cause the system to: allocate a third memory page; copy contents from a fourth memory page to the third memory page; and provision the second VM to use the third memory page, the first VM provisioned to use the second memory page and the fourth memory page adjacent to the second memory page.
  • Example 38 The at least one machine readable medium of example 27, the instructions to further cause the system to present a user interface view on a digital display coupled to the circuitry.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

Various embodiments are generally directed to allocating and provisioning memory pages using copy-on-direct-memory-access mapping. A virtual machine manager is to receive a DMA-map control signal from a cloned virtual machine and to opportunistically allocate and provision memory pages to the cloned virtual machine.
EP16895985.6A 2016-03-31 2016-03-31 High density virtual machine container with copy-on-DMA write Withdrawn EP3436938A1 (fr)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/078130 WO2017166205A1 (fr) 2016-03-31 2016-03-31 Conteneur de machine virtuelle haute densité avec écriture de copie sur dma

Publications (1)

Publication Number Publication Date
EP3436938A1 true EP3436938A1 (fr) 2019-02-06

Family

ID=59963276

Family Applications (1)

Application Number Title Priority Date Filing Date
EP16895985.6A Withdrawn EP3436938A1 (fr) 2016-03-31 2016-03-31 Conteneur de machine virtuelle haute densité avec écriture de copie sur dma

Country Status (3)

Country Link
EP (1) EP3436938A1 (fr)
CN (1) CN108701047B (fr)
WO (1) WO2017166205A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10969988B2 (en) 2019-06-07 2021-04-06 International Business Machines Corporation Performing proactive copy-on-write for containers
US11593168B2 (en) * 2019-06-26 2023-02-28 Red Hat, Inc. Zero copy message reception for devices via page tables used to access receiving buffers

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8423747B2 (en) * 2008-06-30 2013-04-16 Intel Corporation Copy equivalent protection using secure page flipping for software components within an execution environment
US7868897B2 (en) * 2006-06-30 2011-01-11 Intel Corporation Apparatus and method for memory address re-mapping of graphics data
US20080065854A1 (en) * 2006-09-07 2008-03-13 Sebastina Schoenberg Method and apparatus for accessing physical memory belonging to virtual machines from a user level monitor
CN100527098C (zh) * 2007-11-27 2009-08-12 北京大学 一种虚拟机管理器的动态内存映射方法
US8645611B2 (en) * 2010-03-31 2014-02-04 Intel Corporation Hot-swapping active memory for virtual machines with directed I/O
EP2691857B1 (fr) * 2011-03-31 2016-11-30 Intel Corporation Mise en miroir de la mémoire et génération de redondance permettant une haute disponibilité
US9672058B2 (en) * 2014-03-13 2017-06-06 Unisys Corporation Reduced service partition virtualization system and method
US10261814B2 (en) * 2014-06-23 2019-04-16 Intel Corporation Local service chaining with virtual machines and virtualized containers in software defined networking

Also Published As

Publication number Publication date
CN108701047B (zh) 2023-08-01
CN108701047A (zh) 2018-10-23
WO2017166205A1 (fr) 2017-10-05


Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20180830

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20191001