CN109791500B - Intermediate host integrated circuit between virtual machine instance and guest programmable logic

Info

Publication number
CN109791500B
Authority
CN
China
Prior art keywords
host
integrated circuit
programmable integrated
virtual machine
programmable
Prior art date
Legal status
Active
Application number
CN201780060352.8A
Other languages
Chinese (zh)
Other versions
CN109791500A (en)
Inventor
Mark Bradley Davis
Asif Khan
(Name withheld at inventor's request)
Erez Izenberg
Nafea Bshara
Current Assignee
Amazon Technologies Inc
Original Assignee
Amazon Technologies Inc
Priority date
Filing date
Publication date
Application filed by Amazon Technologies Inc filed Critical Amazon Technologies Inc
Publication of CN109791500A
Application granted
Publication of CN109791500B
Legal status: Active

Classifications

    • G06F 9/5005 Allocation of resources (e.g. of the central processing unit [CPU]) to service a request
    • G06F 13/4068 Device-to-bus coupling; electrical coupling
    • G06F 9/5077 Logical partitioning of resources; management or configuration of virtualized resources
    • G06F 2213/0038 System on Chip (indexing scheme)
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 9/541 Interprogram communication via adapters, e.g. between incompatible applications

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Stored Programmes (AREA)

Abstract

A multi-tenant environment is described having configurable hardware logic (e.g., a Field Programmable Gate Array (FPGA)) located on a host server computer. To communicate with the configurable hardware logic, an intermediate host Integrated Circuit (IC) is positioned between the configurable hardware logic and the virtual machines executing on the host server computer. The host IC may include management functionality and a mapping function for mapping requests between the configurable hardware logic and the virtual machines. Shared peripherals may be located on the host IC or on the configurable hardware logic. The host IC may apportion resources among the different configurable hardware logic to ensure that no guest can over-consume resources.

Description

Intermediate host integrated circuit between virtual machine instance and guest programmable logic
Background
Cloud computing is the use of computing resources (hardware and software) that are available at remote locations and accessible over a network (e.g., the Internet). Users may purchase these computing resources (including storage capacity and computing power) as a utility, on demand. Cloud computing entrusts remote services with a user's data, software, and computation. The use of virtual computing resources can provide a number of advantages, including cost advantages and/or the ability to adapt rapidly to changing computing resource demands.
Users of large computer systems may have various computing requirements due to different use cases. The cloud or computing service provider may provide a variety of different computer systems having different types of components with different levels of performance and/or functionality. Thus, a user may select a computer system that may be more efficient at performing a particular task. For example, a computing service provider may provide a system having different combinations of processing performance, memory performance, storage capacity or performance, and network capacity or performance. In general, multiple customers may share and utilize common resources provided by a computing service provider, thereby making it more cost-effective for the customers to use the services of the computing service provider.
Drawings
FIG. 1 is an exemplary system diagram in which a host logic Integrated Circuit (IC) is located between virtual machines and a plurality of programmable ICs used for guest logic.
Fig. 2 is an exemplary embodiment illustrating further details of a host logic IC.
FIG. 3 is an example of an embodiment in which a host logic IC is a Field Programmable Gate Array (FPGA) located between a virtual machine and a plurality of guest FPGAs, wherein the host logic IC includes shared peripherals.
FIG. 4 is an example according to another embodiment in which the shared peripheral is located within a customer FPGA.
FIG. 5 is an exemplary system diagram illustrating multiple virtual machine instances running in a multi-tenant environment, where a host IC is located between a virtual machine and guest configurable hardware.
Fig. 6 is a flow chart of a method of routing requests to a programmable IC using an intermediate host IC.
Fig. 7 is a flow chart of a method of routing requests to a programmable IC using an intermediate host IC, according to another embodiment.
FIG. 8 depicts a generalized example of a suitable computing environment in which the innovation may be implemented.
FIG. 9 is an exemplary system diagram according to another embodiment in which multiple host ICs are located between a programmable IC and a virtual machine.
Detailed Description
In some respects, providing custom hardware in a cloud environment runs counter to one of the core benefits of cloud computing: sharing general-purpose hardware (e.g., server computers) across multiple clients. However, programmable logic, such as a Field Programmable Gate Array (FPGA), is sufficiently general-purpose that it can be programmed by one customer and then reprogrammed for reuse by other customers. Accordingly, one solution for providing dedicated computing resources within a pool of reusable general-purpose computing resources is to offer a server computer that includes a configurable logic platform (e.g., by equipping the server computer with an add-in card that includes one or more FPGAs) as an option among the general-purpose computing resources. Configurable logic is hardware that can be programmed or configured to perform a logic function specified by configuration data applied to or loaded onto the configurable logic. For example, a user of the computing resources can provide a specification (e.g., source code written in a hardware description language) for configuring the configurable logic, the configurable logic can be configured according to the specification, and the configured logic can then be used to perform tasks for the user. However, allowing users access to low-level hardware of the computing facility can potentially introduce security and privacy issues within the facility. As a specific example, an erroneous or malicious design from one user could potentially cause a denial of service to other users if the configured logic causes one or more server computers within the computing facility to fail (e.g., crash, hang, or reboot) or to be denied network services. As another specific example, an erroneous or malicious design from one user could potentially corrupt or read the data of another user if the configured logic is able to read and/or write memory in the other user's memory space.
As described herein, the facilities of a computing service can include various computing resources, one type of which may be a server computer having a configurable logic platform. The configurable logic platform can be programmed or configured by a user of the computer system so that the hardware (e.g., the configurable logic) of the computing resource is customized by the user. For example, the user can program the configurable logic so that it functions as a hardware accelerator that is tightly coupled to the server computer. As a specific example, the hardware accelerator can be accessible via a local interconnect of the server computer, such as Peripheral Component Interconnect Express (PCI-Express or PCIe). The user can execute an application on the server computer, and tasks of the application can be performed by the hardware accelerator using PCIe transactions. By tightly coupling the hardware accelerator to the server computer, the latency between the accelerator and the server computer can be reduced, which can potentially increase the processing speed of the application. The configurable logic platform can be any of a wide variety of configurable logic ICs, but the typical example is an FPGA, which is used in the specific examples below; it should be understood, however, that other reconfigurable hardware can be used instead.
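To make the tight-coupling point concrete, the following is a minimal sketch, in C, of how a user-space application on Linux might access registers of a PCIe-attached accelerator by memory-mapping one of the device's Base Address Registers (BARs). The device path, BAR size, and register meanings are illustrative assumptions, not details from this patent.

```c
/* Minimal sketch: memory-mapping a PCIe BAR of a hypothetical FPGA
 * accelerator from user space on Linux. The device address, BAR size,
 * and register meanings are assumptions made for illustration. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define BAR_SIZE 0x1000  /* assumed size of the accelerator's BAR0 */

int main(void) {
    /* sysfs exposes each PCIe BAR as a resource file; this
     * bus/device/function is hypothetical. */
    int fd = open("/sys/bus/pci/devices/0000:03:00.0/resource0",
                  O_RDWR | O_SYNC);
    if (fd < 0) { perror("open"); return 1; }

    void *p = mmap(NULL, BAR_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); close(fd); return 1; }
    volatile uint32_t *regs = p;

    regs[0] = 0x1;                         /* e.g., start an accelerator task */
    printf("status = 0x%08x\n", regs[1]);  /* e.g., read a status register    */

    munmap(p, BAR_SIZE);
    close(fd);
    return 0;
}
```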
Computing service providers can potentially increase the security and/or availability of the computing resources by wrapping or encapsulating the user's hardware (also referred to herein as application logic) within host logic of the configurable logic platform. Encapsulating the application logic can include limiting or restricting the application logic's access to configuration resources, physical interfaces, hard macros of the configurable logic platform, and various peripherals of the configurable logic platform. For example, the computing service provider can manage the programming of the configurable logic platform so that it includes both host logic and application logic. The host logic can provide a framework or sandbox within which the application logic operates. In particular, the host logic can communicate with the application logic and constrain the functionality of the application logic. For example, the host logic can perform bridging functions between the local interconnect (e.g., the PCIe interconnect) and the application logic so that the application logic cannot directly control the signaling on the local interconnect. The host logic can be responsible for forming packets or bus transactions on the local interconnect and ensuring that the protocol requirements are met. By controlling transactions on the local interconnect, the host logic can potentially block malformed transactions or transactions addressed to out-of-range locations. As another example, the host logic can isolate a configuration access port so that the application logic cannot cause the configurable logic platform to be reprogrammed without using services provided by the computing service provider.
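As one way to picture the host logic's gatekeeping role, the sketch below models, in plain C, a check that a transaction formed on behalf of application logic stays within protocol limits and an assigned address window before it is allowed onto the interconnect. The packet layout, window, and payload limit are simplified assumptions, not a real PCIe Transaction Layer Packet.

```c
/* Sketch of host-logic transaction validation: before a packet formed
 * on behalf of application logic reaches the local interconnect, its
 * fields are checked against simple protocol rules. The layout below
 * is a simplification for illustration, not a real PCIe TLP. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct xact {
    uint64_t addr;       /* target address               */
    uint32_t len_bytes;  /* payload length               */
    bool     is_write;   /* write (true) or read (false) */
};

#define WINDOW_BASE 0x10000000ull  /* application window, assumed */
#define WINDOW_SIZE 0x01000000ull
#define MAX_PAYLOAD 256u           /* assumed maximum payload     */

/* Returns true only for transactions the host logic would forward. */
static bool valid(const struct xact *t) {
    if (t->len_bytes == 0 || t->len_bytes > MAX_PAYLOAD) return false;
    if (t->addr < WINDOW_BASE) return false;
    if (t->addr + t->len_bytes > WINDOW_BASE + WINDOW_SIZE) return false;
    return true;
}

int main(void) {
    struct xact ok  = { WINDOW_BASE + 0x100, 64, true };
    struct xact bad = { 0x0, 64, true };         /* out-of-range target */
    printf("%d %d\n", valid(&ok), valid(&bad));  /* prints: 1 0 */
    return 0;
}
```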
In some embodiments, the host logic can be located on a separate IC (e.g., an FPGA, an Application-Specific IC (ASIC), or a System on a Chip (SoC)) that is positioned between the virtual machines and the configurable hardware platform and is programmed by the hypervisor. The intermediate host logic IC can provide the guest with: the experience that the configurable hardware platform is completely under the customer's control; the ability to communicate with the virtual machines using PCIe interfaces; the ability to perform Partial Dynamic Reconfiguration (PDR); Hybrid Memory Cube (HMC) or other standard memory interfaces; and so forth. In some embodiments, the intermediate host logic IC can present an upstream-facing PCIe interface to the virtual machine and a downstream-facing PCIe interface to the customer FPGA. In this way, the guest FPGA operates as if it were communicating directly with the virtual machine, and the virtual machine operates as if it were communicating directly with the guest FPGA. Meanwhile, the host IC can provide any desired intermediate management and security functions while relaying communications (also referred to as transactions or instructions) between the virtual machine and the guest FPGA. In some embodiments, some additional host logic can be located within the customer FPGA, for example by providing the customer with an encrypted RTL block for inclusion in the customer logic. The host logic IC virtualizes the guest FPGA so that the guest logic operates as if it were communicating directly with the virtual machine.
The host logic IC can also perform a mapping function in which it maps communications from multiple virtual machines to the appropriate guest FPGAs. Thus, with a single host IC, multiple virtual machines can communicate with the different programmable ICs containing guest logic.
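The following is a minimal sketch of such a mapping function: route an incoming bus address to the guest FPGA whose assigned address window contains it. The window layout and FPGA identifiers are assumptions made for illustration; an actual host IC would implement this in hardware rather than software.

```c
/* Illustrative model of the host IC's mapping function: route an
 * incoming address to the guest FPGA whose assigned window contains
 * it. The window layout and FPGA ids are assumptions. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct fpga_window {
    uint64_t base;   /* first address owned by this guest FPGA */
    uint64_t limit;  /* one past the last owned address        */
    int      fpga_id;
};

static const struct fpga_window map[] = {
    { 0x00000000, 0x00100000, 0 },  /* guest FPGA 0 */
    { 0x00100000, 0x00200000, 1 },  /* guest FPGA 1 */
};

/* Returns the target FPGA id, or -1 when no window matches (the
 * host logic would drop or flag such a transaction). */
static int route(uint64_t addr) {
    for (size_t i = 0; i < sizeof map / sizeof map[0]; i++)
        if (addr >= map[i].base && addr < map[i].limit)
            return map[i].fpga_id;
    return -1;
}

int main(void) {
    printf("0x180000 -> FPGA %d\n", route(0x180000)); /* FPGA 1 */
    printf("0x300000 -> FPGA %d\n", route(0x300000)); /* -1: rejected */
    return 0;
}
```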
Fig. 1 is a system diagram illustrating an exemplary computing system 100 including a host server computer 102 having a software portion (SW) 104 and a hardware portion (HW) 106, roughly separated by a dashed line 108. The hardware portion 106 includes one or more CPUs, memory, storage devices, etc., shown generally as other hardware 110. The hardware portion 106 can also include programmable Integrated Circuits (ICs), shown generally at 120. A programmable IC can be an FPGA or another type of programmable logic, such as a Complex Programmable Logic Device (CPLD). A programmable IC is designed to be programmed by a customer after manufacture and contains an array of programmable logic blocks and configurable interconnect linking the logic blocks together. The logic blocks can be programmed to perform hardware functions ranging from simple gates to complex combinational functions. In any case, a programmable IC refers to hardware in which at least the gates are programmed; the term is not meant to cover merely storing simple values in registers to configure existing hardware functions. Rather, the hardware itself is formed through the programming. Any number of programmable ICs 120 can be used in the host server computer 102, as described further below. Further, the programmable ICs 120 can include logic from different customers, so that multiple customers operate on the same server computer 102 without being aware of one another's presence.
The hardware portion 106 also includes at least one intermediate host logic IC 122, which performs management, security, and mapping functions between the programmable ICs 120 and the software portion 104. The host logic IC can itself be reprogrammable logic, such as an FPGA, or can be non-reprogrammable hardware, such as an ASIC or SoC.
Running in the software portion 104, at the level above the hardware 106, is a hypervisor or kernel layer, which in this example is shown as including a management hypervisor 130. The hypervisor or kernel layer can be classified as type 1 or type 2. A type 1 hypervisor runs directly on the host hardware to control the hardware and to manage the guest operating systems. A type 2 hypervisor runs within a conventional operating system environment. Thus, in a type 2 environment, the hypervisor can be a distinct layer running above the operating system, and the operating system interacts with the system hardware. Examples include Xen-based hypervisors, Hyper-V, ESXi/ESX, and Linux-based hypervisors, but other hypervisors can be used. The management hypervisor 130 can generally include the device drivers needed to access the hardware 106.
The software portion 104 can include a plurality of partitions, shown generally at 140, for running virtual machines. A partition is a logical unit of isolation by the hypervisor in which a virtual machine executes. Each partition can be allocated its own portion of the hardware layer's memory, CPU allocation, storage, etc. In addition, each partition can include a virtual machine and its own guest operating system. As such, each partition is an abstract portion of capacity designed to support its own virtual machine independent of the other partitions. Each virtual machine 140 can communicate with the host logic IC 122 through a hardware interface (not shown, but described further below). The host logic IC 122 can map the communications to the appropriate programmable IC 120, so that the programmable ICs 120 believe they are communicating directly with the virtual machines 140. In some embodiments, a thin layer of host logic 150 can be included in the programmable ICs 120 associated with a customer. As described further below, the additional host logic 150 can include interface logic for communication between the host logic IC 122 and the programmable ICs 120.
In one example, the hypervisor can be a Xen-based hypervisor, although other hypervisors can be used, as described above. In the Xen example, the hypervisor 130 is Domain 0 (also called Dom 0), and the VMs (virtual machines) 140 are Domain U guests. The Domain 0 hypervisor has special rights to access physical I/O resources and to interact with the Domain U guests. A Domain U guest cannot access the hardware layer 106 without authorization from Domain 0. Thus, Domain 0 is a management layer that ensures logical isolation (sandboxing) of the programmable ICs 120.
The management hypervisor 130 is responsible for configuration and control of the programmable ICs 120, including programming them. In addition, the management hypervisor 130 can control an interface bus, such as a PCIe interface. Through this interface, the management hypervisor 130 manages and controls the hardware programming within the programmable ICs 120. Programming of a programmable IC 120 can, however, also occur directly from a virtual machine 140 through the host logic IC 122. In this way, the management hypervisor 130 can securely manage the programmable ICs' configuration ports and protect the customer IP programmed within the programmable ICs. The management hypervisor 130 also serves as the main interface to external management services for configuration and operation of the programmable ICs. When the management hypervisor 130 performs programming and management of the programmable ICs, it can do so through the intermediate host logic IC 122, which likewise sits between the management hypervisor 130 and the programmable ICs. Thus, the host logic IC 122 can be an intermediate IC that includes host logic for routing communications from the plurality of virtual machines 140 to the plurality of programmable ICs 120, and it can provide additional management, security, and configuration for the programmable ICs 120.
Fig. 2 shows another embodiment of a system 210 for virtualizing guest programmable ICs. The system 210 includes a host server computer 212 having a software portion 220 and a hardware portion 222. The software portion 220 includes a management hypervisor 232 and a plurality of virtual machines 230. The management hypervisor 232 allocates the server computer's resources to the virtual machines 230 so that each virtual machine can share the processing power, memory, and so forth of the host server computer 212. Each virtual machine 230 can be associated with a respective programmable IC 250-252, and any number of programmable ICs can be present in the host server computer 212. Although not shown, any virtual machine 230 can communicate with multiple programmable ICs 250-252 (e.g., FPGAs). Communication between the virtual machines 230 and the programmable ICs 250-252 occurs via an intermediate host logic IC 260, which can itself be programmable hardware logic, such as an FPGA programmed by the hypervisor 232, or fixed, non-programmable hardware logic, such as an ASIC.
The host logic IC 260 includes an interface endpoint 262 designed so that the virtual machines 230 believe they are communicating directly with the programmable ICs 250-252. In reality, their communications pass through a mapping function 264, which determines the programmable IC 250-252 to which each communication should be forwarded. The host logic IC 260 can also include management hardware 266, which performs security functions, monitoring functions, and the like. The management hardware 266 also ensures encapsulation, or sandboxing, of the programmable ICs 250-252, so that one customer cannot obtain secure information associated with the operation of another customer's programmable IC. Likewise, the management logic 266 can include functionality to ensure that no programmable IC uses more than its share of resources relative to the other programmable ICs. The management logic 266 can pass a communication to the interface 268, which then sends the communication to the appropriate endpoint interface 280 on the programmable IC associated with the virtual machine that sent it. Communications from the programmable ICs 250-252 back to the virtual machines 230 occur in a similar manner: the programmable ICs 250-252 communicate through the interfaces 280 as if they were communicating directly with the virtual machines 230. The host logic IC 260 also includes a configuration and management function 270, which can be used by the management hypervisor 232 to program the programmable ICs 250-252.
The programmable ICs can include reconfigurable logic blocks (reconfigurable hardware) and other hardware. The reconfigurable logic blocks can be configured or programmed to perform various functions, as desired by a customer of the computing service provider. The reconfigurable logic blocks can be programmed multiple times with different configurations, so that the blocks can perform different hardware functions over the lifetime of the device. The functions of the programmable IC 250 can be categorized according to the purpose or capability of each function, or according to when the function is loaded into the programmable IC 250. For example, the programmable IC 250 can include static logic, reconfigurable logic, and hard macros. Because the static logic, reconfigurable logic, and hard macros can be configured at different times, the functionality of the programmable IC 250 can be loaded incrementally.
A hard macro can perform a predefined function and can be available when the programmable IC is powered on. For example, a hard macro can include hardwired circuits that perform a specific function. As specific examples, the hard macros can include a Configuration Access Port (CAP) for configuring the programmable IC 250, a serializer-deserializer transceiver (SerDes) for communicating serial data, a Dynamic Random Access Memory (DRAM) controller for signaling and controlling off-chip memory (such as Double Data Rate (DDR) DRAM), and a storage controller for signaling and controlling storage devices. Other types of communication ports can be used as shared peripheral interfaces, including, but not limited to, Ethernet, ring-topology, or other network connection interfaces.
The static logic can be loaded onto the reconfigurable logic blocks at power-on. For example, configuration data specifying the functionality of the static logic can be loaded from an on-chip or off-chip flash memory device during a boot-up sequence. The boot-up sequence can include detecting a power event (such as by detecting that the supply voltage has transitioned from below a threshold value to above the threshold value) and deasserting a reset signal in response to the power event. An initialization sequence can be triggered in response to the power event or the deassertion of reset. The initialization sequence can include reading the configuration data stored on the flash memory device and loading the configuration data onto the programmable IC, so that at least a portion of the reconfigurable logic blocks are programmed with the functionality of the static logic. After the static logic is loaded, the programmable IC 250 can transition from a loading state to an operational state that includes the functionality of the static logic.
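The following is a small software model, under stated assumptions, of the boot sequence just described: detect a power event against a supply-voltage threshold, deassert reset, stream the static-logic configuration from flash, and enter the operational state. The threshold value and the flash/fabric functions are invented stand-ins for illustration.

```c
/* Software model of the power-on sequence: detect a power event,
 * deassert reset, load static-logic configuration from flash, then
 * enter the operational state. Threshold and helper functions are
 * illustrative assumptions. */
#include <stdbool.h>
#include <stdio.h>

enum fpga_state { POWERED_OFF, LOADING, OPERATIONAL };

#define VDD_THRESHOLD_MV 900  /* assumed supply-good threshold */

static bool power_good(int vdd_mv) { return vdd_mv >= VDD_THRESHOLD_MV; }

/* Stand-ins for reading static-logic configuration data from a
 * flash device and applying it to the reconfigurable fabric. */
static int flash_read_config(unsigned char *buf, int len) { (void)buf; return len; }
static void program_fabric(const unsigned char *buf, int len) { (void)buf; (void)len; }

int main(void) {
    enum fpga_state state = POWERED_OFF;
    unsigned char cfg[4096];

    if (power_good(950)) {          /* power event detected          */
        state = LOADING;            /* reset deasserted, begin init  */
        int n = flash_read_config(cfg, sizeof cfg);
        program_fabric(cfg, n);     /* static logic now in place     */
        state = OPERATIONAL;
    }
    printf("state = %d\n", state);  /* 2 == OPERATIONAL */
    return 0;
}
```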
The reconfigurable logic can be loaded onto the reconfigurable logic blocks while the programmable IC 250 is operational (e.g., after the static logic has been loaded). The configuration data corresponding to the reconfigurable logic can be stored in on-chip or off-chip memory and/or can be received or streamed from an interface (e.g., the interface 280). The reconfigurable logic can be divided into non-overlapping regions, which can interface with the static logic. For example, the reconfigurable regions can be arranged in an array or other regular or semi-regular structure. The array structure may include holes or blocks, where hard macros are placed within the array structure. The different reconfigurable regions can communicate with each other, with the static logic, and with the hard macros by using signal lines that can be specified as static logic. The different reconfigurable regions can be configured at different points in time, so that a first reconfigurable region can be configured at a first point in time and a second reconfigurable region can be configured at a second point in time.
The functions of the programmable IC 250 can be divided or categorized based upon the purpose or capabilities of the functions. For example, the functions can be categorized as control plane functions, data plane functions, and shared functions. The control plane can be used for management and configuration of the programmable IC. The data plane can be used to manage data transfer between the server computer and the customer logic loaded onto the programmable IC. Shared functions can be used by both the control plane and the data plane. The control plane functionality can be loaded onto the programmable IC 250 prior to the data plane functionality. The data plane can include encapsulated reconfigurable logic configured with customer application logic. The control plane can include host logic associated with the computing service provider.
In general, the data plane and the control plane can be accessed using different functions, where the different functions are assigned to different address ranges. In particular, the control plane functions can be accessed using a management function, and the data plane functions can be accessed using a data path function or an application function. An address mapping layer 264 can differentiate transactions bound for the control plane or the data plane. The transactions can be sent over a physical interconnect and received at an interconnect interface 280. The interconnect interface can be an endpoint of the physical interconnect. It should be understood that the physical interconnect can include additional devices (e.g., switches and bridges) arranged in a fabric for connecting devices or components to the server computer 212.
In sum, the functions can be categorized as control plane functions and application functions. The control plane functions can be used to monitor and restrict the capabilities of the data plane. The data plane functions can be used to accelerate a user's application that is running on the server computer. By separating the functions of the control plane and the data plane, the security and availability of the server computer 212 and other computing infrastructure can potentially be increased. For example, the application logic cannot directly signal onto the physical interconnect, because the intermediary layers of the control plane control the formatting and signaling of transactions on the physical interconnect. As another example, the application logic can be prevented from using the dedicated peripherals that could be used to reconfigure the programmable IC and/or to access management information that may be privileged. As another example, the application logic can access hard macros of the programmable IC through an intermediary layer, so that any interaction between the application logic and the hard macros is controlled by the intermediary layer.
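To illustrate the address-range separation described above, here is a minimal sketch of an address-mapping check that steers each transaction to the control plane or the data plane and rejects anything outside either window. The specific ranges are assumptions for illustration only.

```c
/* Sketch of how an address-mapping layer might steer transactions to
 * control-plane versus data-plane functions based on the address range
 * they target. The ranges themselves are assumptions. */
#include <stdint.h>
#include <stdio.h>

#define CTRL_BASE 0x0000u   /* management/control registers   */
#define CTRL_END  0x1000u
#define DATA_BASE 0x1000u   /* application (data-plane) window */
#define DATA_END  0x10000u

enum plane { PLANE_CONTROL, PLANE_DATA, PLANE_INVALID };

static enum plane classify(uint32_t addr) {
    if (addr >= CTRL_BASE && addr < CTRL_END) return PLANE_CONTROL;
    if (addr >= DATA_BASE && addr < DATA_END) return PLANE_DATA;
    return PLANE_INVALID;  /* host logic rejects out-of-range accesses */
}

int main(void) {
    printf("%d\n", classify(0x0040));  /* 0: control plane        */
    printf("%d\n", classify(0x2000));  /* 1: data plane           */
    printf("%d\n", classify(0x20000)); /* 2: invalid -> rejected  */
    return 0;
}
```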
The control plane functions are largely maintained on the host logic IC 260, while the data plane functions are maintained on the programmable ICs 250, 252. By separating the control plane and the data plane into different ICs, the customer experience using the programmable IC 250 is more consistent with a non-multi-tenant environment. Although the above-described functionality relates to programmable IC 250, it is equally applicable to the functionality of other programmable ICs (e.g., 252) on host server computer 212.
FIG. 3 is a detailed example of one embodiment of a system 300 in which an intermediate host FPGA 310 is located between a server host 312 and a plurality of customer FPGAs 314, 316. Although two customer FPGAs 314, 316 are shown, additional customer FPGAs may be added. Furthermore, although the intermediate host 310 is illustrated as an FPGA, other types of ICs, such as an ASIC or SoC, may be used. Customer FPGAs 314, 316 and host FPGA 310 may be located on one or more add-in cards of host server computer 312 or on the motherboard of the host server computer.
Server host 312 can execute one or more virtual machines, such as virtual machine 320. In this particular example, virtual machine 320 includes an application for supporting a hardware accelerator programmed into customer FPGA 314, although other hardware can be used instead of an accelerator. Virtual machine 320 can include a user application 322, an accelerator API 324, and an application driver 326. The user application 322 can send commands to, and receive requests from, the customer FPGA 314 via the accelerator API 324. The API 324 passes the commands and requests through the application driver 326. The application driver 326 communicates through a PCIe root complex 330 located on the server host 312. The root complex connects the processor and memory subsystem on the server host 312 to the PCIe switch fabric, which is composed of one or more switch devices; in this sense, the root complex can be considered routing logic. Virtual machine 320 also includes an FPGA management API 332 and an FPGA management driver 334, which can be used in configuring and managing the customer FPGA 314. Although the other virtual machines are not shown, each has its own FPGA management API and management driver for controlling its respective FPGA. The management hypervisor 340 can execute an FPGA management application 342, FPGA configuration 344, and FPGA management and monitoring 346. These applications can communicate with and control the FPGAs through an FPGA driver 348. The hypervisor 340 can supervise and manage multiple virtual machines, including virtual machine 320.
The intermediate host FPGA 310 includes a plurality of modules for configuring, managing, and communicating with the customer FPGAs 314, 316. The FPGA 310 includes a PCIe endpoint 350, which acts as the endpoint to which the root complex 330 routes communications. A PCIe mapping layer 352 can differentiate transactions from the server computer 312 bound for the different customer FPGAs 314, 316. Specifically, if a transaction's address falls within the address range of customer FPGA 314, the transaction is routed to FPGA 314 accordingly. Likewise, if the address falls within the address range of customer FPGA 316, the transaction is routed to FPGA 316. If there are other customer FPGAs, transactions can be routed to those FPGAs in a similar manner. The transactions pass through an FPGA management layer 354, which provides security and monitoring of the transactions to ensure that the encapsulation between customers is maintained. For example, the FPGA management layer can potentially identify transactions or data that violate predefined rules and can generate an alert in response. Additionally or alternatively, the FPGA management layer 354 can terminate any transaction that violates predefined criteria. For valid transactions, the FPGA management layer 354 can forward them to a shared fabric layer 360, which in turn can forward them to the shared peripherals 362, which can include serial ports (such as GTY SerDes transceivers), DRAM controllers, and storage controllers. A shared peripheral is a shared function that is accessible from either the control plane or the data plane; it is a component that can have multiple address mappings and can be used by both planes. Examples of shared peripherals include SerDes interfaces, DRAM controllers (e.g., for DDR DRAM), storage device controllers (e.g., for hard disk drives and solid-state drives), and various other components that can be used to generate, store, or process information. The shared peripherals 362 can include additional peripheral controllers. By placing the shared peripherals within the intermediate host FPGA 310, the amount of resources used by any one customer can be controlled, so that all customers receive a fair proportion of the resources. Communication with the FPGAs 314, 316 can occur through an inter-FPGA transport layer 364, which can be a serial bus, an Ethernet bus, a ring topology, or any other desired communication mechanism. Other communications can pass through a host logic dedicated fabric 370 for accessing dedicated peripherals 372. Dedicated peripherals 372 are components that are only accessible by the computing service provider and are not accessible to customers. The dedicated peripherals can include JTAG (e.g., IEEE 1149.1), General-Purpose I/O (GPIO), Serial Peripheral Interface (SPI) flash, and light-emitting diodes (LEDs). The peripherals shown are merely examples, and other peripherals can be used.
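One way the management layer can keep shared-peripheral usage in fair proportion is per-customer accounting against a budget. The sketch below models this with a byte budget per accounting window; the budget size, window reset, and customer count are assumptions for illustration.

```c
/* Illustrative model of the FPGA management layer's policing of
 * shared-peripheral use: each customer gets a byte budget per
 * accounting window, and transactions that would exceed it are
 * rejected. Budget sizes are assumptions, not from the patent. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_CUSTOMERS 2
#define WINDOW_BUDGET_BYTES (1u << 20)  /* 1 MiB per window, assumed */

static uint32_t used[NUM_CUSTOMERS];

/* Returns true if the transaction may proceed to the shared fabric. */
static bool admit(int customer, uint32_t len) {
    if (customer < 0 || customer >= NUM_CUSTOMERS) return false;
    if (used[customer] + len > WINDOW_BUDGET_BYTES) return false; /* alert/drop */
    used[customer] += len;
    return true;
}

static void new_window(void) {
    for (int i = 0; i < NUM_CUSTOMERS; i++) used[i] = 0;
}

int main(void) {
    printf("%d\n", admit(0, 512 * 1024));  /* 1: within budget        */
    printf("%d\n", admit(0, 600 * 1024));  /* 0: would exceed budget  */
    new_window();                          /* budgets reset each window */
    printf("%d\n", admit(0, 600 * 1024));  /* 1: fresh window         */
    return 0;
}
```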
The mailbox and watchdog timer 374 are shared functions accessible from either the control plane or the data plane. Specifically, the mailbox can be used to pass messages and other information between the control plane and the data plane. For example, the mailbox can include buffers, control registers (such as semaphores), and status registers. By using the mailbox as an intermediary between the control plane and the data plane, isolation between the data plane and the control plane can potentially be increased, which can increase the security of the configurable hardware platform. The watchdog timer can be used to detect and recover from hardware and/or software malfunctions. For example, the watchdog timer can monitor an amount of time taken to perform a particular task, and if the amount of time exceeds a threshold, the watchdog timer can initiate an event, such as writing a value to a control register or causing an interrupt or reset to be asserted.
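The following is a small software model of these two shared functions: a mailbox with a semaphore-style control register, and a watchdog that fires when a task fails to make progress within a timeout. The register layout, message, and tick-based timeout are assumptions for illustration.

```c
/* Minimal software model of the mailbox and watchdog described above.
 * The register layout and timeout are illustrative assumptions. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct mailbox {
    uint32_t sem;      /* 0 = empty, 1 = message pending */
    uint32_t status;
    uint8_t  buf[64];  /* message buffer                 */
};

#define WDT_TIMEOUT_TICKS 100  /* assumed watchdog period */

static uint32_t wdt_counter;

/* Called periodically; reports a fault when a task runs too long. */
static int wdt_tick(void) {
    return (++wdt_counter > WDT_TIMEOUT_TICKS) ? 1 : 0;
}
static void wdt_kick(void) { wdt_counter = 0; }  /* task made progress */

int main(void) {
    struct mailbox mb = {0};

    /* Control plane posts a message for the data plane. */
    memcpy(mb.buf, "reconfigure", 12);
    mb.sem = 1;

    /* Data plane polls the semaphore and consumes the message. */
    if (mb.sem) { printf("got: %s\n", (char *)mb.buf); mb.sem = 0; wdt_kick(); }

    /* With no further progress, the watchdog eventually fires. */
    for (int i = 0; i < 150; i++)
        if (wdt_tick()) { printf("watchdog fired at tick %d\n", i); break; }
    return 0;
}
```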
FPGA configuration and management block 376 may include functionality related to managing and configuring FPGAs 314, 316. For example, configuration and management block 376 may provide access for configuring an FPGA. In particular, server computer 312 may send a transaction to block 376 to initiate loading of sandboxed accelerators within customer FPGAs 314, 316.
Each customer FPGA 314, 316 may include an inter-FPGA transport block 380, which is a communication interface for transferring data or control functions between the customer FPGA and the intermediate host FPGA 310. The inter-FPGA transport block may be provided to the customer for inclusion within the customer logic using encrypted RTL code or other means. The host logic wrapper 382 may be located between the inter-FPGA transport interface and the sandboxed accelerator 384 (which is guest hardware logic) to facilitate communications between the two. Although an accelerator function is shown, other guest logic functions may be used.
The intermediate FPGA 310 allows the customer FPGAs to be devoted almost entirely to the customer, with only a small portion of each FPGA containing host logic (i.e., the transport block 380 and the host logic wrapper 382). The host FPGA 310 can be a physically smaller FPGA (with fewer configurable logic blocks) than the guest FPGAs 314, 316, and it virtualizes the guest FPGAs from the host server computer 312 in much the same way that the hypervisor 340 virtualizes the virtual machine 320. Thus, virtual machine 320 believes it is communicating directly with the guest FPGA via the PCIe endpoint 350, while the intermediate FPGA in fact performs mapping and security functions before passing data and/or commands to the customer FPGA. Likewise, the customer FPGA believes it is communicating directly with virtual machine 320; for security, however, the customer FPGA's messages pass through the intermediate FPGA, which monitors them and maps them back to the corresponding virtual machine.
FIG. 4 is an embodiment of a system 400 in which an intermediate host FPGA 410 is positioned between a server host 412 and a plurality of customer FPGAs 414, 416. Many of the blocks associated with the server host 412 and the intermediate host FPGA are similar to those of FIG. 3 and, for brevity, are not described again. In the embodiment of FIG. 4, however, the shared peripherals 420 are located within the customer FPGAs 414, 416. Accordingly, the intermediate FPGA 410 can include a PCIe root complex 430. A first root complex 432 in the server host 412 passes transactions to a PCIe endpoint 434 in the host FPGA 410. After a transaction passes through the PCIe mapping and FPGA management blocks, it is passed to the second root complex 430 in the intermediate FPGA 410, which decides how to route it. For example, depending on the address range identified in association with the transaction, the PCIe root complex 430 routes the transaction to either customer FPGA 414 or customer FPGA 416. The root complex 430 sends the transaction to a PCIe endpoint 440 in the customer FPGA 414, which then passes the transaction to the customer accelerator logic 442. The customer accelerator logic 442 can access the shared peripherals 420, which contain peripheral logic as described above and as described further below.
Thus, in this embodiment there are two layers of root complexes: one at 432 in the host server computer and another at 430 in the intermediate host FPGA. As in FIG. 3, the intermediate FPGA allows control and management of the customer FPGAs with little or no host logic within the customer FPGAs. This gives customers the experience of controlling the entire FPGA. Some host logic can be included in a customer FPGA by providing the customer with encrypted RTL code that the customer can incorporate into its design.
FIG. 5 is a computing system diagram of a network-based computing service provider 500, illustrating one environment in which the embodiments described herein can be used. By way of background, the computing service provider 500 (i.e., the cloud provider) is capable of delivering computing and storage capacity as a service to a community of end recipients. In an exemplary embodiment, the computing service provider can be established for an organization, by or on behalf of the organization; that is, the computing service provider 500 may offer a "private cloud environment." In another embodiment, the computing service provider 500 supports a multi-tenant environment in which a plurality of customers operate independently (i.e., a public cloud environment). Generally speaking, the computing service provider 500 can provide the following models: Infrastructure as a Service ("IaaS"), Platform as a Service ("PaaS"), and/or Software as a Service ("SaaS"). Other models can be provided. For the IaaS model, the computing service provider 500 can offer computers as physical or virtual machines, along with other resources. The virtual machines can be run as guests by a hypervisor, as described further below. The PaaS model delivers a computing platform that can include an operating system, a programming language execution environment, a database, and a web server. Application developers can develop and run their software solutions on the computing service provider platform without the cost of buying and managing the underlying hardware and software. Additionally, application developers can develop and run their hardware solutions on configurable hardware of the computing service provider platform. The SaaS model allows installation and operation of application software in the computing service provider. In some embodiments, end users access the computing service provider 500 using networked client devices, such as desktop computers, laptops, tablets, or smartphones, running web browsers or other lightweight client applications. Those skilled in the art will recognize that the computing service provider 500 can be described as a "cloud" environment.
The particular illustrated computing service provider 500 includes a plurality of server computers 502A-502B. While only two server computers are shown, any number can be used, and large centers can include thousands of server computers. The server computers 502A-502B can provide computing resources for executing software instances 506A-506B. In one embodiment, the instances 506A-506B are virtual machines. As known in the art, a virtual machine is a software implementation of a machine (i.e., a computer) that executes applications like a physical machine. In the example of virtual machines, each of the servers 502A-502B can be configured to execute a hypervisor 508 or another type of program configured to enable the execution of multiple software instances 506 on a single server. Additionally, each of the software instances 506 can be configured to execute one or more applications.
It should be appreciated that although the embodiments disclosed herein are described primarily in the context of virtual machines, other types of instances can be used with the concepts and technologies disclosed herein. For instance, the technologies disclosed herein can be used with storage resources, data communication resources, and other types of computing resources. The embodiments disclosed herein might also execute all or a portion of an application directly on a computer system without using virtual machine instances.
The server computers 502A-502B can include a heterogeneous collection of different hardware resources or instance types. Some of the hardware instance types can include configurable hardware that is at least partially configurable by a user of the computing service provider 500. One example of an instance type can include a server computer 502A that communicates with configurable hardware 504A via an intermediate host IC 516A. Specifically, the server computer 502A and the host IC 516A can communicate via a local interconnect, such as PCIe. Likewise, the host IC 516A and the configurable hardware 504A can communicate over a PCIe interface. Another example of an instance type can include a server computer 502B, a host IC 516B, and configurable hardware 504B. For example, the configurable logic 504B can be integrated within a multi-chip module or on the same die as a CPU of the server computer 502B. Thus, the configurable hardware 504A, 504B can be located on or off the server computers 502A, 502B. In yet another embodiment, the host IC 516A or 516B can be located external to the host server computer 502A or 502B.
One or more server computers 520 can be reserved for executing software components for managing the operation of the server computers 502 and the software instances 506. For example, the server computer 520 can execute a management component 522. A customer can access the management component 522 to configure various aspects of the operation of the software instances 506 purchased by the customer. For example, the customer can purchase, rent, or lease instances and make changes to the configuration of the software instances. The configuration information for each of the software instances can be stored as a Machine Image (MI) 542 on the network-attached storage 540. Specifically, the MI 542 describes the information used to launch a VM instance. The MI can include a template for a root volume of the instance (e.g., an OS and applications), launch permissions for controlling which customer accounts can use the MI, and a block device mapping that specifies the volumes to attach to the instance when the instance is launched. The MI can also include a reference to a Configurable Hardware Image (CHI) 542, which is to be loaded onto the configurable hardware 504 when the instance is launched. The CHI includes configuration data for programming or configuring at least a portion of the configurable hardware 504.
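As a rough sketch of the MI-to-CHI relationship described above, the structures below model a machine image that references a configurable hardware image, with the CHI programmed onto the configurable hardware as part of launch. All field names and the storage URI are invented for illustration.

```c
/* Hedged sketch of the relationship between a Machine Image (MI)
 * and the Configurable Hardware Image (CHI) it references; the
 * field names are invented for illustration. */
#include <stdio.h>

struct chi {
    const char *chi_id;        /* configurable hardware image id    */
    const char *bitstream_uri; /* where the configuration data lives */
};

struct machine_image {
    const char *mi_id;
    const char *root_volume_template; /* OS + applications            */
    const struct chi *chi;            /* loaded when the instance starts */
};

static void launch_instance(const struct machine_image *mi) {
    printf("booting %s from %s\n", mi->mi_id, mi->root_volume_template);
    if (mi->chi)  /* program the configurable hardware before handoff */
        printf("loading %s from %s\n", mi->chi->chi_id, mi->chi->bitstream_uri);
}

int main(void) {
    struct chi accel = { "chi-0123", "nas://images/accel.bin" };
    struct machine_image mi = { "mi-4567", "os-template-1", &accel };
    launch_instance(&mi);
    return 0;
}
```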
The customer can also specify settings regarding how the purchased instances are to be scaled in response to demand. The management component can further include a policy document for implementing customer policies. An auto-scaling component 524 can scale the instances 506 based on rules defined by the customer. In one embodiment, the auto-scaling component 524 allows a customer to specify scale-up rules for determining when new instances should be instantiated and scale-down rules for determining when existing instances should be terminated. The auto-scaling component 524 can consist of a number of subcomponents executing on different server computers 502 or other computing devices. The auto-scaling component 524 can monitor available computing resources over an internal management network and modify the available resources as needed.
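The sketch below shows, under assumed thresholds, the kind of scale-up/scale-down rule the auto-scaling component might evaluate: add an instance when average load is high, remove one when it is low. The CPU percentages and the single-metric rule are illustrative assumptions.

```c
/* Illustrative scale-up/scale-down rule evaluation of the kind an
 * auto-scaling component might apply; thresholds are assumptions. */
#include <stdio.h>

#define SCALE_UP_CPU_PCT   80  /* add an instance above this load    */
#define SCALE_DOWN_CPU_PCT 20  /* remove an instance below this load */

static int desired_instances(int current, int avg_cpu_pct) {
    if (avg_cpu_pct > SCALE_UP_CPU_PCT) return current + 1;
    if (avg_cpu_pct < SCALE_DOWN_CPU_PCT && current > 1) return current - 1;
    return current;
}

int main(void) {
    printf("%d\n", desired_instances(2, 90)); /* 3: scale up   */
    printf("%d\n", desired_instances(2, 10)); /* 1: scale down */
    printf("%d\n", desired_instances(2, 50)); /* 2: steady     */
    return 0;
}
```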
A deployment component 526 can be used to assist customers in the deployment of new instances 506 of computing resources. The deployment component can have access to account information associated with the instances, such as who is the owner of the account, credit card information, the country of the owner, and so forth. The deployment component 526 can receive a configuration from a customer that includes data describing how the new instances 506 should be configured. For example, the configuration can specify one or more applications to be installed in the new instances 506, provide scripts and/or other types of code to be executed for configuring the new instances 506, provide cache logic specifying how an application cache should be prepared, and provide other types of information. The deployment component 526 can utilize the customer-provided configuration and cache logic to configure, prime, and launch the new instances 506. The configuration, cache logic, and other information can be specified by a customer using the management component 522 or by providing this information directly to the deployment component 526. The instance manager can be considered part of the deployment component.
Customer account information 528 can include any desired information associated with a customer of the multi-tenant environment. For example, the customer account information can include a unique identifier for the customer, a customer address, billing information, permissions information, customization parameters for launching instances, scheduling information, auto-scaling parameters, previous IP addresses used to access the account, a list of the MIs and CHIs accessible to the customer, and so forth.
One or more server computers 530 can be reserved for executing software components for managing the download of configuration data to the configurable hardware 504 of the server computers 502. For example, the server computer 530 can execute a logic repository service comprising an ingestion component 532, a library management component 534, and a download component 536. The ingestion component 532 can receive host logic and application logic designs or specifications and generate configuration data that can be used to configure the configurable hardware 504. The library management component 534 can be used to manage source code, user information, and configuration data associated with the logic repository service. For example, the library management component 534 can be used to store configuration data generated from a user's design at a location specified by the user on the network-attached storage 540. In particular, the configuration data can be stored within a Configurable Hardware Image 542 on the network-attached storage 540. Additionally, the library management component 534 can manage the versioning and storage of input files (such as the specifications for the application logic and the host logic) and metadata about the logic designs and/or the users of the logic repository service. The library management component 534 can index the generated configuration data by one or more attributes, such as a user identifier, an instance type, a marketplace identifier, a machine image identifier, and a configurable hardware identifier, for example. The download component 536 can be used to authenticate requests for configuration data and to transmit the configuration data to a requestor when the request is authenticated. For example, agents on the server computers 502A-B can send requests to the download component 536 when an instance 506 using the configurable hardware 504 is launched. As another example, if an instance 506 requests that the configurable hardware 504 be partially reconfigured while the configurable hardware 504 is in operation, the agents on the server computers 502A-B can send requests to the download component 536.
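A minimal model of the download component's gatekeeping appears below: a request returns configuration data only if the requester is authorized for the named CHI. The in-memory table and the ownership-based check are stand-ins for a real authentication scheme.

```c
/* Sketch of the download component's gatekeeping: authenticate a
 * request, then return the configuration data it names. The table
 * and ownership check are stand-ins for a real auth scheme. */
#include <stddef.h>
#include <stdio.h>
#include <string.h>

struct chi_entry {
    const char *customer;  /* owner allowed to fetch this image     */
    const char *chi_id;
    const char *data;      /* stand-in for the configuration bitstream */
};

static const struct chi_entry repo[] = {
    { "cust-a", "chi-0123", "<bitstream bytes>" },
};

/* Returns configuration data only when the requester owns the CHI. */
static const char *download(const char *requester, const char *chi_id) {
    for (size_t i = 0; i < sizeof repo / sizeof repo[0]; i++)
        if (!strcmp(repo[i].chi_id, chi_id) && !strcmp(repo[i].customer, requester))
            return repo[i].data;
    return NULL;  /* request not authenticated for this image */
}

int main(void) {
    printf("%s\n", download("cust-a", "chi-0123") ? "granted" : "denied");
    printf("%s\n", download("cust-b", "chi-0123") ? "granted" : "denied");
    return 0;
}
```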
The Network Attached Storage (NAS) 540 can be used to provide storage space and access to files stored on the NAS 540. For example, the NAS 540 can include one or more server computers used for processing requests using a network file sharing protocol, such as the Network File System (NFS). The NAS 540 can include removable or non-removable media, including magnetic disks, Storage Area Networks (SANs), Redundant Arrays of Independent Disks (RAID), magnetic tape or cassettes, CD-ROMs, DVDs, or any other medium that can be used to store information in a non-transitory way and that can be accessed over the network 550.
Network 550 may be used to interconnect server computers 502A-502B, server computers 520, 530, and memory 540. The network 550 may be a Local Area Network (LAN) and may be connected to a Wide Area Network (WAN) 560 so that end users may access the computing service provider 500. It should be appreciated that the network topology shown in fig. 5 has been simplified and that more networks and network connection devices may be used to interconnect the various computing systems disclosed herein.
Fig. 6 is a flow chart of a method of controlling programmable hardware in a multi-tenant environment. In process block 610, a virtual machine instance is executed on a host server computer. The virtual machine can be local to the host server computer. In other embodiments, the virtual machine can be located on a separate host server computer. In a particular example, a virtual machine on a separate host server computer can communicate over a network with a management hypervisor on the host server computer where the programmable circuits are located.
In process block 620, a programmable IC can be mapped to the virtual machine instance. One or more host ICs can be positioned between the virtual machine instance and the programmable IC. For example, in Fig. 1, the host logic IC 122 can be positioned between the plurality of programmable ICs 120 and the virtual machines 140 and can perform a mapping so as to route communications from a virtual machine to the appropriate programmable IC, and vice versa.
In some embodiments, the programmable IC can include a customer portion and a host portion. For example, in FIG. 3, some host logic (i.e., the host logic wrapper 382 and the inter-FPGA transport 380) is included in the customer FPGA to facilitate communication with the host FPGA. The customer FPGAs 314, 316 can communicate with the host IC to access the shared peripherals 362. The host IC can include logic for ensuring that each customer receives its share of the resources (e.g., bandwidth) associated with the shared peripherals. Likewise, the host IC can ensure that each customer has sufficient access rights to the PCIe endpoint.
Fig. 7 is a flow chart of a method of controlling programmable hardware in a multi-tenant environment, according to another embodiment. In process block 710, a plurality of programmable ICs are provided on a host server computer. For example, in Fig. 1, the programmable ICs 120 are illustrated, and any desired number of such ICs can be included. The programmable ICs are typically FPGAs, but other programmable ICs can be used. A programmable IC allows programming that forms logic gates and other hardware logic. In process block 720, a plurality of virtual machines are launched on the host server computer. Returning again to FIG. 1, any number of virtual machines 140 can be launched on the host server computer. In process block 730, a host IC is positioned between the plurality of virtual machines and the programmable ICs. The host IC can include mapping logic (see, e.g., 264 of FIG. 2) and management logic (see, e.g., 266 of FIG. 2). The host IC also includes upstream and downstream interfaces for communicating with the virtual machines and the programmable ICs, respectively. In process block 740, the plurality of virtual machines can be mapped to the programmable ICs. The mapping is performed by the intermediate host IC, which holds the appropriate address ranges associated with the programmable ICs and the virtual machines. The multiple programmable ICs can be associated with different customers, and the host IC can perform management functions to ensure sandboxing of the programmable ICs, so that data associated with one customer's programmable IC is not available to another customer. In some embodiments, communication between the host IC and the programmable ICs can occur through serial communication ports (e.g., SerDes ports); other inter-FPGA transport interfaces can also be used. In some embodiments, the host IC can include a root complex (see 430 of FIG. 4) for communicating with endpoints (see 440 of FIG. 4) on the customer FPGAs.
FIG. 8 depicts a generalized example of a suitable computing environment 800 in which the described innovations may be implemented. The computing environment 800 is not intended to suggest any limitation as to scope of use or functionality, as the innovations may be implemented in diverse general-purpose or special-purpose computing systems. For example, the computing environment 800 can be any of a variety of computing devices (e.g., a desktop computer, a laptop computer, a server computer, a tablet computer, etc.).
With reference to FIG. 8, the computing environment 800 includes one or more processing units 810, 815 and memory 820, 825. In FIG. 8, this basic configuration 830 is included within the dashed line. The processing units 810, 815 execute computer-executable instructions. A processing unit may be a general-purpose central processing unit (CPU), a processor in an application-specific integrated circuit (ASIC), or any other type of processor. In a multi-processing system, multiple processing units execute computer-executable instructions to increase processing power. For example, FIG. 8 shows a central processing unit 810 as well as a graphics processing unit or co-processing unit 815. The tangible memory 820, 825 may be volatile memory (e.g., registers, cache, RAM), non-volatile memory (e.g., ROM, EEPROM, flash memory, etc.), or some combination of the two, accessible by the processing unit(s). The memory 820, 825 stores software 880 implementing one or more of the innovations described herein, in the form of computer-executable instructions suitable for execution by the processing unit(s).
The computing system may have additional features. For example, the computing environment 800 includes storage 840, one or more input devices 850, one or more output devices 860, and one or more communication connections 870. An interconnection mechanism (not shown) such as a bus, controller, or network interconnects the components of the computing environment 800. Typically, operating system software (not shown) provides an operating environment for other software executing in the computing environment 800 and coordinates activities of the components of the computing environment 800. The computing system may also include one or more cards 872 that include programmable ICs, as described herein.
The tangible storage 840 may be removable or non-removable and includes magnetic disks, magnetic tape or cassettes, CD-ROMs, DVDs, or any other medium that can be used to store information in a non-transitory way and that can be accessed within the computing environment 800. The storage 840 stores instructions for the software 880, which implements one or more of the innovations described herein.
The input device 850 may be a touch input device (e.g., keyboard, mouse, pen, or trackball), a voice input device, a scanning device, or other device that provides input to the computing environment 800. The output device 860 may be a display, printer, speaker, CD-writer, or other device that provides output from the computing environment 800.
The communication connection 870 is capable of communicating with another computing entity over a communication medium. The communication medium conveys information in a modulated data signal such as computer-executable instructions, audio or video input or output, or other data. A modulated data signal is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media may employ electrical, optical, RF, or other carriers.
Fig. 9 is a system diagram showing an exemplary computing system 900 that includes a host server computer 902 having a software portion 904 and a hardware portion 906, roughly separated by a dashed line 908. The hardware portion 906 includes one or more CPUs, memory, storage devices, etc., shown generally as other hardware 910. The hardware portion 906 may also include programmable integrated circuits (ICs), shown generally at 920. The programmable ICs may be FPGAs or other types of programmable logic, such as complex programmable logic devices (CPLDs). Any number of programmable ICs 920 may be used in the host server computer 902, as further described below. Further, the programmable ICs 920 may include logic from different customers, such that multiple customers operate on the same server computer 902 without being aware of one another's presence.
The hardware portion 906 also includes two or more intermediate host logic ICs 922, 923 that perform management, security, and mapping functions between the programmable ICs 920 and the software portion 904. The host logic ICs may themselves be reprogrammable logic, such as FPGAs, or may be non-reprogrammable hardware, such as an ASIC or an SoC.
Running in the software portion 904, on a layer above the hardware portion 906, is a hypervisor or kernel layer, shown in this example as including a management hypervisor 930. The management hypervisor 930 may generally include the device drivers needed to access the hardware 906. The software portion 904 may include a plurality of partitions, shown generally at 940, for running virtual machines. A partition is a logical unit of isolation provided by the hypervisor, in which a virtual machine executes. Each partition may be allocated its own portion of the hardware layer's memory, CPU cycles, storage, etc. In addition, each partition may include a virtual machine and its own guest operating system. Each virtual machine 940 may communicate with one of the host logic ICs 922, 923 through a hardware interface (not shown, but described further below). The host logic ICs 922, 923 may map communications to the appropriate programmable ICs 920 so that the programmable ICs 920 behave as if they were communicating directly with the virtual machines 940. In some embodiments, a thin layer of host logic 950 may be included in the programmable ICs 920 associated with a customer. The mapping is accomplished through address mapping, wherein logical and physical addresses may be stored in the host IC and linked to one another.
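As a hedged illustration of that logical-to-physical linkage, the sketch below models the stored addresses as a per-virtual-machine translation entry (the table layout, names, and example addresses are assumptions, not the disclosed circuit): each virtual machine addresses its programmable IC through a logical window, and the host logic IC rewrites the address into the physical window of the programmable IC mapped to that virtual machine.

# Sketch (assumed layout) of the logical/physical address linkage in the host IC.

TRANSLATION_TABLE = {  # vm id -> (logical_base, physical_base, size); examples
    "vm-940a": (0x0000, 0x8000_0000, 0x4000),
    "vm-940b": (0x0000, 0x9000_0000, 0x4000),
}

def translate(vm_id: str, logical_addr: int) -> int:
    """Rewrite a VM's logical address into its mapped IC's physical address."""
    logical_base, physical_base, size = TRANSLATION_TABLE[vm_id]
    offset = logical_addr - logical_base
    if not 0 <= offset < size:
        raise ValueError("address outside the VM's mapped window")
    return physical_base + offset

assert translate("vm-940a", 0x0100) == 0x8000_0100
assert translate("vm-940b", 0x0100) == 0x9000_0100

Because two virtual machines can use identical logical offsets and still reach disjoint physical windows, each behaves as if it were communicating directly with its own programmable IC.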
Although two host logic ICs 922, 923 are shown, any number of host logic ICs may be used. Further, although the host logic ICs are illustrated as being within the host server computer 902, in any of the embodiments described herein one or more of the host logic ICs may be located external to the host server computer 902, in which case the programmable ICs 920 may also be external to the host server computer 902.
Although the operations of some of the methods disclosed are described in a particular sequential order for convenience of presentation, it should be understood that this manner of description encompasses rearrangement, unless a particular ordering is required by particular language set forth below. For example, in some cases, the operations described in sequence may be rearranged or performed concurrently. Moreover, for the sake of simplicity, the attached figures may not illustrate the various ways in which the disclosed methods can be used in conjunction with other methods.
Any of the methods disclosed may be implemented as computer-executable instructions stored on one or more computer-readable storage media (e.g., one or more optical media discs, volatile storage components (e.g., DRAM or SRAM) or non-volatile storage components (e.g., flash memory or hard drive)) and executed on a computer (e.g., any commercially available computer, including smartphones or other mobile devices having computing hardware). The term computer readable storage media does not include communication connections, such as signals and carrier waves. Any computer-executable instructions for implementing the disclosed techniques, as well as any data created and used during implementation of the disclosed embodiments, may be stored on one or more computer-readable storage media. The computer-executable instructions may be, for example, part of a dedicated software application or a software application that is accessible or downloadable via a web browser or other software application (e.g., a remote computing application). The software may be executed, for example, on a single local computer (e.g., any suitable commercial computer) or in a network environment using one or more network computers (e.g., via the internet, a wide area network, a local area network, a client server network (e.g., a cloud computing network), or other similar network).
For clarity, only certain selected aspects of the software-based implementations are described. Other details that are well known in the art are omitted. For example, it should be understood that the disclosed technology is not limited to any specific computer language or program. For instance, the disclosed technology may be implemented by software written in C++, Java, Perl, JavaScript, Adobe Flash, or any other suitable programming language. Likewise, the disclosed technology is not limited to any particular computer or type of hardware. Certain details of suitable computers and hardware are well known and need not be set forth in detail here.
It should also be well understood that any of the functionality described herein may be performed, at least in part, by one or more hardware logic components rather than software. For example, and without limitation, illustrative types of hardware logic components that may be used include field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems-on-a-chip (SOCs), complex programmable logic devices (CPLDs), and the like. Any of these devices may be used in the embodiments described herein.
Furthermore, any software-based embodiment (e.g., including computer-executable instructions for causing a computer to perform any of the disclosed methods) may be uploaded, downloaded, or accessed remotely via suitable communication means. Such suitable communication means include, for example, the internet, the world wide web, an intranet, software applications, cables (including fiber optic cables), magnetic communications, electromagnetic communications (including RF, microwave and infrared communications), electronic communications, or other similar communication means.
Embodiments of the invention may be described in terms of the following clauses:
1. an apparatus in a multi-tenant environment, the apparatus comprising:
a host server computer having a processor configured to execute a management hypervisor and at least first and second virtual machine instances;
a first programmable integrated circuit (IC) located within the host server computer, the first programmable IC being programmable to include hardware logic associated with the first virtual machine instance;
a second programmable IC located within the host server computer, the second programmable IC being programmable to include hardware logic associated with the second virtual machine instance; and
a host IC located between the first virtual machine instance and the first programmable IC and between the second virtual machine instance and the second programmable IC, the host IC mapping the first programmable IC to the first virtual machine instance and the second programmable IC to the second virtual machine instance.
2. The apparatus of clause 1, wherein the host IC comprises an interface endpoint for communicating with the first and the second virtual machine instances, and an interface for communicating with the first and the second programmable ICs.
3. The apparatus of any preceding clause, wherein the host IC comprises mapping logic to associate the first virtual machine instance with the first programmable IC or the second programmable IC.
4. The apparatus of any preceding clause, wherein each of the first and the second programmable ICs has sandboxed hardware logic programmed therein.
5. The apparatus of any preceding clause, wherein the host IC comprises a shared peripheral, and wherein the host IC controls an amount of resources that can be used by each of the first and the second programmable ICs.
6. The apparatus of any preceding clause, wherein the host IC comprises routing logic and the first and the second programmable ICs comprise interface endpoints for communicating with the host IC.
7. A method of controlling programmable hardware in a multi-tenant environment, the method comprising:
executing a virtual machine instance on a host server computer in the multi-tenant environment, the host server computer comprising a plurality of programmable integrated circuits (ICs); and
mapping a first programmable IC of the plurality of programmable ICs to the virtual machine instance using one or more host ICs located between the virtual machine instance and the plurality of programmable ICs.
8. The method of clause 7, wherein the host IC has an interface endpoint for communicating with the virtual machine instance, and routing logic for communicating with an endpoint within the first programmable IC.
9. The method of clause 7 or 8, wherein the first programmable IC comprises a host portion and a portion associated with the virtual machine instance, the host portion comprising an interface for communicating with the host IC.
10. The method of any of clauses 7-9, wherein the host IC comprises a shared peripheral having a serial port.
11. The method of any of clauses 7 to 10, wherein the host server computer comprises a management hypervisor, and the method further comprises: starting the virtual machine instance using the management hypervisor, and configuring the one or more host ICs.
12. The method of any of clauses 7 to 11, wherein the plurality of programmable ICs are field programmable gate arrays (FPGAs).
13. The method of any of clauses 7 to 12, wherein the host IC is a field programmable gate array (FPGA).
14. The method of any of clauses 7 to 13, wherein the plurality of programmable ICs are coupled to the host IC by a peripheral bus.
15. A method, comprising:
providing a plurality of programmable integrated circuits (ICs) on a host server computer;
starting a plurality of virtual machines on the host server computer;
providing a host IC located between the plurality of virtual machines and the plurality of programmable ICs; and
mapping the plurality of virtual machines to the plurality of programmable ICs.
16. The method of clause 15, wherein the host IC includes a shared resource, and the host IC allocates resources associated with the shared resource to the plurality of programmable ICs.
17. The method of clause 15 or 16, wherein communication between the plurality of programmable ICs is prevented.
18. The method of any of clauses 15 to 17, wherein the host IC is a field programmable gate array (FPGA) or a system on chip (SoC).
19. The method of any of clauses 15 to 18, wherein the host IC communicates with the plurality of programmable ICs through a serial port.
20. The method of any of clauses 15 to 19, wherein the host IC comprises an endpoint for communicating with the plurality of virtual machines and a root complex for communicating with the plurality of programmable ICs.
The disclosed methods, apparatus, and systems should not be construed as limiting in any way. Rather, the invention is directed to all novel and nonobvious features and aspects of the various embodiments disclosed, both separately and in various combinations and subcombinations with one another. The disclosed methods, apparatus and systems are not limited to any specific aspect or feature or combination thereof, nor do the disclosed embodiments require that any one or more specific advantages be present or problems be solved.
In view of the many possible embodiments to which the disclosed inventive principles may be applied, it should be recognized that the described embodiments are only preferred examples of the invention and should not be taken as limiting the scope of the invention. Rather, the scope of the invention is defined by the appended claims. Accordingly, applicants claim all that come within the scope of the following claims as applicants' invention.

Claims (15)

1. An apparatus in a multi-tenant environment, the apparatus comprising:
a host server computer having a processor configured to execute a management hypervisor and at least a first virtual machine instance and a second virtual machine instance;
a first programmable integrated circuit (IC) located within the host server computer, the first programmable IC being programmable to include hardware logic associated with the first virtual machine instance;
a second programmable IC located within the host server computer, the second programmable IC being programmable to include hardware logic associated with the second virtual machine instance; and
a host IC located between the first virtual machine instance and the first programmable IC and between the second virtual machine instance and the second programmable IC, the host IC mapping the first programmable IC to the first virtual machine instance and the second programmable IC to the second virtual machine instance,
wherein the host IC includes a first peripheral component interconnect express (PCIe) interface for communicating with the first virtual machine instance and the second virtual machine instance, and a second PCIe interface for communicating with the first programmable IC and the second programmable IC, and wherein the host IC includes management hardware positioned between the first PCIe interface and the second PCIe interface, the management hardware sandboxing the first programmable IC and the second programmable IC such that the second programmable IC cannot obtain security information associated with the first programmable IC.
2. The apparatus of claim 1, wherein the host IC comprises an interface endpoint for communicating with the first virtual machine instance and the second virtual machine instance, and an interface for communicating with the first programmable IC and the second programmable IC.
3. The apparatus of claim 1, wherein the host IC comprises mapping logic to associate the first virtual machine instance with the first programmable IC or the second programmable IC.
4. The apparatus of claim 1, wherein each of the first programmable IC and the second programmable IC has sandboxed hardware logic programmed therein.
5. The apparatus of claim 1, wherein the host IC comprises a shared peripheral, and wherein the host IC controls an amount of resources that each of the first programmable IC and the second programmable IC can use.
6. The apparatus of claim 1, wherein the host IC comprises routing logic and the first programmable IC and the second programmable IC comprise interface endpoints for communicating with the host IC.
7. A method of controlling programmable hardware in a multi-tenant environment, the method comprising:
executing a first virtual machine instance and a second virtual machine instance on a host server computer in the multi-tenant environment, the host server computer comprising a plurality of programmable integrated circuits (ICs) including a first programmable IC and a second programmable IC; and
mapping the first programmable IC of the plurality of programmable ICs to the first virtual machine instance using one or more host ICs located between the first and second virtual machine instances and the plurality of programmable ICs,
wherein the one or more host ICs include a first peripheral component interconnect express (PCIe) interface for communicating with the first virtual machine instance and the second virtual machine instance, and a second PCIe interface for communicating with the first programmable IC and the second programmable IC, and wherein the one or more host ICs include management hardware positioned between the first PCIe interface and the second PCIe interface, the management hardware sandboxing the first programmable IC and the second programmable IC such that the second programmable IC cannot obtain security information associated with the first programmable IC.
8. The method of claim 7, wherein the host IC has an interface endpoint for communicating with the first virtual machine instance, and routing logic for communicating with an endpoint within the first programmable IC.
9. The method of claim 7, wherein the first programmable IC includes a host portion and a portion associated with the first virtual machine instance, the host portion including an interface for communicating with the host IC.
10. The method of claim 7, wherein the host IC comprises a shared peripheral device having a serial port.
11. The method of claim 7, wherein the host server computer includes a management hypervisor, and the method further comprises: starting the first virtual machine instance and the second virtual machine instance using the management hypervisor, and configuring the one or more host ICs.
12. The method of claim 7, wherein the plurality of programmable ICs are field programmable gate arrays (FPGAs).
13. The method of claim 7, wherein the host IC is a field programmable gate array (FPGA).
14. The method of claim 7, wherein the plurality of programmable ICs are coupled to the host IC by a peripheral bus.
15. The method of claim 7, wherein communication between the plurality of programmable ICs is prevented.
CN201780060352.8A 2016-09-28 2017-09-28 Intermediate host integrated circuit between virtual machine instance and guest programmable logic Active CN109791500B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US15/279,164 US11099894B2 (en) 2016-09-28 2016-09-28 Intermediate host integrated circuit between virtual machine instance and customer programmable logic
US15/279,164 2016-09-28
PCT/US2017/054175 WO2018064415A1 (en) 2016-09-28 2017-09-28 Intermediate host integrated circuit between a virtual machine instance and customer programmable logic

Publications (2)

Publication Number Publication Date
CN109791500A CN109791500A (en) 2019-05-21
CN109791500B true CN109791500B (en) 2024-01-23

Family

ID=60117771

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201780060352.8A Active CN109791500B (en) 2016-09-28 2017-09-28 Intermediate host integrated circuit between virtual machine instance and guest programmable logic

Country Status (5)

Country Link
US (1) US11099894B2 (en)
EP (1) EP3519953B1 (en)
JP (1) JP6864749B2 (en)
CN (1) CN109791500B (en)
WO (1) WO2018064415A1 (en)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11099894B2 (en) 2016-09-28 2021-08-24 Amazon Technologies, Inc. Intermediate host integrated circuit between virtual machine instance and customer programmable logic
US10338135B2 (en) 2016-09-28 2019-07-02 Amazon Technologies, Inc. Extracting debug information from FPGAs in multi-tenant environments
US10223317B2 (en) * 2016-09-28 2019-03-05 Amazon Technologies, Inc. Configurable logic platform
US10795742B1 (en) 2016-09-28 2020-10-06 Amazon Technologies, Inc. Isolating unresponsive customer logic from a bus
US10250572B2 (en) 2016-09-29 2019-04-02 Amazon Technologies, Inc. Logic repository service using encrypted configuration data
US10282330B2 (en) 2016-09-29 2019-05-07 Amazon Technologies, Inc. Configurable logic platform with multiple reconfigurable regions
US10162921B2 (en) 2016-09-29 2018-12-25 Amazon Technologies, Inc. Logic repository service
US10423438B2 (en) 2016-09-30 2019-09-24 Amazon Technologies, Inc. Virtual machines controlling separate subsets of programmable hardware
US10642492B2 (en) 2016-09-30 2020-05-05 Amazon Technologies, Inc. Controlling access to previously-stored logic in a reconfigurable logic device
US11115293B2 (en) 2016-11-17 2021-09-07 Amazon Technologies, Inc. Networked programmable logic service provider
US10747565B2 (en) * 2017-04-18 2020-08-18 Amazon Technologies, Inc. Virtualization of control and status signals
US10776145B2 (en) * 2017-04-21 2020-09-15 Dell Products L.P. Systems and methods for traffic monitoring in a virtualized software defined storage architecture
US10503922B2 (en) 2017-05-04 2019-12-10 Dell Products L.P. Systems and methods for hardware-based security for inter-container communication
US10402219B2 (en) 2017-06-07 2019-09-03 Dell Products L.P. Managing shared services in reconfigurable FPGA regions
US10503551B2 (en) * 2017-06-07 2019-12-10 Dell Products L.P. Coordinating FPGA services using cascaded FPGA service managers
US11474555B1 (en) * 2017-08-23 2022-10-18 Xilinx, Inc. Data-driven platform characteristics capture and discovery for hardware accelerators
US10853134B2 (en) * 2018-04-18 2020-12-01 Xilinx, Inc. Software defined multi-domain creation and isolation for a heterogeneous System-on-Chip
GB2574800B (en) * 2018-06-06 2021-01-06 Kaleao Ltd A system and method for bridging computer resources
CN109144722B (en) * 2018-07-20 2020-11-24 上海研鸥信息科技有限公司 Management system and method for efficiently sharing FPGA resources by multiple applications
US10673439B1 (en) 2019-03-27 2020-06-02 Xilinx, Inc. Adaptive integrated programmable device platform
CN112860420A (en) * 2019-11-27 2021-05-28 阿里巴巴集团控股有限公司 Data processing method and device based on hardware virtualization
CN111797439A (en) * 2020-05-18 2020-10-20 联想企业解决方案(新加坡)有限公司 Method and apparatus for providing virtual device
KR20210143611A (en) 2020-05-20 2021-11-29 삼성전자주식회사 Storage device supporting multi tenancy and operating method thereof
CN112948022B (en) * 2021-03-22 2021-11-16 弘大芯源(深圳)半导体有限公司 Method for realizing soft logic hardware
CN113821308B (en) * 2021-09-29 2023-11-24 上海阵量智能科技有限公司 System on chip, virtual machine task processing method and device and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101615106A (en) * 2008-06-23 2009-12-30 国际商业机器公司 The method and system that is used for virtualizing SAS storage adapter

Family Cites Families (136)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2321322B (en) 1996-10-28 2001-10-10 Altera Corp Remote software technical support
US6011407A (en) 1997-06-13 2000-01-04 Xilinx, Inc. Field programmable gate array with dedicated computer bus interface and method for configuring both
US8686549B2 (en) 2001-09-03 2014-04-01 Martin Vorbach Reconfigurable elements
US6034542A (en) 1997-10-14 2000-03-07 Xilinx, Inc. Bus structure for modularized chip with FPGA modules
JP3809727B2 (en) 1998-06-17 2006-08-16 富士ゼロックス株式会社 Information processing system, circuit information management method, and circuit information storage device
DE69910826T2 (en) 1998-11-20 2004-06-17 Altera Corp., San Jose COMPUTER SYSTEM WITH RECONFIGURABLE PROGRAMMABLE LOGIC DEVICE
US6539438B1 (en) 1999-01-15 2003-03-25 Quickflex Inc. Reconfigurable computing system and method and apparatus employing same
US7678048B1 (en) 1999-09-14 2010-03-16 Siemens Medical Solutions Usa, Inc. Medical diagnostic ultrasound system and method
US6595921B1 (en) 1999-09-14 2003-07-22 Acuson Corporation Medical diagnostic ultrasound imaging system and method for constructing a composite ultrasound image
US6785816B1 (en) 2000-05-01 2004-08-31 Nokia Corporation System and method for secured configuration data for programmable logic devices
US6826717B1 (en) 2000-06-12 2004-11-30 Altera Corporation Synchronization of hardware and software debuggers
WO2002001425A2 (en) 2000-06-23 2002-01-03 Xilinx, Inc. Method for remotely utilizing configurable hardware
US8058899B2 (en) 2000-10-06 2011-11-15 Martin Vorbach Logic cell array and bus system
US6802026B1 (en) 2001-05-15 2004-10-05 Xilinx, Inc. Parameterizable and reconfigurable debugger core generators
JP2002366597A (en) 2001-06-07 2002-12-20 Pfu Ltd System and program of fpga design
US6476634B1 (en) 2002-02-01 2002-11-05 Xilinx, Inc. ALU implementation in single PLD logic cell
US6693452B1 (en) 2002-02-25 2004-02-17 Xilinx, Inc. Floor planning for programmable gate array having embedded fixed logic circuitry
US8914590B2 (en) 2002-08-07 2014-12-16 Pact Xpp Technologies Ag Data processing method and device
GB0304628D0 (en) 2003-02-28 2003-04-02 Imec Inter Uni Micro Electr Method for hardware-software multitasking on a reconfigurable computing platform
US6938488B2 (en) 2002-08-21 2005-09-06 Battelle Memorial Institute Acoustic inspection device
US7117481B1 (en) 2002-11-06 2006-10-03 Vmware, Inc. Composite lock for computer systems with multiple domains
US6907595B2 (en) 2002-12-13 2005-06-14 Xilinx, Inc. Partial reconfiguration of a programmable logic device using an on-chip processor
US7313794B1 (en) 2003-01-30 2007-12-25 Xilinx, Inc. Method and apparatus for synchronization of shared memory in a multiprocessor system
JPWO2004075056A1 (en) 2003-02-21 2006-06-01 独立行政法人産業技術総合研究所 Virus check device and system
US7177961B2 (en) 2003-05-12 2007-02-13 International Business Machines Corporation Managing access, by operating system images of a computing environment, of input/output resources of the computing environment
US7505891B2 (en) 2003-05-20 2009-03-17 Verisity Design, Inc. Multi-user server system and method
JP2005107911A (en) 2003-09-30 2005-04-21 Daihen Corp Program for generating write information, program for writing information in hardware, computer-readable recording medium with its program recorded, device for generating write information and device for writing information
US7552426B2 (en) 2003-10-14 2009-06-23 Microsoft Corporation Systems and methods for using synthetic instructions in a virtual machine
US20050198235A1 (en) 2004-01-29 2005-09-08 Arvind Kumar Server configuration and management
US7243221B1 (en) 2004-02-26 2007-07-10 Xilinx, Inc. Method and apparatus for controlling a processor in a data processing system
US7281082B1 (en) 2004-03-26 2007-10-09 Xilinx, Inc. Flexible scheme for configuring programmable semiconductor devices using or loading programs from SPI-based serial flash memories that support multiple SPI flash vendors and device families
US20050223227A1 (en) 2004-03-31 2005-10-06 Deleeuw William C Addressable authentication in a scalable, reconfigurable communication architecture
US7721036B2 (en) 2004-06-01 2010-05-18 Quickturn Design Systems Inc. System and method for providing flexible signal routing and timing
US7987373B2 (en) 2004-09-30 2011-07-26 Synopsys, Inc. Apparatus and method for licensing programmable hardware sub-designs using a host-identifier
US8621597B1 (en) 2004-10-22 2013-12-31 Xilinx, Inc. Apparatus and method for automatic self-erasing of programmable logic devices
US8458467B2 (en) 2005-06-21 2013-06-04 Cisco Technology, Inc. Method and apparatus for adaptive application message payload content transformation in a network infrastructure element
US7886126B2 (en) 2005-01-14 2011-02-08 Intel Corporation Extended paging tables to map guest physical memory addresses from virtual memory page tables to host physical memory addresses in a virtual machine system
US7716497B1 (en) 2005-06-14 2010-05-11 Xilinx, Inc. Bitstream protection without key storage
US7451426B2 (en) 2005-07-07 2008-11-11 Lsi Corporation Application specific configurable logic IP
US7706417B1 (en) 2005-10-25 2010-04-27 Xilinx, Inc. Method of and circuit for generating a plurality of data streams
US7739092B1 (en) 2006-01-31 2010-06-15 Xilinx, Inc. Fast hardware co-simulation reset using partial bitstreams
JP2007243671A (en) 2006-03-09 2007-09-20 Kddi Corp Logic programmable device protective circuit
US7715433B2 (en) 2006-07-14 2010-05-11 Boren Gary W Universal controller and signal monitor
WO2008014494A2 (en) 2006-07-28 2008-01-31 Drc Computer Corporation Fpga co-processor for accelerated computation
US7809936B2 (en) 2006-08-02 2010-10-05 Freescale Semiconductor, Inc. Method and apparatus for reconfiguring a remote device
US7734859B2 (en) 2007-04-20 2010-06-08 Nuon, Inc Virtualization of a host computer's native I/O system architecture via the internet and LANs
US7564727B1 (en) 2007-06-25 2009-07-21 Xilinx, Inc. Apparatus and method for configurable power management
US8219989B2 (en) 2007-08-02 2012-07-10 International Business Machines Corporation Partition adjunct with non-native device driver for facilitating access to a physical input/output device
US7902866B1 (en) * 2007-08-27 2011-03-08 Virginia Tech Intellectual Properties, Inc. Wires on demand: run-time communication synthesis for reconfigurable computing
US7904629B2 (en) 2007-10-02 2011-03-08 NVON, Inc. Virtualized bus device
JP4593614B2 (en) 2007-12-27 2010-12-08 富士通株式会社 Image data verification method and image data verification system
US8145894B1 (en) 2008-02-25 2012-03-27 Drc Computer Corporation Reconfiguration of an accelerator module having a programmable logic device
JP5246863B2 (en) 2008-11-14 2013-07-24 独立行政法人産業技術総合研究所 Logic program data protection system and protection method for reconfigurable logic device
US9064058B2 (en) 2008-12-24 2015-06-23 Nuon, Inc. Virtualized PCI endpoint for extended systems
US8776090B2 (en) 2009-02-17 2014-07-08 Broadcom Corporation Method and system for network abstraction and virtualization for a single operating system (OS)
WO2010100871A1 (en) 2009-03-03 2010-09-10 日本電気株式会社 Delay library generation system
WO2010106738A1 (en) 2009-03-18 2010-09-23 日本電気株式会社 Reconfigurable logic circuit
US8560758B2 (en) 2009-08-24 2013-10-15 Red Hat Israel, Ltd. Mechanism for out-of-synch virtual machine memory management optimization
US8626970B2 (en) 2010-06-23 2014-01-07 International Business Machines Corporation Controlling access by a configuration to an adapter function
US8516272B2 (en) 2010-06-30 2013-08-20 International Business Machines Corporation Secure dynamically reconfigurable logic
JP5646764B2 (en) 2010-10-22 2014-12-24 サムスン ヘビー インダストリーズ カンパニー リミテッド Control system and method reconfigurable during operation
US8561065B2 (en) 2010-11-15 2013-10-15 International Business Machines Corporation Virtualization of vendor specific network interfaces of self-virtualizing input/output device virtual functions
US8881141B2 (en) 2010-12-08 2014-11-04 International Business Machines Corporation Virtualization of hardware queues in self-virtualizing input/output devices
CN102736945B (en) * 2011-03-31 2016-05-18 国际商业机器公司 A kind of method and system of the Multi-instance running application
US9218195B2 (en) 2011-05-17 2015-12-22 International Business Machines Corporation Vendor-independent resource configuration interface for self-virtualizing input/output device
JP5653865B2 (en) 2011-08-23 2015-01-14 日本電信電話株式会社 Data processing system
KR20140061479A (en) 2011-08-31 2014-05-21 톰슨 라이센싱 Method for a secured backup and restore of configuration data of an end-user device, and device using the method
US8726337B1 (en) 2011-09-30 2014-05-13 Emc Corporation Computing with presentation layer for multiple virtual machines
KR101614859B1 (en) 2011-12-02 2016-04-22 엠파이어 테크놀로지 디벨롭먼트 엘엘씨 Integrated circuits as a service
US9448846B2 (en) 2011-12-13 2016-09-20 International Business Machines Corporation Dynamically configurable hardware queues for dispatching jobs to a plurality of hardware acceleration engines
US9465632B2 (en) 2012-02-04 2016-10-11 Global Supercomputing Corporation Parallel hardware hypervisor for virtualizing application-specific supercomputers
US8775576B2 (en) 2012-04-17 2014-07-08 Nimbix, Inc. Reconfigurable cloud computing
US9619292B2 (en) 2012-04-30 2017-04-11 Alcatel Lucent Resource placement in networked cloud based on resource constraints
US9009703B2 (en) 2012-05-10 2015-04-14 International Business Machines Corporation Sharing reconfigurable computing devices between workloads
US9104453B2 (en) 2012-06-21 2015-08-11 International Business Machines Corporation Determining placement fitness for partitions under a hypervisor
CN103577266B (en) 2012-07-31 2017-06-23 国际商业机器公司 For the method and system being allocated to field programmable gate array resource
US8799992B2 (en) 2012-10-24 2014-08-05 Watchguard Technologies, Inc. Systems and methods for the rapid deployment of network security devices
WO2014116206A1 (en) 2013-01-23 2014-07-31 Empire Technology Development Llc Management of hardware accelerator configurations in a processor chip
US9361416B2 (en) * 2013-01-30 2016-06-07 Empire Technology Development Llc Dynamic reconfiguration of programmable hardware
US9766910B1 (en) 2013-03-07 2017-09-19 Amazon Technologies, Inc. Providing field-programmable devices in a distributed execution environment
US8928351B1 (en) 2013-03-13 2015-01-06 Xilinx, Inc. Emulating power domains in an integrated circuit using partial reconfiguration
JP2014178784A (en) 2013-03-13 2014-09-25 Ricoh Co Ltd Information processing device, information processing system, and information processing program
US9396012B2 (en) 2013-03-14 2016-07-19 Qualcomm Incorporated Systems and methods of using a hypervisor with guest operating systems and virtual processors
US9747185B2 (en) * 2013-03-26 2017-08-29 Empire Technology Development Llc Acceleration benefit estimator
JP6102511B2 (en) 2013-05-23 2017-03-29 富士通株式会社 Integrated circuit, control apparatus, control method, and control program
WO2014189529A1 (en) 2013-05-24 2014-11-27 Empire Technology Development, Llc Datacenter application packages with hardware accelerators
US9672167B2 (en) 2013-07-22 2017-06-06 Futurewei Technologies, Inc. Resource management for peripheral component interconnect-express domains
US8910109B1 (en) 2013-08-12 2014-12-09 Altera Corporation System level tools to support FPGA partial reconfiguration
WO2015030731A1 (en) 2013-08-27 2015-03-05 Empire Technology Development Llc Speculative allocation of instances
US9098662B1 (en) 2013-08-28 2015-08-04 Altera Corporation Configuring a device to debug systems in real-time
WO2015042684A1 (en) 2013-09-24 2015-04-02 University Of Ottawa Virtualization of hardware accelerator
US9237165B2 (en) 2013-11-06 2016-01-12 Empire Technology Development Llc Malicious attack prevention through cartography of co-processors at datacenter
US10461937B1 (en) 2013-12-18 2019-10-29 Amazon Technologies, Inc. Hypervisor supported secrets compartment
JP6190471B2 (en) 2013-12-27 2017-08-30 株式会社日立製作所 Partition execution control device, partition execution control method, and computer-readable storage medium
US9904749B2 (en) 2014-02-13 2018-02-27 Synopsys, Inc. Configurable FPGA sockets
US9483639B2 (en) 2014-03-13 2016-11-01 Unisys Corporation Service partition virtualization system and method having a secure application
US9298865B1 (en) 2014-03-20 2016-03-29 Altera Corporation Debugging an optimized design implemented in a device with a pre-optimized design simulation
US9503093B2 (en) 2014-04-24 2016-11-22 Xilinx, Inc. Virtualization of programmable integrated circuits
US9811365B2 (en) * 2014-05-09 2017-11-07 Amazon Technologies, Inc. Migration of applications between an enterprise-based network and a multi-tenant network
US9851998B2 (en) 2014-07-30 2017-12-26 Microsoft Technology Licensing, Llc Hypervisor-hosted virtual machine forensics
US10230591B2 (en) 2014-09-30 2019-03-12 Microsoft Technology Licensing, Llc Network resource governance in multi-tenant datacenters
US9672935B2 (en) 2014-10-17 2017-06-06 Lattice Semiconductor Corporation Memory circuit having non-volatile memory cell and methods of using
US9372956B1 (en) 2014-11-10 2016-06-21 Xilinx, Inc. Increased usable programmable device dice
US10394731B2 (en) 2014-12-19 2019-08-27 Amazon Technologies, Inc. System on a chip comprising reconfigurable resources for multiple compute sub-systems
US9703703B2 (en) 2014-12-23 2017-07-11 Intel Corporation Control of entry into protected memory views
WO2016118978A1 (en) 2015-01-25 2016-07-28 Objective Interface Systems, Inc. A multi-session zero client device and network for transporting separated flows to device sessions via virtual nodes
US9762392B2 (en) 2015-03-26 2017-09-12 Eurotech S.P.A. System and method for trusted provisioning and authentication for networked devices in cloud-based IoT/M2M platforms
US9983938B2 (en) 2015-04-17 2018-05-29 Microsoft Technology Licensing, Llc Locally restoring functionality at acceleration components
US10027543B2 (en) 2015-04-17 2018-07-17 Microsoft Technology Licensing, Llc Reconfiguring an acceleration component among interconnected acceleration components
EP3089035A1 (en) 2015-04-30 2016-11-02 Virtual Open Systems Virtualization manager for reconfigurable hardware accelerators
US20160323143A1 (en) 2015-05-02 2016-11-03 Hyeung-Yun Kim Method and apparatus for neuroplastic internet of things by cloud computing infrastructure as a service incorporating reconfigurable hardware
US9678681B2 (en) 2015-06-17 2017-06-13 International Business Machines Corporation Secured multi-tenancy data in cloud-based storage environments
US9684743B2 (en) 2015-06-19 2017-06-20 Synopsys, Inc. Isolated debugging in an FPGA based emulation environment
US10387209B2 (en) 2015-09-28 2019-08-20 International Business Machines Corporation Dynamic transparent provisioning of resources for application specific resources
US10013212B2 (en) * 2015-11-30 2018-07-03 Samsung Electronics Co., Ltd. System architecture with memory channel DRAM FPGA module
US9590635B1 (en) 2015-12-03 2017-03-07 Altera Corporation Partial reconfiguration of programmable devices
US20170187831A1 (en) 2015-12-29 2017-06-29 Itron, Inc. Universal Abstraction Layer and Management of Resource Devices
US10069681B2 (en) 2015-12-31 2018-09-04 Amazon Technologies, Inc. FPGA-enabled compute instances
US9940483B2 (en) 2016-01-25 2018-04-10 Raytheon Company Firmware security interface for field programmable gate arrays
JP6620595B2 (en) 2016-02-25 2019-12-18 富士通株式会社 Information processing system, information processing apparatus, management apparatus, processing program, and processing method
US10169065B1 (en) 2016-06-29 2019-01-01 Altera Corporation Live migration of hardware accelerated applications
US10833969B2 (en) 2016-07-22 2020-11-10 Intel Corporation Methods and apparatus for composite node malleability for disaggregated architectures
US10402566B2 (en) 2016-08-01 2019-09-03 The Aerospace Corporation High assurance configuration security processor (HACSP) for computing devices
US10511589B2 (en) 2016-09-14 2019-12-17 Oracle International Corporation Single logout functionality for a multi-tenant identity and data security management cloud service
US10846390B2 (en) 2016-09-14 2020-11-24 Oracle International Corporation Single sign-on functionality for a multi-tenant identity and data security management cloud service
US10528765B2 (en) 2016-09-16 2020-01-07 Intel Corporation Technologies for secure boot provisioning and management of field-programmable gate array images
US10223317B2 (en) 2016-09-28 2019-03-05 Amazon Technologies, Inc. Configurable logic platform
US11099894B2 (en) 2016-09-28 2021-08-24 Amazon Technologies, Inc. Intermediate host integrated circuit between virtual machine instance and customer programmable logic
US10338135B2 (en) 2016-09-28 2019-07-02 Amazon Technologies, Inc. Extracting debug information from FPGAs in multi-tenant environments
US10282330B2 (en) 2016-09-29 2019-05-07 Amazon Technologies, Inc. Configurable logic platform with multiple reconfigurable regions
US10250572B2 (en) 2016-09-29 2019-04-02 Amazon Technologies, Inc. Logic repository service using encrypted configuration data
US10162921B2 (en) 2016-09-29 2018-12-25 Amazon Technologies, Inc. Logic repository service
US10642492B2 (en) 2016-09-30 2020-05-05 Amazon Technologies, Inc. Controlling access to previously-stored logic in a reconfigurable logic device
US10423438B2 (en) 2016-09-30 2019-09-24 Amazon Technologies, Inc. Virtual machines controlling separate subsets of programmable hardware
US11023258B2 (en) 2016-12-30 2021-06-01 Intel Corporation Self-morphing server platforms
WO2019083991A1 (en) 2017-10-23 2019-05-02 Yuan Zhichao Programmable hardware based data encryption and decryption systems and methods

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101615106A (en) * 2008-06-23 2009-12-30 国际商业机器公司 The method and system that is used for virtualizing SAS storage adapter

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"A PCIe DMA engine to support the virtualization of 40 Gbps FPGA-accelerated network appliances";ZAZO JOSE FERNANDO ET AL;《IEEE》;20151207;正文第1页至第5页 *
"Enabling FPGAs in the Cloud";FEI CHEN ET AL;《ACM》;20140520;正文第1页至第10页 *

Also Published As

Publication number Publication date
CN109791500A (en) 2019-05-21
EP3519953A1 (en) 2019-08-07
JP2019535092A (en) 2019-12-05
US11099894B2 (en) 2021-08-24
JP6864749B2 (en) 2021-04-28
US20180088992A1 (en) 2018-03-29
WO2018064415A1 (en) 2018-04-05
EP3519953B1 (en) 2023-11-01

Similar Documents

Publication Publication Date Title
CN109791500B (en) Intermediate host integrated circuit between virtual machine instance and guest programmable logic
US10423438B2 (en) Virtual machines controlling separate subsets of programmable hardware
US11860810B2 (en) Configurable logic platform
US11182320B2 (en) Configurable logic platform with multiple reconfigurable regions
US11275503B2 (en) Controlling access to previously-stored logic in a reconfigurable logic device
CN110998555B Logic repository service supporting adaptable host logic
CN110520847B (en) Virtualization of control and status signals
US10860357B1 (en) Secure reconfiguring programmable hardware with host logic comprising a static portion and a reconfigurable portion
US20240134811A1 (en) Configurable logic platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant