WO2023034512A1 - Virtualized medium access ecosystem and methods - Google Patents

Virtualized medium access ecosystem and methods

Info

Publication number
WO2023034512A1
Authority
WO
WIPO (PCT)
Prior art keywords
fpga
vnf
fpgas
accelerators
vnfs
Prior art date
Application number
PCT/US2022/042356
Other languages
French (fr)
Inventor
Juan DEATON
Original Assignee
Envistacom, Llc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Envistacom, Llc filed Critical Envistacom, Llc
Publication of WO2023034512A1 publication Critical patent/WO2023034512A1/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/12Discovery or management of network topologies
    • H04L41/122Discovery or management of network topologies of virtualised topologies, e.g. software-defined networks [SDN] or network function virtualisation [NFV]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/40Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45595Network integration; Enabling network access in virtual machine instances
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/30Circuit design
    • G06F30/34Circuit design for reconfigurable circuits, e.g. field programmable gate arrays [FPGA] or programmable logic devices [PLD]

Definitions

  • the DPR FPGA system described herein may comprise a shell 106.
  • the shell is a “static” region of FPGA firmware configured to provide interfaces and connectivity among slots, network interfaces, and host interfaces external to the FPGA.
  • the shell provides an analogous firmware container for deploying dynamic firmware (e.g., accelerators) into reconfigurable regions (e.g., slots).
  • the physical resources and geometries of the slots and the shell comprise the floor plan 107 of the FPGA.
  • An FPGA may support any floorplan, given that there are sufficient resources on the FPGA.
  • Network interfaces (NI) 103 are configured to exchange data between the FPGA hardware platform and other servers on the network.
  • the host interface (HI) 109 is configured to provide access to accelerated software applications through the software access stack(s) 108.
  • Each software access stack comprises accelerated software applications, an application programming interface, a hardware abstraction layer, and hardware comprising a CPU, memory, and drivers, configured to provide the accelerated software applications the ability to send and exchange data with the accelerator deployed in a slot.
  • the FPGA Network Interfaces (NI) 103 are coupled to external Internet Protocol (IP) networks 104.
  • the FPGA is also connected through a Host Interface (HI) 109 configured to exchange data with the CPU host, which in turn exchanges data with the software access stack. This data exchange is further supported through a hardware abstraction layer and application programming interface for user software applications provided by the software access stack(s) 108.
  • FIG. 2 depicts exemplary DPR FPGAs used in a Network function virtualization (NFV) system 200 comprising a plurality of the FPGAs described herein.
  • the business service layer and NFV management and orchestration layers 204 serve their appropriate functions as documented in “NFV Architectural Framework,” ETSI GS NFV 002.
  • the infrastructure layer embodies all hardware resources that are virtualized, through hypervisor or containerizer functions 205, and used to support the VNF layer 202.
  • the infrastructure layer contains a plurality of DPR FPGAs 211, which are configured with floor plans.
  • the plurality of DPR FPGAs in the infrastructure layer are used by a plurality of VNFs 212 in the VNF layer.
  • Each VNF hosts its necessary set of Software Access Stacks (SASs), acceleration functions (ACs), and Floor Plans (FPs) that can be installed into a DPR FPGA.
  • the infrastructure manager includes an FPGA resource manager to manage and coordinate the deployment of accelerators and floorplans by the VNFs into the FPGAs.
  • the VNFs are also connected together through a virtual network, which allows VNFs to exchange data with one another.
  • Network function virtualization (NFV) system 200 comprises an infrastructure layer 201 comprising a plurality of Field-programmable gate arrays (FPGAs) as described herein (shown in FIG. 1).
  • the infrastructure layer 201 is coupled to a virtual network function layer 202 comprising a virtual network 222 coupled to/comprising a plurality of VNFs 212 each comprising Software Access Stacks (SASs), acceleration functions (ACs), and Floor Plans (FPs), where the VNFs are instantiated via a hypervisor/containerizer 205.
  • the hypervisor provides the mechanisms to support virtualization, e.g., Virtual Network Functions (VNFs).
  • the hypervisor/containerizer virtualizes the DPR FPGA resources, allowing for greater flexibility in the utilization of the DPR FPGA resources, in addition to virtualizing CPU, memory, and networking resources.
  • tasks may be assigned and reassigned to each individual accelerator of the DPR FPGA for execution.
  • the Business Service Layer 203 comprises a plurality of Business Applications 213.
  • the Business Service Layer 203, Virtual Network Function layer 202, and the Infrastructure layer 201 are all electronically coupled and managed by an NFV Management and Orchestration plane 204 comprising a service orchestrator 214 configured to provide the operational and functional processes involved in designing, creating, and delivering an end-to-end service by deploying VNFs.
  • the VNF manager 224 is configured to support VNF lifecycle management (e.g., instantiation, update, query, scaling, and termination), and an infrastructure manager 234 is configured to control and manage resources (computing (CPU, FPGA), storage, networking) and their assignment to VNFs, comprising an FPGA resource manager configured to coordinate the assignment of floor plans to a plurality of FPGAs in the infrastructure layer and the assignment of accelerators to the slots of the floor plans of the FPGAs.
  • FIG. 3 shows the infrastructure and VNF layers as well as the NFV management and orchestration plane from FIG. 2.
  • the virtualization layer shows a single VNF and the infrastructure layer shows a single FPGA.
  • An infrastructure compartment/device/layer 301 comprising at least one FPGA as described herein is electronically coupled to a Virtual Network Function 302 comprising a set of Software Access Stacks (SASs), acceleration functions (ACs), and Floor Plans (FPs).
  • the Virtual Network Function 302 is electronically coupled to an Interface 1.
  • the two interfaces, Interface 1 and Interface 2 may be virtual/real or local/network IP communication.
  • the VNF has the need to deploy hardware acceleration as part of its normal function.
  • the VNF authenticates with the FPGA resource manager to establish that it has valid access to resources.
  • the FPGA resource manager responds to a configuration inquiry (2) of what FPGAs have available resources for floor plans and/or slots.
  • after the configuration inquiry, the VNF requests its desired resources (3) and the FPGA resource manager assigns those resources to the VNF (4). Assuming non-homogeneous slot sizes, the configuration response would include FPGA resource information (RAM, logic, DSP, etc.) for each slot.
  • the resource assign/request assigns specific FPGA and/or FPGA slots to the VNF.
  • the VNF follows by loading the floor plan (5) and/or accelerator to the specified FPGA (6).
  • All publications (e.g., non-patent literature), patents, patent application publications, and patent applications mentioned in this specification are indicative of the level of skill of those skilled in the art to which this invention pertains. All such publications (e.g., non-patent literature), patents, patent application publications, and patent applications are herein incorporated by reference to the same extent as if each individual publication, patent, patent application publication, or patent application was specifically and individually indicated to be incorporated by reference.

Abstract

Provided is a network function virtualization (NFV) system comprising a plurality of Dynamic Partial Reconfigurable (DPR) Field Programmable Gate Arrays (FPGAs) comprising a plurality of slots each configured with an accelerator and electrically coupled to a plurality of network connections and a plurality of host interfaces, and a plurality of software access stacks configured with an accelerated software application to support the accelerators, coupled to a virtual network function layer comprising a virtual network coupled to a plurality of Virtual Network Functions (VNFs) each comprising Software Access Stacks (SASs), accelerators (ACs), and Floor Plans (FPs).

Description

VIRTUALIZED MEDIUM ACCESS ECOSYSTEM AND METHODS
CROSS REFERENCE TO RELATED APPLICATION
[0001] This is an International Application under the Patent Cooperation Treaty, claiming priority to United States Provisional Patent Application No. 63/239,675, filed September 1, 2021, the contents of which are incorporated herein by reference in their entirety.
FIELD OF THE INVENTION
[0002] This disclosure relates to a reconfigurable pool of Field Programmable Gate Arrays (FPGAs) that are used in a Network function virtualization (NFV) environment as hardware accelerators.
BACKGROUND OF THE INVENTION
[0003] Network function virtualization (NFV) is being adopted by Mobile Network Operators (MNOs) to provide additional flexibility and capability and to reduce costs in telecommunications networks. Using NFV, network functions may operate as part of a virtualized infrastructure instead of being based on purpose-built hardware. In combination with the rise of Field Programmable Gate Arrays (FPGAs), computing environments hold the capability of providing higher-performance and higher-efficiency computing through heterogeneous computing systems. The systems and methods described herein provide the necessary mechanisms for FPGA configuration and reconfiguration in virtualization environments.
SUMMARY OF VARIOUS EMBODIMENTS OF THE INVENTION
[0005] This disclosure provides a system and method for deploying accelerators onto FPGAs in a network function virtualization environment.
[0006] In an embodiment, a network function virtualization (NFV) system may comprise a plurality of Dynamic Partial Reconfigurable (DPR) Field Programmable Gate Arrays (FPGAs) comprising a plurality of slots each configured with an accelerator and electrically coupled to a plurality of network connections and a plurality of host interfaces, and a plurality of software access stacks configured with an accelerated software application to support the accelerators, wherein the network connections are configured to exchange data between the accelerator and a network, wherein the host interfaces are configured to exchange data between the accelerator and the software access stack; coupled to a virtual network function layer comprising a virtual network coupled to a plurality of Virtual Network Functions (VNFs) each comprising Software Access Stacks (SASs), accelerators (ACs), and Floor Plans (FPs), wherein the VNFs are instantiated via a hypervisor/containerizer. The software access stack may comprise accelerated software applications, application programming interfaces, a hardware abstraction layer, and hardware, all configured with an accelerated software application to support the accelerator functions.
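For illustration, the entities recited in this embodiment may be sketched as a minimal data model; the class and field names below are assumptions adopted for readability and are not terms defined by the disclosure.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Accelerator:
        """Bit file design deployed into a slot to support an accelerated application."""
        name: str
        bitfile: bytes = b""

    @dataclass
    class Slot:
        """Reconfigurable region of the DPR FPGA; empty until an accelerator is loaded."""
        slot_id: int
        accelerator: Optional[Accelerator] = None

    @dataclass
    class FloorPlan:
        """Physical resources and geometry used by the shell and its slots."""
        fp_id: str
        slots: List[Slot] = field(default_factory=list)

    @dataclass
    class DPRFPGA:
        """DPR FPGA coupled to network connections and host interfaces."""
        fpga_id: str
        floor_plan: Optional[FloorPlan] = None
        network_interfaces: List[str] = field(default_factory=list)
        host_interfaces: List[str] = field(default_factory=list)

    @dataclass
    class SoftwareAccessStack:
        """Accelerated software application plus API, hardware abstraction layer, and hardware."""
        application: str
        accelerator: Optional[Accelerator] = None

    @dataclass
    class VNF:
        """Virtual Network Function carrying its own SASs, accelerators, and floor plans."""
        name: str
        software_access_stacks: List[SoftwareAccessStack] = field(default_factory=list)
        accelerators: List[Accelerator] = field(default_factory=list)
        floor_plans: List[FloorPlan] = field(default_factory=list)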
[0007] In an embodiment, the hardware may comprise a CPU, memory, driver, and combinations thereof.
[0008] In an embodiment, the system may further comprise a shell comprising a virtual bus, network interfaces, and host interfaces configured to support the DPR FPGA.
[0009] In an embodiment, the hypervisor may provide the mechanisms to support virtualization.
[0010] In an embodiment, the hypervisor/containerizer may virtualize the DPR FPGA resources, optionally virtualizing CPU, memory, and networking resources.
[0011] In an embodiment, the DPR FPGA system may comprise about 1-10 slots configured with an accelerator. The DPR FPGA system may comprise about 1, 2, 3, 4, 5, 6, 7, 8, 9, or 10 slots configured with an accelerator.
[0012] In an embodiment, the system further comprises an NFV Management and Orchestration system that supports at least one business application and comprises a service orchestrator, an NFV manager, and an infrastructure manager, wherein the service orchestrator is configured to provide the operational and functional processes involved in designing, creating, and delivering an end-to-end service by deploying VNFs to support business applications, wherein the NFV manager is configured to support VNF lifecycle management, and wherein the infrastructure manager comprises an FPGA resource manager configured to coordinate the assignment of floor plans to a plurality of FPGAs in the infrastructure layer and the assignment of accelerators to the slots of the floor plans of the FPGAs.
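As a hedged sketch of the FPGA resource manager's coordination role described in this embodiment, the bookkeeping might look like the following; the method names and data structures are assumptions rather than elements of the disclosure.

    class FPGAResourceManagerSketch:
        """Tracks which floor plan is assigned to each FPGA and which accelerator
        occupies each slot of that floor plan (illustrative only)."""

        def __init__(self, fpga_ids):
            self.floor_plans = {fpga_id: None for fpga_id in fpga_ids}  # fpga_id -> floor plan id
            self.slots = {}                                             # (fpga_id, slot_id) -> accelerator id

        def assign_floor_plan(self, fpga_id, fp_id, slot_ids):
            # Record the floor plan deployed on the FPGA and create empty slot entries.
            self.floor_plans[fpga_id] = fp_id
            for slot_id in slot_ids:
                self.slots[(fpga_id, slot_id)] = None

        def assign_accelerator(self, fpga_id, slot_id, ac_id):
            # A slot can be filled only after its floor plan exists and while it is free.
            if self.floor_plans.get(fpga_id) is None:
                raise ValueError(f"{fpga_id} has no floor plan assigned")
            if self.slots.get((fpga_id, slot_id)) is not None:
                raise ValueError(f"slot {slot_id} on {fpga_id} is already occupied")
            self.slots[(fpga_id, slot_id)] = ac_id

        def free_slots(self, fpga_id):
            # Report unoccupied slots, as a configuration response to a VNF might.
            return [s for (f, s), ac in self.slots.items() if f == fpga_id and ac is None]

A request/assign interface facing the VNFs, such as the method of paragraph [0024] below, would sit in front of bookkeeping of this kind.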
[0013] In an embodiment, a computing cloud system may comprise the network function virtualization (NFV) system described herein.
[0014] In an embodiment, a local server may comprise the network function virtualization (NFV) system described herein.
[0015] In an embodiment, an edge computing system may comprise the network function virtualization (NFV) system described herein.
[0016] In an embodiment, a virtual network function system may comprise (a) a plurality of Dynamic Partial Reconfigurable (DPR) Field Programmable Gate Arrays (FPGAs) configured to be available for loads from a plurality of Virtual Network Functions (VNFs), wherein each VNF comprises acceleration functions and floor plans that can be loaded into FPGA resources, wherein each VNF comprises software access stacks configured with accelerators deployed on a plurality of FPGAs, and (b) an FPGA resource manager configured to manage the assignment of accelerators to slots and floor plans to FPGAs. The FPGA floor plans may be preconfigured, deployed by the FPGA resource manager from a database, or contained as binaries from the virtual network function.
[0017] In an embodiment, the plurality of FPGAs may be configured with a plurality of different floorplans, wherein each floorplan is configured with a plurality of slots, and wherein each slot hosts an accelerator.
[0018] In an embodiment, the accelerator may be pre-configured, or can be deployed by a virtual network function.
[0019] In an embodiment, the VNFs may be virtual machines, software container network functions, software containers deployed on virtual machines, or a combination thereof.
[0020] In an embodiment, the plurality of FPGAs may be managed in an infrastructure layer and are configured to be used for hardware acceleration by a plurality of VNFs.
[0021] In an embodiment, the infrastructure layer may comprise a plurality of FPGAs and CPU hosts configured to execute/serve other virtual functions.
[0022] In an embodiment, the FPGA resource manager may load floor plans and accelerators into FPGA slots on behalf of the VNF.
[0023] In an embodiment, the VNFs may be configured to deploy floor plans and accelerators directly into FPGAs without an FPGA manager.
[0024] In an embodiment, a method to load Dynamic Partial Reconfigurable (DPR) Field Programmable Gate Array (FPGA) slots with acceleration functions from Virtual Network Functions may comprise (a) a Virtual Network Function (VNF) communicating with an FPGA resource manager on at least one data interface; (b) the FPGA resource manager managing and communicating with a plurality of FPGAs on the at least one data interface; (c) the FPGA resource manager managing, assigning, and communicating the available FPGA resources to the VNF; (d) the VNF authenticating with the FPGA resource manager to send requests for configuration information and available resources; (e) the VNF inquiring on the available FPGA resources through the FPGA resource manager; (f) the VNF requesting an FPGA for loading the floor plans; and (g) the VNF requesting the slot for loading of the accelerator. The VNF may comprise a plurality of VNFs configured to use a plurality of FPGAs for deploying accelerators and floor plans.
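Steps (a) through (g) may be illustrated from the VNF side with the short sketch below; the client interface it calls (authenticate, query_resources, request_fpga, request_slot, load_floor_plan, load_accelerator) is a hypothetical stand-in for whatever data interface a given implementation exposes.

    def deploy_acceleration(vnf_id, credentials, manager, floor_plan_bitfile, accelerator_bitfile):
        """VNF-side sketch of steps (a)-(g); 'manager' is a hypothetical client object
        for the data interface between the VNF and the FPGA resource manager."""
        # (d) authenticate before sending requests for configuration information and resources
        token = manager.authenticate(vnf_id, credentials)

        # (e) inquire on the available FPGA resources through the FPGA resource manager
        available = manager.query_resources(token)          # e.g., {fpga_id: [free slot ids]}
        if not available:
            raise RuntimeError("no FPGA resources currently available")

        # (f) request an FPGA and load the floor plan onto it
        fpga_id = next(iter(available))
        manager.request_fpga(token, fpga_id)
        manager.load_floor_plan(token, fpga_id, floor_plan_bitfile)

        # (g) request a slot and load the accelerator into it
        free_slots = available[fpga_id]
        if not free_slots:
            raise RuntimeError("selected FPGA has no free slots")
        slot_id = free_slots[0]
        manager.request_slot(token, fpga_id, slot_id)
        manager.load_accelerator(token, fpga_id, slot_id, accelerator_bitfile)
        return fpga_id, slot_id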
[0025] In an embodiment, the infrastructure layer may comprise a plurality of FPGAs and CPU hosts for serving other virtual functions.
[0026] In an embodiment, the plurality of FPGAs may be configured with a plurality of different floorplans, where each floorplan may be configured with a plurality of slots, where each slot hosts an accelerator.
[0027] In an embodiment, the FPGA floor plans may be preconfigured, deployed by the FPGA resource manager from a database, or may be contained as binaries from the virtual network function.
[0028] In an embodiment, the accelerators may be pre-configured, or can be deployed by a virtual network function.
[0029] In an embodiment, the FPGAs can be part of a computing cloud, local server, or edge computing system.
[0030] In an embodiment, a plurality of FPGAs may be managed in an infrastructure layer, which are configured to be used for hardware acceleration among a plurality of VNFs.
[0031] In an embodiment, the VNFs may be virtual machines, software container network functions, or software containers deployed on virtual machines.
[0032] In an embodiment, the FPGA floorplan may comprise a plurality of network and host interfaces configured to be electronically coupled to slots hosting accelerators.
[0033] In an embodiment, a plurality of VNFs may be configured to use a plurality of FPGAs for deploying accelerators and floor plans.
[0034] In an embodiment, the FPGA resource manager may load floor plans and accelerators into FPGA slots on behalf of the VNF.
[0035] In an embodiment, a plurality of VNFs may comprise software access stacks, accelerators, and floor plans, which are configured to be deployed onto a plurality of FPGAs.
[0036] In an embodiment, the VNFs are configured to deploy floor plans and accelerators directly into FPGAs independent of an FPGA manager.
BRIEF DESCRIPTION OF THE DRAWINGS
[0037] The advantages and features of the present invention will become better understood with reference to the following more detailed description taken in conjunction with the accompanying drawings.
[0038] Figure 1 depicts an exemplary Dynamic Partial Reconfiguration (DPR) FPGA architecture. The FPGA Network Interfaces (NI) are shown at the bottom of the figure and connect to external Internet Protocol (IP) networks. The FPGA is also connected through a Host Interface (HI), which is used to exchange data with the CPU host. This data exchange is further supported through a hardware abstraction layer and application programming interface for user software applications. The FPGA contains a variable number of reconfigurable slots, which are used to load FPGA accelerators. A virtual bus connects the accelerators to each other, the network interfaces, or the host interface. The floor plan, which is contained inside an FPGA, comprises the physical resources and geometries utilized by the slots and the shell. N and M represent an arbitrary number, in every use, when indicating a plurality of elements in all figures.
[0039] Figure 2 depicts an exemplary virtual network function architecture that deploys a plurality of FPGAs in the infrastructure layer. These FPGAs have reconfigurable floorplans, where each floor plan can host a plurality of slots, and each slot can host a plurality of accelerators. A plurality of Virtual Network Functions (VNFs) come with Accelerators (ACs) and Floor Plans (FPs) that can be deployed on any available FPGA to support accelerated software applications for a plurality of associated Software Access Stacks (SASs) contained in the VNF. VNFs are connected together in a virtual network. The business service layer and NFV management and orchestration layers serve their appropriate functions as documented in “NFV Architectural Framework,” ETSI GS NFV 002. In addition to these traditional functions, in the infrastructure manager function, the FPGA resource manager coordinates deployment of floor plans and accelerators into the plurality of FPGAs in the infrastructure layer. Without loss of generality, N and M are used to represent an arbitrary number, in every use, when indicating a plurality of elements.
[0040] Figure 3 depicts an exemplary procedure where the VNF deploys a floorplan (FP) and accelerator (AC) into an FPGA. The VNF communicates with the NFV management and orchestration functions, specifically the infrastructure manager, which has the role of managing FPGA resources, using interface 1. VNF-1 first completes an authentication procedure, which authorizes VNF-1 to utilize FPGA resources. After being authenticated, VNF-1 inquires and receives a response from the infrastructure manager on the availability of FPGAs and their resources. The infrastructure manager may inquire about available resources from the FPGAs before allocating the resources to VNF-1. After obtaining the necessary resource reservations from the infrastructure manager, VNF-1 then loads its bit file that contains the FP for the allocated FPGA. Subsequently, VNF-1 updates the infrastructure manager on the FP used and the availability of FPGA slots for accelerators. VNF-1 then loads its own accelerator onto the FPGA.
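This exchange can also be traced end to end with a self-contained stub standing in for the infrastructure manager of Figure 3; the message names, canned return values, and per-slot resource fields below are illustrative assumptions only.

    class StubInfrastructureManager:
        """Canned responses for tracing the FIG. 3 message sequence (illustrative only)."""

        def authenticate(self, vnf_id):                  # authentication/authorization
            return f"token-for-{vnf_id}"

        def configuration_inquiry(self, token):          # (2) available FPGAs and per-slot resources
            return {"fpga-1": {"slot-1": {"ram_kb": 512, "logic_cells": 20000, "dsp": 64}}}

        def request_resources(self, token, fpga, slot):  # (3) resource request
            return True

        def assign_resources(self, token, fpga, slot):   # (4) resource assignment
            return {"fpga": fpga, "slot": slot}

    def run_fig3_trace():
        manager, vnf = StubInfrastructureManager(), "VNF-1"
        token = manager.authenticate(vnf)                                # authentication procedure
        config = manager.configuration_inquiry(token)                    # (2)
        fpga_id = next(iter(config))
        slot_id = next(iter(config[fpga_id]))
        manager.request_resources(token, fpga_id, slot_id)               # (3)
        assignment = manager.assign_resources(token, fpga_id, slot_id)   # (4)
        print("load floor plan bit file onto", assignment["fpga"])       # (5)
        print("load accelerator into slot", assignment["slot"])          # (6)

    run_fig3_trace()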
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
[0041] While the present invention is described with respect to what is presently considered to be the preferred embodiments, it is understood that the invention is not limited to the disclosed embodiments. The present invention is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.
[0042] Furthermore, it is understood that this invention is not limited to the particular methodology, materials and modifications described and as such may, of course, vary. It is also understood that the terminology used herein is for the purpose of describing particular aspects only and is not intended to limit the scope of the present invention, which is limited only by the appended claims.
Definitions
[0043] Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It should be appreciated that the term “substantially” is synonymous with terms such as “nearly”, “very nearly”, “about”, “approximately”, “around”, “bordering on”, “close to”, “essentially”, “in the neighborhood of”, “in the vicinity of”, etc., and such terms may be used interchangeably as appearing in the specification and claims. It should be appreciated that the term “proximate” is synonymous with terms such as “nearby”, “close”, “adjacent”, “neighboring”, “immediate”, “adjoining”, etc., and such terms may be used interchangeably as appearing in the specification and claims.
[0044] “Accelerator” (AC), as used herein, refers broadly to FPGA resources with a deployed design that supports an accelerated software application.
[0045] “Accelerated Software Applications,” as used herein, refers broadly to software functions that use an accelerator to execute functions that leverage FPGA resources for acceleration.
[0046] “Applications Programming Interface,” as used herein, refers broadly to the software interface that defines interactions between multiple software applications or mixed hardware-software intermediaries.
[0047] “Business Service Layer,” as used herein, refers broadly to the logical collection of business applications that leverage services provided by the virtual network function layer.
[0048] “Cloud Computing,” as used herein, refers broadly to the practice of using a network of servers hosted on the internet to store, manage, and process data, rather than local computing resources.
[0049] “Container” or “Software Container,” as used herein, refers broadly to a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.
[0050] “Container Network Function,” as used herein, refers broadly to a software container which has the function of supporting services of the network or system through its specific function.
[0051] “Containerizer,” as used herein, refers broadly to an operating system that supports the deployment of Container Network Functions (CNFs) instead of VNFs. In contrast to hypervisors, containerizers rely on an operating system as the basis for “virtualization”.
[0052] “Field Programmable Gate Array,” as used herein, refers to a semiconductor device that is based around a matrix of Configurable Logic Blocks (CLBs) connected via programmable interconnects. FPGAs can be reprogrammed to desired application or functionality requirements after manufacturing.
[0053] “Floor Plan (FP),” as used herein, refers broadly to the layout of the FPGA with regard to the size and shape of the shell, slot configuration (e.g., size of resources and number of slots), virtual bus, network, and host interfaces to transport data.
[0054] “FPGA Resource Manager,” as used herein, refers broadly to the function that coordinates the assignment of floor plans to a plurality of FPGAs in the infrastructure layer and the assignment of accelerators to the slots of the floor plans of the FPGAs.
[0055] “Hardware Abstraction Layer,” as used herein, refers broadly to software that provides exchange of information with a hardware device, e.g., an FPGA.
[0056] “Hardware Acceleration,” as used herein, refers broadly to the process by which an application will offload certain computing tasks from a CPU onto specialized hardware components (typically FPGAs, GPUs, DSPs, or ASICs) within the system, enabling greater efficiency than is possible in software running on a general-purpose CPU alone.
[0057] “High Performance Computer (HPC),” as used herein, refers broadly to a Central Processing Unit (CPU) with hardware acceleration.
[0058] “Host Interface (HI),” as used herein, refers broadly to the hardware, firmware, and software that supports the exchange of data between the FPGA and CPU/Memory/Drivers.
[0059] “Hypervisor,” as used herein, refers broadly to computer software, firmware, hardware, and combinations thereof, which manages hardware resources to support virtual network functions.
[0060] “Infrastructure Layer,” as used herein, refers broadly to common hardware shared in a Network Function Virtualization (NFV) architecture. The common hardware includes but is not limited to processing, memory, networking, and FPGA resources.
[0061] “Network Interface,” as used herein, refers broadly to the set of hardware, firmware, and software that is used to provide a connection to an internet protocol network.
[0062] “Network Function Virtualization (NFV),” as used herein, refers broadly to a network architecture that decouples network functions from dedicated hardware through virtualization into a set of Virtual Network Functions (VNFs), which are connected together through a virtual or real network, to create services.
[0063] “NFV Management and Orchestration,” as used herein, refers broadly to a system of Virtual Network Functions (VNFs) that manages service orchestration (e.g., supporting the business layer with VNFs), VNF management (e.g., deployment, management, and teardown of VNFs), and Infrastructure Management (e.g., assignment of infrastructure resources to the VNF layer).
[0064] “Service orchestration,” as used herein, refers broadly to the execution of the operational and functional processes involved in designing, creating, and delivering an end-to-end service by deploying Virtual Network Functions.
[0065] “Shell,” as used herein, refers broadly to the static FPGA resources and geometry of the floor plan that provide the virtual bus, network interfaces, and host interfaces.
[0066] “Slot,” as used herein, refers broadly to the FPGA resources (logic, memory, digital signal processing) with respect to the geometry of the floorplan.
[0067] “Software Defined Networking (SDN),” as used herein, refers broadly to a networking paradigm that decouples decisions from networking infrastructure into a logically centralized controller to determine network management policies and operation.
[0068] “Virtual Network Function,” (VNF) as used herein, refers broadly to a virtualized computing unit (virtual machine), supported through a hypervisor or containerizer, which has the function of supporting services of the network or system through its specific function.
[0069] “Virtual Network Function Layer,” (VNF) as used herein, refers broadly to a logical collection of virtual network functions and virtual networks.
[0070] “Virtual Network,” as used herein, refers broadly to the combination of hardware and software network resources to combine network functionality into a single software-based administrative entity.
[0071] “Virtual Bus,” as used herein, refers broadly to the data connections on the FPGA, which route and facilitate data transfers between slots, network interfaces, and host interfaces.
[0072] “Virtual Digital Sample Interface,” as used herein, refers broadly to a digital sample interface connection that is connected via a virtual network or through a software defined network. When two or more devices exchange digital sample interface messages using a virtual network, it may be referred to as a virtual digital sample interface connection.
FPGA VIRTUALIZATION FRAMEWORK
[0073] Field Programmable Gate Arrays (FPGAs) provide software applications (e.g., data compression, deep learning, waveform processing, encryption, decryption) at superior performance per watt. In combination with Network Function Virtualization (NFV), FPGAs may be used as the computing component for telecommunications applications but may require new methods for integration with Virtualized Network Functions (VNFs). The systems and methods described herein provide the necessary mechanisms for FPGA configuration and reconfiguration in virtualization environments. For example, FPGAs are becoming a cornerstone of computing by providing software acceleration of applications (e.g., data compression, deep learning, waveform processing, encryption, decryption) with superior performance per watt. In combination with NFV, FPGAs are becoming a necessary computing component for telecommunications applications and require new methods for integration with Virtualized Network Functions (VNFs). In light of these advancements, this disclosure provides the necessary mechanisms for FPGA configuration and reconfiguration in virtualization environments.
[0074] CPU systems offer flexibility but lack power efficiency. Further, it is estimated that 20% of the total costs associated with data centers are power. FPGAs are dedicated to a single task, but are far more energy efficient than a CPU of similar computing power. The Dynamic Partial Reconfiguration (DPR) Field-programmable gate array (FPGA) system described herein solves this problem by offering flexibility in the allocation of FPGA resources combined with greater energy efficiency.
[0075] FPGA vendors are now featuring a capability known as Dynamic Partial Reconfiguration (DPR). In DPR, “regions” of the FPGA resources are used dynamically by changing different bit file designs in the DPR region. This is in stark contrast to normal FPGA operation, where a single bit file is used to configure the entire FPGA and that FPGA is then left untouched for most of its operational life. DPR regions may be divided into slots, where bit files (FPGA firmware designs) may be deployed; we refer to these designs as accelerators, which provide acceleration to computations that are called from a higher-level programming language such as C++. This disclosure provides a system and method for the portability of FPGA designs using DPR within the scope of NFV designs.
[0076] FPGAs may be configured with a capability known as Dynamic Partial Reconfiguration (DPR). In DPR, “regions” of the FPGA resources are used dynamically by loading different bit file designs into the DPR region. This is in contrast to normal FPGA operation, where a single bit file is used to configure the entire FPGA, which is then left untouched for most of its operational life. DPR regions may be divided into slots, where bit files (FPGA firmware designs) may be deployed; these are referred to as “accelerators,” which provide acceleration for computations required by higher-level programming languages, e.g., C++. Methods for the portability of FPGA configuration using DPR within the framework of NFV designs are also described herein.
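For illustration only, the following Python sketch models the slot-based reconfiguration described above: a static shell plus a set of DPR slots into which accelerators (partial bit files) can be loaded, swapped, and removed independently of one another. The class and bit-file names (DprFpga, fec_decoder.bit, and so on) are hypothetical stand-ins and are not part of the disclosed firmware flow.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Slot:
    """One DPR reconfigurable region on the FPGA."""
    slot_id: int
    accelerator: Optional[str] = None  # name of the bit file currently loaded

@dataclass
class DprFpga:
    """Minimal model of a DPR-capable FPGA: a static shell plus N slots."""
    shell: str
    slots: Dict[int, Slot] = field(default_factory=dict)

    def load_accelerator(self, slot_id: int, bit_file: str) -> None:
        # In hardware this would stream the partial bit file into the
        # reconfigurable region; here we only record the assignment.
        self.slots[slot_id].accelerator = bit_file

    def unload_accelerator(self, slot_id: int) -> None:
        self.slots[slot_id].accelerator = None

fpga = DprFpga(shell="shell_v1.bit", slots={i: Slot(i) for i in range(3)})
fpga.load_accelerator(0, "fec_decoder.bit")   # slot 0 accelerates FEC decoding
fpga.load_accelerator(1, "aes_encrypt.bit")   # slot 1 accelerates encryption
fpga.load_accelerator(0, "compressor.bit")    # slot 0 reconfigured without touching slot 1
```

The key point mirrored by the sketch is that reconfiguring one slot does not disturb the shell or any other slot, unlike whole-device reconfiguration with a single bit file.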
[0077] The inventors surprisingly found that the combination of Field Programmable Gate Array (FPGA) acceleration modules and Network Function Virtualization (NFV) technologies can be used to support the deployment of virtual networks using homogeneous commodity computing hardware for all network functions. Computing resources, historically based on Central Processing Units (CPUs), are becoming heterogeneous (e.g., relying on other silicon architectures for computing) by leveraging Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Graphics Processing Units (GPUs), and combinations thereof, for hardware acceleration.
[0078] In past practices, access to accelerators was established through purpose-built hardware designed around the accelerator or by integrating acceleration into existing designs. Accelerators can be deployed as separate modules in computing architectures, e.g., through PCIe (peripheral component interconnect express) cards in network servers. CPU architectures may provide interfaces to allow for direct programming and easier access to hardware acceleration. Furthermore, FPGA architectures have moved toward Dynamic Partial Reconfiguration (DPR) architectures. Systems that deploy reprogrammable FPGA acceleration functions dynamically are described herein. The systems and methods described herein improve the function of computer systems, for example, cloud-based computer systems, edge devices, and servers, by providing, for example, greater computational power without a significant increase in energy consumption and/or space.
[0079] FIG. 1 depicts an exemplary Dynamic Partial Reconfiguration (DPR) Field-programmable gate array (FPGA) system 100.
[0080] The Field-programmable gate array (FPGA) 101 comprises a plurality of reconfigurable slots 102, which are used to load FPGA accelerators (e.g., Slot-1 (AC-1), Slot-2 (AC-2), ..., Slot-N (AC-N)). A virtual bus 105 electronically couples the accelerators to each other, the network interfaces, and/or the host interface. The floor plan contained inside an FPGA comprises the physical resources and geometries utilized by the slots and the shell 106.
[0081] In a DPR FPGA system, reconfigurable regions within the FPGA are used to program functions used by high-level programming languages; these reconfigurable regions are referred to as slots 102. High-level programming languages include, but are not limited to, Python, Visual Basic, Delphi, Perl, PHP, ECMAScript, Ruby, C++, C#, and Java. A plurality of slots may be supported by the DPR FPGA system. The DPR FPGA system may comprise about 1-10 slots configured with an accelerator. For example, the DPR FPGA system may comprise about 1, 2, 3, 4, 5, 6, 7, 8, 9, or 10 slots configured with an accelerator.
[0082] The DPR FPGA system described herein may comprise a shell 106. The shell is a “static” region of FPGA firmware configured to provide interfaces and connectivity among slots, network interfaces, and host interfaces external to the FPGA. The shell provides an analogous firmware container for the deployment of dynamic firmware (e.g., accelerators) into reconfigurable regions (e.g., slots). The physical resources and geometries of the slots and the shell comprise the floor plan 107 of the FPGA. An FPGA may support any floor plan, provided there are sufficient resources on the FPGA. Network interfaces (NI) 103 are configured to exchange data between the FPGA hardware platform and other servers on the network. The host interface (HI) 109 is configured to provide access to accelerated software applications through the software access stack(s) 108. Each software access stack comprises accelerated software applications, an application programming interface, a hardware abstraction layer, and hardware comprising a CPU, memory, and drivers, configured to provide the accelerated software applications the ability to send and exchange data with the accelerator deployed in a slot.
[0083] The FPGA Network Interfaces (NI) 103 are coupled to external Internet Protocol (IP) networks 104. The FPGA is also connected through a Host Interface (HI) 109 configured to exchange data with the CPU host, which in turn exchanges data with the software access stack. This data exchange is further supported through a hardware abstraction layer and application programming interface for user software applications provided by the software access stack(s) 108.
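As a minimal, illustrative sketch of the host-side path just described, the Python below models a shell exposing network and host interfaces and a software access stack that offloads a payload to an accelerator occupying a slot. The names (Shell, SoftwareAccessStack, "pcie0") and the callable standing in for the accelerator are assumptions introduced for this example; a real hardware abstraction layer would move the data over the host interface via drivers and DMA rather than a Python call.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Shell:
    """Static firmware region: virtual bus plus external interfaces."""
    network_interfaces: List[str]   # e.g. ["ni0", "ni1"] toward IP networks
    host_interface: str             # e.g. "pcie0" toward the CPU host

class SoftwareAccessStack:
    """CPU-side path (API + HAL + driver) an application uses to reach
    an accelerator deployed in a slot, via the host interface."""
    def __init__(self, shell: Shell):
        self.shell = shell
        self._accelerators: Dict[int, Callable[[bytes], bytes]] = {}

    def bind(self, slot_id: int, accelerator: Callable[[bytes], bytes]) -> None:
        self._accelerators[slot_id] = accelerator

    def offload(self, slot_id: int, payload: bytes) -> bytes:
        # A real HAL would DMA `payload` over the host interface and wait
        # for the accelerator's result; the callable stands in for that.
        return self._accelerators[slot_id](payload)

shell = Shell(network_interfaces=["ni0", "ni1"], host_interface="pcie0")
sas = SoftwareAccessStack(shell)
sas.bind(0, lambda data: data[::-1])   # stand-in "accelerator" in slot 0
print(sas.offload(0, b"samples"))
```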
[0084] FIG. 2 depicts exemplary DPR FPGAs used in a Network Function Virtualization (NFV) system 200 comprising a plurality of the FPGAs described herein. The business service layer and NFV management and orchestration layers 204 serve their appropriate functions as documented in “NFV Architectural Framework,” ETSI GS NFV 002. The infrastructure layer embodies all hardware resources that are virtualized, through hypervisor or containerizer functions 205, and used to support the VNF layer 202. The infrastructure layer contains a plurality of DPR FPGAs 211, which are configured with floor plans. The plurality of DPR FPGAs in the infrastructure layer are used by a plurality of VNFs 212 in the VNF layer. Each VNF hosts its necessary set of Software Access Stacks (SASs), acceleration functions (AC), and Floor Plans (FPs) that can be installed into a DPR FPGA. In addition to the traditional NFV architecture, the infrastructure manager includes an FPGA resource manager to manage and coordinate the deployment of accelerators and floor plans by the VNFs into the FPGAs. The VNFs are also connected together through a virtual network, which allows VNFs to exchange data with one another.
[0085] Network function virtualization (NFV) system 200 comprises an infrastructure layer 201 comprising a plurality of Field-programmable gate arrays (FPGAs) as described herein (shown in FIG. 1). The infrastructure 201 is coupled to a virtual network function layer 202 comprising a virtual network 222 coupled to/comprising a plurality of VNFs 212, each comprising Software Access Stacks (SASs), acceleration functions (AC), and Floor Plans (FPs), where the VNFs are instantiated via a hypervisor/containerizer 205. The hypervisor provides the mechanisms to support virtualization, e.g., Virtual Network Functions (VNFs). The hypervisor/containerizer virtualizes the DPR FPGA resources, allowing for greater flexibility in the utilization of the DPR FPGA resources, in addition to virtualizing CPU, memory, and networking resources. In contrast to a standard FPGA, each individual accelerator in the DPR FPGA may be assigned and reassigned tasks to execute.
[0086] On top of the Virtual Network Function layer lies the Business Service Layer 203 comprising a plurality of Business Applications 213. The Business Service Layer 203, Virtual Network Function layer 202, and the Infrastructure 201 are all electronically coupled and managed by an NFV Management and Orchestration plane 204 comprising a service orchestrator 214 configured to provide the operational and functional processes involved in designing, creating, and delivering an end-to-end service by deploying VNFs. The VNF manager 224 is configured to support VNF lifecycle management (e.g., instantiation, update, query, scaling, and termination), and an infrastructure manager 234 is configured to control and manage resources (computing, e.g., CPU and FPGA; storage; networking) and their assignment to VNFs. The infrastructure manager comprises an FPGA resource manager configured to coordinate the assignment of floor plans to the plurality of FPGAs in the infrastructure layer and the assignment of accelerators to the slots of the floor plans of the FPGAs.
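For illustration, the following Python sketch models the bookkeeping an FPGA resource manager of the kind described above might maintain: which floor plan each FPGA carries, which accelerator occupies each slot, and which VNF owns it. All identifiers (FpgaResourceManager, "fp_4slot", "turbo_decoder.bit", "vnf-modem") are hypothetical and chosen only to mirror the terms used in this disclosure.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class SlotAssignment:
    accelerator: Optional[str] = None
    owner_vnf: Optional[str] = None

@dataclass
class FpgaRecord:
    floor_plan: Optional[str] = None
    slots: Dict[int, SlotAssignment] = field(default_factory=dict)

class FpgaResourceManager:
    """Tracks floor plans per FPGA and slot ownership per VNF."""
    def __init__(self, fpga_ids: List[str]):
        self.fpgas: Dict[str, FpgaRecord] = {f: FpgaRecord() for f in fpga_ids}

    def assign_floor_plan(self, fpga_id: str, floor_plan: str, slot_count: int) -> None:
        self.fpgas[fpga_id] = FpgaRecord(
            floor_plan=floor_plan,
            slots={i: SlotAssignment() for i in range(slot_count)},
        )

    def assign_slot(self, fpga_id: str, slot_id: int, vnf_id: str, accelerator: str) -> None:
        slot = self.fpgas[fpga_id].slots[slot_id]
        if slot.owner_vnf not in (None, vnf_id):
            raise RuntimeError(f"slot {slot_id} on {fpga_id} already owned by {slot.owner_vnf}")
        slot.owner_vnf, slot.accelerator = vnf_id, accelerator

    def free_slots(self, fpga_id: str) -> List[int]:
        return [i for i, s in self.fpgas[fpga_id].slots.items() if s.owner_vnf is None]

mgr = FpgaResourceManager(["fpga-0", "fpga-1"])
mgr.assign_floor_plan("fpga-0", "fp_4slot", slot_count=4)
mgr.assign_slot("fpga-0", 0, vnf_id="vnf-modem", accelerator="turbo_decoder.bit")
print(mgr.free_slots("fpga-0"))   # -> [1, 2, 3]
```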
[0087] In reference to FIG. 3, an exemplary process 300 of the interactions between the VNF, FPGA resource manager, and the FPGA to load Floor Plans (FP) and accelerators (AC) into FPGAs is depicted. FIG. 3 shows the infrastructure and VNF layers, as well as the NFV management and orchestration plane, from FIG. 2. The virtualization layer shows a single VNF and the infrastructure layer shows a single FPGA.
[0088] An infrastructure compartment/device/layer 301 comprising at least one FPGA as described herein is electronically coupled to a Virtual Network Function 302 comprising a set of Software Access Stacks (SASs), acceleration functions (AC), and Floor Plans (FPs). The Virtual Network Function 302 is electronically coupled to Interface 1. The two interfaces, Interface 1 and Interface 2, may be virtual or real, local or network IP communication. The VNF needs to deploy hardware acceleration as part of its normal function. To begin the process, the VNF authenticates with the FPGA resource manager to confirm that it has valid access to resources. After the authentication process (1), the FPGA resource manager responds to a configuration inquiry (2) indicating which FPGAs have available resources for floor plans and/or slots. After the configuration inquiry, the VNF requests its desired resources (3) and the FPGA resource manager assigns those resources to the VNF (4). Assuming nonhomogeneous slot sizes, the configuration response would include FPGA resource information (RAM, logic, DSP, etc.) for each slot. The resource request/assignment assigns specific FPGAs and/or FPGA slots to the VNF. The VNF follows by loading the floor plan (5) and/or the accelerator (6) into the specified FPGA.
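Purely as an illustrative sketch, the numbered interactions of FIG. 3 can be summarized in the Python below: authentication (1), configuration inquiry (2), resource request and assignment (3)-(4), and loading of the floor plan (5) and accelerator (6). The stub classes, method names, and bit-file names are assumptions introduced for this example and do not correspond to a defined API of the disclosure.

```python
class ResourceManagerStub:
    """In-memory stand-in for the FPGA resource manager messages of FIG. 3."""
    def __init__(self):
        self.free = {"fpga-0": [0, 1], "fpga-1": [0]}   # fpga id -> free slot ids

    def authenticate(self, vnf_id, token):              # step (1)
        assert token == "valid", "authentication failed"
        return {"vnf": vnf_id}

    def query_configuration(self, session):             # step (2)
        return {f: list(slots) for f, slots in self.free.items() if slots}

    def request_resources(self, session, available):    # steps (3) and (4)
        fpga_id = next(iter(available))
        slot_id = self.free[fpga_id].pop(0)
        return fpga_id, slot_id

class FpgaStub:
    def load_floor_plan(self, fpga_id, floor_plan):      # step (5)
        print(f"{fpga_id}: floor plan {floor_plan} loaded")

    def load_accelerator(self, fpga_id, slot_id, accel): # step (6)
        print(f"{fpga_id}/slot{slot_id}: accelerator {accel} loaded")

def deploy_acceleration(vnf_id, rm, fpga, floor_plan, accelerator):
    session = rm.authenticate(vnf_id, "valid")
    available = rm.query_configuration(session)
    fpga_id, slot_id = rm.request_resources(session, available)
    fpga.load_floor_plan(fpga_id, floor_plan)
    fpga.load_accelerator(fpga_id, slot_id, accelerator)
    return fpga_id, slot_id

print(deploy_acceleration("vnf-modem", ResourceManagerStub(), FpgaStub(),
                          "fp_2slot", "dvb_s2_decoder.bit"))
```

In a configuration with nonhomogeneous slot sizes, the configuration response in step (2) would also carry per-slot resource figures (RAM, logic, DSP) so the VNF could request a slot that fits its accelerator.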
[0089] While the present invention is described with respect to what is presently considered to be the preferred embodiments, it is understood that the invention is not limited to the disclosed embodiments. The present invention is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.
[0090] Furthermore, it is understood that this invention is not limited to the particular methodology, materials and modifications described and as such may, of course, vary. It is also understood that the terminology used herein is for the purpose of describing particular aspects only and is not intended to limit the scope of the present invention, which is limited only by the appended claims.
[0091] Although the invention has been described in some detail by way of illustration and example for purposes of clarity of understanding, it should be understood that certain changes and modifications may be practiced within the scope of the appended claims. Modifications of the above-described modes for carrying out the invention that would be understood in view of the foregoing disclosure or made apparent with routine practice or implementation of the invention to persons of skill in electrical engineering, telecommunications, computer science, and/or related fields are intended to be within the scope of the following claims.
[0092] All publications (e.g., Non-Patent Literature), patents, patent application publications, and patent applications mentioned in this specification are indicative of the level of skill of those skilled in the art to which this invention pertains. All such publications (e.g., Non-Patent Literature), patents, patent application publications, and patent applications are herein incorporated by reference to the same extent as if each individual publication, patent, patent application publication, or patent application was specifically and individually indicated to be incorporated by reference.

Claims

We claim:
1. A network function virtualization (NFV) system comprising a plurality of Dynamic Partial Reconfigurable (DPR) Field Programmable Gate Arrays (FPGA) comprising: a plurality of slots each configured with an accelerator and electrically coupled to a plurality of network connections and a plurality of host interfaces; and a plurality of software access stacks configured with an accelerated software application to support the accelerators, wherein the network connections are configured to exchange data between the accelerator and a network, wherein the host interfaces are configured to exchange data between the accelerator and the software access stacks; coupled to a virtual network function layer comprising a virtual network coupled to a plurality of Virtual Network Functions (VNFs) each comprising Software Access Stacks (SASs), accelerators (AC), and Floor Plans (FPs), wherein the VNFs are instantiated via a hypervisor/containerizer.
2. The system of claim 1, wherein the software access stack comprises accelerated software applications, application programming interfaces, a hardware abstraction layer, and hardware, all configured with an accelerated software application to support the accelerator functions.
3. The system of claim 2, wherein the hardware comprises a CPU, memory, driver, and combinations thereof.
4. The system of any one of claims 1-3, further comprising a shell comprising a virtual bus, network interfaces, and host interfaces configured to support the DPR FPGA.
5. The system of any one of claims 1-4, wherein the hypervisor provides the mechanisms to support virtualization.
6. The system of any one of claims 1-4, wherein the hypervisor/containerizer virtualizes the DPR FPGA resources, optionally virtualizing CPU, memory, and networking resources.
7. The system of any one of claims 1-6, wherein the DPR FPGA system comprises about 1-10 slots configured with an accelerator.
8. The system of any one of claims 1-7, wherein the DPR FPGA system comprises about 1, 2, 3, 4, 5, 6, 7, 8, 9, or 10 slots configured with an accelerator.
9. The system of any one of claims 1-8, further comprising an NFV Management and Orchestration system that supports at least one business application, comprising a service orchestrator, an NFV manager, and an infrastructure manager, wherein the service orchestrator is configured to provide operational and functional processes involved in designing, creating, and delivering an end-to-end service by deploying VNFs to support business applications, wherein the NFV manager is configured to support VNF lifecycle management, and wherein the infrastructure manager comprises an FPGA resource manager configured to coordinate the assignment of floor plans to the plurality of FPGAs in the infrastructure layer and the assignment of accelerators to the slots of the floor plans of the FPGAs.

10. A computing cloud system comprising the NFV system of any one of claims 1-9.

11. A local server comprising the NFV system of any one of claims 1-9.

12. An edge computing system comprising the NFV system of any one of claims 1-9.

13. A virtual network function system comprising
(a) a plurality of Dynamic Partial Reconfigurable (DPR) Field Programmable Gate Arrays (FPGAs) configured to be available for loads from a plurality of Virtual Network Functions (VNFs), wherein each VNF comprises acceleration functions and floor plans that can be loaded into FPGA resources, and wherein each VNF comprises software access stacks configured with accelerators deployed on a plurality of FPGAs; and
(b) an FPGA resource manager configured to manage the assignment of accelerators to slots and floor plans to FPGAs.

14. The system of claim 13, wherein the FPGA floor plans are preconfigured, deployed by the FPGA resource manager from a database, or are contained as binaries from the virtual network function.

15. The system of claim 13 or 14, wherein the plurality of FPGAs are configured with a plurality of different floor plans, wherein each floor plan is configured with a plurality of slots, and wherein each slot hosts an accelerator.

16. The system of claim 15, wherein the accelerator is pre-configured, or can be deployed by a virtual network function.

17. The system of any one of claims 13-16, wherein the VNFs are virtual machines, software container network functions, software containers deployed on virtual machines, or a combination thereof.

18. The system of any one of claims 13-17, wherein the plurality of FPGAs are managed in an infrastructure layer and are configured to be used for hardware acceleration by a plurality of VNFs.

19. The system of claim 18, wherein the infrastructure layer comprises a plurality of FPGAs and CPU hosts configured to execute/serve other virtual functions.

20. The system of any one of claims 13-19, wherein the FPGA resource manager loads floor plans and accelerators into FPGA slots on behalf of the VNF.

21. The system of any one of claims 13-20, wherein the VNFs are configured to deploy floor plans and accelerators directly into accelerators without an FPGA manager.

22. A method to load Dynamic Partial Reconfigurable (DPR) Field Programmable Gate Array (FPGA) slots with acceleration functions from Virtual Network Functions, comprising:
(a) a Virtual Network Function (VNF) communicating with an FPGA resource manager on at least one data interface;

(b) the FPGA resource manager manages and communicates with a plurality of FPGAs on the at least one data interface;
(c) the FPGA resource manager manages, assigns, and communicates the available FPGA resources to the VNF;
(d) the VNF authenticates with the FPGA resource manager to send requests for configurations information and available resources;
(e) the VNF inquires on the available FPGA resources through the FPGA resource manager;
(f) the VNF requests the FPGA for loading the floor plans; and
(g) the VNF requests the slot for loading of the accelerator.

23. The method of claim 22, wherein the VNF comprises a plurality of VNFs configured to use a plurality of FPGAs for deploying accelerators and floor plans.

24. The method of claim 22 or 23, wherein the plurality of VNFs comprise software access stacks, accelerators, and floor plans, all electronically coupled and configured to access, manage, and/or virtualize the FPGA resources.

25. The method of any one of claims 22-24, wherein the plurality of VNFs are deployed onto a plurality of FPGAs.

26. The method of any one of claims 22-25, wherein the FPGA floor plan comprises a plurality of network and host interfaces.

27. A method to load Dynamic Partial Reconfigurable (DPR) Field Programmable Gate Array (FPGA) slots with acceleration functions from Virtual Network Functions, comprising:
(a) a Virtual Network Function (VNF) communicating with a VNF manager on at least one data interface;

(b) the VNF manager manages and communicates with a plurality of FPGAs on the at least one data interface;
(c) the VNF manager manages, assigns, and communicates the available FPGA resources to the VNF;
(d) the VNF authenticates with the VNF manager to send requests for configurations information and available resources;
(e) the VNF inquires on the available FPGA resources through the VNF manager;
(f) the VNF requests the FPGA for loading the floor plans; and
(g) the VNF requests the slot for loading of the accelerator.
PCT/US2022/042356 2021-09-01 2022-09-01 Virtualized medium access ecosystem and methods WO2023034512A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163239675P 2021-09-01 2021-09-01
US63/239,675 2021-09-01

Publications (1)

Publication Number Publication Date
WO2023034512A1 true WO2023034512A1 (en) 2023-03-09

Family

ID=85411553

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2022/042356 WO2023034512A1 (en) 2021-09-01 2022-09-01 Virtualized medium access ecosystem and methods

Country Status (1)

Country Link
WO (1) WO2023034512A1 (en)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170371692A1 (en) * 2016-06-22 2017-12-28 Ciena Corporation Optimized virtual network function service chaining with hardware acceleration
US20190220703A1 (en) * 2019-03-28 2019-07-18 Intel Corporation Technologies for distributing iterative computations in heterogeneous computing environments

Non-Patent Citations (1)

Title
NIEMIEC, G ET AL.: "A Survey on FPGA Support for the Feasible Execution of Virtualized Network Functions", IEEE COMMUNICATIONS SURVEYS & TUTORIALS, vol. 22, no. 1, 25 September 2022 (2022-09-25), pages 504 - 525, XP011778010, Retrieved from the Internet <URL:https://ieeexplore.ieee.org/document/8848427> [retrieved on 20221122], DOI: 10.1109/COMST.2019.2943690 *

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN117312233A (en) * 2023-11-28 2023-12-29 苏州元脑智能科技有限公司 Field programmable gate array chip, construction method thereof and accelerator equipment
CN117312233B (en) * 2023-11-28 2024-02-23 苏州元脑智能科技有限公司 Field programmable gate array chip, construction method thereof and accelerator equipment


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22865561

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE