US10445850B2 - Technologies for offloading network packet processing to a GPU

Info

Publication number
US10445850B2
Authority
US
United States
Prior art keywords
gpu, application, network device, estimated, performance metrics
Legal status
Active, expires
Application number
US14/836,142
Other versions
US20170061566A1 (en)
Inventor
Alexander W. Min
Shinae WOO
Jr-Shian Tsai
Janet Tseng
Tsung-Yuan C. Tai
Current Assignee
Intel Corp
Original Assignee
Intel Corp
Application filed by Intel Corp
Priority to US14/836,142
Assigned to INTEL CORPORATION. Assignors: TAI, TSUNG-YUAN C.; TSAI, JR-SHIAN; Tseng, Janet; MIN, ALEXANDER W.; WOO, Shinae
Priority to PCT/US2016/044012 (published as WO2017034731A1)
Priority to CN201680043884.6A (published as CN107852413B)
Publication of US20170061566A1
Application granted
Publication of US10445850B2
Status: Active; expiration adjusted

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00: General purpose image data processing
    • G06T1/20: Processor architectures; processor configuration, e.g. pipelining
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00: Arrangements for monitoring or testing data switching networks
    • H04L43/08: Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L43/0805: Monitoring or testing by checking availability
    • H04L43/0817: Monitoring or testing by checking functioning
    • H04L43/0876: Network utilisation, e.g. volume of load or congestion level
    • H04L47/00: Traffic control in data switching networks
    • H04L47/70: Admission control; Resource allocation
    • H04L47/82: Miscellaneous aspects
    • H04L47/822: Collecting or measuring resource availability data
    • H04L47/83: Admission control; Resource allocation based on usage prediction
    • H04L49/00: Packet switching elements
    • H04L49/10: Packet switching elements characterised by the switching fabric construction
    • H04L49/45: Arrangements for providing or supporting expansion
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • H04L67/1097: Protocols for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]

Definitions

  • a level of performance can become more difficult to predict as additional network packet processing applications utilize the GPGPU as an offloading engine or an accelerator.
  • a GPGPU-accelerated application may not be aware of and/or may not be able to communicate with another GPGPU-accelerated application, which can result in inefficient and/or uncoordinated usage of the GPGPU. More specifically, if the first GPGPU-accelerated application is fully utilizing resources of the GPGPU, offloading the second GPGPU-accelerated application may result in performance degradation due to resource contention, etc.
  • FIG. 1 is a simplified block diagram of at least one embodiment of a system that includes a network device for offloading network packet processing to a graphics processing unit (GPU) of the network device;
  • FIG. 2 is a simplified block diagram of at least one embodiment of the network device of the system of FIG. 1 ;
  • FIG. 3 is a simplified block diagram of at least one embodiment of an environment that may be established by the network device of FIG. 2 ;
  • FIG. 4 is a simplified block diagram of another embodiment of an environment that may be established by the network device of FIG. 2 ;
  • FIGS. 5 and 6 are a simplified flow diagram of at least one embodiment of a method for offloading network packet processing to the GPU of the network device of FIG. 2 .
  • references in the specification to “one embodiment,” “an embodiment,” “an illustrative embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
  • items included in a list in the form of “at least one of A, B, and C” can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).
  • items listed in the form of “at least one of A, B, or C” can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).
  • the disclosed embodiments may be implemented, in some cases, in hardware, firmware, software, or any combination thereof.
  • the disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors.
  • a machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or other media device).
  • a system 100 for offloading network packet processing to a graphics processing unit includes a computing device 102 and a remote computing device 110 in communication over a network 104 via one or more network devices 106 .
  • the network device 106 facilitates the transmission of network packets (e.g., based on workload type, flow information, etc.) between the computing device 102 and the remote computing device 110 over the network 104 .
  • the computing device 102 may request data from the remote computing device 110 by sending one or more network packets that indicate the computing device 102 is requesting data from the remote computing device 110 .
  • the remote computing device 110 may attempt to transmit response data (e.g., a payload, a message body, etc.) via one or more network packets to the computing device 102 across the network 104 .
  • the network packets are processed by the network devices 106 prior to being forwarded along.
  • a network device 106 may allocate a number of computing resources for one or more virtual machines (VMs) to perform various network functions or services (e.g., firewall services, network address translation (NAT) services, load-balancing services, deep packet inspection (DPI) services, transmission control protocol (TCP) optimization services, 4G/LTE network services, etc.) on the network packets.
  • the network device 106 can process each network packet, such as to determine where to route the network packets, whether the network packets should be dropped, etc. To do so, one or more of the VMs may be configured to perform a particular service that can be used to process the network packets.
  • Each VM may perform the relevant processing of the network packets, based on the service for which it was configured, using either a graphics processing unit (GPU) of the network device 106 (see, e.g., the GPU 210 of FIG. 2 ) or a central processing unit (CPU) of the network device 106 (see, e.g., the CPU 202 of FIG. 2 ).
  • the network device 106 estimates an impact on a performance metric (e.g., a level of performance) for each of the CPU and GPU to determine whether to perform the service (i.e., the portion of the processing for which that particular VM is configured) on the network packets at either the CPU or the GPU.
  • the network device 106 estimates how the CPU might perform if the processing were to be performed on the CPU and how the GPU might perform if the processing were to be performed on the GPU. Based on the performance metric estimations, the network device 106 can then determine whether to perform the service on the CPU or the GPU.
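As a rough illustration of this estimate-then-choose step, the Python sketch below picks a processor from two hypothetical latency estimators. The function names and the single latency metric are assumptions for illustration, not terms from the patent.

```python
# Minimal sketch of the estimate-then-choose step described above. The two
# estimator callables and the "latency" metric are hypothetical placeholders;
# the patent does not prescribe a specific estimator.
def choose_processor(estimate_cpu_latency_ms, estimate_gpu_latency_ms):
    cpu_latency = estimate_cpu_latency_ms()  # estimated latency if run on the CPU
    gpu_latency = estimate_gpu_latency_ms()  # estimated latency if run on the GPU
    # Lower latency is better, so offload only when the GPU estimate wins.
    return "GPU" if gpu_latency < cpu_latency else "CPU"


print(choose_processor(lambda: 0.8, lambda: 0.5))  # -> GPU
```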
  • the computing device 102 may be embodied as any type of computation or computing device capable of performing the functions described herein, including, without limitation, a computer, a desktop computer, a smartphone, a workstation, a laptop computer, a notebook computer, a tablet computer, a mobile computing device, a wearable computing device, a network appliance, a web appliance, a distributed computing system, a processor-based system, and/or a consumer electronic device.
  • the remote computing device 110 may be embodied as any type of computation or computing device capable of performing the functions described herein, including, without limitation, a computer, a desktop computer, a smartphone, a workstation, a laptop computer, a notebook computer, a tablet computer, a mobile computing device, a wearable computing device, a network appliance, a web appliance, a distributed computing system, a processor-based system, and/or a consumer electronic device.
  • Each of the computing device 102 and the remote computing device 110 may include components commonly found in a computing device such as a processor, memory, input/output subsystem, data storage, communication circuitry, etc.
  • the network 104 may be embodied as any type of wired or wireless communication network, including cellular networks (e.g., Global System for Mobile Communications (GSM), 3G, Long Term Evolution (LTE), Worldwide Interoperability for Microwave Access (WiMAX), etc.), digital subscriber line (DSL) networks, cable networks (e.g., coaxial networks, fiber networks, etc.), telephony networks, local area networks (LANs) or wide area networks (WANs), global networks (e.g., the Internet), or any combination thereof. Additionally, the network 104 may include any number of network devices 106 as needed to facilitate communication between the computing device 102 and the remote computing device 110 .
  • the network device 106 may additionally be connected to a network controller 108 .
  • the network controller 108 may be embodied as, or otherwise include, any type of hardware, software, and/or firmware capable of providing a platform for performing the functions described herein, such a computing device, a multiprocessor system, a server (e.g., stand-alone, rack-mounted, blade, etc.), a network appliance, a compute device, etc.
  • the network controller 108 may be configured to store and/or maintain topology information of the network 104 (i.e., the arrangement and interconnectivity of the network devices 106 ) and/or network packet management information (e.g., network packet/flow management/processing information, policies corresponding to network packet types/flows, etc.).
  • the network controller 108 may be configured to function as a software-defined networking (SDN) controller, a network functions virtualization (NFV) management and orchestration (MANO), etc.
  • the network controller 108 may send (e.g., transmit, etc.) network flow information (e.g., network packet/flow policies) to the network devices 106 capable of operating in an SDN environment and/or a NFV environment.
  • the network device 106 may be embodied as any type of computing device capable of facilitating wired and/or wireless network communications between the computing device 102 and the remote computing device 110 .
  • the network device 106 may be embodied as a computing device, an access point, a router, a switch, a network hub, a storage device, a compute device, a multiprocessor system, a server (e.g., stand-alone, rack-mounted, blade, etc.), a network appliance (e.g., physical or virtual), etc.
  • an illustrative network device 106 includes a central processing unit (CPU) 202 , an input/output (I/O) subsystem 204 , a main memory 206 , a GPU memory 208 , a GPU 210 , a data storage device 212 , and communication circuitry 214 that includes a network interface card (NIC) 216 .
  • the network device 106 may include other or additional components, such as those commonly found in a network device (e.g., virtualization services, drivers, operating systems, schedulers, etc.). Additionally, in some embodiments, one or more of the illustrative components may be incorporated in, or otherwise form a portion of, another component.
  • the main memory 206 may be incorporated in the CPU 202 and/or the GPU memory 208 may be incorporated in the GPU 210 , in some embodiments. Additionally or alternatively, in some embodiments, the GPU memory 208 , or portions thereof, may be a part of the main memory 206 (e.g., integrated graphics such as Intel® Processor Graphics).
  • the CPU 202 may be embodied as any type of processor capable of performing the functions described herein.
  • the CPU 202 may be embodied as a single or multi-core processor(s), digital signal processor, microcontroller, or other processor or processing/controlling circuit.
  • the I/O subsystem 204 may be embodied as circuitry and/or components to facilitate input/output operations with the CPU 202 , the main memory 206 , the GPU 210 , the GPU memory 208 , and other components of the network device 106 .
  • the I/O subsystem 204 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, integrated sensor hubs, firmware devices, communication links (i.e., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.) and/or other components and subsystems to facilitate the input/output operations.
  • the I/O subsystem 204 may form a portion of a system-on-a-chip (SoC) and be incorporated, along with the CPU 202 , the GPU 210 , the GPU memory 208 , the main memory 206 , and other components of the network device 106 , on a single integrated circuit chip.
  • the GPU 210 illustratively includes an array of processor cores or parallel processors, each of which can execute a number of parallel and concurrent threads to handle specific types of GPU tasks.
  • the processor cores of the GPU 210 may be configured to individually handle 3D rendering tasks, blitter (e.g., 2D graphics), video, and video encoding/decoding tasks, by providing electronic circuitry that can perform mathematical operations rapidly using extensive parallelism and many concurrent threads.
  • the GPU 210 is generally capable of parallelizing network packet processing (e.g., internet protocol (IP) forwarding, hashing, pattern matching, etc.) via the processor cores of the GPU 210 .
  • the GPU 210 can be an alternative to the CPU 202 for performing at least a portion of the processing of the network packet.
  • Using the GPU 210 can free up resources of the CPU 202 (e.g., memory, cache, processor cores, communication bus bandwidth, etc.), which can be dedicated to other tasks, such as application performance management.
  • graphics processing unit or “GPU” may be used herein to refer to, among other things, a graphics processing unit, a graphics accelerator, or other type of specialized electronic circuit or device, such as a general purpose GPU (GPGPU) or any other device or circuit that is configured to be used by the network device 106 to accelerate network packet processing tasks and/or perform other parallel computing operations that would benefit from accelerated processing, such as network traffic monitoring.
  • the GPU 210 may be embodied as a peripheral device (e.g., on a discrete graphics card), or may be located on the CPU 202 motherboard or on the CPU 202 die.
  • the GPU memory 208 (e.g., integrated graphics such as Intel® Processor Graphics) and the main memory 206 may be embodied as any type of volatile or non-volatile memory or data storage capable of performing the functions described herein.
  • the main memory 206 may store various data and software used during operation of the network device 106 , such as operating systems, applications, programs, libraries, and drivers.
  • portions of the main memory 206 may at least temporarily store command buffers and GPU commands that are created by the CPU 202 .
  • portions of the GPU memory 208 may at least temporarily store the GPU commands received from the main memory 206 by, e.g., direct memory access (DMA).
  • the GPU memory 208 is communicatively coupled to the GPU 210 , and the main memory 206 is similarly communicatively coupled to the CPU 202 via the I/O subsystem 204 .
  • the GPU memory 208 , or portions thereof, may be a part of the main memory 206 , and both the CPU 202 and the GPU 210 may have access to the GPU memory 208 .
  • the data storage device 212 may be embodied as any type of device or devices configured for short-term or long-term storage of data such as, for example, memory devices and circuits, memory cards, hard disk drives, solid-state drives, or other data storage devices.
  • the data storage device 212 may include a system partition that stores data and firmware code for the network device 106 .
  • the data storage device 212 may also include an operating system partition that stores data files and executables for an operating system of the network device 106 .
  • the communication circuitry 214 may be embodied as any communication circuit, device, or collection thereof, capable of enabling communications over the network 104 between the network device 106 and the computing device 102 , another network device 106 , the network controller 108 , and/or the remote computing device 110 .
  • the communication circuitry 214 may be configured to use any one or more communication technology (e.g., wired or wireless communications) and associated protocols (e.g., Ethernet, Bluetooth®, Wi-Fi®, WiMAX, etc.) to effect such communication.
  • the communication circuitry 214 additionally includes a network interface card (NIC) 216 .
  • the NIC 216 may connect the computing device 102 , the remote computing device 110 , and/or another network device 106 to one of the network devices 106 .
  • the NIC 216 may be embodied as one or more add-in-boards, daughtercards, network interface cards, controller chips, chipsets, or other devices that may be used by the network device 106 .
  • the NIC 216 may be embodied as an expansion card coupled to the I/O subsystem 204 over an expansion bus, such as PCI Express.
  • the network device 106 establishes an environment 300 during operation.
  • the illustrative environment 300 includes a network communication module 310 , a performance monitoring module 320 , a GPU admission control module 330 , and a system resource management module 340 .
  • Each of the modules, logic, and other components of the environment 300 may be embodied as hardware, software, firmware, or a combination thereof.
  • each of the modules, logic, and other components of the environment 300 may form a portion of, or otherwise be established by, the CPU 202 or other hardware components of the network device 106 .
  • one or more of the modules of the environment 300 may be embodied as circuitry or collection of electrical devices (e.g., network communication circuitry 310 , GPU admission control circuitry 330 , performance monitoring circuitry 320 , system resource management circuitry 340 , etc.).
  • the network device 106 includes system resource utilization data 302 , application performance data 304 , performance estimation data 306 , and scheduling policy data 308 , each of which may be accessed by the various modules and/or sub-modules of the network device 106 .
  • the network device 106 may include other components, sub-components, modules, sub-modules, and/or devices commonly found in a network device, which are not illustrated in FIG. 3 for clarity of the description.
  • the network communication module 310 is configured to facilitate inbound and outbound network communications (e.g., network traffic, network packets, network flows, etc.) to and from the network device 106 , respectively. To do so, the network communication module 310 is configured to receive and process network packets from one computing device (e.g., the computing device 102 , another network device 106 , the remote computing device 110 ) and to prepare and transmit network packets to another computing device (e.g., the computing device 102 , another network device 106 , the remote computing device 110 ). Accordingly, in some embodiments, at least a portion of the functionality of the network communication module 310 may be performed by the communication circuitry 214 , and more specifically by the NIC 216 .
  • the performance monitoring module 320 is configured to monitor one or more performance metrics of various physical and/or virtual resources of the network device 106 .
  • the illustrative performance monitoring module 320 includes a system resource performance monitoring module 322 and an application performance monitoring module 324 .
  • the system resource performance monitoring module 322 is configured to monitor various system resource performance metrics, or statistics, of the network device 106 .
  • the system resource performance metrics may include any data indicative of a utilization or other statistic of one or more physical or virtual components or available resources of the network device 106 .
  • the system resource performance metrics may include such performance metrics as a CPU utilization, a GPU utilization, a memory utilization, cache hits/misses, a GPU thread occupancy, a cache miss rate, a translation lookaside buffer (TLB) miss, a page fault, etc.
  • the system resource performance monitoring module 322 may be configured to periodically read hardware and/or software (e.g., an operating system (OS)) performance counters. Additionally, in some embodiments, the system resource performance metrics may be stored in the system resource utilization data 302 .
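A minimal polling loop in this spirit might look like the following sketch, assuming the third-party psutil package as a stand-in for the hardware/OS counters and a caller-supplied hook for vendor-specific GPU counters.

```python
import time

import psutil  # third-party; stands in for hardware/OS performance counters


def poll_system_metrics(read_gpu_utilization, interval_s=1.0, samples=3):
    """Periodically sample system resource metrics (illustrative only).

    read_gpu_utilization is a hypothetical callable, since GPU counters
    are exposed through vendor-specific interfaces.
    """
    history = []
    for _ in range(samples):
        history.append({
            "cpu_utilization_pct": psutil.cpu_percent(),
            "memory_utilization_pct": psutil.virtual_memory().percent,
            "gpu_utilization_pct": read_gpu_utilization(),
        })
        time.sleep(interval_s)
    return history  # e.g., persisted as the system resource utilization data 302
```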
  • the application performance monitoring module 324 is configured to monitor application performance metrics, or statistics, of the applications presently running on the network device 106 .
  • application performance metrics may include any data indicative of the operation or related performance of an application executed by the network device 106 .
  • the application performance metrics may include such performance metrics as a throughput level, a cache usage, a memory usage, a packet processing delay, a number of transmitted/received packets, a packet loss/drop count or ratio, a latency level, a power consumption level, etc.
  • the application performance metrics may be stored in the application performance data 304 .
  • the application performance monitoring module 324 may be configured to interface with various VMs (e.g., of a service function chain) and/or applications capable of being executed by the network device 106 , such as those applications configured to perform at least a portion of the processing of a received network packet.
  • the application performance monitoring module 324 may be configured to periodically read hardware and/or software (e.g., a virtual switch) performance counters. For example, in some embodiments, a shim layer between application programming interface (API) calls and a device driver can intercept the API calls, as in the sketch below. In another example, specially defined APIs may be used between the application and the application performance monitoring module 324 .
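One way to picture such a shim layer is a wrapper that times each intercepted call before forwarding it unchanged. The sketch below is an illustrative Python decorator; record_metric and submit_gpu_command are hypothetical stand-ins for the monitoring callback and a driver entry point.

```python
import functools
import time


def intercepted(record_metric):
    """Wrap an API call so every invocation is timed and reported.

    record_metric is a hypothetical callback into the application
    performance monitoring module; the wrapper only measures wall-clock
    duration and forwards the call unchanged.
    """
    def decorator(api_call):
        @functools.wraps(api_call)
        def shim(*args, **kwargs):
            start = time.perf_counter()
            result = api_call(*args, **kwargs)
            record_metric(api_call.__name__, time.perf_counter() - start)
            return result
        return shim
    return decorator


@intercepted(lambda name, seconds: print(f"{name}: {seconds:.6f}s"))
def submit_gpu_command(command):  # hypothetical driver entry point
    return command
```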
  • the GPU admission control module 330 is configured to determine whether to admit a network packet processing application (e.g., firewall services, NAT services, load-balancing services, DPI services, TCP optimization services, 4G/LTE network services, etc.) to be scheduled for the GPU 210 . To do so, the illustrative GPU admission control module 330 includes a resource criteria determination module 332 , a GPU performance estimation module 334 , a CPU performance estimation module 336 , and a GPU admission determination module 338 .
  • the resource criteria determination module 332 is configured to determine system resources required (i.e., resource criteria) to execute (i.e., run) an application (e.g., the network packet processing application).
  • the resource criteria include any data that defines a performance requirement, such as a maximum latency, a minimum throughput, a minimum amount of one or more system resources of the network device required to run the application, an amount of available processing power, an amount of available memory, etc.
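For concreteness, the resource criteria could be carried in a record like the sketch below; the field names and units are assumptions, not patent terminology.

```python
from dataclasses import dataclass


@dataclass
class ResourceCriteria:
    # All fields are illustrative; the patent only requires that the criteria
    # capture performance requirements and minimum resource amounts.
    max_latency_ms: float       # maximum tolerable packet processing delay
    min_throughput_mbps: float  # minimum required throughput
    min_memory_mb: int          # minimum memory needed to run the application
    min_cores: int              # minimum number of processing cores


criteria = ResourceCriteria(max_latency_ms=2.0, min_throughput_mbps=5_000,
                            min_memory_mb=256, min_cores=4)
```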
  • the GPU performance estimation module 334 is configured to determine one or more estimated performance metrics of the GPU 210 (i.e., estimated GPU performance metrics) based on the present state of the system resources of the network device 106 available to process a network packet and determine an impact on the performance metric if the application were to be scheduled for and processed by the GPU 210 . To do so, the GPU performance estimation module 334 may receive or access, for example, the performance data generated by the system resource performance monitoring module 322 and/or the application performance monitoring module 324 that relate to the GPU 210 . That is, the estimated GPU performance metrics may include any data indicative of an estimated utilization, operation, or other performance level metric of the GPU 210 , if the GPU 210 were scheduled to run the application, taking into account a present workload of the GPU 210 .
  • the CPU performance estimation module 336 is configured to estimate one or more estimated performance metrics of the CPU 202 (i.e., estimated CPU performance metrics) based on the present state of the system resources of the network device 106 available to process the network packet and determine an impact on the performance metric if the application were to be scheduled for and processed by the CPU 202 . To do so, the CPU performance estimation module 336 may receive or access, for example, the performance data generated by the system resource performance monitoring module 322 and/or the application performance monitoring module 324 that relate to the CPU 202 . That is, the estimated CPU performance metrics may include any data indicative of an estimated utilization, operation, or other performance level metric of the CPU 202 , if the CPU 202 were scheduled to run the application, taking into account a present workload of the CPU 202 .
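The patent does not prescribe an estimation formula. As one illustrative assumption, an estimator could scale a per-processor baseline by the headroom left under the present workload, as in the sketch below; the baseline figures and the linear model are placeholders.

```python
def estimate_throughput_mbps(baseline_mbps, current_utilization_pct):
    # Toy linear model: the busier the processor, the less throughput is
    # left for the new application. Both the baseline figures and the
    # linear scaling are illustrative assumptions.
    headroom = max(0.0, 1.0 - current_utilization_pct / 100.0)
    return baseline_mbps * headroom


gpu_estimate = estimate_throughput_mbps(baseline_mbps=40_000, current_utilization_pct=35)
cpu_estimate = estimate_throughput_mbps(baseline_mbps=10_000, current_utilization_pct=60)
print(gpu_estimate, cpu_estimate)  # 26000.0 4000.0
```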
  • the GPU admission determination module 338 is configured to determine whether a sufficient level of GPU resources exist to meet the system resource demand of the application (i.e., the resource criteria). To do so, the GPU admission determination module 338 may be configured to retrieve present GPU utilization statistics and compare the present GPU utilization statistics to the resource criteria.
  • the GPU admission determination module 338 is further configured to analyze the resource criteria of the application (e.g., as determined by resource criteria determination module 332 ) and the estimated performance metrics (e.g., as determined by the GPU performance estimation module 334 and/or the CPU performance estimation module 336 ) to determine whether to admit (i.e., run, schedule, etc.) the application for the GPU 210 .
  • additional and/or alternative utilization statistics may be used to determine whether to run the application on the GPU 210 or the CPU 202 , such as a number of applications running on the GPU, a maximum number of applications that may be run on the GPU, a maximum number of GPU cores to put in use at the same time, etc.
  • one or more of the performance metrics may be weighted such that a particular performance metric is given more weight than another performance metric when determining whether to run the network processing application on the GPU 210 or the CPU 202 .
  • additional and/or alternative analysis may be performed to determine whether to run the application on the GPU 210 or the CPU 202 , such as a performance history of running like applications on the GPU 210 , for example.
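A weighted comparison of the kind described above could be as simple as the sketch below, where metric values are assumed to be pre-normalized so that higher is better, and the metric names and weights are operator-chosen assumptions.

```python
def weighted_score(metrics, weights):
    # Composite score: each (pre-normalized, higher-is-better) metric
    # contributes in proportion to its weight.
    return sum(metrics[name] * weight for name, weight in weights.items())


weights = {"throughput": 0.7, "cores_free": 0.3}            # illustrative weights
gpu_score = weighted_score({"throughput": 0.9, "cores_free": 0.4}, weights)
cpu_score = weighted_score({"throughput": 0.6, "cores_free": 0.8}, weights)
print("GPU" if gpu_score > cpu_score else "CPU")            # -> GPU
```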
  • one or more of the estimated performance metrics determined by the GPU performance estimation module 334 and/or the CPU performance estimation module 336 may be stored in the performance estimation data 306 .
  • the system resource management module 340 is configured to manage the allocation of system resources (e.g., computing resources, storage resources, network resources, etc.) of the network device 106 after receiving the network packet or performing another network processing function on the packet (e.g., in a service function chain). To do so, the system resource management module 340 may be capable of instantiating (i.e., creating) VMs, suspending VMs, shutting down (i.e., closing) VMs, and redirecting network traffic to either the GPU 210 or the CPU 202 for more efficient processing (e.g., faster processing, a more efficient level of resource usage, etc.).
  • the “more efficient” processor may be determined by a GPU performance metric that is higher or lower than a CPU performance metric, depending on the particular performance metric being compared. For example, an improved throughput metric may be of a higher value while an improved latency may be of a lower value.
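That direction-dependence can be made explicit with a small lookup, as in this sketch; the direction table itself is an assumption for illustration.

```python
HIGHER_IS_BETTER = {"throughput": True, "latency": False}  # illustrative table


def is_improvement(metric_name, gpu_value, cpu_value):
    # "Better" depends on the metric: throughput improves upward,
    # latency improves downward.
    if HIGHER_IS_BETTER[metric_name]:
        return gpu_value > cpu_value
    return gpu_value < cpu_value


assert is_improvement("throughput", 40_000, 10_000)  # higher throughput wins
assert is_improvement("latency", 2.0, 5.0)           # lower latency wins
```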
  • the system resource allocation may be based on one or more scheduling policies including instructions on which network packets are permissible to schedule to the GPU 210 . In such embodiments, the scheduling policies may be received from the network controller 108 , for example, and/or stored in the scheduling policy data 308 .
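A scheduling policy of this kind might reduce to a table keyed by packet or application class, as in the sketch below. The classes and entries are hypothetical; a real table would be populated from the network controller 108 and kept in the scheduling policy data 308.

```python
# Hypothetical policy table: which traffic classes may be scheduled to the GPU.
SCHEDULING_POLICY = {
    "dpi": {"gpu_allowed": True},
    "firewall": {"gpu_allowed": True},
    "control_plane": {"gpu_allowed": False},  # keep latency-critical flows on the CPU
}


def gpu_permitted(traffic_class):
    # Unknown classes default to the CPU as the conservative choice.
    return SCHEDULING_POLICY.get(traffic_class, {}).get("gpu_allowed", False)


print(gpu_permitted("dpi"), gpu_permitted("control_plane"))  # True False
```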
  • the illustrative system resource management module 340 includes a GPU scheduler module 342 and a CPU scheduler module 344 .
  • the GPU scheduler module 342 is configured to schedule processes (e.g., the network processing application) for execution by the GPU 210 , such as may be triggered by receiving GPU commands issued by the GPU admission control module 330 (e.g., the GPU admission determination module 338 upon a determination that the GPU is to execute the network packet processing application).
  • the GPU scheduler module 342 may select a scheduling policy from a number of possible scheduling policies (e.g., from the scheduling policy data 308 ), based on one or more attributes of the GPU commands, GPU command buffer dependencies, and/or other decision criteria, and schedules the GPU commands according to the selected scheduling policy.
  • the GPU scheduler module 342 communicates with the applications in the VMs to control the submission of GPU commands to the GPU 210 .
  • results of the execution may be stored in the application performance data 304 for future reference, such as by the performance monitoring module 320 .
  • the CPU scheduler module 344 is configured to schedule processes (e.g., the network processing application) for execution by the CPU 202 , such as after an event (e.g., running an application on an instantiated VM to process or otherwise perform a service on a network packet) or an interrupt.
  • the CPU 202 may have one or more cores (e.g., a multi-core processor).
  • the CPU scheduler module 344 may be further configured to schedule the process to a particular core of the CPU 202 based on available system resources of the network device 106 .
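On Linux, pinning the scheduled process to a chosen core can be done with os.sched_setaffinity. Picking the least-loaded core from per-core utilization figures, as in the sketch below, is an illustrative heuristic rather than the patent's method.

```python
import os


def pin_to_least_loaded_core(per_core_utilization_pct):
    # Choose the core with the lowest reported utilization (illustrative
    # heuristic) and restrict the calling process (pid 0) to it.
    core = min(range(len(per_core_utilization_pct)),
               key=per_core_utilization_pct.__getitem__)
    os.sched_setaffinity(0, {core})  # Linux-only API
    return core
```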
  • the illustrative operational environment 400 includes the performance monitoring module 320 , the GPU admission control module 330 , and the system resource management module 340 of FIG. 3 , as well as a virtual switch 410 .
  • the network device 106 is executing a first virtual machine, which is designated as VM( 1 ) 402 , and a second virtual machine, which is designated as VM(N) 406 (i.e., the “Nth” virtual machine running on the network device 106 , wherein “N” is a positive integer and designates one or more additional virtual machines running on the network device 106 ).
  • Each of the VM( 1 ) 402 and the VM(N) 406 includes a corresponding application, a first application 404 and an “Nth” application 408 , respectively. It should be appreciated that one or more of the VMs 402 , 406 may run more than one application.
  • the applications 404 , 408 may correspond to any type of service or other network processing function presently being performed via the VMs 402 , 406 on the network packets, such as a firewall, NAT, load-balancing, DPI, TCP optimization, etc.
  • the VMs 402 , 406 may be configured to function as a service function chain comprised of a number of VMs to perform certain services on the network packets based on various factors, such as type, flow, workload, destination, etc.
  • the virtual switch 410 may be configured to manage the internal data transfer of network traffic related information.
  • the performance monitoring module 320 and/or the GPU admission control module 330 may receive mirrored and/or duplicated network packets that are to be processed internally (i.e., by the applications 404 , 408 running on the local VMs 402 , 406 ).
  • the GPU admission control module 330 may receive a mirrored or duplicated network packet to determine an estimation of the impact processing the network packet may have on a network packet processing application performance metric and/or a system resource performance metric.
  • the virtual switch 410 may be configured to facilitate the transfer of the mirrored and/or duplicated network traffic between the VMs 402 , 406 and the performance monitoring module 320 and/or the GPU admission control module 330 .
  • the illustrative operational environment 400 additionally includes system resources 412 that include the various components of the network device 106 , such as the CPU 202 , the main memory 206 , the GPU memory 208 , the GPU 210 , and the data storage 212 of FIG. 2 .
  • the system resource management module 340 is communicatively coupled to the system resources 412 , such that the system resource management module 340 can manage the system resource and facilitate the transfer of utilization data to the performance monitoring module 320 and the GPU admission control module 330 .
  • the network device 106 may execute a method 500 for offloading network packet processing to a GPU (e.g., the GPU 210 ) of the network device 106 .
  • the method 500 begins with block 502 , in which the network device 106 determines whether a GPU offload request was received. In other words, the network device 106 determines whether a network packet processing application is to be offloaded to the GPU 210 (i.e., processed by the GPU 210 rather than the CPU 202 ). If the network device 106 determines the GPU offload request was not received, the method 500 loops back to block 502 to continue monitoring for the GPU offload request. Otherwise, if the network device 106 determines the GPU offload request was received, the method 500 advances to block 504 .
  • the network device 106 determines resource criteria for the network packet processing application to be offloaded to the GPU 210 (i.e., that corresponds to the GPU offload request received at block 502 ). To do so, in block 506 , the network device 106 determines system resources required to run the application. As described previously, the system resources required to run the application may include any data indicative of a minimum or maximum threshold of a system resource required to run the application, such as a minimum amount of memory to allocate, a minimum level of compute power, etc. Additionally, in block 508 , the network device 106 determines performance requirements for running the application. As described previously, the performance requirements for running the application may include any data indicative of a minimum or maximum threshold of a performance level when running the application, such as a minimum throughput, a maximum latency, a maximum power consumption level, etc.
  • the network device 106 determines utilization information of the system resources of the network device 106 . To do so, in block 512 , the network device 106 determines one or more system resource performance metrics. As described previously, the system resource performance metrics may include any data indicative of a utilization or other statistic of one or more physical or virtual components or available resources of the network device 106 . The system resource performance metrics may include CPU utilization, GPU utilization, memory utilization, cache hits/misses, GPU thread occupancy, cache miss rate, TLB misses, page faults, etc.
  • the network device 106 determines available GPU resources, such as a number of available GPU cycles, a number of additional applications that the GPU can run, a GPU frequency, a number of available GPU cores, an amount of available GPU memory 208 , etc. Additionally, in block 516 , the network device 106 determines available CPU (e.g., the CPU 202 of FIG. 2 ) resources, such as a number of available cores of the CPU 202 , an amount of available cache, a number of processes that can be supported by the CPU 202 , an amount of available power of the CPU 202 , an amount of available main memory 206 , etc. In some embodiments, to determine the available GPU 210 resources in block 514 and the available CPU 202 resources in block 516 , the network device 106 may read hardware and/or software performance counters of the network device 106 .
  • available GPU resources such as a number of available GPU cycles, a number of additional applications that the GPU can run, a GPU frequency, a number of available GPU cores, an amount of available
  • the network device 106 determines one or more application performance metrics of the network packet processing application that corresponds to the GPU offload request received at block 502 .
  • the application performance metrics may include a throughput level, a cache usage, a memory usage, a packet processing delay duration, a number of transmitted/received packets, a packet loss/drop count or ratio, a latency level, a power consumption level, etc.
  • the network device 106 determines whether sufficient GPU 210 resources are available to process the application based on the available GPU resources determined in block 514 and the resource criteria for running the application determined in block 504 . For example, the network device 106 may determine whether sufficient GPU 210 resources are available to process the application based on a current utilization of the GPU 210 , a number of applications presently running on the GPU 210 , etc. If not, the method 500 advances to block 522 , wherein the network device 106 schedules the application for processing by the CPU 202 before the method returns to block 502 to determine whether another GPU offload request was received. Otherwise, if the network device 106 determines there are sufficient GPU 210 resources available to process the application, the method 500 advances to block 524 .
  • the network device 106 determines one or more estimated processing performance metrics for running the network packet processing application. To do so, in block 526 , the network device 106 determines estimated GPU performance metrics for running the application on the GPU 210 . Additionally, in block 528 , the network device 106 determines estimated CPU performance metrics for running the application on the CPU 202 .
  • the estimated GPU performance metrics determined at block 526 and/or the estimated CPU performance metrics determined at block 528 may include a throughput, a latency, a power consumption, a number of used/available processing cores during running of the application, etc.
  • the network device 106 analyzes the estimated processing performance metrics determined in block 524 . To do so, in block 534 , the network device 106 compares one or more of the estimated GPU performance metrics to one or more corresponding performance requirements of the application. Additionally, in block 532 , the network device 106 compares one or more of the estimated GPU performance metrics against the estimated CPU performance metrics.
  • the network device 106 determines whether to offload the application to the GPU 210 based on at least a portion of the estimated GPU performance metrics, the estimated CPU performance metrics, and the system resource performance metrics. For example, in an embodiment wherein the application has a threshold application performance requirement, such as a maximum latency requirement, one of the estimated GPU performance metrics may be an estimated latency associated with the GPU 210 processing the application. Accordingly, in such an embodiment, if the estimated latency does not meet the maximum latency requirement of the application (i.e., is not less than), the network device may determine the application should be processed by the CPU 202 rather than being offloaded to the GPU 210 for execution. It should be appreciated that “meeting” a threshold application performance requirement may include being greater than or less than the threshold application performance requirement, depending on the threshold application performance requirement being compared.
  • In some embodiments, even if the estimated GPU performance metrics do not indicate an improvement over the estimated CPU performance metrics, the network device 106 may still determine to offload the application to the GPU 210 . In other words, freeing up the CPU 202 to perform other tasks may be more beneficial to the network device 106 than scheduling based solely on whether the GPU 210 is estimated to outperform the CPU 202 .
  • If the network device 106 determines not to offload the application, the method 500 branches to block 522 , wherein the network device 106 schedules the application for processing by the CPU 202 before the method returns to block 502 to determine whether another GPU offload request was received. Otherwise, if the network device 106 determines to offload the application to the GPU 210 , the method 500 advances to block 538 , wherein the network device 106 schedules the application for the GPU 210 (i.e., provides an indication to a scheduler of the GPU 210 to schedule the application for processing by the GPU 210 ). From block 538 , the method 500 loops back to block 502 to determine whether another GPU offload request was received; the full flow is recapped in the sketch below.
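Read end to end, blocks 502 through 538 can be summarized by the loop below. Every helper callable is a hypothetical stand-in for the modules described earlier, so this is a structural sketch rather than an implementation of the patent.

```python
def offload_loop(next_request, determine_criteria, gpu_resources_sufficient,
                 estimate_gpu, estimate_cpu, gpu_preferred,
                 schedule_on_gpu, schedule_on_cpu):
    while True:
        app = next_request()                                   # block 502
        criteria = determine_criteria(app)                     # blocks 504-508
        if not gpu_resources_sufficient(criteria):             # block 520
            schedule_on_cpu(app)                               # block 522
            continue
        gpu_metrics = estimate_gpu(app, criteria)              # block 526
        cpu_metrics = estimate_cpu(app, criteria)              # block 528
        if gpu_preferred(gpu_metrics, cpu_metrics, criteria):  # blocks 530-536
            schedule_on_gpu(app)                               # block 538
        else:
            schedule_on_cpu(app)                               # block 522
```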
  • An embodiment of the technologies disclosed herein may include any one or more, and any combination of, the examples described below.
  • Example 1 includes a network device to offload processing of a network packet to a graphics processing unit (GPU) of a network device, the network device comprising one or more processors; and one or more memory devices having stored therein a plurality of instructions that, when executed by the one or more processors, cause the network device to determine resource criteria of an application, wherein the resource criteria define a minimum amount of one or more system resources of the network device required to run the application; determine available GPU resources of the GPU of the network device; determine whether the available GPU resources are sufficient to process the application based on the resource criteria of the application and the available GPU resources; determine, in response to a determination that the available GPU resources are sufficient to process the application, one or more estimated GPU performance metrics based on the resource criteria of the application and the available GPU resources, wherein the estimated GPU performance metrics indicate an estimated level of performance of the GPU if the GPU were to run the application; and offload processing of the application to the GPU as a function of the one or more estimated GPU performance metrics.
  • Example 2 includes the subject matter of Example 1, and wherein the plurality of instructions, when executed by the one or more processors, further cause the network device to determine one or more estimated central processing unit (CPU) performance metrics of a CPU of the network device, wherein the estimated CPU performance metrics are determined based on available CPU resources and are indicative of an estimated level of performance of the CPU during a runtime of the application by the CPU; and compare the estimated GPU performance metrics and the estimated CPU performance metrics, and wherein to offload processing of the application to the GPU comprises to offload processing of the application to the GPU in response to a determination that the estimated GPU performance metrics are an improvement relative to the CPU performance metrics.
  • Example 3 includes the subject matter of any of Examples 1 and 2, and wherein the plurality of instructions, when executed by the one or more processors, further cause the network device to run the processing of the application at the CPU in response to a determination that the estimated GPU performance metrics are not an improvement relative to the CPU performance metrics.
  • Example 4 includes the subject matter of any of Examples 1-3, and wherein the plurality of instructions, when executed by the one or more processors, further cause the network device to determine utilization information for at least a portion of the system resources of the network device, wherein the utilization information is indicative of an amount at which the at least the portion of the system resources are presently utilized, and wherein to determine the one or more estimated GPU performance metrics is further based on the utilization information.
  • Example 5 includes the subject matter of any of Examples 1-4, and wherein the plurality of instructions, when executed by the one or more processors, further cause the network device to determine whether the estimated GPU performance metrics meet one or more predetermined performance metric thresholds that correspond to at least a portion of the estimated GPU performance metrics, wherein to offload the processing of the application to the GPU comprises to offload the processing of the application to the GPU in response to a determination that the estimated GPU performance metrics meet the predetermined performance metric thresholds.
  • Example 6 includes the subject matter of any of Examples 1-5, and wherein the plurality of instructions, when executed by the one or more processors, further cause the network device to schedule the processing of the application by a central processing unit (CPU) of the network device in response to a determination that the estimated GPU performance metrics do not meet the predetermined performance metric thresholds.
  • Example 7 includes the subject matter of any of Examples 1-6, and wherein to determine utilization information for at least a portion of the system resources of the network device comprises to determine at least one of a memory utilization, a command scheduling delay, a cache miss rate, a translation lookaside buffer (TLB) miss, and a page fault.
  • Example 8 includes the subject matter of any of Examples 1-7, and wherein the application comprises a network packet processing application for processing a network packet received by the network device.
  • Example 9 includes the subject matter of any of Examples 1-8, and wherein to determine the resource criteria of the application comprises to determine at least one of a minimum amount of memory available to store data related to the application, one or more dependencies of the application, and a minimum number of processing cycles to run the application.
  • Example 10 includes the subject matter of any of Examples 1-9, and wherein to determine the available GPU resources of the GPU comprises to determine at least one of a number of available cores of the GPU, a number of available cycles of the GPU, a number of total applications supported by the GPU, a number of other applications presently running on the GPU, and a present utilization percentage of the GPU.
  • Example 11 includes the subject matter of any of Examples 1-10, and wherein to determine the estimated GPU performance metrics comprises to determine at least one of a utilization of a plurality of cores of the GPU, a number of other applications presently running on the GPU, a present performance metric for each of the other applications presently running on the GPU, and a frequency rate of the GPU.
  • Example 12 includes a method for offloading processing of a network packet to a graphics processing unit (GPU) of a network device, the method comprising determining, by the network device, resource criteria of an application, wherein the resource criteria define a minimum amount of one or more system resources of the network device required to run the application; determining, by the network device, available GPU resources of the GPU of the network device; determining, by the network device, whether the available GPU resources are sufficient to process the application based on the resource criteria of the application and the available GPU resources; determining, by the network device and in response to a determination that the available GPU resources are sufficient to process the application, one or more estimated GPU performance metrics based on the resource criteria of the application and the available GPU resources, wherein the estimated GPU performance metrics indicate an estimated level of performance of the GPU if the GPU were to run the application; and offloading, by the network device, processing of the application to the GPU as a function of the one or more estimated GPU performance metrics.
  • Example 13 includes the subject matter of Example 12, and further including determining, by the network device, one or more estimated central processing unit (CPU) performance metrics of a CPU of the network device, wherein the estimated CPU performance metrics are determined based on available CPU resources and are indicative of an estimated level of performance of the CPU during a runtime of the application by the CPU; and comparing, by the network device, the estimated GPU performance metrics and the estimated CPU performance metrics, wherein offloading processing of the application to the GPU comprises offloading processing of the application to the GPU in response to a determination that the estimated GPU performance metrics are an improvement relative to the CPU performance metrics.
  • Example 14 includes the subject matter of any of Examples 12 and 13, and further including scheduling the processing of the application at the CPU in response to a determination that the estimated GPU performance metrics are not an improvement relative to the CPU performance metrics.
  • Example 15 includes the subject matter of any of Examples 12-14, and further including determining, by the network device, utilization information for at least a portion of the system resources of the network device, wherein the utilization information is indicative of an amount at which the at least the portion of the system resources are presently utilized, and wherein the determining the one or more estimated GPU performance metrics is further based on the utilization information.
  • Example 16 includes the subject matter of any of Examples 12-15, and further including determining, by the network device, whether the estimated GPU performance metrics meet one or more predetermined performance metric thresholds that correspond to each of the estimated GPU performance metrics, wherein offloading the processing of the application to the GPU comprises offloading the processing of the application to the GPU in response to a determination that the estimated GPU performance metrics meet the predetermined performance metric thresholds.
  • Example 17 includes the subject matter of any of Examples 12-16, and further including scheduling the processing of the application to a central processing unit (CPU) of the network device in response to a determination that the estimated GPU performance metrics do not meet the predetermined performance metric thresholds.
  • Example 18 includes the subject matter of any of Examples 12-17, and wherein determining utilization information for at least a portion of the system resources of the network device comprises determining at least one of a memory utilization, a command scheduling delay, a cache miss rate, a translation lookaside buffer (TLB) miss, and a page fault.
  • Example 19 includes the subject matter of any of Examples 12-18, and wherein the application comprises a network packet processing application for processing a network packet received by the network device.
  • Example 20 includes the subject matter of any of Examples 12-19, and wherein determining the resource criteria of the application comprises determining at least one of a minimum amount of memory available to store data related to the application, one or more dependencies of the application, and a minimum number of processing cycles to run the application.
  • Example 21 includes the subject matter of any of Examples 12-20, and wherein determining the available GPU resources of the GPU comprises determining at least one of a number of available cores of the GPU, a number of available cycles of the GPU, a number of total applications supported by the GPU, a number of other applications presently running on the GPU, and a present utilization percentage of the GPU.
  • Example 22 includes the subject matter of any of Examples 12-21, and wherein determining the estimated GPU performance metrics comprises determining at least one of a utilization of a plurality of cores of the GPU, a number of other applications presently running on the GPU, a present performance metric for each of the other applications presently running on the GPU, and a frequency rate of the GPU.
  • Example 23 includes a computing device comprising a processor; and a memory having stored therein a plurality of instructions that when executed by the processor cause the computing device to perform the method of any of Examples 12-22.
  • Example 24 includes one or more machine readable storage media comprising a plurality of instructions stored thereon that in response to being executed result in a computing device performing the method of any of Examples 12-22.
  • Example 25 includes a network device to offload processing of a network packet to a graphics processing unit (GPU) of a network device, the network device comprising a resource criteria determination circuitry to determine resource criteria of an application, wherein the resource criteria define a minimum amount of one or more system resources of the network device required to run the application; a performance monitoring circuitry to determine available GPU resources of the GPU of the network device; a GPU admission determination circuitry to determine whether the available GPU resources are sufficient to process the application based on the resource criteria of the application and the available GPU resources; and a GPU performance estimation circuitry to determine, in response to a determination that the available GPU resources are sufficient to process the application, one or more estimated GPU performance metrics based on the resource criteria of the application and the available GPU resources, wherein the estimated GPU performance metrics indicate an estimated level of performance of the GPU if the GPU were to run the application, wherein the GPU admission determination circuitry is further to offload processing of the application to the GPU as a function of the one or more estimated GPU performance metrics.
  • Example 26 includes the subject matter of Example 25, and further including a CPU performance estimation circuitry to determine one or more estimated central processing unit (CPU) performance metrics of a CPU of the network device, wherein the estimated CPU performance metrics are determined based on available CPU resources and are indicative of an estimated level of performance of the CPU during a runtime of the application by the CPU, wherein the GPU admission determination circuitry is further to compare the estimated GPU performance metrics and the estimated CPU performance metrics, and wherein to offload processing of the application to the GPU comprises to offload processing of the application to the GPU in response to a determination that the estimated GPU performance metrics are an improvement relative to the CPU performance metrics.
  • Example 27 includes the subject matter of any of Examples 25 and 26, and wherein the GPU admission determination circuitry is further to run the processing of the application at the CPU in response to a determination that the estimated GPU performance metrics are not an improvement relative to the CPU performance metrics.
  • Example 28 includes the subject matter of any of Examples 25-27, and wherein the performance monitoring circuitry is further to determine utilization information for at least a portion of the system resources of the network device, wherein the utilization information is indicative of an amount at which the at least the portion of the system resources are presently utilized, and wherein to determine the one or more estimated GPU performance metrics is further based on the utilization information.
  • Example 29 includes the subject matter of any of Examples 25-28, and wherein the GPU admission determination circuitry is further to determine whether the estimated GPU performance metrics meet one or more predetermined performance metric thresholds that correspond to at least a portion of the estimated GPU performance metrics, wherein to offload the processing of the application to the GPU comprises to offload the processing of the application to the GPU in response to a determination that the estimated GPU performance metrics meet the predetermined performance metric thresholds.
  • Example 30 includes the subject matter of any of Examples 25-29, and further including a system resource management circuitry to schedule the processing of the application by a central processing unit (CPU) of the network device in response to a determination that the estimated GPU performance metrics do not meet the predetermined performance metric thresholds.
  • Example 31 includes the subject matter of any of Examples 25-30, and wherein to determine utilization information for at least a portion of the system resources of the network device comprises to determine at least one of a memory utilization, a command scheduling delay, a cache miss rate, a translation lookaside buffer (TLB) miss, and a page fault.
  • Example 32 includes the subject matter of any of Examples 25-31, and wherein the application comprises a network packet processing application for processing a network packet received by the network device.
  • Example 33 includes the subject matter of any of Examples 25-32, and wherein to determine the resource criteria of the application comprises to determine at least one of a minimum amount of memory available to store data related to the application, one or more dependencies of the application, and a minimum number of processing cycles to run the application.
  • Example 34 includes the subject matter of any of Examples 25-33, and wherein to determine the available GPU resources of the GPU comprises to determine at least one of a number of available cores of the GPU, a number of available cycles of the GPU, a number of total applications supported by the GPU, a number of other applications presently running on the GPU, and a present utilization percentage of the GPU.
  • Example 35 includes the subject matter of any of Examples 25-34, and wherein to determine the estimated GPU performance metrics comprises to determine at least one of a utilization of a plurality of cores of the GPU, a number of other applications presently running on the GPU, a present performance metric for each of the other applications presently running on the GPU, and a frequency rate of the GPU.
  • Example 36 includes a network device to offload processing of a network packet to a graphics processing unit (GPU) of a network device, the network device comprising means for determining resource criteria of an application, wherein the resource criteria define a minimum amount of one or more system resources of the network device required to run the application; means for determining available GPU resources of the GPU of the network device; means for determining whether the available GPU resources are sufficient to process the application based on the resource criteria of the application and the available GPU resources; means for determining, in response to a determination that the available GPU resources are sufficient to process the application, one or more estimated GPU performance metrics based on the resource criteria of the application and the available GPU resources, wherein the estimated GPU performance metrics indicate an estimated level of performance of the GPU if the GPU were to run the application; and means for offloading processing of the application to the GPU as a function of the one or more estimated GPU performance metrics.
  • Example 37 includes the subject matter of Example 36, and further including means for determining one or more estimated central processing unit (CPU) performance metrics of a CPU of the network device, wherein the estimated CPU performance metrics are determined based on available CPU resources and are indicative of an estimated level of performance of the CPU during a runtime of the application by the CPU; and means for comparing the estimated GPU performance metrics and the estimated CPU performance metrics, wherein the means for offloading processing of the application to the GPU comprises means for offloading processing of the application to the GPU in response to a determination that the estimated GPU performance metrics are an improvement relative to the CPU performance metrics.
  • Example 38 includes the subject matter of any of Examples 36 and 37, and further including means for scheduling the processing of the application at the CPU in response to a determination that the estimated GPU performance metrics are not an improvement relative to the CPU performance metrics.
  • Example 39 includes the subject matter of any of Examples 36-38, and further including means for determining utilization information for at least a portion of the system resources of the network device, wherein the utilization information is indicative of an amount at which the at least the portion of the system resources are presently utilized, and wherein the means for determining the one or more estimated GPU performance metrics is further based on the utilization information.
  • Example 40 includes the subject matter of any of Examples 36-39, and further including means for determining whether the estimated GPU performance metrics meet one or more predetermined performance metric thresholds that correspond to each of the estimated GPU performance metrics, wherein the means for offloading the processing of the application to the GPU comprises means for offloading the processing of the application to the GPU in response to a determination that the estimated GPU performance metrics meet the predetermined performance metric thresholds.
  • Example 41 includes the subject matter of any of Examples 36-40, and further including means for scheduling the processing of the application to a central processing unit (CPU) of the network device in response to a determination that the estimated GPU performance metrics do not meet the predetermined performance metric thresholds.
  • Example 42 includes the subject matter of any of Examples 36-41, and wherein the means for determining utilization information for at least a portion of the system resources of the network device comprises means for determining at least one of a memory utilization, a command scheduling delay, a cache miss rate, a translation lookaside buffer (TLB) miss, and a page fault.
  • Example 43 includes the subject matter of any of Examples 36-42, and wherein the application comprises a network packet processing application for processing a network packet received by the network device.
  • Example 44 includes the subject matter of any of Examples 36-43, and wherein the means for determining the resource criteria of the application comprises means for determining at least one of a minimum amount of memory available to store data related to the application, one or more dependencies of the application, and a minimum number of processing cycles to run the application.
  • Example 45 includes the subject matter of any of Examples 36-44, and wherein the means for determining the available GPU resources of the GPU comprises means for determining at least one of a number of available cores of the GPU, a number of available cycles of the GPU, a number of total applications supported by the GPU, a number of other applications presently running on the GPU, and a present utilization percentage of the GPU.
  • Example 46 includes the subject matter of any of Examples 36-45, and wherein the means for determining the estimated GPU performance metrics comprises means for determining at least one of a utilization of a plurality of cores of the GPU, a number of other applications presently running on the GPU, a present performance metric for each of the other applications presently running on the GPU, and a frequency rate of the GPU.


Abstract

Technologies for offloading an application for processing a network packet to a graphics processing unit (GPU) of a network device are disclosed. The network device is configured to determine resource criteria of the application and available resources of the GPU. The network device is further configured to determine whether the available GPU resources are sufficient to process the application based on the resource criteria of the application and the available GPU resources. Additionally, the network device is configured to determine one or more estimated GPU performance metrics based on the resource criteria of the application and the available GPU resources to determine whether to offload the application to the GPU. Other embodiments are described and claimed.

Description

BACKGROUND
With the technological advancements of server, network, and storage technologies, hardware-based network functions are being transitioned to software-based network functions on standard high-volume servers. To meet the performance requirements, software-based network functions typically require more central processing unit (CPU) cycles than their hardware-based counterparts. Alternatively, general purpose graphics processing units (GPGPUs) may be used for network packet processing workloads. The GPGPU performance of a single network packet processing application (e.g., deep packet inspection (DPI), a firewall, encryption/decryption, layer-3 forwarding, etc.) having exclusive access to a GPGPU is relatively predictable. However, the level of performance becomes more difficult to predict as additional network packet processing applications utilize the GPGPU as an offloading engine or an accelerator. For example, a GPGPU-accelerated application may not be aware of and/or may not be able to communicate with another GPGPU-accelerated application, which can result in inefficient and/or uncoordinated usage of the GPGPU. More specifically, if one GPGPU-accelerated application is fully utilizing resources of the GPGPU, offloading a second GPGPU-accelerated application may result in performance degradation due to resource contention.
BRIEF DESCRIPTION OF THE DRAWINGS
The concepts described herein are illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. Where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements.
FIG. 1 is a simplified block diagram of at least one embodiment of a system that includes a network device for offloading network packet processing to a graphics processing unit (GPU) of the network device;
FIG. 2 is a simplified block diagram of at least one embodiment of the network device of the system of FIG. 1;
FIG. 3 is a simplified block diagram of at least one embodiment of an environment that may be established by the network device of FIG. 2;
FIG. 4 is a simplified block diagram of another embodiment of an environment that may be established by the network device of FIG. 2; and
FIGS. 5 and 6 are a simplified flow diagram of at least one embodiment of a method for offloading network packet processing to the GPU of the network device of FIG. 2.
DETAILED DESCRIPTION OF THE DRAWINGS
While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described herein in detail. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims.
References in the specification to “one embodiment,” “an embodiment,” “an illustrative embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Additionally, it should be appreciated that items included in a list in the form of “at least one of A, B, and C” can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). Similarly, items listed in the form of “at least one of A, B, or C” can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).
The disclosed embodiments may be implemented, in some cases, in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors. A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or other media device).
In the drawings, some structural or method features may be shown in specific arrangements and/or orderings. However, it should be appreciated that such specific arrangements and/or orderings may not be required. Rather, in some embodiments, such features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or method feature in a particular figure is not meant to imply that such feature is required in all embodiments and, in some embodiments, may not be included or may be combined with other features.
Referring now to FIG. 1, in an illustrative embodiment, a system 100 for offloading network packet processing to a graphics processing unit (GPU) includes a computing device 102 and a remote computing device 110 in communication over a network 104 via one or more network devices 106. In use, the network device 106 facilitates the transmission of network packets (e.g., based on workload type, flow information, etc.) between the computing device 102 and the remote computing device 110 over the network 104. For example, the computing device 102 may request data from the remote computing device 110 by sending one or more network packets that indicate the computing device 102 is requesting data from the remote computing device 110. In response to the request, the remote computing device 110 may attempt to transmit response data (e.g., a payload, a message body, etc.) via one or more network packets to the computing device 102 across the network 104.
Typically, the network packets are processed by the network devices 106 prior to being forwarded along. For example, a network device 106 may allocate a number of computing resources for one or more virtual machines (VMs) to perform various network functions or services (e.g., firewall services, network address translation (NAT) services, load-balancing services, deep packet inspection (DPI) services, transmission control protocol (TCP) optimization services, 4G/LTE network services, etc.) on the network packets. Based on the various network functions or services that the network device 106 has allocated, the network device 106 can process each network packet, such as to determine where to route the network packets, whether the network packets should be dropped, etc. To do so, one or more of the VMs may be configured to perform a particular service that can be used to process the network packets.
Each VM may perform the relevant processing of the network packets, based on the service for which it was configured, using a graphics processing unit (GPU) of the network device 106 (see, e.g., the GPU 210 of FIG. 2) or a central processing unit (CPU) of the network device 106 (see, e.g., the CPU 202 of FIG. 2). In use, the network device 106 estimates an impact on a performance metric (e.g., a level of performance) for each of the CPU and the GPU to determine whether to perform the service (i.e., the portion of the processing for which that particular VM is configured) on the network packets at either the CPU or the GPU. In other words, the network device 106 estimates how the CPU might perform if the processing were to be performed on the CPU and how the GPU might perform if the processing were to be performed on the GPU. Based on the performance metric estimations, the network device 106 can then determine whether to perform the service on the CPU or the GPU.
The computing device 102 may be embodied as any type of computation or computing device capable of performing the functions described herein, including, without limitation, a computer, a desktop computer, a smartphone, a workstation, a laptop computer, a notebook computer, a tablet computer, a mobile computing device, a wearable computing device, a network appliance, a web appliance, a distributed computing system, a processor-based system, and/or a consumer electronic device. Similarly, the remote computing device 110 may be embodied as any type of computation or computing device capable of performing the functions described herein, including, without limitation, a computer, a desktop computer, a smartphone, a workstation, a laptop computer, a notebook computer, a tablet computer, a mobile computing device, a wearable computing device, a network appliance, a web appliance, a distributed computing system, a processor-based system, and/or a consumer electronic device. Each of the computing device 102 and the remote computing device 110 may include components commonly found in a computing device such as a processor, memory, input/output subsystem, data storage, communication circuitry, etc.
The network 104 may be embodied as any type of wired or wireless communication network, including cellular networks (e.g., Global System for Mobile Communications (GSM), 3G, Long Term Evolution (LTE), Worldwide Interoperability for Microwave Access (WiMAX), etc.), digital subscriber line (DSL) networks, cable networks (e.g., coaxial networks, fiber networks, etc.), telephony networks, local area networks (LANs) or wide area networks (WANs), global networks (e.g., the Internet), or any combination thereof. Additionally, the network 104 may include any number of network devices 106 as needed to facilitate communication between the computing device 102 and the remote computing device 110.
In some embodiments, the network device 106 may additionally be connected to a network controller 108. The network controller 108 may be embodied as, or otherwise include, any type of hardware, software, and/or firmware capable of providing a platform for performing the functions described herein, such a computing device, a multiprocessor system, a server (e.g., stand-alone, rack-mounted, blade, etc.), a network appliance, a compute device, etc. In some embodiments, the network controller 108 may be configured to store and/or maintain topology information of the network 104 (i.e., the arrangement and interconnectivity of the network devices 106) and/or network packet management information (e.g., network packet/flow management/processing information, policies corresponding to network packet types/flows, etc.). For example, the network controller 108 may be configured to function as a software-defined networking (SDN) controller, a network functions virtualization (NFV) management and orchestration (MANO), etc. Accordingly, the network controller 108 may send (e.g., transmit, etc.) network flow information (e.g., network packet/flow policies) to the network devices 106 capable of operating in an SDN environment and/or a NFV environment.
The network device 106 may be embodied as any type of computing device capable of facilitating wired and/or wireless network communications between the computing device 102 and the remote computing device 110. For example, the network device 106 may be embodied as a computing device, an access point, a router, a switch, a network hub, a storage device, a compute device, a multiprocessor system, a server (e.g., stand-alone, rack-mounted, blade, etc.), a network appliance (e.g., physical or virtual), etc. As shown in FIG. 2, an illustrative network device 106 includes a central processing unit (CPU) 202, an input/output (I/O) subsystem 204, a main memory 206, a GPU memory 208, a GPU 210, a data storage device 212, and communication circuitry 214 that includes a network interface card (NIC) 216. Of course, in other embodiments, the network device 106 may include other or additional components, such as those commonly found in a network device (e.g., virtualization services, drivers, operating systems, schedulers, etc.). Additionally, in some embodiments, one or more of the illustrative components may be incorporated in, or otherwise form a portion of, another component. For example, in some embodiments, the main memory 206, or portions thereof, may be incorporated in the CPU 202 and/or the GPU memory 208 may be incorporated in the GPU 210. Additionally or alternatively, in some embodiments, the GPU memory 208, or portions thereof, may be a part of the main memory 206 (e.g., integrated graphics such as Intel® Processor Graphics).
The CPU 202 may be embodied as any type of processor capable of performing the functions described herein. The CPU 202 may be embodied as a single or multi-core processor(s), digital signal processor, microcontroller, or other processor or processing/controlling circuit. The I/O subsystem 204 may be embodied as circuitry and/or components to facilitate input/output operations with the CPU 202, the main memory 206, the GPU 210, the GPU memory 208, and other components of the network device 106. For example, the I/O subsystem 204 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, integrated sensor hubs, firmware devices, communication links (i.e., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.) and/or other components and subsystems to facilitate the input/output operations. In some embodiments, the I/O subsystem 204 may form a portion of a system-on-a-chip (SoC) and be incorporated, along with the CPU 202, the GPU 210, the GPU memory 208, the main memory 206, and other components of the network device 106, on a single integrated circuit chip.
The GPU 210 illustratively includes an array of processor cores or parallel processors, each of which can execute a number of parallel and concurrent threads to handle specific types of GPU tasks. For example, in some embodiments, the processor cores of the GPU 210 may be configured to individually handle 3D rendering tasks, blitter (e.g., 2D graphics), video, and video encoding/decoding tasks, by providing electronic circuitry that can perform mathematical operations rapidly using extensive parallelism and many concurrent threads. Additionally or alternatively, the GPU 210 is generally capable of parallelizing network packet processing (e.g., internet protocol (IP) forwarding, hashing, pattern matching, etc.) via the processor cores of the GPU 210. Accordingly, the GPU 210 can be an alternative to the CPU 202 for performing at least a portion of the processing of the network packet.
Using the GPU 210 can free up resources of the CPU 202 (e.g., memory, cache, processor cores, communication bus bandwidth, etc.), which can be dedicated to other tasks, such as application performance management. For ease of discussion, “graphics processing unit” or “GPU” may be used herein to refer to, among other things, a graphics processing unit, a graphics accelerator, or other type of specialized electronic circuit or device, such as a general purpose GPU (GPGPU) or any other device or circuit that is configured to be used by the network device 106 to accelerate network packet processing tasks and/or perform other parallel computing operations that would benefit from accelerated processing, such as network traffic monitoring. It should be appreciated that, in some embodiments, the GPU 210 may be embodied as a peripheral device (e.g., on a discrete graphics card), or may be located on the CPU 202 motherboard or on the CPU 202 die.
The GPU memory 208 (e.g., integrated graphics such as Intel® Processor Graphics) and the main memory 206 may be embodied as any type of volatile or non-volatile memory or data storage capable of performing the functions described herein. In operation, the main memory 206 may store various data and software used during operation of the network device 106, such as operating systems, applications, programs, libraries, and drivers. For example, portions of the main memory 206 may at least temporarily store command buffers and GPU commands that are created by the CPU 202, and portions of the GPU memory 208 may at least temporarily store the GPU commands received from the main memory 206 by, e.g., direct memory access (DMA). The GPU memory 208 is communicatively coupled to the GPU 210, and the main memory 206 is similarly communicatively coupled to the CPU 202 via the I/O subsystem 204. As described previously, in some embodiments, the GPU memory 208, or portions thereof, may be a part of the main memory 206, and both the CPU 202 and the GPU 210 may have access to the GPU memory 208.
The data storage device 212 may be embodied as any type of device or devices configured for short-term or long-term storage of data such as, for example, memory devices and circuits, memory cards, hard disk drives, solid-state drives, or other data storage devices. The data storage device 212 may include a system partition that stores data and firmware code for the network device 106. The data storage device 212 may also include an operating system partition that stores data files and executables for an operating system of the network device 106.
The communication circuitry 214 may be embodied as any communication circuit, device, or collection thereof, capable of enabling communications over the network 104 between the network device 106 and the computing device 102, another network device 106, the network controller 108, and/or the remote computing device 110. The communication circuitry 214 may be configured to use any one or more communication technology (e.g., wired or wireless communications) and associated protocols (e.g., Ethernet, Bluetooth®, Wi-Fi®, WiMAX, etc.) to effect such communication.
The communication circuitry 214 additionally includes a network interface card (NIC) 216. The NIC 216 may connect the computing device 102, the remote computing device 110, and/or another network device 106 to one of the network devices 106. The NIC 216 may be embodied as one or more add-in-boards, daughtercards, network interface cards, controller chips, chipsets, or other devices that may be used by the network device 106. For example, the NIC 216 may be embodied as an expansion card coupled to the I/O subsystem 204 over an expansion bus, such as PCI Express.
Referring now to FIG. 3, in an embodiment, the network device 106 establishes an environment 300 during operation. The illustrative environment 300 includes a network communication module 310, a performance monitoring module 320, a GPU admission control module 330, and a system resource management module 340. Each of the modules, logic, and other components of the environment 300 may be embodied as hardware, software, firmware, or a combination thereof. For example, each of the modules, logic, and other components of the environment 300 may form a portion of, or otherwise be established by, the CPU 202 or other hardware components of the network device 106. As such, in some embodiments, one or more of the modules of the environment 300 may be embodied as circuitry or collection of electrical devices (e.g., network communication circuitry 310, GPU admission control circuitry 330, performance monitoring circuitry 320, system resource management circuitry 340, etc.). In the illustrative environment 300, the network device 106 includes system resource utilization data 302, application performance data 304, performance estimation data 306, and scheduling policy data 308, each of which may be accessed by the various modules and/or sub-modules of the network device 106. It should be appreciated that the network device 106 may include other components, sub-components, modules, sub-modules, and/or devices commonly found in a network device, which are not illustrated in FIG. 3 for clarity of the description.
The network communication module 310 is configured to facilitate inbound and outbound network communications (e.g., network traffic, network packets, network flows, etc.) to and from the network device 106, respectively. To do so, the network communication module 310 is configured to receive and process network packets from one computing device (e.g., the computing device 102, another network device 106, the remote computing device 110) and to prepare and transmit network packets to another computing device (e.g., the computing device 102, another network device 106, the remote computing device 110). Accordingly, in some embodiments, at least a portion of the functionality of the network communication module 310 may be performed by the communication circuitry 214, and more specifically by the NIC 216.
The performance monitoring module 320 is configured to monitor one or more performance metrics of various physical and/or virtual resources of the network device 106. To do so, the illustrative performance monitoring module 320 includes a system resource performance monitoring module 322 and an application performance monitoring module 324. The system resource performance monitoring module 322 is configured to monitor various system resource performance metrics, or statistics, of the network device 106. The system resource performance metrics may include any data indicative of a utilization or other statistic of one or more physical or virtual components or available resources of the network device 106. For example, the system resource performance metrics may include such performance metrics as a CPU utilization, a GPU utilization, a memory utilization, cache hits/misses, a GPU thread occupancy, a cache miss rate, a translation lookaside buffer (TLB) miss, a page fault, etc. In some embodiments, to monitor various system resources of the network device 106, the system resource performance monitoring module 322 may be configured to periodically read hardware and/or software (e.g., an operating system (OS)) performance counters. Additionally, in some embodiments, the system resource performance metrics may be stored in the system resource utilization data 302.
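By way of illustration only, the periodic counter sampling performed by the system resource performance monitoring module 322 might resemble the following minimal sketch (Python; it assumes the psutil library is available, and read_gpu_utilization() is a hypothetical, platform-specific helper rather than an interface described herein):

    import time
    import psutil

    def read_gpu_utilization():
        # Hypothetical helper: a real implementation would query a GPU driver
        # or OS performance counter; a fixed placeholder stands in here.
        return 0.0

    def sample_system_metrics(interval_s=1.0):
        # Periodically read hardware/OS counters, as the monitoring module might.
        while True:
            yield {
                "cpu_utilization": psutil.cpu_percent(),                # percent
                "memory_utilization": psutil.virtual_memory().percent,  # percent
                "gpu_utilization": read_gpu_utilization(),              # percent
            }
            time.sleep(interval_s)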
The application performance monitoring module 324 is configured to monitor application performance metrics, or statistics, of the applications presently running on the network device 106. Such application performance metrics may include any data indicative of the operation or related performance of an application executed by the network device 106. For example, the application performance metrics may include such performance metrics as a throughput level, a cache usage, a memory usage, a packet processing delay, a number of transmitted/received packets, a packet loss/drop count or ratio, a latency level, a power consumption level, etc. In some embodiments, the application performance metrics may be stored in the application performance data 304. To monitor the application performance metrics, the application performance monitoring module 324 may be configured to interface with various VMs (e.g., of a service function chain) and/or applications capable of being executed by the network device 106, such as those applications configured to perform at least a portion of the processing of a received network packet.
In some embodiments, to monitor the application performance metrics of the applications presently running on the network device 106, the application performance monitoring module 324 may be configured to periodically read hardware and/or software (e.g., a virtual switch) performance counters. For example, in some embodiments, a shim layer between application programming interface (API) calls and a device driver can intercept the API calls. In another example, specially defined APIs may be used between the application and the application performance monitoring module 324.
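A minimal sketch of such a shim layer is shown below (Python; submit_gpu_command is a hypothetical driver entry point, not an API defined by this disclosure). Each intercepted call is timed and recorded for use by the application performance monitoring module 324:

    import functools
    import time

    call_stats = []  # (api_name, duration_in_seconds) records for the monitor

    def monitored(api_func):
        # Wrap an API call so its latency is recorded transparently.
        @functools.wraps(api_func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = api_func(*args, **kwargs)
            call_stats.append((api_func.__name__, time.perf_counter() - start))
            return result
        return wrapper

    @monitored
    def submit_gpu_command(command):
        # Hypothetical driver call; a real shim would forward to the device driver.
        return command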
The GPU admission control module 330 is configured to determine whether to admit a network packet processing application (e.g., firewall services, NAT services, load-balancing services, DPI services, TCP optimization services, 4G/LTE network services, etc.) to be scheduled for the GPU 210. To do so, the illustrative GPU admission control module 330 includes a resource criteria determination module 332, a GPU performance estimation module 334, a CPU performance estimation module 336, and a GPU admission determination module 338.
The resource criteria determination module 332 is configured to determine system resources required (i.e., resource criteria) to execute (i.e., run) an application (e.g., the network packet processing application). The resource criteria include any data that defines a performance requirement, such as a maximum latency, a minimum throughput, a minimum amount of one or more system resources of the network device required to run the application, an amount of available processing power, an amount of available memory, etc.
The GPU performance estimation module 334 is configured to determine one or more estimated performance metrics of the GPU 210 (i.e., estimated GPU performance metrics) based on the present state of the system resources of the network device 106 available to process a network packet and determine an impact on the performance metric if the application were to be scheduled for and processed by the GPU 210. To do so, the GPU performance estimation module 334 may receive or access, for example, the performance data generated by the system resource performance monitoring module 322 and/or the application performance monitoring module 324 that relate to the GPU 210. That is, the estimated GPU performance metrics may include any data indicative of an estimated utilization, operation, or other performance level metric of the GPU 210, if the GPU 210 were scheduled to run the application, taking into account a present workload of the GPU 210.
The CPU performance estimation module 336 is configured to estimate one or more estimated performance metrics of the CPU 202 (i.e., estimated CPU performance metrics) based on the present state of the system resources of the network device 106 available to process the network packet and determine an impact on the performance metric if the application were to be scheduled for and processed by the CPU 202. To do so, the CPU performance estimation module 336 may receive or access, for example, the performance data generated by the system resource performance monitoring module 322 and/or the application performance monitoring module 324 that relate to the CPU 202. That is, the estimated CPU performance metrics may include any data indicative of an estimated utilization, operation, or other performance level metric of the CPU 202, if the CPU 202 were scheduled to run the application, taking into account a present workload of the CPU 202.
The GPU admission determination module 338 is configured to determine whether a sufficient level of GPU resources exist to meet the system resource demand of the application (i.e., the resource criteria). To do so, the GPU admission determination module 338 may be configured to retrieve present GPU utilization statistics and compare the present GPU utilization statistics to the resource criteria.
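As a minimal sketch (the field names are illustrative assumptions, not terms of this disclosure), the sufficiency test might be expressed as:

    def gpu_resources_sufficient(gpu_stats, criteria):
        # Compare present GPU utilization statistics against the application's
        # resource criteria; all field names are illustrative.
        return (gpu_stats["available_cores"] >= criteria["min_cores"]
                and gpu_stats["available_memory_mb"] >= criteria["min_memory_mb"]
                and gpu_stats["running_apps"] < gpu_stats["max_apps"])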
The GPU admission determination module 338 is further configured to analyze the resource criteria of the application (e.g., as determined by resource criteria determination module 332) and the estimated performance metrics (e.g., as determined by the GPU performance estimation module 334 and/or the CPU performance estimation module 336) to determine whether to admit (i.e., run, schedule, etc.) the application for the GPU 210. In some embodiments, additional and/or alternative utilization statistics may be used to determine whether to run the application on the GPU 210 or the CPU 202, such as a number of applications running on the GPU, a maximum number of applications that may be run on the GPU, a maximum number of GPU cores to put in use at the same time, etc.
Additionally or alternatively, in some embodiments, one or more of the performance metrics may be weighted such that a particular performance metric is given more weight than another performance metric when determining whether to run the network processing application on the GPU 210 or the CPU 202. In some embodiments, additional and/or alternative analysis may be performed to determine whether to run the application on the GPU 210 or the CPU 202, such as a performance history of running like applications on the GPU 210, for example. In some embodiments, one or more of the estimated performance metrics determined by the GPU performance estimation module 334 and/or the CPU performance estimation module 336 may be stored in the performance estimation data 306.
The system resource management module 340 is configured to manage the allocation of system resources (e.g., computing resources, storage resources, network resources, etc.) of the network device 106 after receiving the network packet or performing another network processing function on the packet (e.g., in a service function chain). To do so, the system resource management module 340 may be capable of instantiating (i.e., creating) VMs, suspending VMs, shutting down (i.e., closing) VMs, and redirecting network traffic to either the GPU 210 or the CPU 202 for more efficient processing (e.g., faster processing, a more efficient level of resource usage, etc.). In some embodiments, the “more efficient” processor may be determined by a GPU performance metric that is higher or lower than a CPU performance metric, depending on the particular performance metric being compared. For example, an improved throughput metric may be of a higher value while an improved latency may be of a lower value. Additionally, in some embodiments, the system resource allocation may be based on one or more scheduling policies including instructions on which network packets are permissible to schedule to the GPU 210. In such embodiments, the scheduling policies may be received from the network controller 108, for example, and/or stored in the scheduling policy data 308.
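A direction-aware, weighted comparison of this kind might be sketched as follows (the weights, metric names, and normalization are illustrative assumptions; a positive score favors the GPU):

    HIGHER_IS_BETTER = {"throughput": True, "latency": False, "power": False}
    WEIGHTS = {"throughput": 0.5, "latency": 0.3, "power": 0.2}

    def weighted_improvement(gpu_est, cpu_est):
        # Positive result: the GPU estimate is the more efficient choice.
        score = 0.0
        for name, weight in WEIGHTS.items():
            delta = gpu_est[name] - cpu_est[name]
            if not HIGHER_IS_BETTER[name]:
                delta = -delta  # lower latency/power counts as an improvement
            score += weight * delta / max(abs(cpu_est[name]), 1e-9)  # normalize
        return score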
The illustrative system resource management module 340 includes a GPU scheduler module 342 and a CPU scheduler module 344. The GPU scheduler module 342 is configured to schedule processes (e.g., the network processing application) for execution by the GPU 210, such as may be triggered by receiving GPU commands issued by the GPU admission control module 330 (e.g., the GPU admission determination module 338 upon a determination that the GPU is to execute the network packet processing application). The GPU scheduler module 342 may select a scheduling policy from a number of possible scheduling policies (e.g., from the scheduling policy data 308), based on one or more attributes of the GPU commands, GPU command buffer dependencies, and/or other decision criteria, and schedules the GPU commands according to the selected scheduling policy. In use, the GPU scheduler module 342 communicates with the applications in the VMs to control the submission of GPU commands to the GPU 210.
It should be appreciated that, in embodiments wherein the application is scheduled to be run on the GPU 210, upon completion of the execution of the application, results of the execution (e.g., performance metrics of the GPU 210) may be stored in the application performance data 304 for future reference, such as by the performance monitoring module 320.
Similar to the GPU scheduler module 342, the CPU scheduler module 344 is configured to schedule processes (e.g., the network processing application) for execution by the CPU 202, such as after an event (e.g., running an application on an instantiated VM to process or otherwise perform a service on a network packet) or an interrupt. It should be appreciated that, in some embodiments, the CPU 202 may have one or more cores (e.g., a multi-core processor). In such embodiments, the CPU scheduler module 344 may be further configured to schedule the process to a particular core of the CPU 202 based on available system resources of the network device 106.
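For illustration only, pinning a process to the least-loaded core might be sketched as follows (Linux-specific: os.sched_setaffinity is not available on every platform, and the psutil library is assumed to be installed):

    import os
    import psutil

    def schedule_on_least_loaded_core(pid=0):
        # Sample per-core utilization and pin the process (pid 0 = the calling
        # process) to the least utilized core.
        per_core = psutil.cpu_percent(interval=0.1, percpu=True)
        target = per_core.index(min(per_core))
        os.sched_setaffinity(pid, {target})
        return target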
Referring now to FIG. 4, an operational environment 400 of the network device 106 is shown. The illustrative operational environment 400 includes the performance monitoring module 320, the GPU admission control module 330, and the system resource management module 340 of FIG. 3, as well as a virtual switch 410. In the illustrative operational environment 400, the network device 106 is executing a first virtual machine, which is designated as VM(1) 402, and a second virtual machine, which is designated as VM(N) 406 (i.e., the “Nth” virtual machine running on the network device 106, wherein “N” is a positive integer and designates one or more additional virtual machines running on the network device 106). Each of the VM(1) 402 and the VM(N) 406 includes a corresponding application, a first application 404 and an “Nth” application 408, respectively. It should be appreciated that one or more of the VMs 402, 406 may run more than one application. The applications 404, 408 may be any type of service or other network processing function presently being performed via the VMs 402, 406 on the network packets, such as a firewall, NAT, load-balancing, DPI, TCP optimization, etc. In some embodiments, the VMs 402, 406 may be configured to function as a service function chain comprised of a number of VMs to perform certain services on the network packets based on various factors, such as type, flow, workload, destination, etc.
The virtual switch 410 may be configured to manage the internal data transfer of network traffic related information. In some embodiments, the performance monitoring module 320 and/or the GPU admission control module 330 may receive mirrored and/or duplicated network packets that are to be processed internally (i.e., the applications 404, 408 running on the local VMs 402, 406). For example, the GPU admission control module 330 may receive a mirrored or duplicated network packet to determine an estimation of the impact processing the network packet may have on a network packet processing application performance metric and/or a system resource performance metric. Accordingly, the virtual switch 410 may be configured to facilitate the transfer of the mirrored and/or duplicated network traffic between the VMs 402, 406 and the performance monitoring module 320 and/or the GPU admission control module 330.
The illustrative operational environment 400 additionally includes system resources 412 that include the various components of the network device 106, such as the CPU 202, the main memory 206, the GPU memory 208, the GPU 210, and the data storage 212 of FIG. 2. As shown, the system resource management module 340 is communicatively coupled to the system resources 412, such that the system resource management module 340 can manage the system resource and facilitate the transfer of utilization data to the performance monitoring module 320 and the GPU admission control module 330.
Referring now to FIGS. 5 and 6, in use, the network device 106 may execute a method 500 for offloading network packet processing to a GPU (e.g., the GPU 210) of the network device 106. The method 500 begins with block 502, in which the network device 106 determines whether a GPU offload request was received. In other words, the network device 106 determines whether a network packet processing application is to be offloaded to the GPU 210 (i.e., processed by the GPU 210 rather than the CPU 202). If the network device 106 determines the GPU offload request was not received, the method 500 loops back to block 502 to continue monitoring for the GPU offload request. Otherwise, if the network device 106 determines the GPU offload request was received, the method 500 advances to block 504.
In block 504, the network device 106 determines resource criteria for the network packet processing application to be offloaded to the GPU 210 (i.e., that corresponds to the GPU offload request received at block 502). To do so, in block 506, the network device 106 determines system resources required to run the application. As described previously, the system resources required to run the application may include any data indicative of a minimum or maximum threshold of a system resource required to run the application, such as a minimum amount of memory to allocate, a minimum level of compute power, etc. Additionally, in block 508, the network device 106 determines performance requirements for running the application. As described previously, the performance requirements for running the application may include any data indicative of a minimum or maximum threshold of a performance level when running the application, such as a minimum throughput, a maximum latency, a maximum power consumption level, etc.
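The resource criteria gathered in blocks 506 and 508 might be represented as follows (a minimal sketch; the field names and units are illustrative assumptions rather than structures defined by this disclosure):

    from dataclasses import dataclass

    @dataclass
    class ResourceCriteria:
        min_memory_mb: int          # minimum memory to allocate (block 506)
        min_compute_cycles: int     # minimum processing cycles to run (block 506)
        min_throughput_mbps: float  # performance floor (block 508)
        max_latency_ms: float       # performance ceiling (block 508)
        max_power_watts: float      # power consumption ceiling (block 508)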
In block 510, the network device 106 determines utilization information of the system resources of the network device 106. To do so, in block 512, the network device 106 determines one or more system resource performance metrics. As described previously, the system resource performance metrics may include any data indicative of a utilization or other statistic of one or more physical or virtual components or available resources of the network device 106. The system resource performance metrics may include CPU utilization, GPU utilization, memory utilization, cache hits/misses, GPU thread occupancy, cache miss rate, TLB misses, page faults, etc. From the system resource performance metrics, in block 514, the network device 106 determines available GPU resources, such as a number of available GPU cycles, a number of additional applications that the GPU can run, a GPU frequency, a number of available GPU cores, an amount of available GPU memory 208, etc. Additionally, in block 516, the network device 106 determines available CPU (e.g., the CPU 202 of FIG. 2) resources, such as a number of available cores of the CPU 202, an amount of available cache, a number of processes that can be supported by the CPU 202, an amount of available power of the CPU 202, an amount of available main memory 206, etc. In some embodiments, to determine the available GPU 210 resources in block 514 and the available CPU 202 resources in block 516, the network device 106 may read hardware and/or software performance counters of the network device 106.
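Deriving the available GPU resources of block 514 from raw counters might look like the following sketch (the counter names are illustrative stand-ins for the hardware and/or software performance counters mentioned above):

    def available_gpu_resources(counters):
        # Translate raw counter readings into the availability figures used
        # by the admission decision; all field names are illustrative.
        free_cores = counters["gpu_total_cores"] - counters["gpu_busy_cores"]
        free_mem = counters["gpu_mem_total_mb"] - counters["gpu_mem_used_mb"]
        utilization = 100.0 * counters["gpu_busy_cores"] / counters["gpu_total_cores"]
        return {
            "available_cores": free_cores,
            "available_memory_mb": free_mem,
            "utilization_pct": utilization,
        }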
In block 518, the network device 106 determines one or more application performance metrics of the network packet processing application that corresponds to the GPU offload request received at block 502. As described previously, the application performance metrics may include a throughput level, a cache usage, a memory usage, a packet processing delay duration, a number of transmitted/received packets, a packet loss/drop count or ratio, a latency level, a power consumption level, etc.
In block 520, the network device 106 determines whether sufficient GPU 210 resources are available to process the application based on the available GPU resources determined in block 514 and the resource criteria for running the application determined in block 504. For example, the network device 106 may determine whether sufficient GPU 210 resources are available to process the application based on a current utilization of the GPU 210, a number of applications presently running on the GPU 210, etc. If not, the method 500 advances to block 522, wherein the network device 106 schedules the application for processing by the CPU 202 before the method 500 returns to block 502 to determine whether another GPU offload request was received. Otherwise, if the network device 106 determines there are sufficient GPU 210 resources available to process the application, the method 500 advances to block 524.
In block 524, the network device 106 determines one or more estimated processing performance metrics for running the network packet processing application. To do so, in block 526, the network device 106 determines estimated GPU performance metrics for running the application on the GPU 210. Additionally, in block 528, the network device 106 determines estimated CPU performance metrics for running the application on the CPU 202. The estimated GPU performance metrics determined at block 526 and/or the estimated CPU performance metrics determined at block 528 may include a throughput, a latency, a power consumption, a number of used/available processing cores during running of the application, etc.
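One simple way such an estimate might be formed, purely for illustration, is a linear model scaled by available cores and operating frequency (a real estimator would be calibrated against measured application performance; the parameter names are assumptions):

    def estimate_gpu_throughput_mbps(baseline_mbps_per_core, available_cores,
                                     gpu_freq_ghz, reference_freq_ghz=1.0):
        # Illustrative model for block 526: estimated throughput scales with
        # the number of available cores and the ratio of current to reference
        # operating frequency.
        return (baseline_mbps_per_core * available_cores
                * (gpu_freq_ghz / reference_freq_ghz))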
In block 530, the network device 106 analyzes the estimated processing performance metrics determined in block 524. To do so, in block 534, the network device 106 compares one or more of the estimated GPU performance metrics to one or more corresponding performance requirements of the application. Additionally, in block 532, the network device 106 compares one or more of the estimated GPU performance metrics against the estimated CPU performance metrics.
In block 536, the network device 106 determines whether to offload the application to the GPU 210 based on at least a portion of the estimated GPU performance metrics, the estimated CPU performance metrics, and the system resource performance metrics. For example, in an embodiment wherein the application has a threshold application performance requirement, such as a maximum latency requirement, one of the estimated GPU performance metrics may be an estimated latency associated with the GPU 210 processing the application. Accordingly, in such an embodiment, if the estimated latency does not meet the maximum latency requirement of the application (i.e., is not less than the maximum latency), the network device 106 may determine that the application should be processed by the CPU 202 rather than offloaded to the GPU 210 for execution. It should be appreciated that "meeting" a threshold application performance requirement may mean being greater than or less than the threshold, depending on the particular requirement being compared (e.g., less than a maximum latency, greater than a minimum throughput).
It should be further appreciated that, in an embodiment wherein the estimated GPU performance metrics meet the threshold application performance requirement but are not determined to be an improvement relative to the estimated CPU performance metrics, the network device 106 may still determine to offload the application to the GPU 210. For example, if offloading the application would not impact performance of the GPU 210 to the point that other applications presently running on the GPU 210 would no longer meet their respective threshold application performance requirements, freeing up the CPU 202 to perform other tasks may be more beneficial to the network device 106 than scheduling based solely on whether the GPU 210 is estimated to outperform the CPU 202.
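Putting blocks 530-536 together, one possible decision rule is sketched below, reusing the `PerfEstimate` record from the previous sketch. This is an assumed formulation, not a prescribed one; note the final branch, which captures the point above that the GPU may still be selected even when it is not estimated to outperform the CPU.

```python
def should_offload_to_gpu(gpu_est: "PerfEstimate",
                          cpu_est: "PerfEstimate",
                          max_latency_us: float,
                          gpu_neighbors_ok: bool) -> bool:
    """Blocks 530-536 expressed as a single predicate."""
    # Block 534: the GPU estimate must meet the application's threshold
    # requirement (here, a maximum latency requirement).
    if gpu_est.latency_us >= max_latency_us:
        return False  # block 522: schedule on the CPU instead
    # Block 532: compare the GPU estimate against the CPU estimate.
    if gpu_est.throughput_pps >= cpu_est.throughput_pps:
        return True   # GPU estimated to outperform the CPU
    # GPU not strictly better, but offloading frees the CPU for other
    # tasks, provided applications already on the GPU keep meeting
    # their own requirements.
    return gpu_neighbors_ok
```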
If the network device 106 determines not to offload the application to the GPU 210, the method 500 branches to block 522, wherein the network device 106 schedules the application for processing by the CPU 202 before the method 500 returns to block 502 to determine whether another GPU offload request was received. Otherwise, if the network device 106 determines to offload the application to the GPU 210, the method 500 advances to block 538, wherein the network device 106 schedules the application for the GPU 210 (i.e., provides an indication to a scheduler of the GPU 210 to schedule the application for processing by the GPU 210). From block 538, the method 500 loops back to block 502 to determine whether another GPU offload request was received.
EXAMPLES
Illustrative examples of the technologies disclosed herein are provided below. An embodiment of the technologies may include any one or more, and any combination of, the examples described below.
Example 1 includes a network device to offload processing of a network packet to a graphics processing unit (GPU) of a network device, the network device comprising one or more processors; and one or more memory devices having stored therein a plurality of instructions that, when executed by the one or more processors, cause the network device to determine resource criteria of an application, wherein the resource criteria define a minimum amount of one or more system resources of the network device required to run the application; determine available GPU resources of the GPU of the network device; determine whether the available GPU resources are sufficient to process the application based on the resource criteria of the application and the available GPU resources; determine, in response to a determination that the available GPU resources are sufficient to process the application, one or more estimated GPU performance metrics based on the resource criteria of the application and the available GPU resources, wherein the estimated GPU performance metrics indicate an estimated level of performance of the GPU if the GPU were to run the application; and offload processing of the application to the GPU as a function of the one or more estimated GPU performance metrics.
Example 2 includes the subject matter of Example 1, and wherein the plurality of instructions, when executed by the one or more processors, further cause the network device to determine one or more estimated central processing unit (CPU) performance metrics of a CPU of the network device, wherein the estimated CPU performance metrics are determined based on available CPU resources and are indicative of an estimated level of performance of the CPU during a runtime of the application by the CPU; and compare the estimated GPU performance metrics and the estimated CPU performance metrics, and wherein to offload processing of the application to the GPU comprises to offload processing of the application to the GPU in response to a determination that the estimated GPU performance metrics are an improvement relative to the CPU performance metrics.
Example 3 includes the subject matter of any of Examples 1 and 2, and wherein the plurality of instructions, when executed by the one or more processors, further cause the network device to run the processing of the application at the CPU in response to a determination that the estimated GPU performance metrics are not an improvement relative to the CPU performance metrics.
Example 4 includes the subject matter of any of Examples 1-3, and wherein the plurality of instructions, when executed by the one or more processors, further cause the network device to determine utilization information for at least a portion of the system resources of the network device, wherein the utilization information is indicative of an amount at which the at least the portion of the system resources are presently utilized, and wherein to determine the one or more estimated GPU performance metrics is further based on the utilization information.
Example 5 includes the subject matter of any of Examples 1-4, and wherein the plurality of instructions, when executed by the one or more processors, further cause the network device to determine whether the estimated GPU performance metrics meet one or more predetermined performance metric thresholds that correspond to at least a portion of the estimated GPU performance metrics, wherein to offload the processing of the application to the GPU comprises to offload the processing of the application to the GPU in response to a determination that the estimated GPU performance metrics meet the predetermined performance metric thresholds.
Example 6 includes the subject matter of any of Examples 1-5, and wherein the plurality of instructions, when executed by the one or more processors, further cause the network device to schedule the processing of the application by a central processing unit (CPU) of the network device in response to a determination that the estimated GPU performance metrics do not meet the predetermined performance metric thresholds.
Example 7 includes the subject matter of any of Examples 1-6, and wherein to determine utilization information for at least a portion of the system resources of the network device comprises to determine at least one of a memory utilization, a command scheduling delay, a cache miss rate, a translation lookaside buffer (TLB) miss, and a page fault.
Example 8 includes the subject matter of any of Examples 1-7, and wherein the application comprises a network packet processing application for processing a network packet received by the network device.
Example 9 includes the subject matter of any of Examples 1-8, and wherein to determine the resource criteria of the application comprises to determine at least one of a minimum amount of memory available to store data related to the application, one or more dependencies of the application, and a minimum number of processing cycles to run the application.
Example 10 includes the subject matter of any of Examples 1-9, and wherein to determine the available GPU resources of the GPU comprises to determine at least one of a number of available cores of the GPU, a number of available cycles of the GPU, a number of total applications supported by the GPU, a number of other applications presently running on the GPU, and a present utilization percentage of the GPU.
Example 11 includes the subject matter of any of Examples 1-10, and wherein to determine the estimated GPU performance metrics comprises to determine at least one of a utilization of a plurality of cores of the GPU, a number of other applications presently running on the GPU, a present performance metric for each of the other applications presently running on the GPU, and a frequency rate of the GPU.
Example 12 includes a method for offloading processing of a network packet to a graphics processing unit (GPU) of a network device, the method comprising determining, by the network device, resource criteria of an application, wherein the resource criteria define a minimum amount of one or more system resources of the network device required to run the application; determining, by the network device, available GPU resources of the GPU of the network device; determining, by the network device, whether the available GPU resources are sufficient to process the application based on the resource criteria of the application and the available GPU resources; determining, by the network device and in response to a determination that the available GPU resources are sufficient to process the application, one or more estimated GPU performance metrics based on the resource criteria of the application and the available GPU resources, wherein the estimated GPU performance metrics indicate an estimated level of performance of the GPU if the GPU were to run the application; and offloading, by the network device, processing of the application to the GPU as a function of the one or more estimated GPU performance metrics.
Example 13 includes the subject matter of Example 12, and further including determining, by the network device, one or more estimated central processing unit (CPU) performance metrics of a CPU of the network device, wherein the estimated CPU performance metrics are determined based on available CPU resources and are indicative of an estimated level of performance of the CPU during a runtime of the application by the CPU; and comparing, by the network device, the estimated GPU performance metrics and the estimated CPU performance metrics, wherein offloading processing of the application to the GPU comprises offloading processing of the application to the GPU in response to a determination that the estimated GPU performance metrics are an improvement relative to the CPU performance metrics.
Example 14 includes the subject matter of any of Examples 12 and 13, and further including scheduling the processing of the application at the CPU in response to a determination that the estimated GPU performance metrics are not an improvement relative to the CPU performance metrics.
Example 15 includes the subject matter of any of Examples 12-14, and further including determining, by the network device, utilization information for at least a portion of the system resources of the network device, wherein the utilization information is indicative of an amount at which the at least the portion of the system resources are presently utilized, and wherein the determining the one or more estimated GPU performance metrics is further based on the utilization information.
Example 16 includes the subject matter of any of Examples 12-15, and further including determining, by the network device, whether the estimated GPU performance metrics meet one or more predetermined performance metric thresholds that correspond to each of the estimated GPU performance metrics, wherein offloading the processing of the application to the GPU comprises offloading the processing of the application to the GPU in response to a determination that the estimated GPU performance metrics meet the predetermined performance metric thresholds.
Example 17 includes the subject matter of any of Examples 12-16, and further including scheduling the processing of the application to a central processing unit (CPU) of the network device in response to a determination that the estimated GPU performance metrics do not meet the predetermined performance metric thresholds.
Example 18 includes the subject matter of any of Examples 12-17, and wherein determining utilization information for at least a portion of the system resources of the network device comprises determining at least one of a memory utilization, a command scheduling delay, a cache miss rate, a translation lookaside buffer (TLB) miss, and a page fault.
Example 19 includes the subject matter of any of Examples 12-18, and wherein the application comprises a network packet processing application for processing a network packet received by the network device.
Example 20 includes the subject matter of any of Examples 12-19, and wherein determining the resource criteria of the application comprises determining at least one of a minimum amount of memory available to store data related to the application, one or more dependencies of the application, and a minimum number of processing cycles to run the application.
Example 21 includes the subject matter of any of Examples 12-20, and wherein determining the available GPU resources of the GPU comprises determining at least one of a number of available cores of the GPU, a number of available cycles of the GPU, a number of total applications supported by the GPU, a number of other applications presently running on the GPU, and a present utilization percentage of the GPU.
Example 22 includes the subject matter of any of Examples 12-21, and wherein determining the estimated GPU performance metrics comprises determining at least one of a utilization of a plurality of cores of the GPU, a number of other applications presently running on the GPU, a present performance metric for each of the other applications presently running on the GPU, and a frequency rate of the GPU.
Example 23 includes a computing device comprising a processor; and a memory having stored therein a plurality of instructions that when executed by the processor cause the computing device to perform the method of any of Examples 12-22.
Example 24 includes one or more machine readable storage media comprising a plurality of instructions stored thereon that in response to being executed result in a computing device performing the method of any of Examples 12-22.
Example 25 includes a network device to offload processing of a network packet to a graphics processing unit (GPU) of a network device, the network device comprising a resource criteria determination circuitry to determine resource criteria of an application, wherein the resource criteria define a minimum amount of one or more system resources of the network device required to run the application; a performance monitoring circuitry to determine available GPU resources of the GPU of the network device; a GPU admission determination circuitry to determine whether the available GPU resources are sufficient to process the application based on the resource criteria of the application and the available GPU resources; and a GPU performance estimation circuitry to determine, in response to a determination that the available GPU resources are sufficient to process the application, one or more estimated GPU performance metrics based on the resource criteria of the application and the available GPU resources, wherein the estimated GPU performance metrics indicate an estimated level of performance of the GPU if the GPU were to run the application, wherein the GPU admission determination circuitry is further to offload processing of the application to the GPU as a function of the one or more estimated GPU performance metrics.
Example 26 includes the subject matter of Example 25, and further including a CPU performance estimation circuitry to determine one or more estimated central processing unit (CPU) performance metrics of a CPU of the network device, wherein the estimated CPU performance metrics are determined based on available CPU resources and are indicative of an estimated level of performance of the CPU during a runtime of the application by the CPU, wherein the GPU admission determination circuitry is further to compare the estimated GPU performance metrics and the estimated CPU performance metrics, and wherein to offload processing of the application to the GPU comprises to offload processing of the application to the GPU in response to a determination that the estimated GPU performance metrics are an improvement relative to the CPU performance metrics.
Example 27 includes the subject matter of any of Examples 25 and 26, and wherein the GPU admission determination circuitry is further to run the processing of the application at the CPU in response to a determination that the estimated GPU performance metrics are not an improvement relative to the CPU performance metrics.
Example 28 includes the subject matter of any of Examples 25-27, and wherein the performance monitoring circuitry is further to determine utilization information for at least a portion of the system resources of the network device, wherein the utilization information is indicative of an amount at which the at least the portion of the system resources are presently utilized, and wherein to determine the one or more estimated GPU performance metrics is further based on the utilization information.
Example 29 includes the subject matter of any of Examples 25-28, and wherein the GPU admission determination circuitry is further to determine whether the estimated GPU performance metrics meet one or more predetermined performance metric thresholds that correspond to at least a portion of the estimated GPU performance metrics, wherein to offload the processing of the application to the GPU comprises to offload the processing of the application to the GPU in response to a determination that the estimated GPU performance metrics meet the predetermined performance metric thresholds.
Example 30 includes the subject matter of any of Examples 25-29, and further including a system resource management circuitry to schedule the processing of the application by a central processing unit (CPU) of the network device in response to a determination that the estimated GPU performance metrics do not meet the predetermined performance metric thresholds.
Example 31 includes the subject matter of any of Examples 25-30, and wherein to determine utilization information for at least a portion of the system resources of the network device comprises to determine at least one of a memory utilization, a command scheduling delay, a cache miss rate, a translation lookaside buffer (TLB) miss, and a page fault.
Example 32 includes the subject matter of any of Examples 25-31, and wherein the application comprises a network packet processing application for processing a network packet received by the network device.
Example 33 includes the subject matter of any of Examples 25-32, and wherein to determine the resource criteria of the application comprises to determine at least one of a minimum amount of memory available to store data related to the application, one or more dependencies of the application, and a minimum number of processing cycles to run the application.
Example 34 includes the subject matter of any of Examples 25-33, and wherein to determine the available GPU resources of the GPU comprises to determine at least one of a number of available cores of the GPU, a number of available cycles of the GPU, a number of total applications supported by the GPU, a number of other applications presently running on the GPU, and a present utilization percentage of the GPU.
Example 35 includes the subject matter of any of Examples 25-34, and wherein to determine the estimated GPU performance metrics comprises to determine at least one of a utilization of a plurality of cores of the GPU, a number of other applications presently running on the GPU, a present performance metric for each of the other applications presently running on the GPU, and a frequency rate of the GPU.
Example 36 includes a network device to offload processing of a network packet to a graphics processing unit (GPU) of a network device, the network device comprising means for determining resource criteria of an application, wherein the resource criteria define a minimum amount of one or more system resources of the network device required to run the application; means for determining available GPU resources of the GPU of the network device; means for determining whether the available GPU resources are sufficient to process the application based on the resource criteria of the application and the available GPU resources; means for determining, in response to a determination that the available GPU resources are sufficient to process the application, one or more estimated GPU performance metrics based on the resource criteria of the application and the available GPU resources, wherein the estimated GPU performance metrics indicate an estimated level of performance of the GPU if the GPU were to run the application; and means for offloading processing of the application to the GPU as a function of the one or more estimated GPU performance metrics.
Example 37 includes the subject matter of Example 36, and further including means for determining one or more estimated central processing unit (CPU) performance metrics of a CPU of the network device, wherein the estimated CPU performance metrics are determined based on available CPU resources and are indicative of an estimated level of performance of the CPU during a runtime of the application by the CPU; and means for comparing the estimated GPU performance metrics and the estimated CPU performance metrics, wherein the means for offloading processing of the application to the GPU comprises means for offloading processing of the application to the GPU in response to a determination that the estimated GPU performance metrics are an improvement relative to the CPU performance metrics.
Example 38 includes the subject matter of any of Examples 36 and 37, and further including means for scheduling the processing of the application at the CPU in response to a determination that the estimated GPU performance metrics are not an improvement relative to the CPU performance metrics.
Example 39 includes the subject matter of any of Examples 36-38, and further including means for determining utilization information for at least a portion of the system resources of the network device, wherein the utilization information is indicative of an amount at which the at least the portion of the system resources are presently utilized, and wherein the means for determining the one or more estimated GPU performance metrics is further based on the utilization information.
Example 40 includes the subject matter of any of Examples 36-39, and further including means for determining whether the estimated GPU performance metrics meet one or more predetermined performance metric thresholds that correspond to each of the estimated GPU performance metrics, wherein the means for offloading the processing of the application to the GPU comprises means for offloading the processing of the application to the GPU in response to a determination that the estimated GPU performance metrics meet the predetermined performance metric thresholds.
Example 41 includes the subject matter of any of Examples 36-40, and further including means for scheduling the processing of the application to a central processing unit (CPU) of the network device in response to a determination that the estimated GPU performance metrics do not meet the predetermined performance metric thresholds.
Example 42 includes the subject matter of any of Examples 36-41, and wherein the means for determining utilization information for at least a portion of the system resources of the network device comprises means for determining at least one of a memory utilization, a command scheduling delay, a cache miss rate, a translation lookaside buffer (TLB) miss, and a page fault.
Example 43 includes the subject matter of any of Examples 36-42, and wherein the application comprises a network packet processing application for processing a network packet received by the network device.
Example 44 includes the subject matter of any of Examples 36-43, and wherein the means for determining the resource criteria of the application comprises means for determining at least one of a minimum amount of memory available to store data related to the application, one or more dependencies of the application, and a minimum number of processing cycles to run the application.
Example 45 includes the subject matter of any of Examples 36-44, and wherein the means for determining the available GPU resources of the GPU comprises means for determining at least one of a number of available cores of the GPU, a number of available cycles of the GPU, a number of total applications supported by the GPU, a number of other applications presently running on the GPU, and a present utilization percentage of the GPU.
Example 46 includes the subject matter of any of Examples 36-45, and wherein the means for determining the estimated GPU performance metrics comprises means for determining at least one of a utilization of a plurality of cores of the GPU, a number of other applications presently running on the GPU, a present performance metric for each of the other applications presently running on the GPU, and a frequency rate of the GPU.

Claims (25)

The invention claimed is:
1. A network device to offload processing of a network packet to a graphics processing unit (GPU) of the network device, the network device comprising:
one or more processors; and
one or more memory devices having stored therein a plurality of instructions that, when executed by the one or more processors, cause the network device to:
determine resource criteria of an application that is to be offloaded to the GPU prior to the offloading of the application, wherein the resource criteria define a minimum amount of one or more system resources of the network device required to run the application;
determine available GPU resources of the GPU of the network device;
determine whether the available GPU resources are sufficient to process the application based on the resource criteria of the application and the available GPU resources;
determine, in response to a determination that the available GPU resources are sufficient to process the application, one or more estimated GPU performance metrics based on the resource criteria of the application and the available GPU resources prior to the offloading of the application to the GPU, wherein the estimated GPU performance metrics indicate an estimated level of performance of the GPU if the GPU were to run the application; and
offload processing of the application to the GPU as a function of the one or more estimated GPU performance metrics.
2. The network device of claim 1, wherein the plurality of instructions, when executed by the one or more processors, further cause the network device to:
determine one or more estimated central processing unit (CPU) performance metrics of a CPU of the network device, wherein the estimated CPU performance metrics are determined based on available CPU resources and are indicative of an estimated level of performance of the CPU during a runtime of the application by the CPU; and
compare the estimated GPU performance metrics and the estimated CPU performance metrics, and wherein to offload processing of the application to the GPU comprises to offload processing of the application to the GPU in response to a determination that the estimated GPU performance metrics are an improvement relative to the CPU performance metrics.
3. The network device of claim 2, wherein the plurality of instructions, when executed by the one or more processors, further cause the network device to run the processing of the application at the CPU in response to a determination that the estimated GPU performance metrics are not an improvement relative to the CPU performance metrics.
4. The network device of claim 1, wherein the plurality of instructions, when executed by the one or more processors, further cause the network device to determine utilization information for at least a portion of the system resources of the network device, wherein the utilization information is indicative of an amount at which the at least the portion of the system resources are presently utilized, and wherein to determine the one or more estimated GPU performance metrics is further based on the utilization information.
5. The network device of claim 4, wherein the plurality of instructions, when executed by the one or more processors, further cause the network device to determine whether the estimated GPU performance metrics meet one or more predetermined performance metric thresholds that correspond to at least a portion of the estimated GPU performance metrics, wherein to offload the processing of the application to the GPU comprises to offload the processing of the application to the GPU in response to a determination that the estimated GPU performance metrics meet the predetermined performance metric thresholds.
6. The network device of claim 5, wherein the plurality of instructions, when executed by the one or more processors, further cause the network device to schedule the processing of the application by a central processing unit (CPU) of the network device in response to a determination that the estimated GPU performance metrics do not meet the predetermined performance metric thresholds.
7. The network device of claim 4, wherein to determine utilization information for at least a portion of the system resources of the network device comprises to determine at least one of a memory utilization, a command scheduling delay, a cache miss rate, a translation lookaside buffer (TLB) miss, and a page fault.
8. The network device of claim 1, wherein the application comprises a network packet processing application for processing the network packet received by the network device.
9. The network device of claim 1, wherein to determine the resource criteria of the application comprises to determine at least one of a minimum amount of memory available to store data related to the application, one or more dependencies of the application, and a minimum number of processing cycles to run the application.
10. The network device of claim 1, wherein to determine the available GPU resources of the GPU comprises to determine at least one of a number of available cores of the GPU, a number of available cycles of the GPU, a number of total applications supported by the GPU, a number of other applications presently running on the GPU, and a present utilization percentage of the GPU.
11. The network device of claim 1, wherein to determine the estimated GPU performance metrics comprises to determine at least one of a utilization of a plurality of cores of the GPU, a number of other applications presently running on the GPU, a present performance metric for each of the other applications presently running on the GPU, and a frequency rate of the GPU.
12. One or more non-transitory computer-readable storage media comprising a plurality of instructions stored thereon that in response to being executed cause a network device to:
determine resource criteria of an application that is to be offloaded to a graphics processing unit (GPU) of the network device prior to the offloading of the application, wherein the resource criteria define a minimum amount of one or more system resources of the network device required to run the application;
determine available GPU resources of the GPU of the network device;
determine whether the available GPU resources are sufficient to process the application based on the resource criteria of the application and the available GPU resources;
determine, in response to a determination that the available GPU resources are sufficient to process the application, one or more estimated GPU performance metrics based on the resource criteria of the application and the available GPU resources prior to the offloading of the application to the GPU, wherein the estimated GPU performance metrics indicate an estimated level of performance of the GPU if the GPU were to run the application; and
offload processing of the application to the GPU as a function of the one or more estimated GPU performance metrics.
13. The one or more non-transitory computer-readable storage media of claim 12, further comprising a plurality of instructions that in response to being executed cause the network device to:
determine one or more estimated central processing unit (CPU) performance metrics of a CPU of the network device, wherein the estimated CPU performance metrics are determined based on available CPU resources and are indicative of an estimated level of performance of the CPU during a runtime of the application by the CPU; and
compare the estimated GPU performance metrics and the estimated CPU performance metrics,
wherein to offload processing of the application to the GPU comprises to offload processing of the application to the GPU in response to a determination that the estimated GPU performance metrics are an improvement relative to the CPU performance metrics.
14. The one or more non-transitory computer-readable storage media of claim 13, further comprising a plurality of instructions that in response to being executed cause the network device to schedule the processing of the application at the CPU in response to a determination that the estimated GPU performance metrics are not an improvement relative to the CPU performance metrics.
15. The one or more non-transitory computer-readable storage media of claim 12, further comprising a plurality of instructions that in response to being executed cause the network device to determine utilization information for at least a portion of the system resources of the network device, wherein the utilization information is indicative of an amount at which the at least the portion of the system resources are presently utilized, and wherein to determine the one or more estimated GPU performance metrics is further based on the utilization information.
16. The one or more non-transitory computer-readable storage media of claim 15, further comprising a plurality of instructions that in response to being executed cause the network device to determine whether the estimated GPU performance metrics meet one or more predetermined performance metric thresholds that correspond to each of the estimated GPU performance metrics, wherein to offload the processing of the application to the GPU comprises to offload the processing of the application to the GPU in response to a determination that the estimated GPU performance metrics meet the predetermined performance metric thresholds.
17. The one or more non-transitory computer-readable storage media of claim 16, further comprising a plurality of instructions that in response to being executed cause the network device to schedule the processing of the application to a central processing unit (CPU) of the network device in response to a determination that the estimated GPU performance metrics do not meet the predetermined performance metric thresholds.
18. The one or more non-transitory computer-readable storage media of claim 15, wherein to determine utilization information for at least a portion of the system resources of the network device comprises to determine at least one of a memory utilization, a command scheduling delay, a cache miss rate, a translation lookaside buffer (TLB) miss, and a page fault.
19. The one or more non-transitory computer-readable storage media of claim 12, wherein the application comprises a network packet processing application for processing a network packet received by the network device.
20. The one or more non-transitory computer-readable storage media of claim 12, wherein to determine the resource criteria of the application comprises to determine at least one of a minimum amount of memory available to store data related to the application, one or more dependencies of the application, and a minimum number of processing cycles to run the application.
21. The one or more non-transitory computer-readable storage media of claim 12, wherein to determine the available GPU resources of the GPU comprises to determine at least one of a number of available cores of the GPU, a number of available cycles of the GPU, a number of total applications supported by the GPU, a number of other applications presently running on the GPU, and a present utilization percentage of the GPU.
22. The one or more non-transitory computer-readable storage media of claim 12, wherein to determine the estimated GPU performance metrics comprises to determine at least one of a utilization of a plurality of cores of the GPU, a number of other applications presently running on the GPU, a present performance metric for each of the other applications presently running on the GPU, and a frequency rate of the GPU.
23. A method for offloading processing of a network packet to a graphics processing unit (GPU) of a network device, the method comprising:
determining, by the network device, resource criteria of an application that is to be offloaded to the GPU prior to the offloading of the application, wherein the resource criteria define a minimum amount of one or more system resources of the network device required to run the application;
determining, by the network device, available GPU resources of the GPU of the network device;
determining, by the network device, whether the available GPU resources are sufficient to process the application based on the resource criteria of the application and the available GPU resources;
determining, by the network device and in response to a determination that the available GPU resources are sufficient to process the application, one or more estimated GPU performance metrics based on the resource criteria of the application and the available GPU resources prior to the offloading of the application to the GPU, wherein the estimated GPU performance metrics indicate an estimated level of performance of the GPU if the GPU were to run the application; and
offloading, by the network device, processing of the application to the GPU as a function of the one or more estimated GPU performance metrics.
24. The method of claim 23, further comprising:
determining, by the network device, one or more estimated central processing unit (CPU) performance metrics of a CPU of the network device, wherein the estimated CPU performance metrics are determined based on available CPU resources and are indicative of an estimated level of performance of the CPU during a runtime of the application by the CPU; and
comparing, by the network device, the estimated GPU performance metrics and the estimated CPU performance metrics,
wherein offloading processing of the application to the GPU comprises offloading processing of the application to the GPU in response to a determination that the estimated GPU performance metrics are an improvement relative to the CPU performance metrics.
25. The method of claim 24, further comprising scheduling the processing of the application at the CPU in response to a determination that the estimated GPU performance metrics are not an improvement relative to the CPU performance metrics.
US14/836,142 2015-08-26 2015-08-26 Technologies for offloading network packet processing to a GPU Active 2035-10-18 US10445850B2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US14/836,142 US10445850B2 (en) 2015-08-26 2015-08-26 Technologies for offloading network packet processing to a GPU
PCT/US2016/044012 WO2017034731A1 (en) 2015-08-26 2016-07-26 Technologies for offloading network packet processing to a gpu
CN201680043884.6A CN107852413B (en) 2015-08-26 2016-07-26 Network device, method and storage medium for offloading network packet processing to a GPU

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US14/836,142 US10445850B2 (en) 2015-08-26 2015-08-26 Technologies for offloading network packet processing to a GPU

Publications (2)

Publication Number Publication Date
US20170061566A1 US20170061566A1 (en) 2017-03-02
US10445850B2 true US10445850B2 (en) 2019-10-15

Family

ID=58100540

Family Applications (1)

Application Number Title Priority Date Filing Date
US14/836,142 Active 2035-10-18 US10445850B2 (en) 2015-08-26 2015-08-26 Technologies for offloading network packet processing to a GPU

Country Status (3)

Country Link
US (1) US10445850B2 (en)
CN (1) CN107852413B (en)
WO (1) WO2017034731A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10897428B2 (en) * 2017-10-27 2021-01-19 EMC IP Holding Company LLC Method, server system and computer program product for managing resources

Families Citing this family (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10122610B2 (en) * 2016-03-25 2018-11-06 Ca, Inc. Provisioning of network services based on virtual network function performance characteristics
WO2017170470A1 (en) * 2016-03-28 2017-10-05 日本電気株式会社 Network function virtualization management orchestration device, method and program
US10055255B2 (en) 2016-04-14 2018-08-21 International Business Machines Corporation Performance optimization of hardware accelerators
US10936533B2 (en) * 2016-10-18 2021-03-02 Advanced Micro Devices, Inc. GPU remote communication with triggered operations
US10523540B2 (en) 2017-03-29 2019-12-31 Ca, Inc. Display method of exchanging messages among users in a group
WO2018183553A1 (en) 2017-03-29 2018-10-04 Fungible, Inc. Non-blocking any-to-any data center network having multiplexed packet spraying within access node groups
US10244034B2 (en) 2017-03-29 2019-03-26 Ca, Inc. Introspection driven monitoring of multi-container applications
CN110710139A (en) 2017-03-29 2020-01-17 芬基波尔有限责任公司 Non-blocking full mesh data center network with optical displacers
US10686729B2 (en) 2017-03-29 2020-06-16 Fungible, Inc. Non-blocking any-to-any data center network with packet spraying over multiple alternate data paths
CN110741356B (en) 2017-04-10 2024-03-15 微软技术许可有限责任公司 Relay coherent memory management in multiprocessor systems
EP3625679A1 (en) 2017-07-10 2020-03-25 Fungible, Inc. Data processing unit for stream processing
US10659254B2 (en) 2017-07-10 2020-05-19 Fungible, Inc. Access node integrated circuit for data centers which includes a networking unit, a plurality of host units, processing clusters, a data network fabric, and a control network fabric
US10475149B2 (en) * 2017-09-25 2019-11-12 Intel Corporation Policies and architecture to dynamically offload VR processing to HMD based on external cues
US10965586B2 (en) 2017-09-29 2021-03-30 Fungible, Inc. Resilient network communication using selective multipath packet flow spraying
CN111149329A (en) 2017-09-29 2020-05-12 芬基波尔有限责任公司 Architecture control protocol for data center networks with packet injection via multiple backup data paths
WO2019104090A1 (en) 2017-11-21 2019-05-31 Fungible, Inc. Work unit stack data structures in multiple core processor system for stream data processing
WO2019152063A1 (en) 2018-02-02 2019-08-08 Fungible, Inc. Efficient work unit processing in a multicore system
US11409569B2 (en) * 2018-03-29 2022-08-09 Xilinx, Inc. Data processing system
US10616136B2 (en) * 2018-04-19 2020-04-07 Microsoft Technology Licensing, Llc Utilization based dynamic resource allocation
WO2020005276A1 (en) * 2018-06-29 2020-01-02 Intel IP Corporation Technologies for cross-layer task distribution
US11347653B2 (en) 2018-08-31 2022-05-31 Nyriad, Inc. Persistent storage device management
WO2020090142A1 (en) * 2018-10-30 2020-05-07 日本電信電話株式会社 Offloading server and offloading program
US10795840B2 (en) 2018-11-12 2020-10-06 At&T Intellectual Property I, L.P. Persistent kernel for graphics processing unit direct memory access network packet processing
US10929175B2 (en) 2018-11-21 2021-02-23 Fungible, Inc. Service chaining hardware accelerators within a data stream processing integrated circuit
US11271994B2 (en) * 2018-12-28 2022-03-08 Intel Corporation Technologies for providing selective offload of execution to the edge
US11595204B2 (en) * 2019-06-04 2023-02-28 EMC IP Holding Company LLC Adaptive re-keying in a storage system
US20200241999A1 (en) * 2020-03-25 2020-07-30 Intel Corporation Performance monitoring for short-lived functions
CN111698178B (en) * 2020-04-14 2022-08-30 新华三技术有限公司 Flow analysis method and device
EP4187879A4 (en) * 2020-09-24 2023-11-22 Samsung Electronics Co., Ltd. Method and device for offloading hardware to software package
US11494076B2 (en) * 2021-01-19 2022-11-08 Dell Products L.P. Storage-usage-based host/storage mapping management system
US20220261287A1 (en) * 2021-02-12 2022-08-18 Nvidia Corporation Method and apparatus for improving processor resource utilization during program execution
EP4315055A1 (en) * 2021-04-01 2024-02-07 Telefonaktiebolaget LM Ericsson (publ) Managing deployment of an application

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7793308B2 (en) * 2005-01-06 2010-09-07 International Business Machines Corporation Setting operation based resource utilization thresholds for resource use by a process
US8233527B2 (en) * 2007-05-11 2012-07-31 Advanced Micro Devices, Inc. Software video transcoder with GPU acceleration
JP2012003619A (en) * 2010-06-18 2012-01-05 Sony Corp Information processor, control method thereof and program
US9304570B2 (en) * 2011-12-15 2016-04-05 Intel Corporation Method, apparatus, and system for energy efficiency and energy conservation including power and performance workload-based balancing between multiple processing elements
US20150055456A1 (en) * 2013-08-26 2015-02-26 Vmware, Inc. Traffic and load aware dynamic queue management
US9478000B2 (en) * 2013-09-27 2016-10-25 Intel Corporation Sharing non-page aligned memory

Patent Citations (55)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7860999B1 (en) 2000-10-11 2010-12-28 Avaya Inc. Distributed computation in network devices
US20060221086A1 (en) * 2003-08-18 2006-10-05 Nvidia Corporation Adaptive load balancing in a multi-processor graphics processing system
US7466316B1 (en) * 2004-12-14 2008-12-16 Nvidia Corporation Apparatus, system, and method for distributing work to integrated heterogeneous processors
US7372465B1 (en) 2004-12-17 2008-05-13 Nvidia Corporation Scalable graphics processing for remote display
US20060242710A1 (en) * 2005-03-08 2006-10-26 Thomas Alexander System and method for a fast, programmable packet processing system
US20110063307A1 (en) 2005-03-08 2011-03-17 Thomas Alexander System and method for a fast, programmable packet processing system
US20080030508A1 (en) * 2006-08-01 2008-02-07 Nvidia Corporation System and method for dynamically processing content being communicated over a network for display purposes
US20090027403A1 (en) 2007-07-26 2009-01-29 Lg Electronics Inc. Graphic data processing apparatus and method
US20090109230A1 (en) * 2007-10-24 2009-04-30 Howard Miller Methods and apparatuses for load balancing between multiple processing units
US8284205B2 (en) * 2007-10-24 2012-10-09 Apple Inc. Methods and apparatuses for load balancing between multiple processing units
US20110213947A1 (en) * 2008-06-11 2011-09-01 John George Mathieson System and Method for Power Optimization
US20100245366A1 (en) * 2009-03-31 2010-09-30 Siddhartha Nath Electronic device having switchable graphics processors
US20120169742A1 (en) * 2009-04-20 2012-07-05 Barco, Inc. Using GPU for Network Packetization
US20150062133A1 (en) * 2009-04-20 2015-03-05 Barco, Inc. Using GPU for Network Packetization
US8878864B2 (en) * 2009-04-20 2014-11-04 Barco, Inc. Using GPU for network packetization
US20110050713A1 (en) * 2009-09-03 2011-03-03 Advanced Micro Devices, Inc. Hardware-Based Scheduling of GPU Work
US20110304634A1 (en) * 2010-06-10 2011-12-15 Julian Michael Urbach Allocation of gpu resources across multiple clients
US8803892B2 (en) * 2010-06-10 2014-08-12 Otoy, Inc. Allocation of GPU resources across multiple clients
US9660928B2 (en) * 2010-06-10 2017-05-23 Otoy, Inc. Allocation of GPU resources across multiple clients
US20140325073A1 (en) * 2010-06-10 2014-10-30 Otoy, Inic Allocation of gpu resources across multiple clients
US20130117305A1 (en) * 2010-07-21 2013-05-09 Sqream Technologies Ltd System and Method for the Parallel Execution of Database Queries Over CPUs and Multi Core Processors
US20120079498A1 (en) * 2010-09-27 2012-03-29 Samsung Electronics Co., Ltd. Method and apparatus for dynamic resource allocation of processing units
US9311157B2 (en) * 2010-09-27 2016-04-12 Samsung Electronics Co., Ltd Method and apparatus for dynamic resource allocation of processing units on a resource allocation plane having a time axis and a processing unit axis
US20120149464A1 (en) * 2010-12-14 2012-06-14 Amazon Technologies, Inc. Load balancing between general purpose processors and graphics processors
US20120192200A1 (en) * 2011-01-21 2012-07-26 Rao Jayanth N Load Balancing in Heterogeneous Computing Environments
US20140189708A1 (en) * 2011-08-17 2014-07-03 Samsung Electronics Co., Ltd. Terminal and method for executing application in same
US20130093779A1 (en) * 2011-10-14 2013-04-18 Bally Gaming, Inc. Graphics processing unit memory usage reduction
US20130160016A1 (en) * 2011-12-16 2013-06-20 Advanced Micro Devices, Inc. Allocating Compute Kernels to Processors in a Heterogeneous System
US20140052965A1 (en) * 2012-02-08 2014-02-20 Uzi Sarel Dynamic cpu gpu load balancing using power
US20130332937A1 (en) * 2012-05-29 2013-12-12 Advanced Micro Devices, Inc. Heterogeneous Parallel Primitives Programming Model
US20140033207A1 (en) * 2012-07-30 2014-01-30 Alcatel-Lucent Usa Inc. System and Method for Managing P-States and C-States of a System
WO2014166758A1 (en) 2013-04-09 2014-10-16 Alcatel Lucent Control system, apparatus, methods, and computer readable storage medium storing instructions for a network node and/or a network controller
US20150091922A1 (en) * 2013-10-01 2015-04-02 International Business Machines Corporation Diagnosing Graphics Display Problems
US20150116340A1 (en) * 2013-10-29 2015-04-30 International Business Machines Corporation Selective utilization of graphics processing unit (GPU) based acceleration in database management
US20150199214A1 (en) * 2014-01-13 2015-07-16 Electronics And Telecommunications Research Institute System for distributed processing of stream data and method thereof
US20150317762A1 (en) * 2014-04-30 2015-11-05 Qualcomm Incorporated CPU/GPU DCVS co-optimization for reducing power consumption in graphics frame processing
US20170255496A1 (en) * 2014-11-19 2017-09-07 Huawei Technologies Co., Ltd. Method for scheduling data flow task and apparatus
US20180108109A1 (en) * 2015-06-19 2018-04-19 Huawei Technologies Co., Ltd. GPU resource allocation method and system
US20170004808A1 (en) * 2015-07-02 2017-01-05 Nvidia Corporation Method and system for capturing a frame buffer of a virtual machine in a gpu pass-through environment
US20170010923A1 (en) * 2015-07-09 2017-01-12 International Business Machines Corporation Increasing the efficiency of scheduled and unscheduled computing tasks
US20170228849A1 (en) * 2016-02-05 2017-08-10 Mediatek Inc. Apparatuses and methods for activity-based resource management, and storage medium thereof
US20190146842A1 (en) * 2016-05-31 2019-05-16 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method and Apparatus for Allocating Computing Resources of Processor
US20180052709A1 (en) * 2016-08-19 2018-02-22 International Business Machines Corporation Dynamic usage balance of central processing units and accelerators
US20180143907A1 (en) * 2016-11-23 2018-05-24 Advanced Micro Devices, Inc. Dual mode local data store
US10073783B2 (en) * 2016-11-23 2018-09-11 Advanced Micro Devices, Inc. Dual mode local data store
US20180276044A1 (en) * 2017-03-27 2018-09-27 International Business Machines Corporation Coordinated, topology-aware cpu-gpu-memory scheduling for containerized workloads
US20180332252A1 (en) * 2017-05-10 2018-11-15 Mediatek Inc. Apparatuses and methods for dynamic frame rate adjustment
US20180349146A1 (en) * 2017-06-02 2018-12-06 Apple Inc. GPU Resource Tracking
US20180373564A1 (en) * 2017-06-22 2018-12-27 Banuba Limited Computer Systems And Computer-Implemented Methods For Dynamically Adaptive Distribution Of Workload Between Central Processing Unit(s) and Graphics Processing Unit(s)
US10228972B2 (en) * 2017-06-22 2019-03-12 Banuba Limited Computer systems and computer-implemented methods for dynamically adaptive distribution of workload between central processing unit(s) and graphics processing unit(s)
US20190004868A1 (en) * 2017-07-01 2019-01-03 TuSimple System and method for distributed graphics processing unit (GPU) computation
US10303522B2 (en) * 2017-07-01 2019-05-28 TuSimple System and method for distributed graphics processing unit (GPU) computation
US20190098039A1 (en) * 2017-09-26 2019-03-28 Edge2020 LLC Determination of cybersecurity recommendations
US20190132257A1 (en) * 2017-10-27 2019-05-02 EMC IP Holding Company Method, server system and computer program product of managing resources
US20190129757A1 (en) * 2017-10-31 2019-05-02 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method for Resource Allocation and Terminal Device

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
Han et al., PacketShader: a GPU-Accelerated Software Router, Sep. 3, 2010, SIGCOMM. *
International Search Report for PCT/US2016/044012, dated Oct. 21, 2016 (3 pages).
Kim et al., GPUnet: Networking Abstractions for GPU Programs, Oct. 8, 2014, 11th USENIX Symposium on Operating Systems Design and Implementation. *
Lee et al., Fast Forwarding Table Lookup Exploiting GPU Memory Architecture, Nov. 19, 2010, IEEE, 2010 International Conference on Information and Communication Technology Convergence (ICTC). *
Mu et al., IP Routing Processing with Graphic Processors, Mar. 12, 2010, IEEE, 2010 Design, Automation & Test in Europe Conference & Exhibition (DATE). *
Vasiliadis et al., Gnort: High Performance Network Intrusion Detection Using Graphics Processors, 2008, Springer, Recent Advances in Intrusion Detection. RAID 2008. Lecture Notes in Computer Science, vol. 5230, pp. 116-134. *
Written Opinion for PCT/US2016/044012, dated Oct. 21, 2016 (6 pages).

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10897428B2 (en) * 2017-10-27 2021-01-19 EMC IP Holding Company LLC Method, server system and computer program product for managing resources

Also Published As

Publication number Publication date
US20170061566A1 (en) 2017-03-02
WO2017034731A1 (en) 2017-03-02
CN107852413A (en) 2018-03-27
CN107852413B (en) 2022-04-15

Similar Documents

Publication Publication Date Title
US10445850B2 (en) Technologies for offloading network packet processing to a GPU
US20200167258A1 (en) Resource allocation based on applicable service level agreement
US20230412459A1 (en) Technologies for dynamically selecting resources for virtual switching
US12093746B2 (en) Technologies for hierarchical clustering of hardware resources in network function virtualization deployments
US11431600B2 (en) Technologies for GPU assisted network traffic monitoring and analysis
US9934062B2 (en) Technologies for dynamically allocating hardware acceleration units to process data packets
CA2849565C (en) Method, apparatus, and system for scheduling processor core in multiprocessor core system
KR101455899B1 (en) Microprocessor with software control over allocation of shared resources among multiple virtual servers
US10019280B2 (en) Technologies for dynamically managing data bus bandwidth usage of virtual machines in a network device
US20160378570A1 (en) Techniques for Offloading Computational Tasks between Nodes
US11567556B2 (en) Platform slicing of central processing unit (CPU) resources
US9172646B2 (en) Dynamic reconfiguration of network devices for outage prediction
US10932202B2 (en) Technologies for dynamic multi-core network packet processing distribution
EP3611622A1 (en) Technologies for classifying network flows using adaptive virtual routing
US12020068B2 (en) Mechanism to automatically prioritize I/O for NFV workloads at platform overload
Garikipati et al. RT-OPEX: Flexible scheduling for cloud-RAN processing
US20230100935A1 (en) Microservice deployments using accelerators
US20190044832A1 (en) Technologies for optimized quality of service acceleration
US11412059B2 (en) Technologies for paravirtual network device queue and memory management
US20230401109A1 (en) Load balancer
US12039375B2 (en) Resource management device and resource management method
Zhang et al. Performance management challenges for virtual network functions
CN114675972B (en) Cloud network resource flexible scheduling method and system based on integral algorithm
Lu et al. Local resource shaper for MapReduce
Damkondwar et al. OBM-An Optimal Bandwidth Management Strategy to Virtual Machines in Cloud Environment Using Predictive Analytics

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTEL CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MIN, ALEXANDER W.;WOO, SHINAE;TSAI, JR-SHIAN;AND OTHERS;SIGNING DATES FROM 20150814 TO 20150817;REEL/FRAME:037133/0771

STCV Information on status: appeal procedure

Free format text: NOTICE OF APPEAL FILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4